Spatial frequency domain error budget
Hauschildt, H; Krulewich, D
1998-08-27
The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during the initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output of the current procedure is a single number estimating the net worst-case or RMS error on the workpiece. This procedure has limited ability to differentiate between low-spatial-frequency form errors and high-frequency surface-finish errors. The current error budgeting procedure can therefore lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper describes a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output of this new procedure is the continuous spatial frequency content of the errors that result on a machined part.
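The frequency-domain bookkeeping behind such a budget can be sketched briefly: for statistically independent error sources, power spectral densities (PSDs) add, and the RMS error in any spatial-frequency band is the square root of the integrated PSD over that band. The PSD shapes, units, and band edges below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Spatial-frequency axis in cycles/mm (illustrative range and units).
f = np.linspace(0.01, 10.0, 2000)
df = f[1] - f[0]

# Hypothetical one-sided PSDs of two independent error sources:
# a low-frequency form error and a band of high-frequency tooling marks.
psd_form = 100.0 / (1.0 + (f / 0.1) ** 4)           # rolls off above ~0.1 cycles/mm
psd_finish = 0.5 * np.exp(-((f - 5.0) / 2.0) ** 2)  # tooling-mark band near 5 cycles/mm

# Independent sources: PSDs add, so the budget is a curve over spatial
# frequency rather than a single worst-case or RMS number.
psd_total = psd_form + psd_finish

def band_rms(psd, freqs, f_lo, f_hi, df=df):
    """RMS error within a spatial-frequency band: sqrt of the integrated PSD."""
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sqrt(psd[mask].sum() * df))

rms_form = band_rms(psd_total, f, 0.01, 1.0)    # "form" band
rms_finish = band_rms(psd_total, f, 1.0, 10.0)  # "finish" band
```

Comparing each band RMS against its own tolerance is what lets this style of budget accept or reject a machine per frequency band instead of on one aggregate number.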
Frequency-Tracking-Error Detector
NASA Technical Reports Server (NTRS)
Randall, Richard L.
1990-01-01
Frequency-tracking-error detector compares average period of output signal from band-pass tracking filter with average period of signal of frequency 100 f(sub 0) that controls center frequency f(sub 0) of tracking filter. Measures difference between f(sub 0) and frequency of one of periodic components in output of bearing sensor. Bearing sensor is accelerometer, strain gauge, or deflectometer mounted on bearing housing. Detector part of system of electronic equipment used to measure vibrations in bearings in rotating machinery.
Derivational Morphophonology: Exploring Errors in Third Graders' Productions
ERIC Educational Resources Information Center
Jarmulowicz, Linda; Hay, Sarah E.
2009-01-01
Purpose: This study describes a post hoc analysis of segmental, stress, and syllabification errors in third graders' productions of derived English words with the stress-changing suffixes "-ity" and "-ic." We investigated whether (a) derived word frequency influences error patterns, (b) stress and syllabification errors always co-occur, and (c)…
Evaluation and control of spatial frequency errors in reflective telescopes
NASA Astrophysics Data System (ADS)
Zhang, Xuejun; Zeng, Xuefeng; Hu, Haixiang; Zheng, Ligong
2015-08-01
In this paper, the influence of manufacturing residual errors on image quality was studied. By analyzing the statistical distribution characteristics of the residual errors and their effects on the PSF and MTF, we divided these errors into low-, middle-, and high-frequency domains using the unit "cycles per aperture". Two types of mid-frequency errors, algorithm-intrinsic and tool-path-induced, were analyzed. Control methods in current deterministic polishing processes, such as MRF and IBF, are presented.
Compensation Low-Frequency Errors in TH-1 Satellite
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Renxiang; Hu, Xin
2016-06-01
Topographic mapping products at 1:50,000 scale can be produced by satellite photogrammetry without ground control points (GCPs), which requires high accuracy of the exterior orientation elements. Usually, the attitude components of the exterior orientation elements are obtained from the attitude determination system on the satellite. Based on theoretical analysis and practice, the attitude determination system exhibits not only high-frequency errors but also low-frequency errors related to the latitude of the satellite orbit and to time. The low-frequency errors affect the location accuracy without GCPs, especially the horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. Therefore, a method of compensating low-frequency errors is proposed for the ground image processing of TH-1, which can detect and compensate for the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows: First, the low-frequency errors of the attitude determination system are analyzed. Second, compensation models are proposed for the bundle adjustment. Finally, the method is verified using TH-1 data. The test results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, which improves the location accuracy without GCPs and plays an important role in the consistency of global location accuracy.
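The general idea of compensating a slow attitude error can be illustrated with a least-squares fit of a drift-plus-periodic model to attitude residuals. The model form, coefficients, and noise level below are illustrative assumptions, not the actual TH-1 bundle-adjustment model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic attitude residuals (arcsec) over one orbit: a slow drift plus a
# once-per-orbit periodic term, buried in high-frequency noise.  All
# coefficients are invented for the example.
t = np.linspace(0.0, 1.0, 500)          # time, in orbital periods
w = 2.0 * np.pi                          # once-per-orbit angular rate
truth = 0.8 + 0.5 * t + 1.2 * np.sin(w * t) - 0.7 * np.cos(w * t)
obs = truth + rng.normal(0.0, 0.3, t.size)

# Least-squares fit of the low-frequency model a + b*t + c*sin(wt) + d*cos(wt).
A = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)

# What remains after subtracting the estimated low-frequency error.
residual = obs - A @ coef
```

The fit recovers the drift and periodic coefficients from the data alone, which is the sense in which such errors can be detected and compensated without ground control.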
Antenna pointing systematic error model derivations
NASA Technical Reports Server (NTRS)
Guiar, C. N.; Lansing, F. L.; Riggs, R.
1987-01-01
The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.
Frequency of Consonant Articulation Errors in Dysarthric Speech
ERIC Educational Resources Information Center
Kim, Heejin; Martin, Katie; Hasegawa-Johnson, Mark; Perlman, Adrienne
2010-01-01
This paper analyses consonant articulation errors in dysarthric speech produced by seven American-English native speakers with cerebral palsy. Twenty-three consonant phonemes were transcribed with diacritics as necessary in order to represent non-phoneme misarticulations. Error frequencies were examined with respect to six variables: articulatory…
Error control coding for multi-frequency modulation
NASA Astrophysics Data System (ADS)
Ives, Robert W.
1990-06-01
Multi-frequency modulation (MFM) has been developed at NPS using both quadrature-phase-shift-keyed (QPSK) and quadrature-amplitude-modulated (QAM) signals with good bit error performance at reasonable signal-to-noise ratios. Improved performance can be achieved by the introduction of error control coding. This report documents a FORTRAN simulation of the implementation of error control coding into an MFM communication link with additive white Gaussian noise. Four Reed-Solomon codes were incorporated, two for 16-QAM and two for 32-QAM modulation schemes. The error control codes used were modified from the conventional Reed-Solomon codes in that one information symbol was sacrificed to parity in order to use a simplified decoding algorithm which requires no iteration and enhances error detection capability. Bit error rates as a function of SNR and E(sub b)/N(sub 0) were analyzed, and bit error performance was weighed against reduction in information rate to determine the value of the codes.
High Frequency of Imprinted Methylation Errors in Human Preimplantation Embryos
White, Carlee R.; Denomme, Michelle M.; Tekpetey, Francis R.; Feyles, Valter; Power, Stephen G. A.; Mann, Mellissa R. W.
2015-01-01
Assisted reproductive technologies (ARTs) represent the best chance for infertile couples to conceive, although increased risks for morbidities exist, including imprinting disorders. This increased risk could arise from ARTs disrupting genomic imprints during gametogenesis or preimplantation. The few studies examining ART effects on genomic imprinting primarily assessed poor quality human embryos. Here, we examined day 3 and blastocyst stage, good to high quality, donated human embryos for imprinted SNRPN, KCNQ1OT1 and H19 methylation. Seventy-six percent of day 3 embryos and 50% of blastocysts exhibited perturbed imprinted methylation, demonstrating that extended culture did not pose greater risk for imprinting errors than short culture. Comparison of embryos with normal and abnormal methylation did not reveal any confounding factors. Notably, two embryos from male factor infertility patients using donor sperm harboured aberrant methylation, suggesting errors in these embryos cannot be explained by infertility alone. Overall, these results indicate that ART human preimplantation embryos possess a high frequency of imprinted methylation errors. PMID:26626153
Error enhancement in geomagnetic models derived from scalar data
NASA Technical Reports Server (NTRS)
Stern, D. P.; Bredekamp, J. H.
1974-01-01
Models of the main geomagnetic field are generally represented by a scalar potential gamma expanded in a finite number of spherical harmonics. Very accurate observations of the total field intensity F were used, but indications exist that the accuracy of models derived from them is considerably lower. One problem is that F does not always characterize gamma uniquely. It is not clear whether such ambiguity is actually encountered in deriving gamma from F in geomagnetic surveys, but a connection exists, owing to the fact that the counterexamples of Backus are related to the dipole field, while the geomagnetic field is dominated by its dipole component. If the models are recovered with a finite error (i.e., they cannot completely fit the data and consequently have a small spurious component), this connection allows the error in certain sequences of harmonic terms in gamma to be enhanced without unduly large effects on the fit of F to the model.
Frequency analysis of photoplethysmogram and its derivatives.
Elgendi, Mohamed; Fletcher, Richard R; Norton, Ian; Brearley, Matt; Abbott, Derek; Lovell, Nigel H; Schuurmans, Dale
2015-12-01
There are a limited number of studies on heat stress dynamics during exercise using the photoplethysmogram (PPG). We investigate the PPG signal and its derivatives for heat stress assessment using Welch (non-parametric) and autoregressive Yule-Walker (parametric) spectral estimation methods. The preliminary results of this study indicate that applying the first and second derivatives to PPG waveforms is useful for determining heat stress level using 20-s recordings. Interestingly, Welch's and Yule-Walker's methods agree that the second derivative is an improved detector for heat stress. In fact, both spectral estimation methods showed a clear separation in the frequency domain between measurements taken before and after simulated heat-stress induction when the second derivative is applied. Moreover, the results demonstrate superior performance of Welch's method over the Yule-Walker method in separating the measurements taken before and after the three simulated heat-stress inductions. PMID:26498064
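Welch's method itself is simple enough to sketch: average the periodograms of windowed, overlapping segments. The snippet below applies a minimal numpy-only Welch estimate to a toy stand-in for a PPG trace and to its second derivative; the signal, frequencies, and segment length are illustrative assumptions, not the study's data.

```python
import numpy as np

fs = 100.0                                  # sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)
# Toy stand-in for a PPG trace: a 1.5 Hz "pulse" plus slow baseline wander.
x = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 0.2 * t)

# Second derivative ("acceleration PPG"): differentiation scales each
# component by omega^2, suppressing the baseline relative to the pulse.
d2x = np.gradient(np.gradient(x, t), t)

def welch_psd(sig, fs, nperseg=256):
    """Minimal Welch estimate: averaged periodograms of Hann-windowed,
    50%-overlapping segments, scaled to a power spectral density."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [sig[i:i + nperseg] * win
            for i in range(0, len(sig) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    psd /= fs * np.sum(win ** 2)
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, psd

f_x, p_x = welch_psd(x, fs)
f_d2, p_d2 = welch_psd(d2x, fs)
```

Both spectra peak at the pulse frequency, but the second derivative carries a much smaller fraction of its power at low frequencies, which is the mechanism by which differentiation can sharpen frequency-domain separation.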
NASA Technical Reports Server (NTRS)
Moore, H. J.; Wu, S. C.
1973-01-01
The effect of reading error on two hypothetical slope frequency distributions and on two slope frequency distributions from actual lunar data was examined in order to ensure that these errors do not cause excessive overestimates of the algebraic standard deviations of the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Chao; Sun, Zhao-Yang; Zhao, Yu; Wu, Shi-You; Fang, Guang-You
2016-08-01
In the terahertz (THz) band, the inherent shake of the human body may strongly impair the image quality of a beam-scanning single-frequency holography system for personnel screening. To realize accurate shake compensation in imaging processing, it would ordinarily be necessary to develop a high-precision measurement system. However, in many cases different parts of a human body shake to different extents, making it very difficult to obtain a reasonable measurement of body-shake errors for image reconstruction. In this paper, a body-shake error compensation algorithm based on the raw data is proposed. To analyze the effect of body shake on the raw data, a model of the echoed signal is built that accounts for both the beam-scanning mode and the body shake. From this signal model, we derive a body-shake error estimation method to compensate for the phase error. Simulations on the reconstruction of point targets with shake errors and proof-of-principle experiments on the human body in the 0.2-THz band both confirm the effectiveness of the proposed compensation algorithm. Project supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. YYYJ-1123).
Jason-2 systematic error analysis in the GPS derived orbits
NASA Astrophysics Data System (ADS)
Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.
2011-12-01
Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinates adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding to the linear ITRF model the geophysical high-frequency variations. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS versus our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests, including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits.
The testing of the aspheric mirror high-frequency band error
NASA Astrophysics Data System (ADS)
Wan, JinLong; Li, Bo; Li, XinNan
2015-08-01
In recent years, high-frequency errors of mirror surfaces have gradually received serious attention, and the manufacture of advanced telescopes now carries explicit specifications for them. However, the off-axis aspheric sub-mirrors used in such telescopes are large, and full-aperture interferometric surface measurement would require a complex optical compensation device. We therefore propose a stitching-based method for measuring the high-frequency errors of aspheric mirrors. The method uses no compensation components and measures only sub-aperture surface maps. By analyzing the Zernike polynomial coefficients corresponding to the frequency errors, removing the first 15 Zernike terms, and then stitching the sub-aperture maps, the high-frequency errors over the full aperture of the tested mirror are obtained. A 330 mm off-axis aspheric hexagonal mirror was measured with this method, yielding a complete map of the high-frequency surface errors and demonstrating the feasibility of the approach.
Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali
2015-08-01
In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier. PMID:26736619
Nature and frequency of medication errors in a geriatric ward: an Indonesian experience
Ernawati, Desak Ketut; Lee, Ya Ping; Hughes, Jeffery David
2014-01-01
Purpose To determine the nature and frequency of medication errors during medication delivery processes in a public teaching hospital geriatric ward in Bali, Indonesia. Methods A 20-week prospective study on medication errors occurring during the medication delivery process was conducted in a geriatric ward in a public teaching hospital in Bali, Indonesia. Participants selected were inpatients aged more than 60 years. Patients were excluded if they had a malignancy, were undergoing surgery, or receiving chemotherapy treatment. The occurrence of medication errors in prescribing, transcribing, dispensing, and administration were detected by the investigator providing in-hospital clinical pharmacy services. Results Seven hundred and seventy drug orders and 7,662 drug doses were reviewed as part of the study. There were 1,563 medication errors detected among the 7,662 drug doses reviewed, representing an error rate of 20.4%. Administration errors were the most frequent medication errors identified (59%), followed by transcription errors (15%), dispensing errors (14%), and prescribing errors (7%). Errors in documentation were the most common form of administration errors. Of these errors, 2.4% were classified as potentially serious and 10.3% as potentially significant. Conclusion Medication errors occurred in every stage of the medication delivery process, with administration errors being the most frequent. The majority of errors identified in the administration stage were related to documentation. Provision of in-hospital clinical pharmacy services could potentially play a significant role in detecting and preventing medication errors. PMID:24940067
Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth
Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan
2015-01-01
Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars. PMID:26347779
Bounding higher-order ionosphere errors for the dual-frequency GPS user
NASA Astrophysics Data System (ADS)
Datta-Barua, S.; Walter, T.; Blanch, J.; Enge, P.
2008-10-01
Civil signals at L2 and L5 frequencies herald a new phase of Global Positioning System (GPS) performance. Dual-frequency users typically assume a first-order approximation of the ionosphere index of refraction, combining the GPS observables to eliminate most of the ranging delay, on the order of meters, introduced into the pseudoranges. This paper estimates the higher-order group and phase errors that occur from assuming the ordinary first-order dual-frequency ionosphere model using data from the Federal Aviation Administration's Wide Area Augmentation System (WAAS) network on a solar maximum quiet day and an extremely stormy day postsolar maximum. We find that during active periods, when ionospheric storms may introduce slant range delays at L1 as high as 100 m, the higher-order group errors in the L1-L2 or L1-L5 dual-frequency combination can be tens of centimeters. The group and phase errors are no longer equal and opposite, so these errors accumulate in carrier smoothing of the dual-frequency code observable. We show the errors in the carrier-smoothed code are due to higher-order group errors and, to a lesser extent, to higher-order phase rate errors. For many applications, this residual error is sufficiently small as to be neglected. However, such errors can impact geodetic applications as well as the error budgets of GPS Augmentation Systems providing Category III precision approach.
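The "ordinary first-order dual-frequency model" the abstract refers to is the standard ionosphere-free combination: because the first-order ionospheric delay scales as 1/f², a weighted difference of the two pseudoranges cancels it exactly, leaving only the higher-order terms studied in the paper. A minimal sketch (the delay values are invented for illustration; the carrier frequencies are the real GPS L1/L2 values):

```python
# GPS L1 and L2 carrier frequencies, Hz.
F1, F2 = 1575.42e6, 1227.60e6

def iono_free(p1, p2, f1=F1, f2=F2):
    """First-order ionosphere-free pseudorange combination: cancels any
    delay term proportional to 1/f^2."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# True range plus a first-order ionospheric delay: 5 m of slant delay at
# L1 maps to 5 * (F1/F2)^2 m at L2 for the same slant TEC.
rho = 20_000e3                 # metres
i1 = 5.0
i2 = i1 * (F1 / F2) ** 2
p1, p2 = rho + i1, rho + i2

residual = iono_free(p1, p2) - rho   # the first-order term cancels exactly
```

The higher-order terms the paper bounds do not follow the 1/f² law, which is precisely why they survive this combination at the centimeter-to-decimeter level during storms.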
To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard
1998-01-01
This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
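The pipeline described, step response to impulse response to frequency-domain and Hankel-norm estimates, can be sketched for a known first-order test system G(s) = 1/(s+1), whose H-infinity norm is 1 and whose Hankel norm is 0.5. This is an illustration of the general idea under assumed sampling parameters, not the paper's exact procedure.

```python
import numpy as np

# Step response of the test system G(s) = 1/(s+1), sampled as if it were
# measured data; in practice this array comes from an experiment.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
step = 1.0 - np.exp(-t)

# Impulse response: discrete derivative of the step response.
g = np.diff(step) / dt

# Frequency response by discrete approximation of the Fourier integral;
# the H-infinity norm is the peak magnitude over frequency.
G = dt * np.fft.rfft(g)
hinf = np.max(np.abs(G))

# Hankel matrix of scaled impulse-response samples; its largest singular
# value approximates the Hankel norm of the system.
h = g * dt
n = 400
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
hankel_norm = np.linalg.svd(H, compute_uv=False)[0]
```

For this system the estimates land near the analytic values (hinf near 1, Hankel norm near 0.5), and the ordering hankel_norm <= hinf that the singular-value bounds require is visible directly in the numbers.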
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low-frequency error is a key factor affecting the accuracy of uncontrolled geometric processing of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. First, we introduce the overall flow of on-orbit low-frequency error analysis and calibration, which includes detection of optical-axis angle variation of the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Second, we use the optical-axis angle change detection method to analyze how the low-frequency error varies. Third, we use relative calibration and information fusion among the star sensors to unify the datum and output high-precision attitude. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model describes well how the low-frequency error varies. The uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is clearly improved after the step-wise calibration.
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures
NASA Astrophysics Data System (ADS)
Liu, Y.; Minnett, P. J.
2014-12-01
Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered the application that imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap acts as another sampling error source. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
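The sampling-error mechanism is easy to demonstrate: when cloud cover is correlated with the field being measured, the clear-sky mean is biased, not merely noisier. A toy numpy illustration, with the field, grid, and clear-sky probabilities all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Reference "true" SST field on a small grid (degrees C): a smooth
# meridional gradient plus noise, standing in for a Level 4 analysis.
ny, nx = 50, 100
truth = 20.0 + 5.0 * np.linspace(-1, 1, ny)[:, None] + rng.normal(0, 0.2, (ny, nx))

# Correlated cloud mask: clouds preferentially cover the warm half of the
# domain, so the sampled mean is systematically biased cold.
p_clear = np.where(truth > 20.0, 0.4, 0.8)
clear = rng.random((ny, nx)) < p_clear

# Sampling error: mean of the cloud-free pixels minus the true mean.
sampling_error = truth[clear].mean() - truth.mean()
```

Because the warm half is sampled half as often, the clear-sky mean undershoots the true mean by several tenths of a degree here, which is the same persistence-driven bias the study maps geographically.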
Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid
2014-01-01
This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 6-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication are reported to have occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with a share of 21.1% and 10% of errors, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. The less-experienced nurses (P=0.04), higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced of them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in reduction of MEs in EDs. PMID:25525391
NASA Astrophysics Data System (ADS)
Georgiopoulos, M.; Kazakos, P.
1987-09-01
We compute the packet error probability induced in a frequency-hopped spread spectrum packet radio network that utilizes first-order Markov frequency hopping patterns. The frequency spectrum is divided into q frequency bins and the packets are divided into M bytes each. Every user in the network sends each of the M bytes of his packet at a frequency bin which is different from the frequency bin used for the previous byte, but equally likely to be any one of the remaining q-1 frequency bins (Markov frequency hopping patterns). Furthermore, different users in the network utilize statistically independent frequency hopping patterns. Provided that K users have simultaneously transmitted their packets on the channel and a receiver has locked on to one of these K packets, we present a method for the computation of P_e(K), i.e., the probability that this packet is incorrectly decoded. Furthermore, we present numerical results (i.e., P_e(K) versus K) for various values of the multiple access interference K, when Reed-Solomon (RS) codes are used for the encoding of packets. Finally, some useful comparisons are made with the packet error probability induced if we assume that the byte errors are independent; based on these comparisons, we can easily evaluate the performance of our spread spectrum system.
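The hopping scheme and collision mechanism described above can be sketched with a Monte Carlo simulation. This is not the paper's analytical method (whose point is that byte errors are *not* independent under Markov patterns, a dependence the direct simulation naturally captures); all names and parameter choices are illustrative.

```python
import random

def packet_error_rate(K, q, M, t, trials=2000, rng=random):
    """Monte Carlo sketch of P_e(K) for Markov frequency hopping: a packet of
    M bytes is lost if more than t of its bytes collide with any of the K-1
    interfering users (t = byte-correction power of an RS code)."""
    def markov_pattern():
        # First bin uniform over q; each subsequent bin uniform over the
        # other q-1 bins (first-order Markov hopping pattern).
        bins = [rng.randrange(q)]
        for _ in range(M - 1):
            nxt = rng.randrange(q - 1)
            if nxt >= bins[-1]:
                nxt += 1
            bins.append(nxt)
        return bins

    lost = 0
    for _ in range(trials):
        ref = markov_pattern()
        others = [markov_pattern() for _ in range(K - 1)]
        hits = sum(any(p[m] == ref[m] for p in others) for m in range(M))
        if hits > t:
            lost += 1
    return lost / trials
```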
Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing
Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao
2015-01-01
Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction time after errors. Cognitive control account assumes that PES depends on error information, whereas orienting account posits that it depends on error frequency. This raises the question how the outcome valence and outcome frequency separably influence the generation of PES. To address this issue, we varied the probability of observation errors (50/50 and 20/80, correct/error) the “partner” committed by employing an observation-execution task and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker-run that was supposedly performed by a ‘partner’, and then performed a flanker-run themselves afterwards. We observed PES in the two error rate conditions. However, electroencephalographic data suggested error-related potentials (oERN and oPe) and rhythmic oscillation associated with attentional process (alpha band) were respectively sensitive to outcome valence and outcome frequency. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the assumption of the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by oERN, whereas the modulation of PES size could be reflected on the alpha band. PMID:25732237
Error Bounds for Quadrature Methods Involving Lower Order Derivatives
ERIC Educational Resources Information Center
Engelbrecht, Johann; Fedotov, Igor; Fedotova, Tanya; Harding, Ansie
2003-01-01
Quadrature methods for approximating the definite integral of a function f(t) over an interval [a,b] are in common use. Examples of such methods are the Newton-Cotes formulas (midpoint, trapezoidal and Simpson methods etc.) and the Gauss-Legendre quadrature rules, to name two types of quadrature. Error bounds for these approximations involve…
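The Newton-Cotes rules named in this (truncated) abstract, together with their classical error bounds, can be checked numerically. The bounds used below are the standard textbook results, not taken from the article itself.

```python
def midpoint(f, a, b, n):
    """Composite midpoint rule; |E| <= (b-a) h^2 max|f''| / 24."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule; |E| <= (b-a) h^2 max|f''| / 12."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    """Composite Simpson rule (n even); |E| <= (b-a) h^4 max|f''''| / 180."""
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))
    even = sum(f(a + i * h) for i in range(2, n, 2))
    return h / 3.0 * (f(a) + f(b) + 4.0 * odd + 2.0 * even)
```

For f(t) = e^t on [0, 1] every derivative is bounded by e, so the bounds are easy to evaluate and the computed errors fall inside them.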
On low-frequency errors of uniformly modulated filtered white-noise models for ground motions
Safak, Erdal; Boore, David M.
1988-01-01
Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise, modulated by the envelope first and then filtered).
An Empirical Point Error Model for TLS Derived Point Clouds
NASA Astrophysics Data System (ADS)
Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin
2016-06-01
The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed in real-world test environments. The a priori precisions of the horizontal (σ_θ) and vertical (σ_α) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σ_θ = ±36.6cc and σ_α = ±17.8cc, respectively. On the other hand, the a priori precision of the range observation (σ_ρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by an empirically developed formula, varying over σ_ρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model were investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
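The propagation step in this abstract (spherical observations with a diagonal a priori covariance mapped through the Jacobian into a Cartesian error ellipsoid, then diagonalized by principal components) can be sketched as below. The function name and the use of radians for the angle precisions are our assumptions; the paper quotes its angular precisions in cc.

```python
import numpy as np

def tls_error_ellipsoid(rho, theta, alpha, s_rho, s_theta, s_alpha):
    """Propagate a priori TLS precisions (range rho; horizontal angle theta
    and vertical angle alpha, both in radians) into a Cartesian covariance via
    the law of variance-covariance propagation, and return the 1-sigma
    error-ellipsoid semi-axes and axis directions (principal components)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    # x = rho ca ct, y = rho ca st, z = rho sa; J = d(x,y,z)/d(rho,theta,alpha)
    J = np.array([
        [ca * ct, -rho * ca * st, -rho * sa * ct],
        [ca * st,  rho * ca * ct, -rho * sa * st],
        [sa,       0.0,            rho * ca],
    ])
    Q = np.diag([s_rho ** 2, s_theta ** 2, s_alpha ** 2])
    C = J @ Q @ J.T                       # Cartesian covariance of the point
    evals, evecs = np.linalg.eigh(C)      # principal components transformation
    return np.sqrt(np.maximum(evals, 0.0)), evecs
```

At theta = alpha = 0 the covariance is diagonal, so the semi-axes reduce to s_rho, rho*s_theta and rho*s_alpha, which makes the propagation easy to verify.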
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are gained, and the data error is analyzed by examining the error frequency and by using the analysis-of-variance method of mathematical statistics. Determination of the accuracy of the measured data and of the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors are also presented. This paper analyzes the measured data based on error frequency and, in a way, provides certain reference elements to promote the development of the garment industry.
Frequency-domain correction of sensor dynamic error for step response.
Yang, Shuang-Long; Xu, Ke-Jun
2012-11-01
To obtain accurate results in dynamic measurements, sensors must have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case, dynamic error correction methods can be applied to the sensor responses to eliminate the effect of their dynamic characteristics. Frequency-domain correction of sensor dynamic error is a common method. Using the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step response calibration experimental data. This is because of the leakage error and invalid FCF values caused by the cyclic extension of the finite-length step input-output data. To solve these problems, data splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain gauge balance to verify its effectiveness. The results show that the adjustment time of the balance step response is shortened to 10 ms (less than 1/30 of that before correction) after frequency-domain correction, and the overshoot falls within 5% (less than 1/10 of that before correction). The dynamic measurement accuracy of the balance is improved significantly. PMID:23206091
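The basic idea of frequency-domain correction from a step response can be sketched as follows: differentiate the step response to estimate the impulse response, transform to get the sensor frequency response H(f), and use FCF = 1/H(f) to equalize measured signals. This is only the textbook core; the paper's actual contributions (data splicing preprocessing against leakage, and FCF interpolation) are not reproduced, and the function names are ours.

```python
import numpy as np

def correction_function(step_output, fs):
    """Estimate a frequency-domain correction function (FCF) from a sensor's
    unit-step response sampled at rate fs: differentiate to approximate the
    impulse response, FFT it to get H(f), and invert."""
    h = np.diff(step_output, prepend=0.0) * fs   # impulse response estimate
    H = np.fft.rfft(h) / fs                      # sensor frequency response
    eps = 1e-12                                  # guard against division by 0
    return 1.0 / (H + eps)

def correct(measured, fcf):
    """Apply the FCF to a measured signal in the frequency domain."""
    Y = np.fft.rfft(measured)
    n = min(len(Y), len(fcf))
    Y[:n] = Y[:n] * fcf[:n]
    return np.fft.irfft(Y, len(measured))
```

For a first-order low-pass sensor the FCF has unit gain at DC and a gain boost that grows with frequency, which is what restores the sharpness of a sluggish step response.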
NASA Astrophysics Data System (ADS)
Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim
2012-12-01
This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.
Superconvergence of the derivative patch recovery technique and a posteriori error estimation
Zhang, Z.; Zhu, J.Z.
1995-12-31
The derivative patch recovery technique developed by Zienkiewicz and Zhu for the finite element method is analyzed. It is shown that, for one-dimensional problems and for two-dimensional problems using tensor-product elements, the patch recovery technique yields superconvergent recovery of the derivatives. Consequently, the error estimator based on the recovered derivative is asymptotically exact.
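The simplest instance of the patch recovery idea, the 1D linear-element case covered by this analysis, can be sketched as follows. Element derivatives are superconvergent at element midpoints, and averaging them onto interior nodes recovers a nodal derivative of higher accuracy than the raw one-sided values. The function name and test function are illustrative.

```python
import numpy as np

def recovered_derivative(x, u):
    """Zienkiewicz-Zhu style derivative recovery for linear elements in 1D:
    the piecewise-constant element derivatives (superconvergent at element
    midpoints) are averaged onto the interior nodes."""
    du = np.diff(u) / np.diff(x)          # constant derivative per element
    return 0.5 * (du[:-1] + du[1:])       # recovered values at interior nodes
```

On a uniform mesh this recovery is exact for quadratics (the raw element derivative is only first-order accurate at a node), which is the superconvergence phenomenon in miniature.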
Online public reactions to frequency of diagnostic errors in US outpatient care
Giardina, Traber Davis; Sarkar, Urmimala; Gourley, Gato; Modi, Varsha; Meyer, Ashley N.D.; Singh, Hardeep
2016-01-01
Background: Diagnostic errors pose a significant threat to patient safety but little is known about public perceptions of diagnostic errors. A study published in BMJ Quality & Safety in 2014 estimated that diagnostic errors affect at least 5% of US adults (or 12 million) per year. We sought to explore online public reactions to media reports on the reported frequency of diagnostic errors in the US adult population. Methods: We searched the World Wide Web for any news article reporting findings from the study. We then gathered all the online comments made in response to the news articles to evaluate public reaction to the newly reported diagnostic error frequency (n=241). Two coders conducted content analyses of the comments and an experienced qualitative researcher resolved differences. Results: Overall, there were few comments made regarding the frequency of diagnostic errors. However, in response to the media coverage, 44 commenters shared personal experiences of diagnostic errors. Additionally, commentary centered on diagnosis-related quality of care as affected by two emergent categories: (1) US health care providers (n=79; 63 commenters) and (2) US health care reform-related policies, most commonly the Affordable Care Act (ACA) and insurance/reimbursement issues (n=62; 47 commenters). Conclusion: The public appears to have substantial concerns about the impact of the ACA and other reform initiatives on the diagnosis-related quality of care. However, policy discussions on diagnostic errors are largely absent from the current national conversation on improving quality and safety. Because outpatient diagnostic errors have emerged as a major safety concern, researchers and policymakers should consider evaluating the effects of policy and practice changes on diagnostic accuracy. PMID:27347474
Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel
NASA Technical Reports Server (NTRS)
Liu, Chia-Liang; Feher, Kamilo
1991-01-01
The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.
NASA Technical Reports Server (NTRS)
Fetterman, Timothy L.; Noor, Ahmed K.
1987-01-01
Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors
NASA Astrophysics Data System (ADS)
Yan, Feifei; Chang, Wenge; Li, Xiangyang
2015-12-01
Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying the time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in 2-D frequency domain and phase compensation in range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results of point targets and extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.
Sparsity-based moving target localization using multiple dual-frequency radars under phase errors
NASA Astrophysics Data System (ADS)
Al Kadry, Khodour; Ahmad, Fauzia; Amin, Moeness G.
2015-05-01
In this paper, we consider moving target localization in urban environments using a multiplicity of dual-frequency radars. Dual-frequency radars offer the benefit of reduced complexity and fast computation time, thereby permitting real-time indoor target localization and tracking. The multiple radar units are deployed in a distributed system configuration, which provides robustness against target obscuration. We develop the dual-frequency signal model for the distributed radar system under phase errors and employ a joint sparse scene reconstruction and phase error correction technique to provide accurate target location and velocity estimates. Simulation results are provided that validate the performance of the proposed scheme under both full and reduced data volumes.
A Derivation of the Unbiased Standard Error of Estimate: The General Case.
ERIC Educational Resources Information Center
O'Brien, Francis J., Jr.
This paper is part of a series of applied statistics monographs intended to provide supplementary reading for applied statistics students. In the present paper, derivations of the unbiased standard error of estimate for both the raw score and standard score linear models are presented. The derivations for raw score linear models are presented in…
Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables
NASA Technical Reports Server (NTRS)
Fenyes, Peter A.; Lust, Robert V.
1989-01-01
Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations of the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods for use with complex finite element formulations and to facilitate their implementation into structural optimization programs using general finite element analysis codes, the semi-analytic method was developed. In this method the matrix ∂K/∂b_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method depends on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when it is used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. Here the accuracy of the semi-analytic method is investigated. A general framework was developed for the error analysis, and it is shown analytically that the errors in the method are entirely accounted for by errors in ∂K/∂b_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
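The semi-analytic method described above can be sketched in a few lines: approximate ∂K/∂b_i by a finite difference, then solve du/db = K⁻¹(∂F/∂b - ∂K/∂b u). The two-spring example and all names below are hypothetical, chosen so the exact sensitivity is known in closed form.

```python
import numpy as np

def semi_analytic_sensitivity(K_of_b, F, b, db=1e-6):
    """Semi-analytic displacement sensitivity: dK/db is approximated by a
    central finite difference, then du/db = K^-1 (dF/db - dK/db u), with
    dF/db = 0 assumed (design-independent load)."""
    K = K_of_b(b)
    u = np.linalg.solve(K, F)                          # static displacements
    dK = (K_of_b(b + db) - K_of_b(b - db)) / (2.0 * db)
    return np.linalg.solve(K, -dK @ u)
```

For two springs of stiffness b in series, K(b) = [[2b, -b], [-b, b]] and F = [0, 1] give u = [1/b, 2/b] exactly, so du/db = [-1/b², -2/b²] provides a clean check on the finite-difference step.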
NASA Astrophysics Data System (ADS)
Wang, Jie; Liang, Xingdong; Chen, Longyong; Ding, Chibiao
2015-01-01
Orthogonal frequency division multiplexing (OFDM) chirp waveform, composed of two successive identical linear frequency modulated subpulses, is a newly proposed orthogonal waveform scheme for multiple-input multiple-output synthetic aperture radar (SAR) systems. However, according to the waveform model, radar systematic error, which introduces phase or amplitude differences between the subpulses of the OFDM waveform, significantly degrades the orthogonality. The impact of radar systematic error on the waveform orthogonality is mainly caused by the systematic nonlinearity rather than by thermal noise or the frequency-dependent systematic error. Due to the influence of the causal filters, the first subpulse leaks into the second one. The leaked signal interacts with the second subpulse in the nonlinear components of the transmitter. This interaction renders a dramatic phase distortion at the beginning of the second subpulse. The resultant distortion, which leads to a phase difference between the subpulses, seriously damages the waveform's orthogonality. This impact of radar systematic error on the waveform orthogonality is addressed, and the impact of the systematic nonlinearity on the waveform is avoided by adding a standby interval between the subpulses. Theoretical analysis is validated by practical experiments based on a C-band SAR system.
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
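The paper's central device, estimating the F-error from the difference between two spline fits with different mesh sizes, can be sketched with SciPy's least-squares spline. The knot placements and function name are illustrative assumptions, not the paper's prescription.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def f_error_estimate(x, y, coarse_knots, fine_knots):
    """Sketch of the F-error estimator: the difference between a coarse-mesh
    and a fine-mesh least-squares cubic-spline fit approximates the
    signal-dependent error of the coarse fit, using no knowledge of the
    signal beyond its smoothness."""
    s_coarse = LSQUnivariateSpline(x, y, coarse_knots, k=3)
    s_fine = LSQUnivariateSpline(x, y, fine_knots, k=3)
    # The fine fit's F-error is much smaller, so the difference is dominated
    # by the F-error of the coarse fit.
    return s_coarse(x) - s_fine(x)
```

On noiseless data the estimate should track the true error of the coarse fit, since the residual discrepancy is just the (smaller) error of the fine fit.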
Error detection and correction for a multiple frequency quaternary phase shift keyed signal
NASA Astrophysics Data System (ADS)
Hopkins, Kevin S.
1989-06-01
A multiple frequency quaternary phase shift keyed (MFQPSK) signaling system was developed and experimentally tested in a controlled environment. To ensure that the quality of the received signal is such that information recovery is possible, error detection/correction (EDC) must be used. The various EDC coding schemes available are reviewed and their application to the MFQPSK signaling system is analyzed. Hamming, Golay, Bose-Chaudhuri-Hocquenghem (BCH), and Reed-Solomon (R-S) block codes as well as convolutional codes are presented and analyzed in the context of specific MFQPSK system parameters. A computer program was developed to compute bit error probabilities as a function of signal-to-noise ratio. Results demonstrate that various EDC schemes are suitable for the MFQPSK signal structure, and that significant performance improvements are possible with the use of certain error correction codes.
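The kind of computation such a program performs can be sketched for block codes: under independent symbol errors with probability p, a code correcting up to t errors in a length-n block fails whenever more than t errors occur. This is the standard bounded-distance-decoding estimate, not the thesis's specific program.

```python
from math import comb

def block_error_prob(n, t, p):
    """Probability that a length-n block is decoded incorrectly when the code
    corrects up to t symbol errors and symbol errors are independent with
    probability p: 1 minus the binomial probability of at most t errors."""
    decoded_ok = sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i)
                     for i in range(t + 1))
    return 1.0 - decoded_ok
```

For example, at p = 0.01 a Hamming(7,4) block (n=7, t=1) fails with probability about 0.002, versus about 0.039 for its 4 data bits sent uncoded, illustrating the coding gain the thesis quantifies.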
NASA Astrophysics Data System (ADS)
Wang, Ben; Zhang, Yimin D.; Qin, Si; Amin, Moeness G.
2016-05-01
In this paper, we propose a nonstationary jammer suppression method for GPS receivers when the signals are sparsely sampled. Missing data samples induce noise-like artifacts in the time-frequency (TF) distribution and ambiguity function of the received signals, which lead to reduced capability and degraded performance in jammer signature estimation and excision. In the proposed method, a data-dependent TF kernel is utilized to mitigate the artifacts, and sparse reconstruction methods are then applied to obtain instantaneous frequency (IF) estimates of the jammers. In addition, an error tolerance of the IF estimate is applied to achieve robust jammer suppression performance in the presence of IF estimation inaccuracy.
Minimizing high spatial frequency residual error in active space telescope mirrors
NASA Astrophysics Data System (ADS)
Gray, Thomas L.; Smith, Matthew W.; Cohan, Lucy E.; Miller, David W.
2009-08-01
The trend in future space telescopes is towards larger apertures, which provide increased sensitivity and improved angular resolution. Lightweight, segmented, rib-stiffened, actively controlled primary mirrors are an enabling technology, permitting large aperture telescopes to meet the mass and volume restrictions imposed by launch vehicles. Such mirrors, however, are limited in the extent to which their discrete surface-parallel electrostrictive actuators can command global prescription changes. Inevitably some amount of high spatial frequency residual error is added to the wavefront due to the discrete nature of the actuators. A parameterized finite element mirror model is used to simulate this phenomenon and determine designs that mitigate high spatial frequency residual errors in the mirror surface figure. Two predominant residual components are considered: dimpling induced by embedded actuators and print-through induced by facesheet polishing. A gradient descent algorithm is combined with the parameterized mirror model to allow rapid trade space navigation and optimization of the mirror design, yielding advanced design heuristics formulated in terms of minimum machinable rib thickness. These relationships produce mirrors that satisfy manufacturing constraints and minimize uncorrectable high spatial frequency error.
A frequency-domain derivation of shot-noise
NASA Astrophysics Data System (ADS)
Rice, Frank
2016-01-01
A formula for shot-noise is derived in the frequency-domain. The derivation is complete and reasonably rigorous while being appropriate for undergraduate students; it models a sequence of random pulses using Fourier sine and cosine series, and requires some basic statistical concepts. The text here may serve as a pedagogic introduction to the spectral analysis of random processes and may prove useful to introduce students to the logic behind stochastic problems. The concepts of noise power spectral density and equivalent noise bandwidth are introduced.
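The punchline of such a derivation is Schottky's formula. A compact frequency-domain statement of the standard result (via Carson's theorem for a Poisson pulse train, not necessarily the article's Fourier-series route) is:

```latex
% Carson's theorem for a Poisson train of pulses q\,f(t) arriving at mean rate \lambda:
S_I(f) = 2\lambda q^2 \lvert F(f)\rvert^2, \qquad
F(f) = \int_{-\infty}^{\infty} f(t)\, e^{-i 2\pi f t}\, dt,\quad F(0)=1 .
% For pulses much shorter than 1/f, F(f) \approx 1, and with mean current \bar{I} = \lambda q:
S_I(f) \approx 2 q \bar{I}.
```

The white (frequency-independent) spectral density 2qĪ is what an equivalent-noise-bandwidth measurement of the kind the abstract mentions would report.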
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is likewise reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030
Correction of phase-error for phase-resolved k-clocked optical frequency domain imaging
NASA Astrophysics Data System (ADS)
Mo, Jianhua; Li, Jianan; de Boer, Johannes F.
2012-01-01
Phase-resolved optical frequency domain imaging (OFDI) has emerged as a promising technique for blood flow measurement in human tissues. Phase stability is essential for this technique to achieve high accuracy in flow velocity measurement. In OFDI systems that use k-clocking for the data acquisition, phase error occurs due to jitter in the data acquisition electronics. We present a statistical analysis of jitter represented as point shifts of the k-clocked spectrum, and demonstrate a real-time phase-error correction algorithm for phase-resolved OFDI. A balanced-detection OFDI system based on a 50 kHz wavelength-swept laser (Axsun Technologies) was developed, centered at 1310 nm. To evaluate the performance of the algorithm, a stationary gold mirror was employed as the sample for phase analysis. Furthermore, we implemented the algorithm for imaging of human skin. Good-quality structural and Doppler images of skin can be observed in real time after phase-error correction. The results show that the algorithm can effectively correct the jitter-induced phase error in the OFDI system.
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
Topological derivatives for fundamental frequencies of elastic bodies
NASA Astrophysics Data System (ADS)
Kobelev, Vladimir
2016-01-01
In this article a new method for topological optimization of fundamental frequencies of elastic bodies, which could be considered as an improvement on the bubble method, is introduced. The method is based on generalized topological derivatives. For a body with different types of inclusion the vector genus is introduced. The dimension of the genus is the number of different elastic properties of the inclusions being introduced. The disturbances of stress and strain fields in an elastic matrix due to a newly inserted elastic inhomogeneity are given explicitly in terms of the stresses and strains in the initial body. The iterative positioning of inclusions is carried out by determination of the preferable position of the new inhomogeneity at the extreme points of the characteristic function. The characteristic function was derived using Eshelby's method. The expressions for optimal ratios of the semi-axes of the ellipse and angular orientation of newly inserted infinitesimally small inclusions of elliptical form are derived in closed analytical form.
Kaldjian, Lauris C; Jones, Elizabeth W; Rosenthal, Gary E; Tripp-Reimer, Toni; Hillis, Stephen L
2006-01-01
BACKGROUND Physician disclosure of medical errors to institutions, patients, and colleagues is important for patient safety, patient care, and professional education. However, the variables that may facilitate or impede disclosure are diverse and lack conceptual organization. OBJECTIVE To develop an empirically derived, comprehensive taxonomy of factors that affect voluntary disclosure of errors by physicians. DESIGN A mixed-methods study using qualitative data collection (structured literature search and exploratory focus groups), quantitative data transformation (sorting and hierarchical cluster analysis), and validation procedures (confirmatory focus groups and expert review). RESULTS Full-text review of 316 articles identified 91 impeding or facilitating factors affecting physicians' willingness to disclose errors. Exploratory focus groups identified an additional 27 factors. Sorting and hierarchical cluster analysis organized the factors into 8 domains. Confirmatory focus groups and expert review relocated 6 factors, removed 2 factors, and modified 4 domain names. The final taxonomy contained 4 domains of facilitating factors (responsibility to patient, responsibility to self, responsibility to profession, responsibility to community) and 4 domains of impeding factors (attitudinal barriers, uncertainties, helplessness, fears and anxieties). CONCLUSIONS A taxonomy of facilitating and impeding factors provides a conceptual framework for a complex field of variables that affect physicians' willingness to disclose errors to institutions, patients, and colleagues. This taxonomy can be used to guide the design of studies to measure the impact of different factors on disclosure, to assist in the design of error-reporting systems, and to inform educational interventions that promote the disclosure of errors to patients. PMID:16918739
Martin, D.L.
1992-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.
Autocorrelation and Error Structure of Rainfall Derived from NEXRAD in Central and South Florida
NASA Astrophysics Data System (ADS)
Pathak, C. S.; Vieux, B. E.
2007-12-01
Motivation for this study comes from the South Florida Water Management District (District), which is responsible for managing water resources in 16 counties over a 46,439-square-kilometer (17,930-square-mile) area. Near-real-time rainfall data are used in the operation of approximately 3,000 kilometers (~1,800 miles) of canals, 22 major pump stations, and 200 water control structures. The spatial extent of the District stretches from Orlando to Key West and from the Gulf Coast to the Atlantic Ocean, and contains major water features including Lake Okeechobee and the Everglades wetlands. Rainfall is a key factor in the water management decisions made by the District in real time and through studies that rely on archival rainfall data derived from radar and rain gauge observations. Rainfall measurements are obtained from a combination of four NEXRAD radars and a rain gauge network comprising 280 active rain gauge stations located in the more populated areas. Four NEXRAD (Next Generation Weather Radar) sites operated by the National Weather Service cover the region. Rain gauges are used for frequency analysis and for adjustment of the radar rainfall products. An optimization study of the rain gauge network is accomplished by removing gauges in areas of excess coverage and by adding or moving rain gauges to gain a more even spatial distribution over the District. Rainfall fields measured at daily and hourly timesteps exhibit autocorrelation, which can affect the network design subject to optimality constraints. This presentation will describe the autocorrelation and error structure found in rainfall measurements derived from rain gauge and NEXRAD data. The data used in the analysis include rain gauge data and NEXRAD rainfall data collected during 1995-2005 at 2 x 2 km resolution. A set of clusters of rain gauges and a regular array of analysis blocks 20 x 20 km in size for the NEXRAD data were used to account for variability of the rainfall processes
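The autocorrelation structure mentioned above is typically summarized with the sample autocorrelation function; a minimal numpy sketch (a toy alternating wet/dry series, not the District's data):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a time series up to max_lag
    (biased estimator, normalized so that lag 0 equals 1)."""
    x = np.asarray(x, float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[: x.size - k], x[k:]) / var
                     for k in range(max_lag + 1)])

# Toy series: a perfectly alternating signal is anticorrelated at lag 1
ac = autocorrelation(np.tile([1.0, -1.0], 4), 2)
```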
Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint
Florita, A.; Hodge, B. M.; Milligan, M.
2012-08-01
The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
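Goodness-of-fit of candidate error distributions, as described above, can be scored with a Kolmogorov-Smirnov distance. A minimal numpy sketch using only a moment-matched normal fit (the study's actual distribution candidates and data are not reproduced):

```python
import math
import numpy as np

def ks_normal(errors):
    """Kolmogorov-Smirnov distance between the empirical CDF of a sample
    of forecast errors and a moment-matched normal fit."""
    x = np.sort(np.asarray(errors, float))
    n = x.size
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / (sigma * math.sqrt(2.0))
    cdf = 0.5 * (1.0 + np.array([math.erf(v) for v in z]))  # normal CDF
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return float(np.maximum(np.abs(cdf - ecdf_hi), np.abs(cdf - ecdf_lo)).max())

rng = np.random.default_rng(42)
ks_gauss = ks_normal(rng.standard_normal(20000))   # normal data: small distance
ks_heavy = ks_normal(rng.laplace(size=20000))      # heavier tails fit worse
```

The same scoring loop can be repeated over several candidate families to rank them, which is the spirit of the goodness-of-fit comparison in the paper.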
ERIC Educational Resources Information Center
Ramsey, Robert J.; Frank, James
2007-01-01
Drawing on a sample of 798 Ohio criminal justice professionals (police, prosecutors, defense attorneys, judges), the authors examine respondents' perceptions regarding the frequency of system errors (i.e., professional error and misconduct suggested by previous research to be associated with wrongful conviction), and wrongful felony conviction.…
NASA Astrophysics Data System (ADS)
Weiner, M. M.
1994-01-01
The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities where one half of the vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.
Estimates of ocean forecast error covariance derived from Hessian Singular Vectors
NASA Astrophysics Data System (ADS)
Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.
2015-05-01
Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual
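The link described above between an SVD of forecast perturbations and the EOFs of the forecast error covariance can be sketched in a few lines; a small random ensemble stands in for the ROMS forecasts, and the reconstruction identity is checked explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble of forecast perturbations: state dimension 50, 8 members
# (a random stand-in for forecasts started from successive analyses)
X = rng.standard_normal((50, 8))
X -= X.mean(axis=1, keepdims=True)

# Left singular vectors of X are the EOFs of the sample forecast error
# covariance; squared singular values / (m - 1) are the EOF variances
U, s, Vt = np.linalg.svd(X, full_matrices=False)
variances = s ** 2 / (X.shape[1] - 1)

# The EOFs and their variances reproduce the sample covariance exactly
C = X @ X.T / (X.shape[1] - 1)
C_rec = (U * variances) @ U.T
```

The leading `variances` give the spectrum of expected forecast error variance in the space spanned by the EOFs, analogous to the singular value spectrum discussed in the abstract.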
Estimation of Error in Western Pacific Geoid Heights Derived from Gravity Data Only
NASA Astrophysics Data System (ADS)
Peters, M. F.; Brozena, J. M.
2012-12-01
The goal of the Western Pacific Geoid estimation project was to generate geoid height models for regions in the Western Pacific Ocean, and formal error estimates for those geoid heights, using all available gravity data and statistical parameters of the quality of the gravity data. Geoid heights were to be determined solely from gravity measurements, as a gravimetric geoid model and error estimates for that model would have applications in oceanography and satellite altimetry. The general method was to remove the gravity field associated with a "lower" order spherical harmonic global gravity model from the regional gravity set; to fit a covariance model to the residual gravity, and then calculate the (residual) geoid heights and error estimates by least-squares collocation fit with residual gravity, available statistical estimates of the gravity and the covariance model. The geoid heights corresponding to the lower order spherical harmonic model can be added back to the heights from the residual gravity to produce a complete geoid height model. As input we requested from NGA all unclassified available gravity data in the western Pacific between 15° to 45° N and 105° to 141°W. The total data set that was used to model and estimate errors in gravimetric geoid comprised an unclassified, open file data set (540,012 stations), a proprietary airborne survey of Taiwan (19,234 stations), and unclassified NAVO SSP survey data (95,111 stations), for official use only. Various programs were adapted to the problem including N.K. Pavlis' HSYNTH program and the covariance fit program GPFIT and least-squares collocation program GPCOL from the GRAVSOFT package (Forsberg and Schering, 2008 version) which were modified to handle larger data sets, but in some regions data were still too numerous. Formulas were derived that could be used to block-mean the data in a statistically optimal sense and still retain the error estimates required for the collocation algorithm. Running the
NASA Technical Reports Server (NTRS)
Kaufmann, D. C.
1976-01-01
The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in ten to the tenth. The effects of maladjustment are demonstrated and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.
Modeling work zone crash frequency by quantifying measurement errors in work zone length.
Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet
2013-06-01
Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety while implementing necessary changes on roadways is an important challenge that traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in the explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses, generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover, it is shown that the use of the traditional NB approach in this context can lead to overestimation of the effect of work zone length on crash occurrence. PMID:23563145
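The core problem, bias in an estimated length effect when the recorded length carries measurement error, can be illustrated with a toy Poisson (rather than negative binomial) log-linear model. The crude binned estimator below is illustrative only and is not the paper's ME/NB method; all values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy log-linear crash model: E[crashes] = exp(b0 + b1 * log(length))
b0, b1 = -1.0, 1.0
n = 20000
length = rng.uniform(1.0, 10.0, n)            # "true" work-zone length
crashes = rng.poisson(np.exp(b0 + b1 * np.log(length)))

def fit_loglinear(x, y, nbins=10):
    """Crude fit of log E[y] = b0 + b1*log(x): bin x by quantiles, then
    regress log of bin-mean y on log of bin-mean x."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, nbins + 1))
    idx = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
    xm = np.array([x[idx == k].mean() for k in range(nbins)])
    ym = np.array([y[idx == k].mean() for k in range(nbins)])
    A = np.column_stack([np.ones(nbins), np.log(xm)])
    return np.linalg.lstsq(A, np.log(ym), rcond=None)[0]

b_clean = fit_loglinear(length, crashes)
# Lengths recorded with multiplicative error (e.g. undocumented schedule
# changes): the estimated length effect is biased relative to the truth
noisy = length * rng.lognormal(0.0, 0.4, n)
b_noisy = fit_loglinear(noisy, crashes)
```

In this classical errors-in-variables setting the slope estimated from noisy lengths is attenuated toward zero, which is one concrete way mis-measured length distorts inference about its safety effect.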
2013-01-01
Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion
NASA Technical Reports Server (NTRS)
Noll, R. J.
1979-01-01
In many of today's telescopes the effects of surface errors on image quality and scattered light are very important. The influence of optical fabrication surface errors on the performance of an optical system is discussed. The methods developed by Hopkins (1957) for aberration tolerancing and Barakat (1972) for random wavefront errors are extended to the examination of mid- and high-spatial frequency surface errors. The discussion covers a review of the basic concepts of image quality, an examination of manufacturing errors as a function of image quality performance, a demonstration of mirror scattering effects in relation to surface errors, and some comments on the nature of the correlation functions. Illustrative examples are included.
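One standard textbook link between surface error and scattered light is the total integrated scatter of a slightly rough mirror, TIS ≈ (4πσ/λ)² at normal incidence. This smooth-surface approximation is offered as background and is not necessarily the exact formulation used in the paper:

```python
import math

def total_integrated_scatter(sigma_rms, wavelength):
    """Fraction of reflected light scattered by a slightly rough mirror:
    TIS = (4*pi*sigma/lambda)^2, normal incidence, smooth-surface limit."""
    return (4.0 * math.pi * sigma_rms / wavelength) ** 2

# e.g. 6.33 nm rms roughness at 633 nm scatters about 1.6% of the light
tis = total_integrated_scatter(6.33e-9, 633e-9)
```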
Lexical Frequency and Third-Graders' Stress Accuracy in Derived English Word Production
ERIC Educational Resources Information Center
Jarmulowicz, Linda; Taran, Valentina L.; Hay, Sarah E.
2008-01-01
This study examined the effects of lexical frequency on children's production of accurate primary stress in words derived with nonneutral English suffixes. Forty-four third-grade children participated in an elicited derived word task in which they produced high-frequency, low-frequency, and nonsense-derived words with stress-changing suffixes…
On the uncertainty of stream networks derived from elevation data: the error propagation approach
NASA Astrophysics Data System (ADS)
Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.
2010-07-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in the open source software for statistical computing R: package geoR is used to fit the variogram, package gstat is used to run the sequential Gaussian simulation, and streams are extracted using the open source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise, usually areas of low local relief and slightly convex (0-10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show high error (H>0.5) in locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error propagation tool should become a standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small data sets with several hundred points. Scripts and data sets used in this article are available on-line via the
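The "information entropy of a Bernoulli trial" used for the stream error map is easy to sketch: per-cell stream probability is the fraction of DEM realizations whose extracted network crosses the cell, and its entropy flags the uncertain cells. A toy example with 4 realizations and 3 cells (illustrative, not the Baranja hill data):

```python
import numpy as np

def bernoulli_entropy(p):
    """Entropy H(p) of a Bernoulli trial, in bits: 0 where stream presence
    is certain (p = 0 or 1), 1 where maximally uncertain (p = 0.5)."""
    p = np.asarray(p, float)
    q = 1.0 - p
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(p * np.log2(p) + q * np.log2(q))
    return np.nan_to_num(h)  # convention: 0 * log(0) = 0

# Stream indicator per (realization, cell): 4 toy realizations, 3 cells
realizations = np.array([[1, 1, 0],
                         [1, 0, 0],
                         [1, 1, 1],
                         [1, 0, 0]])
p_stream = realizations.mean(axis=0)     # [1.0, 0.5, 0.25]
H = bernoulli_entropy(p_stream)          # high H flags imprecise cells
```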
Systematic vertical error in UAV-derived topographic models: Origins and solutions
NASA Astrophysics Data System (ADS)
James, Mike R.; Robson, Stuart
2014-05-01
Unmanned aerial vehicles (UAVs) equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs). However, although such DEMs may achieve centimetric detail, they can also display broad-scale systematic deformation (usually a vertical 'doming') that restricts their wider use. This effect can be particularly apparent in DEMs derived by structure-from-motion (SfM) processing, especially when control point data have not been incorporated in the bundle adjustment process. We illustrate that doming error results from a combination of inaccurate description of radial lens distortion and the use of imagery captured in near-parallel viewing directions. With such imagery, enabling camera self-calibration within the processing inherently leads to erroneous radial distortion values and associated DEM error. Using a simulation approach, we illustrate how existing understanding of systematic DEM error in stereo-pairs (from unaccounted radial distortion) up-scales in typical multiple-image blocks of UAV surveys. For image sets with dominantly parallel viewing directions, self-calibrating bundle adjustment (as normally used with images taken using consumer cameras) will not be able to derive radial lens distortion accurately, and will give associated systematic 'doming' DEM deformation. In the presence of image measurement noise (at levels characteristic of SfM software), and in the absence of control measurements, our simulations display domed deformation with amplitude of ~2 m over horizontal distances of ~100 m. We illustrate the sensitivity of this effect to variations in camera angle and flight height. Deformation will be reduced if suitable control points can be included within the bundle adjustment, but residual systematic vertical error may remain, accommodated by the estimated precision of the control measurements. Doming bias can be minimised by the inclusion of inclined images within the image set, for example
NASA Astrophysics Data System (ADS)
Hordyniec, Pawel; Bosy, Jaroslaw; Rohm, Witold
2015-07-01
Among the new remote sensing techniques, one of the most promising is GNSS meteorology, which provides continuous remote monitoring of tropospheric water vapor in all weather conditions with high temporal and spatial resolution. The Continuously Operating Reference Station (CORS) network and the available meteorological instrumentation and models were scrutinized (we based our analysis on the ASG-EUPOS network in Poland) as a troposphere water vapor retrieval system. This paper shows a rigorous mathematical derivation of Precipitable Water (PW) errors based on the uncertainty propagation method, using all available data-source quality measures (meteorological sensor and model precisions, ZTD estimation error, interpolation discrepancies, and ZWD-to-PW conversion inaccuracies). We analyze both random and systematic errors introduced by indirect measurements and interpolation procedures, and hence estimate the PW system integrity capabilities. The results for PW show that the systematic errors can be kept below the half-millimeter level as long as pressure and temperature are measured at the observation site. In the other case, i.e., with no direct observations, numerical weather model fields (in this study we used the Coupled Ocean Atmospheric Mesoscale Prediction System) serve as the most accurate source of data. The investigated empirical pressure and temperature models, such as GPT2, GPT, UNB3m, and Berg, when introduced into the WV retrieval system, combined bias and random errors exceeding the PW standard level of accuracy (3 mm according to the E-GVAP report). We also found that the pressure interpolation procedure introduces an over 0.5 hPa bias and a 1 hPa standard deviation into the system (important in Zenith Total Delay reduction) and hence has a negative impact on WV estimation quality.
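The ZWD-to-PW conversion and the first-order propagation of its uncertainties can be sketched as follows. The refractivity constants below are typical Bevis-style values, assumed for illustration; the paper's exact constants and full error budget are not reproduced:

```python
import math

# Assumed typical constants for the ZWD -> PW conversion (SI units)
RHO_W = 1000.0   # density of liquid water, kg/m^3
R_V = 461.5      # specific gas constant of water vapour, J/(kg K)
K2P = 0.221      # k2' refractivity coefficient, K/Pa
K3 = 3739.0      # k3 refractivity coefficient, K^2/Pa

def pw_conversion(tm):
    """Dimensionless factor Pi such that PW = Pi * ZWD,
    where tm is the weighted mean temperature of the atmosphere (K)."""
    return 1.0e6 / (RHO_W * R_V * (K2P + K3 / tm))

def pw_with_error(zwd, sigma_zwd, tm, sigma_tm):
    """PW and its 1-sigma error from ZWD (m) and Tm (K),
    via first-order uncertainty propagation."""
    pi = pw_conversion(tm)
    pw = pi * zwd
    dpw_dzwd = pi
    dpw_dtm = zwd * pi * (K3 / tm ** 2) / (K2P + K3 / tm)
    sigma_pw = math.hypot(dpw_dzwd * sigma_zwd, dpw_dtm * sigma_tm)
    return pw, sigma_pw

# e.g. ZWD = 15 cm +/- 4 mm, Tm = 270 K +/- 2 K (hypothetical inputs)
pw, sigma_pw = pw_with_error(0.150, 0.004, 270.0, 2.0)
```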
Frequency, Types, and Potential Clinical Significance of Medication-Dispensing Errors
Bohand, Xavier; Simon, Laurent; Perrier, Eric; Mullot, Hélène; Lefeuvre, Leslie; Plotton, Christian
2009-01-01
INTRODUCTION AND OBJECTIVES: Many dispensing errors occur in the hospital, and these can endanger patients. The purpose of this study was to assess the rate of dispensing errors by a unit dose drug dispensing system, to categorize the most frequent types of errors, and to evaluate their potential clinical significance. METHODS: A prospective study using a direct observation method to detect medication-dispensing errors was used. From March 2007 to April 2007, “errors detected by pharmacists” and “errors detected by nurses” were recorded under six categories: unauthorized drug, incorrect form of drug, improper dose, omission, incorrect time, and deteriorated drug errors. The potential clinical significance of the “errors detected by nurses” was evaluated. RESULTS: Among the 734 filled medication cassettes, 179 errors were detected, corresponding to a total of 7249 correctly filled and omitted unit doses. An overall error rate of 2.5% was found. Errors detected by pharmacists and nurses represented 155 (86.6%) and 24 (13.4%) of the 179 errors, respectively. The most frequent types of errors were improper dose (n = 57, 31.8%) and omission (n = 54, 30.2%). Nearly 45% of the 24 errors detected by nurses had the potential to cause a significant (n = 7, 29.2%) or serious (n = 4, 16.6%) adverse drug event. CONCLUSIONS: Even if none of the errors reached the patients in this study, a 2.5% error rate indicates the need for improving the unit dose drug-dispensing system. Furthermore, it is almost certain that this study failed to detect some medication errors, further arguing for strategies to prevent their recurrence. PMID:19142545
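The headline 2.5% figure is simply errors over unit-dose opportunities; a one-line check using the numbers reported in the abstract:

```python
def error_rate(n_errors, n_unit_doses):
    """Dispensing error rate as a percentage of unit-dose opportunities."""
    return 100.0 * n_errors / n_unit_doses

# Figures reported in the abstract: 179 errors over 7249 unit doses
rate = error_rate(179, 7249)   # ~2.5%
```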
NASA Technical Reports Server (NTRS)
Bryant, W. H.; Hodge, W. F.
1974-01-01
An error analysis program based on an output error estimation method was used to evaluate the effects of sensor and instrumentation errors on the estimation of aircraft stability and control derivatives. A Monte Carlo analysis was performed using simulated flight data for a high performance military aircraft, a large commercial transport, and a small general aviation aircraft for typical cruise flight conditions. The effects of varying the input sequence and combinations of the sensor and instrumentation errors were investigated. The results indicate that both the parameter accuracy and the corresponding measurement trajectory fit error can be significantly affected. Of the error sources considered, instrumentation lags and control measurement errors were found to be most significant.
NASA Astrophysics Data System (ADS)
Roca, R.; Chambon, P.; jobard, I.; Viltard, N.
2012-04-01
Measuring rainfall requires a high density of observations, which, over the whole tropical belt, can only be provided from space. For several decades the availability of satellite observations has greatly increased, and thanks to newly implemented missions like the Megha-Tropiques mission and the forthcoming GPM constellation, measurements from space are becoming available from a set of observing systems. In this work, we focus on rainfall error estimation at the 1°/1-day accumulated scale, a key scale for meteorological and hydrological studies. A novel methodology for quantitative precipitation estimation, named TAPEER (Tropical Amount of Precipitation with an Estimate of ERrors), is introduced; it aims to provide 1°/1-day rain accumulations and associated errors over the whole tropical belt. The approach is based on a combination of infrared imagery from a fleet of geostationary satellites and passive-microwave-derived rain rates from a constellation of low Earth orbiting satellites. A three-stage disaggregation of error into sampling, algorithmic, and calibration errors is performed, and the magnitudes of the three terms are then estimated separately. A dedicated error model is used to evaluate sampling errors, and a forward error propagation approach is used to estimate algorithmic and calibration errors. One of the main findings of this study is the large contribution of the sampling errors and of the algorithmic errors of BRAIN on medium rain rates (2 mm h-1 to 10 mm h-1) to the total error budget.
NASA Astrophysics Data System (ADS)
Melnychuk, O.; Grassellino, A.; Romanenko, A.
2014-12-01
In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable to cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainty along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for the treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at the Vertical Test Stand facility at Fermilab, we estimated the total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for the input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) the Q0 uncertainty increases (decreases) with β1, whereas the Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].
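A first-order sketch of how the β1 dependence enters the Q0 uncertainty, using the single-coupler idealization Q0 = QL(1 + β1) and uncorrelated errors; the paper's full treatment includes correlations and more error sources, and the 2% input uncertainties below are hypothetical:

```python
import math

def q0(q_loaded, beta1):
    """Intrinsic quality factor from loaded Q and input coupling
    (single-coupler idealization: Q0 = QL * (1 + beta1))."""
    return q_loaded * (1.0 + beta1)

def q0_rel_uncertainty(beta1, rel_dql, rel_dbeta):
    """First-order propagation, assuming uncorrelated QL and beta1 errors."""
    sens_beta = beta1 / (1.0 + beta1)  # d(ln Q0)/d(ln beta1)
    return math.sqrt(rel_dql**2 + (sens_beta * rel_dbeta)**2)

# Hypothetical 2% relative errors on QL and beta1, evaluated at beta1 = 1.0
print(round(q0_rel_uncertainty(1.0, 0.02, 0.02), 4))  # -> 0.0224
```

Note how the β1 sensitivity term β1/(1 + β1) grows toward 1 for strong overcoupling, consistent with the Q0 uncertainty increasing with β1 above the optimal coupling range.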
Nie, Xuqing; Li, Shengyi; Hu, Hao; Li, Qi
2014-10-01
Mid-spatial frequency error (MSFR) should be strictly controlled in modern optical systems. As an effective approach to suppressing MSFR, the smoothing polishing (SP) process is not easy to handle because it can be affected by many factors. This paper focuses on the influence of the pad groove, which has not previously been studied. The SP process is introduced, and the important role of the pad groove is explained in detail. The relationship between the contact pressure distribution and the groove features, including groove section type, groove width, and groove depth, is established, and an optimized result is achieved with the finite element method. Different kinds of groove patterns are compared using a rigorously established numerical superposition method. The optimal groove is applied in a verification experiment conducted on a self-developed SP machine. The root mean square value of the MSFR after the SP process is reduced from 2.38 to 0.68 nm, which shows that the selected pad can smooth out the MSFR to a great extent with proper SP parameters, while the newly generated MSFR due to the groove can be suppressed to a very low magnitude. PMID:25322215
Derivative-based scale invariant image feature detector with error resilience.
Mainali, Pradip; Lafruit, Gauthier; Tack, Klaas; Van Gool, Luc; Lauwereins, Rudy
2014-05-01
We present a novel scale-invariant image feature detection algorithm (D-SIFER) using a newly proposed scale-space optimal 10th-order Gaussian derivative (GDO-10) filter, whose impulse response jointly reaches the optimal Heisenberg uncertainty in scale and space (i.e., we minimize the maximum of the two moments). The D-SIFER algorithm using this filter leads to an outstanding quality of image feature detection, with a factor-of-three quality improvement over the state-of-the-art scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) methods, which use second-order Gaussian derivative filters. To reach low computational complexity, we also present a technique approximating the GDO-10 filters with a fixed-length implementation, which is independent of the scale. The final approximation error remains far below the noise margin, providing constant-time, low-cost, but nevertheless high-quality feature detection and registration capabilities. D-SIFER is validated on a real-life hyperspectral image registration application, precisely aligning up to hundreds of successive narrowband color images, despite the strong artifacts (blurring, low-light noise) typically occurring in such delicate optical system setups. PMID:24723627
Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat.
de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A; Kölzsch, Andrea; Prins, Herbert H T; de Boer, W Fred
2015-01-01
The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field were classified correctly for more than 70% of the samples. Data from the 12 s and 2 s intervals could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations
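The distance and turning-angle metrics that the classification trees operate on can be computed directly from successive fixes. A minimal sketch on planar (x, y) coordinates; the study's CART models and thresholds are not reproduced here:

```python
import math

def step_metrics(fixes):
    """Distances and turning angles between successive (x, y) GPS fixes.

    Returns (distances, turning_angles_deg); each angle is the change in
    heading at an interior fix, wrapped into [-180, 180] degrees.
    """
    dists, headings, turns = [], [], []
    for (x0, y0), (x1, y1) in zip(fixes, fixes[1:]):
        dists.append(math.hypot(x1 - x0, y1 - y0))
        headings.append(math.degrees(math.atan2(y1 - y0, x1 - x0)))
    for h0, h1 in zip(headings, headings[1:]):
        turns.append((h1 - h0 + 180.0) % 360.0 - 180.0)
    return dists, turns

# A cow walking east, then turning 90 degrees to head north
d, t = step_metrics([(0, 0), (10, 0), (10, 10)])
print(d, t)  # -> [10.0, 10.0] [90.0]
```

Long steps with small turning angles suggest Walking, while short steps with frequent direction changes suggest Foraging; the actual class boundaries were learned from the behavioural observations.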
NASA Astrophysics Data System (ADS)
Duan, Beiping; Zheng, Zhoushun; Cao, Wen
2016-08-01
In this paper, we revisit two spectral approximations, truncated approximation and interpolation, for the Caputo fractional derivative. The two approaches were studied for approximating the Riemann-Liouville (R-L) fractional derivative by Chen et al. and Zayernouri et al., respectively, in their most recent work. For the truncated approximation, the reconsideration partly arises from the difference between the fractional derivative in the R-L sense and the Caputo sense: the Caputo fractional derivative requires higher regularity of the unknown than the R-L version. Another reason for the reconsideration is that we distinguish the differential order of the unknown from the index of the Jacobi polynomials, which was not done in the previous work. We also provide a way to choose the index when facing multi-order problems. By using the generalized Hardy inequality, the gap between the weighted Sobolev space involving the Caputo fractional derivative and the classical weighted space is bridged; the optimal projection error is then derived in the non-uniformly Jacobi-weighted Sobolev space, and the maximum absolute error is presented as well. For the interpolation, no analysis of the interpolation error was given in the previous work; here we derive the interpolation error in the non-uniformly Jacobi-weighted Sobolev space by constructing a fractional inverse inequality. Combined with a collocation method, the approximation technique is applied to solve fractional initial-value problems (FIVPs). Numerical examples are also provided to illustrate the effectiveness of this algorithm.
Rieche, Marie; Komenský, Tomás; Husar, Peter
2011-01-01
Radio Frequency Identification (RFID) systems in healthcare enable contact-free identification and tracking of patients, medical equipment and medication. Thereby, patient safety will be improved and costs as well as medication errors will be reduced considerably. However, the application of RFID and other wireless communication systems has the potential to cause harmful electromagnetic disturbances in sensitive medical devices. This risk mainly depends on the transmission power and the method of data communication. In this contribution we point out the reasons for such incidents and propose ways to overcome these problems. To this end, a novel modulation and transmission technique called Gaussian Derivative Frequency Modulation (GDFM) is developed. Moreover, we carry out measurements to compare the interference properties of different modulation schemes with those of our GDFM. PMID:22254771
NASA Technical Reports Server (NTRS)
Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)
2001-01-01
Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that, in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the errors of the GOES winds range from approximately 3 to 10 m/s and generally increase with height. However, taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that depend on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A
The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong
2011-01-01
MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters due to pixel-level (scene-dependent) radiometric errors as well as other tractable error sources.
NASA Technical Reports Server (NTRS)
Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo
2016-01-01
The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer [1]). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link [2]. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop
NASA Astrophysics Data System (ADS)
Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.
2011-12-01
Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates such as vegetation height, snow depth, and floodplain boundaries rely on the accurate creation of digital terrain models (DTM). Because an accurate DTM is essential when using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots surveyed at 0.5 m spacing in a semi-arid catchment were used to train the Random Forests algorithm, along with a series of 35 variables, to spatially predict vertical error within a LiDAR-derived DTM. The final model was used to predict the combined error in snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows for a statistical quantification of the spatially distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
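The snow-depth differencing step, with a naive propagation of the predicted DTM error, can be sketched as follows. This assumes independent, equal-magnitude vertical errors in the snow-on and snow-free surfaces; the Random-Forests-derived error surface itself is not reproduced, and the numbers are illustrative:

```python
import math

def snow_depth_grid(snow_on, snow_free, dtm_error):
    """Per-cell snow depth and propagated depth error from two elevation grids.

    snow_on, snow_free: cell elevations (m); dtm_error: predicted vertical
    error of the DTM in each cell (m), e.g. from a trained error model.
    Assuming both surfaces carry independent errors of this magnitude,
    the depth error is sqrt(2) times the per-surface error.
    """
    depth = [a - b for a, b in zip(snow_on, snow_free)]
    err = [math.sqrt(2.0) * e for e in dtm_error]
    return depth, err

depth, err = snow_depth_grid([101.2, 100.9], [100.0, 100.0], [0.10, 0.15])
print([round(d, 2) for d in depth])  # -> [1.2, 0.9]
```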
Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.
2015-01-01
Purpose To report the methodology and findings of a large scale investigation of burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults, in the UK. Methods UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results Fifty-four percent of participants aged 40–69 years had refractive error. Specifically 27% had myopia (4% high myopia), which was more common amongst younger people, those of higher socio-economic status, higher educational attainment, or of White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years increasing to 46% at 65–69 years), was higher in women and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions Refractive error is a significant public health issue for the UK and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771
Error correction coding for frequency-hopping multiple-access spread spectrum communication systems
NASA Technical Reports Server (NTRS)
Healy, T. J.
1982-01-01
A communication system which would effect channel coding for frequency-hopped multiple-access is described. It is shown that in theory coding can increase the spectrum utilization efficiency of a system with mutual interference to 100 percent. Various coding strategies are discussed and some initial comparisons are given. Some of the problems associated with implementing the type of system described here are discussed.
An analysis of perceptual errors in reading mammograms using quasi-local spatial frequency spectra.
Mello-Thoms, C; Dunn, S M; Nodine, C F; Kundel, H L
2001-09-01
In this pilot study the authors examined areas on a mammogram that attracted the visual attention of experienced mammographers and mammography fellows, as well as areas that were reported to contain a malignant lesion, and, based on their spatial frequency spectrum, they characterized these areas by the type of decision outcome that they yielded: true-positives (TP), false-positives (FP), true-negatives (TN), and false-negatives (FN). Five 2-view (craniocaudal and medial-lateral oblique) mammogram cases were examined by 8 experienced observers, and the eye position of the observers was tracked. The observers were asked to report the location and nature of any malignant lesions present in the case. The authors analyzed each area in which either the observer made a decision or in which the observer had prolonged (>1,000 ms) visual dwell using wavelet packets, and characterized these areas in terms of the energy contents of each spatial frequency band. It was shown that each decision outcome is characterized by a specific profile in the spatial frequency domain, and that these profiles are significantly different from one another. As a consequence of these differences, the profiles can be used to determine which type of decision a given observer will make when examining the area. Computer-assisted perception correctly predicted up to 64% of the TPs made by the observers, 77% of the FPs, and 70% of the TNs. PMID:11720333
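The band-energy profiling described above can be illustrated with a single Haar analysis step in one dimension; the study used full 2-D wavelet packets over image regions, so this is only a structural sketch of how energy splits into low- and high-spatial-frequency bands:

```python
import math

def haar_band_energies(signal):
    """Energy in the low- and high-frequency bands after one Haar
    analysis step on an even-length 1-D signal.

    A minimal stand-in for the wavelet-packet band energies used to
    profile mammogram regions; real analyses use 2-D packets and
    several decomposition levels.
    """
    s = 1.0 / math.sqrt(2.0)
    lo = [s * (a + b) for a, b in zip(signal[0::2], signal[1::2])]
    hi = [s * (a - b) for a, b in zip(signal[0::2], signal[1::2])]
    return sum(x * x for x in lo), sum(x * x for x in hi)

# A constant patch has all of its energy in the low band
print(tuple(round(e, 6) for e in haar_band_energies([2.0, 2.0, 2.0, 2.0])))  # -> (16.0, 0.0)
```

The transform is orthogonal, so the two band energies always sum to the energy of the input; a decision-outcome "profile" in the paper's sense is the vector of such band energies across all packet nodes.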
Ocean data assimilation with background error covariance derived from OGCM outputs
NASA Astrophysics Data System (ADS)
Fu, Weiwei; Zhou, Guangqing; Wang, Huijun
2004-04-01
The background error covariance plays an important role in modern data assimilation and analysis systems by determining the spatial spreading of information in the data. A novel method based on model output is proposed to estimate background error covariance for use in Optimum Interpolation. At every model level, anisotropic correlation scales are obtained that give a more detailed description of the spatial correlation structure. Furthermore, the impact of the background field itself is included in the background error covariance. The methodology of the estimation is presented and the structure of the covariance is examined. The results of 20-year assimilation experiments are compared with observations from TOGA-TAO (The Tropical Ocean-Global Atmosphere-Tropical Atmosphere Ocean) array and other analysis data.
NASA Astrophysics Data System (ADS)
Mohamed, Khairi Ashour; Pap, Laszlo
1994-05-01
This paper is concerned with the performance analysis of frequency-hopped packet radio networks with random signal levels. We assume that a hit from an interfering packet causes a symbol error if and only if it carries enough energy to exceed the energy received from the wanted signal. The interdependence between symbol errors of an arbitrary packet is taken into consideration through the joint probability generating function of the so-called effective multiple-access interference. Slotted networks, with both random and deterministic hopping patterns, are considered in the case of both synchronous and asynchronous hopping. A general closed-form expression is given for the packet capture probability in the case of Reed-Solomon errors-only decoding. After introducing a general description method, the following examples are worked out in detail: (1) networks with a random spatial distribution of stations (a model for mobile packet radio networks); (2) networks operating in slow-fading channels; (3) networks with different power levels chosen randomly according to either a discrete or a continuous probability distribution (created captures).
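The capture condition above (a hit causes a symbol error only if the interferer's energy exceeds the wanted signal's) lends itself to a quick Monte Carlo check. Here both levels are drawn from the same exponential (Rayleigh-fading power) distribution, an assumption not taken from the paper, under which symmetry puts the per-hit capture probability at exactly 1/2:

```python
import random

def capture_probability(trials=100_000, seed=1):
    """Estimate the fraction of hits in which the wanted signal's energy
    exceeds the interferer's, with both drawn i.i.d. exponential
    (i.e., Rayleigh-fading received powers of equal mean)."""
    rng = random.Random(seed)
    wins = sum(rng.expovariate(1.0) > rng.expovariate(1.0)
               for _ in range(trials))
    return wins / trials

# By symmetry the estimate converges to 0.5
print(capture_probability())
```

Unequal mean powers, discrete power levels, or fading correlations break the symmetry, which is exactly the regime the paper's generating-function analysis handles in closed form.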
Stable radio frequency phase delivery by rapid and endless post error cancellation.
Wu, Zhongle; Dai, Yitang; Yin, Feifei; Xu, Kun; Li, Jianqiang; Lin, Jintong
2013-04-01
We propose and demonstrate a phase stabilization method for transferring and downconverting a radio frequency (RF) signal from a remote antenna to a center station via a radio-over-fiber (ROF) link. Unlike previous phase-locked-loop-based schemes, we post-correct any phase fluctuation by mixing during the downconversion process at the center station. Rapid and endless operation is predicted. The ROF technique transfers the received RF signal directly, which reduces the electronic complexity at the antenna end. The proposed scheme is experimentally demonstrated, with a phase-fluctuation compression factor of about 200. The theory and performance are also discussed. PMID:23546256
Systematic Error in UAV-derived Topographic Models: The Importance of Control
NASA Astrophysics Data System (ADS)
James, M. R.; Robson, S.; d'Oleire-Oltmanns, S.
2014-12-01
UAVs equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs) for a wide variety of geoscience applications. Image processing and DEM-generation is being facilitated by parallel increases in the use of software based on 'structure from motion' algorithms. However, recent work [1] has demonstrated that image networks from UAVs, for which camera pointing directions are generally near-parallel, are susceptible to producing systematic error in the resulting topographic surfaces (a vertical 'doming'). This issue primarily reflects error in the camera lens distortion model, which is dominated by the radial K1 term. Common data processing scenarios, in which self-calibration is used to refine the camera model within the bundle adjustment, can inherently result in such systematic error via poor K1 estimates. Incorporating oblique imagery into such data sets can mitigate error by enabling more accurate calculation of camera parameters [1]. Here, using a combination of simulated image networks and real imagery collected from a fixed wing UAV, we explore the additional roles of external ground control and the precision of image measurements. We illustrate similarities and differences between a variety of structure from motion software, and underscore the importance of well distributed and suitably accurate control for projects where a demonstrated high accuracy is required. [1] James & Robson (2014) Earth Surf. Proc. Landforms, 39, 1413-1420, doi: 10.1002/esp.3609
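The radial model whose K1 term drives the doming is simple to state. A sketch of the ideal-to-distorted mapping with the radial term only; the K1 value is illustrative, not taken from the paper:

```python
def radial_distort(x, y, k1):
    """Radial lens distortion, K1 term only: maps an ideal (undistorted)
    normalized image point to its distorted position."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2
    return x * f, y * f

# A mis-estimated K1 displaces points by -K1 * r**3, i.e. increasingly
# toward the image corners; this is the mechanism behind the systematic
# 'doming' of self-calibrated SfM topography.
for r in (0.2, 0.5, 1.0):
    xd, _ = radial_distort(r, 0.0, k1=-0.1)
    print(round(r - xd, 4))  # -> 0.0008, then 0.0125, then 0.1
```

Because near-parallel UAV image networks constrain K1 weakly, a residual K1 error of this kind projects into a smooth, bowl- or dome-shaped DEM error, which oblique views and good ground control help suppress.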
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
NASA Astrophysics Data System (ADS)
Rao, Kota S.; Al Jassar, Hala K.
2010-09-01
The aim of this paper is to analyze the errors in Digital Elevation Models (DEMs) derived through repeat-pass SAR interferometry (InSAR). Out of 29 ASAR images available to us, 8 are selected for this study, forming a unique data set of 7 InSAR pairs with a single master image. The perpendicular component of the baseline (B⊥) varies between 200 and 400 m to generate good-quality DEMs. The temporal baseline (T) varies from 35 days to 525 days to expose the effect of temporal decorrelation. All the DEMs would be expected to be spatially similar to each other within the noise limits; however, they differ considerably from one another. The 7 DEMs are compared with the SRTM DEM for the estimation of errors. The spatial and temporal distribution of errors in the DEMs is analyzed through several case studies. Spatial and temporal variability of precipitable water vapour (PWV) is analysed; PWV corrections to the DEMs are implemented and found to have no significant effect, and the reasons are explained. Temporal decorrelation of phases and soil moisture variations appear to influence the accuracy of the derived DEMs. It is suggested that installing a number of corner reflectors (CRs) and using the Permanent Scatterer approach may improve the accuracy of the results in desert test sites.
NASA Astrophysics Data System (ADS)
Kim, Do-Hyung; Cho, Janghyun; Moon, Hyungbae; Jeon, Sungbin; Park, No-Cheol; Yang, Hyunseok; Park, Kyoung-Su; Park, Young-Pil
2013-09-01
Optimized image restoration is suggested for angular-multiplexing-page-based holographic data storage. To improve the bit error rate (BER), an extended high-frequency enhancement filter is recalculated from the point spread function (PSF) and a Gaussian mask as the image restoration filter. Using the extended image restoration filter, the proposed system reduces the number of processing steps compared with the image upscaling method and provides better performance in BER and SNR. Numerical simulations and experiments were performed to verify the proposed method. The proposed system exhibited a marked improvement in BER from 0.02 to 0.002 for a Nyquist factor of 1.1, and from 0.006 to 0 for a Nyquist factor of 1.2. Moreover, calculation was more than 3 times faster than image restoration with PSF upscaling, owing to reductions in the number of processing steps and the computational load.
Error assessment of satellite-derived lead fraction in the Arctic
NASA Astrophysics Data System (ADS)
Ivanova, Natalia; Rampal, Pierre; Bouillon, Sylvain
2016-03-01
Leads within consolidated sea ice control heat exchange between the ocean and the atmosphere during winter, thus constituting an important climate parameter. These narrow elongated features occur when sea ice fractures under the action of wind and currents, reducing the local mechanical strength of the ice cover, which in turn impacts the sea ice drift pattern. This creates a high demand for a high-quality lead fraction (LF) data set for sea ice model evaluation and initialization, and for the assimilation of such data in regional models. In this context, an available LF data set retrieved from satellite passive microwave observations (Advanced Microwave Scanning Radiometer - Earth Observing System, AMSR-E), which has provided pan-Arctic, light- and cloud-independent daily coverage since 2002, is of great value. In this study, errors in this data set are quantified using accurate LF estimates retrieved from Synthetic Aperture Radar (SAR) images with a threshold technique. A consistent overestimation of LF by a factor of 2-4 is found in the AMSR-E LF product. It is shown that a simple adjustment of the upper tie point used in the method to estimate the LF can reduce the pixel-wise error by a factor of 2 on average. Applying such an adjustment to the full data set may thus significantly increase the quality and value of the original data set.
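The effect of moving the upper tie point can be sketched with a linear tie-point scaling. This is illustrative only: the operational AMSR-E retrieval differs in detail, and the tie-point values below are made up:

```python
def lead_fraction(values, tie_lo, tie_hi):
    """Lead fraction per pixel by linear scaling between two tie points,
    clipped to [0, 1]; a sketch of the tie-point idea in passive-microwave
    LF retrieval."""
    out = []
    for v in values:
        f = (v - tie_lo) / (tie_hi - tie_lo)
        out.append(min(1.0, max(0.0, f)))
    return out

obs = [0.0, 0.5, 1.0, 2.0]
print(lead_fraction(obs, tie_lo=0.0, tie_hi=2.0))  # -> [0.0, 0.25, 0.5, 1.0]
# Raising the upper tie point shrinks every retrieved fraction, the kind of
# adjustment that counters the factor 2-4 overestimation found against SAR:
print(lead_fraction(obs, tie_lo=0.0, tie_hi=4.0))  # -> [0.0, 0.125, 0.25, 0.5]
```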
NASA Astrophysics Data System (ADS)
Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.
2011-01-01
Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater errors; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS timing accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS timing solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on timing solutions for fixed (static) and mobile (moving) situations are presented. The greatest timing accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. The mobile GPS timing solutions are most
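The elevation dependence of ionospheric delay that motivates the mask trade-off can be sketched with the standard single-layer (thin-shell) obliquity factor. The 350 km shell height and the 5 m vertical delay below are common illustrative assumptions, not values from the paper:

```python
import math

def slant_delay(vertical_delay_m, elevation_deg, shell_height_km=350.0):
    """Map a vertical ionospheric delay to the slant delay along a ray at
    the given elevation angle, via the thin-shell obliquity factor."""
    re_km = 6371.0  # mean Earth radius
    e = math.radians(elevation_deg)
    sin_z = re_km * math.cos(e) / (re_km + shell_height_km)  # sine of zenith angle at the shell
    return vertical_delay_m / math.sqrt(1.0 - sin_z ** 2)

print(round(slant_delay(5.0, 90.0), 1))  # -> 5.0
print(round(slant_delay(5.0, 5.0), 1))   # -> 15.2
```

A satellite near the horizon thus sees roughly three times the vertical delay, which is why low-elevation signals are often masked out even though discarding them worsens the DOP.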
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon
1990-01-01
A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
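The plane-fit idea behind such objective analyses can be sketched in a few lines (an illustrative reconstruction, not the authors' code): fitting u(x, y) = a + b·x + c·y through a triangle of stations yields the horizontal gradients directly, and with more than three stations the same expression becomes a least-squares fit.

```python
import numpy as np

def horizontal_gradients(xy_km, u_obs):
    """Fit u(x, y) = a + b*x + c*y through station observations and return
    (b, c), the horizontal gradient of the observed quantity.

    xy_km: (n, 2) station coordinates in km; u_obs: (n,) observed values.
    With exactly three stations (a profiler triangle) the plane is exact;
    more stations give a least-squares plane.
    """
    G = np.column_stack([np.ones(len(u_obs)), xy_km[:, 0], xy_km[:, 1]])
    coeff, *_ = np.linalg.lstsq(G, u_obs, rcond=None)
    return coeff[1], coeff[2]   # du/dx, du/dy per km
```

Because the gradient comes from small differences between noisy station values, small observation errors map into large relative errors in (b, c) — the sensitivity the abstract quantifies.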
Austin Chalk fracture mapping using frequency data derived from seismic data
NASA Astrophysics Data System (ADS)
Najmuddin, Ilyas Juzer
Frequency amplitude spectra derived from P-wave seismic data can be used to construct a fracture indicator, which in turn can delineate fracture zones in subsurface layers. Fractures with no vertical offset are difficult to map on seismic sections. Fracturing changes the rock properties, and therefore the attributes of seismic data reflecting off the fractured interface and of data passing through the fractured layers. Fractures scatter seismic energy reflected from the fractured layer and attenuate the higher frequencies in seismic data preferentially relative to the lower frequencies. Consequently, the amplitude spectrum shifts towards lower frequencies when a spectrum from a time window above the fractured layer is compared with one from below it. This shift in the frequency spectra can be derived from seismic data and used as an indicator of fracturing. A method is developed to calculate a parameter, t*, that measures this change in the frequency spectra for small time windows (100 ms) above and below the fractured layer. The Austin Chalk in South Central Texas is a fractured layer, and it produces hydrocarbons from fracture zones within the layer ("sweet spots"). 2D and 3D P-wave seismic data from Burleson and Austin Counties in Texas are used to derive the t* parameter. Case studies are presented for 2D data from Burleson County and 3D data from Austin County. The t* parameter mapped on the 3D data shows a predominant fracture trend parallel to strike, and the fracture zones correlate well with the faults interpreted on the top of the Austin Chalk reflector. Production data in Burleson County (Giddings Field) are used as a proxy for fracturing; values of t* mapped on the 2D data correlate well with the cumulative production map presented in this study.
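The abstract does not spell out how t* is computed; a common construction (assumed here) is the spectral-ratio method, in which preferential attenuation of the form exp(-π·f·Δt*) makes the log spectral ratio of the two windows linear in frequency, so Δt* falls out of a straight-line fit:

```python
import numpy as np

def delta_t_star(win_above, win_below, dt, fmin=10.0, fmax=60.0):
    """Estimate differential attenuation t* by the spectral-ratio method.

    Standard attenuation model (an assumption, not necessarily the
    dissertation's exact formulation):
        ln(A_below(f) / A_above(f)) = c - pi * f * dt_star,
    so dt_star is recovered from the slope of the log spectral ratio.
    win_above / win_below: equal-length time windows (e.g. 100 ms) from
    above and below the target layer; dt: sample interval in seconds.
    """
    f = np.fft.rfftfreq(len(win_above), dt)
    A_above = np.abs(np.fft.rfft(win_above))
    A_below = np.abs(np.fft.rfft(win_below))
    band = (f >= fmin) & (f <= fmax) & (A_above > 0)
    slope, _ = np.polyfit(f[band], np.log(A_below[band] / A_above[band]), 1)
    return -slope / np.pi
```

A larger Δt* below the reflector than in unfractured surroundings would then flag a candidate fracture zone.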
Suzuki, Hirokazu; Kobayashi, Jyumpei; Wada, Keisuke; Furukawa, Megumi; Doi, Katsumi
2015-01-01
Thermostability is an important property of enzymes utilized for practical applications because it allows long-term storage and use as catalysts. In this study, we constructed an error-prone strain of the thermophile Geobacillus kaustophilus HTA426 and investigated thermoadaptation-directed enzyme evolution using the strain. A mutation frequency assay using the antibiotics rifampin and streptomycin revealed that G. kaustophilus had substantially higher mutability than Escherichia coli and Bacillus subtilis. The predominant mutations in G. kaustophilus were A·T→G·C and C·G→T·A transitions, implying that the high mutability of G. kaustophilus was attributable in part to high-temperature-associated DNA damage during growth. Among the genes that may be involved in DNA repair in G. kaustophilus, deletions of the mutSL, mutY, ung, and mfd genes markedly enhanced mutability. These genes were subsequently deleted to construct an error-prone thermophile that showed much higher (700- to 9,000-fold) mutability than the parent strain. The error-prone strain was auxotrophic for uracil because it was deficient in the intrinsic pyrF gene. Although the strain harboring Bacillus subtilis pyrF was also essentially auxotrophic, cells became prototrophic after 2 days of culture under uracil starvation, generating B. subtilis PyrF variants with an enhanced half-denaturation temperature of >10°C. These data suggest that this error-prone strain is a promising host for thermoadaptation-directed evolution to generate thermostable variants from thermolabile enzymes. PMID:25326311
NASA Astrophysics Data System (ADS)
Congedo, Giuseppe
2015-04-01
The measurement of frequency shifts for light beams exchanged between two test masses nearly in free fall is at the heart of gravitational-wave detection. It is envisaged that the derivative of the frequency shift is in fact limited by differential forces acting on those test masses. We calculate the derivative of the frequency shift with a fully covariant, gauge-independent and coordinate-free method. This method is general and does not require a congruence of nearby beams' null geodesics as done in previous work. We show that the derivative of the parallel transport is the only means by which gravitational effects show up in the frequency shift. This contribution is given as an integral of the Riemann tensor, the only physical observable of curvature, along the beam's geodesic. The remaining contributions are the difference of velocities, the difference of nongravitational forces, and finally fictitious forces, either locally at the test masses or nonlocally integrated along the beam's geodesic. As an application relevant to gravitational-wave detection, we work out the frequency shift in the local Lorentz frame of nearby geodesics.
GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm
NASA Technical Reports Server (NTRS)
Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.
2003-01-01
The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit-averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 ozone algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins bands, the TOMS algorithm uses differential absorption between a pair of wavelengths, exploiting the local structure as well as the background continuum. This makes the TOMS algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly, reaching +5% in normalized radiance at 331 nm relative to 385 nm by mid-2000. The 1b detector appears to be quite well behaved throughout this time period.
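The pair-wavelength technique and its calibration sensitivity can be illustrated with the conventional TOMS N-value; this is a schematic sketch, not the Version 8 algorithm itself:

```python
import numpy as np

def n_value(radiance, irradiance):
    """TOMS N-value: N = -100 * log10(I/F), with I the backscattered
    radiance and F the solar irradiance."""
    return -100.0 * np.log10(radiance / irradiance)

def pair_difference(I_abs, I_ref, F_abs, F_ref):
    """Pair quantity for differential absorption: N(absorbed channel)
    minus N(weakly absorbed reference channel). A multiplicative
    calibration error eps in the absorbed channel shifts this quantity
    by -100*log10(1 + eps), independent of the scene."""
    return n_value(I_abs, F_abs) - n_value(I_ref, F_ref)
```

A +5% drift in normalized radiance of one channel, like the drift estimated here at 331 nm relative to 385 nm, shifts the pair difference by -100·log10(1.05) ≈ -2.1 N-units regardless of scene, which is why such drift feeds directly into the retrieved ozone.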
Skutan, Stefan; Aschenbrenner, Philipp
2012-12-01
Components with extraordinarily high analyte contents, for example copper metal from wires or plastics stabilized with heavy metal compounds, are presumed to be a crucial source of errors in refuse-derived fuel (RDF) analysis. In order to study the error generation of those 'analyte carrier components', synthetic samples spiked with defined amounts of carrier materials were mixed, milled in a high speed rotor mill to particle sizes <1 mm, <0.5 mm and <0.2 mm, respectively, and analyzed repeatedly. Copper (Cu) metal and brass were used as Cu carriers, three kinds of polyvinylchloride (PVC) materials as lead (Pb) and cadmium (Cd) carriers, and paper and polyethylene as bulk components. In most cases, samples <0.2 mm delivered good recovery rates (rec), and low or moderate relative standard deviations (rsd), i.e. metallic Cu 87-91% rec, 14-35% rsd, Cd from flexible PVC yellow 90-92% rec, 8-10% rsd and Pb from rigid PVC 92-96% rec, 3-4% rsd. Cu from brass was overestimated (138-150% rec, 13-42% rsd), Cd from flexible PVC grey underestimated (72-75% rec, 4-7% rsd) in <0.2 mm samples. Samples <0.5 mm and <1 mm spiked with Cu or brass produced errors of up to 220% rsd (<0.5 mm) and 370% rsd (<1 mm). In the case of Pb from rigid PVC, poor recoveries (54-75%) were observed in spite of moderate variations (rsd 11-29%). In conclusion, time-consuming milling to <0.2 mm can reduce variation to acceptable levels, even given the presence of analyte carrier materials. Yet, the sources of systematic errors observed (likely segregation effects) remain uncertain. PMID:23027034
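The recovery and relative standard deviation figures quoted above follow from replicate analyses of spiked samples; a minimal sketch of the presumed definitions:

```python
import numpy as np

def recovery_and_rsd(measured, spiked_content):
    """Recovery (%) = mean of replicate results / known spiked content * 100.
    rsd (%) = sample standard deviation / mean of replicates * 100."""
    m = np.asarray(measured, dtype=float)
    rec = m.mean() / spiked_content * 100.0
    rsd = m.std(ddof=1) / m.mean() * 100.0
    return rec, rsd
```

Large rsd values at coarser particle sizes (up to 370% for <1 mm samples containing Cu carriers) then reflect the chance of a single metal particle landing, or not landing, in the analytical aliquot.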
NASA Astrophysics Data System (ADS)
Alstad, K. P.; Venterea, R. T.; Tan, S. M.; Saad, N.
2015-12-01
Understanding chamber-based soil flux model fitting and measurement error is key to scaling soil GHG emissions and resolving the primary uncertainties in climate and management feedbacks at regional scales. One key challenge is the selection of the correct empirical model applied to soil flux rate analysis in chamber-based experiments. Another is the characterization of error in the chamber measurement itself. Traditionally, most chamber-based N2O and CH4 measurements and model derivations have used discrete sampling for GC analysis and have been conducted using extended chamber deployment periods (DP), which are expected to result in substantial alteration of the pre-deployment flux. The development of high-precision, high-frequency Cavity Ring-Down Spectroscopy (CRDS) analyzers has advanced the science of soil flux analysis by facilitating much shorter DPs and, in theory, less chamber-induced suppression of the soil-atmosphere diffusion gradient. As well, a new software tool developed by Picarro (the "Soil Flux Processor" or "SFP") links the power of CRDS technology with an easy-to-use interface that features flexible sample-ID and run schemes and provides real-time monitoring of chamber accumulations and environmental conditions. The SFP also includes a sophisticated flux analysis interface that offers user-defined model selection, including three predominant fit algorithms as defaults, and an open-code interface for user-composed algorithms. The SFP is designed to couple with the Picarro G2508 system, an analyzer that simplifies soil flux studies by simultaneously measuring the primary GHG species: N2O, CH4, CO2 and H2O. In this study, Picarro partners with the USDA-ARS Soil & Water Management Research Unit (R. Venterea, St. Paul) to examine the degree to which the high-precision, high-frequency Picarro analyzer allows for much shorter DPs in chamber-based flux analysis and, in theory, less chamber-induced suppression of the soil
NASA Technical Reports Server (NTRS)
Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.
2011-01-01
Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
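The abstract does not state its trend-detection formula, but studies of this kind commonly use the approximation of Weatherhead et al. (1998), which makes the roles of natural variability, autocorrelation, and trend magnitude explicit; a sketch under that assumption:

```python
import math

def years_to_detect(trend_per_year, noise_sd, lag1_autocorr):
    """Approximate years of data needed to detect a linear trend at the
    95% level, per the widely used formula of Weatherhead et al. (1998):
        n* = [(3.3 * sigma_N / |omega|) * sqrt((1 + phi) / (1 - phi))]^(2/3)
    where sigma_N is the noise standard deviation of the series,
    omega the trend per year, and phi the lag-1 autocorrelation.
    """
    factor = 3.3 * noise_sd / abs(trend_per_year)
    factor *= math.sqrt((1.0 + lag1_autocorr) / (1.0 - lag1_autocorr))
    return factor ** (2.0 / 3.0)
```

Because averaging more frequent measurements shrinks the effective sigma_N while instrument random error enters the same term only weakly once it is small relative to natural variability, this form is consistent with the abstract's conclusion that measurement frequency matters more than per-measurement random error.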
NASA Astrophysics Data System (ADS)
Bartsotas, Nikolaos S.; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Kallos, George
2015-04-01
Mountainous regions account for a significant part of the Earth's surface. Such areas are persistently affected by heavy precipitation episodes, which induce flash floods and landslides. The limitation of inadequate in-situ observations has put remote sensing rainfall estimates on a pedestal concerning the analyses of these events, as in many mountainous regions worldwide they serve as the only available data source. However, well-known issues of remote sensing techniques over mountainous areas, such as the strong underestimation of precipitation associated with low-level orographic enhancement, limit the way these estimates can accommodate operational needs. Even locations that fall within the range of weather radars suffer from strong biases in precipitation estimates due to terrain blockage and vertical rainfall profile issues. A novel approach towards the reduction of error in quantitative precipitation estimates lies upon the utilization of high-resolution numerical simulations in order to derive error correction functions for corresponding satellite precipitation data. The correction functions examined consist of 1) mean field bias adjustment and 2) pdf matching, two procedures that are simple and have been widely used in gauge-based adjustment techniques. For the needs of this study, more than 15 selected storms over the mountainous Upper Adige region of Northern Italy were simulated at 1-km resolution from a state-of-the-art atmospheric model (RAMS/ICLAMS), benefiting from the explicit cloud microphysical scheme, prognostic treatment of natural pollutants such as dust and sea-salt and the detailed SRTM90 topography that are implemented in the model. The proposed error correction approach is applied on three quasi-global and widely used satellite precipitation datasets (CMORPH, TRMM 3B42 V7 and PERSIANN) and the evaluation of the error model is based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge / 70 km2
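The two correction functions named above are standard and simple enough to sketch; here the model simulation plays the role of reference, and the implementations are illustrative rather than the study's code:

```python
import numpy as np

def mean_field_bias_adjust(sat, ref):
    """Mean field bias adjustment: rescale all satellite estimates by one
    factor so that their total matches the reference total."""
    return sat * (np.sum(ref) / np.sum(sat))

def pdf_match(sat, ref):
    """pdf (quantile) matching: map each satellite value to the reference
    value with the same empirical non-exceedance probability."""
    sat_sorted = np.sort(sat)
    ref_sorted = np.sort(ref)
    p = np.linspace(0.0, 1.0, len(sat_sorted))   # plotting positions
    p_sat = np.interp(sat, sat_sorted, p)        # probability of each sat value
    return np.interp(p_sat, p, ref_sorted)       # reference value at that probability
```

Mean field bias removes only a single multiplicative offset, while pdf matching also reshapes the distribution, e.g. correcting the underestimation of heavy orographic rainfall in the upper tail.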
de Waal, Eric; Mak, Winifred; Calhoun, Sondra; Stein, Paula; Ord, Teri; Krapp, Christopher; Coutifaris, Christos; Schultz, Richard M.; Bartolomei, Marisa S.
2014-01-01
Assisted reproductive technologies (ART) have enabled millions of couples with compromised fertility to conceive children. Nevertheless, there is a growing concern regarding the safety of these procedures due to an increased incidence of imprinting disorders, premature birth, and low birth weight in ART-conceived offspring. An integral aspect of ART is the oxygen concentration used during in vitro development of mammalian embryos, which is typically either atmospheric (∼20%) or reduced (5%). Both oxygen tension levels have been widely used, but 5% oxygen improves preimplantation development in several mammalian species, including that of humans. To determine whether a high oxygen tension increases the frequency of epigenetic abnormalities in mouse embryos subjected to ART, we measured DNA methylation and expression of several imprinted genes in both embryonic and placental tissues from concepti generated by in vitro fertilization (IVF) and exposed to 5% or 20% oxygen during culture. We found that placentae from IVF embryos exhibit an increased frequency of abnormal methylation and expression profiles of several imprinted genes, compared to embryonic tissues. Moreover, IVF-derived placentae exhibit a variety of epigenetic profiles at the assayed imprinted genes, suggesting that these epigenetic defects arise by a stochastic process. Although culturing embryos in both of the oxygen concentrations resulted in a significant increase of epigenetic defects in placental tissues compared to naturally conceived controls, we did not detect significant differences between embryos cultured in 5% and those cultured in 20% oxygen. Thus, further optimization of ART should be considered to minimize the occurrence of epigenetic errors in the placenta. PMID:24337315
Low frequency vibrational modes of oxygenated myoglobin, hemoglobins, and modified derivatives.
Jeyarajah, S; Proniewicz, L M; Bronder, H; Kincaid, J R
1994-12-01
The low frequency resonance Raman spectra of the dioxygen adducts of myoglobin, hemoglobin, its isolated subunits, mesoheme-substituted hemoglobin, and several deuteriated heme derivatives are reported. The observed oxygen isotopic shifts are used to assign the iron-oxygen stretching (approximately 570 cm-1) and the heretofore unobserved delta (Fe-O-O) bending (approximately 420 cm-1) modes. Although the delta (Fe-O-O) is not enhanced in the case of oxymyoglobin, it is observed for all the hemoglobin derivatives, its exact frequency being relatively invariable among the derivatives. The lack of sensitivity to H2O/D2O buffer exchange is consistent with our previous interpretation of H2O/D2O-induced shifts of v(O-O) in the resonance Raman spectra of dioxygen adducts of cobalt-substituted heme proteins; namely, that those shifts are associated with alterations in vibrational coupling of v(O-O) with internal modes of proximal histidyl imidazole rather than to steric or electronic effects of H/D exchange at the active site. No evidence is obtained for enhancement of the v(Fe-N) stretching frequency of the linkage between the heme iron and the imidazole group of the proximal histidine. PMID:7983043
Ren, Yong; Li, Shengrong; Luo, Jianming; He, Zhonghu; Du, Xiaoying; Zhou, Qiang; He, Yuanjiang; Wei, Yuming; Zheng, Youliang
2014-02-01
The development and utilization of outstanding germplasm in breeding programs can expedite the breeding process. The high-yielding variety Mianmai 37, grown widely in southwestern China, has been used extensively in breeding programs. Comparisons between Mianmai 37 and its derivatives for yield and yield components were conducted. Simple sequence repeat (SSR) markers were used to test the frequency of specific alleles transferred from Mianmai 37 to its derivative cultivar Mianmai 367. The results indicated that the yield of the derivative cultivars was significantly higher than that of Mianmai 37, due to an increased grain number per spike. Favorable traits from Mianmai 37, such as resistance to stripe rust, were transferred to its derivatives. At the molecular level, 78.9% of loci in Mianmai 367 were derived from Mianmai 37, with 75.0, 83.6 and 74.2% from the A, B and D genomes, respectively. Mianmai 367 shared common loci with its parent Mianmai 37, such as regions Xgwm374-Xbarc167-Xbarc128-Xgwm129-Xgwm388-Xbarc101 on chromosome 2B and Xwmc446-Xwmc366-Xwmc533-Xbarc164-Xwmc418 on chromosome 3B; these regions were associated with grain number, 1000-kernel weight and resistance. The preferential transmission of alleles from Mianmai 37 to its derivatives can probably be explained by strong selection pressure for its favorable agronomic traits and disease resistance. PMID:24846943
Soil Moisture derivation from the multi-frequency sensor AMSR-2
NASA Astrophysics Data System (ADS)
Parinussa, Robert; de Nijs, Anne; de Jeu, Richard; Holmes, Thomas; Dorigo, Wouter; Wanders, Niko; Schellekens, Jaap
2015-04-01
We present a method to derive soil moisture from the multi-frequency sensor Advanced Microwave Scanning Radiometer 2 (AMSR-2). Its predecessor, the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E), has already provided Earth scientists with a consistent and continuous global soil moisture dataset. However, the AMSR-2 sensor has one major advantage over the AMSR-E sensor: it has an additional channel in the C-band frequency range (7.3 GHz). This channel creates the opportunity for better screening of Radio Frequency Interference (RFI) and could eventually lead to improved soil moisture retrievals. The soil moisture retrievals from AMSR-2 presented here use the Land Parameter Retrieval Model (LPRM) in combination with a new radio frequency interference masking method. We used observations of the multi-frequency microwave radiometer onboard the Tropical Rainfall Measuring Mission (TRMM) satellite to intercalibrate the brightness temperatures in order to improve consistency between AMSR-E and AMSR-2. Several scenarios to accomplish synergy between the AMSR-E and AMSR-2 soil moisture products were evaluated. A global comparison of soil moisture retrievals against ERA-Interim reanalysis soil moisture demonstrates the need for an intercalibration procedure. Several filtering scenarios were tested, and their impact on the soil moisture retrievals was evaluated against two independent reference soil moisture datasets (reanalysis and in situ soil moisture) that cover the observation periods of the AMSR-E and AMSR-2 sensors. Results show a high degree of consistency between the two satellite soil moisture products and the two independent reference products. In addition, the added value of an additional frequency for RFI detection is demonstrated within this study, with a reduction of the total contaminated pixels in the 6.9 GHz channel of 66% for horizontal observations and even 85% for vertical observations when 7.3 and 10
System for adjusting frequency of electrical output pulses derived from an oscillator
Bartholomew, David B.
2006-11-14
A system for setting and adjusting a frequency of electrical output pulses derived from an oscillator in a network is disclosed. The system comprises an accumulator module configured to receive pulses from an oscillator and to output an accumulated value. An adjustor module is configured to store an adjustor value used to correct local oscillator drift. A digital adder adds values from the accumulator module to values stored in the adjustor module and outputs their sums to the accumulator module, where they are stored. The digital adder also outputs an electrical pulse to a logic module. The logic module is in electrical communication with the adjustor module and the network. The logic module may change the value stored in the adjustor module to compensate for local oscillator drift or change the frequency of output pulses. The logic module may also keep time and calculate drift.
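The accumulator/adder/adjustor interplay described in this patent abstract can be sketched as a software model of a phase accumulator; the register width and the numeric values below are illustrative assumptions, not taken from the patent:

```python
class PulseDivider:
    """Minimal sketch of the accumulator/adjustor scheme: every oscillator
    pulse adds (step + adjust) to an accumulator modulo 2**width; each
    rollover emits one output pulse, so the output frequency is
        f_out = f_osc * (step + adjust) / 2**width.
    """
    def __init__(self, step, adjust=0, width=16):
        self.step = step
        self.adjust = adjust          # trimmed by the logic module to cancel drift
        self.modulus = 1 << width
        self.acc = 0                  # the accumulator module's stored value

    def clock(self, n_pulses):
        """Feed n oscillator pulses; return the number of output pulses emitted."""
        out = 0
        for _ in range(n_pulses):
            self.acc += self.step + self.adjust
            if self.acc >= self.modulus:   # rollover -> one output pulse
                self.acc -= self.modulus
                out += 1
        return out
```

Incrementing the adjustor value raises the rollover rate in fine steps, which is how the logic module can cancel a slow local-oscillator drift, or retune the output frequency, without touching the oscillator itself.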
NASA Astrophysics Data System (ADS)
Thyer, Mark; Li, Jing; Lambert, Martin; Kuczera, George; Metcalfe, Andrew
2015-04-01
Flood extremes are driven by highly variable and complex climatic and hydrological processes. Derived flood frequency methods are often used to predict the flood frequency distribution (FFD) because they can provide predictions in ungauged catchments and evaluate the impact of land-use or climate change. This study presents recent work on the development of a new derived flood frequency method called the hybrid causative events (HCE) approach. The advantage of the HCE approach is that it combines the accuracy of the continuous simulation approach with the computational efficiency of event-based approaches. Derived flood frequency methods can be divided into two classes. Event-based approaches provide fast estimation but can also lead to prediction bias, due to the limitations of the assumptions required to obtain input information (rainfall and catchment wetness) for events that cause large floods. Continuous simulation produces more accurate predictions, but at the cost of massive computational time. The HCE method uses a short continuous simulation to provide inputs for a rainfall-runoff model running in an event-based fashion. A proof-of-concept pilot study showed that the HCE produces estimates of the flood frequency distribution with similar accuracy to continuous simulation, but with dramatically reduced computation time. Recent work incorporated seasonality into the HCE approach and evaluated it with a more realistic set of eight sites from a wide range of climate zones, typical of Australia, using a virtual catchment approach. The seasonal hybrid-CE provided accurate predictions of the FFD for all sites. Comparison with the existing non-seasonal hybrid-CE showed that for some sites the non-seasonal hybrid-CE significantly over-predicted the FFD. Analysis of whether a site had a high, low or no need for seasonality found that the outcome depended on a combination of factors that were difficult to predict a priori. Hence it is recommended
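The event-based side of a derived flood frequency method can be caricatured in a few lines: simulate many years of events with sampled rainfall and antecedent wetness, convert each event to a peak flow, and read the FFD off the annual maxima. Everything here (the exponential rainfall, the runoff-coefficient proxy) is a toy assumption for illustration, not the HCE model:

```python
import numpy as np

def derived_ffd(n_years, events_per_year, rng, return_periods=(2, 10, 50)):
    """Toy derived flood frequency distribution: for each simulated year,
    sample event rainfall and antecedent wetness, form a crude peak-flow
    proxy, take the annual maximum, then read flood quantiles from the
    empirical distribution of annual maxima.
    """
    annual_max = np.empty(n_years)
    for y in range(n_years):
        depth = rng.exponential(20.0, events_per_year)    # event rainfall, mm
        wetness = rng.uniform(0.2, 0.9, events_per_year)  # antecedent wetness
        annual_max[y] = (wetness * depth).max()           # crude peak-flow proxy
    # Return period T corresponds to non-exceedance probability 1 - 1/T
    aep = 1.0 - 1.0 / np.asarray(return_periods, dtype=float)
    return np.quantile(annual_max, aep)
```

The HCE idea is precisely that the wetness inputs above come from a short continuous simulation rather than an assumed distribution, removing the main source of event-based bias while keeping this cheap sampling structure.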
Frequency and origins of hemoglobin S mutation in African-derived Brazilian populations.
De Mello Auricchio, Maria Teresa Balester; Vicente, João Pedro; Meyer, Diogo; Mingroni-Netto, Regina Célia
2007-12-01
Africans arrived in Brazil as slaves in great numbers, mainly after 1550. Before the abolition of slavery in Brazil in 1888, many communities, called quilombos, were formed by runaway or abandoned African slaves. These communities are presently referred to as remnants of quilombos, and many are still partially genetically isolated. These remnants can be regarded as relicts of the original African genetic contribution to the Brazilian population. In this study we assessed frequencies and probable geographic origins of hemoglobin S (HBB*S) mutations in remnants of quilombo populations in the Ribeira River valley, São Paulo, Brazil, to reconstruct the history of African-derived populations in the region. We screened for HBB*S mutations in 11 quilombo populations (1,058 samples) and found HBB*S carrier frequencies that ranged from 0% to 14%. We analyzed beta-globin gene cluster haplotypes linked to the HBB*S mutation in 86 chromosomes and found the four known African haplotypes: 70 (81.4%) Bantu (Central Africa Republic), 7 (8.1%) Benin, 7 (8.1%) Senegal, and 2 (2.3%) Cameroon haplotypes. One sickle cell homozygote was Bantu/Bantu and two homozygotes had Bantu/Benin combinations. The high frequency of the sickle cell trait and the diversity of HBB*S linked haplotypes indicate that Brazilian remnants of quilombos are interesting repositories of genetic diversity present in the ancestral African populations. PMID:18494376
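The relationship between the reported carrier frequencies and the underlying HBB*S allele frequency follows from simple gene counting; a brief sketch (the function name and the Hardy-Weinberg expectation are our illustrative assumptions, not the study's analysis):

```python
def hbbs_allele_stats(n_samples, n_carriers, n_homozygotes=0):
    """Allele frequency q of HBB*S from genotype counts (each carrier
    contributes one mutant allele, each homozygote two, out of 2N alleles),
    plus the Hardy-Weinberg expected sickle-cell homozygote frequency q**2."""
    q = (n_carriers + 2 * n_homozygotes) / (2.0 * n_samples)
    return q, q * q
```

At the highest observed carrier frequency (14%), this gives q ≈ 0.07 and an expected ~0.5% of births homozygous for HBB*S under random mating within the community.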
NASA Astrophysics Data System (ADS)
Kuhn, Michael; Hirt, Christian
2016-05-01
In gravity forward modelling, the concept of Rock-Equivalent Topography (RET) is often used to simplify the computation of gravity implied by rock, water, ice and other topographic masses. In the RET concept, topographic masses are compressed (approximated) into equivalent rock, allowing the use of a single constant mass-density value. Many studies acknowledge the approximate character of the RET, but few have yet attempted to quantify and analyse the approximation errors in detail for various gravity field functionals and heights of computation points. Here, we provide an in-depth examination of the approximation errors associated with the RET compression for the topographic gravitational potential and its first- and second-order derivatives. Using the Earth2014 layered topography suite, we apply Newtonian integration in the spatial domain in two variants: (a) rigorous forward modelling of all mass bodies, and (b) approximative modelling using the RET. The differences between the two variants, which reflect the RET approximation error, are formed and studied for an ensemble of 10 different gravity field functionals at three levels of altitude (on and 3 km above the Earth's surface, and at 250 km satellite height). The approximation errors are found to be largest at the Earth's surface over RET compression areas (oceans, ice shields) and to increase for the first- and second-order derivatives. Relative errors, computed here as the ratio of the range of differences between the two variants to the range of the signal, are at the level of 0.06-0.08% for the potential, ~3-7% for the first-order derivatives at the Earth's surface (~0.1% at satellite altitude). For the second-order derivatives, relative errors are below 1% at satellite altitude, at the 10-20% level at 3 km, and reach maximum values as large as ~20 to 110% near the surface. As such, the RET approximation errors may be acceptable for functionals computed far away from the Earth's surface or studies focussing on
Large-scale derived flood frequency analysis based on continuous simulation
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the inherent spatial heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observational data at 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
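The final step of such a continuous-simulation chain, deriving flood quantiles from long synthetic discharge series, can be sketched as follows; the lognormal daily-flow model and Weibull plotting positions are illustrative assumptions, not the SWIM-based setup of the study:

```python
import random

random.seed(42)

def simulate_daily_flow(n_years, days_per_year=365):
    """Illustrative lognormal daily discharge series (m^3/s), year by year."""
    return [[random.lognormvariate(3.0, 0.8) for _ in range(days_per_year)]
            for _ in range(n_years)]

def flood_frequency(yearly_series):
    """Empirical flood frequency curve from annual maxima.

    Returns (return period in years, discharge) pairs using the Weibull
    plotting position T = (n + 1) / rank, largest flood first."""
    maxima = sorted((max(year) for year in yearly_series), reverse=True)
    n = len(maxima)
    return [((n + 1) / rank, q) for rank, q in enumerate(maxima, start=1)]

# With 10,000 simulated years, quantiles up to high return periods can be
# read directly from the empirical curve; 1,000 years here for brevity.
curve = flood_frequency(simulate_daily_flow(1000))
q100 = min(curve, key=lambda p: abs(p[0] - 100.0))[1]  # ~100-yr flood estimate
```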
Mardani, Mohammad; Roshankhah, Shiva; Hashemibeni, Batool; Salahshoor, Mohammadreza; Naghsh, Erfan; Esfandiari, Ebrahim
2016-01-01
Background: Since damaged cartilage (e.g., in osteoarthritis) cannot be repaired by the body, its reconstruction requires cell therapy. For this purpose, adipose-derived stem cells (ADSCs) are one of the best cell sources, because they can be differentiated into chondrocytes by tissue engineering techniques. Chemical and physical inducers are required to differentiate stem cells into chondrocytes. We decided to define the role of the electric field (EF) in inducing the chondrogenesis process. Materials and Methods: A low-frequency EF was applied to ADSCs as a physical inducer for chondrogenesis in a 3D micromass culture system; the ADSCs were extracted from subcutaneous abdominal adipose tissue. Enzyme-linked immunosorbent assay, methyl thiazolyl tetrazolium, real-time polymerase chain reaction and flow cytometry techniques were also used in this study. Results: We found that a 20-minute application of a 1 kHz, 20 mV/cm EF leads to chondrogenesis in ADSCs. Our results also suggest that simultaneous application of the physical (EF) and chemical (transforming growth factor-β3) inducers gives the best results in the expression of collagen type II and SOX9 genes. The EF also significantly decreased the expression of collagen type I and type X genes. Conclusion: A low-frequency EF can be a good motivator to promote chondrogenic differentiation of human ADSCs. PMID:27308269
Multi-frequency acoustic derivation of particle size using 'off-the-shelf' ADCPs.
NASA Astrophysics Data System (ADS)
Haught, D. R.; Wright, S. A.; Venditti, J. G.; Church, M. A.
2015-12-01
Suspended sediment particle size in rivers is of great interest due to its influence on riverine and coastal morphology, socio-economic viability, and ecological health and restoration. Prediction of suspended sediment transport from hydraulics remains a stubbornly difficult problem, particularly for the washload component, which is controlled by sediment supply from the drainage basin. This has led to a number of methods for continuously monitoring suspended sediment concentration and mean particle size, the most popular currently being hydroacoustic methods. Here, we explore the possibility of using theoretical inversion of the sonar equation to derive an estimate of mean particle size and standard deviation of the grain size distribution (GSD) using three 'off-the-shelf' acoustic Doppler current profilers (ADCPs) with frequencies of 300, 600 and 1200 kHz. The instruments were deployed in the sand-bedded reach of the Fraser River, British Columbia. We use bottle samples collected in the acoustic beams to test acoustic signal inversion methods. Concentrations range from 15-300 mg/L and the suspended load at the site is ~25% sand and ~75% silt/clay. Measured mean particle radius from samples ranged from 10-40 microns, with relative standard deviations ranging from 0.75 to 2.5. Initial results indicate the acoustically derived mean particle radius compares well with the measured particle radius, using a theoretical inversion method adapted to the Fraser River sediment.
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error values between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over the 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing fluid overload status in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
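The two agreement metrics named in this abstract, Bland-Altman limits of agreement and the Mean Absolute Percentage Error, can be sketched as follows; the TBW values are made-up illustrative numbers, not study data:

```python
import statistics

def bland_altman_limits(reference, predicted):
    """Return (lower limit, bias, upper limit) of agreement:
    mean difference +/- 1.96 standard deviations of the differences."""
    diffs = [p - r for r, p in zip(reference, predicted)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias, bias + 1.96 * sd

def mape(reference, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * statistics.mean(
        abs(p - r) / r for r, p in zip(reference, predicted))

tbw_ref = [35.2, 41.0, 38.5, 44.3, 30.8]  # litres, hypothetical reference TBW
tbw_bis = [34.1, 42.2, 38.0, 45.1, 31.5]  # litres, hypothetical BIS prediction
lo, bias, hi = bland_altman_limits(tbw_ref, tbw_bis)
err = mape(tbw_ref, tbw_bis)
```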
NASA Astrophysics Data System (ADS)
Nakamura, Satoshi; Goto, Hayato; Kujiraoka, Mamiko; Ichimura, Kouichi; Quantum Computer Team
Rare-earth-ion-doped crystals, such as Pr3+:Y2SiO5, are promising materials for scalable quantum computers, because the crystals contain a large number of ions with long coherence times. Frequency-domain quantum computation (FDQC) enables us to employ individual ions coupled to a common cavity mode as qubits by identifying them by their transition frequencies. In the FDQC, detuned operation lights interact with transitions that they are not intended to operate on, because the ions are irradiated regardless of their positions. This crosstalk causes serious errors in the quantum gates of the FDQC. When ``resonance conditions'' between the eigenenergies of the whole system and the transition-frequency differences among the ions are satisfied, the gate errors increase. Ions used as qubits must have transitions that avoid these conditions for high-fidelity gates. However, when a large number of ions are employed as qubits, it is difficult to avoid the conditions because of the many combinations of eigenenergies and transitions. We propose a new implementation using extra ions to control the resonance conditions, and show the effect of the extra ions by numerical simulation. Our implementation is useful for realizing a scalable quantum computer using rare-earth-ion-doped crystals based on the FDQC.
NASA Astrophysics Data System (ADS)
Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.
2011-03-01
Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on spatial frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by prolonged dwell times at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses, either in terms of accuracy in indicating abnormality position or in the precision of visually sampling the medical images. Methods: Seven radiologists participated in eye-tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most-dwelled-upon locations were identified and subjected to spatial frequency (SF) analysis. The image-based features of selected ROIs were extracted with an undecimated wavelet packet transform. An analysis of variance was run to select SF features, and an SVM scheme was implemented to classify false negatives and false positives from all ROIs. Results: A relatively high overall accuracy was obtained for each individually developed wavelet-SVM algorithm, with an average correct-classification rate of over 90% for error recognition across all prolonged dwell locations. Conclusion: These preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological errors with an SVM. The work is still in progress and not all analytical procedures have been completed, which might have an effect on the specificity of the algorithm.
Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique
2016-01-01
Broadband electromagnetic frequency- or time-domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay rock) and the water content on the frequency- or time-domain sensor response must be characterized. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency-domain finite element field calculations to model the one-port broadband frequency- or time-domain transfer function for a three-rod sensor embedded in the clay rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-clay-rock coupling. PMID:27096865
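The two permittivity-to-water-content conversions named above can be sketched as follows. The Topp coefficients are the widely published 1980 values; the Lichtenecker-Rother exponent (alpha = 0.5, CRIM-like) and the phase permittivities are illustrative assumptions, not the paper's calibration:

```python
def topp(eps_apparent):
    """Volumetric water content from apparent permittivity
    (Topp et al. third-order polynomial)."""
    k = eps_apparent
    return -5.3e-2 + 2.92e-2 * k - 5.5e-4 * k ** 2 + 4.3e-6 * k ** 3

def lrm_water_content(eps_eff, porosity, eps_solid=5.0, eps_water=80.0,
                      eps_air=1.0, alpha=0.5):
    """Invert the Lichtenecker-Rother mixing rule
    eps_eff^a = (1-n) eps_s^a + (n-theta) eps_air^a + theta eps_w^a
    for the volumetric water content theta (n = porosity)."""
    num = (eps_eff ** alpha - (1.0 - porosity) * eps_solid ** alpha
           - porosity * eps_air ** alpha)
    return num / (eps_water ** alpha - eps_air ** alpha)

theta_topp = topp(20.0)                    # ~0.35 for apparent permittivity 20
theta_lrm = lrm_water_content(20.0, 0.4)   # comparable estimate via mixing rule
```

Note how the LRM route, unlike the purely empirical Topp fit, takes the material's porosity explicitly into account, which is why the abstract favours it for the clay rock.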
Implications of the shape of design hyetograph in the derived flood frequency distribution
NASA Astrophysics Data System (ADS)
Sordo-Ward, A.; Bianucci, P.; Garrote, L.
2012-04-01
Hydrometeorological methods for rainfall-runoff transformation are frequently used in the hydrological design of hydraulic infrastructures. These methods require determining the design storm, which is usually characterised by the return period of its total depth of precipitation. On the other hand, the shape of the hyetograph, i.e. the temporal pattern of the storm, has relevant implications for the resulting hydrograph. In this work we analysed the influence that the within-storm rainfall intensity distribution has on the derived flood frequency (DFF) law. This was addressed by comparing the DFFs obtained from two different ensembles of hyetographs with the same total depth frequency distribution and constant total duration. One ensemble of hyetographs (BA) was determined using the alternating blocks method, which is usually assumed to provide the more adverse hydrological load. The second ensemble (MC) was obtained using a stochastic storm generator developed in a Monte Carlo framework. The ratios between corresponding maximum flows were calculated for selected return periods (RP) as a measure of the difference between both DFFs. The variation of this quotient was analysed with regard to the return period and basin configuration. We considered three different discretization scales for the 1241-km² Manzanares River basin with outlet near Rivas-Vaciamadrid, in the Region of Madrid (Spain). The three levels correspond to high resolution (HR, basin divided into 62 sub-catchments), medium resolution (MR, 33 sub-catchments), and low resolution (LR, 14 sub-catchments). For the case studied, and for the three configurations considered, the DFF obtained from the alternating blocks hyetograph was not as adverse as expected. The flow peak ratio kept practically constant across the RP range. While the BA quantiles for each subbasin's DFF were 10% to 40% higher than the MC quantiles, the peak flow ratios at the catchment outlet took values close to one
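The alternating blocks construction of the BA hyetographs can be sketched as follows; the power-law depth-duration relation is an illustrative assumption, not the design curve used for the Manzanares basin:

```python
def alternating_blocks(total_duration_h, dt_h, idf_depth):
    """Arrange incremental depths with the largest block in the centre,
    alternating the remaining blocks right and left of it.

    idf_depth(d) must return the cumulative design depth for duration d.
    """
    n = int(total_duration_h / dt_h)
    cumulative = [idf_depth((i + 1) * dt_h) for i in range(n)]
    increments = [cumulative[0]] + [cumulative[i] - cumulative[i - 1]
                                    for i in range(1, n)]
    increments.sort(reverse=True)          # largest increment first
    blocks = [0.0] * n
    centre = (n - 1) // 2
    for i, depth in enumerate(increments):
        offset = (i + 1) // 2              # 0, 1, 1, 2, 2, 3, ...
        j = centre + offset if i % 2 else centre - offset
        blocks[j] = depth
    return blocks

# Hypothetical power-law depth-duration relation (depths in mm, d in hours):
hyeto = alternating_blocks(6, 1, lambda d: 25.0 * d ** 0.3)
```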
Ambridge, Ben; Pine, Julian M; Rowland, Caroline F; Young, Chris R
2008-01-01
Participants (aged 5-6 yrs, 9-10 yrs and adults) rated (using a five-point scale) grammatical (intransitive) and overgeneralized (transitive causative) uses of a high frequency, low frequency and novel intransitive verb from each of three semantic classes [Pinker, S. (1989a). Learnability and cognition: The acquisition of argument structure. Cambridge, MA: MIT Press]: "directed motion" (fall, tumble), "going out of existence" (disappear, vanish) and "semivoluntary expression of emotion" (laugh, giggle). In support of Pinker's semantic verb class hypothesis, participants' preference for grammatical over overgeneralized uses of novel (and English) verbs increased between 5-6 yrs and 9-10 yrs, and was greatest for the latter class, which is associated with the lowest degree of direct external causation (the prototypical meaning of the transitive causative construction). In support of Braine and Brooks's [Braine, M.D.S., & Brooks, P.J. (1995). Verb argument structure and the problem of avoiding an overgeneral grammar. In M. Tomasello & W. E. Merriman (Eds.), Beyond names for things: Young children's acquisition of verbs (pp. 352-376). Hillsdale, NJ: Erlbaum] entrenchment hypothesis, all participants showed the greatest preference for grammatical over ungrammatical uses of high frequency verbs, with this preference smaller for low frequency verbs, and smaller again for novel verbs. We conclude that both the formation of semantic verb classes and entrenchment play a role in children's retreat from argument-structure overgeneralization errors. PMID:17316595
Bell, T.L.; Abdullah, A.; Martin, R.L.; North, G.R.
1990-02-28
Estimates of monthly average rainfall based on satellite observations from a low Earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The authors estimate the size of this error for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). They first examine in detail the statistical description of rainfall on scales from 1 to 10³ km, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10% of the mean for rainfall averaged over a 500 × 500 km² area.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
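The sampling-error mechanism described above, snapshots at orbital revisit intervals versus the true monthly mean, can be sketched with a toy Monte Carlo; the AR(1) rain proxy below is an illustrative assumption, not the GATE-tuned stochastic model of the study:

```python
import random
import statistics

random.seed(7)

def month_of_rain(n_hours=720, phi=0.95):
    """AR(1) proxy for an area-averaged rain-rate series, clipped at zero."""
    x, series = 0.0, []
    for _ in range(n_hours):
        x = phi * x + random.gauss(0.0, 1.0)
        series.append(max(0.0, x))
    return series

def sampling_error(series, revisit_hours=12):
    """Monthly mean from intermittent snapshots minus the true monthly mean;
    this error remains even if each snapshot is a perfect measurement."""
    truth = statistics.mean(series)
    seen = statistics.mean(series[::revisit_hours])
    return seen - truth

errors = [sampling_error(month_of_rain()) for _ in range(200)]
spread = statistics.stdev(errors)  # characteristic size of the sampling error
```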
NASA Astrophysics Data System (ADS)
Fusi, F.; Congedo, P. M.
2016-03-01
In this work, a strategy is developed to deal with the error affecting the objective functions in uncertainty-based optimization. We refer to the problems where the objective functions are the statistics of a quantity of interest computed by an uncertainty quantification technique that propagates some uncertainties of the input variables through the system under consideration. In real problems, the statistics are computed by a numerical method and therefore they are affected by a certain level of error, depending on the chosen accuracy. The errors on the objective function can be interpreted with the abstraction of a bounding box around the nominal estimation in the objective functions space. In addition, in some cases the uncertainty quantification methods providing the objective functions also supply the possibility of adaptive refinement to reduce the error bounding box. The novel method relies on the exchange of information between the outer loop based on the optimization algorithm and the inner uncertainty quantification loop. In particular, in the inner uncertainty quantification loop, a control is performed to decide whether a refinement of the bounding box for the current design is appropriate or not. In single-objective problems, the current bounding box is compared to the current optimal design. In multi-objective problems, the decision is based on the comparison of the error bounding box of the current design and the current Pareto front. With this strategy, fewer computations are made for clearly dominated solutions and an accurate estimate of the objective function is provided for the interesting, non-dominated solutions. The results presented in this work prove that the proposed method improves the efficiency of the global loop, while preserving the accuracy of the final Pareto front.
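The bounding-box dominance test at the heart of the refinement decision can be sketched as follows (two objectives, both minimized); the box format and example numbers are assumptions for illustration:

```python
def clearly_dominated(box, front):
    """box = ((f1_lo, f1_hi), (f2_lo, f2_hi)); front = [(f1, f2), ...].

    With both objectives minimized, the whole error box is dominated if
    some Pareto point beats even the box's best (lower-left) corner."""
    (f1_lo, _), (f2_lo, _) = box
    return any(p1 <= f1_lo and p2 <= f2_lo for p1, p2 in front)

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]  # current Pareto front

# Box far behind the front: skip costly refinement of its error bounds.
skip = clearly_dominated(((3.0, 3.5), (3.0, 3.6)), front)        # True
# Box overlapping the non-dominated region: worth shrinking by refinement.
refine = not clearly_dominated(((0.5, 1.5), (3.0, 4.5)), front)  # True
```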
NASA Astrophysics Data System (ADS)
Ou, Z. W.; Tong, H.; Kou, F. F.; Ding, G. Q.
2016-04-01
Eight pulsars have low braking indices, which challenge the magnetic dipole braking of pulsars. 222 pulsars and 15 magnetars have an anomalous distribution of frequency second derivatives, which also contradicts the classical understanding. How neutron star magnetospheric activities affect these two phenomena is investigated by using the wind braking model of pulsars. It is based on the observational evidence that pulsar timing is correlated with emission and that both aspects reflect the magnetospheric activities. Fluctuations are unavoidable for a physical neutron star magnetosphere. Young pulsars have meaningful braking indices, while for old pulsars and magnetars the fluctuation term dominates their frequency second derivatives. This can explain both the braking index and the frequency second derivative of pulsars uniformly. The braking indices of the eight pulsars are the combined effect of magnetic dipole radiation and a particle wind. During the lifetime of a pulsar, its braking index will evolve from three to one. Pulsars with low braking indices may put a strong constraint on the particle acceleration process in the neutron star magnetosphere. The effect of pulsar death should be considered during the long-term rotational evolution of pulsars. An equation like the Langevin equation for Brownian motion was derived for pulsar spin-down. The fluctuation in the neutron star magnetosphere can be either periodic or random, and both result in anomalous frequency second derivatives with similar outcomes. The magnetospheric activities of magnetars are always stronger than those of normal pulsars.
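The braking index discussed above is defined from the spin frequency and its derivatives as n = ν ν̈ / ν̇². A minimal sketch with hypothetical (not catalogued) spin parameters:

```python
def braking_index(nu, nu_dot, nu_ddot):
    """n = nu * nu_ddot / nu_dot**2; pure magnetic dipole braking gives
    n = 3, while a dominant particle wind drives n toward 1."""
    return nu * nu_ddot / nu_dot ** 2

# Hypothetical young-pulsar-like spin parameters (Hz, Hz/s, Hz/s^2):
n = braking_index(30.0, -4.0e-10, 1.2e-20)  # 2.25: between wind and dipole
```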
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Cobb, K.; Mann, M. E.; Rutherford, S. D.; Wittenberg, A. T.
2009-12-01
Since surface conditions over the tropical Pacific can organize climate variability at near-global scales, and since there is wide disagreement over their projected course under greenhouse forcing, it is of considerable interest to reconstruct their low-frequency evolution over the past millennium. To this end, we make use of the hybrid RegEM climate reconstruction technique (Mann et al. 2008; Schneider 2001), which aims to reconstruct decadal and longer-scale variations of sea-surface temperature (SST) from an array of climate proxies. We first assemble a database of published and new, high-resolution proxy data from ENSO-sensitive regions, screened for significant correlation with a common ENSO metric (NINO3 index). Proxy observations come primarily from coral, speleothem, marine and lake sediment, and ice core sources, as well as long tree-ring chronologies. The hybrid RegEM methodology is then validated within a pseudoproxy context using two coupled general circulation model simulations of the past millennium’s climate; one using the NCAR CSM1.4, the other the GFDL CM2.1, models (Ammann et al. 2007; Wittenberg 2009). Validation results are found to be sensitive to the ratio of interannual to lower-frequency variability, with poor reconstruction skill for CM2.1 but good skill for CSM1.4. The latter features prominent changes in NINO3 at decadal-to-centennial timescales, which the network and method detect relatively easily. In contrast, the unforced CM2.1 NINO3 is dominated by interannual variations, and its long-term oscillations are more difficult to reconstruct. These two limit cases bracket the observed NINO3 behavior over the historical period. We then apply the method to the proxy observations and extend the decadal-scale history of tropical Pacific SSTs over the past millennium, analyzing the sensitivity of such reconstruction to the inclusion of various key proxy timeseries and details of the statistical analysis, emphasizing metrics of uncertainty
Banerjee, Biswanath; Walsh, Timothy F.; Aquino, Wilkins; Bonnet, Marc
2012-01-01
This paper presents the formulation and implementation of an Error in Constitutive Equations (ECE) method suitable for large-scale inverse identification of linear elastic material properties in the context of steady-state elastodynamics. In ECE-based methods, the inverse problem is postulated as an optimization problem in which the cost functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses. Furthermore, in a more recent modality of this methodology introduced by Feissel and Allix (2007), referred to as the Modified ECE (MECE), the measured data is incorporated into the formulation as a quadratic penalty term. We show that a simple and efficient continuation scheme for the penalty term, suggested by the theory of quadratic penalty methods, can significantly accelerate the convergence of the MECE algorithm. Furthermore, a (block) successive over-relaxation (SOR) technique is introduced, enabling the use of existing parallel finite element codes with minimal modification to solve the coupled system of equations that arises from the optimality conditions in MECE methods. Our numerical results demonstrate that the proposed methodology can successfully reconstruct the spatial distribution of elastic material parameters from partial and noisy measurements in as few as ten iterations in a 2D example and fifty in a 3D example. We show (through numerical experiments) that the proposed continuation scheme can improve the rate of convergence of MECE methods by at least an order of magnitude versus the alternative of using a fixed penalty parameter. Furthermore, the proposed block SOR strategy coupled with existing parallel solvers produces a computationally efficient MECE method that can be used for large scale materials identification problems, as demonstrated on a 3D example involving about 400,000 unknown moduli. Finally, our numerical results suggest that the proposed MECE
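The penalty-continuation idea, re-solving with an increasing penalty weight while warm-starting from the previous solution, can be sketched on a toy 1-D problem; the gradient-descent inner solver and all numbers are illustrative assumptions, not the MECE finite element setting:

```python
def solve_with_continuation(f_grad, g, g_grad, x0, kappas,
                            steps=2000, lr=1e-3):
    """Minimize f(x) + kappa * g(x)**2 over an increasing penalty sequence,
    warm-starting each gradient-descent solve from the previous solution."""
    x = x0
    for kappa in kappas:                 # continuation over the penalty
        for _ in range(steps):           # warm-started inner solve
            x -= lr * (f_grad(x) + 2.0 * kappa * g(x) * g_grad(x))
    return x

# Toy problem: f(x) = (x - 2)^2 with "measurement" residual g(x) = x - 1.
x_star = solve_with_continuation(
    f_grad=lambda x: 2.0 * (x - 2.0),
    g=lambda x: x - 1.0,
    g_grad=lambda x: 1.0,
    x0=0.0,
    kappas=[1.0, 10.0, 100.0],
)
# As kappa grows, x_star approaches the data-consistent solution x = 1
# (closed form for the last stage: (2 + kappa) / (1 + kappa)).
```

Starting each solve from the previous optimum is what makes the gradually stiffening penalty cheap, mirroring the acceleration the paper reports for its continuation scheme.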
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi; Ghorbannia, Morteza
2014-07-01
The spatial truncation error (STE) is a significant systematic error in the integral inversion of satellite gradiometric and orbital data to gravity anomalies at sea level. In order to reduce the effect of STE, a larger area than the desired one is considered in the inversion process, but the anomalies located in its central part are selected as the final results. The STE influences the variance of the results as well because the residual vector, which is contaminated with STE, is used for its estimation. The situation is even more complicated in variance component estimation because of its iterative nature. In this paper, we present a strategy to reduce the effect of STE on the a posteriori variance factor and the variance components for inversion of satellite orbital and gradiometric data to gravity anomalies at sea level. The idea is to define two windowing matrices for reducing this error from the estimated residuals and anomalies. Our simulation studies over Fennoscandia show that the differences between the 0.5°×0.5° gravity anomalies obtained from orbital data and an existing gravity model have standard deviation (STD) and root mean squared error (RMSE) of 10.9 and 12.1 mGal, respectively, and those obtained from gradiometric data have 7.9 and 10.1 in the same units. In the case that they are combined using windowed variance components the STD and RMSE become 6.1 and 8.4 mGal. Also, the mean value of the estimated RMSE after using the windowed variances is in agreement with the RMSE of the differences between the estimated anomalies and those obtained from the gravity model.
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top-layer soil moisture content and observed backscatter coefficients, which has mainly been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the most commonly used SAR configurations. Key aspects influencing the error on the roughness parameterization, and consequently on soil moisture retrieval, are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements, and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with a C-band configuration is generally less sensitive to inaccuracies in roughness parameterization than retrieval with an L-band configuration. PMID:22399956
ERIC Educational Resources Information Center
Kearsley, Greg P.
This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
The phase of a frequency standard that uses periodic interrogation and control of a local oscillator (LO) is degraded by a long-term random-walk component induced by downconversion of LO noise into the loop passband. The Dick formula for the noise level of this degradation is derived from an explicit solution of an LO control-loop model.
NASA Technical Reports Server (NTRS)
Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.
2006-01-01
The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.
SAR image quality effects of damped phase and amplitude errors
NASA Astrophysics Data System (ADS)
Zelenka, Jerry S.; Falk, Thomas
The effects of damped multiplicative amplitude or phase errors on the image quality of synthetic-aperture radar systems are considered. These types of errors can result from aircraft maneuvers or the mechanical steering of an antenna. The proper treatment of damped multiplicative errors can lead to related design specifications and possibly an enhanced collection capability. Only small, high-frequency errors are considered. Expressions for the average intensity and energy associated with a damped multiplicative error are presented and used to derive graphical results. A typical example is used to show how to apply the results of this effort.
NASA Astrophysics Data System (ADS)
Krueger, Tobias; Inman, Alex; Paling, Nick
2014-05-01
Catchment management, as driven by legislation such as the EU WFD or grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally by auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new approach to managing water resources in a more decentralised and collaborative way. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and Faecal Coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012 an over-prediction of the annual average P concentration by the model was found at
NASA Technical Reports Server (NTRS)
Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.
2007-01-01
Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by the electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same quantity: debris size. Short-arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. Nevertheless, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry), there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross
Standard Errors for Matrix Correlations.
ERIC Educational Resources Information Center
Ogasawara, Haruhiko
1999-01-01
Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)
An analysis of the effects of secondary reflections on dual-frequency reflectometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.; Cockrell, C. R.; Harrah, S. D.
1990-01-01
The error-producing mechanism involving secondary reflections in a dual-frequency, distance measuring reflectometer is examined analytically. Equations defining the phase, and hence distance, error are derived. The error-reducing potential of frequency-sweeping is demonstrated. It is shown that a single spurious return can be completely nullified by optimizing the sweep width.
TOA/FOA geolocation error analysis.
Mason, John Jeffrey
2008-08-01
This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
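The "mathematically optimal" combination of multiple position fixes mentioned above is not spelled out in the abstract; a standard combiner of this kind is minimum-variance (inverse-covariance) weighting of independent fixes. The sketch below, using invented example fixes and covariances, illustrates that principle rather than the paper's exact algorithm.

```python
import numpy as np

def fuse_fixes(fixes):
    """Minimum-variance fusion of independent position fixes.
    fixes: list of (position, covariance) pairs.  Each fix is weighted
    by its inverse covariance (its information matrix); the fused
    covariance is the inverse of the summed information."""
    dim = len(fixes[0][0])
    info = np.zeros((dim, dim))      # accumulated information matrix
    weighted = np.zeros(dim)         # information-weighted positions
    for x, P in fixes:
        W = np.linalg.inv(P)
        info += W
        weighted += W @ np.asarray(x, dtype=float)
    P_fused = np.linalg.inv(info)
    return P_fused @ weighted, P_fused

# Two horizontal fixes of the same emitter with different error ellipses
x1, P1 = [10.0, 0.0], np.diag([4.0, 1.0])
x2, P2 = [12.0, 1.0], np.diag([1.0, 4.0])
x_hat, P_hat = fuse_fixes([(x1, P1), (x2, P2)])
```

The fused estimate leans toward whichever fix is more accurate in each direction, and its error ellipse is smaller than either input's.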
Design and analysis of vector color error diffusion halftoning systems.
Damera-Venkata, N; Evans, B L
2001-01-01
Traditional error diffusion halftoning is a high-quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high-frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we propose three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, especially image sharpening and noise shaping. The proposed model includes the linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and diffuse quantization error across color channels in an opponent color representation. Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations which we solve by using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank. PMID:18255498
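The matrix-valued vector error filters analyzed above are not reproduced here, but the underlying error-diffusion mechanism can be illustrated with the classic scalar Floyd-Steinberg filter, which the vector color formulation generalizes by replacing scalar weights with matrix coefficients acting across color planes. A minimal sketch:

```python
import numpy as np

def floyd_steinberg(gray):
    """Scalar Floyd-Steinberg error diffusion: quantize each pixel to
    0/1 in raster order and push the quantization error onto the
    unprocessed neighbours, shaping the noise into high spatial
    frequencies where the eye is least sensitive."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch halftones to a roughly half-on binary pattern
halftone = floyd_steinberg(np.full((16, 16), 0.5))
```

In the vector color case, each of the four weights would become a matrix multiplying the error vector of all color planes at once.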
Kanuri, Manorama; Minko, Irina G; Nechev, Lubomir V; Harris, Thomas M; Harris, Constance M; Lloyd, R Stephen
2002-05-24
8-Hydroxy-5,6,7,8-tetrahydropyrimido[1,2-a]purin-10(3H)-one,3-(2'-deoxyriboside) (1,N(2)-gamma-hydroxypropano deoxyguanosine, gamma-HOPdG) is a major DNA adduct that forms as a result of exposure to acrolein, an environmental pollutant and a product of endogenous lipid peroxidation. gamma-HOPdG has been shown previously not to be a miscoding lesion when replicated in Escherichia coli. In contrast to those prokaryotic studies, in vivo replication and mutagenesis assays in COS-7 cells using single-stranded DNA containing a specific gamma-HOPdG adduct revealed that the gamma-HOPdG adduct was significantly mutagenic. Analyses revealed both transversion and transition types of mutations at an overall mutagenic frequency of 7.4 x 10(-2)/translesion synthesis. In vitro, gamma-HOPdG strongly blocks DNA synthesis by two major polymerases, pol delta and pol epsilon. Replicative blockage of pol delta by gamma-HOPdG could be diminished by the addition of proliferating cell nuclear antigen, leading to highly mutagenic translesion bypass across this adduct. The differential functioning and processing capacities of the mammalian polymerases may be responsible for the higher mutation frequencies observed in this study when compared with the accurate and efficient nonmutagenic bypass observed in the bacterial system. PMID:11889127
Kumar Sahu, Rabindra; Panda, Sidhartha; Biswal, Ashutosh; Chandra Sekhar, G T
2016-03-01
In this paper, a novel Tilt Integral Derivative controller with Filter (TIDF) is proposed for Load Frequency Control (LFC) of multi-area power systems. Initially, a two-area power system is considered and the parameters of the TIDF controller are optimized using Differential Evolution (DE) algorithm employing an Integral of Time multiplied Absolute Error (ITAE) criterion. The superiority of the proposed approach is demonstrated by comparing the results with some recently published heuristic approaches such as Firefly Algorithm (FA), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) optimized PID controllers for the same interconnected power system. Investigations reveal that proposed TIDF controllers provide better dynamic response compared to PID controller in terms of minimum undershoots and settling times of frequency as well as tie-line power deviations following a disturbance. The proposed approach is also extended to two widely used three area test systems considering nonlinearities such as Generation Rate Constraint (GRC) and Governor Dead Band (GDB). To improve the performance of the system, a Thyristor Controlled Series Compensator (TCSC) is also considered and the performance of TIDF controller in presence of TCSC is investigated. It is observed that system performance improves with the inclusion of TCSC. Finally, sensitivity analysis is carried out to test the robustness of the proposed controller by varying the system parameters, operating condition and load pattern. It is observed that the proposed controllers are robust and perform satisfactorily with variations in operating condition, system parameters and load pattern. PMID:26712682
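The ITAE criterion used above as the DE fitness function has a simple numerical form; the sketch below computes it for two hypothetical frequency-deviation responses after a load disturbance (the dynamics are invented for illustration, not taken from the paper's test systems).

```python
import numpy as np

def itae(t, e):
    """Integral of Time multiplied Absolute Error,
    ITAE = integral of t * |e(t)| dt,
    evaluated by trapezoidal integration on sampled signals."""
    f = t * np.abs(e)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# Two hypothetical frequency-deviation responses to a step load change:
# the better-damped response settles faster and scores a lower ITAE,
# which is what the optimizer rewards when tuning controller gains.
t = np.linspace(0.0, 20.0, 2001)
e_slow = 0.02 * np.exp(-0.2 * t) * np.cos(1.5 * t)
e_fast = 0.02 * np.exp(-0.8 * t) * np.cos(1.5 * t)
```

The time weighting penalizes errors that persist late in the response, which is why ITAE tuning tends to produce small settling times rather than just small peak errors.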
Holderna-Natkaniec, K.; Natkaniec, I.; Khavryutchenko, V. D.
1999-06-15
The observed and calculated INS vibrational densities of states for globular molecules of norbornane, norborneol and borneol are compared in the frequency range up to 600 cm-1. Inelastic incoherent neutron scattering (IINS) spectra were measured at ca. 20 K on the high-resolution NERA spectrometer at the IBR-2 pulsed reactor. The IINS intensities were calculated by a semi-empirical quantum chemistry method and the assignments of the low-frequency internal modes were proposed.
NASA Astrophysics Data System (ADS)
Glazunov, V. P.; Berdyshev, D. V.
2014-09-01
Absorption bands in the carbonyl range 1750-1500 cm-1 of the IR spectrum of 2,3-dihydroxy-1,4-naphthoquinone and some of its derivatives were assigned based on calculations of normal mode frequencies using the B3LYP/cc-pVTZ method for isolated molecules and the polarized continuum model taking into account the influence of weakly and moderately polar solvents (CCl4, CDCl3, and CH2Cl2). It was shown that the frequency of the quinone C(2)=C(3) stretching vibration for 2,3-OH- and 2,5,8-OH-1,4-naphthoquinones (2-OH-naphthazarins) was 50-60 cm-1 higher than that of the carbonyl stretching vibration. The frequency difference reached 100 cm-1 for 2,3,5,8-OH-1,4-naphthoquinones (2,3-OH-naphthazarins).
NASA Astrophysics Data System (ADS)
Torregrosa, A.; Flint, L. E.; Flint, A. L.; Peters, J.; Combs, C.
2014-12-01
Coastal fog modifies the hydrodynamic and thermodynamic properties of California watersheds with the greatest impact to ecosystem functioning during arid summer months. Lowered maximum temperatures resulting from inland penetration of marine fog are probably adequate to capture fog effects on thermal land surface characteristics; however, the hydrologic impact from lowered rates of evapotranspiration due to shade, fog drip, increased relative humidity, and other factors associated with fog events is more difficult to gauge. Fog products, such as those derived from National Weather Service Geostationary Operational Environmental Satellite (GOES) imagery, provide high-frequency (up to 15 min) views of fog and low cloud cover and can potentially improve water balance models. Even slight improvements in water balance calculations can benefit urban water managers and agricultural irrigation. The high frequency of GOES output provides the opportunity to explore options for integrating fog frequency data into water balance models. This pilot project compares GOES-derived fog frequency intervals (6, 12 and 24 hour) to explore which is most useful for water balance models and to develop model-relevant relationships between climatic and water balance variables. Seasonal diurnal thermal differences, plant ecophysiological processes, and phenology suggest that a day/night differentiation on a monthly basis may be adequate. To explore this hypothesis, we examined discharge data from stream gages and outputs from the USGS Basin Characterization Model for runoff, recharge, potential evapotranspiration, and actual evapotranspiration for the Russian River Watershed under low, medium, and high fog event conditions derived from hourly GOES imagery (1999-2009). We also differentiated fog events into daytime and nighttime versus a 24-hour compilation on a daily, monthly, and seasonal basis. Our data suggest that a daily time-step is required to adequately incorporate the hydrologic effect of
NASA Astrophysics Data System (ADS)
Govaerts, Y. M.; Lattanzio, A.
2007-03-01
The extraction of critical geophysical variables from multidecade archived satellite observations, such as those acquired by the European Meteosat First Generation satellite series, for the generation of climate data records is recognized as a pressing challenge by international environmental organizations. This paper presents a statistical method for the estimation of the surface albedo retrieval error that explicitly accounts for the measurement uncertainties and differences in the Meteosat radiometer characteristics. The benefit of this approach is illustrated with a simple case study consisting of a meaningful comparison of surface albedo derived from observations acquired at a 20 year interval by sensors with different radiometric performances. In particular, it is shown how it is possible to assess the magnitude of minimum detectable significant surface albedo change.
Xu, Hua; Stetson, Peter D; Friedman, Carol
2012-01-01
Abbreviations are widely used in clinical notes and are often ambiguous. Word sense disambiguation (WSD) for clinical abbreviations is therefore a critical task for many clinical natural language processing (NLP) systems. Supervised machine-learning-based WSD methods are known for their high performance. However, it is time-consuming and costly to construct annotated samples for supervised WSD approaches, and sense frequency information is often ignored by these methods. In this study, we proposed a profile-based method that used dictated discharge summaries as an external source to automatically build sense profiles and applied them to disambiguate abbreviations in hospital admission notes via the vector space model. Our evaluation using a test set containing 2,386 annotated instances from 13 ambiguous abbreviations in admission notes showed that the profile-based method performed better than two baseline methods and achieved a best average precision of 0.792. Furthermore, we developed a strategy to combine sense frequency information estimated from a clustering analysis with the profile-based method. Our results showed that the combined approach largely improved the performance and achieved a highest precision of 0.875 on the same test set, indicating that integrating sense frequency information with local context is effective for clinical abbreviation disambiguation. PMID:23304376
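The vector-space disambiguation step can be sketched as follows: each sense profile is a term-frequency vector, and the sense whose profile is most cosine-similar to the context of the ambiguous abbreviation wins. The abbreviation, profiles, and context below are invented toy data, and the construction of profiles from dictated discharge summaries is omitted.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def disambiguate(context_words, sense_profiles):
    """Pick the sense whose profile vector is closest to the context."""
    ctx = Counter(context_words)
    return max(sense_profiles, key=lambda s: cosine(ctx, sense_profiles[s]))

# Hypothetical sense profiles for the ambiguous abbreviation "PE"
profiles = {
    "pulmonary embolism": Counter({"chest": 3, "dyspnea": 2, "ct": 2}),
    "physical exam":      Counter({"exam": 4, "vitals": 2, "normal": 2}),
}
sense = disambiguate(["acute", "chest", "pain", "dyspnea"], profiles)
```

A frequency-aware variant, as in the combined approach, would weight each sense's score by its estimated prior probability before taking the maximum.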
Protein interaction hotspot identification using sequence-based frequency-derived features.
Nguyen, Quang-Thang; Fablet, Ronan; Pastor, Dominique
2013-11-01
Finding good descriptors, capable of discriminating hotspot residues from others, is still a challenge in many attempts to understand protein interaction. In this paper, descriptors issued from the analysis of amino acid sequences using digital signal processing (DSP) techniques are shown to be as good as those derived from protein tertiary structure and/or information on the complex. The simulation results show that our descriptors can be used separately to predict hotspots, via a random forest classifier, with an accuracy of 79% and a precision of 75%. They can also be used jointly with features derived from tertiary structures to boost the performance up to an accuracy of 82% and a precision of 80%. PMID:21742567
CORRELATED ERRORS IN EARTH POINTING MISSIONS
NASA Technical Reports Server (NTRS)
Bilanow, Steve; Patt, Frederick S.
2005-01-01
Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor
NASA Astrophysics Data System (ADS)
Perrusson, G.; Lambert, M.; Lesselier, D.; Charalambopoulos, A.; Dassios, G.
2000-03-01
The field resulting from the illumination by a localized time-harmonic low-frequency source (typically a magnetic dipole) of a voluminous lossy dielectric body placed in a lossy dielectric embedding is determined within the framework of the localized nonlinear approximation by means of a low-frequency Rayleigh analysis. It is sketched (1) how one derives a low-frequency series expansion in positive integral powers of (jk), where k is the embedding complex wavenumber, of the depolarization dyad that relates the background electric field to the total electric field inside the body; (2) how this expansion is used to determine the magnetic field resulting outside the body and how the corresponding series expansion of this field, up to the power 5 in (jk), follows once the series expansion of the incident electric field in the body volume is known up to the same power; and (3) how the needed nonzero coefficients of the depolarization dyad (up to the power 3 in (jk)) are obtained, for a general triaxial ellipsoid and after careful reduction for the geometrically degenerate geometries, with the help of the elliptical harmonic theory. Numerical results obtained by this hybrid low-frequency approach illustrate its capability to provide accurate magnetic fields at low computational cost, in particular, in comparison with a general purpose method-of-moments code.
A genome signature derived from the interplay of word frequencies and symbol correlations
NASA Astrophysics Data System (ADS)
Möller, Simon; Hameister, Heike; Hütt, Marc-Thorsten
2014-11-01
Genome signatures are statistical properties of DNA sequences that provide information on the underlying species. It is not understood, how such species-discriminating statistical properties arise from processes of genome evolution and from functional properties of the DNA. Investigating the interplay of different genome signatures can contribute to this understanding. Here we analyze the statistical dependences of two such genome signatures: word frequencies and symbol correlations at short and intermediate distances. We formulate a statistical model of word frequencies in DNA sequences based on the observed symbol correlations and show that deviations of word counts from this correlation-based null model serve as a new genome signature. This signature (i) performs better in sorting DNA sequence segments according to their species origin and (ii) reveals unexpected species differences in the composition of microsatellites, an important class of repetitive DNA. While the first observation is a typical task in metagenomics projects and therefore an important benchmark for a genome signature, the latter suggests strong species differences in the biological mechanisms of genome evolution. On a more general level, our results highlight that the choice of null model (here: word abundances computed via symbol correlations rather than shorter word counts) substantially affects the interpretation of such statistical signals.
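A correlation-based null model of word frequencies can be sketched with a first-order Markov chain estimated from the sequence itself: the expected frequency of a word is the first-symbol probability times the product of observed transition probabilities, and deviations of observed word frequencies from this expectation form the signature. This is a minimal illustration; the paper's construction may use longer-range correlations than a single-step chain.

```python
from collections import Counter

def markov_expected(seq, word):
    """Expected frequency of `word` under a first-order Markov null
    model estimated from `seq`: P(w1) * prod_i P(w_{i+1} | w_i)."""
    n = len(seq)
    single = Counter(seq)                              # symbol counts
    pairs = Counter(seq[i:i + 2] for i in range(n - 1))  # adjacent pairs
    p = single[word[0]] / n
    for a, b in zip(word, word[1:]):
        p *= pairs[a + b] / single[a]   # empirical transition probability
    return p

def observed(seq, word):
    """Observed frequency of `word` among all windows of its length."""
    k = len(word)
    hits = sum(seq[i:i + k] == word for i in range(len(seq) - k + 1))
    return hits / (len(seq) - k + 1)

seq = "ACGTACGTACGTACGT"
# Deviation of the observed trinucleotide count from the null model:
dev = observed(seq, "ACG") - markov_expected(seq, "ACG")
```

Collecting such deviations over all words of a given length yields a vector that can be compared between sequence segments, in the spirit of the signature described above.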
NASA Technical Reports Server (NTRS)
Donegan, James J.; Robinson, Samuel W., Jr.; Gates, Ordway B., Jr.
1955-01-01
A method is presented for determining the lateral-stability derivatives, transfer-function coefficients, and the modes for lateral motion from frequency-response data for a rigid aircraft. The method is based on the application of the vector technique to the equations of lateral motion, so that the three equations of lateral motion can be separated into six equations. The method of least squares is then applied to the data for each of these equations to yield the coefficients of the equations of lateral motion from which the lateral-stability derivatives and lateral transfer-function coefficients are computed. Two numerical examples are given to demonstrate the use of the method.
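The least-squares step can be sketched as follows: with frequency-response data H(jω) and a model polynomial in (jω), stacking real and imaginary parts gives an overdetermined real system whose solution yields the equation-of-motion coefficients. The generic second-order model and coefficient values below are illustrative, not the paper's separated lateral-motion equations.

```python
import numpy as np

# Hypothetical "measured" frequency response of a second-order system
# H(jw) = a2*(jw)^2 + a1*(jw) + a0; recover (a2, a1, a0) by least squares.
true = np.array([2.0, 0.5, 1.5])          # a2, a1, a0 (illustrative)
w = np.linspace(0.1, 5.0, 50)             # evaluation frequencies
jw = 1j * w
H = true[0] * jw**2 + true[1] * jw + true[2]

# Design matrix in the model terms, then stack real and imaginary
# parts so the unknown coefficients stay real-valued.
A = np.column_stack([jw**2, jw, np.ones_like(jw)])
A_ri = np.vstack([A.real, A.imag])
H_ri = np.concatenate([H.real, H.imag])
coeffs, *_ = np.linalg.lstsq(A_ri, H_ri, rcond=None)
```

With noisy measured data the same fit returns the coefficients that minimize the summed squared residual over all test frequencies, which is the role least squares plays in the method above.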
NASA Astrophysics Data System (ADS)
Igoshev, Andrei; Verbunt, Frank; Cator, Eric
2016-06-01
We use a Bayesian approach to derive the distance probability distribution for one object from its parallax with measurement uncertainty for two spatial distribution priors, a homogeneous spherical distribution and a galactocentric distribution - applicable for radio pulsars - observed from Earth. We investigate the dependence on measurement uncertainty, and show that a parallax measurement can underestimate or overestimate the actual distance, depending on the spatial distribution prior. We derive the probability distributions for distance and luminosity combined - and for each separately when a flux with measurement error for the object is also available - and demonstrate the necessity of and dependence on the luminosity function prior. We apply this to estimate the distance and the radio and gamma-ray luminosities of PSR J0218+4232. The use of realistic priors improves the quality of the estimates for distance and luminosity compared to those based on measurement only. Use of the wrong prior, for example a homogeneous spatial distribution without upper bound, may lead to very incorrect results.
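The core of such a Bayesian estimate can be sketched numerically: the posterior over distance is the Gaussian parallax likelihood times a spatial prior, here the homogeneous spherical case p(r) ∝ r² truncated at an assumed maximum distance (all numbers illustrative). The example shows the prior pulling the posterior mode above the naive 1/parallax estimate, i.e. how a parallax measurement can underestimate the actual distance under this prior.

```python
import numpy as np

def distance_posterior(parallax, sigma, r_max=10.0, n=20000):
    """Posterior over distance r for a Gaussian likelihood in
    parallax (= 1/r) and a homogeneous spherical prior p(r) ~ r^2,
    truncated at r_max (illustrative units, e.g. arcsec and kpc)."""
    r = np.linspace(1e-3, r_max, n)
    log_like = -0.5 * ((parallax - 1.0 / r) / sigma) ** 2
    post = np.exp(log_like) * r**2          # likelihood * prior
    post /= post.sum() * (r[1] - r[0])      # normalize on the grid
    return r, post

# A 20% parallax measurement: the naive estimate 1/parallax is 2.0,
# but the volumetric prior shifts the posterior mode to larger r.
r, post = distance_posterior(parallax=0.5, sigma=0.1)
r_mode = r[np.argmax(post)]
```

Swapping in a galactocentric prior, or multiplying by a flux likelihood and a luminosity-function prior, extends this grid computation to the joint distance-luminosity case discussed above.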
NASA Astrophysics Data System (ADS)
El-Dardiry, Hisham Abd El-Kareem
Radar-based Quantitative Precipitation Estimates (QPE) are among the NEXRAD products available at high temporal and spatial resolution compared with gauges. Radar-based QPEs have been widely used in many hydrological and meteorological applications; however, few studies have focused on using radar QPE products to derive Precipitation Frequency Estimates (PFE). Accurate and regionally specific information on PFE is critically needed for various water resources engineering planning and design purposes. This study focused first on examining the data quality of two main radar products, the near real-time Stage IV QPE product and the post real-time RFC/MPE product. Assessment of the Stage IV product showed some alarming data artifacts that contaminate the identification of rainfall maxima. Based on the inter-comparison analysis of the two products, Stage IV and RFC/MPE, the latter was selected for the frequency analysis carried out throughout the study. The precipitation frequency analysis approach used in this study is based on fitting a Generalized Extreme Value (GEV) distribution as a statistical model for extreme rainfall data based on Annual Maximum Series (AMS) extracted from 11 years (2002-2012) over a domain covering Louisiana. The parameters of the GEV model are estimated using the method of linear moments (L-moments). Two different approaches are suggested for estimating the precipitation frequencies: a Pixel-Based approach, in which PFEs are estimated at each individual pixel, and a Region-Based approach, in which a synthetic sample is generated at each pixel by using observations from surrounding pixels. The region-based technique outperforms the pixel-based estimation when compared with results obtained by NOAA Atlas 14; however, the availability of only a short record of observations and the underestimation of radar QPE for some extremes cause a considerable reduction in precipitation frequencies in pixel-based and region
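Fitting a GEV to an annual maximum series by the method of L-moments, as described above, can be sketched with sample probability-weighted moments and Hosking's standard approximation for the GEV parameters. The annual-maximum series below is synthetic (Gumbel-distributed, so the fitted shape should come out near zero), not radar-derived.

```python
import math
import numpy as np

def sample_lmoments(x):
    """First two sample L-moments (l1, l2) and L-skewness t3, from
    probability-weighted moments b0, b1, b2 of the sorted sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(i * x) / (n * (n - 1))
    b2 = np.sum(i * (i - 1) * x) / (n * (n - 1) * (n - 2))
    l1, l2 = b0, 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

def gev_from_lmoments(l1, l2, t3):
    """GEV shape k, scale alpha, location xi via Hosking's
    L-moment approximation for the shape parameter."""
    c = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
    k = 7.8590 * c + 2.9554 * c * c             # shape
    g = math.gamma(1.0 + k)
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * g)  # scale
    xi = l1 - alpha * (1.0 - g) / k             # location
    return k, alpha, xi

# Synthetic AMS: Gumbel with location 10 and scale 5 (a GEV with k = 0)
ams = 10.0 + 5.0 * np.random.default_rng(0).gumbel(size=2000)
k, alpha, xi = gev_from_lmoments(*sample_lmoments(ams))
```

The region-based variant described above would pool the AMS values of surrounding pixels into the sample before computing the L-moments, which stabilizes the estimates when the record is short.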
NASA Technical Reports Server (NTRS)
Fisher, Lewis R
1958-01-01
Three wing models were oscillated in yaw about their vertical axes to determine the effects of systematic variations of frequency and amplitude of oscillation on the in-phase and out-of-phase combination lateral stability derivatives resulting from this motion. The tests were made at low speeds for a 60 degree delta wing, a 45 degree swept wing, and an unswept wing; the swept and unswept wings had aspect ratios of 4. The results indicate that large changes in the magnitude of the stability derivatives due to the variation of frequency occur at high angles of attack, particularly for the delta wing. The greatest variations of the derivatives with frequency take place for the lowest frequencies of oscillation; at the higher frequencies, the effects of frequency are smaller and the derivatives become more linear with angle of attack. Effects of amplitude of oscillation on the stability derivatives for delta wings were evident for certain high angles of attack and for the lowest frequencies of oscillation. As the frequency became high, the amplitude effects tended to disappear.
Keightley, Peter D; Campos, José L; Booker, Tom R; Charlesworth, Brian
2016-06-01
Many approaches for inferring adaptive molecular evolution analyze the unfolded site frequency spectrum (SFS), a vector of counts of sites with different numbers of copies of derived alleles in a sample of alleles from a population. Accurate inference of the high-copy-number elements of the SFS is difficult, however, because of misassignment of alleles as derived vs. ancestral. This is a known problem with parsimony using outgroup species. Here we show that the problem is particularly serious if there is variation in the substitution rate among sites brought about by variation in selective constraint levels. We present a new method for inferring the SFS using one or two outgroups that attempts to overcome the problem of misassignment. We show that two outgroups are required for accurate estimation of the SFS if there is substantial variation in selective constraints, which is expected to be the case for nonsynonymous sites in protein-coding genes. We apply the method to estimate unfolded SFSs for synonymous and nonsynonymous sites in a population of Drosophila melanogaster from phase 2 of the Drosophila Population Genomics Project. We use the unfolded spectra to estimate the frequency and strength of advantageous and deleterious mutations and estimate that ∼50% of amino acid substitutions are positively selected but that <0.5% of new amino acid mutations are beneficial, with a scaled selection strength of Nes ≈ 12. PMID:27098912
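The unfolded SFS itself is simple to construct once alleles are polarized; the sketch below assumes naive one-outgroup parsimony (the outgroup base is taken as ancestral), which is exactly where the misassignment problem described above arises. Data and function names are illustrative:

```python
def unfolded_sfs(sample_alleles, outgroup_alleles):
    """Unfolded site frequency spectrum by naive one-outgroup polarization.
    sample_alleles: list of per-site strings, one character per sampled chromosome.
    Returns counts of sites carrying 1..n-1 copies of the derived allele.
    (Misassignment occurs whenever the outgroup lineage itself substituted.)"""
    n = len(sample_alleles[0])
    sfs = [0] * (n - 1)
    for site, anc in zip(sample_alleles, outgroup_alleles):
        derived = sum(1 for base in site if base != anc)
        if 0 < derived < n:                      # skip monomorphic/fixed sites
            sfs[derived - 1] += 1
    return sfs

# 4 sampled chromosomes; the outgroup supplies a putative ancestral base per site
sites = ["AAAT", "AGGG", "CCCC", "TTAA"]
outgroup = "AACT"
spectrum = unfolded_sfs(sites, outgroup)
```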
Kitano, Shigehisa; Postow, Michael A.; Ziegler, Carly G.K.; Kuk, Deborah; Panageas, Katherine S.; Cortez, Czrina; Rasalan, Teresa; Adamow, Mathew; Yuan, Jianda; Wong, Philip; Altan-Bonnet, Gregoire; Wolchok, Jedd D.; Lesokhin, Alexander M.
2014-01-01
Evaluation of myeloid-derived suppressor cells (MDSC), a cell type implicated in T-cell suppression, may inform immune status. However, a uniform methodology is necessary for prospective testing as a biomarker. We report the use of a computational algorithm-driven analysis of whole blood and cryopreserved samples for monocytic MDSC (m-MDSC) quantity that removes variables related to blood processing and user definitions. Applying these methods to samples from melanoma patients identifies differing frequency distribution of m-MDSC relative to that in healthy donors (HD). Patients with a pre-treatment m-MDSC frequency outside a preliminary definition of HD range (<14.9%) were significantly more likely to achieve prolonged overall survival following treatment with ipilimumab, an antibody that promotes T-cell activation and proliferation. m-MDSC frequencies inversely correlated with peripheral CD8+ T-cell expansion following ipilimumab. Algorithm-driven analysis may enable not only development of a novel pre-treatment biomarker for ipilimumab therapy, but also prospective validation of peripheral blood m-MDSC as a biomarker in multiple disease settings. PMID:24844912
NASA Technical Reports Server (NTRS)
Palumbo, Dan
2008-01-01
The lifetimes of coherent structures are derived from data correlated over a 3-sensor array sampling streamwise sidewall pressure at high Reynolds number (>10^8). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors' cross spectra and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency-averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2015-12-01
Intensity-Duration-Frequency (IDF) curves are widely used in flood risk management because they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. Weather radars provide distributed rainfall estimates with high spatial and temporal resolutions and overcome the scarce representativeness of point-based rainfall for regions characterized by large gradients in rainfall climatology. This work explores the use of radar quantitative precipitation estimation (QPE) for the identification of IDF curves over a region with steep climatic transitions (Israel) using a unique radar data record (23 yr) and combined physical and empirical adjustment of the radar data. IDF relationships were derived by fitting a generalized extreme value distribution to the annual maximum series for durations of 20 min, 1 h and 4 h. Arid, semi-arid and Mediterranean climates were explored using 14 study cases. IDF curves derived from the study rain gauges were compared to those derived from radar and from nearby rain gauges characterized by similar climatology, taking into account the uncertainty linked with the fitting technique. Radar annual maxima and IDF curves were generally overestimated but in 70% of the cases (60% for a 100 yr return period), they lay within the rain gauge IDF confidence intervals. Overestimation tended to increase with return period, and this effect was enhanced in arid climates. This was mainly associated with radar estimation uncertainty, even if other effects, such as rain gauge temporal resolution, cannot be neglected. Climatological classification remained meaningful for the analysis of rainfall extremes and radar was able to discern climatology from rainfall frequency analysis.
Carotid ultrasound segmentation using radio-frequency derived phase information and gabor filters.
Azzopardi, Carl; Camilleri, Kenneth P; Hicks, Yulia A
2015-01-01
Ultrasound image segmentation is a field which has garnered much interest over the years. This is partially due to the complexity of the problem, arising from the lack of contrast between different tissue types which is quite typical of ultrasound images. Recently, segmentation techniques which treat RF signal data have also become popular, particularly with the increasing availability of such data from open-architecture machines. It is believed that RF data provides a rich source of information whose integrity remains intact, as opposed to the loss which occurs through the signal processing chain leading to Brightness Mode Images. Furthermore, phase information contained within RF data has not been studied in much detail, as the nature of the information here appears to be mostly random. In this work however, we show that phase information derived from RF data does elicit structure, characterized by texture patterns. Texture segmentation of this data permits the extraction of rough, but well localized, carotid boundaries. We provide some initial quantitative results, which report the performance of the proposed technique. PMID:26737742
... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...
Surface Roughness of the Moon Derived from Multi-frequency Radar Data
NASA Astrophysics Data System (ADS)
Fa, W.
2011-12-01
Surface roughness of the Moon provides important information concerning both significant questions about lunar surface processes and engineering constraints for human outposts and rover trafficability. Impact-related phenomena change the morphology and roughness of the lunar surface, and therefore surface roughness provides clues to the formation and modification mechanisms of impact craters. Since the Apollo era, lunar surface roughness has been studied using different approaches, such as direct estimation from lunar surface digital topographic relief, and indirect analysis of Earth-based radar echo strengths. Submillimeter-scale roughness at Apollo landing sites has been studied by computer stereophotogrammetry analysis of Apollo Lunar Surface Closeup Camera (ALSCC) pictures, whereas roughness at meter to kilometer scale has been studied using laser altimeter data from recent missions. Though these studies have shown that lunar surface roughness is scale dependent and can be described by fractal statistics, roughness at centimeter scale has not yet been studied. In this study, lunar surface roughness at centimeter scale is investigated using Earth-based 70 cm Arecibo radar data and miniature synthetic aperture radar (Mini-SAR) data at S- and X-band (wavelengths 12.6 cm and 4.12 cm). Both observations and theoretical modeling show that radar echo strengths are mostly dominated by scattering from the surface and shallow buried rocks. Given the different penetration depths of radar waves at these frequencies (< 30 m for 70 cm wavelength, < 3 m at S-band, and < 1 m at X-band), radar echo strengths at S- and X-band will yield surface roughness directly, whereas radar echo at 70 cm will give an upper limit of lunar surface roughness. The integral equation method is used to model radar scattering from the rough lunar surface, in which the dielectric constant of the regolith and the surface roughness are the two dominant factors. The complex dielectric constant of regolith is first estimated
Schurr, T G; Ballinger, S W; Gan, Y Y; Hodge, J A; Merriwether, D A; Lawrence, D N; Knowler, W C; Weiss, K M; Wallace, D C
1990-01-01
The mitochondrial DNA (mtDNA) sequence variation of the South American Ticuna, the Central American Maya, and the North American Pima was analyzed by restriction-endonuclease digestion and oligonucleotide hybridization. The analysis revealed that Amerindian populations have high frequencies of mtDNAs containing the rare Asian RFLP HincII morph 6, a rare HaeIII site gain, and a unique AluI site gain. In addition, the Asian-specific deletion between the cytochrome c oxidase subunit II (COII) and tRNA(Lys) genes was also prevalent in both the Pima and the Maya. These data suggest that Amerindian mtDNAs derived from at least four primary maternal lineages, that new tribal-specific variants accumulated as these mtDNAs became distributed throughout the Americas, and that some genetic variation may have been lost when the progenitors of the Ticuna separated from the North and Central American populations. PMID:1968708
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2009-02-20
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
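As an illustration of the kind of formula such a derivation yields (not the report's own result), the leading-order error introduced by a centered moving-average filter of width h on a smooth signal is -(h^2/24)·f''(t); a numerical check:

```python
import math

def moving_average(f, t, h, m=2001):
    """Centered continuous moving average of width h, by midpoint summation."""
    dt = h / m
    return sum(f(t - h / 2 + (i + 0.5) * dt) for i in range(m)) * dt / h

def ma_error_estimate(f2_at_t, h):
    """Leading-order filtering error: f(t) - filtered(t) ~ -(h^2/24) * f''(t)."""
    return -(h ** 2 / 24.0) * f2_at_t

# Compare the true filtering error with the closed-form estimate for f = sin
h, t = 0.2, 1.0
true_err = math.sin(t) - moving_average(math.sin, t, h)
est_err = ma_error_estimate(-math.sin(t), h)    # f''(t) = -sin(t)
```

For this smooth signal the estimate agrees with the true error to well under one percent.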
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2008-03-01
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
Motion error analysis of the 3D coordinates of airborne lidar for typical terrains
NASA Astrophysics Data System (ADS)
Peng, Tao; Lan, Tian; Ni, Guoqiang
2013-07-01
A motion error model of 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence on the positioning errors derived from laser point cloud spacing is small. For the model the positioning errors obey simple harmonic vibration whose amplitude envelope gradually reduces with the increase of the vibration frequency. When the vibration period number is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the plane error and in the plane the error in the scanning direction is less than the error in the flight direction. Through the analysis of flight test data, the conclusion is verified.
NASA Technical Reports Server (NTRS)
Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.
2009-01-01
A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and Cloud Profiling Radar (CPR). The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a set of equations when the correlation distance of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of CALIPSO and CloudSat data support these assumptions. However, the correlation distance sometimes becomes large, which might be an indication of precipitation. The cloud correlation distance is equivalent to the de-correlation distance introduced by Hogan and Illingworth [2000] when the cloud fractions of both layers in a two-cloud-layer system are the same.
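The exponential-decay overlap assumption can be sketched as a blend of maximum and random overlap, in the style of the de-correlation distance of Hogan and Illingworth [2000]; parameter values here are illustrative:

```python
import math

def combined_cover(c1, c2, dz, z0):
    """Total cover of two cloud layers with fractions c1, c2 separated by dz.
    Blends maximum and random overlap with weight alpha = exp(-dz / z0),
    where z0 is the de-correlation (effective cloud thickness) distance."""
    alpha = math.exp(-dz / z0)
    c_max = max(c1, c2)                  # maximum overlap limit
    c_rand = c1 + c2 - c1 * c2           # random overlap limit
    return alpha * c_max + (1 - alpha) * c_rand

# Nearby layers overlap almost maximally; widely separated layers approach random
near = combined_cover(0.3, 0.4, dz=0.1, z0=2.0)
far = combined_cover(0.3, 0.4, dz=20.0, z0=2.0)
```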
Razavi, Shahnaz; Salimi, Marzieh; Shahbazi-Gahrouei, Daryoush; Karbasi, Saeed; Kermani, Saeed
2014-01-01
Background: Extremely low-frequency electromagnetic fields (ELF-EMF) can affect biological systems and alter some cell functions such as proliferation rate. Therefore, we aimed to evaluate the effect of ELF-EMF on the growth of human adipose-derived stem cells (hADSCs). Materials and Methods: ELF-EMF was generated by a system including an autotransformer, multi-meter, solenoid coils, teslameter and its probe. We assessed the effect of ELF-EMF with intensities of 0.5 and 1 mT and power line frequency 50 Hz on the survival of hADSCs for 20 and 40 min/day for 7 days by MTT assay. One-way analysis of variance was used to assess significant differences between groups. Results: ELF-EMF had its maximum effect on the proliferation of hADSCs at an intensity of 1 mT for 20 min/day. The survival and proliferation effect (PE) in all exposure groups were significantly higher than those in sham groups (P < 0.05) except in the 1 mT, 40 min/day group. Conclusion: Our results show that ELF-EMF between 0.5 and 1 mT could enhance the survival and PE of hADSCs, depending on the duration of exposure. PMID:24592372
Hadad, Ielham; Veithen, Alex; Springael, Jean–Yves; Sotiropoulou, Panagiota A.; Mendes Da Costa, Agnès; Miot, Françoise; Naeije, Robert
2013-01-01
Stromal cell-derived factor-1α (SDF-1α) is a cardioprotective chemokine, acting through its G-protein coupled receptor CXCR4. In experimental acute myocardial infarction, administration of SDF-1α induces an early improvement of systolic function which is difficult to explain solely by an anti-apoptotic and angiogenic effect. We wondered whether SDF-1α signaling might have direct effects on calcium transients and beating frequency. Primary rat neonatal cardiomyocytes were culture-expanded and characterized by immunofluorescence staining. Calcium sparks were studied by fluorescence microscopy after calcium loading with the Fluo-4 acetoxymethyl ester sensor. The cardiomyocyte enriched cellular suspension expressed troponin I and CXCR4 but was vimentin negative. Addition of SDF-1α in the medium increased cytoplasmic calcium release. The calcium response was completely abolished by using a neutralizing anti-CXCR4 antibody and partially suppressed and delayed by preincubation with an inositol triphosphate receptor (IP3R) blocker, but not with a ryanodine receptor (RyR) antagonist. Calcium fluxes induced by caffeine, a RyR agonist, were decreased by an IP3R blocker. Treatment with forskolin or SDF-1α increased cardiomyocyte beating frequency and their effects were additive. In vivo, treatment with SDF-1α increased left ventricular dP/dtmax. These results suggest that in rat neonatal cardiomyocytes, the SDF-1α/CXCR4 signaling increases calcium transients in an IP3-gated fashion leading to a positive chronotropic and inotropic effect. PMID:23460790
Madani, Nima; Kimball, John S.; Nazeri, Mona; Kumar, Lalit; Affleck, David L. R.
2016-01-01
Species distribution modeling has been widely used in studying habitat relationships and for conservation purposes. However, neglecting ecological knowledge about species, e.g. their seasonal movements, and ignoring the proper environmental factors that can explain key elements for species survival (shelter, food and water) increase model uncertainty. This study exemplifies how these ecological gaps in species distribution modeling can be addressed by modeling the distribution of the emu (Dromaius novaehollandiae) in Australia. Emus cover a large area during the austral winter. However, their habitat shrinks during the summer months. We show evidence of emu summer habitat shrinkage due to higher fire frequency, and low water and food availability in northern regions. Our findings indicate that emus prefer areas with higher vegetation productivity and low fire recurrence, while their distribution is linked to an optimal intermediate (~0.12 m3 m-3) soil moisture range. We propose that the application of three geospatial data products derived from satellite remote sensing, namely fire frequency, ecosystem productivity, and soil water content, provides an effective representation of emu general habitat requirements, and substantially improves species distribution modeling and representation of the species’ ecological habitat niche across Australia. PMID:26799732
Madani, Nima; Kimball, John S; Nazeri, Mona; Kumar, Lalit; Affleck, David L R
2016-01-01
Species distribution modeling has been widely used in studying habitat relationships and for conservation purposes. However, neglecting ecological knowledge about species, e.g. their seasonal movements, and ignoring the proper environmental factors that can explain key elements for species survival (shelter, food and water) increase model uncertainty. This study exemplifies how these ecological gaps in species distribution modeling can be addressed by modeling the distribution of the emu (Dromaius novaehollandiae) in Australia. Emus cover a large area during the austral winter. However, their habitat shrinks during the summer months. We show evidence of emu summer habitat shrinkage due to higher fire frequency, and low water and food availability in northern regions. Our findings indicate that emus prefer areas with higher vegetation productivity and low fire recurrence, while their distribution is linked to an optimal intermediate (~0.12 m3 m(-3)) soil moisture range. We propose that the application of three geospatial data products derived from satellite remote sensing, namely fire frequency, ecosystem productivity, and soil water content, provides an effective representation of emu general habitat requirements, and substantially improves species distribution modeling and representation of the species' ecological habitat niche across Australia. PMID:26799732
Hadad, Ielham; Veithen, Alex; Springael, Jean-Yves; Sotiropoulou, Panagiota A; Mendes Da Costa, Agnès; Miot, Françoise; Naeije, Robert; De Deken, Xavier; Entee, Kathleen Mc
2013-01-01
Stromal cell-derived factor-1α (SDF-1α) is a cardioprotective chemokine, acting through its G-protein coupled receptor CXCR4. In experimental acute myocardial infarction, administration of SDF-1α induces an early improvement of systolic function which is difficult to explain solely by an anti-apoptotic and angiogenic effect. We wondered whether SDF-1α signaling might have direct effects on calcium transients and beating frequency. Primary rat neonatal cardiomyocytes were culture-expanded and characterized by immunofluorescence staining. Calcium sparks were studied by fluorescence microscopy after calcium loading with the Fluo-4 acetoxymethyl ester sensor. The cardiomyocyte enriched cellular suspension expressed troponin I and CXCR4 but was vimentin negative. Addition of SDF-1α in the medium increased cytoplasmic calcium release. The calcium response was completely abolished by using a neutralizing anti-CXCR4 antibody and partially suppressed and delayed by preincubation with an inositol triphosphate receptor (IP3R) blocker, but not with a ryanodine receptor (RyR) antagonist. Calcium fluxes induced by caffeine, a RyR agonist, were decreased by an IP3R blocker. Treatment with forskolin or SDF-1α increased cardiomyocyte beating frequency and their effects were additive. In vivo, treatment with SDF-1α increased left ventricular dP/dtmax. These results suggest that in rat neonatal cardiomyocytes, the SDF-1α/CXCR4 signaling increases calcium transients in an IP3-gated fashion leading to a positive chronotropic and inotropic effect. PMID:23460790
NASA Astrophysics Data System (ADS)
Morioka, T.; Kawanishi, S.; Saruwatari, M.
1994-05-01
Error-free, tunable optical frequency conversion of a transform-limited 4.0 ps optical pulse signal is demonstrated at 6.3 Gbit/s using four-wave mixing in a polarization-maintaining optical fibre. The process generates 4.0-4.6 ps pulses over a 25 nm range with time-bandwidth products of 0.31-0.43 and conversion power penalties of less than 1.5 dB.
Kobayashi, Jyumpei; Tanabiki, Misaki; Doi, Shohei; Kondo, Akihiko; Ohshiro, Takashi; Suzuki, Hirokazu
2015-11-01
The plasmid pGKE75-catA138T, which comprises pUC18 and the catA138T gene encoding thermostable chloramphenicol acetyltransferase with an A138T amino acid replacement (CATA138T), serves as an Escherichia coli-Geobacillus kaustophilus shuttle plasmid that confers moderate chloramphenicol resistance on G. kaustophilus HTA426. The present study examined the thermoadaptation-directed mutagenesis of pGKE75-catA138T in an error-prone thermophile, generating the mutant plasmid pGKE75(αβ)-catA138T responsible for substantial chloramphenicol resistance at 65°C. pGKE75(αβ)-catA138T contained no mutation in the catA138T gene but had two mutations in the pUC replicon, even though the replicon has no apparent role in G. kaustophilus. Biochemical characterization suggested that the efficient chloramphenicol resistance conferred by pGKE75(αβ)-catA138T is attributable to increases in intracellular CATA138T and acetyl-coenzyme A following a decrease in incomplete forms of pGKE75(αβ)-catA138T. The decrease in incomplete plasmids may be due to optimization of plasmid replication by RNA species transcribed from the mutant pUC replicon, which were actually produced in G. kaustophilus. It is noteworthy that G. kaustophilus was transformed with pGKE75(αβ)-catA138T using chloramphenicol selection at 60°C. In addition, a pUC18 derivative with the two mutations propagated in E. coli at a high copy number independently of the culture temperature and high plasmid stability. Since these properties have not been observed in known plasmids, the outcomes extend the genetic toolboxes for G. kaustophilus and E. coli. PMID:26319877
Kobayashi, Jyumpei; Tanabiki, Misaki; Doi, Shohei; Kondo, Akihiko; Ohshiro, Takashi
2015-01-01
The plasmid pGKE75-catA138T, which comprises pUC18 and the catA138T gene encoding thermostable chloramphenicol acetyltransferase with an A138T amino acid replacement (CATA138T), serves as an Escherichia coli-Geobacillus kaustophilus shuttle plasmid that confers moderate chloramphenicol resistance on G. kaustophilus HTA426. The present study examined the thermoadaptation-directed mutagenesis of pGKE75-catA138T in an error-prone thermophile, generating the mutant plasmid pGKE75αβ-catA138T responsible for substantial chloramphenicol resistance at 65°C. pGKE75αβ-catA138T contained no mutation in the catA138T gene but had two mutations in the pUC replicon, even though the replicon has no apparent role in G. kaustophilus. Biochemical characterization suggested that the efficient chloramphenicol resistance conferred by pGKE75αβ-catA138T is attributable to increases in intracellular CATA138T and acetyl-coenzyme A following a decrease in incomplete forms of pGKE75αβ-catA138T. The decrease in incomplete plasmids may be due to optimization of plasmid replication by RNA species transcribed from the mutant pUC replicon, which were actually produced in G. kaustophilus. It is noteworthy that G. kaustophilus was transformed with pGKE75αβ-catA138T using chloramphenicol selection at 60°C. In addition, a pUC18 derivative with the two mutations propagated in E. coli at a high copy number independently of the culture temperature and high plasmid stability. Since these properties have not been observed in known plasmids, the outcomes extend the genetic toolboxes for G. kaustophilus and E. coli. PMID:26319877
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
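The 16-bit CRC recommended by CCSDS for error detection can be sketched as below, assuming the common CRC-16/CCITT variant (polynomial 0x1021, initial value 0xFFFF, no reflection, no final XOR):

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 (poly 0x1021, init 0xFFFF, no reflection, no final XOR)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"123456789"
checksum = crc16_ccitt(frame)            # standard check value: 0x29B1
```

Appending the checksum to the frame lets the receiver detect corruption: any single-byte error changes the computed CRC.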
NASA Astrophysics Data System (ADS)
Noble, Viveca K.
1993-11-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
ERIC Educational Resources Information Center
Ambridge, Ben; Pine, Julian M.; Rowland, Caroline F.; Young, Chris R.
2008-01-01
Participants (aged 5-6 yrs, 9-10 yrs and adults) rated (using a five-point scale) grammatical (intransitive) and overgeneralized (transitive causative) uses of a high frequency, low frequency and novel intransitive verb from each of three semantic classes [Pinker, S. (1989a). "Learnability and cognition: the acquisition of argument structure."…
NASA Astrophysics Data System (ADS)
Xie, Yi; Zhang, Shuang-Nan; Liao, Jin-Yuan
2015-07-01
We model the evolution of the spin frequency's second derivative ν̈ and the braking index n of radio pulsars with simulations within the phenomenological model of their surface magnetic field evolution, which contains a long-term power-law decay modulated by short-term oscillations. For the pulsar PSR B0329+54, a model with three oscillation components can reproduce its ν̈ variation. We show that the “averaged” n is different from the instantaneous n, and its oscillation magnitude decreases abruptly as the time span increases, due to the “averaging” effect. The simulated timing residuals agree with the main features of the reported data. Our model predicts that the averaged ν̈ of PSR B0329+54 will start to decrease rapidly with newer data beyond those used in Hobbs et al. We further perform Monte Carlo simulations for the distribution of the reported data in |ν̈| and |n| versus characteristic age τC diagrams. It is found that the magnetic field oscillation model with decay index α = 0 can reproduce the distributions quite well. Compared with magnetic field decay due to ambipolar diffusion (α = 0.5) and the Hall cascade (α = 1.0), the model with no long-term decay (α = 0) is clearly preferred for old pulsars by the p-values of the two-dimensional Kolmogorov-Smirnov test. Supported by the National Natural Science Foundation of China.
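The braking index discussed above is defined from the spin frequency and its derivatives, n = ν ν̈ / ν̇². A minimal sketch with illustrative values (pure magnetic-dipole spin-down, for which n = 3 exactly; K and ν are not PSR B0329+54's measured values):

```python
def braking_index(nu, nu_dot, nu_ddot):
    """n = nu * nu_ddot / nu_dot**2, from the spin-down law nu_dot = -K * nu**n."""
    return nu * nu_ddot / nu_dot ** 2

# Pure dipole spin-down: nu_dot = -K * nu**3 (illustrative K and nu)
K, nu = 1e-15, 10.0
nu_dot = -K * nu ** 3
nu_ddot = -3 * K * nu ** 2 * nu_dot      # chain rule on nu_dot = -K * nu**3
n = braking_index(nu, nu_dot, nu_ddot)
```

Measured braking indices deviate from 3 precisely when the field evolves, which is what the paper's oscillation model captures.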
Abdollahi, M R; Moieni, A; Mousavi, A; Salmanian, A H
2011-02-01
Transgenic doubled haploid rapeseed (Brassica napus L. cvs. Global and PF(704)) plants were obtained from microspore-derived embryo (MDE) hypocotyls using microprojectile bombardment. The binary vector pCAMBIA3301 containing the gus and bar genes under control of the CaMV 35S promoter was used for bombardment experiments. Transformed plantlets were selected and continuously maintained on selective medium containing 10 mg l(-1) phosphinothricin (PPT), and transgenic plants were obtained by selecting transformed secondary embryos. The presence, copy numbers and expression of the transgenes were confirmed by PCR, Southern blot, RT-PCR and histochemical GUS analyses. In the progeny test, three out of four primary transformants for the bar gene produced homozygous lines. The ploidy level of transformed plants was confirmed by flow cytometry analysis before colchicine treatment. All of the regenerated plants were haploid except one that was a spontaneous diploid. A high frequency of transgenic doubled haploid rapeseed plants (about 15.55% for the bar gene and 11.11% for the gus gene) was produced after colchicine treatment of the haploid plantlets. This result shows a remarkable increase in the production of transgenic doubled haploid rapeseed plants compared to previous studies. PMID:20419350
Financial errors in dementia: Testing a neuroeconomic conceptual framework
Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.
2013-01-01
Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884
NASA Astrophysics Data System (ADS)
Chen, Wanqun; Sun, Yazhou
2015-02-01
When a fly cutter is used to machine potassium dihydrogen phosphate (KDP) crystals, ripples remain in the machined surfaces that have a significant impact on optical performance. An analysis of these low-spatial-frequency ripples is presented, and their influence on the root-mean-squared gradient (GRMS) of the wavefront is discussed. A frequency analysis of the machined KDP crystal surfaces is performed using wavelet transform and power spectral density methods. Based on a classification of the time frequencies for these macroripples, the multimode vibration of the machine tool is found to be the main reason surface ripples are produced. Improvements in the machine design parameters are proposed to limit such effects on the wavefront performance of the KDP crystal.
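The power-spectral-density step described above can be sketched as a simple periodogram peak-finder; this is a generic illustration, not the authors' wavelet pipeline, and all dimensions and frequencies below are hypothetical.

```python
import numpy as np

def ripple_frequency(profile, dx, f_min=0.1):
    """Dominant spatial frequency above f_min via a simple periodogram.

    profile: surface heights sampled every dx (mm); result in mm^-1.
    """
    power = np.abs(np.fft.rfft(profile - np.mean(profile))) ** 2
    freqs = np.fft.rfftfreq(len(profile), d=dx)
    band = freqs >= f_min                  # skip low-frequency form error
    return freqs[band][np.argmax(power[band])]

# synthetic machined trace: 0.8 mm^-1 ripple riding on a slow form error
x = np.arange(0.0, 50.0, 0.01)             # 50 mm trace, 10 um sampling
surface = 0.05 * np.sin(2 * np.pi * 0.8 * x) + 0.2 * np.sin(2 * np.pi * 0.02 * x)
print(ripple_frequency(surface, dx=0.01))  # ≈ 0.8 mm^-1
```

Excluding the band below `f_min` lets the ripple peak dominate even though the low-frequency form error has the larger amplitude.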
Shi, Y C; Parker, D L; Dillon, C R
2016-08-01
This study evaluates the sensitivity of two magnetic resonance-guided focused ultrasound (MRgFUS) thermal property estimation methods to errors in required inputs and different data inclusion criteria. Using ex vivo pork muscle MRgFUS data, sensitivities to required inputs are determined by introducing errors to ultrasound beam locations (r error = -2 to 2 mm) and time vectors (t error = -2.2 to 2.2 s). In addition, the sensitivity to user-defined data inclusion criteria is evaluated by choosing different spatial (r fit = 1-10 mm) and temporal (t fit = 8.8-61.6 s) regions for fitting. Beam location errors resulted in up to 50% change in property estimates with local minima occurring at r error = 0 and estimate errors less than 10% when r error < 0.5 mm. Errors in the time vector led to property estimate errors up to 40% and without local minimum, indicating the need to trigger ultrasound sonications with the MR image acquisition. Regarding the selection of data inclusion criteria, property estimates reached stable values (less than 5% change) when r fit > 2.5 × FWHM, and were most accurate with the least variability for longer t fit. Guidelines provided by this study highlight the importance of identifying required inputs and choosing appropriate data inclusion criteria for robust and accurate thermal property estimation. Applying these guidelines will prevent the introduction of biases and avoidable errors when utilizing these property estimation techniques for MRgFUS thermal modeling applications. PMID:27385508
Thermodynamics of Error Correction
NASA Astrophysics Data System (ADS)
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Ansell, Juliet; Butts, Christine A; Paturi, Gunaranjan; Eady, Sarah L; Wallace, Alison J; Hedderley, Duncan; Gearry, Richard B
2015-05-01
The worldwide growth in the incidence of gastrointestinal disorders has created an immediate need to identify safe and effective interventions. In this randomized, double-blind, placebo-controlled study, we examined the effects of Actazin and Gold, kiwifruit-derived nutritional ingredients, on stool frequency, stool form, and gastrointestinal comfort in healthy and functionally constipated (Rome III criteria for C3 functional constipation) individuals. Using a crossover design, all participants consumed all 4 dietary interventions (Placebo, Actazin low dose [Actazin-L] [600 mg/day], Actazin high dose [Actazin-H] [2400 mg/day], and Gold [2400 mg/day]). Each intervention was taken for 28 days followed by a 14-day washout period between interventions. Participants recorded their daily bowel movements and well-being parameters in daily questionnaires. In the healthy cohort (n = 19), the Actazin-H (P = .014) and Gold (P = .009) interventions significantly increased the mean daily bowel movements compared with the washout. No significant differences were observed in stool form as determined by use of the Bristol stool scale. In a subgroup analysis of responders in the healthy cohort, Actazin-L (P = .005), Actazin-H (P < .001), and Gold (P = .001) consumption significantly increased the number of daily bowel movements by greater than 1 bowel movement per week. In the functionally constipated cohort (n = 9), there were no significant differences between interventions for bowel movements and the Bristol stool scale values or in the subsequent subgroup analysis of responders. This study demonstrated that Actazin and Gold produced clinically meaningful increases in bowel movements in healthy individuals. PMID:25931419
Technology Transfer Automated Retrieval System (TEKTRAN)
The MTDFREML (Boldman et al., 1995) set of programs was written to handle partially missing data in an expedient manner. When estimating (co)variance components and genetic parameters for multiple trait models, the programs have not been able to estimate standard errors of those estimates for multi...
Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J
2009-01-01
Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e. negative binomial regression). The eigenfunction values from the spatial
Marycz, Krzysztof; Lewandowski, Daniel; Tomaszewski, Krzysztof A; Henry, Brandon M; Golec, Edward B; Marędziak, Monika
2016-01-01
The aim of this study was to evaluate whether low-frequency, low-magnitude vibrations (LFLM) could enhance the chondrogenic differentiation potential of human adipose-derived mesenchymal stem cells (hASCs) with simultaneous inhibition of their adipogenic properties for biomedical purposes. We developed a prototype device that induces low-magnitude (0.3 g) low-frequency vibrations at the following frequencies: 25, 35 and 45 Hz. We then used hASCs to investigate their cellular response to the mechanical signals, and also evaluated changes in hASC morphology and proliferative activity in response to each frequency. Induction of chondrogenesis in hASCs under the influence of a 35 Hz signal led to the most effective and stable cartilaginous tissue formation through the highest secretion of Bone Morphogenetic Protein 2 (BMP-2) and Collagen type II, with a low concentration of Collagen type I. These results correlated well with the corresponding gene expression levels. Simultaneously, we observed significant up-regulation of α3, α4, β1 and β3 integrins, as well as Sox-9, in chondroblast progenitor cells treated with 35 Hz vibrations. Interestingly, we noticed that application of the 35 Hz frequency significantly inhibited adipogenesis of hASCs. The obtained results suggest that application of LFLM vibrations together with stem cell therapy might be a promising tool in cartilage regeneration. PMID:26966645
NASA Astrophysics Data System (ADS)
Székely, Balázs; Raveloson, Andrea; Rasztovits, Sascha; Molnár, Gábor; Dorninger, Peter
2013-04-01
It is a common task in geoscience to determine the volume of a topographic depression (e.g., a valley, a crater, or a gully) based on a digital terrain model (DTM). For DTMs based on laser-scanned data this task can be accomplished with relatively high accuracy. However, if the DTM is generated using terrestrial photogrammetric methods, the limitations of the technology often produce geodetically inaccurate or biased models in forested or poorly visible areas, or where the landform has an ill-posed geometry (e.g., it is elongated). In these cases the inaccuracies may hamper the generation of a proper DTM. On the other hand, if we are instead interested in determining the volume of the feature to a certain accuracy, or intend to carry out an order-of-magnitude volumetric estimation, a DTM with larger inaccuracies is tolerable. In this case the volume calculation can still be done by setting realistic assumptions about the errors of the DTM. In our approach two DTMs are generated to create top and bottom envelope surfaces that confine the "true" but unknown DTM. The varying accuracy of the photogrammetric DTM is considered via the varying deviation of these two surfaces: at problematic corners of the feature the deviation of the two surfaces will be larger, whereas at well-renderable domains the deviation remains minimal. Since such topographic depressions may have a complicated geometry, the error-prone areas may complicate the geometry of the aforementioned envelopes even more, and the proper calculation of the volume may turn out to be difficult. To reduce this difficulty, a voxel-based approach is used. The volumetric error is calculated from the gridded envelopes using an appropriate voxel resolution. The method is applied to gully features termed lavakas, which exist in large numbers in Madagascar. These landforms are typically characterised by a complex shape and steep walls; they are often elongated and have internal crests. All these
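A minimal sketch of bracketing the depression volume between the two envelope DTMs with a voxel count might look like the following; the `rim_level`, grid values, and voxel size are hypothetical, and the authors' actual implementation is not reproduced here.

```python
import numpy as np

def depression_volume(dtm, rim_level, cell_size, voxel_height=0.1):
    """Volume below rim_level counted in whole voxel columns per cell
    (coarse voxel quadrature; the 1e-9 guards against float round-off)."""
    depth = np.maximum(rim_level - dtm, 0.0)
    n_voxels = np.floor(depth / voxel_height + 1e-9)
    return n_voxels.sum() * cell_size ** 2 * voxel_height

# two envelope DTMs bracketing the "true" but unknown surface (meters)
top = np.full((4, 4), 8.0)       # shallow (top) envelope
bottom = np.full((4, 4), 6.0)    # deep (bottom) envelope
v_min = depression_volume(top, rim_level=10.0, cell_size=1.0)
v_max = depression_volume(bottom, rim_level=10.0, cell_size=1.0)
print(v_min, v_max)              # the true volume lies between the two
```

The spread between `v_min` and `v_max` plays the role of the volumetric error: where the envelopes deviate strongly (poorly rendered areas), the bracket widens.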
Automatic Locking of Laser Frequency to an Absorption Peak
NASA Technical Reports Server (NTRS)
Koch, Grady J.
2006-01-01
An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that
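The acquire-then-lock sequence described above can be sketched numerically; the Lorentzian line shape, the dither-based derivative, and the purely integral feedback below are illustrative assumptions, not the actual electronics.

```python
import numpy as np

def absorption(f, f0=5.0, width=0.5):
    # Lorentzian stand-in for the gas absorption line (hypothetical units)
    return 1.0 / (1.0 + ((f - f0) / width) ** 2)

def error_signal(f, dither=1e-3):
    # derivative of absorption w.r.t. frequency, obtained by dithering
    return (absorption(f + dither) - absorption(f - dither)) / (2 * dither)

# 1) automatic acquisition: sweep and park near the absorption maximum
sweep = np.linspace(0.0, 10.0, 2001)
f = sweep[np.argmax(absorption(sweep))]

# 2) locking: integral feedback drives the error signal toward zero
gain = 0.05
for _ in range(500):
    f += gain * error_signal(f)   # derivative > 0 below the peak, < 0 above

print(f)   # locked at the top of the peak, f ≈ 5.0
```

The zero crossing of the derivative is a stable fixed point of the feedback loop because the error signal is approximately linear across the locking range.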
NASA Astrophysics Data System (ADS)
Wentz, Frank J.; Meissner, Thomas
2016-05-01
The Liebe and Rosenkranz atmospheric absorption models for dry air and water vapor below 100 GHz are refined based on an analysis of antenna temperature (TA) measurements taken by the Global Precipitation Measurement Microwave Imager (GMI) in the frequency range 10.7 to 89.0 GHz. The GMI TA measurements are compared to the TA predicted by a radiative transfer model (RTM), which incorporates both the atmospheric absorption model and a model for the emission and reflection from a rough-ocean surface. The inputs for the RTM are the geophysical retrievals of wind speed, columnar water vapor, and columnar cloud liquid water obtained from the satellite radiometer WindSat. The Liebe and Rosenkranz absorption models are adjusted to achieve consistency with the RTM. The vapor continuum is decreased by 3% to 10%, depending on vapor. To accomplish this, the foreign-broadening part is increased by 10%, and the self-broadening part is decreased by about 40% at the higher frequencies. In addition, the strength of the water vapor line is increased by 1%, and the shape of the line at low frequencies is modified. The dry air absorption is increased, with the increase being a maximum of 20% at the 89 GHz, the highest frequency considered here. The nonresonant oxygen absorption is increased by about 6%. In addition to the RTM comparisons, our results are supported by a comparison between columnar water vapor retrievals from 12 satellite microwave radiometers and GPS-retrieved water vapor values.
Retransmission error control with memory
NASA Technical Reports Server (NTRS)
Sindhu, P. S.
1977-01-01
In this paper, an error control technique that is a basic improvement over automatic-repeat-request (ARQ) is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel operated in the ALOHA packet broadcasting mode.
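A toy Monte Carlo contrasting plain ARQ with a memory-based scheme conveys the idea of reusing corrupted copies; the bitwise majority vote and the perfect error detector below are illustrative assumptions, not the MRQ combining rule of the paper.

```python
import random

def transmit(block, ber, rng):
    # flip each bit independently with probability ber
    return [b ^ (rng.random() < ber) for b in block]

def arq_tries(block, ber, rng):
    # plain ARQ: corrupted copies are discarded, retransmission requested
    tries = 0
    while True:
        tries += 1
        if transmit(block, ber, rng) == block:
            return tries

def mrq_tries(block, ber, rng):
    # ARQ-with-memory idea: keep corrupted copies and combine them,
    # here by a bitwise majority vote across all received copies
    copies, tries = [], 0
    while True:
        tries += 1
        copies.append(transmit(block, ber, rng))
        decoded = [int(sum(bits) * 2 > len(copies)) for bits in zip(*copies)]
        if decoded == block:   # stand-in for a perfect error detector (CRC)
            return tries

rng = random.Random(1)
block = [rng.randint(0, 1) for _ in range(64)]
arq_mean = sum(arq_tries(block, 0.05, rng) for _ in range(200)) / 200
mrq_mean = sum(mrq_tries(block, 0.05, rng) for _ in range(200)) / 200
print(arq_mean, mrq_mean)   # memory needs markedly fewer transmissions
```

At a 5% bit error rate an error-free 64-bit block is rare, so plain ARQ retransmits many times, while combining even a few corrupted copies decodes quickly.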
NASA Astrophysics Data System (ADS)
Mobley, Joel; Waters, Kendall R.; Miller, James G.
2005-07-01
Kramers-Kronig (KK) analyses of experimental data are complicated by the extrapolation problem, that is, how the unexamined spectral bands impact KK calculations. This work demonstrates the causal linkages in resonant-type data provided by acoustic KK relations for the group velocity (cg) and the derivative of the attenuation coefficient (α') (components of the derivative of the acoustic complex wave number) without extrapolation or unmeasured parameters. These relations provide stricter tests of causal consistency relative to previously established KK relations for the phase velocity (cp) and attenuation coefficient (α) (components of the undifferentiated acoustic wave number) due to their shape invariance with respect to subtraction constants. For both the group velocity and attenuation derivative, three forms of the relations are derived. These relations are equivalent for bandwidths covering the entire infinite spectrum, but differ when restricted to bandlimited spectra. Using experimental data from suspensions of elastic spheres in saline, the accuracy of finite-bandwidth KK predictions for cg and α' is demonstrated. Of the multiple methods, the most accurate were found to be those whose integrals were expressed only in terms of the phase velocity and attenuation coefficient themselves, requiring no differentiated quantities.
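For orientation, with the sign convention k(ω) = ω/c_p(ω) + iα(ω) (an assumed convention, not stated in the abstract), the two quantity pairs discussed above are the components of the complex wave number and of its frequency derivative:

```latex
k(\omega) = \frac{\omega}{c_p(\omega)} + i\,\alpha(\omega),
\qquad
\frac{dk}{d\omega} = \frac{1}{c_g(\omega)} + i\,\alpha'(\omega),
\qquad
\frac{1}{c_g(\omega)} \equiv \frac{d}{d\omega}\!\left[\frac{\omega}{c_p(\omega)}\right].
```

The KK relations for (c_g, α') thus constrain the differentiated wave number, whereas the earlier relations for (c_p, α) constrain the undifferentiated one.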
Standard Errors of the Kernel Equating Methods under the Common-Item Design.
ERIC Educational Resources Information Center
Liou, Michelle; And Others
This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…
Phase Errors and the Capture Effect
Blair, J., and Machorro, E.
2011-11-01
This slide show presents an analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect because the largest signal captures the phase and frequency determination.
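Both claims can be checked with a two-phasor sketch: a signal of amplitude A plus a weaker interferer of amplitude a at relative phase θ (amplitudes hypothetical).

```python
import numpy as np

# phase of the sum A + a*exp(i*theta): bounded by 90 deg when a < A,
# and zero on average over a full cycle of theta
A, a = 1.0, 0.6
theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
resultant = A + a * np.exp(1j * theta)
phase_error_deg = np.degrees(np.angle(resultant))

print(np.max(np.abs(phase_error_deg)))   # arcsin(a/A) ≈ 36.9 deg < 90
print(np.mean(phase_error_deg))          # ≈ 0: the stronger signal captures
```

The worst-case phase error is arcsin(a/A), which only approaches 90° as the interferer amplitude approaches the signal amplitude.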
Chen, Xi; He, Fan; Zhong, Dong-Yan; Luo, Zong-Ping
2015-01-01
Osteoporosis can be associated with the disordered balance between osteogenesis and adipogenesis of bone marrow-derived mesenchymal stem cells (BM-MSCs). Although low-frequency mechanical vibration has been demonstrated to promote osteogenesis, little is known about the influence of acoustic-frequency vibratory stimulation (AFVS). BM-MSCs were subjected to AFVS at frequencies of 0, 30, 400, and 800 Hz and induced toward osteogenic or adipogenic-specific lineage. Extracellular matrix mineralization was determined by Alizarin Red S staining and lipid accumulation was assessed by Oil Red O staining. Transcript levels of osteogenic and adipogenic marker genes were evaluated by real-time reverse transcription-polymerase chain reaction. Cell proliferation of BM-MSCs was promoted following exposure to AFVS at 800 Hz. Vibration at 800 Hz induced the highest level of calcium deposition and significantly increased mRNA expression of COL1A1, ALP, RUNX2, and SPP1. The 800 Hz group downregulated lipid accumulation and levels of adipogenic genes, including FABP4, CEBPA, PPARG, and LEP, while vibration at 30 Hz supported adipogenesis. BM-MSCs showed a frequency-dependent response to acoustic vibration. AFVS at 800 Hz was the most favorable for osteogenic differentiation and simultaneously suppressed adipogenesis. Thus, acoustic vibration could potentially become a novel means to prevent and treat osteoporosis. PMID:25738155
Batic, D.; Kelkar, N. G.; Nowakowski, M.
2011-05-15
It is shown here that the extraction of quasinormal modes within the first Born approximation of the scattering amplitude is mathematically not well-founded. Indeed, the constraints on the existence of the scattering amplitude integral lead to inequalities for the imaginary parts of the quasinormal mode frequencies. For instance, in the Schwarzschild case, 0 ≤ ω_I < κ (where κ is the surface gravity at the horizon) invalidates the poles deduced from the first Born approximation method, namely, ω_n = inκ.
Bergström, Gunnar; Christoffersson, Jonas; Schwanke, Kristin; Zweigerdt, Robert; Mandenius, Carl-Fredrik
2015-08-01
Beating in vivo-like human cardiac bodies (CBs) were used in a microfluidic device for testing cardiotoxicity. The CBs, cardiomyocyte cell clusters derived from induced pluripotent stem cells, exhibited typical structural and functional properties of the native human myocardium. The CBs were captured in niches along a perfusion channel in the device. Video imaging was utilized for automatic monitoring of the beating frequency of each individual CB. The device allowed assessment of cardiotoxic effects of drug substances doxorubicin, verapamil and quinidine on the 3D clustered cardiomyocytes. Beating frequency data recorded over a period of 6 hours are presented and compared to literature data. The results indicate that this microfluidic setup with imaging of CB characteristics provides a new opportunity for label-free, non-invasive investigation of toxic effects in a 3D microenvironment. PMID:26135270
Radar error statistics for the space shuttle
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
Radar error statistics for the C-band and S-band radars recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, denoted by the subscript B, and high-frequency error statistics, denoted by the subscript q. Bias errors may be slowly varying to constant. High-frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in the correction for atmospheric refraction effects. High-frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line-of-sight. This was the first time that horizontal and line-of-sight scintillations were identified.
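A common way to realize such a bias-plus-noise error model is a constant offset plus a first-order Gauss-Markov (exponentially correlated) sequence for the sample-to-sample-correlated part; this is a generic sketch with hypothetical numbers, not the report's actual statistics.

```python
import numpy as np

def gauss_markov_noise(n, sigma, tau, dt, rng):
    """First-order Gauss-Markov sequence: steady-state std sigma,
    correlation time tau, sample spacing dt."""
    phi = np.exp(-dt / tau)
    x = np.zeros(n)
    w = rng.normal(0.0, sigma * np.sqrt(1.0 - phi ** 2), n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + w[k]
    return x

# total range error = (near-)constant bias + correlated noise (meters)
rng = np.random.default_rng(0)
bias = 3.0
noise = gauss_markov_noise(10000, sigma=1.5, tau=2.0, dt=0.1, rng=rng)
range_error = bias + noise
print(range_error.mean(), noise.std())   # ≈ bias, ≈ sigma
```

The recursion gives noise that is correlated from sample to sample over roughly `tau` seconds, while setting `tau` very small recovers uncorrelated noise.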
Automatic oscillator frequency control system
NASA Technical Reports Server (NTRS)
Smith, S. F. (Inventor)
1985-01-01
A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
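The count-compare-correct loop can be sketched as follows; the `VCO` class, the LSB scaling, and the gate time are hypothetical stand-ins for the hardware described, not its register-level behavior.

```python
class VCO:
    """Toy voltage-controlled oscillator: frequency is a base value plus
    a correction word scaled by hz_per_lsb (names are hypothetical)."""
    def __init__(self, base_hz, hz_per_lsb):
        self.base_hz, self.hz_per_lsb, self.word = base_hz, hz_per_lsb, 0

    @property
    def freq(self):
        return self.base_hz + self.hz_per_lsb * self.word

def calibrate(vco, target_hz, gate_s=1.0, iterations=5):
    # accumulate oscillator cycles over the gate interval, compare the
    # count against the stored zero-error constant, and apply a
    # correction word to pull the frequency in
    target_count = round(target_hz * gate_s)
    for _ in range(iterations):
        count = round(vco.freq * gate_s)   # accumulator contents
        error = count - target_count       # remainder vs zero-error constant
        vco.word -= round(error / (vco.hz_per_lsb * gate_s))
    return vco.freq

vco = VCO(base_hz=9.99e6, hz_per_lsb=10.0)
print(calibrate(vco, target_hz=1.0e7))   # pulled to within one LSB of 10 MHz
```

A longer gate interval increases the cycle count per measurement and hence the resolution of the frequency comparison.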
Error diffusion with a more symmetric error distribution
NASA Astrophysics Data System (ADS)
Fan, Zhigang
1994-05-01
In this paper a new error diffusion algorithm is presented that effectively eliminates the "worm" artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the "future" pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
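As a point of comparison, a minimal serpentine error-diffusion halftoner (a simplified stand-in, not the paper's two-pass kernel) illustrates the basic mechanism of pushing quantization error to future pixels while preserving average tone.

```python
import numpy as np

def serpentine_diffuse(image, threshold=0.5):
    """Minimal error-diffusion halftoner: scanlines are processed in
    alternating directions and each pixel's full quantization error is
    pushed to the next pixel in the scan direction."""
    work = image.astype(float)
    out = np.zeros_like(work)
    h, w = work.shape
    for y in range(h):
        xs = range(w) if y % 2 == 0 else range(w - 1, -1, -1)
        err = 0.0
        for x in xs:
            value = work[y, x] + err              # incoming diffused error
            out[y, x] = 1.0 if value >= threshold else 0.0
            err = value - out[y, x]               # pass residual onward
    return out

gray = np.full((8, 1000), 0.3)     # flat 30% gray patch
halftone = serpentine_diffuse(gray)
print(halftone.mean())             # ≈ 0.3: average tone is preserved
```

Because the residual error is never discarded, the fraction of white pixels converges to the input gray level; a forward-plus-backward pass per scanline, as in the paper, additionally symmetrizes where that error lands.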
Interpolation Errors in Spectrum Analyzers
NASA Technical Reports Server (NTRS)
Martin, J. L.
1996-01-01
To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
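The kind of discrepancy the report warns about can be illustrated by interpolating the same two transducer-factor calibration points linearly in frequency versus linearly in log frequency (all values hypothetical).

```python
import math

# transducer factor known at two calibration points (hypothetical values)
f1, k1 = 10e6, 10.0      # Hz, dB
f2, k2 = 100e6, 30.0

def interp_linear_f(f):
    # linear interpolation on a linear frequency axis
    return k1 + (k2 - k1) * (f - f1) / (f2 - f1)

def interp_log_f(f):
    # linear interpolation on a logarithmic frequency axis
    return k1 + (k2 - k1) * math.log10(f / f1) / math.log10(f2 / f1)

f = 50e6
print(interp_linear_f(f))                      # ≈ 18.9 dB
print(interp_log_f(f))                         # ≈ 24.0 dB
print(interp_log_f(f) - interp_linear_f(f))    # ≈ 5.1 dB discrepancy
```

If the analyzer interpolates on one axis while the operator assumes the other, the roughly 5 dB gap midway between calibration points goes directly into the reported field amplitude.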
NASA Astrophysics Data System (ADS)
Dabbagh, Hossein A.; Teimouri, Abbas; Chermahini, Alireza Najafi; Shiasi, Rezvan
2007-06-01
We present a detailed analysis of the structural properties, infrared spectra and visible spectra of the 4-substituted aminoazo-benzenesulfonyl azides. The preparation of 4-sulfonyl azide benzenediazonium chloride with cyclic amines of various ring sizes (pyrrolidine, piperidine, 4-methylpiperidine, N-methylpiperazine, morpholine and hexamethyleneimine) has been investigated theoretically at the HF and DFT levels of theory using the standard 6-31G* basis set. The optimized geometries and calculated vibrational frequencies are evaluated via comparison with experimental values. The vibrational spectral data obtained from solid-phase FT-IR spectra are assigned modes based on the results of the theoretical calculations. The observed spectra are found to be in good agreement with the calculations.
Xi, Qinhua; Li, Yueqin; Dai, Juan; Chen, Weichang
2015-01-01
Exacerbation and relapse of inflammatory bowel disease (IBD) is associated with reduced antibacterial immunity and increased immune regulatory activity, but the source of increased immune regulation during episodes of disease activity is unclear. Myeloid-derived suppressor cells (MDSCs) are a cell type with a well-recognized role in limiting immune reactions. MDSC function in IBD and its relation to disease activity, however, remains unexplored. Here we show that patients with either ulcerative colitis (UC) or Crohn's disease (CD) have high peripheral blood levels of mononuclear MDSCs. Exacerbation of disease in particular is associated with higher mononuclear MDSC counts compared with those in remission. Interestingly, chronic experimental colitis in mice coincides with increased MDSC mobilization. Thus, our results suggest that mononuclear MDSCs are endogenous antagonists of immune system functionality in mucosal inflammation, and the depression of antibacterial immunity associated with exacerbation of disease might involve increased activity of the MDSC compartment. PMID:25775229
La Rota, Mauricio; Kantety, Ramesh V; Yu, Ju-Kyung; Sorrells, Mark E
2005-01-01
Background Earlier comparative maps between the genomes of rice (Oryza sativa L.), barley (Hordeum vulgare L.) and wheat (Triticum aestivum L.) were linkage maps based on cDNA-RFLP markers. The low number of polymorphic RFLP markers has limited the development of dense genetic maps in wheat and the number of available anchor points in comparative maps. Higher density comparative maps using PCR-based anchor markers are necessary to better estimate the conservation of colinearity among cereal genomes. The purposes of this study were to characterize the proportion of transcribed DNA sequences containing simple sequence repeats (SSR or microsatellites) by length and motif for wheat, barley and rice and to determine in-silico rice genome locations for primer sets developed for wheat and barley Expressed Sequence Tags. Results The proportions of SSR types (di-, tri-, tetra-, and penta-nucleotide repeats) and motifs varied with the length of the SSRs within and among the three species, with trinucleotide SSRs being the most frequent. Distributions of genomic microsatellites (gSSRs), EST-derived microsatellites (EST-SSRs), and transcribed regions in the contiguous sequence of rice chromosome 1 were highly correlated. More than 13,000 primer pairs were developed for use by the cereal research community as potential markers in wheat, barley and rice. Conclusion Trinucleotide SSRs were the most common type in each of the species; however, the relative proportions of SSR types and motifs differed among rice, wheat, and barley. Genomic microsatellites were found to be primarily located in gene-rich regions of the rice genome. Microsatellite markers derived from the use of non-redundant EST-SSRs are an economic and efficient alternative to RFLP for comparative mapping in cereals. PMID:15720707
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-01
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings; on the other hand, inpatient errors are more severe than outpatient errors. PMID:26649954
Yang, Guangtao; Swaaij, R. A. C. M. M. van; Dobrovolskiy, S.; Zeman, M.
2014-01-21
In this contribution, we demonstrate the application of temperature-dependent capacitance-frequency (C-f) measurements to n-i-p hydrogenated amorphous silicon (a-Si:H) solar cells that are forward-biased. By using a forward bias, the C-f measurement can detect the density of defect states in a particular energy range of the interface region. For this contribution, we have applied this measurement method to n-i-p a-Si:H solar cells whose intrinsic layer was exposed to an H2-plasma before p-type layer deposition. After this treatment, the open-circuit voltage and fill factor increased significantly, as did the blue response of the solar cells, as concluded from external quantum efficiency. For single-junction n-i-p a-Si:H solar cells, the initial efficiency increased from 6.34% to 8.41%. This performance enhancement is believed to be mainly due to a reduction of the defect density in the i-p interface region after the H2-plasma treatment. These results are confirmed by the C-f measurements. After H2-plasma treatment, the defect density in the intrinsic layer near the i-p interface region is lower and peaks at an energy level deeper in the band gap. These C-f measurements therefore enable us to monitor changes in the defect density in the interface region as a result of a hydrogen plasma. The lower defect density at the i-p interface as detected by the C-f measurements is supported by dark current-voltage measurements, which indicate a lower carrier recombination rate.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
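The retransmission rule described in this abstract can be sketched with a toy inner/outer code pair. The 3-repetition inner code, the even-parity outer check, and all parameters below are our own illustrative choices, not the codes analyzed in the paper; the sketch only shows the control flow (retransmit if the outer check fails after inner decoding).

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def inner_encode(bits):
    """Toy inner code: 3-fold repetition of every bit."""
    return [b for b in bits for _ in range(3)]

def inner_decode(received):
    """Majority vote per 3-bit group; corrects any single flip."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

def outer_parity_ok(word):
    """Toy outer code: even parity over the whole word (detection only)."""
    return sum(word) % 2 == 0

def send_until_accepted(msg, p, rng, max_tries=50):
    """One ARQ session: retransmit until the outer check passes."""
    codeword = msg + [sum(msg) % 2]          # append even-parity bit
    for attempt in range(1, max_tries + 1):
        decoded = inner_decode(bsc(inner_encode(codeword), p, rng))
        if outer_parity_ok(decoded):         # outer code detects no error
            return decoded[:-1], attempt     # accepted (possibly wrongly)
    return None, max_tries
```

An accepted word with an even, nonzero number of residual bit errors is exactly the "undetected error" event whose probability the paper derives.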
Statistical errors in Monte Carlo estimates of systematic errors
NASA Astrophysics Data System (ADS)
Roe, Byron P.
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one; this reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
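The unisim/multisim distinction can be sketched on a toy linear model. The coefficients, sigmas, and sample count below are invented for illustration (for a linear model the unisim sum of squared shifts is exact, while the multisim spread converges to the same variance):

```python
import random

# Toy model: observable depends linearly on three systematic parameters.
# Coefficients c_i and 1-sigma uncertainties sigma_i are made up; the
# true systematic variance is sum (c_i * sigma_i)^2.
coeffs = [2.0, -1.0, 0.5]
sigmas = [0.3, 0.2, 0.4]

def observable(params):
    return sum(c * p for c, p in zip(coeffs, params))

# Unisim: one MC run per parameter, each shifted by +1 sigma.
nominal = observable([0.0, 0.0, 0.0])
unisim_var = sum(
    (observable([s if j == i else 0.0 for j, s in enumerate(sigmas)]) - nominal) ** 2
    for i in range(len(sigmas))
)

# Multisim: every run draws all parameters from their assumed normal
# distributions; the spread of the results estimates the variance.
rng = random.Random(42)
runs = [observable([rng.gauss(0.0, s) for s in sigmas]) for _ in range(4000)]
mean = sum(runs) / len(runs)
multisim_var = sum((r - mean) ** 2 for r in runs) / (len(runs) - 1)

true_var = sum((c * s) ** 2 for c, s in zip(coeffs, sigmas))
```

In the linear regime both estimators target the same variance; the paper's point is how their statistical precision differs when the MC statistical error competes with the individual systematic shifts.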
Prejac, J; Višnjević, V; Drmić, S; Skalny, A A; Mimica, N; Momčilović, B
2014-04-01
Today, iodine deficiency is, after iron deficiency, the most common nutritional deficiency in developed European countries and the underdeveloped third world alike. The current biological indicator of iodine status is urinary iodine, which reflects only very recent iodine exposure, whereas a long-term indicator of iodine status remains to be identified. We analyzed hair iodine in a prospective, observational, cross-sectional, and exploratory study involving 870 apparently healthy Croatians (270 men and 600 women). Hair iodine was analyzed with inductively coupled plasma mass spectrometry (ICP-MS). The population (n = 870) hair iodine (IH) median was 0.499 μg/g (0.482 and 0.508 μg/g for men and women, respectively), suggesting no sex-related difference. We studied hair iodine uptake via the logistic sigmoid saturation curve of the median derivatives to assess iodine deficiency, adequacy, and excess. We estimated overt iodine deficiency to occur when the hair iodine concentration is below 0.15 μg/g. There followed a saturation range interval of about 0.15-2.0 μg/g (r² = 0.994). Eventually, the sigmoid curve became saturated at about 2.0 μg/g and upward, suggesting excessive iodine exposure. Hair appears to be a valuable and robust long-term biological indicator tissue for assessing iodine body status. We propose that adequate iodine status corresponds to a hair iodine (IH) uptake saturation of 0.565-0.739 μg/g (55-65%). PMID:24629671
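The cutoffs reported in this abstract reduce to a simple classification rule. A minimal sketch (band names are our own labels; only the numeric thresholds come from the abstract):

```python
def classify_hair_iodine(ug_per_g: float) -> str:
    """Classify hair iodine status using the cutoffs reported in the
    abstract (values in micrograms per gram)."""
    if ug_per_g < 0.15:
        return "overt deficiency"
    if ug_per_g > 2.0:
        return "excessive exposure"
    if 0.565 <= ug_per_g <= 0.739:
        return "adequate (55-65% saturation)"
    return "within saturation range"
```

For example, the population median of 0.499 μg/g falls inside the saturation range but below the proposed adequacy band.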
Rundle, J. B.
1989-09-10
The purpose of this paper is to show that the various observational parameters characterizing the statistical properties of earthquakes can be related to each other. The fundamental postulate used to obtain quantitative results is the idea that the physics of earthquake occurrence scales as a power law, similar to properties one often sees in critical phenomena. When the physics of earthquake occurrence is exactly scale invariant, b = 1, and it can be shown as a consequence that earthquakes in any magnitude band Δm cover the same area in unit time. This result therefore implies the existence of a universal covering interval τ_T, which is here called the "cycle interval." Using this idea, the complete Gutenberg-Richter relation is derived in terms of the fault area S_T available to events of any given size, the average stress drop Δσ_T for events occurring on S_T, the interval τ_T for events of stress drop Δσ_T to cover an area S_T, and the scaling exponent α, which is proportional to the b value. Observationally, the average recurrence time interval for great earthquakes, or perhaps equivalently the recurrence interval for characteristic earthquakes on a fault segment, is a measure of the cycle interval τ_T. The exponent α may depend on time, but scale invariance (self-similarity) demands that α = 1. It is shown in the appendix that the A value in the Gutenberg-Richter relation can be written in terms of S_T, τ_T, Δσ_T, and the parameter α. The b value is either 1 or 1.5 (depending on the geometry of the fault zone) multiplied by α. © American Geophysical Union 1989
Unified Analysis for Antenna Pointing and Structural Errors. Part 1. Review
NASA Technical Reports Server (NTRS)
Abichandani, K.
1983-01-01
A necessary step in the design of a high-accuracy microwave antenna system is to establish the signal error budget due to structural, pointing, and environmental parameters. A unified approach to performing error budget analysis, as applicable to ground-based microwave antennas of different size and operating frequency, is discussed. Major error sources contributing to the resultant deviation in antenna boresighting in pointing and tracking modes, and the derivation of the governing equations, are presented. Two computer programs (SAMCON and EBAP), including the antenna servo-control program, were developed in-house as valuable tools in the error budget determination. A list of possible errors giving their relative contributions and levels is presented.
Operational Interventions to Maintenance Error
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki
1997-01-01
A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context, and human factors components.
NASA Technical Reports Server (NTRS)
Blucker, T. J.; Ferry, W. W.
1971-01-01
An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.
NASA Astrophysics Data System (ADS)
Orem, C. A.; Pelletier, J. D.
2015-11-01
Flood-envelope curves (FEC) are useful for constraining the upper limit of possible flood discharges within drainage basins in a particular hydroclimatic region. Their usefulness, however, is limited by their lack of a well-defined recurrence interval. In this study we use radar-derived precipitation estimates to develop an alternative to the FEC method, i.e. the frequency-magnitude-area-curve (FMAC) method, that incorporates recurrence intervals. The FMAC method is demonstrated in two well-studied U.S. drainage basins, i.e. the Upper and Lower Colorado River basins (UCRB and LCRB, respectively), using Stage III Next-Generation-Radar (NEXRAD) gridded products and the diffusion-wave flow-routing algorithm. The FMAC method can be applied worldwide using any radar-derived precipitation estimates. In the FMAC method, idealized basins of similar contributing area are grouped together for frequency-magnitude analysis of precipitation intensity. These data are then routed through the idealized drainage basins of different contributing areas, using contributing-area-specific estimates for channel slope and channel width. Our results show that FMACs of precipitation discharge are power-law functions of contributing area with an average exponent of 0.79 ± 0.07 for recurrence intervals from 10 to 500 years. We compare our FMACs to published FECs and find that for wet antecedent-moisture conditions, the 500-year FMAC of flood discharge in the UCRB is on par with the US FEC for contributing areas of ~10² to 10³ km². FMACs of flood discharge for the LCRB exceed the published FEC for the LCRB for contributing areas in the range of ~10² to 10⁴ km². The FMAC method retains the power of the FEC method for constraining flood hazards in basins that are ungauged or have short flood records, yet it has the added advantage that it includes recurrence interval information necessary for estimating event probabilities.
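The reported power-law relation between discharge and contributing area can be illustrated with a log-log least-squares fit. The prefactor, scatter level, and sample layout below are invented; only the exponent 0.79 comes from the abstract:

```python
import math
import random

# Synthetic (area, discharge) pairs following Q = c * A**b with b = 0.79,
# plus mild lognormal scatter; c and the scatter level are invented.
rng = random.Random(7)
b_true, c_true = 0.79, 12.0
areas = [10.0 ** (2 + 2 * i / 19) for i in range(20)]   # 1e2..1e4 km^2
flows = [c_true * a ** b_true * math.exp(rng.gauss(0.0, 0.05)) for a in areas]

# Least-squares slope in log-log space recovers the power-law exponent.
xs = [math.log10(a) for a in areas]
ys = [math.log10(q) for q in flows]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b_hat = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```

The same regression, applied to routed radar-derived discharges grouped by contributing area, is the kind of calculation that yields the 0.79 ± 0.07 exponent quoted above.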
Study of geopotential error models used in orbit determination error analysis
NASA Technical Reports Server (NTRS)
Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.
1991-01-01
The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
Køllgaard, Tania; Ugurel-Becker, Selma; Idorn, Manja; Andersen, Mads Hald
2015-01-01
Various subsets of immune regulatory cells are suggested to influence the outcome of therapeutic antigen-specific anti-tumor vaccinations. We performed an exploratory analysis of a possible correlation of pre-vaccination Th17 cells, MDSCs, and Tregs with both vaccination-induced T-cell responses as well as clinical outcome in metastatic melanoma patients vaccinated with survivin-derived peptides. Notably, we observed dysfunctional Th1 and cytotoxic T cells, i.e. down-regulation of the CD3ζ chain (p=0.001) and an impaired IFNγ-production (p=0.001) in patients compared to healthy donors, suggesting an altered activity of immune regulatory cells. Moreover, the frequencies of Th17 cells (p=0.03) and Tregs (p=0.02) were elevated as compared to healthy donors. IL-17-secreting CD4+ T cells displayed an impact on the immunological and clinical effects of vaccination: Patients characterized by high frequencies of Th17 cells at pre-vaccination were more likely to develop survivin-specific T-cell reactivity post-vaccination (p=0.03). Furthermore, the frequency of Th17 (p=0.09) and Th17/IFNγ+ (p=0.19) cells associated with patient survival after vaccination. In summary, our explorative, hypothesis-generating study demonstrated that immune regulatory cells, in particular Th17 cells, play a relevant role in the generation of vaccine-induced anti-tumor immunity in cancer patients, hence warranting further investigation to test their validity as predictive biomarkers. PMID:26176858
Impact of harmonics on the interpolated DFT frequency estimator
NASA Astrophysics Data System (ADS)
Belega, Daniel; Petri, Dario; Dallet, Dominique
2016-01-01
The paper investigates the effect of the interference due to spectral leakage on the frequency estimates returned by the Interpolated Discrete Fourier Transform (IpDFT) method based on the Maximum Sidelobe Decay (MSD) windows when harmonically distorted sine-waves are analyzed. The expressions for the frequency estimation error due to both the image of the fundamental tone and harmonics, and the frequency estimator variance due to the combined effect of both the above disturbances and wideband noise are derived. The achieved expressions allow us to identify which harmonics significantly contribute to frequency estimation uncertainty. A new IpDFT-based procedure capable to compensate all the significant effects of harmonics on the frequency estimation accuracy is then proposed. The derived theoretical results are verified through computer simulations. Moreover, the accuracy of the proposed procedure is compared with those of other state-of-the-art frequency estimation methods by means of both computer simulations and experimental results.
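As a concrete illustration of the two-point IpDFT idea, here is a minimal sketch using the Hann window (the lowest-order Maximum Sidelobe Decay window) and Grandke's interpolation formula. This is our own toy implementation, not the authors' procedure, and it deliberately omits the harmonic-compensation step the paper proposes, so it still suffers the image- and harmonic-interference errors the paper analyzes:

```python
import cmath
import math

def ipdft_freq(x):
    """Two-point IpDFT on a Hann-windowed record.

    The fractional bin offset is d = (2a - 1)/(a + 1), where a is the
    magnitude ratio of the larger neighbour bin to the peak bin
    (Grandke's estimator). Returns frequency in bins (cycles/record).
    """
    n = len(x)
    w = [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]
    xw = [xi * wi for xi, wi in zip(x, w)]
    # Plain O(n^2) DFT magnitudes over the positive-frequency half.
    spec = [abs(sum(xw[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))) for k in range(n // 2)]
    k = max(range(1, n // 2 - 1), key=lambda i: spec[i])   # peak bin
    if spec[k + 1] >= spec[k - 1]:
        a = spec[k + 1] / spec[k]
        return k + (2 * a - 1) / (a + 1)
    a = spec[k - 1] / spec[k]
    return k - (2 * a - 1) / (a + 1)
```

For a clean tone at 10.3 bins in a 64-sample record, the estimate lands within a few millibins of the true frequency; adding harmonics perturbs it in exactly the way the paper quantifies.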
Lu, Bing; Pan, Wei; Zou, Xihua; Yan, Xianglei; Yan, Lianshan; Luo, Bin
2015-05-15
A photonic approach for both wideband Doppler frequency shift (DFS) measurement and direction ambiguity resolution is proposed and experimentally demonstrated. In the proposed approach, a light wave from a laser diode is split into two paths. In one path, the DFS information is converted into an optical sideband close to the optical carrier by using two cascaded electro-optic modulators, while in the other path, the optical carrier is up-shifted by a specific value (e.g., from several MHz to hundreds of MHz) using an optical-frequency shift module. Then the optical signals from the two paths are combined and detected by a low-speed photodetector (PD), generating a low-frequency electronic signal. Through a subtraction between the specific optical frequency shift and the measured frequency of the low-frequency signal, the value of DFS is estimated from the derived absolute value, and the direction ambiguity is resolved from the derived sign (i.e., + or -). In the proof-of-concept experiments, DFSs from -90 to 90 kHz are successfully estimated for microwave signals at 10, 15, and 20 GHz, where the estimation errors are lower than ±60 Hz. The estimation errors can be further reduced via the use of a more stable optical frequency shift module. PMID:26393729
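The sign-recovery step in this approach reduces to a subtraction, provided the fixed optical frequency shift exceeds the largest expected Doppler magnitude. A minimal numeric sketch (the 500 kHz shift, the sign convention, and the test values are our own assumptions, not the experimental parameters):

```python
def estimate_dfs(measured_beat_hz, shift_hz):
    """Signed Doppler frequency shift from the low-frequency beat.

    The low-speed photodetector reports only |shift + dfs|; as long as
    shift_hz > |dfs|, subtracting the known shift returns both the
    magnitude and the sign (direction) of the Doppler shift.
    """
    return measured_beat_hz - shift_hz

shift = 500e3                     # assumed fixed optical frequency shift
beat = abs(shift + (-60e3))       # a -60 kHz DFS produces a 440 kHz beat
```

Here `estimate_dfs(beat, shift)` returns -60 kHz, resolving the direction ambiguity that a magnitude-only measurement would leave open.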
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once-familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and the effects of aging on his ability to read isolated low-frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future testing. PMID:23999403
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat; Peleg, Nadav; Mei, Yiwen; Anagnostou, Emmanouil N.
2016-04-01
Intensity-duration-frequency (IDF) curves are used in flood risk management and hydrological design studies to relate the characteristics of a rainfall event to the probability of its occurrence. The usual approach relies on long records of raingauge data providing accurate estimates of the IDF curves for a specific location, but whose representativeness decreases with distance. Radar rainfall estimates have recently been tested over the Eastern Mediterranean area, characterized by steep climatological gradients, showing that radar IDF curves generally lay within the raingauge confidence interval and that radar is able to identify the climatology of extremes. The recent availability of relatively long records (>15 years) of high-resolution satellite rainfall information allows us to explore the spatial distribution of extreme rainfall in increased detail over wide areas, thus providing new perspectives for the study of precipitation regimes and promising both practical and theoretical implications. This study aims to (i) identify IDF curves obtained from radar rainfall estimates and (ii) identify and assess IDF curves obtained from two high-resolution satellite retrieval algorithms (CMORPH and PERSIANN) over the Eastern Mediterranean region. To do so, we derive IDF curves by fitting a GEV distribution to the annual maxima series from 23 years (1990-2013) of carefully corrected data from a C-band radar located in Israel (covering Mediterranean to arid climates), as well as from 15 years (1998-2014) of gauge-adjusted high-resolution CMORPH and 10 years (2003-2013) of gauge-adjusted high-resolution PERSIANN data. We present the obtained IDF curves and we compare the curves obtained from the satellite algorithms to the ones obtained from the radar during overlapping periods; this analysis will draw conclusions on the reliability of the two satellite datasets for deriving rainfall frequency analysis over the region and provide IDF corrections. We then compare the curves obtained
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies w_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = w_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
NASA Astrophysics Data System (ADS)
Takemura, Shunsuke; Furumura, Takashi
2013-04-01
We studied the scattering properties of high-frequency seismic waves due to the distribution of small-scale velocity fluctuations in the crust and upper mantle beneath Japan, based on an analysis of three-component short-period seismograms and comparison with finite-difference method (FDM) simulations of seismic wave propagation using various stochastic random velocity fluctuation models. Using a large number of dense High-Sensitivity Seismograph network waveform data from 310 shallow crustal earthquakes, we examined the P-wave energy partition of the transverse component (PEPT), which is caused by scattering of the seismic wave in heterogeneous structure, as a function of frequency and hypocentral distance. At distances of less than D = 150 km, the PEPT increases with increasing frequency and is approximately constant in the range from D = 50 to 150 km. The PEPT was found to increase suddenly at distances over D = 150 km and was larger in the high-frequency band (f > 4 Hz). Therefore, strong scattering of the P wave may occur around the propagation path (upper crust, lower crust and around the Moho discontinuity) of the P-wave first-arrival phase at distances larger than D = 150 km. We also found a regional difference in the PEPT value, whereby the PEPT value is larger on the backarc side of northeastern Japan compared with southwestern Japan and the forearc side of northeastern Japan. These PEPT results, which were derived from shallow earthquakes, indicate that the shallow heterogeneous structure on the backarc side of northeastern Japan is stronger and more complex compared with other areas. These hypotheses, that is, the depth and regional change of small-scale velocity fluctuations, are examined by 3-D FDM simulation using various heterogeneous structure models. By comparing the observed features of the PEPT with simulation results, we found that strong seismic wave scattering occurs in the lower crust due to relatively higher velocity and stronger heterogeneities
Remediating Common Math Errors.
ERIC Educational Resources Information Center
Wagner, Rudolph F.
1981-01-01
Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)
Computing Instantaneous Frequency by normalizing Hilbert Transform
Huang, Norden E.
2005-05-31
This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. This method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake that has persisted to this date. In order to make the Hilbert Transform method work, the data has to obey certain restrictions.
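The phase-derivative pipeline this record refers to can be sketched in a few lines. The sketch below is our own simplification: it builds the analytic signal via a plain DFT and normalizes by the analytic amplitude as a crude stand-in for the patent's empirical-envelope normalization, which is where NAHT/NHT actually differ from the naive approach:

```python
import cmath
import math

def analytic_signal(x):
    """Discrete analytic signal via the DFT: zero the negative-frequency
    bins and double the positive ones (O(n^2) DFT for clarity; even n)."""
    n = len(x)
    X = [sum(x[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
         for k in range(n)]
    H = [1.0] + [2.0] * (n // 2 - 1) + [1.0] + [0.0] * (n // 2 - 1)
    return [sum(X[k] * H[k] * cmath.exp(2j * math.pi * k * i / n)
                for k in range(n)) / n for i in range(n)]

def instantaneous_frequency(x):
    """Normalize by the envelope, then differentiate the unwrapped phase.
    Returns per-sample IF in cycles per record length."""
    z = analytic_signal(x)
    carrier = [zi / abs(zi) for zi in z]              # unit-amplitude signal
    phase = [cmath.phase(c) for c in carrier]
    freq = []
    for i in range(1, len(phase)):
        d = phase[i] - phase[i - 1]
        d = (d + math.pi) % (2 * math.pi) - math.pi   # unwrap one step
        freq.append(d * len(x) / (2 * math.pi))
    return freq
```

For a pure cosine at 12 cycles per record, the recovered IF is flat at 12; signals violating the Bedrosian/Nuttall conditions are exactly where this naive pipeline fails and the normalization schemes of the patent are needed.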
Computing Instantaneous Frequency by normalizing Hilbert Transform
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2005-01-01
This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. This method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake that has persisted to this date. In order to make the Hilbert Transform method work, the data has to obey certain restrictions.
NASA Astrophysics Data System (ADS)
Brasington, J.; Hicks, M.; Wheaton, J. M.; Williams, R. D.; Vericat, D.
2013-12-01
Repeat surveys of channel morphology provide a means to quantify fluvial sediment storage and enable inferences about changes in long-term sediment supply, watershed delivery and bed level adjustment; information vital to support effective river and land management. Over shorter time-scales, direct differencing of fluvial terrain models may also offer a route to predict reach-averaged sediment transport rates and quantify the patterns of channel morphodynamics and the processes that force them. Recent and rapid advances in geomatics have facilitated these goals by enabling the acquisition of topographic data at spatial resolutions and precisions suitable for characterising river morphology at the scale of individual grains over multi-kilometre reaches. Despite improvements in topographic surveying, inverting the terms of the sediment budget to derive estimates of sediment transport and link these to morphodynamic processes is, nonetheless, often confounded by limited knowledge of either the sediment supply or efflux across a boundary of the control volume, or unobserved cut-and-fill taking place between surveys. This latter problem is particularly poorly constrained, as field logistics frequently preclude surveys at a temporal frequency sufficient to capture changes in sediment storage associated with each competent event, let alone changes during individual floods. In this paper, we attempt to quantify the principal sources of uncertainty in morphologically-derived bedload transport rates for the large, labile, gravel-bed braided Rees River which drains the Southern Alps of NZ. During the austral summer of 2009-10, a unique timeseries of 10 high quality DEMs was derived for a 3 x 0.7 km reach of the Rees, using a combination of mobile terrestrial laser scanning, aDcp soundings and aerial image analysis. Complementary measurements of the forcing flood discharges and estimates of event-based particle step lengths were also acquired during the field campaign
Coding for frequency hopped spread spectrum satellite communications
NASA Astrophysics Data System (ADS)
Wang, Q.; Li, G.; Blake, I. F.; Bhargava, V. K.; Chen, Q.
1992-04-01
The performance of fast frequency hopped spread spectrum systems with M-ary frequency shift keying and error correction coding under jamming conditions is analyzed. Ratio threshold diversity combining is used. The decoding scheme is error erasure decoding with metrics generated by the diversity combiner. The bit error probability of the system is computed and improvements offered by error correction coding are shown. The performance of several error correction codes is compared under different channel conditions. The notion of an arbitrarily varying channel (AVC) is discussed, including capacities of AVC's and a discrete memoryless channel, and two Gaussian AVC models are described. Coded performance of a slow frequency hopped differential phase shift keying (DPSK) system in the presence of both additive white Gaussian noise and tone jamming is studied. The error correlation due to DPSK demodulation and the effect of tone jamming are considered in evaluating the block and decoded error probabilities. The effect of interleaving on system performance is addressed. A nearly optimum code rate for a length 255 Reed-Solomon code is derived for systems employing interleaving. Finally, a parallel approach to the design of universal receivers for unknown and time-varying channels is applied to DPSK systems in the presence of noise and tone interference.
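The unjammed AWGN baseline against which such coded, jammed systems are usually compared is the exact symbol-error probability of noncoherent orthogonal M-ary FSK. A sketch of that textbook formula (the AWGN baseline only; the paper's ratio-threshold and jamming analysis is more involved):

```python
from math import comb, exp

def mfsk_symbol_error(M, es_n0):
    """Exact symbol-error probability of noncoherently detected orthogonal
    M-ary FSK in AWGN: sum_{k=1}^{M-1} (-1)^{k+1} C(M-1,k)/(k+1) exp(-k*Es/N0/(k+1)).
    es_n0 is the symbol SNR Es/N0 as a linear ratio."""
    return sum((-1)**(k + 1) * comb(M - 1, k) / (k + 1)
               * exp(-k * es_n0 / (k + 1)) for k in range(1, M))

# Binary case reduces to the familiar 0.5*exp(-Es/(2*N0))
p2 = mfsk_symbol_error(2, 4.0)
```

For M = 2 the sum collapses to 0.5·exp(−Es/2N0), a quick sanity check on the implementation.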
Error and efficiency of simulated tempering simulations
Rosta, Edina; Hummer, Gerhard
2010-01-01
We derive simple analytical expressions for the error and computational efficiency of simulated tempering (ST) simulations. The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. An extension to the multistate case is described. We show that the relative gain in efficiency of ST simulations over regular molecular dynamics (MD) or Monte Carlo (MC) simulations is given by the ratio of their reactive fluxes, i.e., the number of transitions between the two states summed over all ST temperatures divided by the number of transitions at the single temperature of the MD or MC simulation. This relation for the efficiency is derived for the limit in which changes in the ST temperature are fast compared to the two-state transitions. In this limit, ST is most efficient. Our expression for the maximum efficiency gain of ST simulations is essentially identical to the corresponding expression derived by us for replica exchange MD and MC simulations [E. Rosta and G. Hummer, J. Chem. Phys. 131, 165102 (2009)] on a different route. We find quantitative agreement between predicted and observed efficiency gains in a test against ST and replica exchange MC simulations of a two-dimensional Ising model. Based on the efficiency formula, we provide recommendations for the optimal choice of ST simulation parameters, in particular, the range and number of temperatures, and the frequency of attempted temperature changes. PMID:20095723
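The headline efficiency result reduces to simple arithmetic: the gain is the ratio of reactive fluxes. A back-of-envelope sketch (the transition counts below are illustrative numbers, not data from the paper):

```python
def st_efficiency_gain(st_transition_counts, md_transition_count):
    """Relative efficiency gain of simulated tempering over a single-temperature
    run, in the fast temperature-exchange limit: two-state transitions summed
    over all ST temperatures divided by the transition count observed at the
    single MD/MC temperature."""
    return sum(st_transition_counts) / md_transition_count

# e.g. 12, 30 and 75 interconversions at three ST temperatures vs 15 in plain MD
gain = st_efficiency_gain([12, 30, 75], 15)
```

Here the ST run harvests 117 transitions against 15 for plain MD, a 7.8-fold gain; by the paper's argument this is the maximum achievable, reached when temperature moves are fast relative to the two-state kinetics.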
Error analysis of quartz crystal resonator applications
Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.
1996-12-31
Quartz crystal resonators in chemical sensing applications are usually configured as the frequency determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating needs a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure to determine shear parameters.
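The "common approximation" relating frequency shift to sorbed mass is, in most QCM work, the Sauerbrey equation; a minimal sketch of that relation, assuming standard AT-cut quartz constants (the abstract does not name the approximation, so treat this as the usual candidate, not the paper's exact formula):

```python
from math import sqrt

RHO_Q = 2648.0   # quartz density, kg/m^3
MU_Q = 2.947e10  # AT-cut quartz shear modulus, Pa

def sauerbrey_mass_per_area(delta_f, f0):
    """Areal mass density (kg/m^2) sorbed on a quartz crystal resonator from
    the measured frequency shift delta_f (Hz), via the Sauerbrey approximation:
    delta_f = -2 f0^2 (m/A) / sqrt(rho_q * mu_q). Assumes a thin, rigid film,
    which is exactly the assumption that breaks down for soft polymer coatings."""
    return -delta_f * sqrt(RHO_Q * MU_Q) / (2.0 * f0**2)

# 5 MHz crystal, -100 Hz shift -> about 1.77e-5 kg/m^2 (1.77 ug/cm^2)
m = sauerbrey_mass_per_area(-100.0, 5.0e6)
```

For viscoelastic coatings the shift also carries a shear-modulus contribution, which is why the paper argues that a full impedance analysis, not this one-line conversion, is needed there.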
A Review of Errors in the Journal Abstract
ERIC Educational Resources Information Center
Lee, Eunpyo; Kim, Eun-Kyung
2013-01-01
(percentage) of abstracts that involved errors, the most erroneous part of the abstract, and the types and frequency of errors. The purpose was also expanded to compare the results with those of the previous…
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
NASA Astrophysics Data System (ADS)
Zhang, Xiaotong; Van de Moortele, Pierre-Francois; Liu, Jiaen; Schmitter, Sebastian; He, Bin
2014-12-01
The Electrical Properties Tomography (EPT) technique utilizes measurable radio frequency (RF) coil induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., complex numbers of B1 distribution towards electric field calculation, can be used to estimate, on a subject-specific basis, local Specific Absorption Rate (SAR). SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil, under a particular multi-channel B1-shim setting (B1-shim I). Using this unique set of B1 calibration, local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific set of parameters for a heating MRI protocol consisting of RF pulses played at 1% duty cycle. Local SAR results, which could not be directly measured with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI Thermometry based on the proton chemical shift.
NASA Astrophysics Data System (ADS)
Kwon, Do-Kyun; Goh, Yumin; Son, Dongsu; Kim, Baek-Hyun; Bae, Hyunjeong; Perini, Steve; Lanagan, Michael
2016-01-01
A sol-gel-derived powder synthesis method has been used to prepare BaTiO3-NaNbO3 (BT-NN) solid-solution ceramic samples with various compositions. Fine and homogeneous complex perovskite ceramics were obtained at lower processing temperatures than used in conventional solid-state processing. The ferroelectric and relaxor ferroelectric properties of the sol-gel-synthesized (1 - x)BaTiO3- xNaNbO3 [(1 - x)BT- xNN] ceramics in the wide composition range of 0 < x ≤ 0.7 were extensively studied. Structural and dielectric characterization results revealed that a low level of NN addition ( x = 0.04) to BT is sufficient to cause a continuous relaxor-to-ferroelectric transition, and the relaxor behavior was consistently observed at compositions with high NN content up to x = 0.7. A number of relaxor parameters including the Curie temperature, Burns temperature, freezing temperature, γ, diffuseness parameter ( δ), and activation energy were determined from the temperature and frequency dependence of the real part of the dielectric permittivity for various BT-NN compositions using the Curie-Weiss law and Vogel-Fulcher relationship. The systematic changes of these parameters with respect to composition indicate that a continuous crossover between BT-based relaxor and NN-based relaxor occurs at a composition near x = 0.4.
On the combination procedure of correlated errors
NASA Astrophysics Data System (ADS)
Erler, Jens
2015-09-01
When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematic or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors.
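A common textbook version of such a combination uses inverse-covariance weights and then projects the combined error back onto its statistical and systematic parts; the sketch below illustrates that recipe (it is not necessarily the paper's specific procedure, and the input numbers are invented):

```python
import numpy as np

def combine(values, stat, syst, rho=1.0):
    """Weighted average of measurements with uncorrelated statistical errors and
    systematic errors correlated with coefficient rho. Weights come from the
    inverse of the total covariance; the combined error is then split back into
    statistical and systematic components."""
    values, stat, syst = map(np.asarray, (values, stat, syst))
    c_stat = np.diag(stat**2)
    c_syst = rho * np.outer(syst, syst)
    np.fill_diagonal(c_syst, syst**2)          # keep full diagonal for rho < 1
    cinv = np.linalg.inv(c_stat + c_syst)
    w = cinv.sum(axis=1) / cinv.sum()          # combination weight factors
    mean = w @ values
    err_stat = np.sqrt(w @ c_stat @ w)
    err_syst = np.sqrt(w @ c_syst @ w)
    return mean, err_stat, err_syst

mean, es, ey = combine([10.0, 10.4], stat=[0.2, 0.3], syst=[0.1, 0.1], rho=1.0)
```

With fully correlated, equal systematics the systematic component of the average stays at 0.1 regardless of the weights, matching the intuition that a common systematic cannot be averaged away.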
Adjoint Error Estimation for Linear Advection
Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S
2011-03-30
An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
How does human error affect safety in anesthesia?
Gravenstein, J S
2000-01-01
Anesthesia morbidity and mortality, while acceptable, are not zero. Most mishaps have a multifactorial cause in which human error plays a significant part. Good design of anesthesia machines, ventilators, and monitors can prevent some, but not all, human error. Attention to the system in which the errors occur is important. Modern training with simulators is designed to reduce the frequency of human errors and to teach anesthesiologists how to deal with the consequences of such errors. PMID:10601526
AUTOMATIC FREQUENCY CONTROL SYSTEM
Hansen, C.F.; Salisbury, J.D.
1961-01-10
A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.
NASA Technical Reports Server (NTRS)
Lichtenstein, Jacob H.; Williams, James L.
1961-01-01
A low-speed investigation has been conducted in the Langley stability tunnel to study the effects of frequency and amplitude of sideslipping motion on the lateral stability derivatives of a 60 deg. delta wing, a 45 deg. sweptback wing, and an unswept wing. The investigation was made for values of the reduced-frequency parameter of 0.066 and 0.218 and for a range of amplitudes from +/- 2 to +/- 6 deg. The results of the investigation indicated that increasing the frequency of the oscillation generally produced an appreciable change in magnitude of the lateral oscillatory stability derivatives in the higher angle-of-attack range. This effect was greatest for the 60 deg. delta wing and smallest for the unswept wing and generally resulted in a more linear variation of these derivatives with angle of attack. For the relatively high frequency at which the amplitude was varied, there appeared to be little effect on the measured derivatives as a result of the change in amplitude of the oscillation.
Elliott, C.J.; McVey, B.; Quimby, D.C.
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
ERIC Educational Resources Information Center
Gressang, Jane E.
2010-01-01
Second language (L2) learners notoriously have trouble using articles in their target languages (e.g., "a", "an", "the" in English). However, researchers disagree about the patterns and causes of these errors. Past studies have found that L2 English learners: (1) Predominantly omit articles (White 2003, Robertson 2000), (2) Overuse "the" (Huebner…
Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...
Partial processing satellite relays for frequency-hop antijam communications
NASA Astrophysics Data System (ADS)
Sussman, S. M.; Kotiveeriah, P.
1982-08-01
Jamming effects on the uplink and downlink are combined to derive the earth station-to-earth station performance. In the linear dehop-rehop transponder (DRT), the retransmitted uplink noise and jamming for various ratios of relay bandwidth to data rate are taken into account, and a new end-to-end SNR relation is obtained. The symbol regenerative processor (SRP) analysis is based on the appropriate combining of uplink and downlink error probabilities to yield the end-to-end error probability. The jammer is assumed to employ either full-band noise or optimum partial band jamming, and several combinations of these two jamming strategies are evaluated. The relationship between uplink and downlink SNR is obtained for specified end-to-end SNR or error probability, and comparisons are drawn between the DRT and the SRP. The analysis treats M-ary orthogonal frequency shift keying with incoherent combining and detection of multiple frequency hops per symbol.
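For a regenerative relay the uplink/downlink combining has a simple closed form in the binary case: an end-to-end error occurs when exactly one of the two links errs. A sketch of that combining step (the binary special case only; the paper treats M-ary alphabets, where a downlink error on an already-wrong symbol can land on a third value):

```python
def end_to_end_error(p_up, p_down):
    """End-to-end symbol error probability through a regenerative
    (detect-and-retransmit) relay for a binary alphabet: the symbol arrives
    wrong iff it is flipped on exactly one of the two links."""
    return p_up * (1 - p_down) + p_down * (1 - p_up)

p = end_to_end_error(1e-3, 2e-3)
```

When both links are good the cross terms are negligible and the end-to-end probability is essentially the sum of the per-link probabilities, here about 3e-3.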
Refractive errors in children.
Tongue, A C
1987-12-01
Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. PMID:3317238
Johnstone, R A; Grafen, A
1992-06-22
The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361
Blomquist, Thomas; Crawford, Erin L.; Yeo, Jiyoun; Zhang, Xiaolu; Willey, James C.
2015-01-01
Background Clinical implementation of Next-Generation Sequencing (NGS) is challenged by poor control for stochastic sampling, library preparation biases and qualitative sequencing error. To address these challenges we developed and tested two hypotheses. Methods Hypothesis 1: Analytical variation in quantification is predicted by stochastic sampling effects at input of (a) amplifiable nucleic acid target molecules into the library preparation, (b) amplicons from library into sequencer, or (c) both. We derived equations using Monte Carlo simulation to predict assay coefficient of variation (CV) based on these three working models and tested them against NGS data from specimens with well characterized molecule inputs and sequence counts prepared using competitive multiplex-PCR amplicon-based NGS library preparation method comprising synthetic internal standards (IS). Hypothesis 2: Frequencies of technically-derived qualitative sequencing errors (i.e., base substitution, insertion and deletion) observed at each base position in each target native template (NT) are concordant with those observed in respective competitive synthetic IS present in the same reaction. We measured error frequencies at each base position within amplicons from each of 30 target NT, then tested whether they correspond to those within the 30 respective IS. Results For hypothesis 1, the Monte Carlo model derived from both sampling events best predicted CV and explained 74% of observed assay variance. For hypothesis 2, observed frequency and type of sequence variation at each base position within each IS was concordant with that observed in respective NTs (R2 = 0.93). Conclusion In targeted NGS, synthetic competitive IS control for stochastic sampling at input of both target into library preparation and of target library product into sequencer, and control for qualitative errors generated during library preparation and sequencing. These controls enable accurate clinical diagnostic reporting of
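Hypothesis 1's two-stage sampling model can be checked with a few lines of Monte Carlo: Poisson sampling of molecules into the library prep, then Poisson sampling of library product into the sequencer, giving CV² ≈ 1/n_molecules + 1/n_reads. A sketch under those assumptions (the function and the input counts are illustrative, not the paper's simulation):

```python
import numpy as np

def two_stage_cv(n_molecules, n_reads, trials=200_000, seed=1):
    """Monte Carlo sketch of two-stage stochastic sampling: target molecules
    enter the library prep as Poisson(n_molecules), then the sequencer draws
    reads in proportion to each replicate's library content. The observed
    read-count CV should approach sqrt(1/n_molecules + 1/n_reads)."""
    rng = np.random.default_rng(seed)
    mols = rng.poisson(n_molecules, trials)              # stage (a): library input
    reads = rng.poisson(n_reads * mols / n_molecules)    # stage (b): sequencer input
    return reads.std() / reads.mean()

cv = two_stage_cv(1000, 4000)
# analytical prediction: sqrt(1/1000 + 1/4000) ~ 0.0354
```

The simulated CV converges on the analytical value, illustrating why assay variance is dominated by whichever sampling stage admits fewer molecules.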
The Nature of Error in Adolescent Student Writing
ERIC Educational Resources Information Center
Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang
2014-01-01
This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…
Ainsworth, Nathan G; Grijalva, Prof. Santiago
2013-01-01
This paper discusses a proposed frequency restoration controller which operates as an outer loop to frequency droop for voltage-source inverters. By quasi-equilibrium analysis, we show that the proposed controller is able to provide arbitrarily small steady-state frequency error while maintaining power sharing between inverters without the need for communication or centralized control. We derive the rate of convergence, discuss design considerations (including a fundamental trade-off that must be made in design), present a design procedure to meet a maximum frequency error requirement, and show simulation results verifying our analysis and design method. The proposed controller will allow flexible plug-and-play inverter-based networks to meet a specified maximum frequency error requirement.
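The droop-plus-outer-loop structure can be illustrated with a quasi-equilibrium toy model: at each step the network settles to a common frequency given the droop setpoints, and a slow integral term trims each setpoint toward nominal. This is a minimal sketch of the general idea, not the paper's exact controller (all gains and the 60 Hz / 0.8 pu numbers are invented):

```python
def grid_freq(f_set, droop, p_load):
    """Quasi-equilibrium network frequency: all inverters at a common f,
    with droop laws f = f_set_i - m_i * P_i and powers summing to the load."""
    inv_m = [1.0 / m for m in droop]
    return (sum(fs * im for fs, im in zip(f_set, inv_m)) - p_load) / sum(inv_m)

def simulate(steps=2000, dt=0.01, f_nom=60.0, p_load=0.8,
             droop=(0.05, 0.05), k_restore=0.5):
    """Two droop-controlled inverters share a load; a slow outer integral loop
    moves each droop setpoint toward nominal frequency. With equal restoration
    gains, sharing is preserved while the steady-state frequency error decays."""
    f_set = [f_nom, f_nom]
    for _ in range(steps):
        f = grid_freq(f_set, droop, p_load)
        f_set = [fs + k_restore * (f_nom - f) * dt for fs in f_set]
    f = grid_freq(f_set, droop, p_load)
    powers = [(fs - f) / m for fs, m in zip(f_set, droop)]
    return f, powers

f, (p1, p2) = simulate()
```

Without restoration the droop alone leaves a 0.02 Hz offset here; with the outer loop the error decays geometrically while each inverter still carries half the 0.8 pu load.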
Analysis of Medication Error Reports
Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.
2004-11-15
In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.
Aircraft system modeling error and control error
NASA Technical Reports Server (NTRS)
Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)
2012-01-01
A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
Grammatical Errors Produced by English Majors: The Translation Task
ERIC Educational Resources Information Center
Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad
2011-01-01
This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…
Perceptual Bias in Speech Error Data Collection: Insights from Spanish Speech Errors
ERIC Educational Resources Information Center
Perez, Elvira; Santiago, Julio; Palma, Alfonso; O'Seaghdha, Padraig G.
2007-01-01
This paper studies the reliability and validity of naturalistic speech errors as a tool for language production research. Possible biases when collecting naturalistic speech errors are identified and specific predictions derived. These patterns are then contrasted with published reports from Germanic languages (English, German and Dutch) and one…
Keidser, Gitte; Carter, Lyndal; Chalupper, Josef; Dillon, Harvey
2007-10-01
When the frequency range over which vent-transmitted sound dominates amplification increases, the potential benefit from directional microphones and noise reduction decreases. Fitted with clinically appropriate vent sizes, 23 aided listeners with varying low-frequency hearing thresholds evaluated six schemes comprising three levels of gain at 250 Hz (0, 6, and 12 dB) combined with two features (directional microphone and noise reduction) enabled or disabled in the field. The low-frequency gain was 0 dB for vent-dominated sound, while the higher gains were achieved by amplifier-dominated sounds. A majority of listeners preferred 0-dB gain at 250 Hz and the features enabled. While the amount of low-frequency gain had no significant effect on speech recognition in noise or horizontal localization, speech recognition and front/back discrimination were significantly improved when the features were enabled, even when vent-transmitted sound dominated the low frequencies. The clinical implication is that there is no need to increase low-frequency gain to compensate for vent effects to achieve benefit from directionality and noise reduction over a wider frequency range. PMID:17922345
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
The Error in Total Error Reduction
Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.
2013-01-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
2010-01-01
Aims: Cardiovascular magnetic resonance (CMR) allows non-invasive phase contrast measurements of flow through planes transecting large vessels. However, some clinically valuable applications are highly sensitive to errors caused by small offsets of measured velocities if these are not adequately corrected, for example by the use of static tissue or static phantom correction of the offset error. We studied the severity of uncorrected velocity offset errors across sites and CMR systems. Methods and Results: In a multi-centre, multi-vendor study, breath-hold through-plane retrospectively ECG-gated phase contrast acquisitions, as are used clinically for aortic and pulmonary flow measurement, were applied to static gelatin phantoms in twelve 1.5 T CMR systems, using a velocity encoding range of 150 cm/s. No post-processing corrections of offsets were implemented. The greatest uncorrected velocity offset, taken as an average over a 'great vessel' region (30 mm diameter) located up to 70 mm in-plane distance from the magnet isocenter, ranged from 0.4 cm/s to 4.9 cm/s. It averaged 2.7 cm/s over all the planes and systems. By theoretical calculation, a velocity offset error of 0.6 cm/s (representing just 0.4% of a 150 cm/s velocity encoding range) is barely acceptable, potentially causing about 5% miscalculation of cardiac output and up to 10% error in shunt measurement. Conclusion: In the absence of hardware or software upgrades able to reduce phase offset errors, all the systems tested appeared to require post-acquisition correction to achieve consistently reliable breath-hold measurements of flow. The effectiveness of offset correction software will still need testing with respect to clinical flow acquisitions. PMID:20074359
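The "about 5% miscalculation of cardiac output" figure can be reproduced with back-of-envelope arithmetic. The 30 mm vessel diameter comes from the abstract; the 5 L/min resting cardiac output is an assumed typical value:

```python
import math

velocity_offset = 0.6       # cm/s: the "barely acceptable" offset
vessel_diameter = 3.0       # cm: the 30 mm "great vessel" region
cardiac_output = 5000.0     # ml/min: assumed typical resting value

area = math.pi * (vessel_diameter / 2.0) ** 2         # cm^2
flow_error = velocity_offset * area * 60.0            # ml/min (1 cm^3 = 1 ml)
percent_error = 100.0 * flow_error / cardiac_output   # roughly 5%
```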
Image defects from surface and alignment errors in grazing incidence telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.
1989-01-01
The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
Farré, R; Rotger, M; Navajas, D
1997-03-01
The forced oscillation technique (FOT) allows the measurement of respiratory resistance (Rrs) and reactance (Xrs) and their associated coherence (gamma2). To avoid unreliable data, it is usual to reject Rrs and Xrs measurements with a gamma2 < 0.95. This procedure makes it difficult to obtain acceptable data at the lowest frequencies of interest. The aim of this study was to derive expressions to compute the random error of Rrs and Xrs from gamma2 and the number (N) of data blocks involved in a FOT measurement. To this end, we developed theoretical equations for the variances and covariances of the pressure and flow auto- and cross-spectra used to compute Rrs and Xrs. Random errors of Rrs and Xrs were found to depend on the values of Rrs and Xrs, and to be proportional to ((1 - gamma2)/(2 x N x gamma2))^(1/2). Reliable Rrs and Xrs data can be obtained in measurements with low gamma2 by enlarging the data recording (i.e. increasing N). Therefore, the error equations derived may be useful for extending the frequency band of the forced oscillation technique to frequencies lower than usual, characterized by low coherence. PMID:9073006
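The error expression quoted above is directly computable. A minimal sketch (the proportionality constant, which depends on the Rrs and Xrs values themselves, is omitted; the gamma2 and N values are illustrative):

```python
import math

def relative_random_error(gamma2, n_blocks):
    """Factor to which the random errors of Rrs and Xrs are proportional:
    ((1 - gamma2) / (2 * N * gamma2))^(1/2)."""
    return math.sqrt((1.0 - gamma2) / (2.0 * n_blocks * gamma2))

# At the usual acceptance threshold gamma2 = 0.95 with N = 10 blocks:
e1 = relative_random_error(0.95, 10)
# A low-coherence measurement (gamma2 = 0.5) with the same N is noisier...
e2 = relative_random_error(0.50, 10)
# ...but recording more data blocks brings the error back down:
e3 = relative_random_error(0.50, 200)
```

This is the paper's point: low-coherence data need not be rejected outright if the recording is long enough.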
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
Entropic error-disturbance relations
NASA Astrophysics Data System (ADS)
Coles, Patrick; Furrer, Fabian
2014-03-01
We derive an entropic error-disturbance relation for a sequential measurement scenario as originally considered by Heisenberg, and we discuss how our relation could be tested using existing experimental setups. Our relation is valid for discrete observables, such as spin, as well as continuous observables, such as position and momentum. The novel aspect of our relation compared to earlier versions is its clear operational interpretation and the quantification of error and disturbance using entropic quantities. This directly relates the measurement uncertainty, a fundamental property of quantum mechanics, to information-theoretical limitations and offers potential applications in, for instance, quantum cryptography. PC is funded by National Research Foundation Singapore and Ministry of Education Tier 3 Grant ``Random numbers from quantum processes'' (MOE2012-T3-1-009). FF is funded by Japan Society for the Promotion of Science, KAKENHI grant No. 24-02793.
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit is employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values, with correlation coefficients of 60-90% and 67-90% for the IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table for the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method performs better and is more appropriate for estimating actual errors of ocean-color derived products than previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter, regardless of the derivation used. PMID:19745859
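The bookkeeping of relative error contributions can be illustrated with simple variance decomposition. The component magnitudes below are hypothetical, and the paper's actual method is Bayesian rather than this additive sketch, which assumes independent components:

```python
import math

# Hypothetical independent error components (standard deviations) of a
# derived IOP: model/inversion, sensor noise, atmospheric correction.
model_err, sensor_err, atmos_err = 0.10, 0.05, 0.20

total = math.sqrt(model_err ** 2 + sensor_err ** 2 + atmos_err ** 2)
shares = {name: 100.0 * e ** 2 / total ** 2     # percent of total variance
          for name, e in [("model", model_err),
                          ("sensor", sensor_err),
                          ("atmosphere", atmos_err)]}
```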
New approximating results for data with errors in both variables
NASA Astrophysics Data System (ADS)
Bogdanova, N.; Todorov, S.
2015-05-01
We introduce new data from the mineral water probe Lenovo, Bulgaria, measured with errors in both variables. We apply our Orthonormal Polynomial Expansion Method (OPEM), based on the Forsythe recurrence formula, to describe the data in the new error corridor. The development of OPEM gives the approximating curves and their derivatives in optimal orthonormal and usual expansions, including the errors in both variables with special criteria.
Statistics of the residual refraction errors in laser ranging data
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.
NASA Technical Reports Server (NTRS)
Tai, Chang-Kou
1991-01-01
Formulas analogous to the frequency response functions for commonly used filters in orbit error removal are analytically derived to devise observational strategies for the large-scale oceanic variability and to decipher the signal contents of previous results. These include the polynomial orbit error approximations, i.e., the linear, bias-only, and quadratic corrections, and the sinusoidal orbit error approximations (the purely sinusoidal correction, and the sinusoid-and-bias correction). It is shown that the frequency response function for a polynomial correction is a function of the ratio of wavelength to track length, and that to retain 90 percent or more of the signal at a certain wavelength, the ratio must be less than 0.65 (for the quadratic case), 0.90 (linear), and 1.54 (bias-only).
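The behavior of the polynomial corrections can be checked numerically: remove a best-fit polynomial from a sinusoid and measure the surviving RMS fraction as a function of the wavelength/track-length ratio. This is a brute-force sketch averaged over phase, not the paper's analytic response functions:

```python
import numpy as np

def signal_retained(ratio, degree, n=2000):
    """RMS fraction of a sinusoid surviving removal of a best-fit
    polynomial of the given degree, where ratio = wavelength / track
    length; averaged over the sinusoid's phase."""
    x = np.linspace(0.0, 1.0, n)
    kept = []
    for phase in np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False):
        s = np.sin(2.0 * np.pi * x / ratio + phase)
        fit = np.polyval(np.polyfit(x, s, degree), x)
        kept.append(np.sqrt(np.mean((s - fit) ** 2) / np.mean(s ** 2)))
    return float(np.mean(kept))

# Wavelength short relative to the track: even a bias-only (degree-0)
# correction leaves the signal nearly intact.
r_small = signal_retained(0.3, degree=0)
# Wavelength much longer than the track: a quadratic correction absorbs
# most of the signal.
r_large = signal_retained(3.0, degree=2)
```

The trend matches the quoted thresholds: retention is high when the ratio is well below the per-degree cutoff and collapses above it.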
Nonlinear amplification of side-modes in frequency combs.
Probst, R A; Steinmetz, T; Wilken, T; Hundertmark, H; Stark, S P; Wong, G K L; Russell, P St J; Hänsch, T W; Holzwarth, R; Udem, Th
2013-05-20
We investigate how suppressed modes in frequency combs are modified upon frequency doubling and self-phase modulation. We find, both experimentally and by using a simplified model, that these side-modes are amplified relative to the principal comb modes. Whereas frequency doubling increases their relative strength by 6 dB, the growth due to self-phase modulation can be much stronger and generally increases with nonlinear propagation length. Upper limits for this effect are derived in this work. This behavior has implications for high-precision calibration of spectrographs with frequency combs used for example in astronomy. For this application, Fabry-Pérot filter cavities are used to increase the mode spacing to exceed the resolution of the spectrograph. Frequency conversion and/or spectral broadening after non-perfect filtering reamplify the suppressed modes, which can lead to calibration errors. PMID:23736390
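The 6 dB figure for frequency doubling follows from squaring the field: the main-by-side cross term carries twice the side-mode's relative amplitude. A numerical sketch with assumed tone frequencies and an idealized (instantaneous, lossless) doubling:

```python
import numpy as np

n = 4096
t = np.arange(n) / n
f_main, f_side = 50, 57          # integer cycles per window -> exact FFT bins
eps = 1e-3                       # side-mode amplitude, 60 dB below the main mode

field = np.cos(2 * np.pi * f_main * t) + eps * np.cos(2 * np.pi * f_side * t)
doubled = field ** 2             # idealized second-harmonic generation

def amp(s, f):
    """Cosine amplitude at integer frequency f (cycles per window)."""
    return 2.0 * np.abs(np.fft.rfft(s))[f] / n

ratio_before = amp(field, f_side) / amp(field, f_main)
ratio_after = amp(doubled, f_main + f_side) / amp(doubled, 2 * f_main)
gain_db = 20.0 * np.log10(ratio_after / ratio_before)   # ~6 dB
```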
Error and adjustment of reflecting prisms
NASA Astrophysics Data System (ADS)
Mao, Wenwei
1997-12-01
A manufacturing error in the orientation of the working planes of a reflecting prism, such as an angle error or an edge error, will cause the optical axis to deviate and the image to lean; an adjustment error (position error) of a reflecting prism has the same effect. A universal method for calculating the optical axis deviation and the image lean caused by the manufacturing error of a reflecting prism is presented. It is suited to all types of reflecting prisms. A means to offset the position error against the manufacturing error of a reflecting prism and the changes of image orientation is discussed. To make the calculation feasible, a surface named the 'separating surface' is introduced just in front of the real exit face of a real prism. It is the image of the entrance face formed by all reflecting surfaces of the real prism. It can be used to separate the image orientation change caused by the error of the prism's reflecting surfaces from the image orientation change caused by the error of the prism's refracting surfaces. Based on ray tracing, a set of simple and explicit formulas for the optical axis deviation and the image lean of a general optical wedge is derived.
ERIC Educational Resources Information Center
Saavedra, Pedro; Kuchak, JoAnn
An error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications was developed, based on interviews conducted with a quality control sample of 1,791 students during 1978-1979. The model was designed to identify corrective methods appropriate for different types of…
ERIC Educational Resources Information Center
Burrows, J. K.
Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from Global Positioning System satellites, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
An exact frequency equation in closed form for a Timoshenko beam clamped at both ends
NASA Astrophysics Data System (ADS)
Kang, Jae-Hoon
2014-07-01
The author has discovered several errors, which are not typographical, in the frequency equations for a Timoshenko beam clamped at both ends given by Huang, who first presented the frequency equations and normal mode equations for all six common types of simple, finite beams in closed form. The exact frequency equations in closed form for Timoshenko beams clamped at both ends are derived based on his analysis. Then, to justify the amended solutions of Huang, two versions of the closed-form exact method and the Ritz method are applied. The frequency equations of the previous researcher give frequencies for only the flexural modes, while the closed-form exact method and the Ritz method also yield the thickness-shear modes in addition to the bending modes. The purpose of the present study is to reveal the errors, correct them, and give some numerical results.
Calculating the CEP (Circular Error Probable)
NASA Technical Reports Server (NTRS)
1987-01-01
This report compares the probability contained in the Circular Error Probable (CEP) associated with an Elliptical Error Probable (EEP) to that of the EEP at a given confidence level. The levels examined are 50 percent and 95 percent. The CEP is found to be either more conservative or less conservative than the associated EEP, depending on the eccentricity of the ellipse. The formulas used are derived in the appendix.
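The containment property the report analyzes can be checked by Monte Carlo for both a circular and an eccentric error distribution. The standard deviations below are illustrative; for the circular case the 50% CEP radius is sqrt(2 ln 2) * sigma:

```python
import numpy as np

rng = np.random.default_rng(0)

def contained_probability(radius, sx, sy, n=400_000):
    """Monte Carlo estimate of the probability that a zero-mean,
    uncorrelated bivariate normal (std devs sx, sy) lands inside a
    circle of the given radius."""
    x = rng.normal(0.0, sx, n)
    y = rng.normal(0.0, sy, n)
    return float(np.mean(x ** 2 + y ** 2 <= radius ** 2))

sigma = 1.0
cep50 = np.sqrt(2.0 * np.log(2.0)) * sigma   # 50% CEP radius, circular case

p = contained_probability(cep50, sigma, sigma)    # ~0.50 by construction
p_ecc = contained_probability(cep50, 1.4, 0.2)    # eccentric: deviates from 0.50
```

The eccentric case shows the report's point: the same circle no longer contains exactly 50% of the probability once the ellipse is elongated.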
ERIC Educational Resources Information Center
Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.
2010-01-01
Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…
White, Andrew A; Gallagher, Thomas H
2013-01-01
Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370
Relationships between GPS-signal propagation errors and EISCAT observations
NASA Astrophysics Data System (ADS)
Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.
1996-12-01
When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges -20° ≤
Improving patient safety by examining pathology errors.
Raab, Stephen S
2004-12-01
A considerable void exists in the information available regarding anatomic pathology diagnostic errors and their impact on clinical outcomes. To fill this void and improve patient safety, four institutional pathology departments (University of Pittsburgh, Western Pennsylvania Hospital, University of Iowa Hospitals and Clinics, and Henry Ford Hospital System) have proposed the development of a voluntary, Web-based, multi-institutional database for the collection and analysis of diagnostic errors. These institutions intend to use these data proactively to implement internal changes in pathology practice and to measure the effect of such changes on errors and clinical outcomes. They believe that the successful implementation of this project will result in the study of other types of diagnostic pathology error and the expansion to national participation. The project will involve the collection of multi-institutional anatomic pathology diagnostic errors in a large database that will facilitate a more detailed analysis of these errors, including their effect on patient outcomes. Participating institutions will perform root cause analysis for diagnostic errors and plan and execute appropriate process changes aimed at error reduction. The success of these interventions will be tracked through analysis of postintervention error data collected in the database. Based on their preliminary studies, these institutions proposed the following specific aims: Specific aim #1: To use a Web-based database to collect diagnostic errors detected by cytologic histologic correlation and by second-pathologist review of conference cases. Specific aim #2: To analyze the collected error data quantitatively and generate quality performance reports that are useful for institutional quality improvement programs. Specific aim #3: To plan and implement interventions to reduce errors and improve clinical outcomes, based on information derived from root cause analysis of diagnostic errors. Specific
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
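A brute-force sketch of the Fisher classifier with a leave-one-out error estimate (the paper derives computationally efficient expressions that avoid this explicit loop; the synthetic two-class data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-class data: well-separated Gaussians in 2-D.
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(60, 2)),
               rng.normal([4.0, 4.0], 1.0, size=(60, 2))])
y = np.array([0] * 60 + [1] * 60)

def fisher_predict(Xtr, ytr, xte):
    """Project onto Fisher's direction w = Sw^-1 (mu1 - mu0) and classify
    against the midpoint of the projected class means."""
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    Sw = np.cov(Xtr[ytr == 0].T) + np.cov(Xtr[ytr == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = 0.5 * (w @ m0 + w @ m1)
    return int(w @ xte > threshold)

# Brute-force leave-one-out estimate of the probability of error.
mistakes = sum(fisher_predict(np.delete(X, i, axis=0), np.delete(y, i), X[i]) != y[i]
               for i in range(len(X)))
loo_error = mistakes / len(X)
```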
Error and efficiency of replica exchange molecular dynamics simulations
Rosta, Edina; Hummer, Gerhard
2009-01-01
We derive simple analytical expressions for the error and computational efficiency of replica exchange molecular dynamics (REMD) simulations (and by analogy replica exchange Monte Carlo simulations). The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. As a specific example, we consider the folding and unfolding of a protein. The efficiency is defined as the rate with which the error in an estimated equilibrium property, as measured by the variance of the estimator over repeated simulations, decreases with simulation time. For two-state systems, this rate is in general independent of the particular property. Our main result is that, with comparable computational resources used, the relative efficiency of REMD and molecular dynamics (MD) simulations is given by the ratio of the number of transitions between the two states averaged over all replicas at the different temperatures, and the number of transitions at the single temperature of the MD run. This formula applies if replica exchange is frequent, as compared to the transition times. High efficiency of REMD is thus achieved by including replica temperatures in which the frequency of transitions is higher than that at the temperature of interest. In tests of the expressions for the error in the estimator, computational efficiency, and the rate of equilibration we find quantitative agreement with the results both from kinetic models of REMD and from actual all-atom simulations of the folding of a peptide in water. PMID:19894977
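The paper's main result reduces to a simple ratio once transition counts are available; the counts below are hypothetical:

```python
def remd_relative_efficiency(transitions_per_replica, transitions_md):
    """Relative efficiency of REMD vs. MD (comparable resources): the
    number of two-state transitions averaged over all replicas, divided
    by the number of transitions in the single-temperature MD run."""
    avg = sum(transitions_per_replica) / len(transitions_per_replica)
    return avg / transitions_md

# Hypothetical counts: higher-temperature replicas interconvert more often
# than the MD run at the temperature of interest.
rel_eff = remd_relative_efficiency([2, 8, 20, 46], transitions_md=2)
```

This makes the design guidance explicit: replicas at temperatures with frequent transitions raise the average and hence the payoff of REMD.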
Temprana, E; Myslivets, E; Liu, L; Ataie, V; Wiberg, A; Kuo, B P P; Alic, N; Radic, S
2015-08-10
We demonstrate a two-fold reach extension of a 16 GBaud 16-quadrature amplitude modulation (QAM) wavelength division multiplexed (WDM) system based on an erbium doped fiber amplifier (EDFA)-only amplified, standard single-mode fiber-based link. The result is enabled by transmitter-side digital backpropagation and frequency-referenced carriers drawn from a parametric comb. PMID:26367930
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755
Insulin use: preventable errors.
2014-01-01
Insulin is vital for patients with type 1 diabetes and useful for certain patients with type 2 diabetes. The serious consequences of insulin-related medication errors are overdose, resulting in severe hypoglycaemia, causing seizures, coma and even death; or underdose, resulting in hyperglycaemia and sometimes ketoacidosis. Errors associated with the preparation and administration of insulin are often reported, both outside and inside the hospital setting. These errors are preventable. By analysing reports from organisations devoted to medication error prevention and from poison control centres, as well as a few studies and detailed case reports of medication errors, various types of error associated with insulin use have been identified, especially in the hospital setting. Generally, patients know more about the practicalities of their insulin treatment than healthcare professionals with intermittent involvement. Medication errors involving insulin can occur at each step of the medication-use process: prescribing, data entry, preparation, dispensing and administration. When prescribing insulin, wrong-dose errors have been caused by the use of abbreviations, especially "U" instead of the word "units" (often resulting in a 10-fold overdose because the "U" is read as a zero), or by failing to write the drug's name correctly or in full. In electronic prescribing, the sheer number of insulin products is a source of confusion and, ultimately, wrong-dose errors, and often overdose. Prescribing, dispensing or administration software is rarely compatible with insulin prescriptions in which the dose is adjusted on the basis of the patient's subsequent capillary blood glucose readings, and can therefore generate errors. When preparing and dispensing insulin, a tuberculin syringe is sometimes used instead of an insulin syringe, leading to overdose. Other errors arise from confusion created by similar packaging, between different insulin products or between insulin and other
NASA Technical Reports Server (NTRS)
1987-01-01
In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
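The steady-state behavior described above is easy to reproduce numerically. The sketch below is a toy setup, not the paper's advection or baroclinic examples: a linear Kalman filter forecast/analysis cycle is run to steady state with two weakly damped modes deliberately left unobserved, and most of the steady-state analysis error variance then concentrates in those few modes.

```python
import numpy as np

n = 8
decay = np.array([0.98, 0.95] + [0.6] * 6)  # modal decay factors; first two weakly damped
A = np.diag(decay)                          # linear, time-independent dynamics
H = np.eye(n)[2:]                           # observe every mode EXCEPT the two weakly damped ones
Q = 0.01 * np.eye(n)                        # model-error covariance
R = 0.05 * np.eye(H.shape[0])               # observation-error covariance

Pa = np.eye(n)                              # initial analysis error covariance
for _ in range(1000):                       # run the forecast/analysis cycle to steady state
    Pf = A @ Pa @ A.T + Q                                # forecast step
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)       # Kalman gain
    Pa = (np.eye(n) - K @ H) @ Pf                        # analysis step

evals = np.sort(np.linalg.eigvalsh(Pa))[::-1]
print("leading steady-state analysis error variances:", np.round(evals[:3], 4))
print("variance fraction in 2 leading modes:", evals[:2].sum() / evals.sum())
```

Failing to observe the weakly damped (slowly decaying) modes inflates their steady-state variances to q/(1 - a^2), so the analysis error covariance admits a low-dimensional representation spanned by those modes.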
Reduction of Maintenance Error Through Focused Interventions
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)
1997-01-01
It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
On the decoder error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Swanson, L.
1986-01-01
Upper bounds on the decoder error probability for Reed-Solomon codes are derived. By definition, decoder error occurs when the decoder finds a codeword other than the transmitted codeword; this is in contrast to decoder failure, which occurs when the decoder fails to find any codeword at all. The results imply, for example, that for a t error correcting Reed-Solomon code of length q - 1 over GF(q), if more than t errors occur, the probability of decoder error is less than 1/t!. In particular, for the Voyager Reed-Solomon code, the probability of decoder error given a word error is smaller than 3 x 10^-14. Thus, in a typical operating region with probability 1/100,000 of word error, the probability of undetected word error is about 3 x 10^-19.
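For orientation, the 1/t! bound above can be evaluated directly. The Voyager Reed-Solomon code is the (255, 223) code over GF(256), which corrects t = 16 symbol errors, so the bound is on the order of 10^-14, consistent with the figure quoted in the abstract:

```python
from math import factorial

def decoder_error_bound(t):
    """Upper bound 1/t! on P(decoder error | more than t symbol errors)."""
    return 1.0 / factorial(t)

# Voyager's Reed-Solomon code is the (255, 223) code over GF(256),
# which corrects t = 16 symbol errors.
print(f"1/16! = {decoder_error_bound(16):.3e}")  # about 4.8e-14
```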
Error Detection Processes during Observational Learning
ERIC Educational Resources Information Center
Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.
2006-01-01
The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…
Verb-Form Errors in EAP Writing
ERIC Educational Resources Information Center
Wee, Roselind; Sim, Jacqueline; Jusoff, Kamaruzaman
2010-01-01
This study was conducted to identify and describe the written verb-form errors found in the EAP writing of 39 second year learners pursuing a three-year Diploma Programme from a public university in Malaysia. Data for this study, which were collected from a written 350-word discursive essay, were analyzed to determine the types and frequency of…
NASA Astrophysics Data System (ADS)
Gao, J.
2014-12-01
Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. The existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability), offering a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error, bias, variance, and noise all increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
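The squared-error decomposition referenced above can be verified by Monte Carlo. A minimal sketch, using a deliberately underfit linear model of a sinusoid rather than the paper's remote sensing model: the expected SQ error at a point should match bias^2 + variance + noise.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.sin                           # the "true" process
sigma = 0.3                          # observation noise; sigma**2 is the noise term
x_train = np.linspace(0.0, 3.0, 20)
x0, n_trials = 2.0, 4000             # evaluation point and Monte Carlo replicates

preds, sq_errors = [], []
for _ in range(n_trials):
    y = f(x_train) + sigma * rng.standard_normal(x_train.size)
    coef = np.polyfit(x_train, y, 1)          # deliberately underfit linear model -> bias
    yhat = np.polyval(coef, x0)
    preds.append(yhat)
    # SQ error against a fresh noisy observation at x0
    sq_errors.append((f(x0) + sigma * rng.standard_normal() - yhat) ** 2)

preds = np.array(preds)
bias2 = (preds.mean() - f(x0)) ** 2           # squared systematic error
variance = preds.var()                        # model sensitivity to the training sample
noise = sigma ** 2                            # observation instability
print(f"bias^2 + variance + noise = {bias2 + variance + noise:.4f}")
print(f"Monte Carlo expected SQ error = {np.mean(sq_errors):.4f}")
```

Under SQ error all three terms add, as the abstract notes; the ABS-error decomposition derived in the paper contains subtractive terms that this additive identity cannot capture.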
... the lens can cause refractive errors. What is refraction? Refraction is the bending of light as it passes ... rays entering the eye, causing a more precise refraction or focus. In many cases, contact lenses provide ...
Anumba, Dilly O C
2013-08-01
Prenatal screening and diagnosis are integral to antenatal care worldwide. Prospective parents are offered screening for common fetal chromosomal and structural congenital malformations. In most developed countries, prenatal screening is routinely offered in a package that includes ultrasound scan of the fetus and the assay in maternal blood of biochemical markers of aneuploidy. Mistakes can arise at any point of the care pathway for fetal screening and diagnosis, and may involve individual or corporate systemic or latent errors. Special clinical circumstances, such as maternal size, fetal position, and multiple pregnancy, contribute to the complexities of prenatal diagnosis and to the chance of error. Clinical interventions may lead to adverse outcomes not caused by operator error. In this review I discuss the scope of the errors in prenatal diagnosis, and highlight strategies for their prevention and diagnosis, as well as identify areas for further research and study to enhance patient safety. PMID:23725900
Hollnagel, E; Kaarstad, M; Lee, H C
1999-11-01
The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035
ERIC Educational Resources Information Center
Kaper, Willem
1976-01-01
Contradicts a previous assertion by C. Tanz that children commit substitution errors usually using objective pronoun forms for nominative ones. Examples from Dutch and German provide evidence that substitutions are made in both directions. (CHK)
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
NASA Technical Reports Server (NTRS)
Meinel, Aden B.; Meinel, Marjorie P.; Stacy, John E.
1989-01-01
Proposed reflecting telescope includes large, low-precision primary mirror stage and small, precise correcting mirror. Correcting mirror machined under computer control to compensate for error in primary mirror. Correcting mirror machined by diamond cutting tool. Computer analyzes interferometric measurements of primary mirror to determine shape of surface of correcting mirror needed to compensate for errors in wave front reflected from primary mirror and commands position and movement of cutting tool accordingly.
Hosein, Mervyn; Mohiuddin, Sidra; Fatima, Nazish
2015-01-01
Background: Oral submucous fibrosis (OSMF) is a chronic, premalignant condition of the oral mucosa and one of the commonest potentially malignant disorders amongst the Asian population. The objective of this study was to investigate the association of etiologic factors with: age, frequency, duration of consumption of areca nut and its derivatives, and the severity of clinical manifestations. Methods: A cross-sectional, multicentric study was conducted over 8 years on clinically diagnosed OSMF cases (n = 765) from both public and private tertiary care centers. Sample size was determined by the World Health Organization sample size calculator. Consumption of areca nut in different forms, frequency of daily usage, years of chewing, degree of mouth opening and duration of the condition were recorded. Level of significance was kept at P ≤ 0.05. Results: A total of 765 patients of OSMF were examined, of whom 396 (51.8%) were male and 369 (48.2%) female with a mean age of 29.17 years. Mild OSMF was seen in 61 cases (8.0%), moderate OSMF in 353 (46.1%) and severe OSMF in 417 (54.5%) subjects. Areca nut and other derivatives were most frequently consumed and showed significant risk in the severity of OSMF (P ≤ 0.0001). Age of the sample and duration of chewing years were also significant (P = 0.012). Conclusions: The relative risk of OSMF increased with duration and frequency of areca nut consumption especially from an early age of onset. PMID:26473161
Gear Transmission Error Measurement System Made Operational
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2002-01-01
A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 micrometer (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England designed the new system specifically for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.
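The conversion from a transmission error "map" to spectra at the meshing frequencies is a straightforward order analysis. A synthetic sketch, with a hypothetical 28-tooth gear and made-up amplitudes:

```python
import numpy as np

# Synthetic transmission error trace: once-per-revolution runout plus
# components at a (hypothetical) 28-tooth meshing order and its harmonic.
revs, samples_per_rev, teeth = 50, 1024, 28
theta = np.arange(revs * samples_per_rev) / samples_per_rev   # shaft revolutions
te = (2.0 * np.sin(2 * np.pi * theta)                 # runout, 1 cycle/rev
      + 0.5 * np.sin(2 * np.pi * teeth * theta)       # mesh fundamental
      + 0.2 * np.sin(2 * np.pi * 2 * teeth * theta))  # second mesh harmonic

spectrum = 2.0 * np.abs(np.fft.rfft(te)) / te.size    # single-sided amplitude spectrum
for order in (1, teeth, 2 * teeth):                   # order k sits in bin k * revs
    print(f"order {order:3d}: amplitude {spectrum[order * revs]:.3f}")
```

Because the record spans a whole number of revolutions, each shaft order falls exactly on an FFT bin and the meshing-frequency components are recovered without leakage.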
[Occurrence and prevention of errors in intensive care units].
Valentin, A
2012-05-01
Recognition and analysis of error constitutes an essential tool for quality improvement in intensive care units (ICUs). The potential for the occurrence of error is considerably high in ICUs. Although errors will never be completely preventable, it is necessary to reduce the frequency and consequences of error. A system approach needs to consider human limitations and to design working conditions, workplace, and processes in ICUs in a way that promotes the reduction of error. The development of a preventive safety culture must be seen as an essential task for ICUs. PMID:22476763
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Human error in aviation operations
NASA Technical Reports Server (NTRS)
Nagel, David C.
1988-01-01
The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.
Stiffness-matrix condition number and shape sensitivity errors
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1990-01-01
This paper derives an error magnification index for assessing the sensitivity of the displacement field to errors in the load vector. It is shown that the error magnification index is less conservative than the stiffness-matrix condition number and that, for some cases, no error magnification occurs even when the condition number is very high. The proposed index was used to calculate the derivatives of beam response to changes in the beam structural parameters, using a semianalytical method. It is shown that the proposed index discriminates well between the calculation of the derivative with respect to length, which is very sensitive to errors, and the calculation of the derivative with respect to cross-sectional height, which is not sensitive.
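The point that the condition number can be far more conservative than the actual magnification is easy to demonstrate numerically. In the contrived 2x2 sketch below (not the paper's index), the load is aligned with the flexible mode, and the actual magnification of a load-vector error is 1 even though the condition number is 10^6:

```python
import numpy as np

K = np.diag([1.0, 1e-6])           # stiffness-like matrix, condition number 1e6
f_load = np.array([0.0, 1.0])      # load aligned with the flexible mode
df = np.array([0.0, 1e-8])         # small error in the load vector, same direction

u = np.linalg.solve(K, f_load)     # displacement field
du = np.linalg.solve(K, df)        # displacement error due to the load error

kappa = np.linalg.cond(K)
magnification = ((np.linalg.norm(du) / np.linalg.norm(u))
                 / (np.linalg.norm(df) / np.linalg.norm(f_load)))
print(f"condition number:     {kappa:.1e}")
print(f"actual magnification: {magnification:.2f}")   # 1.00, despite kappa = 1e6
```

The classical bound ||du||/||u|| <= kappa * ||df||/||f|| holds for the worst-case load direction; for this particular load no magnification occurs at all, which is the behavior the proposed index is designed to detect.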
Errata: Papers in Error Analysis.
ERIC Educational Resources Information Center
Svartvik, Jan, Ed.
Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…
NASA Astrophysics Data System (ADS)
Mongeon, R. J.; Henschke, R. W.
1984-08-01
The document describes a frequency control system for a laser for compensating for thermally-induced laser resonator length changes. The frequency control loop comprises a frequency reference for producing an error signal and electrical means to move a length-controlling transducer in response thereto. The transducer has one of the laser mirrors attached thereto. The effective travel of the transducer is multiplied severalfold by circuitry for sensing when the transducer is running out of extension and in response thereto rapidly moving the transducer and its attached mirror toward its midrange position.
Quantification of model error via an interval model with nonparametric error bound
NASA Technical Reports Server (NTRS)
Lew, Jiann-Shiun; Keel, Lee H.; Juang, Jer-Nan
1993-01-01
The quantification of model uncertainty is becoming increasingly important as robust control becomes an important tool for control system design and analysis. This paper presents an algorithm that effectively characterizes the model uncertainty in terms of parametric and nonparametric uncertainties. The algorithm utilizes the frequency domain model error, which is estimated from the spectra of the output error and the input data. The parametric uncertainty is represented as an interval transfer function, while the nonparametric uncertainty is bounded by a designed error bound transfer function. Both discrete and continuous systems are discussed in this paper. The algorithm is applied to the Mini-Mast example, and a detailed analysis is given.
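A rough sketch of the estimation step described above: the frequency-domain model error can be estimated from the cross-spectrum of the input and the output error divided by the input auto-spectrum (an H1-style estimate). The plant, nominal model, and spectral settings below are invented for illustration, not taken from the Mini-Mast example.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs, N = 100.0, 200_000
u = rng.standard_normal(N)                       # broadband input excitation

# "True" plant vs. nominal model (first-order discrete filters); their gap
# plays the role of the model error to be characterized in frequency.
b_true, a_true = [0.10], [1.0, -0.90]
b_nom, a_nom = [0.12], [1.0, -0.88]
e = signal.lfilter(b_true, a_true, u) - signal.lfilter(b_nom, a_nom, u)  # output error

# H1-style estimate of the model error: dG(w) ~ S_ue(w) / S_uu(w)
f, Sue = signal.csd(u, e, fs=fs, nperseg=4096, detrend=False)
_, Suu = signal.welch(u, fs=fs, nperseg=4096, detrend=False)
dG_est = Sue / Suu

# Exact frequency-response difference, for comparison
_, H_true = signal.freqz(b_true, a_true, worN=f, fs=fs)
_, H_nom = signal.freqz(b_nom, a_nom, worN=f, fs=fs)
print("max |estimated - exact| model error:", np.abs(dG_est - (H_true - H_nom)).max())
```

The estimated dG(w) is the quantity one would then split into an interval (parametric) part and a bounding (nonparametric) transfer function.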
SMOS SSS uncertainties associated with errors on auxiliary parameters
NASA Astrophysics Data System (ADS)
Yin, Xiaobin; Boutin, Jacqueline; Dinnat, Emmanuel; Martin, Nicolas; Guimbard, Sebastien
2014-05-01
The European Soil Moisture and Ocean Salinity (SMOS) mission, aimed at observing sea surface salinity (SSS) from space, has been launched in November 2009. The L-band frequency (1413 MHz) has been chosen as a tradeoff between a sufficient sensitivity of radiometric measurements to changes in salinity, a high sensitivity to soil moisture and spatial resolution constraints. It is also a band protected against human-made emissions. But, even at this frequency, the sensitivity of brightness temperature (TB) to SSS remains low requiring accurate correction for other sources of error. Two significant sources of error for retrieved SSS are the uncertainties on the correction for surface roughness and sea surface temperature (SST). One main geophysical source of error in the retrieval of SSS from L-band TB comes from the need for correcting the effect of the surface roughness and foam. In the SMOS processing, the wind speed (WS) provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) is used to initialize the retrieval process of WS and Sea Surface Salinity (SSS). This process compensates for the lack of onboard instrument providing a measure of ocean surface WS independent of the L-band radiometer measurements. Using multi-angular polarimetric SMOS TBs, it is possible to adjust the WS from the initial value in the center of the swath (within ±300km) by taking advantage of the different sensitivities of L-band H-pol and V-pol TBs to WS and SSS at various incidence angles. As a consequence, the inconsistencies between the MIRAS sensed roughness and the roughness simulated with the ECMWF WS are reduced by the retrieval scheme but they still lead to residual biases in the SMOS SSS. We have developed an alternative two-step method for retrieving WS from SMOS TB, with larger error on prior ECMWF wind speed in a first step. We show that although it improves SSS in some areas characterized by large currents, it is more sensitive to SMOS TB errors in the
Protecting weak measurements against systematic errors
NASA Astrophysics Data System (ADS)
Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.
2016-07-01
In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.
NASA Astrophysics Data System (ADS)
von Clarmann, T.
2014-09-01
The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
Frequency converter PDFSV control based on inertia identification
NASA Astrophysics Data System (ADS)
Bai, Guochang; Qi, Xiaoye; Wang, Zhanlin
2008-10-01
Excitation current and torque current are decoupled through vector control in the frequency converter, so a linear control method can be used in the speed loop. Pseudo-derivative feedback with sub-variable control (PDFSV) offers excellent performance and is easy to realize; its control parameters are obtained from an equation involving the inertia parameter. For frequency converter systems whose parameters change over a wide range, a PDFSV control with a variable control coefficient is presented: a full-order state observer is constructed to observe the speed error signal, which contains error information on the moment of inertia; the control coefficient is calculated from the presented equations; and the observed load is fed forward. Simulation results show that the speed control response performs well when the moment of inertia and the load disturbance vary over a wide range.
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
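The first objective, measuring error burst and good data gap statistics, amounts to run-length analysis of a stream of per-byte error flags. A minimal sketch with a made-up flag stream:

```python
from itertools import groupby

def burst_gap_stats(flags):
    """Run lengths of error bursts (consecutive 1s) and good-data gaps
    (consecutive 0s) in a per-byte error-flag stream."""
    bursts, gaps = [], []
    for flag, run in groupby(flags):
        (bursts if flag else gaps).append(sum(1 for _ in run))
    return bursts, gaps

# Toy flag stream standing in for a photoCD read-channel trace.
flags = [0] * 10 + [1] * 3 + [0] * 25 + [1] + [0] * 6 + [1] * 4
bursts, gaps = burst_gap_stats(flags)
print("burst lengths:", bursts)   # [3, 1, 4]
print("gap lengths:  ", gaps)     # [10, 25, 6]
```

Histograms of these run lengths are exactly the burst/gap statistics a decoder-performance model would consume.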
Some Surprising Errors in Numerical Differentiation
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2012-01-01
Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
Analyzing human errors in flight mission operations
NASA Technical Reports Server (NTRS)
Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef
1993-01-01
A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
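The Pareto analysis mentioned above can be sketched in a few lines. The category labels and counts below are hypothetical, with "human error" at 38 percent mirroring the proportion reported for the ISA's:

```python
from collections import Counter

# Hypothetical anomaly-report categories; "human error" at 38 percent
# mirrors the proportion reported for the ISA's.
reports = (["human error"] * 38 + ["hardware"] * 25 + ["software"] * 20
           + ["procedure"] * 12 + ["other"] * 5)

counts = Counter(reports).most_common()          # categories sorted by frequency
total = sum(n for _, n in counts)
cumulative = 0
for category, n in counts:                       # Pareto table with cumulative share
    cumulative += n
    print(f"{category:12s} {n:3d}   {100 * cumulative / total:5.1f}%")
```

Sorting categories by frequency and tracking the cumulative share is what lets an error-management program focus interventions on the few categories that dominate the total.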
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
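The factored generator polynomials can be checked with carry-less (GF(2)) multiplication. The sketch below confirms both products, including that the outer-code generator is the X.25 / CRC-CCITT polynomial 0x11021:

```python
def gf2_mul(p, q):
    """Multiply two GF(2) polynomials given as integer bit masks
    (bit i holds the coefficient of X^i)."""
    r = 0
    while q:
        if q & 1:
            r ^= p       # add (XOR) the current shift of p
        p <<= 1
        q >>= 1
    return r

# Inner code: (X + 1)(X^6 + X + 1) = X^7 + X^6 + X^2 + 1
assert gf2_mul(0b11, 0b1000011) == 0b11000101

# Outer code: (X + 1)(X^15 + X^14 + X^13 + X^12 + X^4 + X^3 + X^2 + X + 1)
#             = X^16 + X^12 + X^5 + 1  (the X.25 generator, 0x11021)
assert gf2_mul(0b11, 0b1111000000011111) == 0x11021
print("both generator-polynomial factorizations check out")
```

The factor (X + 1) in each generator is what guarantees detection of all odd-weight error patterns.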
Experimental Quantum Error Detection
Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi
2012-01-01
Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047
NASA Astrophysics Data System (ADS)
Henderson, Robert K.
1999-12-01
It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.
Investigation of Measurement Errors in Doppler Global Velocimetry
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.
1999-01-01
While the initial development phase of Doppler Global Velocimetry (DGV) has been successfully completed, there remains a critical next phase to be conducted, namely the determination of an error budget to provide quantitative bounds for measurements obtained by this technology. This paper describes a laboratory investigation that consisted of a detailed interrogation of potential error sources to determine their contribution to the overall DGV error budget. A few sources of error were obvious; e.g., iodine vapor absorption lines, optical systems, and camera characteristics. However, additional non-obvious sources were also discovered; e.g., laser frequency and single-frequency stability, media scattering characteristics, and interference fringes. This paper describes each identified error source, its effect on the overall error budget, and where possible, corrective procedures to reduce or eliminate its effect.
Surprise beyond prediction error
Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst
2014-01-01
Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400
NASA Astrophysics Data System (ADS)
Knox, Keith T.
1999-10-01
As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm--to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.
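As context for the algorithm whose history is reviewed above, the original Floyd-Steinberg form of error diffusion can be sketched as follows. This is a generic textbook rendering, not code from the paper; the 7/16, 3/16, 5/16, 1/16 weights are the classic choice:

```python
def floyd_steinberg(img):
    """Binarize a 2-D grayscale image (values in [0, 255]) by
    error diffusion, pushing each pixel's quantization error onto
    its unprocessed neighbors with the classic 7/3/5/1 weights."""
    h, w = len(img), len(img[0])
    buf = [row[:] for row in img]          # working copy accumulates errors
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 255 if old >= 128 else 0  # threshold at mid-gray
            out[y][x] = new
            err = old - new                 # quantization error to diffuse
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the diffused errors sum to (nearly) zero, the halftone preserves local average gray level, which is the property behind the algorithm's high image quality.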
NASA Astrophysics Data System (ADS)
Knox, Keith T.
1998-12-01
As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm - to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.
NASA Technical Reports Server (NTRS)
1985-01-01
A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.
Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization
LaMar, E; Hamann, B; Joy, K I
2001-10-16
Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
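The unique-pair table described above can be sketched with a frequency counter. The names here (`approximation_error`, `err_fn`) are illustrative, not from the paper:

```python
from collections import Counter

def approximation_error(original, approx, err_fn):
    """Sum err_fn over voxel pairs, but evaluate err_fn only once
    per unique (original, approx) value combination; for byte data
    there are at most 256*256 such combinations regardless of the
    number of voxels."""
    pairs = Counter(zip(original, approx))     # unique pairs -> occurrence count
    return sum(err_fn(o, a) * n for (o, a), n in pairs.items())

# Toy example: four voxels, absolute-difference error metric.
err = approximation_error([0, 0, 255, 255], [0, 16, 255, 240],
                          lambda o, a: abs(o - a))
# err == 0 + 16 + 0 + 15 == 31
```

When the transfer function changes, only `err_fn` changes; the pair table can be reused, which is the source of the speedup the abstract reports.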
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
Measuring Cyclic Error in Laser Heterodyne Interferometers
NASA Technical Reports Server (NTRS)
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
A nonmystical treatment of tape speed compensation for frequency modulated signals
NASA Astrophysics Data System (ADS)
Solomon, O. M., Jr.
After briefly reviewing frequency modulation and demodulation, tape speed variation is modeled as a distortion of the independent variable of a frequency-modulated signal. This distortion gives rise to an additive amplitude error in the demodulated message, which comprises two terms. Both terms depend on the derivative of time base error, that is, the flutter of the analog tape machine. It is pointed out that the first term depends on the channel's center frequency and frequency deviation constant, as well as on the flutter, and that the second depends solely on the message and flutter. A description is given of the relationship between the additive amplitude error and manufacturer's flutter specification. For the case of a constant message, relative errors and signal-to-noise ratios are discussed to provide insight into when the variation in tape speed will cause significant errors. An algorithm is then developed which theoretically achieves full compensation of tape speed variation. After being confirmed via spectral computations on laboratory data, the algorithm is applied to field data.
NASA Astrophysics Data System (ADS)
Agner, R. M.; Liu, A. Z.
2013-12-01
Gravity waves and atmospheric tides interact strongly in the mesopause region and are a major contributor to the large variability in this region. How these two large perturbations interact with each other is not well understood. Observational studies of their relationships are needed to help clarify some contradictory results from modeling studies. Because of large differences in the temporal and spatial scales of gravity waves and tides, they are not easily observed simultaneously and consistently over extended periods of time. In this work, we use four hundred hours of Na lidar observations at the Starfire Optical Range (SOR, 35.0 N, 106.5 W), New Mexico to derive the local-time variation of gravity wave momentum flux and the corresponding background wind. Their relationship is then examined in detail. The effects of gravity waves on the background wind at the tidal time scale are deduced. These results are explained through gravity wave propagation in a varying background atmosphere.
Speech Errors in Progressive Non-Fluent Aphasia
ERIC Educational Resources Information Center
Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray
2010-01-01
The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…
Parental Reports of Children's Scale Errors in Everyday Life
ERIC Educational Resources Information Center
Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.
2009-01-01
Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…
Optical linear algebra processors - Noise and error-source modeling
NASA Technical Reports Server (NTRS)
Casasent, D.; Ghosh, A.
1985-01-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
ERIC Educational Resources Information Center
Julian, Liam
2009-01-01
In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…
Reducing the error growth in the numerical propagation of satellite orbits
NASA Astrophysics Data System (ADS)
Ferrandiz, Jose M.; Vigo, Jesus; Martin, P.
1991-12-01
An algorithm especially designed for the long term numerical integration of perturbed oscillators, in one or several frequencies, is presented. The method is applied to the numerical propagation of satellite orbits, using focal variables, and the results concerning highly eccentric or nearly circular cases are reported. The method performs particularly well for high eccentricity. For e = 0.99 and J2 + J3 perturbations it locates the last perigee after 1000 revolutions with an error of less than 1 cm, using only 80 derivative evaluations per revolution. In general the approach provides about a hundred times more accuracy than Bettis methods over one thousand revolutions.
Challenge and Error: Critical Events and Attention-Related Errors
ERIC Educational Resources Information Center
Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel
2011-01-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error [image omitted] attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…
Lien, Yu-An S.; Stepp, Cara E.
2014-01-01
The relative fundamental frequency (RFF) surrounding the production of a voiceless consonant has previously been estimated using unprocessed and low-pass filtered microphone signals, but it can also be estimated using a neck-placed accelerometer signal that is less affected by vocal tract formants. Determining the effects of signal type on RFF will allow for comparisons across studies and aid in establishing a standard protocol with minimal within-speaker variability. Here RFF was estimated in 12 speakers with healthy voices using unprocessed microphone, low-pass filtered microphone, and unprocessed accelerometer signals. Unprocessed microphone and accelerometer signals were recorded simultaneously using a microphone and neck-placed accelerometer. The unprocessed microphone signal was filtered at 350 Hz to construct the low-pass filtered microphone signal. Analyses of variance showed that signal type and the interaction of vocal cycle × signal type had significant effects on both RFF means and standard deviations, but with small effect sizes. The overall RFF trend was preserved regardless of signal type and the intra-speaker variability of RFF was similar among the signal types. Thus, RFF can be estimated using either a microphone or an accelerometer signal in individuals with healthy voices. Future work extending these findings to individuals with disordered voices is warranted. PMID:24815277
Derivation of a Molecular Mechanics Force Field for Cholesterol
Cournia, Zoe; Vaiana, Andrea C.; Smith, Jeremy C.; Ullmann, G. Matthias M.
2004-01-01
As a necessary step toward realistic cholesterol:biomembrane simulations, we have derived CHARMM molecular mechanics force-field parameters for cholesterol. For the parametrization we use an automated method that involves fitting the molecular mechanics potential to both vibrational frequencies and eigenvector projections derived from quantum chemical calculations. Results for another polycyclic molecule, rhodamine 6G, are also given. The usefulness of the method is thus demonstrated by the use of reference data from two molecules at different levels of theory. The frequency-matching plots for both cholesterol and rhodamine 6G show overall agreement between the CHARMM and quantum chemical normal modes, with frequency matching for both molecules within the error range found in previous benchmark studies.
A generalized discrepancy and quadrature error bound
NASA Astrophysics Data System (ADS)
Hickernell, F. J.
1998-01-01
An error bound for multidimensional quadrature is derived that includes the Koksma-Hlawka inequality as a special case. This error bound takes the form of a product of two terms. One term, which depends only on the integrand, is defined as a generalized variation. The other term, which depends only on the quadrature rule, is defined as a generalized discrepancy. The generalized discrepancy is a figure of merit for quadrature rules and includes as special cases the L-p-star discrepancy and P-alpha that arises in the study of lattice rules.
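For the one-dimensional case, the star discrepancy that appears in the Koksma-Hlawka inequality has a closed form, which gives a small runnable illustration of the quadrature error bound. This sketch is a standard textbook construction, not taken from the paper:

```python
def star_discrepancy_1d(pts):
    """Exact 1-D star discrepancy via the closed-form expression
    D*_n = 1/(2n) + max_i |x_(i) - (2i-1)/(2n)| over sorted points."""
    xs = sorted(pts)
    n = len(xs)
    return 1 / (2 * n) + max(abs(x - (2 * i - 1) / (2 * n))
                             for i, x in enumerate(xs, 1))

# Midpoint nodes (2i-1)/(2n) attain the minimum D* = 1/(2n).
pts = [(2 * i - 1) / 10 for i in range(1, 6)]
disc = star_discrepancy_1d(pts)        # 0.1 for n = 5

# Koksma-Hlawka: |integral - sample mean| <= V(f) * D*(P).
# For f(x) = x on [0,1], V(f) = 1 and the integral is 0.5.
estimate = sum(pts) / len(pts)
assert abs(0.5 - estimate) <= 1.0 * disc
```

The generalized discrepancy of the paper plays the same role as `disc` here: a quality measure of the point set that multiplies a variation-like measure of the integrand.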
Software errors and complexity: An empirical investigation
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Perricone, Berry T.
1983-01-01
The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.
[The error, source of learning].
Joyeux, Stéphanie; Bohic, Valérie
2016-05-01
The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. PMID:27155272
ERIC Educational Resources Information Center
Rieger, Martina; Martinez, Fanny; Wenke, Dorit
2011-01-01
Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…
Neural Correlates of Reach Errors
Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza
2005-01-01
Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
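The interval approach to error propagation mentioned in the abstract can be sketched with a tiny interval class. The paper uses INTLAB; this standalone Python sketch is only illustrative:

```python
class Interval:
    """Closed interval [lo, hi] with the basic arithmetic rules of
    interval analysis: the result encloses every possible value of
    the operation over the operand intervals."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Endpoint products bound the result for any sign pattern.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

# Measurements x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2 propagated
# through x*y + x without any derivative-based error formula:
x, y = Interval(1.9, 2.1), Interval(2.8, 3.2)
z = x * y + x        # encloses [7.22, 8.82]
```

Unlike first-order propagation formulas, the interval result is a guaranteed enclosure, at the cost of possible overestimation when a variable appears more than once in the expression.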
The Insufficiency of Error Analysis
ERIC Educational Resources Information Center
Hammarberg, B.
1974-01-01
The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…
Control by model error estimation
NASA Technical Reports Server (NTRS)
Likins, P. W.; Skelton, R. E.
1976-01-01
Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).
F, Delaporte
2008-09-01
The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
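For contrast with the modular method described above, plain low-order-bit replacement, the baseline technique whose error the invention reduces, can be sketched as follows (function names are illustrative, and this is not the patented method itself):

```python
def embed_lsb(host, payload_bits):
    """Replace the least-significant bit of each host byte with one
    payload bit; each byte changes by at most 1."""
    return [(h & ~1) | b for h, b in zip(host, payload_bits)]

def extract_lsb(stego, n_bits):
    """Recover the payload by reading back the low-order bits."""
    return [s & 1 for s in stego[:n_bits]]

host = [200, 201, 57, 58, 129, 130]     # e.g. pixel values of an image
bits = [1, 0, 1, 1, 0, 0]               # auxiliary data, one bit per byte
stego = embed_lsb(host, bits)
recovered = extract_lsb(stego, len(bits))
```

Bit replacement caps the payload at one bit per byte and discards the host's original low-order bit entirely; the modular method in the abstract is claimed to halve the introduced error and double the capacity relative to this baseline.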
NASA Technical Reports Server (NTRS)
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy
NASA Technical Reports Server (NTRS)
Zachor, A. S.; Aaronson, S. M.
1979-01-01
An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.
Space telemetry degradation due to Manchester data asymmetry induced carrier tracking phase error
NASA Technical Reports Server (NTRS)
Nguyen, Tien M.
1991-01-01
The deleterious effects that Manchester (or Bi-phase) data asymmetry has on the performance of phase-modulated residual carrier communication systems are analyzed. Expressions for the power spectral density of an asymmetric Manchester data stream, the interference-to-carrier signal power ratio (I/C), and the error probability performance are derived. Since data asymmetry can cause undesired spectral components at the carrier frequency, the I/C ratio is given as a function of both the data asymmetry and the telemetry modulation index. Also presented are the sensitivities of the asymmetry-induced carrier tracking loop phase error and of the system bit-error rate to various parameters of the models.
Bayesian Error Estimation Functionals
NASA Astrophysics Data System (ADS)
Jacobsen, Karsten W.
The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
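The ensemble idea behind BEEF error estimation, using the spread of predictions across an ensemble as the error bar on the best-fit prediction, can be sketched generically. The toy models below are hypothetical and unrelated to the actual BEEF functionals:

```python
def ensemble_estimate(models, x):
    """Evaluate every ensemble member at x and return the ensemble
    mean together with the standard deviation of the predictions,
    the latter serving as the estimated prediction error."""
    preds = [m(x) for m in models]
    n = len(preds)
    mean = sum(preds) / n
    std = (sum((p - mean) ** 2 for p in preds) / n) ** 0.5
    return mean, std

# Hypothetical toy ensemble: three linear models with slopes
# drawn around the best fit, standing in for an ensemble of
# fitted functionals.
models = [lambda x, a=a: a * x for a in (0.9, 1.0, 1.1)]
mean, err = ensemble_estimate(models, 10.0)   # mean 10.0, err ~0.82
```

In BEEF the ensemble is constructed so that its spread is calibrated against known errors on the fitting databases, rather than being an arbitrary perturbation as in this toy.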
Human Error In Complex Systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1991-01-01
Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.
Pulse Shaping Entangling Gates and Error Suppression
NASA Astrophysics Data System (ADS)
Hucul, D.; Hayes, D.; Clark, S. M.; Debnath, S.; Quraishi, Q.; Monroe, C.
2011-05-01
Control of spin dependent forces is important for generating entanglement and realizing quantum simulations in trapped ion systems. Here we propose and implement a composite pulse sequence based on the Mølmer-Sørensen gate to decrease gate infidelity due to frequency and timing errors. The composite pulse sequence uses an optical frequency comb to drive Raman transitions simultaneously detuned from trapped ion transverse motional red and blue sideband frequencies. The spin dependent force displaces the ions in phase space, and the resulting spin-dependent geometric phase depends on the detuning. Voltage noise on the rf electrodes changes the detuning between the trapped ions' motional frequency and the laser, decreasing the fidelity of the gate. The composite pulse sequence consists of successive pulse trains from counter-propagating frequency combs with phase control of the microwave beatnote of the lasers to passively suppress detuning errors. We present the theory and experimental data with one and two ions where a gate is performed with a composite pulse sequence. This work was supported by the U.S. ARO, IARPA, the DARPA OLE program, the MURI program; the NSF PIF Program; the NSF Physics Frontier Center at JQI; the European Commission AQUTE program; and the IC postdoc program administered by the NGA.
Error magnitude estimation in model-reference adaptive systems
NASA Technical Reports Server (NTRS)
Colburn, B. K.; Boland, J. S., III
1975-01-01
A second order approximation is derived from a linearized error characteristic equation for Lyapunov designed model-reference adaptive systems and is used to estimate the maximum error between the model and plant states, and the time to reach this peak following a plant perturbation. The results are applicable in the analysis of plants containing magnitude-dependent nonlinearities.
The Relative Error Magnitude in Three Measures of Change.
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Williams, Richard H.
1982-01-01
Formulas for the standard error of measurement of three measures of change (simple differences; residualized difference scores; and a measure introduced by Tucker, Damarin, and Messick) are derived. A practical guide for determining the relative error of the three measures is developed. (Author/JKS)
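The standard error of the simplest of the three measures, the simple difference score, follows directly from the SEMs of the two component scores (assuming independent errors of measurement). A minimal sketch with made-up reliabilities:

```python
from math import sqrt

def sem(sd, reliability):
    """Standard error of measurement of a single score: sd * sqrt(1 - r)."""
    return sd * sqrt(1.0 - reliability)

def sem_difference(sd1, r11, sd2, r22):
    """SEM of a simple difference score d = x2 - x1 (independent errors)."""
    return sqrt(sem(sd1, r11) ** 2 + sem(sd2, r22) ** 2)

# Two tests, each with SD 10, reliabilities .90 and .80 (illustrative values).
print(round(sem_difference(10, 0.90, 10, 0.80), 3))  # -> 5.477
```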
A posteriori pointwise error estimates for the boundary element method
Paulino, G.H.; Gray, L.J.; Zarikian, V.
1995-01-01
This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.
Polarization influence on reflectance measurements in the spatial frequency domain.
Wiest, J; Bodenschatz, N; Brandes, A; Liemert, A; Kienle, A
2015-08-01
In this work, we quantify the influence of crossed polarizers on reflectance measurements in the spatial frequency domain. The use of crossed polarizers is a very common approach for suppression of specular surface reflections. However, measurements are typically evaluated using a non-polarized scalar theory. The consequences of this discrepancy are the focus of our study, and we also quantify the related errors of the derived optical properties. We used polarized Monte Carlo simulations for forward calculation of the reflectance from different samples. The samples' scatterers are assumed to be spherical, allowing for the calculation of the scattering functions by Mie theory. From the forward calculations, the reduced scattering coefficient μs' and the absorption coefficient μa were derived by means of a scalar theory, as commonly used. Here, we use the analytical solution of the scalar radiative transfer equation. With this evaluation approach, which does not consider polarization, we found large errors in μs' and μa in the range of 25% and above. Furthermore, we investigated the applicability of the use of a reference measurement to reduce these errors as suggested in the literature. We found that this method is not able to generally improve the accuracy of measurements in the spatial frequency domain. Our general recommendation is to apply a polarized theory when using crossed polarizers. PMID:26158399
Dose error analysis for a scanned proton beam delivery system
NASA Astrophysics Data System (ADS)
Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.
2010-12-01
All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
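The rms-over-repeated-treatments procedure described in this abstract can be sketched in one dimension: deliver Gaussian pencil-beam spots many times with random position and intensity errors, then take the per-voxel standard deviation over treatments. The pencil-beam width, spot spacing, error magnitudes, and treatment count below are illustrative assumptions, not the Loma Linda values.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D stand-in for the voxel grid: 32 voxels, prescribed dose 2 Gy,
# built up from Gaussian pencil-beam spots (sigma = 2 voxels).
n_vox, sigma, prescribed = 32, 2.0, 2.0
x = np.arange(n_vox, dtype=float)
spots = np.arange(0.0, n_vox, 2.0)            # nominal spot centres

def deliver(pos_err_sd=0.2, intens_err_sd=0.01):
    """One simulated treatment with random spot position/intensity errors."""
    dose = np.zeros(n_vox)
    for s in spots:
        c = s + rng.normal(0.0, pos_err_sd)   # spot position error
        w = 1.0 + rng.normal(0.0, intens_err_sd)  # intensity error
        dose += w * np.exp(-0.5 * ((x - c) / sigma) ** 2)
    return dose

# Normalize so the error-free delivery averages the prescribed dose.
nominal = sum(np.exp(-0.5 * ((x - s) / sigma) ** 2) for s in spots)
scale = prescribed / nominal.mean()

# Repeat the treatment and take the per-voxel rms variation in dose.
doses = np.array([scale * deliver() for _ in range(200)])
rms = doses.std(axis=0)
print(round(100 * float(rms.max()) / prescribed, 2), "% of prescription")
```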
NASA Astrophysics Data System (ADS)
Bergmann-Wolf, I.; Dobslaw, H.; Mayer-Gürr, T.
2015-12-01
A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency (Dobslaw et al., 2015) is now available for the years 1995 -- 2006. The data-set contains realizations of (i) errors at large spatial scales assessed individually for periods between 10 -- 30, 3 -- 10, and 1 -- 3 days, the S1 atmospheric tide, and sub-diurnal periods; (ii) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (iii) errors due to physical processes not represented in currently available de-aliasing products. The error magnitudes for each of the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. In order to demonstrate the plausibility of the error magnitudes chosen, we perform a variance component estimation based on daily GRACE normal equations from the ITSG-Grace2014 global gravity field series recently published by the University of Graz. All 12 years of the error model are used to calculate empirical error variance-covariance matrices describing the systematic dependencies of the errors both in time and in space individually for five continental and four oceanic regions, and daily GRACE normal equations are subsequently employed to obtain pre-factors for each of those matrices. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, errors prepared for the updated ESM are found to be largely consistent with noise of a similar stochastic character contained in present-day GRACE solutions. Differences and similarities identified for all of the nine regions considered will be discussed in detail during the presentation. Dobslaw, H., I. Bergmann-Wolf, R. Dill, E. Forootan, V. Klemann, J. Kusche, and I. Sasgen (2015), The updated ESA Earth System Model for future gravity mission simulation studies, J. Geod., doi:10.1007/s00190-014-0787-8.
Riggs, H.C.
1968-01-01
This manual describes graphical and mathematical procedures for preparing frequency curves from samples of hydrologic data. It also discusses the theory of frequency curves, compares advantages of graphical and mathematical fitting, suggests methods of describing graphically defined frequency curves analytically, and emphasizes the correct interpretations of a frequency curve.
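A graphically defined frequency curve of the kind this manual describes typically starts from plotting positions. A minimal sketch using the Weibull plotting position P = m/(n+1) on made-up annual peak flows:

```python
# Empirical frequency curve via Weibull plotting positions, a standard
# graphical-fitting step for hydrologic data (flow values are made up).
peaks = [340, 120, 510, 275, 430, 95, 610, 180, 390, 225]  # annual peaks
ranked = sorted(peaks, reverse=True)
n = len(ranked)
curve = [(q, m / (n + 1), (n + 1) / m)   # (flow, exceedance prob., return period)
         for m, q in enumerate(ranked, start=1)]
for q, p, t in curve:
    print(f"{q:5d}  P={p:.3f}  T={t:5.1f} yr")
```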
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
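The linearized law of error propagation mentioned above maps the element covariance into a positional uncertainty ellipsoid via Sigma_pos = J Sigma_elem J^T, with J the Jacobian of position with respect to the elements at the target epoch. A toy sketch (the Jacobian and covariance values are illustrative only):

```python
import numpy as np

# Toy 2x3 Jacobian (position w.r.t. three orbital elements) and a diagonal
# element covariance; both are illustrative, not fitted to real data.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 2.0]])
Sigma_elem = np.diag([1e-6, 4e-6, 1e-8])

# Linearized propagation: Sigma_pos = J Sigma_elem J^T.
Sigma_pos = J @ Sigma_elem @ J.T

# Semi-axes of the uncertainty ellipsoid = sqrt of the eigenvalues.
axes = np.sqrt(np.linalg.eigvalsh(Sigma_pos))
print(Sigma_pos)
print(axes)
```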
The Corneal Reflection Technique and the Visual Preference Method: Sources of Error
ERIC Educational Resources Information Center
Slater, Alan M.; Findlay, John M.
1975-01-01
This report examines the causes of error in two techniques for measuring eye fixation position. Theoretical calculations of the magnitude of sources of error are shown to produce good agreement with empirically derived magnitudes for adult and neonate eyes.
Frequency division multiplex technique
NASA Technical Reports Server (NTRS)
Brey, H. (Inventor)
1973-01-01
A system for monitoring a plurality of condition responsive devices is described. It consists of a master control station and a remote station. The master control station is capable of transmitting command signals, which include a parity signal, to a remote station which transmits the signals back to the command station so that they can be compared with the original signals in order to determine if there are any transmission errors. The system utilizes frequency sources which are 1.21 multiples of each other so that no linear combination of any harmonics will interfere with another frequency.
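The harmonic-separation property claimed for the 1.21 frequency ratio can be spot-checked numerically. This simplified check only tests low-order harmonics of each tone against the other tones, not every linear combination, and the base frequency and tolerance are assumptions:

```python
# Tones spaced by successive factors of 1.21 above an illustrative base.
base = 1000.0
tones = [base * 1.21**k for k in range(6)]

def collisions(tones, max_harmonic=5, rel_tol=0.005):
    """Return (tone index, harmonic order, hit index) triples where a
    harmonic of one tone lands within rel_tol of another tone."""
    hits = []
    for i, f in enumerate(tones):
        for h in range(2, max_harmonic + 1):
            for j, g in enumerate(tones):
                if j != i and abs(h * f - g) / g < rel_tol:
                    hits.append((i, h, j))
    return hits

print(collisions(tones))  # -> [] : no low-order harmonic hits another tone
```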
On the Routh approximation technique and least squares errors
NASA Technical Reports Server (NTRS)
Aburdene, M. F.; Singh, R.-N. P.
1979-01-01
A new method for calculating the coefficients of the numerator polynomial of the direct Routh approximation method (DRAM) using the least square error criterion is formulated. The necessary conditions have been obtained in terms of algebraic equations. The method is useful for low frequency as well as high frequency reduced-order models.
NASA Technical Reports Server (NTRS)
Pei, Jing; Wall, John
2013-01-01
This paper describes the techniques involved in determining the aerodynamic stability derivatives for the frequency domain analysis of the Space Launch System (SLS) vehicle. Generally for launch vehicles, determination of the derivatives is fairly straightforward since the aerodynamic data is usually linear through a moderate range of angle of attack. However, if the wind tunnel data lacks proper corrections then nonlinearities and asymmetric behavior may appear in the aerodynamic database coefficients. In this case, computing the derivatives becomes a non-trivial task. Errors in computing the nominal derivatives could lead to improper interpretation regarding the natural stability of the system and tuning of the controller parameters, which would impact both stability and performance. The aerodynamic derivatives are also provided at off-nominal operating conditions used for dispersed frequency domain Monte Carlo analysis. Finally, results are shown to illustrate that the effects of aerodynamic cross axis coupling can be neglected for the SLS configuration studied.
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie
2002-01-01
Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MEDWATCH because of the focus on the medical device and the format of reporting. PMID:12463789
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
Position error propagation in the simplex strapdown navigation system
NASA Technical Reports Server (NTRS)
1976-01-01
The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed error containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.
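The Schuler-periodic behavior noted above is easy to quantify: the Schuler frequency is sqrt(g/R), giving the familiar ~84-minute period, and a constant accelerometer bias produces a bounded (1 - cos) position error. The bias value below is illustrative:

```python
from math import sqrt, pi

g, R = 9.81, 6.371e6            # gravity (m/s^2), Earth radius (m)
omega_s = sqrt(g / R)           # Schuler frequency (rad/s)
period_min = 2 * pi / omega_s / 60
print(round(period_min, 1))     # ~84.4 minutes

# Bounded Schuler-periodic position error from a constant accelerometer
# bias b: dx(t) = (b / omega_s**2) * (1 - cos(omega_s * t)),
# peaking at 2 * b / omega_s**2.
b = 1e-4 * g                    # 100 micro-g bias (illustrative)
peak = 2 * b / omega_s**2
print(round(peak), "m peak position error")
```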
First-order approximation error analysis of Risley-prism-based beam directing system.
Zhao, Yanyan; Yuan, Yan
2014-12-01
To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Besides, pointing error analysis of the Risley-prism system has provided results for the case when the component errors, prism orientation errors, and assembly errors are certain. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration. PMID:25607958
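The flavor of such a first-order comparison can be shown for a single thin wedge, where the exact deviation at normal incidence on the first face is asin(n sin A) - A and the first-order value is (n - 1)A. The index and wedge angle are illustrative, and this is a single-wedge toy, not the paper's full two-prism error model:

```python
from math import asin, sin, radians, degrees

# Exact vs first-order deviation of a single thin wedge (small-angle
# Risley element) with illustrative index and wedge angle.
n, A = 1.517, radians(10.0)

delta_first = (n - 1) * A                 # first-order approximation
delta_exact = asin(n * sin(A)) - A        # exact, normal incidence

err = degrees(abs(delta_exact - delta_first))
print(round(degrees(delta_first), 4),
      round(degrees(delta_exact), 4),
      round(err, 4))                      # approximation error in degrees
```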
Transmission errors and forward error correction in embedded differential pulse code modulation
NASA Astrophysics Data System (ADS)
Goodman, D. J.; Sundberg, C.-E.
1983-11-01
Formulas are derived for the combined effects of quantization and transmission errors on embedded Differential Pulse Code Modulation (DPCM) performance. The present analysis, which is both more general and precise than previous work on transmission errors in digital communication of analog signals, includes as special cases conventional DPCM and Pulse Code Modulation. An SNR formula is obtained in which the effects of source characteristics and the effects of transmission characteristics are clearly distinguishable. Also given in computationally convenient form are specialized formulas applying to uncoded transmission through a random-error channel, transmission through a slowly fading channel, and transmission with all or part of the DPCM signal being protected by an error-correcting code.
A rapid method for obtaining frequency-response functions for multiple input photogrammetric data
NASA Technical Reports Server (NTRS)
Kroen, M. L.; Tripp, J. S.
1984-01-01
A two-digital-camera photogrammetric technique for measuring the motion of a vibrating spacecraft structure or wing surface and an applicable data-reduction algorithm are presented. The 3D frequency-response functions are obtained by coordinate transformation from averaged cross and autopower spectra derived from the 4D camera coordinates by Fourier transformation. Error sources are investigated analytically, and sample results are shown in graphs.
NASA Astrophysics Data System (ADS)
Ross, A.; Czisch, M.; King, G. C.
1997-02-01
A theoretical approach to calculate the time evolution of magnetization during a CPMG pulse sequence of arbitrary parameter settings is developed and verified by experiment. The analysis reveals that off-resonance effects can cause systematic reductions in measured peak amplitudes that commonly lie in the range 5-25%, reaching 50% in unfavorable circumstances. These errors, which are finely dependent upon frequency offset and CPMG parameter settings, are subsequently transferred into erroneous T2 values obtained by curve fitting, where they are reduced or amplified depending upon the magnitude of the relaxation time. Subsequent transfer to Lipari-Szabo model analysis can produce significant errors in derived motional parameters, with τe internal correlation times being affected somewhat more than S2 order parameters. A hazard of this off-resonance phenomenon is its oscillatory nature, so that strongly affected and unaffected signals can be found at various frequencies within a CPMG spectrum. Methods for the reduction of the systematic error are discussed. Relaxation studies on biomolecules, especially at high field strengths, should take account of potential off-resonance contributions.
Error analysis and data reduction for interferometric surface measurements
NASA Astrophysics Data System (ADS)
Zhou, Ping
High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
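For an ordinary polynomial fit, the machinery described here reduces to the parameter covariance C = sigma^2 (X^T X)^{-1}, from which both parameter standard errors and the standard error of the fitted function follow. A short sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Quadratic fit y = a0 + a1*t + a2*t^2, with standard errors of the
# parameters and of the fitted function from C = sigma^2 (X^T X)^{-1},
# sigma^2 estimated from the residuals. Data are synthetic.
t = np.linspace(0.0, 1.0, 30)
y = 1.0 + 2.0 * t - 1.5 * t**2 + 0.05 * rng.normal(size=t.size)

X = np.vander(t, 3, increasing=True)        # columns: 1, t, t^2
coef = np.linalg.lstsq(X, y, rcond=None)[0]
dof = t.size - X.shape[1]
sigma2 = np.sum((y - X @ coef) ** 2) / dof
C = sigma2 * np.linalg.inv(X.T @ X)

param_se = np.sqrt(np.diag(C))                       # SE of a0, a1, a2
fit_se = np.sqrt(np.einsum("ij,jk,ik->i", X, C, X))  # SE of fit at each t
print(coef.round(2), param_se.round(3), float(fit_se.max()))
```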
An error bound for instantaneous coverage
NASA Technical Reports Server (NTRS)
White, Allan L.
1991-01-01
An error bound is derived for a reliability model approximation method. The approximation method is appropriate for the semi-Markov models of reconfigurable systems that are designed to achieve extremely high reliability. The semi-Markov models of these systems are complex, and a significant amount of their complexity arises from the detailed descriptions of the reconfiguration processes. The reliability model approximation method consists of replacing a detailed description of a reconfiguration process with the probabilities of the possible outcomes of the reconfiguration process. These probabilities are included in the model as instantaneous jumps from the fault-occurrence state. Since little time is spent in the reconfiguration states, instantaneous jumps are a close approximation to the original model. This approximation procedure is shown to produce an overestimation for the probability of system failure, and an error bound is derived for this overestimation.
Speech Errors, Error Correction, and the Construction of Discourse.
ERIC Educational Resources Information Center
Linde, Charlotte
Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…
Influence of modulation frequency in rubidium cell frequency standards
NASA Technical Reports Server (NTRS)
Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.
1983-01-01
The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.
Forward error correction for an atmospheric noise channel
NASA Astrophysics Data System (ADS)
Olson, Katharyn E.; Enge, Per K.
1992-05-01
Two Markov chains are employed to model the memory of the atmospheric noise channel. The transition probabilities for these chains are derived from atmospheric noise error processes that were recorded at 306 kHz. The models are then utilized to estimate the probability of codeword error, and these estimates are compared to codeword error rates obtained directly from the recorded error processes. These comparisons are made for the Golay code with various bit interleaving depths, and for a Reed-Solomon code with a variety of symbol interleaving depths.
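A two-state (Gilbert-style) Markov channel of this kind can be simulated directly to estimate codeword error rates. The transition and bit-error probabilities below are illustrative, not those fitted to the 306 kHz recordings, and the (23,12) Golay code is treated only through its error-correcting radius of 3:

```python
import random

random.seed(0)

# Two-state Markov model of a bursty channel: a "good" state with a low
# bit-error probability and a "bad" state with a high one.
P_GB, P_BG = 0.01, 0.2          # good->bad, bad->good transition probs
PE_G, PE_B = 1e-4, 0.3          # bit-error probability in each state

def simulate_errors(n_bits):
    """Simulate the channel, returning a boolean error process."""
    state, errs = "G", []
    for _ in range(n_bits):
        p = PE_G if state == "G" else PE_B
        errs.append(random.random() < p)
        flip = P_GB if state == "G" else P_BG
        if random.random() < flip:
            state = "B" if state == "G" else "G"
    return errs

# Codeword error rate for a (23,12) Golay code, which corrects <= 3 errors:
errs = simulate_errors(23 * 20000)
words = [errs[i:i + 23] for i in range(0, len(errs), 23)]
cw_err = sum(sum(w) > 3 for w in words) / len(words)
print(cw_err)
```

Interleaving could be added by spreading each codeword's bits across the stream before counting, which breaks up bursts and lowers `cw_err`.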
Reducing medication errors in critical care: a multimodal approach
Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad
2014-01-01
The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
More On The Decoder-Error Probability Of Reed-Solomon Codes
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming
1989-01-01
Paper extends theory of decoder-error probability for linear maximum-distance separable (MDS) codes. General class of error-correcting codes includes Reed-Solomon codes, important in communications with distant spacecraft, military communications, and compact-disk recording industry. Advancing beyond previous theoretical developments that placed upper bounds on decoder-error probabilities, author derives an exact formula for the probability PE(u) that the decoder will make an error when u code symbols are in error.
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
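The propagation-of-error argument above (total variance = sum of term variances plus covariance terms, dominated by the body-mass term) can be checked numerically. The variance values below are illustrative, not the Skylab figures:

```python
import numpy as np

rng = np.random.default_rng(3)

# Balance B = intake - output - d(mass). The variance identity
# var(B) = sum of term variances + 2 * sum of term covariances
# holds exactly; with a dominant mass term, its variance explains
# most of var(B). All variances here are illustrative.
n = 10000
intake = rng.normal(0, 0.05, n)   # small measurement errors (kg)
output = rng.normal(0, 0.05, n)
dmass = rng.normal(0, 0.30, n)    # body-mass change dominates

B = intake - output - dmass
terms = np.vstack([intake, -output, -dmass])
cov = np.cov(terms)
var_sum = cov.trace()             # sum of the term variances
cov_sum = cov.sum() - cov.trace()  # 2 * sum of the covariances

print(round(float(B.var(ddof=1)), 4), round(float(var_sum + cov_sum), 4))
print(round(float(cov[2, 2] / B.var(ddof=1)), 2))  # mass share of variance
```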
Uncertainty quantification and error analysis
Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Grammatical Errors and Communication Breakdown.
ERIC Educational Resources Information Center
Tomiyama, Machiko
This study investigated the relationship between grammatical errors and communication breakdown by examining native speakers' ability to correct grammatical errors. The assumption was that communication breakdown exists to a certain degree if a native speaker cannot correct the error or if the correction distorts the information intended to be…
Errors inducing radiation overdoses.
Grammaticos, Philip C
2013-01-01
There is no doubt that equipment emitting radiation and used for therapeutic purposes should be checked often so that radiation overdoses are not administered to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care on this issue. "We must be beneficial and not harmful to the patients," according to the Hippocratic doctrine. A series of radiation overdose cases has recently been reported, and the doctors responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures involving radiation. Taxonomy may also help. PMID:24251304
NASA Technical Reports Server (NTRS)
1984-01-01
The atmospheric backscatter coefficient, beta, measured with an airborne CO2 Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of the two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.
Goodman, Gerald R
2002-12-01
This article discusses principal concepts for the analysis, classification, and reporting of problems involving medical device technology. We define a medical device in regulatory terminology and define and discuss concepts and terminology used to distinguish the causes and sources of medical device problems. Database classification systems for medical device failure tracking are presented, as are sources of information on medical device failures. The importance of near-accident reporting is discussed to alert users that reported medical device errors are typically limited to those that have caused an injury or death. This can represent only a fraction of the true number of device problems. This article concludes with a summary of the most frequently reported medical device failures by technology type, clinical application, and clinical setting. PMID:12400632
Medical error and related factors during internship and residency.
Ahmadipour, Habibeh; Nahid, Mortazavi
2015-01-01
It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed using SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. Among residents, the most common error was misdiagnosis; among interns, the most common were errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors. PMID:26592783
Attitude control with realization of linear error dynamics
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1993-01-01
An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.
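The abstract's notion of an attitude error defined on the rotation group can be illustrated with quaternions (Euler parameters): the error quaternion is the conjugate of the desired attitude composed with the actual attitude, and its vector part gives a minimal three-parameter error. This is a sketch of the error definition only, not the paper's control law.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def qconj(q):
    """Conjugate (inverse for a unit quaternion)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def attitude_error(q, q_des):
    """Error quaternion q_err = q_des* o q. Its vector part (x, y, z)
    is a minimal three-parameter attitude error, singular only when the
    error angle reaches pi radians."""
    return qmul(qconj(q_des), q)

# Actual attitude rotated 10 degrees about z from the desired attitude:
half = math.radians(10.0) / 2.0
q = (math.cos(half), 0.0, 0.0, math.sin(half))
err = attitude_error(q, (1.0, 0.0, 0.0, 0.0))
```

When the actual and desired attitudes coincide, the error quaternion reduces to the identity (1, 0, 0, 0), i.e. zero error.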
Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.
Broch, Laurent; En Naciri, Aotmane; Johann, Luc
2008-06-01
The characterization of anisotropic materials and complex systems by ellipsometry has pushed instrument design to require measurement of the full reflection Mueller matrix of the sample with great precision. Mueller matrix ellipsometers have therefore emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and by residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors. PMID:18545594
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Rapid mapping of volumetric errors
Krulewich, D.; Hale, L.; Yordy, D.
1995-09-13
This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
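Step (3) of the method, fitting the error model to length measurements by optimization, can be sketched as a linear least-squares problem. The model below, a pure scale error per axis whose effect on a measured length is the projection of the endpoint errors onto the line direction, is an assumed toy model for illustration, not the paper's machine model.

```python
import numpy as np

rng = np.random.default_rng(0)
s_true = np.array([2e-5, -1e-5, 3e-5])   # hypothetical per-axis scale errors

# Simulated length measurements between random endpoint pairs (mm) in a
# 500 mm work volume; each measurement's length error is linear in the
# unknown scale errors, so the fit is a linear least-squares problem.
pts = rng.uniform(0.0, 500.0, size=(40, 2, 3))
rows, obs = [], []
for p, q in pts:
    d = q - p
    u = d / np.linalg.norm(d)
    # Sensitivity of the measured length to each axis scale error:
    # d(length)/d(s_i) = u_i * (q_i - p_i)
    rows.append(u * d)
    obs.append(rows[-1] @ s_true)        # simulated length error (noise-free)

s_fit, *_ = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)
```

With noise-free simulated data the fit recovers the assumed scale errors exactly; real measurements would add noise and a richer error model (straightness, squareness, angular terms).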
Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data
Hahn, Seungsoo; Kim, Dongsup
2015-01-01
Chromosome conformation capture (3C)-based techniques have recently been used to uncover the enigmatic genomic architecture in the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and different local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparison of the results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, using different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, the improved correlation of the latter case caused by systematic errors indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for the maximum reproducibility of contact frequency maps. PMID:26717152
NASA Astrophysics Data System (ADS)
Hänsch, Theodor W.; Picqué, Nathalie
Much of modern research in the field of atomic, molecular, and optical science relies on lasers, which were invented some 50 years ago and perfected in five decades of intense research and development. Today, lasers and photonic technologies impact most fields of science and they have become indispensable in our daily lives. Laser frequency combs were conceived a decade ago as tools for the precision spectroscopy of atomic hydrogen. Through the development of optical frequency comb techniques,
Social aspects of clinical errors.
Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave
2009-08-01
Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording, and policy development to enhance quality of service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed, whilst the major errors resulting in damage and death to patients alarm both professionals and the public, with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review healthcare professionals' strategies for managing such errors. PMID:19201405
Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.
ERIC Educational Resources Information Center
Monagle, E. Brette
The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1979-01-01
The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
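The conclusion above can be illustrated with a toy model: pass sinusoidal voltage and current through first-order low-pass channels and compare the indicated average power against the true value. When the two channel responses are identical, the phase shifts cancel and only a small magnitude attenuation remains; mismatched channels add a phase error. The cutoff frequencies and phase angle below are illustrative assumptions, not values from the report.

```python
import cmath
import math

def h(f, fc):
    """First-order low-pass transfer function at frequency f (cutoff fc)."""
    return 1.0 / (1.0 + 1j * f / fc)

def power_error(f, fc_v, fc_i, theta=0.3):
    """Relative error in the indicated average power P = 0.5*cos(theta)
    (unit amplitudes) when the voltage and current channels are filtered
    by first-order low-passes with cutoffs fc_v and fc_i."""
    hv, hi = h(f, fc_v), h(f, fc_i)
    true_p = 0.5 * math.cos(theta)
    meas_p = 0.5 * abs(hv) * abs(hi) * math.cos(
        theta + cmath.phase(hv) - cmath.phase(hi))
    return abs(meas_p - true_p) / true_p

matched = power_error(1e3, fc_v=1e5, fc_i=1e5)     # identical responses
mismatched = power_error(1e3, fc_v=1e5, fc_i=2e4)  # unequal cutoffs
```

The matched case shows a much smaller error, consistent with the abstract's conclusion that identical first-order responses minimize the measurement error.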
Examining IFOV error and demodulation strategies for infrared microgrid polarimeter imagery
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; Tyo, J. Scott; LaCasse, Charles F.; Black, Wiley T.
2009-08-01
For the past several years we have been working on strategies to mitigate the effects of IFOV errors on LWIR microgrid polarimeters. In this paper we present a detailed, theoretical analysis of the source of IFOV error in the frequency domain, and show a frequency domain strategy to mitigate those effects.
Barberá, Ariana; Lorenzo, Noraylis; van Kooten, Peter; van Roon, Joel; de Jager, Wilco; Prada, Dinorah; Gómez, Jorge; Padrón, Gabriel; van Eden, Willem; Broere, Femke; Del Carmen Domínguez, María
2016-07-01
Rheumatoid arthritis (RA) is a systemic autoimmune disease characterized by a chronic relapsing-remitting joint inflammation. Perturbations in the balance between CD4+ T cells producing IL-17 and CD4+CD25(high)FoxP3+ Tregs correlate with irreversible bone and cartilage destruction in RA. APL1 is an altered peptide ligand derived from a CD4+ T-cell epitope of human HSP60, an autoantigen expressed in the inflamed synovium, which increases the frequency of CD4+CD25(high)FoxP3+ Tregs in peripheral blood mononuclear cells from RA patients. The aim of this study was to evaluate the suppressive capacity of Tregs induced by APL1 on proliferation of effector CD4+ T cells using co-culture experiments. Enhanced Treg-mediated suppression was observed in APL1-treated cultures compared with cells cultured with media alone. Subsequent analyses using autologous cross-over experiments showed that the enhanced Treg suppression in APL1-treated cultures could reflect increased suppressive function of Tregs against APL1-responsive T cells. On the other hand, APL1 treatment had a significant effect in reducing IL-17 levels produced by effector CD4+ T cells. Hence, this peptide has the ability to increase the frequency of Tregs and their suppressive properties, whereas effector T cells produce less IL-17. Thus, we propose that APL1 therapy could help to ameliorate the pathogenic Th17/Treg balance in RA patients. PMID:27241313
Leach, Julia M.; Mancini, Martina; Peterka, Robert J.; Hayes, Tamara L.; Horak, Fay B.
2014-01-01
The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the "gold standard" laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBBs, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2-6 mm (before calibration) to 0.5-2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from -10.5% (before calibration) to -0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions.
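The linear calibration procedure described above can be sketched as an ordinary least-squares fit of the low-cost device's CoP signal to the reference force-plate signal, then applying the fitted gain and offset. The gain, offset, and noise levels below are invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference CoP trace (mm): 0.5 Hz sway, as measured by a "gold
# standard" force plate (simulated here).
t = np.linspace(0.0, 10.0, 500)
cop_ref = 30.0 * np.sin(2 * np.pi * 0.5 * t)

# Low-cost device reading with a hypothetical gain error, offset,
# and measurement noise:
cop_wbb = 1.12 * cop_ref + 3.0 + rng.normal(0.0, 0.3, cop_ref.size)

# Linear calibration: fit cop_ref ~ a * cop_wbb + b, then apply.
a, b = np.polyfit(cop_wbb, cop_ref, 1)
cop_cal = a * cop_wbb + b

rmse_before = np.sqrt(np.mean((cop_wbb - cop_ref) ** 2))
rmse_after = np.sqrt(np.mean((cop_cal - cop_ref) ** 2))
```

After calibration the residual error is essentially the sensor noise divided by the gain, mirroring the study's reported drop in root-mean-squared error.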
Sub-nanometer periodic nonlinearity error in absolute distance interferometers.
Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang
2015-05-01
Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and leakage of the beam, are eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°. PMID:26026510
Parametric registration of cross test error maps for optical surfaces
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Dai, Yifan; Nie, Xuqing; Li, Shengyi
2015-07-01
It is necessary to quantitatively compare two measurement results, typically in the form of error maps of the same surface figure obtained by different methods or even different instruments, for the purpose of cross test. Misalignment exists between the maps, including tip-tilt, lateral shift, clocking, and scaling. A fast registration algorithm is proposed to correct the misalignment before the pixel-to-pixel difference of the two maps is calculated. It is formulated simply as a linear least-squares problem. Sensitivity of the registration error to the misalignment is simulated with low-frequency and mid-frequency features in the surface error maps, represented by Zernike polynomials and spatially correlated functions, respectively. Finally, by applying it to two cases of real datasets, the algorithm is validated to be comparable in accuracy to a general nonlinear optimization method based on sequential quadratic programming, while requiring far less computation time.
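The registration-as-linear-least-squares idea can be sketched for the simplest misalignment terms (piston, tip, and tilt): model the difference between two maps of the same surface with a low-order basis, solve for the coefficients, and subtract the fitted misalignment before differencing. The surface and misalignment values below are invented for illustration and cover only these three terms, not the paper's full clocking/scaling model.

```python
import numpy as np

n = 64
y, x = np.mgrid[0:n, 0:n] / (n - 1.0) - 0.5   # normalized coordinates

# Two error maps of the same (simulated) surface, the second with
# hypothetical piston (20 nm), tip (8 nm), and tilt (-5 nm) misalignment:
surface = 50e-9 * np.sin(6 * np.pi * x) * np.cos(4 * np.pi * y)
map_a = surface
map_b = surface + 20e-9 + 8e-9 * x - 5e-9 * y

# Linear least-squares fit of the difference to the [1, x, y] basis:
basis = np.column_stack([np.ones(n * n), x.ravel(), y.ravel()])
coef, *_ = np.linalg.lstsq(basis, (map_b - map_a).ravel(), rcond=None)

# Remove the fitted misalignment, then take the pixel-to-pixel difference:
registered_b = map_b - (basis @ coef).reshape(n, n)
residual_rms = np.sqrt(np.mean((registered_b - map_a) ** 2))
```

Because the simulated misalignment lies exactly in the fitted basis, the residual after registration collapses to numerical noise; real cross-test data would leave the genuine inter-instrument difference.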
NASA Astrophysics Data System (ADS)
Prabu, K.; Kumar, D. Sriram
2015-05-01
An optical wireless communication system is an alternative to radio frequency communication, but atmospheric-turbulence-induced fading and misalignment fading are the main impairments affecting an optical signal propagating through the turbulence channel. Misalignment fading gives rise to pointing errors, which degrade the bit error rate (BER) performance of the free space optics (FSO) system. In this paper, we study the BER performance of the multiple-input multiple-output (MIMO) FSO system employing coherent binary polarization shift keying (BPOLSK) in a gamma-gamma (G-G) channel with pointing errors. The BER performance of the BPOLSK-based MIMO FSO system is compared with that of the single-input single-output (SISO) system. Also, the average BER performance of the systems is analyzed and compared with and without pointing errors. Novel closed-form expressions for the BER are derived for the MIMO FSO system with maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques. The analytical results show that pointing errors can severely degrade the performance of the system.
Flower, J
1998-01-01
The reality is that most change efforts fail. McKinsey & Company carried out a fascinating research project on change to "crack the code" on creating and managing change in large organizations. One of the questions they asked--and answered--is why most organizations fail in their efforts to manage change. They found that 80 percent of these failures could be traced to 13 common errors. They are: (1) No winning strategy; (2) failure to make a compelling and urgent case for change; (3) failure to distinguish between decision-driven and behavior-dependent change; (4) over-reliance on structure and systems to change behavior; (5) lack of skills and resources; (6) failure to experiment; (7) leaders' inability or unwillingness to confront how they and their roles must change; (8) failure to mobilize and engage pivotal groups; (9) failure to understand and shape the informal organization; (10) inability to integrate and align all the initiatives; (11) no performance focus; (12) excessively open-ended process; and (13) failure to make the whole process transparent and meaningful to individuals. PMID:10351717
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: one identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function to attempt to eliminate river-breeze contributions in the wind fields.
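The wind-direction binarization that feeds CEM can be sketched as follows; the onshore bearing used here is a stand-in parameter for illustration, not a value from the description above.

```python
def onshore(direction_deg, onshore_deg=90.0):
    """Binarize a wind direction for the CEM input grids: return 1
    (onshore) if the direction is within +/-90 degrees of an assumed
    onshore bearing, else 0 (offshore). The bearing is a hypothetical
    site parameter."""
    # Signed angular difference wrapped into (-180, 180]:
    diff = (direction_deg - onshore_deg + 180.0) % 360.0 - 180.0
    return 1 if abs(diff) < 90.0 else 0

# Binarize a tiny 2x2 grid of wind directions (degrees):
d_grid = [[onshore(d) for d in row]
          for row in [[45.0, 100.0], [250.0, 300.0]]]
```

Applying this per grid cell and time step yields the binary d(i,j;n) field that CEM compares against the forecast-derived D(i,j;n).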
Developing an Error Model for Ionospheric Phase Distortions in L-Band SAR and InSAR Data
NASA Astrophysics Data System (ADS)
Meyer, F. J.; Agram, P. S.
2014-12-01
Many of the recent and upcoming spaceborne SAR systems operate in the L-band frequency range. The choice of L-band has a number of advantages, especially for InSAR applications: deeper penetration into vegetation, higher coherence, and higher sensitivity to soil moisture. While L-band SARs are undoubtedly beneficial for a number of earth science disciplines, their signals are susceptible to path-delay effects in the ionosphere. Many recent publications indicate that the ionosphere can have detrimental effects on InSAR coherence and phase. It has also been shown that the magnitude of these effects strongly depends on the time of day and geographic location of the image acquisition as well as on the coincident solar activity. Hence, in order to provide realistic error estimates for geodetic measurements derived from L-band InSAR, an error model needs to be developed that is capable of describing ionospheric noise. With this paper, we present a global ionospheric error model that is currently being developed in support of NASA's future L-band SAR mission NISAR. The system is based on a combination of empirical data analysis and modeling input from the ionospheric model WBMOD, and is capable of predicting ionosphere-induced phase noise as a function of space and time. The error model parameterizes ionospheric noise using a power spectrum model and provides the parameters of this model in a global 1x1 degree raster. From the power law model, ionospheric errors in deformation estimates can be calculated. In Polar Regions, our error model relies on a statistical analysis of ionospheric phase noise in a large number of SAR data from previous L-band SAR missions such as ALOS PALSAR and JERS-1. The focus on empirical analyses is due to limitations of WBMOD in high-latitude areas. Outside of the Polar Regions, the ionospheric model WBMOD is used to derive ionospheric structure parameters as a function of solar activity. The structure parameters are
Improved Quantum Metrology Using Quantum Error Correction
NASA Astrophysics Data System (ADS)
Dür, W.; Skotiniotis, M.; Fröwis, F.; Kraus, B.
2014-02-01
We consider quantum metrology in noisy environments, where the effect of noise and decoherence limits the achievable gain in precision by quantum entanglement. We show that by using tools from quantum error correction this limitation can be overcome. This is demonstrated in two scenarios, including a many-body Hamiltonian with single-qubit dephasing or depolarizing noise and a single-body Hamiltonian with transversal noise. In both cases, we show that Heisenberg scaling, and hence a quadratic improvement over the classical case, can be retained. Moreover, for the case of frequency estimation we find that the inclusion of error correction allows, in certain instances, for a finite optimal interrogation time even in the asymptotic limit.
Measurement of errors in clinical laboratories.
Agarwal, Rachna
2013-07-01
Laboratories have a major impact on patient safety, as 80-90% of all diagnoses are made on the basis of laboratory tests. Laboratory errors have a reported frequency of 0.012-0.6% of all test results. Patient safety is a managerial issue that can be enhanced by implementing an active system to identify and monitor quality failures. This can be facilitated by a reactive method, which includes incident reporting followed by root cause analysis; this leads to identification and correction of weaknesses in the system's policies and procedures. Another approach is a proactive method such as Failure Mode and Effects Analysis, in which the focus is on the entire examination process, anticipating major adverse events and pre-emptively preventing them from occurring. It is used for prospective risk analysis of high-risk processes to reduce the chance of errors in the laboratory and other patient care areas. PMID:24426216
Error analysis in laparoscopic surgery
NASA Astrophysics Data System (ADS)
Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.
1998-06-01
Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which oftentimes are brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.
NASA Technical Reports Server (NTRS)
Luers, J. K.
1980-01-01
An initial value of pressure is required to derive the density and pressure profiles from the rocketborne rocketsonde sensor. This tie-on pressure value is obtained from the nearest rawinsonde launch at an altitude where overlapping rawinsonde and rocketsonde measurements occur. An error analysis was performed on the error sources in these sensors that contribute to the error in the tie-on pressure. It was determined that significant tie-on pressure errors result from radiation errors in the rawinsonde rod thermistor and from temperature calibration bias errors. To minimize the effect of these errors, radiation corrections should be made to the rawinsonde temperature and the tie-on altitude should be chosen at the lowest altitude of overlapping data. Under these conditions the tie-on error, and consequently the resulting error in the Datasonde pressure and density profiles, is less than 1%. The effect of rawinsonde pressure and temperature errors on the wind and temperature versus height profiles of the rawinsonde was also determined.
Effect of counting errors on immunoassay precision
Klee, G.G.; Post, G.
1989-07-01
Using mathematical analysis and computer simulation, we studied the effect of gamma scintillation counting error on two radioimmunoassays (RIAs) and an immunoradiometric assay (IRMA). To analyze the propagation of the counting errors into the estimation of analyte concentration, we empirically derived parameters for logit-log data-reduction models for assays of digoxin and triiodothyronine (RIAs) and ferritin (IRMA). The component of the analytical error attributable to counting variability, when expressed as a CV of the analyte concentration, decreased approximately linearly with the inverse of the square root of the maximum counts bound. Larger counting-error CVs were found at lower concentrations for both RIAs and the IRMA. Substantially smaller overall assay CVs were found as the maximum counts bound increased progressively from 500 to 10,000 counts, but further increases in maximum bound counts resulted in little decrease in overall assay CV except when very low concentrations of analyte were being measured. Therefore, RIA and IRMA systems based on duplicate determinations having at least 10,000 maximum counts bound should have adequate precision, except possibly at very low concentrations.
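The inverse-square-root dependence on counts follows from Poisson counting statistics (the SD of N accumulated counts is about √N). A minimal sketch of that relationship, not taken from the paper's logit-log models:

```python
import math

def counting_cv_percent(counts_bound):
    # Poisson statistics: the SD of N accumulated counts is sqrt(N),
    # so the counting-error CV is 1/sqrt(N), i.e. 100/sqrt(N) percent.
    return 100.0 / math.sqrt(counts_bound)

for n in (500, 2000, 10000, 40000):
    print(f"{n:6d} max counts bound -> counting CV ~ {counting_cv_percent(n):.2f}%")
```

Going from 500 to 10,000 counts drops the counting CV from roughly 4.5% to 1.0%, consistent with the diminishing returns the abstract reports beyond 10,000 counts.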
A QUANTITATIVE MODEL OF ERROR ACCUMULATION DURING PCR AMPLIFICATION
Pienaar, E; Theron, M; Nelson, M; Viljoen, HJ
2006-01-01
The amplification of target DNA by the polymerase chain reaction (PCR) produces copies which may contain errors. Two sources of errors are associated with the PCR process: (1) editing errors that occur during DNA polymerase-catalyzed enzymatic copying and (2) errors due to DNA thermal damage. In this study a quantitative model of error frequencies is proposed and the role of reaction conditions is investigated. The errors which are ascribed to the polymerase depend on the efficiency of its editing function as well as the reaction conditions; specifically the temperature and the dNTP pool composition. Thermally induced errors stem mostly from three sources: A+G depurination, oxidative damage of guanine to 8-oxoG and cytosine deamination to uracil. The post-PCR modifications of sequences are primarily due to exposure of nucleic acids to elevated temperatures, especially if the DNA is in a single-stranded form. The proposed quantitative model predicts the accumulation of errors over the course of a PCR cycle. Thermal damage contributes significantly to the total errors; therefore consideration must be given to thermal management of the PCR process. PMID:16412692
Skills, rules and knowledge in aircraft maintenance: errors in context
NASA Technical Reports Server (NTRS)
Hobbs, Alan; Williamson, Ann
2002-01-01
Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.
Random errors in egocentric networks.
Almquist, Zack W
2012-10-01
The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on facebook-friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
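The false-positive/false-negative perturbation scheme described in the abstract can be sketched as follows; the function, network size, and rates here are illustrative, not the paper's simulation code:

```python
import random

def perturb_edges(edges, n, fp_rate, fn_rate, rng):
    # False negatives: each true edge is independently dropped with prob fn_rate.
    observed = {e for e in edges if rng.random() >= fn_rate}
    # False positives: each absent dyad is independently added with prob fp_rate.
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) not in edges and rng.random() < fp_rate:
                observed.add((i, j))
    return observed

rng = random.Random(42)
true_edges = {(i, j) for i in range(20) for j in range(i + 1, 20) if rng.random() < 0.2}
noisy = perturb_edges(true_edges, 20, fp_rate=0.05, fn_rate=0.10, rng=rng)
print(len(true_edges), len(noisy))  # observed density differs from the true density
```

Network statistics (degree, density, centrality) computed on `noisy` instead of `true_edges` then exhibit the kind of misestimation the abstract quantifies.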
Dopamine reward prediction error coding
Schultz, Wolfram
2016-01-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
[Error factors in spirometry].
Quadrelli, S A; Montiel, G C; Roncoroni, A J
1994-01-01
Spirometry is the method most frequently used to estimate pulmonary function in the clinical laboratory. It is important to comply with technical requisites to approximate the real values sought, as well as to interpret the results adequately. Recommendations are made to establish: 1--quality control; 2--definition of abnormality; 3--classification of the change from normal and its degree; 4--definition of reversibility. In relation to quality control, several criteria are pointed out, such as end of test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised. Inspection of graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference-values equation (in view of the differences in predicted values), and it is advisable to validate the selection against normal values from the local population. In relation to defining the defect as restrictive or obstructive, the limitations of vital capacity (VC) for establishing restriction when obstruction is also present are described, as are the limitations of maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of forced expiratory volume in 1 sec (VEF1) and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) for estimating reversibility after bronchodilators are evaluated, and the value of different methods used to define reversibility (% of change in initial value, absolute change, or % of predicted) is discussed. Clinical spirometric studies, in order to be valuable, should be performed with the same technical rigour as other, more complex studies. PMID:7990690
Frequency spectrum analyzer with phase-lock
Boland, Thomas J.
1984-01-01
A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal is comprised of a voltage controlled oscillator (VCO) which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator which transfers the input signal to a frequency domain, while providing an accurate absolute amplitude measurement of each frequency component of the input signal.
Development of transmission error tester for face gears
NASA Astrophysics Data System (ADS)
Shi, Zhao-yao; Lu, Xiao-ning; Chen, Chang-he; Lin, Jia-chun
2013-10-01
A tester for measuring the transmission error of face gears was developed based on the single-flank rolling principle. The mechanical host is a hybrid of vertical and horizontal structures. The tester mainly consists of a base, precision spindles, a grating measurement system and a control unit. The structure of the precision spindles was designed and their rotation accuracy was improved. Key techniques, such as clamping, positioning and adjustment of the gears, were investigated. In order to collect transmission-error data, a high-frequency clock pulse subdivision count method with higher measurement resolution was proposed. The developed tester can inspect errors such as the transmission error of the pair, the tangential composite deviation of the measured face gear, pitch deviation and eccentricity error, and the measurement results can be analyzed by the tester. The tester meets face gear quality testing requirements for accuracy of grade 5.
Nonresponse Error in Mail Surveys: Top Ten Problems
Daly, Jeanette M.; Jones, Julie K.; Gereau, Patricia L.; Levy, Barcey T.
2011-01-01
Conducting mail surveys can result in nonresponse error, which occurs when the potential participant is unwilling to participate or impossible to contact. Nonresponse can result in a reduction in precision of the study and may bias results. The purpose of this paper is to describe and make readers aware of a top ten list of mailed survey problems affecting the response rate encountered over time with different research projects, while utilizing the Dillman Total Design Method. Ten nonresponse error problems were identified, such as inserter machine gets sequence out of order, capitalization in databases, and mailing discarded by postal service. These ten mishaps can potentiate nonresponse errors, but there are ways to minimize their frequency. Suggestions offered stem from our own experiences during research projects. Our goal is to increase researchers' knowledge of nonresponse error problems and to offer solutions which can decrease nonresponse error in future projects. PMID:21994846
Teratogenic inborn errors of metabolism.
Leonard, J. V.
1986-01-01
Most children with inborn errors of metabolism are born healthy without malformations as the fetus is protected by the metabolic activity of the placenta. However, certain inborn errors of the fetus have teratogenic effects although the mechanisms responsible for the malformations are not generally understood. Inborn errors in the mother may also be teratogenic. The adverse effects of these may be reduced by improved metabolic control of the biochemical disorder. PMID:3540927
Coherent error suppression in multiqubit entangling gates.
Hayes, D; Clark, S M; Debnath, S; Hucul, D; Inlek, I V; Lee, K W; Quraishi, Q; Monroe, C
2012-07-13
We demonstrate a simple pulse shaping technique designed to improve the fidelity of spin-dependent force operations commonly used to implement entangling gates in trapped ion systems. This extension of the Mølmer-Sørensen gate can theoretically suppress the effects of certain frequency and timing errors to any desired order and is demonstrated through Walsh modulation of a two qubit entangling gate on trapped atomic ions. The technique is applicable to any system of qubits coupled through collective harmonic oscillator modes. PMID:23030141
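Walsh modulation applies a sign sequence drawn from the rows of a Sylvester-Hadamard matrix to segments of the gate pulse; higher-order sequences suppress higher-order frequency and timing errors. A sketch of generating those sign sequences (the mapping onto an actual spin-dependent force pulse is omitted and this is not the paper's code):

```python
def walsh_rows(order):
    # Rows of the 2^order x 2^order Sylvester-Hadamard matrix; each row is a
    # +1/-1 Walsh sign sequence usable for segment-wise pulse-phase modulation.
    h = [[1]]
    for _ in range(order):
        h = [r + r for r in h] + [r + [-x for x in r] for r in h]
    return h

for row in walsh_rows(2):
    print(row)
```

The constant row corresponds to the unmodulated gate; rows with sign transitions flip the force phase mid-pulse, which is what cancels static detuning errors to the corresponding order.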
Confidence limits and their errors
Rajendran Raja
2002-03-22
Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.
Compensating For GPS Ephemeris Error
NASA Technical Reports Server (NTRS)
Wu, Jiun-Tsong
1992-01-01
Method of computing position of user station receiving signals from Global Positioning System (GPS) of navigational satellites compensates for most of GPS ephemeris error. Present method enables user station to reduce error in its computed position substantially. User station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in neighborhood of reference stations. Method is based on fact that when GPS data are used to compute baseline between reference station and user station, vector error in computed baseline is proportional to ephemeris error and length of baseline.
Medication Errors in Outpatient Pediatrics.
Berrier, Kyla
2016-01-01
Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086
Physical examination. Frequently observed errors.
Wiener, S; Nathanson, M
1976-08-16
A method allowing for direct observation of intern and resident physicians while interviewing and examining patients has been in use on our medical wards for the last five years. A large number of errors in the performance of the medical examination by young physicians were noted and a classification of these errors into those of technique, omission, detection, interpretation, and recording was made. An approach to detection and correction of each of these kinds of errors is presented, as well as a discussion of possible reasons for the occurrence of these errors in physician performance. PMID:947266
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1980-01-01
Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Quality and Safety in Health Care, Part XIII: Detecting and Analyzing Diagnostic Errors.
Harolds, Jay A
2016-08-01
There are many ways to help determine the incidence of errors in diagnosis including reviewing autopsy data, health insurance and malpractice claims, patient health records, and surveys of doctors and patients. However, all of these methods have positive and negative points. There are also a variety of ways to analyze diagnostic errors and many recommendations about how to decrease the frequency of errors in diagnosis. Overdiagnosis is an important quality and safety issue but is not considered an error. PMID:27163458
Evaluating Spectral Signals to Identify Spectral Error.
Bazar, George; Kovacs, Zoltan; Tsenkova, Roumiana
2016-01-01
Since the precision and accuracy level of a chemometric model is highly influenced by the quality of the raw spectral data, it is very important to evaluate the recorded spectra and describe the erroneous regions before qualitative and quantitative analyses or detailed band assignment. This paper provides a collection of basic spectral analytical procedures and demonstrates their applicability in detecting errors of near infrared data. Evaluation methods based on standard deviation, coefficient of variation, mean centering and smoothing techniques are presented. Applications of derivatives with various gap sizes, even below the bandpass of the spectrometer, are shown to evaluate the level of spectral errors and find their origin. The possibility for prudent measurement of the third overtone region of water is also highlighted by evaluation of a complex data recorded with various spectrometers. PMID:26731541
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of ranking error. PMID:24083315
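A linear, non-kernelized sketch of least-squares pairwise ranking by stochastic gradient descent follows; the paper's algorithm uses a kernel expansion and specific step-size/regularization schedules, which this toy version does not reproduce:

```python
import random

def sgd_rank(pairs, dim, steps=5000, eta=0.01, lam=0.001, seed=0):
    # Pairwise least-squares ranking: push score(x_pos) - score(x_neg)
    # toward a margin of 1 via stochastic gradient steps on 0.5*(m - 1)^2,
    # with a small ridge term lam for regularization.
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(steps):
        xp, xn = rng.choice(pairs)
        margin = sum(wi * (a - b) for wi, a, b in zip(w, xp, xn))
        g = margin - 1.0  # derivative of the squared loss w.r.t. the margin
        for i in range(dim):
            w[i] -= eta * (g * (xp[i] - xn[i]) + lam * w[i])
    return w

# Toy preference pairs: the first item of each pair should outrank the second.
pairs = [((1.0, 0.2), (0.1, 0.9)), ((0.8, 0.5), (0.2, 0.4))]
w = sgd_rank(pairs, dim=2)
print(w)
```

After training, the learned weight vector scores the preferred item of each training pair above its alternative.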
Implications of Three Causal Models for the Measurement of Halo Error.
ERIC Educational Resources Information Center
Fisicaro, Sebastiano A.; Lance, Charles E.
1990-01-01
Three conceptual definitions of halo error are reviewed in the context of causal models of halo error. A corrected correlational measurement of halo error is derived, and the traditional and corrected measures are compared empirically for a 1986 study of 52 undergraduate students' ratings of a lecturer's performance. (SLD)
A Simple Approximation for the Symbol Error Rate of Triangular Quadrature Amplitude Modulation
NASA Astrophysics Data System (ADS)
Duy, Tran Trung; Kong, Hyung Yun
In this paper, we consider the error performance of the regular triangular quadrature amplitude modulation (TQAM). In particular, using an accurate exponential bound of the complementary error function, we derive a simple approximation for the average symbol error rate (SER) of TQAM over Additive White Gaussian Noise (AWGN) and fading channels. The accuracy of our approach is verified by some simulation results.
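The general idea of approximating the Gaussian tail by a short sum of exponentials can be illustrated with the two-term bound of Chiani et al.; these coefficients are illustrative and are not the specific erfc bound used in the TQAM derivation:

```python
import math

def q_exact(x):
    # Gaussian tail probability Q(x) via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def q_exp_bound(x):
    # Two-term exponential upper bound on Q(x) (Chiani et al. style):
    # Q(x) <= (1/12) exp(-x^2/2) + (1/4) exp(-2 x^2 / 3).
    return math.exp(-x * x / 2) / 12 + math.exp(-2 * x * x / 3) / 4

for x in (1.0, 2.0, 3.0, 4.0):
    print(x, q_exact(x), q_exp_bound(x))
```

Because each term is a pure exponential, averaging such a bound over a fading-channel SNR distribution is analytically tractable, which is what makes closed-form SER approximations like the one in this paper possible.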
Error estimates and specification parameters for functional renormalization
Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof
2013-07-15
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.
A posteriori error estimator and error control for contact problems
NASA Astrophysics Data System (ADS)
Weiss, Alexander; Wohlmuth, Barbara I.
2009-09-01
In this paper, we consider two error estimators for one-body contact problems. The first error estimator is defined in terms of H(div)-conforming stress approximations and equilibrated fluxes while the second is a standard edge-based residual error estimator without any modification with respect to the contact. We show reliability and efficiency for both estimators. Moreover, the error is bounded by the first estimator with a constant one plus a higher order data oscillation term plus a term arising from the contact that is shown numerically to be of higher order. The second estimator is used in a control-based AFEM refinement strategy, and the decay of the error in the energy is shown. Several numerical tests demonstrate the performance of both estimators.
Impact of Measurement Error on Testing Genetic Association with Quantitative Traits
Liao, Jiemin; Li, Xiang; Wong, Tien-Yin; Wang, Jie Jin; Khor, Chiea Chuen; Tai, E. Shyong; Aung, Tin; Teo, Yik-Ying; Cheng, Ching-Yu
2014-01-01
Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, a one standard deviation (SD) in measurement error of a standard normal distributed trait required a one-fold increase in sample size for comparison of means, and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p<10−5) for the two cataract grading scales while replication results in genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable. PMID:24475218
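The sample-size figures quoted above are consistent with simple variance inflation: adding independent measurement error of SD σe to a unit-variance trait inflates the total variance by 1 + σe², and the non-centrality for comparing variances scales with the square of that factor. A back-of-envelope sketch of this reading, not the paper's non-central F derivation:

```python
def inflation_factor_means(sd_error, sd_trait=1.0):
    # Comparing means: non-centrality ~ n / total variance, so holding power
    # fixed requires n to scale with (sd_trait^2 + sd_error^2) / sd_trait^2.
    return (sd_trait**2 + sd_error**2) / sd_trait**2

def inflation_factor_variances(sd_error, sd_trait=1.0):
    # Comparing variances: the non-centrality scales with variance squared,
    # so the required n grows with the square of the inflation factor.
    return inflation_factor_means(sd_error, sd_trait) ** 2

print(inflation_factor_means(1.0))      # 2.0 -> a one-fold increase (doubling) of n
print(inflation_factor_variances(1.0))  # 4.0 -> a three-fold increase of n
```

One SD of measurement error on a standard normal trait thus doubles the required sample size for means and quadruples it for variances, matching the one-fold and three-fold increases reported in the abstract.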
The undetected error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Mceliece, Robert J.
1988-01-01
McEliece and Swanson (1986) offered an upper bound on P(E)u, the decoder error probability given that u symbol errors occur. In the present study, by using combinatoric techniques such as the principle of inclusion and exclusion, an exact formula for P(E)u is derived. The P(E)u of a maximum distance separable code is observed to approach Q rapidly as u gets large, where Q is the probability that a completely random error pattern will cause decoder error. An upper bound for the expression P(E)u/Q - 1 is derived, and is shown to decrease nearly exponentially as u increases. This proves analytically that P(E)u indeed approaches Q as u becomes large, with a law of large numbers coming into play.
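Q can be estimated by noting that the radius-t decoding spheres around the q^k codewords are disjoint, so a uniformly random word causes a decode (almost always to a wrong codeword) with probability q^k·V(n,t)/q^n. A sketch for the (255,223) Reed-Solomon code (t = 16) over GF(256); this is the standard sphere-counting argument, not the paper's exact inclusion-exclusion formula:

```python
from math import comb

def ball_volume(n, t, q):
    # Number of length-n vectors over GF(q) within Hamming distance t of a fixed word.
    return sum(comb(n, s) * (q - 1) ** s for s in range(t + 1))

def q_random_decode(n, k, t, q=256):
    # Decoding spheres of the q^k codewords are disjoint, so a uniformly random
    # word lands in some sphere with probability q^k * V(n,t) / q^n,
    # which simplifies to V(n,t) / q^(n-k).
    return ball_volume(n, t, q) / q ** (n - k)

print(q_random_decode(255, 223, 16))  # on the order of 1e-14 for the (255,223) RS code
```

The tiny value of Q is why undetected decoder errors are so rare for high-redundancy MDS codes even when the error weight exceeds the correction radius.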
NASA Astrophysics Data System (ADS)
Dobslaw, Henryk; Bergmann-Wolf, Inga; Forootan, Ehsan; Dahle, Christoph; Mayer-Gürr, Torsten; Kusche, Jürgen; Flechtner, Frank
2016-05-01
A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency is now available over the period 1995-2006. The dataset contains realizations of (1) errors at large spatial scales assessed individually for periods 10-30, 3-10, and 1-3 days, the S1 atmospheric tide, and sub-diurnal periods; (2) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (3) errors due to physical processes not represented in currently available de-aliasing products. The model is provided in two separate sets of Stokes coefficients to allow for a flexible re-scaling of the overall error level to account for potential future improvements in atmosphere and ocean mass variability models. Error magnitudes for the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, those error estimates are approximately confirmed from a variance component estimation based on GRACE daily normal equations. Future mission performance simulations based on the updated Earth System Model and the realistically perturbed de-aliasing model indicate that for GRACE-type missions only moderate reductions of de-aliasing errors can be expected from a second satellite pair in a shifted polar orbit. Substantially more accurate global gravity fields are obtained when a second pair of satellites in a moderately inclined orbit is added, which largely stabilizes the global gravity field solutions due to its rotated sampling sensitivity.
NASA Astrophysics Data System (ADS)
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high-frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors
Correction of single frequency altimeter measurements for ionosphere delay
Schreiner, W.S.; Markin, R.E.; Born, G.H.
1997-03-01
Satellite altimetry has become a very powerful tool for the study of ocean circulation and variability, and provides data for understanding important issues related to climate and global change. This study is a preliminary analysis of the accuracy of various ionosphere models for correcting single-frequency altimeter height measurements for ionospheric path delay. In particular, the research focused on adjusting the empirical and parameterized ionosphere models in the parameterized real-time ionospheric specification model (PRISM) 1.2 using total electron content (TEC) data from the global positioning system (GPS). The types of GPS data used to adjust PRISM included GPS line-of-sight (LOS) TEC data mapped to the vertical, and a grid of GPS-derived TEC data in a sun-fixed longitude frame. The adjusted PRISM TEC values, as well as predictions by IRI-90, a climatological model, were compared to TOPEX/Poseidon (T/P) TEC measurements from the dual-frequency altimeter for a number of T/P tracks. When adjusted with GPS LOS data, the PRISM empirical model predicted TEC over 24 1-h data sets for a given local time to within a global error of 8.60 TECU rms during a midnight-centered ionosphere and 9.74 TECU rms during a noon-centered ionosphere. Using GPS-derived sun-fixed TEC data, the PRISM parameterized model predicted TEC within an error of 8.47 TECU rms centered at midnight and 12.83 TECU rms centered at noon.
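The first-order ionospheric group delay underlying such corrections is 40.3·TEC/f² (delay in meters, TEC in electrons/m², f in Hz), so each TECU of model error maps directly to a range error. A quick sketch; the frequencies below are standard GPS L1 and TOPEX Ku-band values, not figures taken from the paper:

```python
def iono_delay_m(tec_tecu, freq_hz):
    # First-order ionospheric group delay: 40.3 * TEC / f^2,
    # with TEC converted from TECU (1 TECU = 1e16 electrons/m^2).
    return 40.3 * tec_tecu * 1e16 / freq_hz**2

print(iono_delay_m(8.60, 1575.42e6))  # ~1.4 m at GPS L1 for an 8.60 TECU rms error
print(iono_delay_m(8.60, 13.6e9))     # ~1.9 cm at the TOPEX Ku-band frequency
```

The 1/f² dependence shows why the same TEC error that is severe at L-band is only centimeter-level at altimeter frequencies, and why dual-frequency altimeters can measure TEC directly.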
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS-recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
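The innermost (n, n-16) cyclic redundancy check in the scheme above is the 16-bit CCITT CRC recommended by CCSDS (polynomial 0x1021, initial value 0xFFFF, no reflection). A minimal sketch of that variant, assuming the standard parameters:

```python
def crc16_ccsds(data: bytes) -> int:
    """CRC-16/CCITT-FALSE as used for the CCSDS (n, n-16) code:
    polynomial 0x1021, initial value 0xFFFF, MSB-first, no final XOR."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8                 # bring next byte into the high bits
        for _ in range(8):
            if crc & 0x8000:             # top bit set: shift and reduce
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

Any single flipped bit changes the checksum, which is what lets this innermost code flag residual errors that survive the RS and convolutional stages.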
NASA Astrophysics Data System (ADS)
Noble, Viveca K.
1994-10-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS-recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
NASA Astrophysics Data System (ADS)
Yang, Feng; Li, Kwok H.; Teh, Kah C.
2006-12-01
Carrier frequency offset (CFO) is a serious drawback in orthogonal frequency division multiplexing (OFDM) systems. It must be estimated and compensated before demodulation to guarantee the system performance. In this paper, we examine the performance of a blind minimum output variance (MOV) estimator. Based on the derived probability density function (PDF) of the output magnitude, its mean and variance are obtained and it is observed that the variance reaches the minimum when there is no frequency offset. This observation motivates the development of the proposed MOV estimator. The theoretical mean-square error (MSE) of the MOV estimator over an AWGN channel is obtained. The analytical results are in good agreement with the simulation results. The performance evaluation of the MOV estimator is extended to a frequency-selective fading channel and the maximal-ratio combining (MRC) technique is applied to enhance the MOV estimator's performance. Simulation results show that the MRC technique significantly improves the accuracy of the MOV estimator.
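The minimum-output-variance idea is easy to illustrate numerically. The sketch below is a noise-free toy, not the authors' derivation: with constant-modulus (QPSK) subcarriers, the variance of the FFT output magnitude drops to zero exactly when the trial offset matches the true CFO, so scanning a grid of trial offsets recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# One OFDM symbol with QPSK (constant-modulus) subcarriers
X = np.exp(1j * np.pi / 4) * (1j ** rng.integers(0, 4, N))
x = np.fft.ifft(X)

eps_true = 0.13                      # CFO, normalized to subcarrier spacing
n = np.arange(N)
rx = x * np.exp(2j * np.pi * eps_true * n / N)

def mov_estimate(rx, grid):
    """Pick the trial offset whose compensated FFT output magnitude
    has minimum variance (zero for a perfect match, noise-free)."""
    costs = []
    for e in grid:
        Y = np.fft.fft(rx * np.exp(-2j * np.pi * e * n / N))
        costs.append(np.var(np.abs(Y)))
    return grid[int(np.argmin(costs))]

grid = np.linspace(-0.5, 0.5, 1001)
eps_hat = mov_estimate(rx, grid)
```

With noise, the cost no longer reaches zero but its minimum stays near the true offset, which is the observation the paper's MSE analysis quantifies.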
Joint angle and Doppler frequency estimation of coherent targets in monostatic MIMO radar
NASA Astrophysics Data System (ADS)
Cao, Renzheng; Zhang, Xiaofei
2015-05-01
This paper discusses the problem of joint direction of arrival (DOA) and Doppler frequency estimation of coherent targets in a monostatic multiple-input multiple-output radar. In the proposed algorithm, we perform a reduced dimension (RD) transformation on the received signal first and then use forward spatial smoothing (FSS) technique to decorrelate the coherence and obtain joint estimation of DOA and Doppler frequency by exploiting the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. The joint estimated parameters of the proposed RD-FSS-ESPRIT are automatically paired. Compared with the conventional FSS-ESPRIT algorithm, our RD-FSS-ESPRIT algorithm has much lower complexity and better estimation performance of both DOA and frequency. The variance of the estimation error and the Cramer-Rao Bound of the DOA and frequency estimation are derived. Simulation results show the effectiveness and improvement of our algorithm.
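The decorrelation step can be sketched in isolation. Below, forward spatial smoothing restores the rank of a covariance matrix collapsed by two fully coherent targets; the reduced-dimension transform and the ESPRIT stage of the paper's algorithm are omitted, and the half-wavelength uniform linear array is an assumption of this sketch.

```python
import numpy as np

def forward_spatial_smoothing(R, L):
    """Average the K = M - L + 1 overlapping L x L subblocks of R along
    its diagonal; this restores rank lost to coherent sources."""
    M = R.shape[0]
    K = M - L + 1
    return sum(R[k:k + L, k:k + L] for k in range(K)) / K

M, L = 8, 5

def steer(theta):
    # Half-wavelength ULA steering vector
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# Two fully coherent targets: the array covariance collapses to rank 1
s = steer(0.3) + 0.8 * steer(-0.5)
R = np.outer(s, s.conj())
Rs = forward_spatial_smoothing(R, L)   # rank restored to number of targets
```

After smoothing, a subspace method such as ESPRIT can again separate the two targets, at the cost of the reduced effective aperture L.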
Error Estimation for Reduced Order Models of Dynamical Systems
Homescu, C; Petzold, L; Serban, R
2004-01-22
The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of small sample statistical condition estimation and error estimation using the adjoint method. Most importantly, the proposed approach allows the assessment of regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.
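For the POD truncation itself, a cheap a posteriori indicator exists: by the Eckart-Young theorem, the Frobenius-norm truncation error equals the root-sum-square of the discarded singular values. The sketch below illustrates only that forward error on synthetic snapshot data; it is not the adjoint-based estimation of errors in the reduced dynamics that the paper develops.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic snapshot matrix (space x time): a few modes plus small noise
t = np.linspace(0, 1, 200)
x = np.linspace(0, 1, 100)
S = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
     + 0.1 * np.outer(np.sin(3 * np.pi * x), np.sin(4 * np.pi * t))
     + 1e-3 * rng.standard_normal((100, 200)))

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
r = 2
S_r = U[:, :r] @ np.diag(sv[:r]) @ Vt[:r, :]     # rank-r POD reconstruction

err_true = np.linalg.norm(S - S_r)               # Frobenius norm of the error
err_est = np.sqrt(np.sum(sv[r:] ** 2))           # discarded singular values
```

The two quantities agree to machine precision, which is why the singular value tail is the standard first check on the validity of a POD basis.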
Dose error analysis for a scanned proton beam delivery system.
Coutrakon, G; Wang, N; Miller, D W; Yang, Y
2010-12-01
All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy. PMID:21076200
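The repeated-delivery procedure can be sketched in one dimension. Every number below (beam width, spot spacing, error magnitudes) is invented for illustration rather than taken from the Loma Linda measurements; the point is the structure: simulate many deliveries with random spot-position and intensity errors, then take the per-voxel rms.

```python
import numpy as np

rng = np.random.default_rng(42)
voxels = np.linspace(0, 80, 81)      # 1 mm grid across an 8 cm line target
spots = np.arange(2.5, 80, 5.0)      # spot centers, 5 mm spacing (assumed)
sigma = 5.0                          # pencil-beam width in mm (assumed)

def deliver(pos_err_mm=0.5, intensity_err=0.01):
    """One simulated treatment with random spot-position and intensity errors."""
    dose = np.zeros_like(voxels)
    for c in spots:
        c_err = c + rng.normal(0, pos_err_mm)
        w = 1 + rng.normal(0, intensity_err)
        dose += w * np.exp(-0.5 * ((voxels - c_err) / sigma) ** 2)
    return dose

runs = np.array([deliver() for _ in range(200)])
rms_error = runs.std(axis=0) / runs.mean(axis=0)   # relative rms per voxel
```

With tighter error magnitudes or wider beams the per-voxel rms shrinks, which is the trade-off the paper quantifies against clinical tolerances.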
Explaining Errors in Children's Questions
ERIC Educational Resources Information Center
Rowland, Caroline F.
2007-01-01
The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…
Dyslexia and Oral Reading Errors
ERIC Educational Resources Information Center
Singleton, Chris
2005-01-01
Thomson was the first of very few researchers to have studied oral reading errors as a means of addressing the question: Are dyslexic readers different to other readers? Using the Neale Analysis of Reading Ability and Goodman's taxonomy of oral reading errors, Thomson concluded that dyslexic readers are different, but he found that they do not…
Children's Scale Errors with Tools
ERIC Educational Resources Information Center
Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi
2011-01-01
Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…
Robustness and modeling error characterization
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.; Castanon, D. A.; Sandell, N. R., Jr.; Levy, B. C.; Athans, M.; Stein, G.
1984-01-01
The results on robustness theory presented here are extensions of those given in Lehtomaki et al., (1981). The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error, as well as its magnitude, to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.
Human Error: A Concept Analysis
NASA Technical Reports Server (NTRS)
Hansen, Frederick D.
2007-01-01
Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.
Dual Processing and Diagnostic Errors
ERIC Educational Resources Information Center
Norman, Geoff
2009-01-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…
Measurement Errors in Organizational Surveys.
ERIC Educational Resources Information Center
Dutka, Solomon; Frankel, Lester R.
1993-01-01
Describes three classes of measurement techniques: (1) interviewing methods; (2) record retrieval procedures; and (3) observation methods. Discusses primary reasons for measurement error. Concludes that, although measurement error can be defined and controlled for, there are other design factors that also must be considered. (CFR)
Barriers to Medical Error Reporting
Poorolajal, Jalal; Rezaie, Shirin; Aghighi, Negar
2015-01-01
Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 year age group (67.6%), in less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement. PMID:26605018
Higgins, Jane; Bezjak, Andrea; Hope, Andrew; Panzarella, Tony; Li, Winnie; Cho, John B.C.; Craig, Tim; Brade, Anthony; Sun, Alexander; Bissonnette, Jean-Pierre
2011-08-01
Purpose: To assess the relative effectiveness of five image-guidance (IG) frequencies on reducing patient positioning inaccuracies and setup margins for locally advanced lung cancer patients. Methods and Materials: Daily cone-beam computed tomography data for 100 patients (4,237 scans) were analyzed. Subsequently, four less-than-daily IG protocols were simulated using these data (no IG, first 5-day IG, weekly IG, and alternate-day IG). The frequency and magnitude of residual setup error were determined. The less-than-daily IG protocols were compared against the daily IG, the assumed reference standard. Finally, the population-based setup margins were calculated. Results: With the less-than-daily IG protocols, 20-43% of fractions incurred residual setup errors ≥5 mm; daily IG reduced this to 6%. With the exception of the first 5-day IG, reductions in systematic error (Σ) occurred as the imaging frequency increased, and only daily IG provided notable random error (σ) reductions (Σ = 1.5-2.2 mm, σ = 2.5-3.7 mm; Σ = 1.8-2.6 mm, σ = 2.5-3.7 mm; and Σ = 0.7-1.0 mm, σ = 1.7-2.0 mm for no IG, first 5-day IG, and daily IG, respectively). An overall significant difference in the mean setup error was present between the first 5-day IG and daily IG (p < .0001). The derived setup margins were 5-9 mm for less-than-daily IG and were 3-4 mm with daily IG. Conclusion: Daily cone-beam computed tomography substantially reduced the setup error and could permit setup margin reduction and lead to a reduction in normal tissue toxicity for patients undergoing conventionally fractionated lung radiotherapy. Using first 5-day cone-beam computed tomography was suboptimal for lung patients, given the inability to reduce the random error and the potential for the systematic error to increase throughout the treatment course.
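Population setup margins of this kind are commonly derived with the van Herk recipe M = 2.5Σ + 0.7σ. Assuming that formula (the abstract does not name its margin recipe), the reported Σ and σ ranges reproduce the quoted margins reasonably well:

```python
def setup_margin(Sigma, sigma):
    """van Herk population margin recipe, in mm: M = 2.5*Sigma + 0.7*sigma,
    where Sigma is the systematic and sigma the random setup error SD."""
    return 2.5 * Sigma + 0.7 * sigma

# Reported error ranges (mm): upper ends for no IG and for daily CBCT
no_ig = setup_margin(2.2, 3.7)   # about 8.1 mm, inside the 5-9 mm range
daily = setup_margin(1.0, 2.0)   # 3.9 mm, matching the 3-4 mm daily margin
```

The recipe makes the mechanism of the paper's conclusion visible: daily imaging buys margin mostly by shrinking Σ, which enters with the 2.5 weight.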
Reducing latent errors, drift errors, and stakeholder dissonance.
Samaras, George M
2012-01-01
Healthcare information technology (HIT) is being offered as a transformer of modern healthcare delivery systems. Some believe that it has the potential to improve patient safety, increase the effectiveness of healthcare delivery, and generate significant cost savings. In other industrial sectors, information technology has dramatically influenced quality and profitability - sometimes for the better and sometimes not. Quality improvement efforts in healthcare delivery have not yet produced the dramatic results obtained in other industrial sectors. This may be because previously successful quality improvement experts do not possess the requisite domain knowledge (clinical experience and expertise). It also appears related to a continuing misconception regarding the origins and meaning of work errors in healthcare delivery. The focus here is on system use errors rather than individual user errors. System use errors originate in both the development and the deployment of technology. Not recognizing stakeholders and their conflicting needs, wants, and desires (NWDs) may lead to stakeholder dissonance. Mistakes translating stakeholder NWDs into development or deployment requirements may lead to latent errors. Mistakes translating requirements into specifications may lead to drift errors. At the sharp end, workers encounter system use errors or, recognizing the risk, expend extensive and unanticipated resources to avoid them. PMID:22317001
Onorbit IMU alignment error budget
NASA Technical Reports Server (NTRS)
Corson, R. W.
1980-01-01
The errors of the Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU), which together form a complex navigation system with a multitude of error sources, were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
Posterior error probability in the Mu-2 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors on 0.2% of the acquisitions.
Waveform error analysis for bistatic synthetic aperture radar systems
NASA Astrophysics Data System (ADS)
Adams, J. W.; Schifani, T. M.
The signal phase histories at the transmitter, receiver, and radar signal processor in bistatic SAR systems are described. The fundamental problem of mismatches in the waveform generators for the illuminating and receiving radar systems is analyzed. The effects of errors in carrier frequency and chirp slope are analyzed for bistatic radar systems which use linear FM waveforms. It is shown that the primary effect of a mismatch in carrier frequencies is an azimuth displacement of the image.
On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion.
Ricci, Luca; Taffoni, Fabrizio; Formica, Domenico
2016-01-01
The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements that differ in frequency and amplitude. We found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement, and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Absolute and relative error upper bounds are found in the ranges [0.7°, 8.2°] and [1.0°, 10.3°], respectively. Alongside the dynamic case, static accuracy is thoroughly investigated, also with an emphasis on the convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for human motion applications. PMID:27612100
Regional Ionospheric Modelling for Single-Frequency Users
NASA Astrophysics Data System (ADS)
Boisits, Janina; Joldzic, Nina; Weber, Robert
2016-04-01
Ionospheric signal delays are a main error source in GNSS-based positioning. Thus, single-frequency receivers, which are frequently used nowadays, require additional ionospheric information to mitigate these effects. Within the Austrian Research Promotion Agency (FFG) project Regiomontan (Regional Ionospheric Modelling for Single-Frequency Users) a new and as realistic as possible model is used to obtain precise GNSS ionospheric signal delays. These delays will be provided to single-frequency users to significantly increase positioning accuracy. The computational basis is the Thin-Shell Model. For regional modelling a thin electron layer of the underlying model is approximated by a Taylor series up to degree two. The network used includes 22 GNSS Reference Stations in Austria and nearby. First results were calculated from smoothed code observations by forming the geometry-free linear combination. Satellite and station DCBs were applied. In a least squares adjustment the model parameters, consisting of the VTEC0 at the origin of the investigated area, as well as the first and the second derivatives of the electron content in longitude and latitude, were obtained with a temporal resolution of 1 hour. The height of the layer was kept fixed. The formal errors of the model parameters suggest an accuracy of the VTEC slightly better than 1TECU for a user location within Austria. In a further step, the model parameters were derived from sole phase observations by using a levelling approach to mitigate common range biases. The formal errors of this model approach suggest an accuracy of about a few tenths of a TECU. For validation, the Regiomontan VTEC was compared to IGS TEC maps depicting a very good agreement. Further, a comparison of pseudoranges has been performed to calculate the 'true' error by forming the ionosphere-free linear combination on the one hand, and by applying the Regiomontan model to L1 pseudoranges on the other hand. The resulting differences are mostly
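The degree-two Taylor-series fit described above reduces to an ordinary least-squares problem. The sketch below is synthetic: station geometry, truth parameters, and noise level are invented, and the real system's handling of DCBs, mapping to the thin shell, and phase levelling is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
# 22 stations: offsets from the model origin (degrees) and observed VTEC (TECU)
dlon = rng.uniform(-4, 4, 22)
dlat = rng.uniform(-3, 3, 22)
truth = np.array([12.0, 0.8, -1.1, 0.05, -0.02, 0.04])  # VTEC0 and derivatives

# Degree-2 Taylor design matrix: 1, dlon, dlat, dlon^2, dlon*dlat, dlat^2
A = np.column_stack([np.ones_like(dlon), dlon, dlat,
                     dlon**2, dlon * dlat, dlat**2])
vtec_obs = A @ truth + rng.normal(0, 0.3, dlon.size)    # 0.3 TECU noise

params, *_ = np.linalg.lstsq(A, vtec_obs, rcond=None)   # hourly model fit
```

The formal errors quoted in the abstract correspond to the parameter covariance of exactly this kind of adjustment, repeated once per hour with the shell height held fixed.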
Quantum rms error and Heisenberg's error-disturbance relation
NASA Astrophysics Data System (ADS)
Busch, Paul
2014-09-01
Reports on experiments recently performed in Vienna [Erhard et al., Nature Phys. 8, 185 (2012)] and Toronto [Rozema et al., Phys. Rev. Lett. 109, 100404 (2012)] include claims of a violation of Heisenberg's error-disturbance relation. In contrast, a Heisenberg-type tradeoff relation for joint measurements of position and momentum has been formulated and proven in [Phys. Rev. Lett. 111, 160405 (2013)]. Here I show how the apparent conflict is resolved by a careful consideration of the quantum generalization of the notion of root-mean-square error. The claim of a violation of Heisenberg's principle is untenable as it is based on a historically wrong attribution of an incorrect relation to Heisenberg, which is in fact trivially violated. We review a new general trade-off relation for the necessary errors in approximate joint measurements of incompatible qubit observables that is in the spirit of Heisenberg's intuitions. The experiments mentioned may be used directly to test this new error inequality.
Syntactic and Semantic Errors in Radiology Reports Associated With Speech Recognition Software.
Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J
2015-01-01
Speech recognition software (SRS) has many benefits, but also increases the frequency of errors in radiology reports, which could impact patient care. As part of a quality control project, 13 trained medical transcriptionists proofread 213,977 SRS-generated signed reports from 147 different radiologists over a 40-month time interval. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialties, and time periods using χ² analysis and multiple logistic regression, as appropriate. 20,759 (9.7%) reports contained errors; 3,992 (1.9%) contained material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (P<.001). Error proportion varied significantly among radiologists and between imaging subspecialties (P<.001). Errors were more common in cross-sectional reports (vs. plain radiography) (OR, 3.72), reports reinterpreting results of outside examinations (vs. in-house) (OR, 1.55), and procedural studies (vs. diagnostic) (OR, 1.91) (all P<.001). Dictation microphone upgrade did not affect error rate (P=.06). Error rate decreased over time (P<.001). PMID:26262224
Error estimates for Gaussian quadratures of analytic functions
NASA Astrophysics Data System (ADS)
Milovanovic, Gradimir V.; Spalevic, Miodrag M.; Pranic, Miroslav S.
2009-12-01
For analytic functions the remainder term of a Gaussian quadrature formula and its Kronrod extension can be represented as a contour integral with a complex kernel. We study these kernels on elliptic contours with foci at the points ±1 and the sum of semi-axes ϱ > 1 for the Chebyshev weight functions of the first, second and third kind, and derive a representation of their difference. Using this representation and following Kronrod's method of obtaining a practical error estimate in numerical integration, we derive new error estimates for Gaussian quadratures.
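Kronrod's practical-estimate idea (compare a rule against a higher-order companion) is easy to demonstrate numerically; the sketch below substitutes a second Gauss rule for a true Kronrod extension. For a function analytic inside an ellipse with semi-axis sum ϱ, the error decays roughly like ϱ^(-2n), so the 10-point value is accurate enough to stand in for the exact integral here.

```python
import numpy as np

def gauss_legendre(f, n):
    """n-point Gauss-Legendre approximation of the integral of f on [-1, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return w @ f(x)

# Analytic on [-1, 1]; nearest poles at x = +/- i limit the ellipse to
# semi-axis sum rho = 1 + sqrt(2), so Gauss errors decay geometrically.
f = lambda x: np.exp(x) / (1 + x**2)

I_5 = gauss_legendre(f, 5)
I_10 = gauss_legendre(f, 10)
err_est = abs(I_10 - I_5)                       # practical error estimate for I_5
err_true = abs(gauss_legendre(f, 40) - I_5)     # 40-point value as reference
```

Because the 10-point error is orders of magnitude below the 5-point error, the practical estimate and the true error of the 5-point rule agree closely.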
Frequency synthesizers for telemetry receivers
NASA Astrophysics Data System (ADS)
Stirling, Ronald C.
1990-07-01
The design of a frequency synthesizer is presented for telemetry receivers. The synthesizer contains two phase-locked loops, each with a programmable frequency counter, and incorporates fractional frequency synthesis but does not use a phase accumulator. The selected receiver design has a variable reference loop operating as a part of the output loop. Within the synthesizer, a single VTO generates the output frequency that is voltage-tunable from 375-656 MHz. The single-sideband phase noise is measured with an HP 8566B spectrum analyzer, and the receiver's bit error rate (BER) is measured with a carrier frequency of 250 MHz, synthesized LO at 410 MHz, and the conditions of BPSK, NRZ-L, and 2.3 kHz bit rate. The phase noise measurement limits and the BER performance data are presented in tabular form.
Error compensation for thermally induced errors on a machine tool
Krulewich, D.A.
1996-11-08
Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulties are deciding where to locate the temperature sensors and determining how many are required. This research develops a method to determine the number and location of temperature measurements.
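The selected model can be sketched directly: stack the discrete temperature readings into a design matrix and solve for the sensitivity coefficients by least squares. All numbers below (sensor count, sensitivities, noise level) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
# N observations of m temperature sensors and the resulting tool deflection
N, m = 50, 4
T = rng.uniform(18, 30, (N, m))                  # sensor readings (deg C)
c_true = np.array([2.1, -0.7, 0.3, 1.5])         # um of deflection per deg C
defl = T @ c_true + rng.normal(0, 0.5, N)        # measured deflection (um)

# Fit the linear compensation model: deflection ~ T @ c
c_hat, *_ = np.linalg.lstsq(T, defl, rcond=None)
residual_rms = np.sqrt(np.mean((defl - T @ c_hat) ** 2))
```

The sensor-placement question the abstract raises shows up here as the conditioning of T: redundant or poorly placed sensors make the columns nearly collinear and the fitted coefficients unstable.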
Discrete derivative estimation in LISA Pathfinder data reduction
NASA Astrophysics Data System (ADS)
Ferraioli, Luigi; Hueller, Mauro; Vitale, Stefano
2009-05-01
Data analysis for the LISA Technology Package (LTP) experiment to be flown aboard the LISA Pathfinder mission requires the solution of the system dynamics for the calculation of the force acting on the test masses (TMs) starting from interferometer position data. The need for a solution to this problem has prompted us to implement a discrete time-domain derivative estimator suited for the LTP experiment requirements. We first report on the mathematical procedures for the definition of two methods; the first is based on a parabolic fit approximation and the second on a Taylor series expansion. These two methods are then generalized and incorporated into a more general class of five-point discrete derivative estimators. The same procedure employed for the second derivative can be applied to the estimation of the first derivative and of a data smoother, allowing a class of simple five-point estimators to be defined for both. The performances of three particular realizations of the five-point second-derivative estimator are analyzed with simulated noisy data. This analysis pointed out that estimators introducing a large amount of high-frequency noise can cause systematic errors in the estimation of low-frequency noise levels.
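The classical five-point central stencil is one member of the estimator class described (which particular realization the paper favors is not stated here). A sketch on noise-free synthetic position data:

```python
import numpy as np

def second_derivative_5pt(y, dt):
    """Five-point central estimator of y'' (interior samples only):
    (-y[i-2] + 16 y[i-1] - 30 y[i] + 16 y[i+1] - y[i+2]) / (12 dt^2)."""
    return (-y[:-4] + 16 * y[1:-3] - 30 * y[2:-2]
            + 16 * y[3:-1] - y[4:]) / (12 * dt**2)

dt = 1e-3
t = np.arange(0, 1, dt)
pos = np.sin(2 * np.pi * t)                    # interferometer-like position data
acc = second_derivative_5pt(pos, dt)           # estimated second derivative
acc_exact = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t[2:-2])
```

On noisy data the same stencil amplifies high-frequency noise by roughly 1/dt², which is exactly the effect the paper's noise analysis is concerned with.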
Stochastic Models of Human Errors
NASA Technical Reports Server (NTRS)
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the space shuttle.
Laser tracker error determination using a network measurement
NASA Astrophysics Data System (ADS)
Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim
2011-04-01
We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.
Error prediction for probes guided by means of fixtures
NASA Astrophysics Data System (ADS)
Fitzpatrick, J. Michael
2012-02-01
Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides. This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.
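The core computation the theory extends — weighted rigid point registration and the resulting target registration error — can be sketched as follows. The anchor positions, noise level, and scalar weights are illustrative; the paper derives full weighting matrices from stiffness matrices rather than the isotropic weights used here:

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_rigid_fit(A, B, w):
    # Weighted Kabsch: find R, t minimizing sum_i w_i ||R a_i + t - b_i||^2.
    w = w / w.sum()
    ca, cb = w @ A, w @ B                      # weighted centroids
    H = (A - ca).T @ (w[:, None] * (B - cb))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

# Illustrative fiducials (bone anchors) and a deeper surgical target.
anchors = rng.uniform(-50.0, 50.0, size=(6, 3))
target = np.array([10.0, 20.0, 60.0])

# Ground-truth rigid motion between image space and physical space.
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 2.0])

# Anchor localization error (0.2 mm isotropic, made up for the sketch).
noisy = (anchors @ R_true.T + t_true) + rng.normal(0, 0.2, anchors.shape)
R, t = weighted_rigid_fit(anchors, noisy, np.ones(len(anchors)))

# Target registration error: how far the registered target lands from truth.
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
```

The paper's contribution is showing that guide fabrication error and external loads fold into this same expression once the weights come from stiffness matrices and the fiducial covariances are augmented with anchor offsets.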
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
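The patch-level versus landscape-level thresholding distinction can be made concrete with a small numerical sketch. All densities, areas, energy values, and the daily requirement below are invented for illustration, not taken from the case studies:

```python
import numpy as np

# Hypothetical landscape: food density (kg/ha) and area (ha) per patch,
# a foraging threshold (kg/ha) below which feeding is unprofitable,
# food energy content (kcal/g), and a daily requirement (kcal/bird/day).
density = np.array([10.0, 40.0, 120.0, 300.0])
area = np.array([5.0, 5.0, 5.0, 5.0])
threshold, tme, daily_req = 50.0, 3.0, 300.0

# Patch-level thresholding (consistent with foraging theory): only food
# above the threshold, evaluated patch by patch, is available.
food_patch = np.sum(np.maximum(density - threshold, 0.0) * area)     # kg

# Landscape-level thresholding (a past method shown to be biased):
# subtracting the threshold from the landscape mean density instead.
food_landscape = max(density.mean() - threshold, 0.0) * area.sum()   # kg

# Energetic carrying capacity in bird-energy-days.
bed = food_patch * 1000.0 * tme / daily_req
```

The two thresholding rules disagree (1600 kg vs. 1350 kg here) because patches below the threshold offset patches above it in the landscape mean; extrapolated over large spatial extents, this kind of discrepancy is exactly the propagated error the abstract describes.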
Detection and frequency tracking of chirping signals
Elliott, G.R.; Stearns, S.D.
1990-08-01
This paper discusses several methods to detect the presence of and track the frequency of a chirping signal in broadband noise. The dynamic behavior of each of the methods is described and tracking error bounds are investigated in terms of the chirp rate. Frequency tracking and behavior in the presence of varying levels of noise are illustrated in examples. 11 refs., 29 figs.
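One of the simplest detection-and-tracking schemes in this family — short-time FFT peak picking, with the search window constrained by the previous estimate so the tracker follows the chirp — can be sketched as follows. The sampling rate, chirp parameters, noise level, and frame sizes are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 1000.0, 1.0
t = np.arange(int(fs * dur)) / fs
f0, rate = 50.0, 200.0                         # 50 Hz start, 200 Hz/s chirp rate
phase = 2 * np.pi * (f0 * t + 0.5 * rate * t**2)
x = np.sin(phase) + rng.normal(0, 0.5, t.size)  # chirp in broadband noise

N, hop = 128, 64
freqs = np.fft.rfftfreq(N, 1 / fs)
est, truth = [], []
k_prev = None
for start in range(0, x.size - N, hop):
    spec = np.abs(np.fft.rfft(x[start:start + N] * np.hanning(N)))
    if k_prev is None:
        k = int(np.argmax(spec))               # initial detection: global peak
    else:
        lo, hi = max(k_prev - 3, 0), min(k_prev + 4, spec.size)
        k = lo + int(np.argmax(spec[lo:hi]))   # track: search near last estimate
    k_prev = k
    est.append(freqs[k])
    tc = (start + N / 2) / fs                  # frame-centre time
    truth.append(f0 + rate * tc)               # true instantaneous frequency

mean_err = np.abs(np.array(est) - np.array(truth)).mean()
```

The tracking error here is bounded below by the FFT bin spacing (fs/N ≈ 7.8 Hz) and grows with the chirp rate, since the frequency sweep within each frame smears the spectral peak — the trade-off the paper analyzes.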
NASA Astrophysics Data System (ADS)
Ottino-Löffler, Bertrand; Strogatz, Steven H.
2016-09-01
We study the dynamics of coupled phase oscillators on a two-dimensional Kuramoto lattice with periodic boundary conditions. For coupling strengths just below the transition to global phase-locking, we find localized spatiotemporal patterns that we call "frequency spirals." These patterns cannot be seen under time averaging; they become visible only when we examine the spatial variation of the oscillators' instantaneous frequencies, where they manifest themselves as two-armed rotating spirals. In the more familiar phase representation, they appear as wobbly periodic patterns surrounding a phase vortex. Unlike the stationary phase vortices seen in magnetic spin systems, or the rotating spiral waves seen in reaction-diffusion systems, frequency spirals librate: the phases of the oscillators surrounding the central vortex move forward and then backward, executing a periodic motion with zero winding number. We construct the simplest frequency spiral and characterize its properties using analytical and numerical methods. Simulations show that frequency spirals in large lattices behave much like this simple prototype.
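A minimal simulation of the underlying system — identical phase oscillators on a periodic 2D lattice with nearest-neighbour coupling — is sketched below. The lattice size, coupling, and integration parameters are illustrative, not the paper's; for identical oscillators the dynamics is a gradient flow, so an energy function decreases and the instantaneous frequencies decay as the lattice approaches a phase-locked state:

```python
import numpy as np

rng = np.random.default_rng(5)
n, K, dt, steps = 16, 1.0, 0.05, 4000

# Identical oscillators on an n-by-n lattice, periodic boundaries,
# coupled to the four nearest neighbours. Random initial phases.
theta = rng.uniform(0, 2 * np.pi, size=(n, n))

def velocity(theta):
    # Instantaneous frequency d(theta)/dt at every lattice site.
    v = np.zeros_like(theta)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        v += np.sin(np.roll(theta, shift, axis=axis) - theta)
    return K * v

def energy(theta):
    # Lyapunov function of the gradient flow (one term per lattice edge).
    e = 0.0
    for shift, axis in [(1, 0), (1, 1)]:
        e -= np.cos(np.roll(theta, shift, axis=axis) - theta).sum()
    return K * e

e0, v0 = energy(theta), np.abs(velocity(theta)).mean()
for _ in range(steps):
    theta += dt * velocity(theta)          # forward-Euler integration
e1, v1 = energy(theta), np.abs(velocity(theta)).mean()
```

Plotting `velocity(theta)` rather than `theta` during the transient is the kind of instantaneous-frequency view in which the paper's frequency spirals become visible; the time-averaged frequency field hides them.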
Canonical Correlation Analysis that Incorporates Measurement and Sampling Error Considerations.
ERIC Educational Resources Information Center
Thompson, Bruce; Daniel, Larry
Multivariate methods are being used with increasing frequency in educational research because these methods control "experimentwise" error rate inflation, and because the methods best honor the nature of the reality to which the researcher wishes to generalize. This paper: explains the basic logic of canonical analysis; illustrates that canonical…
Shape error analysis for reflective nano focusing optics
Modi, Mohammed H.; Idir, Mourad
2010-06-23
Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields, so the converging beam at the focal spot will differ from the desired one. The effect of these errors on focusing performance can be calculated by a wave-optical approach, considering coherent wave-field illumination of the optical elements. We have developed a wave-optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as an aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range spanning the high-, mid- and low-frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak-to-valley) is tolerable to achieve diffraction-limited performance. It is desirable to remove shape errors at very low frequencies, such as 0.1 mm⁻¹, which otherwise generate a beam waist or satellite peaks. Frequencies above this limit do not affect the focused beam profile but only cause a loss in intensity.
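The connection between a figure error and lost focal intensity can be sketched with a scalar 1D pupil. This is not the paper's Fresnel-Kirchhoff simulator or its LTP data: the wavelength, the sinusoidal λ/4 PV wavefront error, and the on-axis Strehl-ratio criterion below are illustrative assumptions:

```python
import numpy as np

lam = 13.5e-9                 # illustrative wavelength, not from the paper
x = np.linspace(0, 1, 4096, endpoint=False)   # normalized pupil coordinate

# Sinusoidal wavefront error with lambda/4 peak-to-valley (amplitude lambda/8).
W = (lam / 8) * np.sin(2 * np.pi * 7 * x)
phi = 2 * np.pi * W / lam     # phase error across the pupil (PV = pi/2 rad)

# On-axis Strehl ratio: focal-spot peak intensity relative to an
# aberration-free pupil, from the pupil average of exp(i*phi).
strehl = np.abs(np.exp(1j * phi).mean()) ** 2

# Extended Marechal approximation for comparison.
marechal = np.exp(-np.var(phi))
```

Even at λ/4 PV the on-axis intensity stays above ~0.7 of the ideal value in this sketch, consistent with the abstract's point that mid/high-frequency errors above the critical band mainly cost intensity rather than distorting the focused beam profile.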
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norm of the error in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and the proposed method was found to give the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
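The error measures the design problem is built from can be sketched by evaluating them for a given linear-phase prototype. The filter below is a plain Hamming-windowed half-band sinc (not a filter from the article), and the band edges are illustrative; the point is only how ϕp, ϕs, and a peak-reconstruction-ripple measure are computed in the frequency domain:

```python
import numpy as np

# Simple linear-phase lowpass prototype h (Hamming-windowed half-band sinc).
N = 32
n = np.arange(N)
h = 0.5 * np.sinc(0.5 * (n - (N - 1) / 2)) * np.hamming(N)

w = np.linspace(0, np.pi, 1024)
dw = w[1] - w[0]
H = np.abs(np.exp(-1j * np.outer(w, n)) @ h)       # amplitude response |H0|

wp, ws = 0.4 * np.pi, 0.6 * np.pi                  # illustrative band edges
phi_p = np.sum((1 - H[w <= wp]) ** 2) * dw         # passband error (L2)
phi_s = np.sum(H[w >= ws] ** 2) * dw               # stopband error (L2)

# Amplitude distortion of the 2-channel bank with H1(z) = H0(-z):
# perfect reconstruction would make |H0(w)|^2 + |H0(pi-w)|^2 constant.
A = H ** 2 + np.abs(np.exp(-1j * np.outer(np.pi - w, n)) @ h) ** 2
pre = np.max(np.abs(A - A.mean()))                 # peak reconstruction ripple
```

An optimizer (gradient-based, CS, PSO, etc.) would adjust the coefficients of `h` to drive a weighted sum of these terms down; the large `pre` of this naive prototype is exactly what such a design procedure trades away.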
Robust characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph
2016-04-01
Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
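The estimation step such a protocol relies on — inferring a per-gate leakage rate from the decay of the stay-in-subspace probability with sequence length — can be sketched with synthetic data. This is a deliberately simplified model (pure exponential decay, no seepage back into the subspace, no randomized sequences), not the authors' benchmarking protocol, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
L_true = 0.02                         # leakage probability per gate (illustrative)
lengths = np.array([1, 2, 4, 8, 16, 32, 64])
shots = 5000

# Simulate measured survival probabilities: binomial sampling stands in
# for projective-measurement statistics at each sequence length.
p = (1 - L_true) ** lengths
p_hat = rng.binomial(shots, p) / shots

# Fit log p_hat = m * log(1 - L) by least squares through the origin.
slope = np.sum(lengths * np.log(p_hat)) / np.sum(lengths ** 2)
L_est = 1 - np.exp(slope)
```

The robustness claims of the paper come from how the sequences are randomized so that the fitted decay isolates leakage from other Markovian noise; the fit itself remains a decay-rate estimate of this general shape.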
Static Detection of Disassembly Errors
Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K
2009-10-13
Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.
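The basic step a decision-tree learner repeats — choosing the feature and threshold that best separate correct from erroneous disassembly — can be sketched on synthetic data. The feature names and labeling rule below are invented stand-ins for whatever per-instruction features such a classifier would actually use:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for per-instruction features extracted from a
# disassembly (illustrative names): [length, byte entropy, opcode rarity].
X = rng.uniform(0, 1, size=(400, 3))
y = (X[:, 2] > 0.8).astype(int)        # "likely disassembly error" label

def fit_stump(X, y):
    # Exhaustive search for the (feature, threshold) split minimizing
    # weighted Gini impurity -- one node of a decision tree.
    best = (0, 0.5, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            gini = 0.0
            for part in (y[X[:, j] <= thr], y[X[:, j] > thr]):
                if part.size:
                    frac = part.mean()
                    gini += part.size / y.size * 2 * frac * (1 - frac)
            if gini < best[2]:
                best = (j, thr, gini)
    return best[0], best[1]

j, thr = fit_stump(X, y)
pred = (X[:, j] > thr).astype(int)
acc = (pred == y).mean()
```

A full tree recursively applies this split to each child node; the paper's contribution is in the feature engineering over real binaries and the evaluation against ground-truth disassemblies, not in the splitting rule itself.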
Orbital and Geodetic Error Analysis
NASA Technical Reports Server (NTRS)
Felsentreger, T.; Maresca, P.; Estes, R.
1985-01-01
Results that previously required several runs are determined in a more computer-efficient manner. Multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.
Prospective errors determine motor learning.
Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi
2015-01-01
Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model's novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
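A minimal Monte Carlo of the kind described — a measurement with an analytical uncertainty component plus a rare residual human-error event, with the event's share of the variance budget quantified — can be sketched as follows. The error probability, bias magnitude, and uncertainty values are invented for illustration and are not the published expert-judgment figures:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Illustrative inputs: analytical standard uncertainty u_a, residual
# human-error probability p per measurement, and the bias delta such an
# error introduces (with some spread) when it occurs.
u_a, p, delta = 0.02, 0.01, 0.15
analytical = rng.normal(0.0, u_a, n)
human = np.where(rng.random(n) < p, rng.normal(delta, 0.05, n), 0.0)
result = analytical + human

u_total = result.std()                   # combined standard uncertainty
contrib = 1 - (u_a / u_total) ** 2       # human-error share of the variance budget
```

With these made-up inputs the human-error term contributes a substantial but non-dominant fraction of the variance budget, mirroring the qualitative conclusion of the abstract: not negligible, yet not dominant.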