RCCS operation with a resonant frequency error in the KOMAC
NASA Astrophysics Data System (ADS)
Seo, Dong-Hyuk
2015-10-01
The resonance control cooling systems (RCCSs) of the Korea Multi-purpose Accelerator Complex have been operated to cool the drift tubes (DTs) and control the resonant frequency of the drift tube linac (DTL). The DTL must maintain a resonant frequency of 350 MHz during operation. An RCCS can control the cooling-water temperature to within ±0.1 °C by adjusting a 3-way valve opening, and it has both a constant-cooling-water-temperature control mode and a resonant-frequency control mode. In the resonant-frequency control mode, the frequency error is measured by the low-level radio-frequency control system, and the RCCS uses a proportional-integral-derivative control algorithm to compensate for the error by controlling the temperature of the cooling water supplied to the DTs.
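The control loop described above can be sketched as a discrete PID controller that drives the measured frequency error toward zero by adjusting the cooling-water temperature. This is a minimal illustrative model, not the KOMAC implementation: the gains, actuator rate, and the detuning sensitivity (kHz per degree Celsius) are all assumed values.

```python
# Sketch of a resonant-frequency control mode: a discrete PID loop nulls the
# measured frequency error by moving the cooling-water temperature.
# All gains, the valve/actuator rate, and the detuning sensitivity are
# illustrative assumptions, not KOMAC values.

def pid_step(error, state, kp=0.8, ki=0.2, kd=0.05, dt=1.0):
    """One PID update; state carries the integral and the previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def simulate(n_steps=200):
    water_temp_c = 20.0
    sensitivity = -2.0                     # assumed kHz of detuning per +1 degC
    freq_error_khz = 5.0                   # initial detuning from 350 MHz
    state = (0.0, freq_error_khz)
    for _ in range(n_steps):
        correction, state = pid_step(freq_error_khz, state)
        water_temp_c += 0.05 * correction  # slow 3-way-valve actuator
        freq_error_khz = 5.0 + sensitivity * (water_temp_c - 20.0)
    return freq_error_khz
```

Under these assumed gains the loop converges: the integral term removes the steady-state offset that a purely proportional valve command would leave.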
The Relative Frequency of Spanish Pronunciation Errors.
ERIC Educational Resources Information Center
Hammerly, Hector
Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…
Compensation Low-Frequency Errors in TH-1 Satellite
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Renxiang; Hu, Xin
2016-06-01
Topographic mapping products at 1:50,000 scale can be produced by satellite photogrammetry without ground control points (GCPs), which requires high accuracy in the exterior orientation elements. Usually, the attitudes of the exterior orientation elements are obtained from the attitude determination system on the satellite. Theoretical analysis and practice show that the attitude determination system contains not only high-frequency errors but also low-frequency errors related to the latitude of the satellite orbit and to time. The low-frequency errors affect the location accuracy achievable without GCPs, especially the horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct the attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. Accordingly, a method of compensating low-frequency errors is proposed for the ground image processing of TH-1, which can detect and compensate for the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows. First, the low-frequency errors of the attitude determination system are analyzed. Second, compensation models are proposed for the bundle adjustment. Finally, the method is verified using TH-1 data. The test results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, which improves the location accuracy without GCPs and plays an important role in the consistency of global location accuracy.
Frequency of Consonant Articulation Errors in Dysarthric Speech
ERIC Educational Resources Information Center
Kim, Heejin; Martin, Katie; Hasegawa-Johnson, Mark; Perlman, Adrienne
2010-01-01
This paper analyses consonant articulation errors in dysarthric speech produced by seven American-English native speakers with cerebral palsy. Twenty-three consonant phonemes were transcribed with diacritics as necessary in order to represent non-phoneme misarticulations. Error frequencies were examined with respect to six variables: articulatory…
Antenna pointing systematic error model derivations
NASA Technical Reports Server (NTRS)
Guiar, C. N.; Lansing, F. L.; Riggs, R.
1987-01-01
The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.
Frequency analysis of nonlinear oscillations via the global error minimization
NASA Astrophysics Data System (ADS)
Kalami Yazdi, M.; Hosseini Tehrani, P.
2016-06-01
The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), are illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and of the Duffing-harmonic oscillator are treated. To validate the method and exhibit its merit, the obtained result is compared with both the exact frequency and the outcomes of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can promisingly be applied to conservative nonlinear problems.
Frequency analysis of photoplethysmogram and its derivatives.
Elgendi, Mohamed; Fletcher, Richard R; Norton, Ian; Brearley, Matt; Abbott, Derek; Lovell, Nigel H; Schuurmans, Dale
2015-12-01
There are a limited number of studies on heat stress dynamics during exercise using the photoplethysmogram (PPG). We investigate the PPG signal and its derivatives for heat stress assessment using Welch (non-parametric) and autoregressive (parametric) spectral estimation methods. The preliminary results of this study indicate that applying the first and second derivatives to PPG waveforms is useful for determining heat stress level using 20-s recordings. Interestingly, Welch's and Yule-Walker's methods agree that the second derivative is an improved detector for heat stress. In fact, both spectral estimation methods showed a clear separation in the frequency domain between measurements taken before and after simulated heat-stress induction when the second derivative is applied. Moreover, the results demonstrate the superior performance of Welch's method over Yule-Walker's method in separating the measurements before and after the three simulated heat-stress inductions. PMID:26498064
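The two spectral estimators compared above can be sketched on a synthetic PPG-like signal. This is an illustrative reconstruction, not the study's pipeline: the signal, sampling rate, and AR model order are assumptions, and the Yule-Walker fit is implemented directly from the autocorrelation normal equations to stay self-contained.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0
t = np.arange(0, 20, 1 / fs)
# Synthetic PPG-like signal: a ~1.2 Hz "pulse", a weaker harmonic, mild noise.
rng = np.random.default_rng(0)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
ppg += 0.01 * rng.standard_normal(t.size)

# First and second derivatives, as applied to the PPG waveform in the study.
d1 = np.gradient(ppg, 1 / fs)
d2 = np.gradient(d1, 1 / fs)

# Non-parametric estimate (Welch).
f_w, pxx_w = welch(d2, fs=fs, nperseg=512)

# Parametric estimate: Yule-Walker AR model, solved from the autocorrelation
# sequence via the Toeplitz normal equations R a = r.
def yule_walker_psd(x, order, freqs, fs):
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])      # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]          # innovation variance
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = np.abs(1 - sum(a[k] * z ** (k + 1) for k in range(order))) ** 2
    return sigma2 / (fs * denom)

pxx_ar = yule_walker_psd(d2, order=8, freqs=f_w, fs=fs)
```

Differentiating twice weights the spectrum by the frequency squared per derivative, which is why the second derivative can sharpen separation between spectral states.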
NASA Astrophysics Data System (ADS)
Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing
2016-02-01
The spectral ratio method (SRM) is widely used to estimate the quality factor Q via linear regression of seismic attenuation under the assumption of a constant Q. However, an estimation error is introduced when this assumption is violated. For a frequency-dependent Q described by a power-law function, we derived an analytical expression for the estimation error as a function of the power-law exponent γ and the ratio σ of the bandwidth to the central frequency. Based on the theoretical analysis, we found that the estimation errors are mainly dominated by the exponent γ and are less affected by the ratio σ. This implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we propose a two-parameter regression method to estimate the frequency-dependent Q from the nonlinear seismic attenuation. The proposed method was tested using direct waves acquired in a near-surface cross-hole survey, and its reliability was evaluated by comparison with the result of the SRM.
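The constant-Q spectral ratio regression mentioned above can be sketched in a few lines: under constant Q, ln[A2(f)/A1(f)] = -(π f Δt / Q) + const, so Q follows from the slope of a straight-line fit in f. The synthetic spectra and travel-time difference below are illustrative assumptions.

```python
import numpy as np

# Constant-Q attenuation: ln[A2(f)/A1(f)] = -(pi * f * dt / Q) + const,
# so Q follows from the slope of a linear fit of the log spectral ratio in f.
def estimate_q_srm(freqs, spec_near, spec_far, travel_dt):
    ratio = np.log(spec_far / spec_near)
    slope, _ = np.polyfit(freqs, ratio, 1)
    return -np.pi * travel_dt / slope

# Synthetic check with a known Q = 50 and 0.1 s of extra travel time.
freqs = np.linspace(10.0, 80.0, 50)
q_true, dt = 50.0, 0.1
a_near = np.exp(-(freqs / 30.0) ** 2)                 # arbitrary source spectrum
a_far = a_near * np.exp(-np.pi * freqs * dt / q_true)
q_est = estimate_q_srm(freqs, a_near, a_far, dt)
```

When Q itself varies with frequency as a power law, this log ratio is no longer linear in f, which is exactly the bias the abstract's two-parameter regression addresses.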
NASA Technical Reports Server (NTRS)
Moore, H. J.; Wu, S. C.
1973-01-01
The effect of reading error on two hypothetical slope frequency distributions and on two slope frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of the algebraic standard deviations of the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Chao; Sun, Zhao-Yang; Zhao, Yu; Wu, Shi-You; Fang, Guang-You
2016-08-01
In the terahertz (THz) band, the inherent shake of the human body may strongly impair the image quality of a beam-scanning single-frequency holography system for personnel screening. To realize accurate shake compensation in imaging processing, it would normally be necessary to develop a high-precision measurement system. However, in many cases different parts of the human body shake to different extents, which greatly increases the difficulty of reasonably measuring body-shake errors for image reconstruction. In this paper, a body-shake error compensation algorithm based on the raw data is proposed. To analyze the effect of body shake on the raw data, a model of the echoed signal is rebuilt that considers both the beam scanning mode and the body shake. According to the rebuilt signal model, we derive a body-shake error estimation method to compensate for the phase error. Simulations on the reconstruction of point targets with shake errors and proof-of-principle experiments on the human body in the 0.2-THz band are both performed to confirm the effectiveness of the proposed body-shake compensation algorithm. Project supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. YYYJ-1123).
Susceptibility of biallelic haplotype and genotype frequencies to genotyping error.
Moskvina, Valentina; Schmidt, Karl Michael
2006-12-01
With the availability of fast genotyping methods and genomic databases, the search for statistical associations of single nucleotide polymorphisms with a complex trait has become an important methodology in medical genetics. However, even fairly rare errors occurring during the genotyping process can lead to spurious association results and a decrease in statistical power. We develop a systematic approach to studying how genotyping errors change the genotype distribution in a sample. The general M-marker case is reduced to that of a single-marker locus by recognizing the underlying tensor-product structure of the error matrix. Both the method and the general conclusions apply to the general error model; we give detailed results for allele-based errors of a size depending on both the marker locus and the allele present. Multiple errors are treated in terms of the associated diffusion process on the space of genotype distributions. We find that certain genotype and haplotype distributions remain unchanged under genotyping errors, and that genotyping errors generally render the distribution more similar to the stable one. In case-control association studies, this will lead to a loss of statistical power for nondifferential genotyping errors and an increase in type I error for differential genotyping errors. Moreover, we show that allele-based genotyping errors do not disturb Hardy-Weinberg equilibrium in the genotype distribution. In this setting we also identify the maximally affected distributions. As these correspond to situations with rare alleles and marker loci in high linkage disequilibrium, careful checking for genotyping errors is advisable when significant association based on such alleles/haplotypes is observed in association studies.
Jason-2 systematic error analysis in the GPS derived orbits
NASA Astrophysics Data System (ADS)
Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.
2011-12-01
Several results related to global or regional sea level change still too often rely on the assumption that orbit errors coming from station coordinate adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding the geophysical high-frequency variations to the linear ITRF model. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface between the northern and southern hemispheres. The goal of this paper is to study specifically the main source of error, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS with our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests, including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits.
Pulsar Timing Errors from Asynchronous Multi-frequency Sampling of Dispersion Measure Variations
Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.
2015-03-10
Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in timing precision when increasing from two to three observations, but diminishing returns thereafter.
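The dispersive delay the abstract builds on follows the standard cold-plasma scaling t(f) = K · DM / f², with K ≈ 4.149 ms GHz² cm³ pc⁻¹. A minimal sketch of the two-frequency DM estimate (the observing frequencies and DM value below are illustrative):

```python
# Dispersive delay: t(f) = K * DM / f^2, K ~ 4.149 ms GHz^2 cm^3 / pc.
K_MS = 4.149

def dispersive_delay_ms(dm, f_ghz):
    return K_MS * dm / f_ghz ** 2

def dm_from_two_freqs(t1_ms, t2_ms, f1_ghz, f2_ghz):
    # Invert the delay difference between two observing frequencies.
    return (t1_ms - t2_ms) / (K_MS * (f1_ghz ** -2 - f2_ghz ** -2))

dm = 10.0                          # pc / cm^3, illustrative value
t1 = dispersive_delay_ms(dm, 0.8)  # delay at 800 MHz
t2 = dispersive_delay_ms(dm, 1.4)  # delay at 1.4 GHz
recovered = dm_from_two_freqs(t1, t2, 0.8, 1.4)
```

If the two delays are measured at different epochs while the DM drifts, the recovered DM is biased, which is the timing-error mechanism the paper quantifies.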
A Study of the Frequency and Communicative Effects of Errors in Spanish
ERIC Educational Resources Information Center
Guntermann, Gail
1978-01-01
A study conducted in El Salvador was designed to: determine which kinds of errors may be most frequently committed by learners who have reached a basic level of proficiency: discover which high-frequency errors most impede comprehension; and develop a procedure for eliciting evaluational reactions to errors from native listeners. (SW)
Theory of point-spread function artifacts due to structured mid-spatial frequency surface errors.
Tamkin, John M; Dallas, William J; Milster, Tom D
2010-09-01
Optical design and tolerancing of aspheric or free-form surfaces require attention to surface form, structured surface errors, and nonstructured errors. We describe structured surface error profiles and effects on the image point-spread function using harmonic (Fourier) decomposition. Surface errors over the beam footprint map onto the pupil, where multiple structured surface frequencies mix to create sum and difference diffraction orders in the image plane at each field point. Difference frequencies widen the central lobe of the point-spread function and summation frequencies create ghost images.
Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth
Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan
2015-01-01
Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars. PMID:26347779
Bounding higher-order ionosphere errors for the dual-frequency GPS user
NASA Astrophysics Data System (ADS)
Datta-Barua, S.; Walter, T.; Blanch, J.; Enge, P.
2008-10-01
Civil signals at L2 and L5 frequencies herald a new phase of Global Positioning System (GPS) performance. Dual-frequency users typically assume a first-order approximation of the ionosphere index of refraction, combining the GPS observables to eliminate most of the ranging delay, on the order of meters, introduced into the pseudoranges. This paper estimates the higher-order group and phase errors that occur from assuming the ordinary first-order dual-frequency ionosphere model using data from the Federal Aviation Administration's Wide Area Augmentation System (WAAS) network on a solar maximum quiet day and an extremely stormy day postsolar maximum. We find that during active periods, when ionospheric storms may introduce slant range delays at L1 as high as 100 m, the higher-order group errors in the L1-L2 or L1-L5 dual-frequency combination can be tens of centimeters. The group and phase errors are no longer equal and opposite, so these errors accumulate in carrier smoothing of the dual-frequency code observable. We show the errors in the carrier-smoothed code are due to higher-order group errors and, to a lesser extent, to higher-order phase rate errors. For many applications, this residual error is sufficiently small as to be neglected. However, such errors can impact geodetic applications as well as the error budgets of GPS Augmentation Systems providing Category III precision approach.
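The first-order dual-frequency combination the abstract refers to can be sketched directly: because the first-order ionospheric delay scales as 1/f², the combination P_IF = (f1²P1 - f2²P2)/(f1² - f2²) cancels it exactly, leaving only the higher-order residuals the paper bounds. The pseudorange and delay values below are illustrative.

```python
# First-order ionosphere-free pseudorange combination for GPS L1/L2.
F1, F2 = 1575.42e6, 1227.60e6      # carrier frequencies, Hz

def iono_free(p1, p2, f1=F1, f2=F2):
    g = f1 ** 2 / (f1 ** 2 - f2 ** 2)
    return g * p1 - (g - 1.0) * p2  # equals (f1^2 p1 - f2^2 p2)/(f1^2 - f2^2)

# Synthetic check: a first-order delay scaling as 1/f^2 cancels exactly.
true_range = 22_000_000.0           # m, illustrative
i1 = 5.0                            # m of ionospheric delay at L1, illustrative
p1 = true_range + i1
p2 = true_range + i1 * (F1 / F2) ** 2   # same delay seen at L2
combined = iono_free(p1, p2)
```

The higher-order terms discussed in the paper do not follow the 1/f² law, so they survive this combination at the tens-of-centimeters level during storms.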
Evaluating Error of LIDAR Derived dem Interpolation for Vegetation Area
NASA Astrophysics Data System (ADS)
Ismail, Z.; Khanan, M. F. Abdul; Omar, F. Z.; Rahman, M. Z. Abdul; Mohd Salleh, M. R.
2016-09-01
Light Detection and Ranging (LiDAR) data are a source for deriving a digital terrain model, while a Digital Elevation Model (DEM) is usable within a Geographical Information System (GIS). The aim of this study is to evaluate the accuracy of LiDAR-derived DEMs generated with different interpolation methods and slope classes. Initially, the study area is divided into three slope classes: (a) slope class one (0°-5°), (b) slope class two (6°-10°), and (c) slope class three (11°-15°). Secondly, each slope class is tested using three distinct interpolation methods: (a) Kriging, (b) Inverse Distance Weighting (IDW), and (c) Spline. Next, accuracy assessment is done based on field survey tachymetry data. The findings reveal that the overall Root Mean Square Error (RMSE) for Kriging provided the lowest value of 0.727 m for both the 0.5 m and 1 m spatial resolutions of the oil palm area, followed by Spline with values of 0.734 m for 0.5 m spatial resolution and 0.747 m for 1 m spatial resolution. Concurrently, IDW provided the highest RMSE value of 0.784 m for both the 0.5 m and 1 m spatial resolutions. For the rubber area, Spline provided the lowest RMSE values of 0.746 m for 0.5 m spatial resolution and 0.760 m for 1 m spatial resolution. The highest RMSE for the rubber area came from IDW, with a value of 1.061 m for both spatial resolutions. Finally, Kriging gave an RMSE value of 0.790 m for both spatial resolutions.
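Of the three interpolators compared above, IDW is the simplest to state: a query elevation is a distance-weighted mean of sample elevations, and RMSE against check points quantifies the error. A minimal sketch (the sample layout is an assumed toy configuration, not the study's data):

```python
import numpy as np

# Inverse Distance Weighting: predict a query-point elevation as a
# distance-weighted mean of the sample elevations.
def idw(xy_samples, z_samples, xy_query, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_samples - xy_query, axis=1)
    if d.min() < eps:                    # query coincides with a sample point
        return float(z_samples[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_samples) / np.sum(w))

# RMSE of predictions against surveyed check points.
def rmse(predicted, observed):
    p, o = np.asarray(predicted), np.asarray(observed)
    return float(np.sqrt(np.mean((p - o) ** 2)))

# Toy check: a flat surface interpolates to its own height everywhere.
samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
heights = np.array([5.0, 5.0, 5.0, 5.0])
center_z = idw(samples, heights, np.array([0.5, 0.5]))
```

Kriging and spline interpolation additionally model spatial correlation and smoothness, which is one plausible reason they outperform IDW in the RMSE comparison above.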
To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard
1998-01-01
This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. First, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Third, we use relative calibration and information fusion among the star sensors to realize datum unification and high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a particular satellite type are used. Test results demonstrate that the calibration model describes the law of low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is clearly improved after the step-wise calibration.
Tamkin, John M; Milster, Tom D; Dallas, William
2010-09-01
Aspheric and free-form surfaces are powerful surface forms that allow designers to achieve better performance with fewer lenses and smaller packages. Unlike spheres, these surfaces are fabricated with processes that leave a signature, or "structure," that is primarily in the mid-spatial-frequency region. These structured surface errors create ripples in the modulation transfer function (MTF) profile. Using Fourier techniques with generalized functions, the drop in MTF is derived and shown to exhibit a nonlinear relationship with the peak-to-valley height of the structured surface error.
Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid
2014-01-01
This study was conducted to determine the frequency of medication errors (MEs) occurring in the tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 six-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, the frequency of errors at the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total; in other words, 3.5 errors per patient and almost 0.69 errors per medication occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process, with a share of 37.6%, followed by errors of prescription and transcription with shares of 21.1% and 10%, respectively. Omission (7.6%) and wrong-time errors (4.4%) were the most frequent administration errors. Less-experienced nurses (P=0.04), a higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced among them in EDs can help reduce nursing errors. Addressing these shortcomings with further research should result in a reduction of MEs in EDs. PMID:25525391
Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing.
Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao
2015-01-01
Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction times after errors. The cognitive control account assumes that PES depends on error information, whereas the orienting account posits that it depends on error frequency. This raises the question of how outcome valence and outcome frequency separately influence the generation of PES. To address this issue, we varied the probability of observation errors (50/50 and 20/80, correct/error) committed by a "partner" in an observation-execution task and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker run supposedly performed by the partner, and then performed a flanker run themselves. We observed PES in both error-rate conditions. However, electroencephalographic data suggested that error-related potentials (oERN and oPe) and the rhythmic oscillation associated with attentional processes (alpha band) were sensitive to outcome valence and outcome frequency, respectively. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by the oERN, whereas the modulation of PES size is reflected in the alpha band. PMID:25732237
Error Bounds for Quadrature Methods Involving Lower Order Derivatives
ERIC Educational Resources Information Center
Engelbrecht, Johann; Fedotov, Igor; Fedotova, Tanya; Harding, Ansie
2003-01-01
Quadrature methods for approximating the definite integral of a function f(t) over an interval [a,b] are in common use. Examples of such methods are the Newton-Cotes formulas (midpoint, trapezoidal and Simpson methods etc.) and the Gauss-Legendre quadrature rules, to name two types of quadrature. Error bounds for these approximations involve…
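The quadrature error bounds the abstract refers to can be illustrated with the classical composite trapezoidal bound |E| ≤ (b - a)³ / (12n²) · max|f''| on [a, b]. A short numerical check on an assumed example integrand (eˣ on [0, 1]):

```python
import math

# Composite trapezoidal rule; its classical error bound involves the
# second derivative: |E| <= (b - a)^3 / (12 n^2) * max|f''| on [a, b].
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

a, b, n = 0.0, 1.0, 100
approx = trapezoid(math.exp, a, b, n)
exact = math.e - 1.0                            # integral of e^x on [0, 1]
bound = (b - a) ** 3 / (12 * n ** 2) * math.e   # max|f''| = e on [0, 1]
```

The observed error sits comfortably inside the bound, and both shrink as 1/n², which is the lower-order-derivative behavior the paper's bounds formalize.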
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are collected, and the measurement errors are analyzed by examining error frequencies and by applying the analysis-of-variance method from mathematical statistics. The paper also determines the accuracy of the measured data and the difficulty of measuring particular parts of the human body, studies the causes of data errors, and summarizes the key points for minimizing errors. By analyzing the measured data on the basis of error frequency, it provides reference material for the development of the garment industry.
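The error-frequency and analysis-of-variance steps described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function names, the tolerance threshold, and the grouping of repeated measurements are assumptions:

```python
import numpy as np

def error_frequency(measured, reference, tolerance):
    """Fraction of measurements deviating from the reference value by
    more than `tolerance` -- a simple per-site error frequency."""
    errors = np.abs(np.asarray(measured, dtype=float) - reference)
    return float(np.mean(errors > tolerance))

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance
    for repeated measurements of the same body part (e.g. by several
    measurers), as used to judge whether measurers differ systematically."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    grand = np.mean(np.concatenate(groups))
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F relative to the F-distribution critical value would flag a measurement site as difficult (measurer-dependent), while `error_frequency` ranks sites by how often they exceed a tolerance.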
An Empirical Point Error Model for TLS-Derived Point Clouds
NASA Astrophysics Data System (ADS)
Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin
2016-06-01
The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle, and vertical angle, are determined through predefined and practical measurement configurations performed in real-world test environments. The a priori precisions of the horizontal (σ_θ) and vertical (σ_α) angles are constant for each point of a data set and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σ_θ = ±36.6 cc and σ_α = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σ_ρ) is assumed to be a function of the range, the incidence angle of the incoming laser ray, and the reflectivity of the object surface. Hence, it is a variable and is computed for each point individually by an empirically developed formula, varying over σ_ρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model were investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
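The variance-covariance propagation step can be illustrated as follows. This is a hedged sketch, not the authors' implementation: the spherical-to-Cartesian convention, the function names, and the unit reading (cc taken as centesimal arc-seconds, 1 cc = 10⁻⁴ gon) are assumptions:

```python
import numpy as np

CC_TO_RAD = (np.pi / 200.0) / 1e4   # 1 cc = 1e-4 gon, 1 gon = pi/200 rad

def error_ellipsoid(rho, theta, alpha, s_rho, s_theta_cc, s_alpha_cc):
    """Propagate a-priori TLS precisions (range rho, horizontal angle
    theta, vertical angle alpha) to a Cartesian 3x3 covariance, then
    return the ellipsoid semi-axes and axis directions (principal
    components of the covariance)."""
    st, ct = np.sin(theta), np.cos(theta)
    sa, ca = np.sin(alpha), np.cos(alpha)
    # One common convention: x = rho*ca*ct, y = rho*ca*st, z = rho*sa
    J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
                  [ca * st,  rho * ca * ct, -rho * sa * st],
                  [sa,       0.0,            rho * ca     ]])
    S = np.diag([s_rho ** 2,
                 (s_theta_cc * CC_TO_RAD) ** 2,
                 (s_alpha_cc * CC_TO_RAD) ** 2])
    C = J @ S @ J.T                       # variance-covariance propagation
    vals, vecs = np.linalg.eigh(C)        # principal components transform
    return np.sqrt(np.clip(vals, 0.0, None)), vecs
```

At short ranges the range precision dominates the longest semi-axis, while the angular precisions control the two tangential semi-axes, which grow linearly with range.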
Frequency-domain correction of sensor dynamic error for step response.
Yang, Shuang-Long; Xu, Ke-Jun
2012-11-01
To obtain accurate results in dynamic measurements, sensors must have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case, dynamic error correction methods can be applied to the sensor responses to eliminate the effect of these characteristics. Frequency-domain correction of sensor dynamic error is a common method. With the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step-response calibration data, because of the leakage error and invalid FCF values caused by the periodic extension of the finite-length step input-output data. To solve these problems, data-splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain-gauge balance to verify its effectiveness. After frequency-domain correction, the settling time of the balance step response is shortened to 10 ms (less than 1/30 of that before correction), and the overshoot falls below 5% (less than 1/10 of that before correction). The dynamic measurement accuracy of the balance is improved significantly. PMID:23206091
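The splicing-plus-FCF pipeline can be sketched as follows, using a first-order lag as a stand-in sensor model. This is an illustrative reconstruction under stated assumptions: the paper's balance dynamics, FCF interpolation step, and exact splicing scheme are not reproduced; mirroring the record to make it periodic is one simple way to suppress FFT leakage:

```python
import numpy as np

def lag_sensor(x, a):
    """First-order lag y[n] = a*y[n-1] + (1-a)*x[n] -- a stand-in for a
    sensor with non-ideal dynamic characteristics."""
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + (1.0 - a) * x[n]
    return y

def fcf_from_step(x_step, y_step):
    """Splice each record with its mirror image so the sequence is
    periodic (suppressing leakage), then form FCF(f) = X(f)/Y(f)."""
    xs = np.concatenate([x_step, x_step[::-1]])
    ys = np.concatenate([y_step, y_step[::-1]])
    X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
    eps = 1e-12                       # guard against division by ~0 bins
    return X / np.where(np.abs(Y) < eps, eps, Y)

def correct(y_meas, fcf):
    """Apply the FCF to a (spliced) measured response and unsplice."""
    ys = np.concatenate([y_meas, y_meas[::-1]])
    z = np.fft.irfft(np.fft.rfft(ys) * fcf, n=len(ys))
    return z[:len(y_meas)]
```

Correcting the calibration record itself recovers the step input, which illustrates the mechanics; in practice the FCF estimated from calibration data is applied to new measurements.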
NASA Astrophysics Data System (ADS)
Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim
2012-12-01
This article deals with the application of the Spatial Time-Frequency Distribution (STFD) to the direction-finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression for the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters entering the derived expression on algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.
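For orientation, the covariance-based MUSIC step that both methods build on can be sketched as below; the STFD variant replaces the data covariance matrix R with a spatial time-frequency distribution matrix. The function names and the half-wavelength uniform linear array geometry are assumptions:

```python
import numpy as np

def steering(m, theta):
    """Steering vector of an m-element uniform linear array with
    half-wavelength spacing; theta in radians from broadside."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

def music_spectrum(R, n_sources, grid):
    """MUSIC pseudo-spectrum from a covariance (or STFD) matrix R:
    project candidate steering vectors onto the noise subspace and
    invert, so true DOAs appear as sharp peaks."""
    m = R.shape[0]
    _, V = np.linalg.eigh(R)          # eigenvalues ascending
    En = V[:, :m - n_sources]         # noise-subspace eigenvectors
    P = []
    for th in grid:
        a = steering(m, th)
        val = np.real(a.conj() @ En @ En.conj().T @ a)
        P.append(1.0 / max(val, 1e-12))   # floor avoids division by ~0
    return np.array(P)
```

Calibration errors perturb the steering vectors relative to the model, which is what broadens or biases these peaks in the analysis above.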
Sensorimotor adaptation error signals are derived from realistic predictions of movement outcomes.
Wong, Aaron L; Shelhamer, Mark
2011-03-01
Neural systems that control movement maintain accuracy by adaptively altering motor commands in response to errors. It is often assumed that the error signal that drives adaptation is equivalent to the sensory error observed at the conclusion of a movement; for saccades, this is typically the visual (retinal) error. However, we instead propose that the adaptation error signal is derived as the difference between the observed visual error and a realistic prediction of movement outcome. Using a modified saccade-adaptation task in human subjects, we precisely controlled the amount of error experienced at the conclusion of a movement by back-stepping the target so that the saccade is hypometric (positive retinal error), but less hypometric than if the target had not moved (smaller retinal error than expected). This separates prediction error from both visual errors and motor corrections. Despite positive visual errors and forward-directed motor corrections, we found an adaptive decrease in saccade amplitudes, a finding that is well-explained by the employment of a prediction-based error signal. Furthermore, adaptive changes in movement size were linearly correlated to the disparity between the predicted and observed movement outcomes, in agreement with the forward-model hypothesis of motor learning, which states that adaptation error signals incorporate predictions of motor outcomes computed using a copy of the motor command (efference copy).
Online public reactions to frequency of diagnostic errors in US outpatient care
Giardina, Traber Davis; Sarkar, Urmimala; Gourley, Gato; Modi, Varsha; Meyer, Ashley N.D.; Singh, Hardeep
2016-01-01
Background Diagnostic errors pose a significant threat to patient safety but little is known about public perceptions of diagnostic errors. A study published in BMJ Quality & Safety in 2014 estimated that diagnostic errors affect at least 5% of US adults (or 12 million) per year. We sought to explore online public reactions to media reports on the reported frequency of diagnostic errors in the US adult population. Methods We searched the World Wide Web for any news article reporting findings from the study. We then gathered all the online comments made in response to the news articles to evaluate public reaction to the newly reported diagnostic error frequency (n=241). Two coders conducted content analyses of the comments and an experienced qualitative researcher resolved differences. Results Overall, there were few comments made regarding the frequency of diagnostic errors. However, in response to the media coverage, 44 commenters shared personal experiences of diagnostic errors. Additionally, commentary centered on diagnosis-related quality of care as affected by two emergent categories: (1) US health care providers (n=79; 63 commenters) and (2) US health care reform-related policies, most commonly the Affordable Care Act (ACA) and insurance/reimbursement issues (n=62; 47 commenters). Conclusion The public appears to have substantial concerns about the impact of the ACA and other reform initiatives on the diagnosis-related quality of care. However, policy discussions on diagnostic errors are largely absent from the current national conversation on improving quality and safety. Because outpatient diagnostic errors have emerged as a major safety concern, researchers and policymakers should consider evaluating the effects of policy and practice changes on diagnostic accuracy. PMID:27347474
Random Numbers Demonstrate the Frequency of Type I Errors: Three Spreadsheets for Class Instruction
ERIC Educational Resources Information Center
Duffy, Sean
2010-01-01
This paper describes three spreadsheet exercises demonstrating the nature and frequency of type I errors using random number generation. The exercises are designed specifically to address issues related to testing multiple relations using correlation (Demonstration I), t tests varying in sample size (Demonstration II) and multiple comparisons…
NASA Technical Reports Server (NTRS)
Tsaoussi, Lucia S.; Koblinsky, Chester J.
1994-01-01
In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, with the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in
Correction of electrode modelling errors in multi-frequency EIT imaging.
Jehl, Markus; Holder, David
2016-06-01
The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
A Derivation of the Unbiased Standard Error of Estimate: The General Case.
ERIC Educational Resources Information Center
O'Brien, Francis J., Jr.
This paper is part of a series of applied statistics monographs intended to provide supplementary reading for applied statistics students. In the present paper, derivations of the unbiased standard error of estimate for both the raw score and standard score linear models are presented. The derivations for raw score linear models are presented in…
Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables
NASA Technical Reports Server (NTRS)
Fenyes, Peter A.; Lust, Robert V.
1989-01-01
Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations for the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods for use with complex finite element formulations and to facilitate their implementation into structural optimization programs built on general finite element analysis codes, the semi-analytic method was developed. In this method the matrix ∂K/∂b_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method depends on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. The accuracy of the semi-analytic method is investigated. A general framework was developed for the error analysis, and it is then shown analytically that the errors in the method are entirely accounted for by errors in ∂K/∂b_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
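The semi-analytic method described above amounts to solving K (du/db) = -(∂K/∂b) u, with ∂K/∂b replaced by a finite difference while the rest is handled analytically. A minimal sketch, with the function names and the two-spring test matrix as illustrative assumptions:

```python
import numpy as np

def semi_analytic_sensitivity(K_of_b, F, b, db):
    """Displacement derivative du/db via the semi-analytic method:
    dK/db is approximated by a central finite difference; the load F is
    assumed independent of the design variable b here, so the pseudo-load
    is simply -(dK/db) u."""
    K = K_of_b(b)
    u = np.linalg.solve(K, F)                    # static displacements
    dK_db = (K_of_b(b + db) - K_of_b(b - db)) / (2.0 * db)
    return np.linalg.solve(K, -dK_db @ u)        # du/db from pseudo-load
```

For sizing variables that scale the stiffness linearly the central difference is exact; the accuracy issues discussed in the abstract arise when ∂K/∂b varies nonlinearly, making the choice of `db` critical.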
Effects of structured mid-spatial frequency surface errors on image performance.
Tamkin, John M; Milster, Tom D
2010-11-20
Optical designers are encouraged to adopt aspheric and free-form surfaces into an increasing number of design spaces because of their improved performance. However, residual tooling marks from advanced aspheric fabrication techniques are difficult to remove. These marks, typically in the mid-spatial frequency (MSF) regime, give rise to structured image artifacts. Using a theory developed in previous publications, this paper applies the fundamentals of MSF modeling to demonstrate how MSF errors are evaluated and toleranced in an optical system. Examples of as-built components with MSF errors are analyzed using commercial optical design software.
Where is the effect of frequency in word production? Insights from aphasic picture naming errors
Kittredge, Audrey K.; Dell, Gary S.; Verkuilen, Jay; Schwartz, Myrna F.
2010-01-01
Some theories of lexical access in production locate the effect of lexical frequency at the retrieval of a word’s phonological characteristics, as opposed to the prior retrieval of a holistic representation of the word from its meaning. Yet there is evidence from both normal and aphasic individuals that frequency may influence both of these retrieval processes. This inconsistency is especially relevant in light of recent attempts to determine the representation of another lexical property, age of acquisition or AoA, whose effect is similar to that of frequency. To further explore the representations of these lexical variables in the word retrieval system, we performed hierarchical, multinomial logistic regression analyses of 50 aphasic patients’ picture-naming responses. While both log frequency and AoA had a significant influence on patient accuracy and led to fewer phonologically related errors and omissions, only log frequency had an effect on semantically related errors. These results provide evidence for a lexical access process sensitive to frequency at all stages, but with AoA having a more limited effect. PMID:18704797
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
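One Richardson-style reading of the two-fit idea can be sketched as follows. This is an assumption-laden illustration, not the paper's algorithm: the factor 15 assumes the fit error scales as h⁴ for cubic splines (so doubling the mesh multiplies it by 16), and the knot placement, function names, and use of SciPy's `LSQUnivariateSpline` are ours:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def f_error_estimate(t, y, n_knots):
    """Estimate the signal-dependent F-error of a least-squares cubic
    spline fit from the difference of two fits with mesh sizes h and 2h:
    if the truncation error is ~C*h^4, then coarse - fine ~ 15 * (F-error
    of the fine fit)."""
    def inner(k):                       # k equispaced interior knots
        return np.linspace(t[0], t[-1], k + 2)[1:-1]
    fine = LSQUnivariateSpline(t, y, inner(n_knots), k=3)
    coarse = LSQUnivariateSpline(t, y, inner(n_knots // 2), k=3)
    return (coarse(t) - fine(t)) / 15.0
```

Because the difference of two fits cancels the (common) noise contribution only approximately, in practice this estimate is combined with the separately computed R-error.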
NASA Astrophysics Data System (ADS)
Wang, Ben; Zhang, Yimin D.; Qin, Si; Amin, Moeness G.
2016-05-01
In this paper, we propose a nonstationary jammer suppression method for GPS receivers when the signals are sparsely sampled. Missing data samples induce noise-like artifacts in the time-frequency (TF) distribution and ambiguity function of the received signals, which lead to reduced capability and degraded performance in jammer signature estimation and excision. In the proposed method, a data-dependent TF kernel is utilized to mitigate the artifacts, and sparse reconstruction methods are then applied to obtain instantaneous frequency (IF) estimates of the jammers. In addition, an error tolerance on the IF estimate is applied to achieve robust jammer suppression performance in the presence of IF estimation inaccuracy.
A frequency-domain derivation of shot-noise
NASA Astrophysics Data System (ADS)
Rice, Frank
2016-01-01
A formula for shot-noise is derived in the frequency-domain. The derivation is complete and reasonably rigorous while being appropriate for undergraduate students; it models a sequence of random pulses using Fourier sine and cosine series, and requires some basic statistical concepts. The text here may serve as a pedagogic introduction to the spectral analysis of random processes and may prove useful to introduce students to the logic behind stochastic problems. The concepts of noise power spectral density and equivalent noise bandwidth are introduced.
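For reference, the frequency-domain argument outlined in the abstract leads to Schottky's white spectrum; the following sketch (our notation, not the paper's) shows the key steps:

```latex
% Electrons arrive as independent impulses q\,\delta(t - t_k) at mean
% rate \bar{n}, giving mean current I = q\bar{n}. Over an interval T,
% expand the current in a Fourier series; each random arrival time t_k
% contributes cosine/sine coefficients with random phase, so the
% coefficient variances add:
\begin{align}
  I(t) &= q\sum_k \delta(t - t_k)
        = I + \sum_{m \ge 1}\bigl(a_m\cos\omega_m t + b_m\sin\omega_m t\bigr),
        \qquad \omega_m = \frac{2\pi m}{T},\\
  \langle a_m^2\rangle &= \langle b_m^2\rangle
        = \frac{2 q^2 \bar{n}}{T} = \frac{2 q I}{T}.
\end{align}
% Each Fourier mode occupies a bandwidth \Delta f = 1/T, and the mean
% power per mode is (\langle a_m^2\rangle + \langle b_m^2\rangle)/2, so
% the one-sided power spectral density of the fluctuations is flat:
\begin{equation}
  S_I(f) = \frac{\langle a_m^2\rangle + \langle b_m^2\rangle}{2\,\Delta f}
         = 2 q I .
\end{equation}
```

The flat (white) spectrum and the appearance of the equivalent noise bandwidth Δf are exactly the concepts the abstract says the derivation introduces.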
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-21
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ∼ 2°, than those from the three empirical models with averaged errors > ∼ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
Guadagnoli, M A; Kohl, R M
2001-06-01
The authors of the present study investigated the apparent contradiction between early and more recent views of knowledge of results (KR): the idea that how one is engaged before receiving KR may not be independent of how one uses that KR. In a 2 × 2 factorial design, participants (N = 64) practiced a simple force-production task and (a) were required, or not required, to estimate the error of their previous response and (b) were provided KR either after every response (100%) or after every 5th response (20%) during acquisition. A no-KR retention test revealed an interaction between acquisition error estimation and KR frequencies. The group that received 100% KR and was required to estimate error during acquisition performed the best during retention. The 2 groups that received 20% KR performed less well. Finally, the group that received 100% KR and was not required to estimate error during acquisition performed the poorest during retention. One general interpretation of that pattern of results is that motor learning is an increasing function of the degree to which participants use KR to test response hypotheses (J. A. Adams, 1971; R. A. Schmidt, 1975). Practicing simple responses coupled with error estimation may embody response hypotheses that can be tested with KR, thus benefiting motor learning most under a 100% KR condition. Practicing simple responses without error estimation is less likely to embody response hypotheses, however, which may increase the probability that participants will use KR to guide upcoming responses, thus attenuating motor learning under a 100% KR condition. The authors conclude, therefore, that how one is engaged before receiving KR may not be independent of how one uses KR. PMID:11404216
The use of neural networks in identifying error sources in satellite-derived tropical SST estimates.
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) of the daily SST estimate is reduced from 0.58 K to 0.38 K, with a mean absolute percentage error (MAPE) of 1.03%. For the hourly mean SST estimate, the RMSE is likewise reduced from 0.66 K to 0.44 K, with a MAPE of 1.3%.
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
Topological derivatives for fundamental frequencies of elastic bodies
NASA Astrophysics Data System (ADS)
Kobelev, Vladimir
2016-01-01
In this article a new method for topological optimization of fundamental frequencies of elastic bodies, which could be considered as an improvement on the bubble method, is introduced. The method is based on generalized topological derivatives. For a body with different types of inclusion the vector genus is introduced. The dimension of the genus is the number of different elastic properties of the inclusions being introduced. The disturbances of stress and strain fields in an elastic matrix due to a newly inserted elastic inhomogeneity are given explicitly in terms of the stresses and strains in the initial body. The iterative positioning of inclusions is carried out by determination of the preferable position of the new inhomogeneity at the extreme points of the characteristic function. The characteristic function was derived using Eshelby's method. The expressions for optimal ratios of the semi-axes of the ellipse and angular orientation of newly inserted infinitesimally small inclusions of elliptical form are derived in closed analytical form.
Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint
Florita, A.; Hodge, B. M.; Milligan, M.
2012-08-01
The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
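A minimal sketch of the kind of distribution fitting and goodness-of-fit comparison described above, using synthetic heavy-tailed errors. The distributions and data here are illustrative assumptions, not the WWSIS database or the authors' chosen candidates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for site-level wind power forecast errors (MW):
# heavy-tailed, as wind forecast errors typically are.
errors = rng.laplace(loc=0.0, scale=8.0, size=5000)

candidates = {"normal": stats.norm, "laplace": stats.laplace}
for name, dist in candidates.items():
    params = dist.fit(errors)                         # maximum-likelihood fit
    ks = stats.kstest(errors, dist.cdf, args=params)  # goodness-of-fit metric
    print(f"{name:8s} KS statistic = {ks.statistic:.4f}")
```

The distribution with the smaller Kolmogorov-Smirnov statistic fits better; in operational studies the fitted distribution is then sampled to generate realistic site-level forecast scenarios.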
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
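The error-estimation idea, two independent estimates of the same monthly mean, can be sketched as follows, assuming equal and independent error variances in the two estimates. Function name and sample numbers are hypothetical.

```python
import numpy as np

def random_error_from_pair(est_a, est_b):
    """Random-error estimate from two independent estimates (e.g. morning
    and afternoon overpasses) of the same monthly mean, per box.
    Assumes equal, independent error variances in the two estimates."""
    est_a, est_b = np.asarray(est_a), np.asarray(est_b)
    mean = 0.5 * (est_a + est_b)
    # var(a - b) = 2 * var(single)  =>  std of the mean = |a - b| / 2
    err_std = np.abs(est_a - est_b) / 2.0
    rel_err = err_std / np.where(mean > 0, mean, np.nan)
    return mean, rel_err

# hypothetical 5 deg x 5 deg box estimates (mm/month)
am = np.array([80.0, 120.0, 40.0])
pm = np.array([110.0, 100.0, 60.0])
mean, rel = random_error_from_pair(am, pm)
print(mean, rel)
```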
NASA Astrophysics Data System (ADS)
Weiner, M. M.
1994-01-01
The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities where one-half of vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type of Earth. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beampointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.
Frequency Domain Analysis of Errors in Cross-Correlations of Ambient Seismic Noise
NASA Astrophysics Data System (ADS)
Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri
2016-09-01
We analyze random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differs from previous time-domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency using non-overlapping windows of fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of noise cross-correlation in the frequency domain, without specifying the filter bandwidth or signal/noise windows that are needed for time-domain SNR estimations. Based on synthetic ambient noise data, we also compare the probability distributions, causal-part amplitude, and SNR of the stacked cross-spectrum function using one-bit normalization or pre-whitening with those obtained without these preprocessing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing preprocessing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (~35 km) and a dense linear array (~20 m) across the plate-boundary faults. A block bootstrap resampling method
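A sketch of the windowed stacking described above, with a per-frequency spread estimate across windows that yields a frequency-domain SNR. The data are synthetic; this is not the authors' code.

```python
import numpy as np

def stacked_cross_spectrum(u, v, win_len):
    """Average cross-spectrum over non-overlapping windows of fixed length,
    with a per-frequency standard-error estimate across windows (sketch)."""
    n_win = len(u) // win_len
    segs_u = u[: n_win * win_len].reshape(n_win, win_len)
    segs_v = v[: n_win * win_len].reshape(n_win, win_len)
    U = np.fft.rfft(segs_u, axis=1)
    V = np.fft.rfft(segs_v, axis=1)
    cross = U * np.conj(V)                        # one cross-spectrum per window
    stack = cross.mean(axis=0)                    # stacked (ensemble) cross-spectrum
    spread = cross.std(axis=0) / np.sqrt(n_win)   # standard error of the stack
    snr = np.abs(stack) / spread                  # frequency-domain SNR
    return stack, snr

rng = np.random.default_rng(1)
common = rng.standard_normal(2**16)               # shared "ambient noise" signal
u = common + 0.5 * rng.standard_normal(2**16)
v = common + 0.5 * rng.standard_normal(2**16)
stack, snr = stacked_cross_spectrum(u, v, win_len=1024)
print(snr[1:100].mean())  # coherent signal across stations -> SNR well above 1
```

No filter bandwidth or signal/noise time windows are needed: the spread across windows at each frequency plays the role of the noise estimate.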
Estimates of ocean forecast error covariance derived from Hessian Singular Vectors
NASA Astrophysics Data System (ADS)
Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.
2015-05-01
Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual
NASA Technical Reports Server (NTRS)
Kaufmann, D. C.
1976-01-01
The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in 10^10. The effects of maladjustment are demonstrated and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.
2013-01-01
Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion
A Research on Errors in Two-way Satellite Time and Frequency Transfer
NASA Astrophysics Data System (ADS)
Wu, W. J.
2013-07-01
The two-way satellite time and frequency transfer (TWSTFT) is one of the most accurate means for remote clock comparison, with an uncertainty in time of less than 1 ns and a relative uncertainty in frequency of about 10^{-14} d^{-1}. The transmission paths of signals between two stations are almost symmetrical in the TWSTFT. In principle, most kinds of path delays are canceled out, which guarantees the high accuracy of TWSTFT. With the development of TWSTFT and the increase in the frequency of observations, it has been shown that the diurnal variation of systematic errors is about 1-3 ns in the TWSTFT. This problem has become a hot topic of research around the world. By using the data of the Transfer Satellite Orbit Determination Net (TSODN) and international TWSTFT links, the systematic errors are studied in detail as follows: (1) The atmospheric effect. This includes ionospheric and tropospheric effects. The tropospheric effect is very small, and it can be ignored. The ionospheric error can be corrected by using the IGS ionosphere product. The variations of the ionospheric effect are about 0-0.05 ns and 0-0.7 ns at Ku band and C band, respectively, and show diurnal variation characteristics. (2) The equipment time delay. The equipment delay is closely related to temperature, presenting a linear relation at normal temperature. Its outdoor part shows diurnal variation with the environment temperature. The various kinds of effects related to the modem are studied, and some solutions are proposed. (3) The satellite transponder effect. This effect is studied by using the data of international TWSTFT links. Analysis shows that different satellite transponders can greatly increase the amplitude of the diurnal variation in one TWSTFT link. This is the major cause of the diurnal variation in the TWSTFT. A function-fitting method is used to largely solve this problem. (4) The satellite motion effect. The geostationary
Lexical Frequency and Third-Graders' Stress Accuracy in Derived English Word Production
ERIC Educational Resources Information Center
Jarmulowicz, Linda; Taran, Valentina L.; Hay, Sarah E.
2008-01-01
This study examined the effects of lexical frequency on children's production of accurate primary stress in words derived with nonneutral English suffixes. Forty-four third-grade children participated in an elicited derived word task in which they produced high-frequency, low-frequency, and nonsense-derived words with stress-changing suffixes…
On the uncertainty of stream networks derived from elevation data: the error propagation approach
NASA Astrophysics Data System (ADS)
Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.
2010-07-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in the open-source software for statistical computing R: package geoR is used to fit the variogram; package gstat is used to run sequential Gaussian simulation; streams are extracted using the open-source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise - usually areas of low local relief and slightly convex (0-10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show a high error (H>0.5) of locating streams. By correlating the propagated uncertainty of the derived stream network with various land-surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error-propagation tool should become standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the
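The stream error map above is the information entropy of a Bernoulli trial per grid cell, computed from the fraction of realizations in which a stream occurs. A minimal sketch (the counts are hypothetical; the paper's workflow is in R, Python is used here only for illustration):

```python
import numpy as np

def bernoulli_entropy(p):
    """Information entropy of a Bernoulli trial, H in [0, 1] bits.
    H = 0 where a stream is certainly present/absent, H = 1 at p = 0.5."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# hypothetical stream-occurrence counts per grid cell over 100 DEM realizations
counts = np.array([0, 3, 50, 97, 100])
H = bernoulli_entropy(counts / 100.0)
high_error = H > 0.5          # cells where the stream location is least precise
print(np.round(H, 3), high_error)
```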
Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators
NASA Astrophysics Data System (ADS)
Melnychuk, O.; Grassellino, A.; Romanenko, A.
2014-12-01
In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].
Peng, Cheng Y; Ma, Xiao C; Yan, She F; Yang, Li
2014-02-01
The pulse-output Direct Digital Synthesis (DDS), in which the overflow signal of the phase accumulator is used for the pulse output, can be easily implemented due to its simple hardware architecture and low algorithm complexity. This paper introduces the fundamentals of generating a Linear Frequency Modulation (LFM) pulse using the pulse-output DDS technique. Error-introducing mechanisms that affect the accuracy of the signal's duration, initial phase, and frequency are studied. An extensive analysis of round-off error is given. A modified hardware architecture for LFM pulse generation with reduced round-off error is proposed. Experimental results are given, which show that the proposed generator is promising in applications such as sonar transmitters.
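A toy model of a pulse-output DDS generating LFM: the phase-accumulator overflow (carry) is the output pulse train, and ramping the frequency tuning word each clock produces the linear frequency sweep. Word sizes and values are illustrative assumptions, not the paper's architecture.

```python
def lfm_pulse_output_dds(f0_word, df_word, n_clocks, acc_bits=32):
    """Pulse-output DDS sketch: each clock adds the frequency tuning word
    (FTW) to the phase accumulator; an overflow emits one output pulse.
    Incrementing the FTW by df_word per clock yields an LFM pulse."""
    mod = 1 << acc_bits
    acc, ftw = 0, f0_word
    pulses = []
    for _ in range(n_clocks):
        acc += ftw
        pulses.append(1 if acc >= mod else 0)  # overflow carry -> output pulse
        acc %= mod                             # keep the fractional phase; its
        ftw += df_word                         # quantization is a round-off source
    return pulses

# start near 1/16 of the clock rate and ramp upward (hypothetical tuning words)
p_lfm = lfm_pulse_output_dds(f0_word=1 << 28, df_word=1 << 24, n_clocks=64)
p_cw = lfm_pulse_output_dds(f0_word=1 << 28, df_word=0, n_clocks=64)
print(sum(p_lfm), sum(p_cw))  # the ramp packs more pulses into the same window
```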
Error estimation for ORION baseline vector determination
NASA Technical Reports Server (NTRS)
Wu, S. C.
1980-01-01
Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
NASA Astrophysics Data System (ADS)
Rugini, Luca; Banelli, Paolo
2005-12-01
The performance of multicarrier systems is highly impaired by intercarrier interference (ICI) due to frequency synchronization errors at the receiver and by intermodulation distortion (IMD) introduced by a nonlinear amplifier (NLA) at the transmitter. In this paper, we evaluate the bit-error rate (BER) of multicarrier direct-sequence code-division multiple-access (MC-DS-CDMA) downlink systems subject to these impairments in frequency-selective Rayleigh fading channels, assuming quadrature amplitude modulation (QAM). The analytical findings allow us to establish the sensitivity of MC-DS-CDMA systems to carrier frequency offset (CFO) and NLA distortions, to identify the maximum CFO that is tolerable at the receiver side in different scenarios, and to find the optimum value of the NLA output power backoff for a given CFO. Simulation results show that the approximated analysis is quite accurate in several conditions.
Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat
de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A.; Kölzsch, Andrea; Prins, Herbert H. T.; de Boer, W. Fred
2015-01-01
The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos Taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations
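The movement metrics (step distance and turning angle between fixes) and a CART-like rule set can be sketched as below. The thresholds are invented for illustration and are not the trees learned in the study.

```python
import numpy as np

def movement_metrics(x, y):
    """Step distances and absolute turning angles between consecutive GPS fixes."""
    dx, dy = np.diff(x), np.diff(y)
    dist = np.hypot(dx, dy)
    heading = np.arctan2(dy, dx)
    turn = np.abs(np.angle(np.exp(1j * np.diff(heading))))  # wrapped to [0, pi]
    return dist[1:], turn   # align: one (dist, turn) pair per interior fix

def classify(dist, turn, d_rest=0.5, d_walk=5.0, t_tortuous=1.0):
    """Toy stand-in for the learned CART rules; all thresholds (metres for a
    1-min interval, radians) are illustrative assumptions, not the paper's."""
    if dist < d_rest:
        return "Lying/Standing"        # little displacement
    if dist < d_walk or turn > t_tortuous:
        return "Foraging"              # short or tortuous moves
    return "Walking"                   # long, directed moves

x = np.array([0.0, 0.2, 0.3, 2.0, 4.0, 12.0, 20.0])
y = np.array([0.0, 0.1, 0.2, 1.0, 2.5, 3.0, 3.5])
for d, t in zip(*movement_metrics(x, y)):
    print(f"dist={d:5.2f} turn={t:4.2f} -> {classify(d, t)}")
```

The paper's point about interval length shows up directly here: at 2 s intervals the step distances shrink toward the GPS positional error, and rules of this kind can no longer separate the behaviours.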
NASA Astrophysics Data System (ADS)
Duan, Beiping; Zheng, Zhoushun; Cao, Wen
2016-08-01
In this paper, we revisit two spectral approximations, including truncated approximation and interpolation for Caputo fractional derivative. The two approaches have been studied to approximate Riemann-Liouville (R-L) fractional derivative by Chen et al. and Zayernouri et al. respectively in their most recent work. For truncated approximation the reconsideration partly arises from the difference between fractional derivative in R-L sense and Caputo sense: Caputo fractional derivative requires higher regularity of the unknown than R-L version. Another reason for the reconsideration is that we distinguish the differential order of the unknown with the index of Jacobi polynomials, which is not presented in the previous work. Also we provide a way to choose the index when facing multi-order problems. By using generalized Hardy's inequality, the gap between the weighted Sobolev space involving Caputo fractional derivative and the classical weighted space is bridged, then the optimal projection error is derived in the non-uniformly Jacobi-weighted Sobolev space and the maximum absolute error is presented as well. For the interpolation, analysis of interpolation error was not given in their work. In this paper we build the interpolation error in non-uniformly Jacobi-weighted Sobolev space by constructing fractional inverse inequality. With combining collocation method, the approximation technique is applied to solve fractional initial-value problems (FIVPs). Numerical examples are also provided to illustrate the effectiveness of this algorithm.
Rieche, Marie; Komenský, Tomás; Husar, Peter
2011-01-01
Radio Frequency Identification (RFID) systems in healthcare facilitate the possibility of contact-free identification and tracking of patients, medical equipment and medication. Thereby, patient safety will be improved and costs as well as medication errors will be reduced considerably. However, the application of RFID and other wireless communication systems has the potential to cause harmful electromagnetic disturbances on sensitive medical devices. This risk mainly depends on the transmission power and the method of data communication. In this contribution we point out the reasons for such incidents and give proposals to overcome these problems. Therefore a novel modulation and transmission technique called Gaussian Derivative Frequency Modulation (GDFM) is developed. Moreover, we carry out measurements to show the interference properties of different modulation schemes in comparison to our GDFM. PMID:22254771
NASA Technical Reports Server (NTRS)
Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)
2001-01-01
Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the error of the GOES winds range from approx. 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A
The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...
Mass measurement errors caused by "local" frequency perturbations in FTICR mass spectrometry.
Masselon, Christophe; Tolmachev, Aleksey V; Anderson, Gordon A; Harkewicz, Richard; Smith, Richard D
2002-01-01
One of the key qualities of mass spectrometric measurements for biomolecules is the mass measurement accuracy (MMA) obtained. FTICR presently provides the highest MMA over a broad m/z range. However, due to space charge effects, the achievable MMA crucially depends on the number of ions trapped in the ICR cell for a measurement. Thus, beyond some point, as the effective sensitivity and dynamic range of a measurement increase, MMA tends to decrease. While analyzing deviations from the commonly used calibration law in FTICR we have found systematic errors which are not accounted for by a "global" space charge correction approach. The analysis of these errors and their dependence on charge population and post-excite radius have led us to conclude that each ion cloud experiences a different interaction with other ion clouds. We propose a novel calibration function which is shown to provide an improvement in MMA for all the spectra studied.
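The commonly used calibration law referred to here is typically of the form m/z = A/f + B/f^2, where the B term absorbs a global space-charge shift, which is exactly what the authors find insufficient for local perturbations. A least-squares fit sketch on hypothetical calibrants:

```python
import numpy as np

def fit_calibration(freqs, mz_refs):
    """Least-squares fit of the common FTICR calibration law
        m/z = A/f + B/f**2
    to reference (cyclotron frequency, m/z) pairs. B absorbs a *global*
    space-charge correction; per-cloud ("local") perturbations would require
    a richer calibration function, as the paper proposes."""
    X = np.column_stack([1.0 / freqs, 1.0 / freqs**2])
    (A, B), *_ = np.linalg.lstsq(X, mz_refs, rcond=None)
    return A, B

# hypothetical calibrant frequencies (Hz) and the m/z values they imply
A_true, B_true = 1.0e8, -6.0e8
f = np.array([2.0e5, 1.5e5, 1.0e5, 7.0e4])
mz = A_true / f + B_true / f**2
A, B = fit_calibration(f, mz)
print(A, B)  # recovers A_true, B_true on noise-free data
```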
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong
2011-01-01
MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters due to pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.
NASA Technical Reports Server (NTRS)
Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo
2016-01-01
The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop size
NASA Technical Reports Server (NTRS)
Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo
2016-01-01
The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer [1]). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link [2]. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop size distribution.
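The frequency-scaling idea above can be sketched with a simple power-law specific-attenuation model. The coefficients below are illustrative placeholders only (not calibrated ITU-R P.838 values, and not the DSD-derived factors used in the study):

```python
def specific_attenuation(rain_rate, k, alpha):
    """Specific attenuation (dB/km) from the power law gamma = k * R**alpha."""
    return k * rain_rate ** alpha

def scaling_factor(rain_rate, k20=0.075, a20=1.10, k40=0.35, a40=0.94):
    """Ratio of 40 GHz to 20 GHz specific attenuation at rain rate R (mm/h).

    The k and alpha values are hypothetical round numbers for illustration.
    """
    return (specific_attenuation(rain_rate, k40, a40)
            / specific_attenuation(rain_rate, k20, a20))

if __name__ == "__main__":
    for rr in (1.0, 10.0, 50.0):
        print(rr, scaling_factor(rr))
```

With a40 < a20 (as assumed here), the scaling factor decreases as rain rate grows, which is one reason a single fixed scaling ratio fits poorly across attenuation levels.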
The effects of random errors in rawinsonde data on derived kinematic quantities
NASA Technical Reports Server (NTRS)
Belt, C. L.; Fuelberg, H. E.
1982-01-01
The sensitivity of kinematic parameters to random errors contained in rawinsonde data is assessed. Parameters under consideration include relative vorticity, vorticity advection, horizontal divergence, kinematic vertical motion, and temperature advection. It is shown that horizontal divergence is the most affected, with reliability a function of height. Vorticity advection is the next most altered by random error, and in this case, the effects of wind errors are greater on the gradient of vorticity than on the vorticity itself. Vertical motions are most affected by random perturbations at 300 mb and above, and temperature advection is found to be the least sensitive to random perturbations.
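The sensitivity of such kinematic quantities to random wind errors can be probed numerically. A minimal sketch (centered differences on a uniform grid, plus an assumed Monte Carlo perturbation of the winds; not the authors' actual procedure) is:

```python
import numpy as np

def divergence(u, v, dx, dy):
    """Horizontal divergence du/dx + dv/dy by centered finite differences."""
    return np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)

def vorticity(u, v, dx, dy):
    """Relative vorticity dv/dx - du/dy by centered finite differences."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)

def error_spread(u, v, dx, dy, sigma=1.0, n=200, seed=0):
    """Std dev of central-point divergence when both wind components carry
    independent random errors of standard deviation sigma (m/s)."""
    rng = np.random.default_rng(seed)
    c = u.shape[0] // 2
    samples = [divergence(u + rng.normal(0.0, sigma, u.shape),
                          v + rng.normal(0.0, sigma, v.shape), dx, dy)[c, c]
               for _ in range(n)]
    return float(np.std(samples))
```

Comparing `error_spread` for divergence against the analogous spread for vorticity or advection terms reproduces the kind of sensitivity ranking the abstract describes.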
NASA Astrophysics Data System (ADS)
Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.
2011-12-01
Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates, such as vegetation height, snow depth, and floodplain boundaries, rely on the accurate creation of digital terrain models (DTM). Because an accurate DTM is essential when using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots surveyed at 0.5 m spacing in a semi-arid catchment, together with 35 predictor variables, were used to train the Random Forests algorithm to spatially predict vertical error within a LiDAR-derived DTM. The final model was used to predict the combined error in snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows a statistical quantification of the spatially distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
Estimation of errors in measurement of stationary signals from a continuous frequency band spectrum
NASA Technical Reports Server (NTRS)
Ivanov, V. A.
1973-01-01
The design of an apparatus for frequency analysis of signals with continuous spectra is reported. Filter statistical characteristics are used to expand the dynamic range to 80 dB or more, or to limit the input signal spectra. A series connection of several band filters gives the most effective results.
Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.
2015-01-01
Purpose: To report the methodology and findings of a large-scale investigation of the burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults in the UK. Methods: UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results: Fifty-four percent of participants aged 40–69 years had refractive error. Specifically, 27% had myopia (4% high myopia), which was more common amongst younger people, those of higher socio-economic status, higher educational attainment, or of White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years increasing to 46% at 65–69 years), was higher in women, and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions: Refractive error is a significant public health issue for the UK and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771
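Categorisation by mean spherical equivalent can be sketched as follows. The cut-off values are common conventions assumed for illustration, not necessarily the exact UK Biobank definitions:

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """Mean spherical equivalent (dioptres) = sphere + cylinder / 2."""
    return sphere_d + cylinder_d / 2.0

def categorise(mse_d):
    """Classify refractive error from the mean spherical equivalent (D).

    Thresholds (-6.0, -0.5, +0.5 D) are assumed conventional cut-offs.
    """
    if mse_d <= -6.0:
        return "high myopia"
    if mse_d <= -0.5:
        return "myopia"
    if mse_d >= 0.5:
        return "hypermetropia"
    return "emmetropia"
```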
Lower Bounds on the Frequency Estimation Error in Magnetically Coupled MEMS Resonant Sensors.
Paden, Brad E
2016-02-01
MEMS inductor-capacitor (LC) resonant pressure sensors have revolutionized the treatment of abdominal aortic aneurysms. In contrast to electrostatically driven MEMS resonators, these magnetically coupled devices are wireless so that they can be permanently implanted in the body and can communicate to an external coil via pressure-induced frequency modulation. Motivated by the importance of these sensors in this and other applications, this paper develops relationships among sensor design variables, system noise levels, and overall system performance. Specifically, new models are developed that express the Cramér-Rao lower bound for the variance of resonator frequency estimates in terms of system variables through a system of coupled algebraic equations, which can be used in design and optimization. Further, models are developed for a novel mechanical resonator in addition to the LC-type resonators.
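For intuition on Cramér-Rao bounds for frequency estimation, the textbook bound for a single sinusoid in white Gaussian noise (the Rife-Boorstyn form, used here as a simpler stand-in for the paper's coupled-equation models) can be sketched as:

```python
import math

def freq_crlb_std_hz(snr, n_samples, fs_hz):
    """Cramer-Rao lower bound (standard deviation, Hz) on the frequency
    estimate of a single sinusoid in white Gaussian noise.

    snr is the signal-to-noise power ratio A**2 / (2 * sigma**2); the
    Rife-Boorstyn bound then gives
    var(f) >= 6 * fs**2 / ((2*pi)**2 * snr * N * (N**2 - 1)).
    """
    var = 6.0 * fs_hz ** 2 / (
        (2.0 * math.pi) ** 2 * snr * n_samples * (n_samples ** 2 - 1))
    return math.sqrt(var)
```

The N**3 dependence in the denominator is why longer interrogation windows improve frequency readout far faster than raw SNR does.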
NASA Astrophysics Data System (ADS)
Anderson, K.; Dungan, J. L.
2008-12-01
vegetation. The grey panel data showed a wavelength-dependent pattern, similar to the NEdL laboratory trend, but subsequent error propagation of laboratory-derived NEdL through to a reflectance factor showed that the laboratory characterisation was unable to account for all of the uncertainty measured in the field. Therefore the estimate of u gained from field data more closely represents the reproducibility of measurements where atmospheric, solar zenith and instrument-related uncertainties are combined. Results on vegetation u showed a stronger wavelength dependency, with higher standard uncertainties beyond the vegetation red-edge than in visible wavelengths (maximum = 0.015 at 800 nm, and 0.004 at 550 nm). The results demonstrate that standard uncertainties of field reflectance data have a spectral dependence and exceed laboratory-derived estimates of instrument "noise". Uncertainty of this type must be taken into account when statistically testing for differences in field spectra. Improved reporting of standard uncertainties from field experiments will foster progress in remote sensing science.
An analysis of perceptual errors in reading mammograms using quasi-local spatial frequency spectra.
Mello-Thoms, C; Dunn, S M; Nodine, C F; Kundel, H L
2001-09-01
In this pilot study the authors examined areas on a mammogram that attracted the visual attention of experienced mammographers and mammography fellows, as well as areas that were reported to contain a malignant lesion, and, based on their spatial frequency spectrum, they characterized these areas by the type of decision outcome that they yielded: true-positives (TP), false-positives (FP), true-negatives (TN), and false-negatives (FN). Five 2-view (craniocaudal and medial-lateral oblique) mammogram cases were examined by 8 experienced observers, and the eye position of the observers was tracked. The observers were asked to report the location and nature of any malignant lesions present in the case. The authors analyzed each area in which either the observer made a decision or in which the observer had prolonged (>1,000 ms) visual dwell using wavelet packets, and characterized these areas in terms of the energy contents of each spatial frequency band. It was shown that each decision outcome is characterized by a specific profile in the spatial frequency domain, and that these profiles are significantly different from one another. As a consequence of these differences, the profiles can be used to determine which type of decision a given observer will make when examining the area. Computer-assisted perception correctly predicted up to 64% of the TPs made by the observers, 77% of the FPs, and 70% of the TNs.
Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc
2015-01-01
This paper presents a methodology for the inverse identification of linearly viscoelastic material parameters in the context of steady-state dynamics using interior data. The inverse problem of viscoelasticity imaging is solved by minimizing a modified error in constitutive equation (MECE) functional, subject to the conservation of linear momentum. The treatment is applicable to configurations where boundary conditions may be partially or completely underspecified. The MECE functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, and also incorporates the measurement data in a quadratic penalty term. Regularization of the problem is achieved through a penalty parameter in combination with the discrepancy principle due to Morozov. Numerical results demonstrate the robust performance of the method in situations where the available measurement data is incomplete and corrupted by noise of varying levels. PMID:26388656
A high-frequency analysis of radome-induced radar pointing error
NASA Astrophysics Data System (ADS)
Burks, D. G.; Graf, E. R.; Fahey, M. D.
1982-09-01
An analysis is presented of the effect of a tangent ogive radome on the pointing accuracy of a monopulse radar employing an aperture antenna. The radar is assumed to be operating in the receive mode, and the incident fields at the antenna are found by a ray tracing procedure. Rays entering the antenna aperture by direct transmission through the radome and by single reflection from the radome interior are considered. The radome wall is treated as being locally planar. The antenna can be scanned in two angular directions, and two orthogonal polarization states which produce an arbitrarily polarized incident field are considered. Numerical results are presented for both in-plane and cross-plane errors as a function of scan angle and polarization.
High Frequency Variations in Earth Orientation Derived From GNSS Observations
NASA Astrophysics Data System (ADS)
Weber, R.; Englich, S.; Snajdrova, K.; Boehm, J.
2006-12-01
Current observations gained by the space geodetic techniques, especially VLBI, GPS and SLR, allow for the determination of Earth Orientation Parameters (EOPs - polar motion, UT1/LOD, nutation offsets) with unprecedented accuracy and temporal resolution. This presentation focuses on contributions to the EOP recovery provided by satellite navigation systems (primarily GPS). The IGS (International GNSS Service), for example, currently provides daily polar motion with an accuracy of less than 0.1 mas and LOD estimates with an accuracy of a few microseconds. To study more rapid variations in polar motion and LOD, we first established a high-resolution (hourly) ERP time series from GPS observation data of the IGS network covering the period from the beginning of 2005 until March 2006. The calculations were carried out by means of the Bernese GPS Software V5.0, considering observations from a subset of 79 fairly stable stations out of the IGb00 reference frame sites. From these ERP time series the amplitudes of the major diurnal and semidiurnal variations caused by ocean tides are estimated. After correcting the series for ocean tides, the remaining geodetically observed excitation is compared with variations of atmospheric excitation (AAM). To study the sensitivity of the estimates with respect to the applied mapping function, we applied both the widely used NMF (Niell Mapping Function) and the VMF1 (Vienna Mapping Function 1). In addition, based on computations covering two months in 2005, the potential improvement due to the use of additional GLONASS data will be discussed. Finally, satellite techniques are also able to provide nutation offset rates with respect to the most recent nutation model. Based on GPS observations from 2005, we established nutation rate time series and subsequently derived the amplitudes of several nutation waves with periods less than 30 days. The results are compared to VLBI estimates processed by means of the OCCAM 6.1 software.
NASA Astrophysics Data System (ADS)
Alpar, M. Ali
2016-10-01
The correlation between the frequency and the absolute value of the frequency derivative of the kilohertz quasi-periodic oscillations (QPOs) observed for the first time from 4U 1636-53 is a simple consequence and indicator of the existence of a non-Keplerian rotation rate in the accretion disc boundary layer. This Letter interprets the observed correlation, showing that the observations provide strong evidence in support of the fundamental assumption of disc accretion models around slow rotators, that the boundary layer matches the Keplerian disc to the neutron star magnetosphere.
A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data
NASA Astrophysics Data System (ADS)
Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana
2016-09-01
A semiempirical method for estimating the error and optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. Also, the uncertainty has been separated into the variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, equipped with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ~0.9 and ~41 mm. The method sub-samples the profile for error minimization. The result is the minimum error and the optimum number of levels. The results obtained in the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV, whereas the optimum number of levels (N0) varies inversely with it. The value of N0 is less than 400 for 77 % of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7 % and the 90th percentile P90 = 4.6 %. These error levels therefore hold provided that a radiosounding samples at least N0 uniform vertical levels, which depend on the water vapour content.
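The underlying PWV determination reduces to a vertical integral of specific humidity over pressure, PWV = (1 / (rho_w * g)) * integral of q dp. A minimal sketch (trapezoidal integration over assumed profile levels) is:

```python
G = 9.80665      # standard gravity, m s-2
RHO_W = 1000.0   # density of liquid water, kg m-3

def pwv_mm(pressure_pa, q_kg_kg):
    """Precipitable water vapour (mm) from a sounding profile.

    pressure_pa: pressures in Pa, ordered surface first (decreasing upward).
    q_kg_kg: specific humidity (kg/kg) at the same levels.
    """
    total = 0.0
    for i in range(len(pressure_pa) - 1):
        dp = pressure_pa[i] - pressure_pa[i + 1]          # > 0 going up
        q_mean = 0.5 * (q_kg_kg[i] + q_kg_kg[i + 1])     # trapezoidal rule
        total += q_mean * dp
    return total / (RHO_W * G) * 1000.0                   # metres -> mm
```

Sub-sampling the profile to fewer levels and recomputing `pwv_mm` is the kind of operation the proposed method optimises when trading sampling error against measurement uncertainty.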
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation
NASA Astrophysics Data System (ADS)
de Haan, Siebren
2016-08-01
Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated observation errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained utilizing information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
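A minimal sketch of the classical triple-collocation estimator (assuming three collocated series of the same quantity with independent, zero-mean errors and no inter-system biases) is:

```python
import numpy as np

def tc_error_variances(x, y, z):
    """Error variance of each of three collocated measurement systems.

    Classical triple collocation: with independent errors, the error
    variance of system x is Cov(x - y, x - z), and cyclically for y and z.
    """
    x, y, z = map(np.asarray, (x, y, z))
    ex2 = np.cov(x - y, x - z)[0, 1]
    ey2 = np.cov(y - x, y - z)[0, 1]
    ez2 = np.cov(z - x, z - y)[0, 1]
    return ex2, ey2, ez2
```

A quick synthetic check: add noise of known variance to a common "truth" series and verify the estimator recovers each system's error variance.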
NASA Astrophysics Data System (ADS)
Hashiguchi, Koji; Abe, Hisashi
2016-11-01
We have experimentally evaluated the accuracy of the frequency measured using a commonly used wavelength meter in the near-infrared region, which was calibrated in the visible region. An error of approximately 50 MHz was observed in the frequency measurement using the wavelength meter in the near-infrared region although the accuracy specified in the catalogue was 20 MHz. This error was attributable to residual moisture inside the Fizeau interferometer of the wavelength meter. A simple method to avoid the error is proposed.
NASA Astrophysics Data System (ADS)
Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.
2011-01-01
Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater errors; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS timing accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS timing solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on timing solutions for fixed (static) and mobile (moving) situations are presented. The greatest timing accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. The mobile GPS timing solutions are most
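The geometry penalty of raising the elevation mask can be illustrated with a dilution-of-precision computation. The constellation below is hypothetical and the frame is local east-north-up; this is a sketch of the general DOP mechanism, not the paper's processing:

```python
import numpy as np

def gdop(elev_deg, azim_deg):
    """Geometric dilution of precision from satellite elevations/azimuths."""
    el = np.radians(elev_deg)
    az = np.radians(azim_deg)
    # Geometry matrix: line-of-sight unit vectors (E, N, U) plus clock column.
    G = np.column_stack([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el),
                         np.ones_like(el)])
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

def gdop_with_mask(elev_deg, azim_deg, mask_deg):
    """GDOP after discarding satellites below the elevation mask."""
    elev = np.asarray(elev_deg)
    keep = elev >= mask_deg
    return gdop(elev[keep], np.asarray(azim_deg)[keep])
```

Discarding rows can only shrink G.T @ G, so a higher mask never improves GDOP; the trade-off in the study is between this geometric penalty and the reduced ionospheric and multipath error of high-elevation signals.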
General-form 3-3-3 interpolation kernel and its simplified frequency-response derivation
NASA Astrophysics Data System (ADS)
Deng, Tian-Bo
2016-11-01
An interpolation kernel is required in a wide variety of signal-processing applications such as image interpolation and timing adjustment in digital communications. This article presents a general-form interpolation kernel, called the 3-3-3 interpolation kernel, and derives its frequency response in closed form using a simple derivation method. The kernel is formed from third-degree piecewise polynomials and is an even-symmetric function, so it suffices to consider only its right-hand side when deriving the frequency response. Since the right-hand side contains three piecewise polynomials of the third degree, i.e. the degrees of the three pieces are (3,3,3), we call it the 3-3-3 interpolation kernel. Once the general-form frequency-response formula is derived, the design of various 3-3-3 interpolation kernels subject to a set of design constraints, targeted at different interpolation applications, can be formulated systematically. The closed-form frequency-response expression is thus a prerequisite for the optimal design of such kernels, and an example illustrates the optimal design of a 3-3-3 interpolation kernel based on it.
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
Suzuki, Hirokazu; Kobayashi, Jyumpei; Wada, Keisuke; Furukawa, Megumi; Doi, Katsumi
2015-01-01
Thermostability is an important property of enzymes utilized for practical applications because it allows long-term storage and use as catalysts. In this study, we constructed an error-prone strain of the thermophile Geobacillus kaustophilus HTA426 and investigated thermoadaptation-directed enzyme evolution using the strain. A mutation frequency assay using the antibiotics rifampin and streptomycin revealed that G. kaustophilus had substantially higher mutability than Escherichia coli and Bacillus subtilis. The predominant mutations in G. kaustophilus were A · T→G · C and C · G→T · A transitions, implying that the high mutability of G. kaustophilus was attributable in part to high-temperature-associated DNA damage during growth. Among the genes that may be involved in DNA repair in G. kaustophilus, deletions of the mutSL, mutY, ung, and mfd genes markedly enhanced mutability. These genes were subsequently deleted to construct an error-prone thermophile that showed much higher (700- to 9,000-fold) mutability than the parent strain. The error-prone strain was auxotrophic for uracil owing to the fact that the strain was deficient in the intrinsic pyrF gene. Although the strain harboring Bacillus subtilis pyrF was also essentially auxotrophic, cells became prototrophic after 2 days of culture under uracil starvation, generating B. subtilis PyrF variants with an enhanced half-denaturation temperature of >10°C. These data suggest that this error-prone strain is a promising host for thermoadaptation-directed evolution to generate thermostable variants from thermolabile enzymes.
A Benchmark Study on Error Assessment and Quality Control of CCS Reads Derived from the PacBio RS.
Jiao, Xiaoli; Zheng, Xin; Ma, Liang; Kutty, Geetha; Gogineni, Emile; Sun, Qiang; Sherman, Brad T; Hu, Xiaojun; Jones, Kristine; Raley, Castle; Tran, Bao; Munroe, David J; Stephens, Robert; Liang, Dun; Imamichi, Tomozumi; Kovacs, Joseph A; Lempicki, Richard A; Huang, Da Wei
2013-07-31
PacBio RS, a newly emerging third-generation DNA sequencing platform, is based on a real-time, single-molecule, nano-nitch sequencing technology that can generate very long reads (up to 20 kb), in contrast to the shorter reads produced by first- and second-generation sequencing technologies. Because the platform is new, it is important to assess the sequencing error rate, as well as the quality control (QC) parameters associated with the PacBio sequence data. In this study, a mixture of 10 previously known, closely related DNA amplicons was sequenced using the PacBio RS sequencing platform. After aligning Circular Consensus Sequence (CCS) reads derived from the above sequencing experiment to the known reference sequences, we found that the median error rate was 2.5% without read QC, improving to 1.3% with an SVM-based multi-parameter QC method. In addition, a de novo assembly was used as a downstream application to evaluate the effects of the different QC approaches. This benchmark study indicates that even though CCS reads are post-error-corrected, it is still necessary to perform appropriate QC on CCS reads in order to produce successful downstream bioinformatics analytical results.
NASA Astrophysics Data System (ADS)
Congedo, Giuseppe
2015-04-01
The measurement of frequency shifts for light beams exchanged between two test masses nearly in free fall is at the heart of gravitational-wave detection. It is envisaged that the derivative of the frequency shift is in fact limited by differential forces acting on those test masses. We calculate the derivative of the frequency shift with a fully covariant, gauge-independent and coordinate-free method. This method is general and does not require a congruence of nearby beams' null geodesics as done in previous work. We show that the derivative of the parallel transport is the only means by which gravitational effects show up in the frequency shift. This contribution is given as an integral of the Riemann tensor, the only physical observable of curvature, along the beam's geodesic. The remaining contributions are the difference of velocities, the difference of nongravitational forces, and finally fictitious forces, either locally at the test masses or nonlocally integrated along the beam's geodesic. As an application relevant to gravitational-wave detection, we work out the frequency shift in the local Lorentz frame of nearby geodesics.
GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm
NASA Technical Reports Server (NTRS)
Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.
2003-01-01
The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins Bands, the TOMS algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid 2000. The 1b detector appears to be quite well behaved throughout this time period.
NASA Astrophysics Data System (ADS)
Barzaghi, R.; Gatti, A.; Reguzzoni, M.; Venuti, G.
2012-04-01
The global height datum problem, that is, the determination of the biases of different height systems at global scale, is revisited and two solutions are proposed. As is well known, biased heights enter into the computation of terrestrial gravity anomalies, which in turn are used for geoid determination. Hence, these biases also enter, as a secondary or indirect effect, into such a geoid model. In contrast to terrestrial gravity anomalies, gravity and geoid models derived from satellite gravity missions, in particular GRACE and GOCE, do not suffer from these inconsistencies. Thus, such models can be profitably used to estimate the existing height system biases. Two approaches have been studied. The first compares the gravity potential coefficients in the degree range from 100 to 200 of an unbiased gravity field from GOCE with those of the combined model EGM2008, which in this range are affected by the height biases. The second compares height anomalies derived from GNSS ellipsoidal heights and biased normal heights with anomalies derived from an anomalous potential that combines a satellite-only model up to degree 200 and a high-resolution global model above degree 200. Numerical tests have been devised to prove the effectiveness of the two methods in terms of the variances of the biases to be estimated. This error budget analysis depends on the observation accuracies as well as on their number and spatial distribution. The impact of the error covariance structure of the GOCE and EGM2008 models has been evaluated, together with the impact of the observation network design.
NASA Technical Reports Server (NTRS)
Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.
2011-01-01
Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
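Trend-detection times of the kind discussed here are conventionally computed with the formula of Weatherhead et al. (1998). The sketch below uses that published formula; the paper's exact computation and input values are not given in this abstract, so the inputs are illustrative.

```python
import math

def years_to_detect_trend(trend_per_year, noise_sd, lag1_autocorr):
    """Approximate years to detect a linear trend at 95% confidence with
    ~90% power (Weatherhead et al., 1998). trend_per_year and noise_sd
    must share units; lag1_autocorr is the lag-1 autocorrelation of the
    monthly anomalies. Inputs here are illustrative, not the paper's."""
    factor = math.sqrt((1 + lag1_autocorr) / (1 - lag1_autocorr))
    return ((3.3 * noise_sd / abs(trend_per_year)) * factor) ** (2.0 / 3.0)
```

Instrument random error adds to natural variability in quadrature inside noise_sd, so when natural variability dominates (as in upper-tropospheric water vapor), shrinking the instrument error barely changes the detection time, consistent with the abstract's conclusion that measurement frequency matters more than random error.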
NASA Astrophysics Data System (ADS)
Bartsotas, Nikolaos S.; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Kallos, George
2015-04-01
Mountainous regions account for a significant part of the Earth's surface. Such areas are persistently affected by heavy precipitation episodes, which induce flash floods and landslides. Because in-situ observations are inadequate, remote sensing rainfall estimates have become central to the analysis of these events; in many mountainous regions worldwide they are the only available data source. However, well-known issues of remote sensing techniques over mountainous areas, such as the strong underestimation of precipitation associated with low-level orographic enhancement, limit the way these estimates can accommodate operational needs. Even locations within the range of weather radars suffer from strong biases in precipitation estimates due to terrain blockage and vertical rainfall profile issues. A novel approach to reducing the error in quantitative precipitation estimates is to use high-resolution numerical simulations to derive error correction functions for corresponding satellite precipitation data. The correction functions examined consist of (1) mean field bias adjustment and (2) pdf matching, two procedures that are simple and have been widely used in gauge-based adjustment techniques. For the needs of this study, more than 15 selected storms over the mountainous Upper Adige region of Northern Italy were simulated at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS), benefiting from its explicit cloud microphysics scheme, prognostic treatment of natural pollutants such as dust and sea salt, and the detailed SRTM90 topography implemented in the model. The proposed error correction approach is applied to three quasi-global and widely used satellite precipitation datasets (CMORPH, TRMM 3B42 V7 and PERSIANN), and the evaluation of the error model is based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge / 70 km2
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
de Waal, Eric; Mak, Winifred; Calhoun, Sondra; Stein, Paula; Ord, Teri; Krapp, Christopher; Coutifaris, Christos; Schultz, Richard M; Bartolomei, Marisa S
2014-02-01
Assisted reproductive technologies (ART) have enabled millions of couples with compromised fertility to conceive children. Nevertheless, there is a growing concern regarding the safety of these procedures due to an increased incidence of imprinting disorders, premature birth, and low birth weight in ART-conceived offspring. An integral aspect of ART is the oxygen concentration used during in vitro development of mammalian embryos, which is typically either atmospheric (~20%) or reduced (5%). Both oxygen tension levels have been widely used, but 5% oxygen improves preimplantation development in several mammalian species, including that of humans. To determine whether a high oxygen tension increases the frequency of epigenetic abnormalities in mouse embryos subjected to ART, we measured DNA methylation and expression of several imprinted genes in both embryonic and placental tissues from concepti generated by in vitro fertilization (IVF) and exposed to 5% or 20% oxygen during culture. We found that placentae from IVF embryos exhibit an increased frequency of abnormal methylation and expression profiles of several imprinted genes, compared to embryonic tissues. Moreover, IVF-derived placentae exhibit a variety of epigenetic profiles at the assayed imprinted genes, suggesting that these epigenetic defects arise by a stochastic process. Although culturing embryos in both of the oxygen concentrations resulted in a significant increase of epigenetic defects in placental tissues compared to naturally conceived controls, we did not detect significant differences between embryos cultured in 5% and those cultured in 20% oxygen. Thus, further optimization of ART should be considered to minimize the occurrence of epigenetic errors in the placenta. PMID:24337315
Alicki, Robert; Lidar, Daniel A.; Zanardi, Paolo
2006-05-15
We critically examine the internal consistency of a set of minimal assumptions entering the theory of fault-tolerant quantum error correction for Markovian noise. These assumptions are fast gates, a constant supply of fresh and cold ancillas, and a Markovian bath. We point out that these assumptions may not be mutually consistent in light of rigorous formulations of the Markovian approximation. Namely, Markovian dynamics requires either the singular coupling limit (high temperature), or the weak coupling limit (weak system-bath interaction). The former is incompatible with the assumption of a constant and fresh supply of cold ancillas, while the latter is inconsistent with fast gates. We discuss ways to resolve these inconsistencies. As part of our discussion we derive, in the weak coupling limit, a new master equation for a system subject to periodic driving.
NASA Astrophysics Data System (ADS)
Iwata, Y.; Yamada, S.; Murakami, T.; Fujimoto, T.; Fujisawa, T.; Ogawa, H.; Miyahara, N.; Yamamoto, K.; Hojo, S.; Sakamoto, Y.; Muramatsu, M.; Takeuchi, T.; Mitsumoto, T.; Tsutsui, H.; Watanabe, T.; Ueda, T.
2008-05-01
A compact injector for a heavy-ion medical-accelerator complex was developed. It consists of an electron-cyclotron-resonance ion source (ECRIS) and two linacs: a radio-frequency-quadrupole (RFQ) linac and an interdigital H-mode drift-tube linac (IH-DTL). Beam acceleration tests of the compact injector were performed, and the designed beam quality was verified by the measured results, as reported earlier. Because the method of alternating-phase focusing (APF) was used for beam focusing of the IH-DTL, the motion of beam ions would be sensitive to gap-voltage errors, caused during tuning of the gap-voltage distribution and by automatic frequency tuning in actual operation. To study the effects of voltage errors on beam quality, further measurements were performed during the acceleration tests. In this report, the effects of voltage errors on the APF IH-DTL are discussed.
Low frequency vibrational modes of oxygenated myoglobin, hemoglobins, and modified derivatives.
Jeyarajah, S; Proniewicz, L M; Bronder, H; Kincaid, J R
1994-12-01
The low frequency resonance Raman spectra of the dioxygen adducts of myoglobin, hemoglobin, its isolated subunits, mesoheme-substituted hemoglobin, and several deuteriated heme derivatives are reported. The observed oxygen isotopic shifts are used to assign the iron-oxygen stretching (approximately 570 cm-1) and the heretofore unobserved delta (Fe-O-O) bending (approximately 420 cm-1) modes. Although the delta (Fe-O-O) is not enhanced in the case of oxymyoglobin, it is observed for all the hemoglobin derivatives, its exact frequency being relatively invariable among the derivatives. The lack of sensitivity to H2O/D2O buffer exchange is consistent with our previous interpretation of H2O/D2O-induced shifts of v(O-O) in the resonance Raman spectra of dioxygen adducts of cobalt-substituted heme proteins; namely, that those shifts are associated with alterations in vibrational coupling of v(O-O) with internal modes of proximal histidyl imidazole rather than to steric or electronic effects of H/D exchange at the active site. No evidence is obtained for enhancement of the v(Fe-N) stretching frequency of the linkage between the heme iron and the imidazole group of the proximal histidine. PMID:7983043
NASA Astrophysics Data System (ADS)
Gall, Clarence A.
1999-05-01
When an electromagnetic radiation (EMR) source is in uniform motion with respect to an observer, a spectral (Doppler) shift in frequency is seen (blue as it approaches, red as it recedes). Since special relativity is limited to coordinate systems in uniform relative motion, this theory should be subject to this condition. On the other hand, the gravitational red shift (Einstein; Relativity: The Special and the General Theory, Crown,(1961), p.129) claims that EMR frequency decreases as the gravitational field, where the source is located, increases. As a gravitational effect, one would expect its derivation from a solution of the general relativistic field equations (R_μσ=0). Up to now, it has only been possible to derive it indirectly, by comparing the gravitational field to a (centrifugal) field produced by coordinate systems in relative rotational motion as an approximation of special relativity. Since rotation implies acceleration, it does not meet the conditions of special relativity so this is unsatisfactory. This work shows that the problem lies in the Schwarzschild metric which is independent of EMR frequency. By contrast it is easy to deduce the gravitational red shift from the frequency dependent Gall metric (Gall in AIP Conference Proceedings 308, The Evolution of X-Ray Binaries,(1993), p. 87).
Galbraith, G C; Bagasan, B; Sulahian, J
2001-02-01
The human brainstem frequency-following response reflects neural activity to periodic auditory stimuli. Responses were simultaneously recorded from one vertically oriented and three horizontally oriented electrode derivations. Nine participants each received a total of 16,000 tone repetitions, 4,000 for each of four stimulus frequencies: 222, 266, 350, and 450 Hz. The responses were digitally filtered, quantified by correlation and spectral analysis, and statistically evaluated by repeated measure analysis of variance. While the various horizontal derivation responses did not differ from each other in latency (values tightly clustered around M= 2.60 msec.), the vertical derivation response occurred significantly later (M=4.38 msec.). The smaller latency for the horizontal responses suggests an origin within the acoustic nerve, while the larger latency for the vertical response suggests a central brainstem origin. The largest response amplitude resulted from gold "tiptrode" electrodes placed in each auditory meatus, suggesting that this electrode derivation provided the most accurate (noninvasive) assessment of short-latency events originating at the level of the auditory nerve. PMID:11322612
NASA Astrophysics Data System (ADS)
Zhang, Feifei; Dou, Xiankang; Sun, Dongsong; Shu, Zhifeng; Xia, Haiyun; Gao, Yuanyuan; Hu, Dongdong; Shangguan, Mingjia
2014-12-01
Direct detection Doppler wind lidar (DWL) has been demonstrated for its capability of atmospheric wind detection ranging from the troposphere to the stratosphere with high temporal and spatial resolution. We design and describe a fiber-based optical receiver for direct detection DWL. The locking error of the relative laser frequency is then analyzed, and the dependent variables turn out to be the relative error of the calibrated constant and the slope of the transmission function. For high-accuracy measurement of the calibrated constant in a fiber-based system, an integrating sphere is employed for its uniform scattering. Moreover, temporally widening the pulsed laser allows more samples to be acquired by an analog-to-digital card of the same sampling rate. The result shows a relative error of 0.7% for the calibrated constant. For the slope, a new improved locking filter for a Fabry-Perot interferometer was designed with a larger slope. With these two strategies, the locking error of the relative laser frequency is calculated to be about 3 MHz, which is equivalent to a radial velocity of about 0.53 m/s and demonstrates the effective improvement of frequency locking for a robust DWL.
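The quoted equivalence between the 3 MHz locking error and 0.53 m/s follows from the Doppler relation for backscattered light, df = 2v/lambda. The abstract does not state the laser wavelength; 355 nm (typical for direct-detection DWL) is an assumption that reproduces the quoted figure.

```python
def radial_velocity_error(freq_error_hz, wavelength_m=355e-9):
    """Radial-velocity equivalent of a laser frequency error.
    Backscatter Doppler shift: df = 2*v/lambda  =>  v = df*lambda/2.
    The 355 nm default is an assumption, not stated in the abstract."""
    return freq_error_hz * wavelength_m / 2.0
```

With these assumptions, radial_velocity_error(3e6) gives 0.5325 m/s, matching the approximately 0.53 m/s in the text.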
System for adjusting frequency of electrical output pulses derived from an oscillator
Bartholomew, David B.
2006-11-14
A system for setting and adjusting a frequency of electrical output pulses derived from an oscillator in a network is disclosed. The system comprises an accumulator module configured to receive pulses from an oscillator and to output an accumulated value. An adjustor module is configured to store an adjustor value used to correct local oscillator drift. A digital adder adds values from the accumulator module to values stored in the adjustor module and outputs their sums to the accumulator module, where they are stored. The digital adder also outputs an electrical pulse to a logic module. The logic module is in electrical communication with the adjustor module and the network. The logic module may change the value stored in the adjustor module to compensate for local oscillator drift or change the frequency of output pulses. The logic module may also keep time and calculate drift.
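The accumulator/adder/adjustor arrangement described above behaves like a numerically controlled oscillator: each oscillator tick adds the adjustor value into the accumulator, and the adder's carry-out is the output pulse, so the mean pulse rate is f_osc * adjustor / 2^width. The sketch below is illustrative; the class, method, and attribute names are mine, not the patent's.

```python
class PulseAdjuster:
    """Sketch of the accumulator/adjustor scheme (names illustrative)."""
    WIDTH = 16
    MOD = 1 << WIDTH

    def __init__(self, adjustor_value):
        self.accumulator = 0              # accumulator module
        self.adjustor = adjustor_value    # adjustor module (drift correction)

    def set_adjustor(self, value):
        # The logic module may change this value to compensate for
        # oscillator drift or to retune the output pulse frequency.
        self.adjustor = value

    def tick(self):
        """One oscillator pulse: the digital adder sums accumulator +
        adjustor, stores the sum back in the accumulator, and its
        carry-out is the electrical output pulse."""
        total = self.accumulator + self.adjustor
        self.accumulator = total % self.MOD
        return total >= self.MOD          # True = output pulse emitted

def mean_output_hz(f_osc_hz, adjustor_value, width=16):
    # Average output pulse rate of the scheme above.
    return f_osc_hz * adjustor_value / (1 << width)
```

Raising or lowering the adjustor value shifts the output frequency in steps of f_osc / 2^width, which is how the logic module can trim out local-oscillator drift without touching the oscillator itself.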
NASA Astrophysics Data System (ADS)
Thyer, Mark; Li, Jing; Lambert, Martin; Kuczera, George; Metcalfe, Andrew
2015-04-01
Flood extremes are driven by highly variable and complex climatic and hydrological processes. Derived flood frequency methods are often used to predict the flood frequency distribution (FFD) because they can provide predictions in ungauged catchments and evaluate the impact of land-use or climate change. This study presents recent work on the development of a new derived flood frequency method called the hybrid causative events (HCE) approach. The advantage of the HCE approach is that it combines the accuracy of the continuous simulation approach with the computational efficiency of the event-based approaches. Derived flood frequency methods can be divided into two classes. Event-based approaches provide fast estimation, but can also lead to prediction bias due to limitations of the inherent assumptions required for obtaining input information (rainfall and catchment wetness) for events that cause large floods. Continuous simulation produces more accurate predictions, however, at the cost of massive computational time. The HCE method uses a short continuous simulation to provide inputs for a rainfall-runoff model running in an event-based fashion. A proof-of-concept pilot study showed that the HCE produces estimates of the flood frequency distribution with accuracy similar to continuous simulation, but with dramatically reduced computation time. Recent work incorporated seasonality into the HCE approach and evaluated it with a more realistic set of eight sites from a wide range of climate zones, typical of Australia, using a virtual catchment approach. The seasonal hybrid-CE provided accurate predictions of the FFD for all sites. Comparison with the existing non-seasonal hybrid-CE showed that for some sites the non-seasonal hybrid-CE significantly over-predicted the FFD. Analysis of the underlying causes of whether a site had a high, low or no need for seasonality found that they involved a combination of factors that were difficult to predict a priori. Hence it is recommended
NASA Astrophysics Data System (ADS)
Pankratov, Oleg; Kuvshinov, Alexei
2010-04-01
Electromagnetic (EM) studies of the Earth have advanced significantly over the past few years. This progress was driven, in particular, by new developments in the methods of 3-D inversion of EM data. Due to the large scale of 3-D EM inverse problems, iterative gradient-type methods have mostly been employed. In these methods one has to calculate, multiple times, the gradient of the penalty function (a sum of misfit and regularization terms) with respect to the model parameters. However, even with modern computational capabilities the straightforward calculation of misfit gradients based on numerical differentiation is extremely time consuming. A much more efficient and elegant way to calculate the gradient of the misfit is provided by the so-called 'adjoint' approach. This is now widely used in many 3-D numerical schemes for inverting EM data of different types and origin. It allows the calculation of the misfit gradient for the price of only a few additional forward calculations. In spite of its popularity, we did not find in the literature any general description of the approach that would allow researchers to apply this methodology in a straightforward manner to their scenario of interest. In this paper, we present a formalism for the efficient calculation of the derivatives of EM frequency-domain responses and of the derivatives of the misfit with respect to variations of 3-D isotropic/anisotropic conductivity. The approach is rather general; it works with single-site responses, multisite responses and responses that include spatial derivatives of the EM field. The formalism also allows for various types of parametrization of the 3-D conductivity distribution. Using this methodology one can readily obtain appropriate formulae for the specific sounding methods. To illustrate the concept we provide such formulae for a number of EM techniques: geomagnetic depth sounding (GDS), conventional and generalized magnetotellurics, the magnetovariational method, horizontal
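The adjoint trick the authors generalize can be shown on a minimal linear forward problem. This toy example is mine, not the paper's EM formalism: for A(m)u = b and misfit J = 0.5*||Pu - d||^2, one forward solve plus one adjoint solve yields the whole gradient, versus one extra forward solve per parameter for finite differences.

```python
import numpy as np

def misfit_and_gradient(m, K, P, d, b):
    """Adjoint-state gradient for the toy forward problem
    A(m) u = b with A(m) = K + diag(m), misfit J = 0.5*||P u - d||^2.
    Since dA/dm_i = e_i e_i^T, the gradient is dJ/dm_i = -lam_i * u_i,
    where the adjoint field lam solves A^T lam = P^T (P u - d)."""
    A = K + np.diag(m)
    u = np.linalg.solve(A, b)              # one forward solve
    r = P @ u - d                          # data residual
    lam = np.linalg.solve(A.T, P.T @ r)    # one adjoint solve
    return 0.5 * r @ r, -lam * u           # misfit and full gradient
```

The same structure, with the EM forward operator in place of A and the response functional in place of P, is what lets the misfit gradient be had "for the price of only a few additional forward calculations", as the abstract puts it.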
Frequency and origins of hemoglobin S mutation in African-derived Brazilian populations.
De Mello Auricchio, Maria Teresa Balester; Vicente, João Pedro; Meyer, Diogo; Mingroni-Netto, Regina Célia
2007-12-01
Africans arrived in Brazil as slaves in great numbers, mainly after 1550. Before the abolition of slavery in Brazil in 1888, many communities, called quilombos, were formed by runaway or abandoned African slaves. These communities are presently referred to as remnants of quilombos, and many are still partially genetically isolated. These remnants can be regarded as relicts of the original African genetic contribution to the Brazilian population. In this study we assessed frequencies and probable geographic origins of hemoglobin S (HBB*S) mutations in remnants of quilombo populations in the Ribeira River valley, São Paulo, Brazil, to reconstruct the history of African-derived populations in the region. We screened for HBB*S mutations in 11 quilombo populations (1,058 samples) and found HBB*S carrier frequencies that ranged from 0% to 14%. We analyzed beta-globin gene cluster haplotypes linked to the HBB*S mutation in 86 chromosomes and found the four known African haplotypes: 70 (81.4%) Bantu (Central Africa Republic), 7 (8.1%) Benin, 7 (8.1%) Senegal, and 2 (2.3%) Cameroon haplotypes. One sickle cell homozygote was Bantu/Bantu and two homozygotes had Bantu/Benin combinations. The high frequency of the sickle cell trait and the diversity of HBB*S linked haplotypes indicate that Brazilian remnants of quilombos are interesting repositories of genetic diversity present in the ancestral African populations.
García-Martínez, V; Montes, M A; Villanueva, J; Gimenez-Molina, Y; de Toledo, G A; Gutiérrez, L M
2015-06-01
Sphingomyelin derivatives like sphingosine have been shown to enhance secretion in a variety of systems, including neuroendocrine and neuronal cells. By studying the mechanisms underlying this effect, we demonstrate here that sphingomyelin rafts co-localize strongly with clusters of the synaptosomal-associated protein of 25 kDa (SNAP-25) in cultured bovine chromaffin cells and that they appear to be linked in a dynamic manner. In functional terms, when cultured rat chromaffin cells are treated with sphingomyelinase (SMase), producing sphingomyelin derivatives, the secretion elicited by repetitive depolarizations is enhanced. This increase was independent of cell size and was significant 15 min after initiating stimulation. Interestingly, by evaluating the membrane capacitance we found that the events in control untreated cells corresponded to two populations of microvesicles and granules, and the fusion of both populations is clearly enhanced after treatment with SMase. Furthermore, SMase does not increase the size of chromaffin granules. Together, these results strongly suggest that SNARE-mediated exocytosis is enhanced by the derivatives generated by SMase, reflecting an increase in the frequency of fusion of both microvesicles and chromaffin granules rather than an increase in the size of these vesicles.
2011-01-01
Introduction: Continuous cardiac output monitoring is used for early detection of hemodynamic instability and guidance of therapy in critically ill patients. Recently, the accuracy of pulse contour-derived cardiac output (PCCO) has been questioned in different clinical situations. In this study, we examined agreement between PCCO and transcardiopulmonary thermodilution cardiac output (COTCP) in critically ill patients, with special emphasis on norepinephrine (NE) administration and the time interval between calibrations.

Methods: This prospective, observational study was performed with a sample of 73 patients (mean age, 63 ± 13 years) requiring invasive hemodynamic monitoring on a non-cardiac surgery intensive care unit. PCCO was recorded immediately before calibration by COTCP. Bland-Altman analysis was performed on data subsets comparing agreement between PCCO and COTCP according to NE dosage and the time interval between calibrations up to 24 hours. Further, central artery stiffness was calculated on the basis of the pulse pressure to stroke volume relationship.

Results: A total of 330 data pairs were analyzed. For all data pairs, the mean COTCP (±SD) was 8.2 ± 2.0 L/min. PCCO had a mean bias of 0.16 L/min with limits of agreement of -2.81 to 3.15 L/min (percentage error, 38%) when compared to COTCP. Whereas the bias between PCCO and COTCP was not significantly different between NE dosage categories or categories of time elapsed between calibrations, interchangeability (percentage error <30%) between methods was present only in the high NE dosage subgroup (≥0.1 μg/kg/min), as the percentage errors were 40%, 47% and 28% in the no NE, NE < 0.1 and NE ≥ 0.1 μg/kg/min subgroups, respectively. PCCO was not interchangeable with COTCP in subgroups of different calibration intervals. The high NE dosage group showed significantly increased central artery stiffness.

Conclusions: This study shows that NE dosage, but not the time interval between calibrations, has an
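The agreement statistics reported in this study (bias, limits of agreement, percentage error) follow the standard Bland-Altman analysis with the usual percentage-error interchangeability criterion. A sketch with illustrative data; the function name and inputs are mine:

```python
import numpy as np

def bland_altman(test_co, ref_co):
    """Bias, 95% limits of agreement, and percentage error
    (1.96*SD of the differences relative to the mean reference cardiac
    output; <30% is the usual interchangeability threshold)."""
    test_co = np.asarray(test_co, dtype=float)
    ref_co = np.asarray(ref_co, dtype=float)
    diff = test_co - ref_co
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # limits of agreement
    pct_error = 100.0 * 1.96 * sd / ref_co.mean()
    return bias, loa, pct_error
```

Note that the bias can be near zero (0.16 L/min in the study) while the percentage error still fails the 30% criterion; the two statistics answer different questions, which is why both are reported.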
NASA Astrophysics Data System (ADS)
Kuhn, Michael; Hirt, Christian
2016-09-01
In gravity forward modelling, the concept of Rock-Equivalent Topography (RET) is often used to simplify the computation of gravity implied by rock, water, ice and other topographic masses. In the RET concept, topographic masses are compressed (approximated) into equivalent rock, allowing the use of a single constant mass-density value. Many studies acknowledge the approximate character of the RET, but few have attempted yet to quantify and analyse the approximation errors in detail for various gravity field functionals and heights of computation points. Here, we provide an in-depth examination of approximation errors associated with the RET compression for the topographic gravitational potential and its first- and second-order derivatives. Using the Earth2014 layered topography suite we apply Newtonian integration in the spatial domain in the variants (a) rigorous forward modelling of all mass bodies, (b) approximative modelling using RET. The differences among both variants, which reflect the RET approximation error, are formed and studied for an ensemble of 10 different gravity field functionals at three levels of altitude (on and 3 km above the Earth's surface and at 250 km satellite height). The approximation errors are found to be largest at the Earth's surface over RET compression areas (oceans, ice shields) and to increase for the first- and second-order derivatives. Relative errors, computed here as ratio between the range of differences between both variants relative to the range in signal, are at the level of 0.06-0.08 % for the potential, ˜ 3-7 % for the first-order derivatives at the Earth's surface (˜ 0.1 % at satellite altitude). For the second-order derivatives, relative errors are below 1 % at satellite altitude, at the 10-20 % level at 3 km and reach maximum values as large as ˜ 20 to 110 % near the surface. As such, the RET approximation errors may be acceptable for functionals computed far away from the Earth's surface or studies focussing on
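The relative-error metric defined above (the range of the differences between the rigorous and RET variants divided by the range of the signal, in percent) is simple to reproduce. A sketch with illustrative inputs; the function name is mine:

```python
import numpy as np

def ret_relative_error_pct(rigorous, ret):
    """Relative RET error as defined in the study: range of the
    differences between the two forward-modelling variants divided by
    the range of the signal, expressed in percent."""
    rigorous = np.asarray(rigorous, dtype=float)
    diff = rigorous - np.asarray(ret, dtype=float)
    return 100.0 * np.ptp(diff) / np.ptp(rigorous)
```

Normalizing by the signal range (rather than its mean) is what lets the same metric compare functionals with very different magnitudes, from the potential to second-order derivatives.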
Large-scale derived flood frequency analysis based on continuous simulation
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables very long discharge series to be derived at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
An efficient causative event-based approach for deriving the annual flood frequency distribution
NASA Astrophysics Data System (ADS)
Li, Jing; Thyer, Mark; Lambert, Martin; Kuczera, George; Metcalfe, Andrew
2014-03-01
In ungauged catchments or catchments without sufficient streamflow data, derived flood frequency methods are often applied to provide the basis for flood risk assessment. The most commonly used event-based methods, such as design storm and joint probability approaches are able to give fast estimation, but can also lead to prediction bias and uncertainties due to the limitations of inherent assumptions and difficulties in obtaining input information (rainfall and catchment wetness) related to events that cause extreme floods. An alternative method is a long continuous simulation which produces more accurate predictions, but at the cost of massive computational time. In this study a hybrid method was developed to make the best use of both event-based and continuous approaches. The method uses a short continuous simulation to provide inputs for a rainfall-runoff model running in an event-based fashion. The total probability theorem is then combined with the peak over threshold method to estimate annual flood distribution. A synthetic case study demonstrates the efficacy of this procedure compared with existing methods of estimating annual flood distribution. The main advantage of the hybrid method is that it provides estimates of the flood frequency distribution with an accuracy similar to the continuous simulation approach, but with dramatically reduced computation time. This paper presents the method at the proof-of-concept stage of development and future work is required to extend the method to more realistic catchments.
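The final step, combining the total probability theorem with the peak-over-threshold method, can be illustrated under a common simplification (mine, not the paper's; the hybrid method is more general): if threshold exceedances arrive as a Poisson process at rate lambda per year and peak excesses are exponentially distributed, the annual-maximum CDF is F(q) = exp(-lambda * (1 - G(q))).

```python
import math

def annual_max_cdf(q, rate, threshold, scale):
    """P(annual maximum flood <= q) for q >= threshold, assuming Poisson
    exceedances (rate per year) with exponential excess distribution G:
        F(q) = exp(-rate * (1 - G(q)))."""
    assert q >= threshold
    survival = math.exp(-(q - threshold) / scale)   # 1 - G(q)
    return math.exp(-rate * survival)

def flood_quantile(T_years, rate, threshold, scale):
    """Invert F(q) = 1 - 1/T to get the T-year flood quantile."""
    p = -math.log(1.0 - 1.0 / T_years)
    return threshold + scale * math.log(rate / p)
```

For example, with three exceedances per year above a 100 m3/s threshold and a 50 m3/s excess scale, flood_quantile(100, 3.0, 100.0, 50.0) returns the discharge whose annual non-exceedance probability is 0.99, i.e. the 100-year flood under these illustrative parameters.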
Mardani, Mohammad; Roshankhah, Shiva; Hashemibeni, Batool; Salahshoor, Mohammadreza; Naghsh, Erfan; Esfandiari, Ebrahim
2016-01-01
Background: Since damaged cartilage (e.g., in osteoarthritis) cannot be repaired by the body, its reconstruction requires cell therapy. For this purpose, adipose-derived stem cells (ADSCs) are one of the best cell sources, because tissue-engineering techniques can differentiate them into chondrocytes. Chemical and physical inducers are required to differentiate stem cells into chondrocytes. We decided to define the role of an electric field (EF) in inducing the chondrogenesis process. Materials and Methods: A low-frequency EF was applied to ADSCs as a physical inducer of chondrogenesis in a 3D micromass culture system; the ADSCs were extracted from subcutaneous abdominal adipose tissue. Enzyme-linked immunosorbent assay, methyl thiazolyl tetrazolium, real-time polymerase chain reaction and flow-cytometry techniques were also used in this study. Results: We found that a 20-minute application of a 1 kHz, 20 mV/cm EF leads to chondrogenesis in ADSCs. Our results further suggest that simultaneous application of the physical (EF) and chemical (transforming growth factor-β3) inducers gives the best results for expression of the collagen type II and SOX9 genes. The EF also significantly decreased expression of the collagen type I and X genes. Conclusion: A low-frequency EF can be a good stimulus to promote chondrogenic differentiation of human ADSCs. PMID:27308269
The Huygens Doppler Wind Experiment - Titan Winds Derived from Probe Radio Frequency Measurements
NASA Astrophysics Data System (ADS)
Bird, M. K.; Dutta-Roy, R.; Heyl, M.; Allison, M.; Asmar, S. W.; Folkner, W. M.; Preston, R. A.; Atkinson, D. H.; Edenhofer, P.; Plettemeier, D.; Wohlmuth, R.; Iess, L.; Tyler, G. L.
2002-07-01
A Doppler Wind Experiment (DWE) will be performed during the Titan atmospheric descent of the ESA Huygens Probe. The direction and strength of Titan's zonal winds will be determined with an accuracy better than 1 m s-1 from the start of mission at an altitude of ~160 km down to the surface. The Probe's wind-induced horizontal motion will be derived from the residual Doppler shift of its S-band radio link to the Cassini Orbiter, corrected for all known orbit and propagation effects. It is also planned to record the frequency of the Probe signal using large ground-based antennas, thereby providing an additional component of the horizontal drift. In addition to the winds, DWE will obtain valuable information on the rotation, parachute swing and atmospheric buffeting of the Huygens Probe, as well as its position and attitude after Titan touchdown. The DWE measurement strategy relies on experimenter-supplied Ultra-Stable Oscillators to generate the transmitted signal from the Probe and to extract the frequency of the received signal on the Orbiter. Results of the first in-flight checkout, as well as the DWE Doppler calibrations conducted with simulated Huygens signals uplinked from ground (Probe Relay Tests), are described. Ongoing efforts to measure and model Titan's winds using various Earth-based techniques are briefly reviewed.
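The conversion behind the quoted 1 m s-1 accuracy is the first-order Doppler relation between residual frequency shift and line-of-sight speed. A minimal sketch; the ~2.04 GHz S-band carrier is an assumed illustrative value, not a number quoted in the abstract:

```python
def los_speed_from_doppler(delta_f_hz, carrier_hz=2.040e9):
    """Line-of-sight speed from a residual Doppler shift,
    non-relativistic approximation: v = c * (delta f) / f0."""
    c = 299_792_458.0  # speed of light, m/s
    return c * delta_f_hz / carrier_hz
```

At S-band, a 1 m s-1 wind corresponds to a residual shift of only about 7 Hz, which is why ultra-stable oscillators on both ends of the link are essential.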
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopy (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. The limits of agreement produced from the Bland-Altman analysis likewise indicated that the performance of the single-frequency Sun prediction equations at the population level was close to that of both BIS methods; however, comparing the Mean Absolute Percentage Error between the single-frequency prediction equations and the BIS methods yielded a significant difference, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over the 50 kHz prediction equations at both the population and the individual level, the magnitude of the improvement was small. Such a slight improvement is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example when assessing fluid overload in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
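The two agreement metrics named above can be computed as follows; a minimal sketch of the standard definitions, with no claim about the study's exact preprocessing:

```python
import numpy as np

def mape(reference, predicted):
    """Mean Absolute Percentage Error, in percent."""
    reference = np.asarray(reference, float)
    predicted = np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((predicted - reference) / reference))

def bland_altman_limits(reference, predicted):
    """Bland-Altman bias and 95% limits of agreement (bias +/- 1.96 SD)."""
    diff = np.asarray(predicted, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

The Bland-Altman limits describe population-level agreement, while MAPE penalizes every individual deviation, which is why the two metrics can rank methods differently, as reported above.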
Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique
2016-01-01
Broadband electromagnetic frequency- or time-domain sensor techniques have high potential for quantitative water content monitoring in porous media. Prior to in situ application, the relationship between the broadband electromagnetic properties of the porous material (clay rock), the water content, and the frequency- or time-domain sensor response must be characterized. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency-domain finite-element field calculations to model the one-port broadband frequency- or time-domain transfer function for a three-rod sensor embedded in the clay rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel-time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel-time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling. PMID:27096865
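A minimal sketch of the two water-content models named above. The grain permittivity and the exponent alpha = 0.5 (the CRIM special case of the LRM) are assumed illustrative values, not parameters taken from the paper:

```python
def topp_vwc(eps):
    """Empirical Topp et al. (1980) equation: volumetric water content
    from the apparent relative permittivity."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

def lrm_eps(theta, porosity, eps_solid=5.0, eps_water=80.0,
            eps_air=1.0, alpha=0.5):
    """Forward Lichtenecker-Rother mixing rule for a three-phase medium
    (solid, water, air): eps_eff**alpha is the volume-weighted sum of
    the phase permittivities raised to alpha."""
    s = ((1.0 - porosity) * eps_solid**alpha
         + theta * eps_water**alpha
         + (porosity - theta) * eps_air**alpha)
    return s ** (1.0 / alpha)

def lrm_vwc(eps_eff, porosity, eps_solid=5.0, eps_water=80.0,
            eps_air=1.0, alpha=0.5):
    """Inverse of the mixing rule, solved for volumetric water content."""
    num = (eps_eff**alpha - (1.0 - porosity) * eps_solid**alpha
           - porosity * eps_air**alpha)
    den = eps_water**alpha - eps_air**alpha
    return num / den
```

The LRM inversion is where the "appropriate porosity" of the abstract enters: the same measured permittivity maps to different water contents for different porosities, which the purely empirical Topp equation cannot account for.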
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
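The sampling-error concept can be illustrated directly: subsample a rainfall series at the satellite revisit interval and compare the resulting mean with the true mean. The gamma-distributed hourly series below is a hypothetical stand-in for intermittent tropical rainfall, not GATE data:

```python
import numpy as np

def relative_sampling_error(series, revisit):
    """Percent error of the mean estimated from intermittent overpasses
    (one sample every `revisit` steps) relative to the true mean."""
    series = np.asarray(series, float)
    return 100.0 * (series[::revisit].mean() - series.mean()) / series.mean()

# Hypothetical hourly area-averaged rain rate for one month (mm/h):
# mostly near zero, with occasional intense events (parameters are
# illustrative only).
rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.1, scale=5.0, size=30 * 24)

err_12h = relative_sampling_error(rain, 12)  # one overpass every 12 h
```

Repeating such an experiment over many simulated months, as the study does with a stochastic rain model tuned to GATE statistics, yields the distribution of sampling errors rather than a single realization.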
NASA Astrophysics Data System (ADS)
Ou, Z. W.; Tong, H.; Kou, F. F.; Ding, G. Q.
2016-04-01
Eight pulsars have low braking indices, which challenge the magnetic dipole braking model of pulsars. 222 pulsars and 15 magnetars have an anomalous distribution of frequency second derivatives, which also contradicts the classical understanding. How neutron star magnetospheric activities affect these two phenomena is investigated using the wind braking model of pulsars. It is based on the observational evidence that pulsar timing is correlated with emission and that both aspects reflect the magnetospheric activities. Fluctuations are unavoidable for a physical neutron star magnetosphere. Young pulsars have meaningful braking indices, while in old pulsars and magnetars the fluctuation term dominates the frequency second derivative. The model can explain both the braking index and the frequency second derivative of pulsars uniformly. The braking indices of the eight pulsars are the combined effect of magnetic dipole radiation and a particle wind. During the lifetime of a pulsar, its braking index will evolve from three to one. Pulsars with a low braking index may put a strong constraint on the particle acceleration process in the neutron star magnetosphere. The effect of pulsar death should be considered during the long-term rotational evolution of pulsars. An equation like the Langevin equation for Brownian motion was derived for pulsar spin-down. The fluctuation in the neutron star magnetosphere can be either periodic or random; both result in an anomalous frequency second derivative, with similar outcomes. The magnetospheric activities of magnetars are always stronger than those of normal pulsars.
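The braking index discussed above is the standard combination of the spin frequency and its first two derivatives. A minimal sketch, with a self-consistency check against a power-law spin-down (the coefficient k below is an arbitrary illustrative value):

```python
def braking_index(nu, nu_dot, nu_ddot):
    """Observed braking index n = nu * nu_ddot / nu_dot**2.
    Pure magnetic dipole braking gives n = 3; a pure particle-wind
    torque drives n toward 1."""
    return nu * nu_ddot / nu_dot**2

# Self-consistency: a spin-down law nu_dot = -k * nu**n reproduces n,
# since nu_ddot = -k * n * nu**(n-1) * nu_dot.
```

Because n depends quadratically on nu_dot but linearly on nu_ddot, any magnetospheric fluctuation that contaminates the measured nu_ddot biases n strongly, which is the core of the argument about old pulsars and magnetars.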
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
1996-01-01
The phase of a frequency standard that uses periodic interrogation and control of a local oscillator (LO) is degraded by a long-term random-walk component induced by downconversion of LO noise into the loop passband. The Dick formula for the noise level of this degradation can be derived from explicit solutions of two LO control-loop models. A summary of the derivations is given here.
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi; Ghorbannia, Morteza
2014-07-01
The spatial truncation error (STE) is a significant systematic error in the integral inversion of satellite gradiometric and orbital data to gravity anomalies at sea level. In order to reduce the effect of STE, a larger area than the desired one is considered in the inversion process, but only the anomalies located in its central part are selected as the final results. The STE also influences the variance of the results, because the residual vector, which is contaminated with STE, is used for its estimation. The situation is even more complicated in variance component estimation because of its iterative nature. In this paper, we present a strategy to reduce the effect of STE on the a posteriori variance factor and the variance components for inversion of satellite orbital and gradiometric data to gravity anomalies at sea level. The idea is to define two windowing matrices for reducing this error from the estimated residuals and anomalies. Our simulation studies over Fennoscandia show that the differences between the 0.5°×0.5° gravity anomalies obtained from orbital data and an existing gravity model have a standard deviation (STD) and root mean squared error (RMSE) of 10.9 and 12.1 mGal, respectively, and those obtained from gradiometric data have 7.9 and 10.1 mGal. When they are combined using windowed variance components, the STD and RMSE become 6.1 and 8.4 mGal. Also, the mean value of the estimated RMSE after using the windowed variances is in agreement with the RMSE of the differences between the estimated anomalies and those obtained from the gravity model.
Lievens, Hans; Vernieuwe, Hilde; Alvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E C
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration.
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
The phase of a frequency standard that uses periodic interrogation and control of a local oscillator (LO) is degraded by a long-term random-walk component induced by downconversion of LO noise into the loop passband. The Dick formula for the noise level of this degradation is derived from an explicit solution of an LO control-loop model.
NASA Technical Reports Server (NTRS)
Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.
2006-01-01
The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.
NASA Astrophysics Data System (ADS)
Krueger, Tobias; Inman, Alex; Paling, Nick
2014-05-01
Catchment management, as driven by legislation such as the EU WFD or by grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally by auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new approach to managing water resources in a more decentralised and collaborative way. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and Faecal Coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012 an over-prediction of the annual average P concentration by the model was found at
NASA Technical Reports Server (NTRS)
Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.
2007-01-01
Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by the electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short-arc optical observations such as those typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. Nevertheless, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross
TOA/FOA geolocation error analysis.
Mason, John Jeffrey
2008-08-01
This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
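For independent Gaussian fixes, the "mathematically optimal" combination described above is consistent with standard inverse-covariance (minimum-variance) fusion. The sketch below shows that idea in 2-D; it is a generic illustration, not necessarily the paper's exact algorithm:

```python
import numpy as np

def fuse_fixes(positions, covariances):
    """Minimum-variance fusion of independent 2-D position fixes:
    C_fused = (sum C_i^-1)^-1,  x_fused = C_fused * sum C_i^-1 x_i."""
    info = np.zeros((2, 2))  # accumulated information matrix
    vec = np.zeros(2)        # accumulated information-weighted state
    for x, C in zip(positions, covariances):
        Cinv = np.linalg.inv(np.asarray(C, float))
        info += Cinv
        vec += Cinv @ np.asarray(x, float)
    C_fused = np.linalg.inv(info)
    return C_fused @ vec, C_fused
```

The fused covariance shrinks as fixes accumulate, which is what allows the percentile circles and ellipses of the paper to tighten as more observations of an emitter are combined.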
Kumar Sahu, Rabindra; Panda, Sidhartha; Biswal, Ashutosh; Chandra Sekhar, G T
2016-03-01
In this paper, a novel Tilt Integral Derivative controller with Filter (TIDF) is proposed for Load Frequency Control (LFC) of multi-area power systems. Initially, a two-area power system is considered and the parameters of the TIDF controller are optimized using the Differential Evolution (DE) algorithm with an Integral of Time multiplied Absolute Error (ITAE) criterion. The superiority of the proposed approach is demonstrated by comparing the results with some recently published heuristic approaches, such as Firefly Algorithm (FA), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) optimized PID controllers, for the same interconnected power system. Investigations reveal that the proposed TIDF controllers provide a better dynamic response than the PID controllers in terms of minimum undershoot and settling times of frequency as well as tie-line power deviations following a disturbance. The proposed approach is also extended to two widely used three-area test systems considering nonlinearities such as Generation Rate Constraint (GRC) and Governor Dead Band (GDB). To improve the performance of the system, a Thyristor Controlled Series Compensator (TCSC) is also considered, and the performance of the TIDF controller in the presence of the TCSC is investigated. It is observed that system performance improves with the inclusion of the TCSC. Finally, sensitivity analysis is carried out to test the robustness of the proposed controller by varying the system parameters, operating condition and load pattern. It is observed that the proposed controllers are robust and perform satisfactorily under variations in operating condition, system parameters and load pattern. PMID:26712682
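The ITAE criterion used for tuning weights late-persisting error more heavily than early transients, which is why it favors fast settling. It can be evaluated numerically from a simulated error trajectory; a minimal sketch:

```python
import numpy as np

def itae(t, error):
    """Integral of Time-multiplied Absolute Error, ITAE = int t*|e(t)| dt,
    evaluated by the trapezoidal rule on sampled data."""
    t = np.asarray(t, float)
    g = t * np.abs(np.asarray(error, float))
    return float(np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(t)))
```

In a DE-based tuning loop, each candidate controller parameter set is simulated against a load disturbance and this scalar is the fitness to minimize.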
Mean frequency derived via Hilbert-Huang transform with application to fatigue EMG signal analysis.
Xie, Hongbo; Wang, Zhizhong
2006-05-01
The mean frequency (MNF) of the surface electromyography (EMG) signal is an important index of local muscle fatigue. The purpose of this study is to improve MNF estimation. Three methods of estimating the MNF of non-stationary EMG are compared. A novel approach based on the Hilbert-Huang transform (HHT), which comprises empirical mode decomposition (EMD) and the Hilbert transform, is proposed to estimate the mean frequency of a non-stationary signal. Its performance is compared with two existing methods, i.e. autoregressive (AR) spectrum estimation and the wavelet transform method. Our method shows low variability and is robust to the length of the analysis window. The time-varying characteristic of the proposed approach also enables us to accommodate other non-stationary biomedical data analyses.
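For reference, the classical spectral definition of MNF (the baseline that spectrum-based estimators implement) is the first moment of the power spectrum. A minimal sketch of that definition, not an implementation of the EMD/HHT method itself:

```python
import numpy as np

def mean_frequency(signal, fs):
    """Spectral mean frequency: MNF = sum(f * P(f)) / sum(P(f)),
    computed from the one-sided FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return float(np.sum(freqs * power) / np.sum(power))
```

For a fatiguing muscle the EMG spectrum compresses toward low frequencies, so a downward trend of this quantity over successive analysis windows is the fatigue index the abstract refers to.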
NASA Astrophysics Data System (ADS)
Torregrosa, A.; Flint, L. E.; Flint, A. L.; Peters, J.; Combs, C.
2014-12-01
Coastal fog modifies the hydrodynamic and thermodynamic properties of California watersheds, with the greatest impact on ecosystem functioning during the arid summer months. Lowered maximum temperatures resulting from inland penetration of marine fog are probably adequate to capture fog effects on thermal land-surface characteristics; however, the hydrologic impact of lowered evapotranspiration rates due to shade, fog drip, increased relative humidity, and other factors associated with fog events is more difficult to gauge. Fog products, such as those derived from National Weather Service Geostationary Operational Environmental Satellite (GOES) imagery, provide high-frequency (up to 15 min) views of fog and low cloud cover and can potentially improve water balance models. Even slight improvements in water balance calculations can benefit urban water managers and agricultural irrigation. The high frequency of GOES output provides the opportunity to explore options for integrating fog frequency data into water balance models. This pilot project compares GOES-derived fog frequency intervals (6, 12 and 24 hour) to identify the most useful for water balance models and to develop model-relevant relationships between climatic and water balance variables. Seasonal diurnal thermal differences, plant ecophysiological processes, and phenology suggest that a day/night differentiation on a monthly basis may be adequate. To explore this hypothesis, we examined discharge data from stream gages and outputs from the USGS Basin Characterization Model for runoff, recharge, potential evapotranspiration, and actual evapotranspiration for the Russian River Watershed under low, medium, and high fog event conditions derived from hourly GOES imagery (1999-2009). We also differentiated fog events into daytime and nighttime versus a 24-hour compilation on a daily, monthly, and seasonal basis. Our data suggest that a daily time-step is required to adequately incorporate the hydrologic effect of
CORRELATED ERRORS IN EARTH POINTING MISSIONS
NASA Technical Reports Server (NTRS)
Bilanow, Steve; Patt, Frederick S.
2005-01-01
Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor
Verginadis, Ioannis I; Simos, Yannis V; Velalopoulou, Anastasia P; Vadalouca, Athina N; Kalfakakou, Vicky P; Karkabounas, Spyridon Ch; Evangelou, Angelos M
2012-12-01
Exposure to various types of electromagnetic fields (EMFs) affects pain perception (nociception) and pain inhibition (analgesia). A previous study of ours has shown that exposure to the resonant spectra derived from the NMR spectra of biologically active substances may induce in live targets the same effects as the substances themselves. The purpose of this study is to investigate the potential analgesic effect of the resonant EMFs derived from the NMR spectrum of morphine. Twenty-five Wistar rats were divided into five groups: control group; intraperitoneal administration of morphine 10 mg/kg body wt; exposure of rats to the resonant EMFs of morphine; exposure of rats to randomly selected non-resonant EMFs; and intraperitoneal administration of naloxone with simultaneous exposure of rats to the resonant EMFs of morphine. Tail-flick and hot-plate tests were performed to estimate the latency time. Results showed that exposure to the resonant EMFs of the NMR spectrum of morphine induced a significant increase in latency time at several time points (p < 0.05), while exposure to the non-resonant random EMFs exerted no effects. Additionally, naloxone administration inhibited the analgesic effects of the NMR spectrum of morphine. Our results indicate that exposure of rats to the resonant EMFs derived from the NMR spectrum of morphine may exert analgesic effects on animals similar to those of morphine itself.
A genome signature derived from the interplay of word frequencies and symbol correlations
NASA Astrophysics Data System (ADS)
Möller, Simon; Hameister, Heike; Hütt, Marc-Thorsten
2014-11-01
Genome signatures are statistical properties of DNA sequences that provide information on the underlying species. It is not understood how such species-discriminating statistical properties arise from processes of genome evolution and from functional properties of the DNA. Investigating the interplay of different genome signatures can contribute to this understanding. Here we analyze the statistical dependences of two such genome signatures: word frequencies and symbol correlations at short and intermediate distances. We formulate a statistical model of word frequencies in DNA sequences based on the observed symbol correlations and show that deviations of word counts from this correlation-based null model serve as a new genome signature. This signature (i) performs better in sorting DNA sequence segments according to their species of origin and (ii) reveals unexpected species differences in the composition of microsatellites, an important class of repetitive DNA. While the first observation addresses a typical task in metagenomics projects and is therefore an important benchmark for a genome signature, the latter suggests strong species differences in the biological mechanisms of genome evolution. On a more general level, our results highlight that the choice of null model (here: word abundances computed via symbol correlations rather than from shorter word counts) substantially affects the interpretation of such statistical signals.
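A correlation-based null model of word frequencies of the kind described can be sketched with a first-order Markov chain estimated from symbol-pair frequencies; a minimal illustration (the paper's actual model may differ in order and estimation details, and `markov_null_word_prob` is an assumed helper name):

```python
from collections import Counter

def markov_null_word_prob(seq, word):
    """Expected probability of `word` under a first-order Markov model
    whose transition probabilities are estimated from the symbol-pair
    (dinucleotide) frequencies observed in `seq`."""
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    singles = Counter(seq[:-1])
    p = singles[word[0]] / sum(singles.values())  # start probability
    for a, b in zip(word, word[1:]):
        p *= pairs[a + b] / singles[a]            # transition P(b|a)
    return p

seq = "ACGTACGTTACGACGT"
# Deviation of the observed word count from the correlation-based
# expectation: this residual is the proposed signature.
obs = sum(seq[i:i + 3] == "ACG" for i in range(len(seq) - 2))
exp = markov_null_word_prob(seq, "ACG") * (len(seq) - 2)
deviation = obs - exp
```

In this toy sequence the trinucleotide "ACG" occurs slightly more often than the dinucleotide correlations alone predict, giving a small positive deviation.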
NASA Astrophysics Data System (ADS)
Igoshev, Andrei; Verbunt, Frank; Cator, Eric
2016-06-01
We use a Bayesian approach to derive the distance probability distribution for one object from its parallax with measurement uncertainty for two spatial distribution priors, a homogeneous spherical distribution and a galactocentric distribution - applicable for radio pulsars - observed from Earth. We investigate the dependence on measurement uncertainty, and show that a parallax measurement can underestimate or overestimate the actual distance, depending on the spatial distribution prior. We derive the probability distributions for distance and luminosity combined - and for each separately when a flux with measurement error for the object is also available - and demonstrate the necessity of and dependence on the luminosity function prior. We apply this to estimate the distance and the radio and gamma-ray luminosities of PSR J0218+4232. The use of realistic priors improves the quality of the estimates for distance and luminosity compared to those based on measurement only. Use of the wrong prior, for example a homogeneous spatial distribution without upper bound, may lead to very incorrect results.
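A minimal sketch of the Bayesian setup, assuming a Gaussian parallax likelihood and a truncated homogeneous (volume, d²) prior; the grid normalization, function name, and parameter values are illustrative, not the paper's implementation:

```python
import numpy as np

def distance_posterior(parallax, sigma, d_max=10.0, n=4000):
    """Posterior over distance d given a measured parallax (consistent
    units, e.g. parallax ~ 1/d), assuming a Gaussian measurement
    likelihood and a homogeneous spherical prior p(d) ∝ d^2 truncated
    at d_max -- a simplified stand-in for the priors discussed above."""
    d = np.linspace(1e-3, d_max, n)
    likelihood = np.exp(-0.5 * ((parallax - 1.0 / d) / sigma) ** 2)
    posterior = likelihood * d**2        # homogeneous volume prior
    posterior /= posterior.sum()         # normalize on the grid
    return d, posterior

# With a 10% parallax error, the volume prior shifts the most probable
# distance beyond the naive estimate 1/parallax -- the bias the
# abstract describes.
d, post = distance_posterior(parallax=0.5, sigma=0.05)
mode = d[np.argmax(post)]
```

Note that an unbounded d² prior with a large fractional parallax error makes the posterior diverge toward large distances, which is exactly the failure mode the abstract warns about.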
NASA Astrophysics Data System (ADS)
Kapinska, Anna D.; Uttley, P.; Kaiser, C. R.
2010-03-01
FRII radio galaxies are relatively simple systems which can be used to determine the influence of jets on their environments. Even simple analytical models of FRII evolution can link the observed lobe luminosities and sizes to fundamental properties such as jet power and density of the ambient medium; these are crucial for understanding AGN feedback. However, due to strong flux selection effects, interpreting FRII samples is not straightforward. To overcome this problem we construct Monte Carlo simulations to create artificial samples of radio galaxies. We explore jet power and external density distributions by using them as the simulation input parameters. Further, we compute radio luminosity functions (RLF) and fit them to the observed low-frequency radio data that cover redshifts up to z ≈ 2, which gives us the most plausible distributions of FRIIs' fundamental properties. Moreover, based on these RLFs, we obtain the kinetic luminosity functions of these powerful sources.
NASA Technical Reports Server (NTRS)
Fisher, Lewis R
1958-01-01
Three wing models were oscillated in yaw about their vertical axes to determine the effects of systematic variations of frequency and amplitude of oscillation on the in-phase and out-of-phase combination lateral stability derivatives resulting from this motion. The tests were made at low speeds for a 60 degree delta wing, a 45 degree swept wing, and an unswept wing; the swept and unswept wings had aspect ratios of 4. The results indicate that large changes in the magnitude of the stability derivatives due to the variation of frequency occur at high angles of attack, particularly for the delta wing. The greatest variations of the derivatives with frequency take place for the lowest frequencies of oscillation; at the higher frequencies, the effects of frequency are smaller and the derivatives become more linear with angle of attack. Effects of amplitude of oscillation on the stability derivatives for delta wings were evident for certain high angles of attack and for the lowest frequencies of oscillation. As the frequency became high, the amplitude effects tended to disappear.
NASA Astrophysics Data System (ADS)
Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.
2010-01-01
A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the satellite-borne Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and cloud profiling radar. The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a cloud overlap matrix when the correlation length of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) and CloudSat data (July 2006) support these assumptions, although the correlation length sometimes increases with separation distance when the cloud top height is large. The data also show that the correlation length depends on cloud top height, and the maximum occurs when the cloud top height is 8 to 10 km. The cloud correlation length is equivalent to the decorrelation distance introduced by Hogan and Illingworth (2000) when cloud fractions of both layers in a two-cloud layer system are the same. The simple relationships derived in this study can be used to estimate the top-of-atmosphere irradiance difference caused by cloud fraction, uppermost cloud top, and cloud thickness vertical profile differences.
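The exponential-decay overlap assumption described here (cf. the Hogan and Illingworth decorrelation distance) can be pictured as a blend of maximum and random overlap; a minimal sketch, with `combined_cloud_fraction` as an assumed helper name and illustrative values:

```python
import math

def combined_cloud_fraction(c1, c2, dz, L):
    """Combined cloud cover of two layers with fractions c1, c2
    separated by distance dz, blending maximum and random overlap
    with weight alpha = exp(-dz / L), where L is the correlation
    (decorrelation) length."""
    c_max = max(c1, c2)            # maximum overlap limit
    c_rand = c1 + c2 - c1 * c2     # random overlap limit
    alpha = math.exp(-dz / L)
    return alpha * c_max + (1 - alpha) * c_rand

# Nearby layers overlap nearly maximally; widely separated layers
# approach random overlap, as assumed above.
near = combined_cloud_fraction(0.4, 0.5, dz=0.1, L=2.0)
far = combined_cloud_fraction(0.4, 0.5, dz=20.0, L=2.0)
```

Here `near` stays close to the maximum-overlap value 0.5, while `far` approaches the random-overlap value 0.7.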
Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif
2012-04-01
Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.
NASA Technical Reports Server (NTRS)
Palumbo, Dan
2008-01-01
The lifetimes of coherent structures are derived from data correlated over a 3-sensor array sampling streamwise sidewall pressure at high Reynolds number (>10^8). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors' cross spectra and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency-averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.
Carotid ultrasound segmentation using radio-frequency derived phase information and gabor filters.
Azzopardi, Carl; Camilleri, Kenneth P; Hicks, Yulia A
2015-01-01
Ultrasound image segmentation is a field which has garnered much interest over the years. This is partially due to the complexity of the problem, arising from the lack of contrast between different tissue types which is quite typical of ultrasound images. Recently, segmentation techniques which treat RF signal data have also become popular, particularly with the increasing availability of such data from open-architecture machines. It is believed that RF data provides a rich source of information whose integrity remains intact, as opposed to the loss which occurs through the signal processing chain leading to Brightness Mode images. Furthermore, phase information contained within RF data has not been studied in much detail, as the information here appears to be mostly random. In this work, however, we show that phase information derived from RF data does exhibit structure, characterized by texture patterns. Texture segmentation of this data permits the extraction of rough, but well localized, carotid boundaries. We provide some initial quantitative results, which report the performance of the proposed technique. PMID:26737742
... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...
Surface Roughness of the Moon Derived from Multi-frequency Radar Data
NASA Astrophysics Data System (ADS)
Fa, W.
2011-12-01
Surface roughness of the Moon provides important information concerning both significant questions about lunar surface processes and engineering constraints for human outposts and rover trafficability. Impact-related phenomena change the morphology and roughness of the lunar surface, and therefore surface roughness provides clues to the formation and modification mechanisms of impact craters. Since the Apollo era, lunar surface roughness has been studied using different approaches, such as direct estimation from lunar surface digital topographic relief, and indirect analysis of Earth-based radar echo strengths. Submillimeter-scale roughness at Apollo landing sites has been studied by computer stereophotogrammetry analysis of Apollo Lunar Surface Closeup Camera (ALSCC) pictures, whereas roughness at meter to kilometer scales has been studied using laser altimeter data from recent missions. Though these studies have shown that lunar surface roughness is scale dependent and can be described by fractal statistics, roughness at centimeter scales has not been studied yet. In this study, lunar surface roughness at centimeter scales is investigated using Earth-based 70 cm Arecibo radar data and miniature synthetic aperture radar (Mini-SAR) data at S- and X-band (with wavelengths of 12.6 cm and 4.12 cm). Both observations and theoretical modeling show that radar echo strengths are mostly dominated by scattering from the surface and shallow buried rocks. Given the different penetration depths of radar waves at these frequencies (< 30 m for 70 cm wavelength, < 3 m at S-band, and < 1 m at X-band), radar echo strengths at S- and X-band will yield surface roughness directly, whereas radar echoes at 70 cm will give an upper limit of lunar surface roughness. The integral equation method is used to model radar scattering from the rough lunar surface, in which the dielectric constant of the regolith and the surface roughness are the two dominant factors. The complex dielectric constant of regolith is first estimated
Jones, Timothy D; Chappell, Nick A; Tych, Wlodek
2014-11-18
The first dynamic model of dissolved organic carbon (DOC) export in streams derived directly from high frequency (subhourly) observations sampled at a regular interval through contiguous storms is presented. The optimal model, identified using the recently developed RIVC algorithm, captured the rapid dynamics of DOC load from 15 min monitored rainfall with high simulation efficiencies and constrained uncertainty with a second-order (two-pathway) structure. Most of the DOC export in the four headwater basins studied was associated with the faster hydrometric pathway (also modeled in parallel), and was soon exhausted in the slower pathway. A delay in the DOC mobilization became apparent as the ambient temperatures increased. These features of the component pathways were quantified in the dynamic response characteristics (DRCs) identified by RIVC. The model and associated DRCs are intended as a foundation for a better understanding of storm-related DOC dynamics and predictability, given the increasing availability of subhourly DOC concentration data.
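The "second-order (two-pathway) structure" identified here can be pictured as two parallel first-order stores; a minimal discrete-time sketch with illustrative parameters (not the identified DRCs, and not the RIVC algorithm itself):

```python
def two_pathway_export(rain, a_fast=0.5, a_slow=0.95,
                       b_fast=0.4, b_slow=0.02):
    """Minimal sketch of a second-order (two parallel first-order
    pathways) rainfall -> DOC-load model. Each pathway is a
    discrete-time first-order store x_k = a*x_{k-1} + b*u_k; the
    observed load is their sum. Parameter values are illustrative."""
    x_fast = x_slow = 0.0
    load = []
    for u in rain:
        x_fast = a_fast * x_fast + b_fast * u  # quick storm response
        x_slow = a_slow * x_slow + b_slow * u  # slow, persistent store
        load.append(x_fast + x_slow)
    return load

# Two wet time steps followed by recession: the fast pathway dominates
# the storm peak and decays quickly once rainfall stops.
load = two_pathway_export([0, 1, 1, 0, 0, 0, 0, 0])
```

The fast pathway's larger gain and quicker decay mirror the finding that most DOC export travels the faster hydrometric pathway.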
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2008-03-01
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
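For reference, the theorem underlying these error formulas can be stated in its standard form (the paper's frequency-domain derivation is not reproduced here):

```latex
% Peano Kernel Theorem: if a linear functional L annihilates all
% polynomials of degree < n, then for f with n continuous derivatives
\[
  L(f) \;=\; \int_a^b K_n(t)\, f^{(n)}(t)\, \mathrm{d}t,
  \qquad
  K_n(t) \;=\; \frac{1}{(n-1)!}\, L_x\!\left[ (x-t)_+^{\,n-1} \right],
\]
% which immediately bounds the error introduced by the filter:
\[
  |L(f)| \;\le\; \bigl\| f^{(n)} \bigr\|_\infty \int_a^b |K_n(t)|\, \mathrm{d}t .
\]
```

Taking L to be the difference between a signal and its filtered version yields the kind of simple error estimate the abstract describes.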
Motion error analysis of the 3D coordinates of airborne lidar for typical terrains
NASA Astrophysics Data System (ADS)
Peng, Tao; Lan, Tian; Ni, Guoqiang
2013-07-01
A motion error model of 3D coordinates is established, and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence of laser point cloud spacing on the positioning errors is small. In the model, the positioning errors obey a simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the plane error, and within the plane the error in the scanning direction is less than the error in the flight direction. The conclusion is verified through analysis of flight test data.
Madani, Nima; Kimball, John S; Nazeri, Mona; Kumar, Lalit; Affleck, David L R
2016-01-01
Species distribution modeling has been widely used in studying habitat relationships and for conservation purposes. However, neglecting ecological knowledge about species, e.g. their seasonal movements, and ignoring the proper environmental factors that can explain key elements for species survival (shelter, food and water) increase model uncertainty. This study exemplifies how these ecological gaps in species distribution modeling can be addressed by modeling the distribution of the emu (Dromaius novaehollandiae) in Australia. Emus cover a large area during the austral winter. However, their habitat shrinks during the summer months. We show evidence of emu summer habitat shrinkage due to higher fire frequency, and low water and food availability in northern regions. Our findings indicate that emus prefer areas with higher vegetation productivity and low fire recurrence, while their distribution is linked to an optimal intermediate (~0.12 m3 m(-3)) soil moisture range. We propose that the application of three geospatial data products derived from satellite remote sensing, namely fire frequency, ecosystem productivity, and soil water content, provides an effective representation of emu general habitat requirements, and substantially improves species distribution modeling and representation of the species' ecological habitat niche across Australia.
NASA Technical Reports Server (NTRS)
Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.
2009-01-01
A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and Cloud Profiling Radar (CPR). The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a set of equations when the correlation distance of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of CALIPSO and CloudSat data support these assumptions. However, the correlation distance sometimes becomes large, which might be an indication of precipitation. The cloud correlation distance is equivalent to the de-correlation distance introduced by Hogan and Illingworth [2000] when cloud fractions of both layers in a two-cloud layer system are the same.
Razavi, Shahnaz; Salimi, Marzieh; Shahbazi-Gahrouei, Daryoush; Karbasi, Saeed; Kermani, Saeed
2014-01-01
Background: Extremely low-frequency electromagnetic fields (ELF-EMF) can affect biological systems and alter some cell functions, such as the proliferation rate. Therefore, we aimed to evaluate the effect of ELF-EMF on the growth of human adipose-derived stem cells (hADSCs). Materials and Methods: The ELF-EMF was generated by a system including an autotransformer, multimeter, solenoid coils, and a teslameter with its probe. We assessed the effect of ELF-EMF at intensities of 0.5 and 1 mT and power-line frequency of 50 Hz on the survival of hADSCs, for 20 and 40 min/day for 7 days, by MTT assay. One-way analysis of variance was used to assess the significant differences between groups. Results: ELF-EMF had its maximum effect on the proliferation of hADSCs at an intensity of 1 mT for 20 min/day. The survival and proliferation effect (PE) in all exposure groups were significantly higher than in the sham groups (P < 0.05), except in the 1 mT, 40 min/day group. Conclusion: Our results show that 0.5-1 mT ELF-EMF can enhance the survival and PE of hADSCs, depending on the duration of exposure. PMID:24592372
NASA Astrophysics Data System (ADS)
Morioka, T.; Kawanishi, S.; Saruwatari, M.
1994-05-01
Error-free, tunable optical frequency conversion of a transform-limited 4.0 ps optical pulse signal is demonstrated at 6.3 Gbit/s using four-wave mixing in a polarization-maintaining optical fibre. The process generates 4.0-4.6 ps pulses over a 25 nm range with time-bandwidth products of 0.31-0.43 and conversion power penalties of less than 1.5 dB.
Kobayashi, Jyumpei; Tanabiki, Misaki; Doi, Shohei; Kondo, Akihiko; Ohshiro, Takashi
2015-01-01
The plasmid pGKE75-catA138T, which comprises pUC18 and the catA138T gene encoding thermostable chloramphenicol acetyltransferase with an A138T amino acid replacement (CATA138T), serves as an Escherichia coli-Geobacillus kaustophilus shuttle plasmid that confers moderate chloramphenicol resistance on G. kaustophilus HTA426. The present study examined the thermoadaptation-directed mutagenesis of pGKE75-catA138T in an error-prone thermophile, generating the mutant plasmid pGKE75αβ-catA138T responsible for substantial chloramphenicol resistance at 65°C. pGKE75αβ-catA138T contained no mutation in the catA138T gene but had two mutations in the pUC replicon, even though the replicon has no apparent role in G. kaustophilus. Biochemical characterization suggested that the efficient chloramphenicol resistance conferred by pGKE75αβ-catA138T is attributable to increases in intracellular CATA138T and acetyl-coenzyme A following a decrease in incomplete forms of pGKE75αβ-catA138T. The decrease in incomplete plasmids may be due to optimization of plasmid replication by RNA species transcribed from the mutant pUC replicon, which were actually produced in G. kaustophilus. It is noteworthy that G. kaustophilus was transformed with pGKE75αβ-catA138T using chloramphenicol selection at 60°C. In addition, a pUC18 derivative with the two mutations propagated in E. coli at a high copy number, independent of the culture temperature, and with high plasmid stability. Since these properties have not been observed in known plasmids, the outcomes extend the genetic toolboxes for G. kaustophilus and E. coli. PMID:26319877
ERIC Educational Resources Information Center
Matthews, Danielle E.; Theakston, Anna L.
2006-01-01
How do English-speaking children inflect nouns for plurality and verbs for the past tense? We assess theoretical answers to this question by considering errors of omission, which occur when children produce a stem in place of its inflected counterpart (e.g., saying "dress" to refer to 5 dresses). A total of 307 children (aged 3;11-9;9)…
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
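The effects described can be seen directly in double-precision arithmetic; a small illustration of representation error, machine epsilon, and the loss of significant digits through cancellation:

```python
import math
import sys

# Machine numbers: 0.1 and 0.2 have no exact binary representation,
# so even a single addition carries a rounding error.
s = 0.1 + 0.2
rounding_error = abs(s - 0.3)   # nonzero, of order machine epsilon

# Spacing of machine numbers near 1.0 (machine epsilon for doubles).
eps = sys.float_info.epsilon

# Cancellation: subtracting nearly equal numbers loses significant
# digits; an algebraically equivalent form avoids the subtraction.
x = 1e8
naive = math.sqrt(x + 1) - math.sqrt(x)
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))
```

The two expressions for the square-root difference are equal in exact arithmetic, but only the rationalized form retains full precision as x grows.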
Financial errors in dementia: Testing a neuroeconomic conceptual framework
Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.
2013-01-01
Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884
Abdollahi, M R; Moieni, A; Mousavi, A; Salmanian, A H
2011-02-01
Transgenic doubled haploid rapeseed (Brassica napus L. cvs. Global and PF(704)) plants were obtained from microspore-derived embryo (MDE) hypocotyls using microprojectile bombardment. The binary vector pCAMBIA3301, containing the gus and bar genes under the control of the CaMV 35S promoter, was used for the bombardment experiments. Transformed plantlets were selected and continuously maintained on selective medium containing 10 mg l(-1) phosphinothricin (PPT), and transgenic plants were obtained by selecting transformed secondary embryos. The presence, copy numbers and expression of the transgenes were confirmed by PCR, Southern blot, RT-PCR and histochemical GUS analyses. In the progeny test, three out of four primary transformants for the bar gene produced homozygous lines. The ploidy level of the transformed plants was confirmed by flow cytometry analysis before colchicine treatment. All of the regenerated plants were haploid except one, which was a spontaneous diploid. Transgenic doubled haploid rapeseed plants were produced at a considerably high frequency (about 15.55% for the bar gene and 11.11% for the gus gene) after colchicine treatment of the haploid plantlets. These results show a remarkable increase in the production of transgenic doubled haploid rapeseed plants compared to previous studies.
NASA Astrophysics Data System (ADS)
Xie, Yi; Zhang, Shuang-Nan; Liao, Jin-Yuan
2015-07-01
We model the evolution of the spin frequency's second derivative ν̈ and the braking index n of radio pulsars with simulations within the phenomenological model of their surface magnetic field evolution, which contains a long-term power-law decay modulated by short-term oscillations. For the pulsar PSR B0329+54, a model with three oscillation components can reproduce its ν̈ variation. We show that the “averaged” n is different from the instantaneous n, and its oscillation magnitude decreases abruptly as the time span increases, due to the “averaging” effect. The simulated timing residuals agree with the main features of the reported data. Our model predicts that the averaged ν̈ of PSR B0329+54 will start to decrease rapidly with newer data beyond those used in Hobbs et al. We further perform Monte Carlo simulations for the distribution of the reported data in |ν̈| and |n| versus characteristic age τC diagrams. It is found that the magnetic field oscillation model with decay index α = 0 can reproduce the distributions quite well. Compared with magnetic field decay due to ambipolar diffusion (α = 0.5) and the Hall cascade (α = 1.0), the model with no long-term decay (α = 0) is clearly preferred for old pulsars by the p-values of the two-dimensional Kolmogorov-Smirnov test. Supported by the National Natural Science Foundation of China.
NASA Astrophysics Data System (ADS)
Shi, Y. C.; Parker, D. L.; Dillon, C. R.
2016-08-01
This study evaluates the sensitivity of two magnetic resonance-guided focused ultrasound (MRgFUS) thermal property estimation methods to errors in required inputs and different data inclusion criteria. Using ex vivo pork muscle MRgFUS data, sensitivities to required inputs are determined by introducing errors to ultrasound beam locations (r error = -2 to 2 mm) and time vectors (t error = -2.2 to 2.2 s). In addition, the sensitivity to user-defined data inclusion criteria is evaluated by choosing different spatial (r fit = 1-10 mm) and temporal (t fit = 8.8-61.6 s) regions for fitting. Beam location errors resulted in up to 50% change in property estimates with local minima occurring at r error = 0 and estimate errors less than 10% when r error < 0.5 mm. Errors in the time vector led to property estimate errors up to 40% and without local minimum, indicating the need to trigger ultrasound sonications with the MR image acquisition. Regarding the selection of data inclusion criteria, property estimates reached stable values (less than 5% change) when r fit > 2.5 × FWHM, and were most accurate with the least variability for longer t fit. Guidelines provided by this study highlight the importance of identifying required inputs and choosing appropriate data inclusion criteria for robust and accurate thermal property estimation. Applying these guidelines will prevent the introduction of biases and avoidable errors when utilizing these property estimation techniques for MRgFUS thermal modeling applications.
Thermodynamics of Error Correction
NASA Astrophysics Data System (ADS)
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Ansell, Juliet; Butts, Christine A; Paturi, Gunaranjan; Eady, Sarah L; Wallace, Alison J; Hedderley, Duncan; Gearry, Richard B
2015-05-01
The worldwide growth in the incidence of gastrointestinal disorders has created an immediate need to identify safe and effective interventions. In this randomized, double-blind, placebo-controlled study, we examined the effects of Actazin and Gold, kiwifruit-derived nutritional ingredients, on stool frequency, stool form, and gastrointestinal comfort in healthy and functionally constipated (Rome III criteria for C3 functional constipation) individuals. Using a crossover design, all participants consumed all 4 dietary interventions (Placebo, Actazin low dose [Actazin-L] [600 mg/day], Actazin high dose [Actazin-H] [2400 mg/day], and Gold [2400 mg/day]). Each intervention was taken for 28 days followed by a 14-day washout period between interventions. Participants recorded their daily bowel movements and well-being parameters in daily questionnaires. In the healthy cohort (n = 19), the Actazin-H (P = .014) and Gold (P = .009) interventions significantly increased the mean daily bowel movements compared with the washout. No significant differences were observed in stool form as determined by use of the Bristol stool scale. In a subgroup analysis of responders in the healthy cohort, Actazin-L (P = .005), Actazin-H (P < .001), and Gold (P = .001) consumption significantly increased the number of daily bowel movements by greater than 1 bowel movement per week. In the functionally constipated cohort (n = 9), there were no significant differences between interventions for bowel movements and the Bristol stool scale values or in the subsequent subgroup analysis of responders. This study demonstrated that Actazin and Gold produced clinically meaningful increases in bowel movements in healthy individuals.
Marycz, Krzysztof; Lewandowski, Daniel; Tomaszewski, Krzysztof A.; Henry, Brandon M.; Golec, Edward B.; Marędziak, Monika
2016-01-01
The aim of this study was to evaluate whether low-frequency, low-magnitude vibrations (LFLM) could enhance the chondrogenic differentiation potential of human adipose-derived mesenchymal stem cells (hASCs) while simultaneously inhibiting their adipogenic properties for biomedical purposes. We developed a prototype device that induces low-magnitude (0.3 g) low-frequency vibrations at the following frequencies: 25, 35 and 45 Hz. We then used hASCs to investigate their cellular response to these mechanical signals, and also evaluated changes in hASC morphology and proliferative activity in response to each frequency. Induction of chondrogenesis in hASCs under the influence of a 35 Hz signal led to the most effective and stable cartilaginous tissue formation, with the highest secretion of Bone Morphogenetic Protein 2 (BMP-2) and Collagen type II and a low concentration of Collagen type I. These results correlated well with the corresponding gene expression levels. Simultaneously, we observed significant up-regulation of α3, α4, β1 and β3 integrins, as well as Sox-9, in chondroblast progenitor cells treated with 35 Hz vibrations. Interestingly, we noticed that application of the 35 Hz frequency significantly inhibited adipogenesis of hASCs. The obtained results suggest that application of LFLM vibrations together with stem cell therapy might be a promising tool in cartilage regeneration. PMID:26966645
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to
Automatic Locking of Laser Frequency to an Absorption Peak
NASA Technical Reports Server (NTRS)
Koch, Grady J.
2006-01-01
An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that
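The two-stage scheme described above, a coarse frequency sweep followed by a PID loop that drives the derivative-based error signal to zero, can be sketched as follows. The Lorentzian line shape, the frequencies, and the loop gains are illustrative assumptions, not parameters of the actual instrument:

```python
import numpy as np

def absorption(f, f0=1000.0, width=5.0):
    # Illustrative Lorentzian absorption peak (arbitrary units).
    return 1.0 / (1.0 + ((f - f0) / width) ** 2)

def error_signal(f, df=0.01):
    # Derivative of the absorption peak w.r.t. frequency:
    # zero at the top of the peak, approximately linear nearby.
    return (absorption(f + df) - absorption(f - df)) / (2 * df)

# Stage 1: modulate/sweep the frequency through the band and land
# near the peak, inside the locking range.
sweep = np.linspace(900.0, 1100.0, 2001)
f = sweep[np.argmax(absorption(sweep))]

# Stage 2: PID loop holds the derivative (error signal) at zero.
kp, ki, kd = 10.0, 0.5, 1.0
integral, prev_err = 0.0, error_signal(f)
for _ in range(200):
    err = error_signal(f)
    integral += err
    f += kp * err + ki * integral + kd * (err - prev_err)
    prev_err = err

print(round(f, 2))  # settles at the line center, 1000.0
```

The proportional term alone would already converge here because the plant is a pure integrator; the integral and derivative terms stand in for the full control scheme the text describes.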
NASA Astrophysics Data System (ADS)
Wentz, Frank J.; Meissner, Thomas
2016-05-01
The Liebe and Rosenkranz atmospheric absorption models for dry air and water vapor below 100 GHz are refined based on an analysis of antenna temperature (TA) measurements taken by the Global Precipitation Measurement Microwave Imager (GMI) in the frequency range 10.7 to 89.0 GHz. The GMI TA measurements are compared to the TA predicted by a radiative transfer model (RTM), which incorporates both the atmospheric absorption model and a model for the emission and reflection from a rough-ocean surface. The inputs for the RTM are the geophysical retrievals of wind speed, columnar water vapor, and columnar cloud liquid water obtained from the satellite radiometer WindSat. The Liebe and Rosenkranz absorption models are adjusted to achieve consistency with the RTM. The vapor continuum is decreased by 3% to 10%, depending on vapor. To accomplish this, the foreign-broadening part is increased by 10%, and the self-broadening part is decreased by about 40% at the higher frequencies. In addition, the strength of the water vapor line is increased by 1%, and the shape of the line at low frequencies is modified. The dry air absorption is increased, with the increase being a maximum of 20% at the 89 GHz, the highest frequency considered here. The nonresonant oxygen absorption is increased by about 6%. In addition to the RTM comparisons, our results are supported by a comparison between columnar water vapor retrievals from 12 satellite microwave radiometers and GPS-retrieved water vapor values.
Error monitoring in musicians.
Maidhof, Clemens
2013-01-01
To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255
NASA Astrophysics Data System (ADS)
Mobley, Joel; Waters, Kendall R.; Miller, James G.
2005-07-01
Kramers-Kronig (KK) analyses of experimental data are complicated by the extrapolation problem, that is, how the unexamined spectral bands impact KK calculations. This work demonstrates the causal linkages in resonant-type data provided by acoustic KK relations for the group velocity (cg) and the derivative of the attenuation coefficient (α') (components of the derivative of the acoustic complex wave number) without extrapolation or unmeasured parameters. These relations provide stricter tests of causal consistency relative to previously established KK relations for the phase velocity (cp) and attenuation coefficient (α) (components of the undifferentiated acoustic wave number) due to their shape invariance with respect to subtraction constants. For both the group velocity and attenuation derivative, three forms of the relations are derived. These relations are equivalent for bandwidths covering the entire infinite spectrum, but differ when restricted to bandlimited spectra. Using experimental data from suspensions of elastic spheres in saline, the accuracy of finite-bandwidth KK predictions for cg and α' is demonstrated. Of the multiple methods, the most accurate were found to be those whose integrals were expressed only in terms of the phase velocity and attenuation coefficient themselves, requiring no differentiated quantities.
Standard Errors of the Kernel Equating Methods under the Common-Item Design.
ERIC Educational Resources Information Center
Liou, Michelle; And Others
This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…
Phase Errors and the Capture Effect
Blair, J., and Machorro, E.
2011-11-01
This slide show presents an analysis of spectrograms and of the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect, because the largest signal captures the phase and frequency determination.
Johannesson, Bjarki; Sagi, Ido; Gore, Athurva; Paull, Daniel; Yamada, Mitsutoshi; Golan-Lev, Tamar; Li, Zhe; LeDuc, Charles; Shen, Yufeng; Stern, Samantha; Xu, Nanfang; Ma, Hong; Kang, Eunju; Mitalipov, Shoukhrat; Sauer, Mark V; Zhang, Kun; Benvenisty, Nissim; Egli, Dieter
2014-11-01
The recent finding that reprogrammed human pluripotent stem cells can be derived by nuclear transfer into human oocytes as well as by induced expression of defined factors has revitalized the debate on whether one approach might be advantageous over the other. Here we compare the genetic and epigenetic integrity of human nuclear-transfer embryonic stem cell (NT-ESC) lines and isogenic induced pluripotent stem cell (iPSC) lines, derived from the same somatic cell cultures of fetal, neonatal, and adult origin. The two cell types showed similar genome-wide gene expression and DNA methylation profiles. Importantly, NT-ESCs and iPSCs had comparable numbers of de novo coding mutations, but significantly more than parthenogenetic ESCs. Like iPSCs, NT-ESCs displayed clone- and gene-specific aberrations in DNA methylation and allele-specific expression of imprinted genes. The occurrence of these genetic and epigenetic defects in both NT-ESCs and iPSCs suggests that they are inherent to reprogramming, regardless of derivation approach.
Automatic oscillator frequency control system
NASA Technical Reports Server (NTRS)
Smith, S. F. (Inventor)
1985-01-01
A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
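The check-and-correct cycle described above can be sketched numerically: clock an accumulator with the oscillator under test for one gate interval, compare the modulo-sum remainder against the stored zero-error constant, and map the signed deviation to a correction word. The modulus, gate interval, and drift below are hypothetical round numbers, not values from the patented system:

```python
def measure_remainder(osc_freq_hz, gate_s, modulus):
    # Clock an accumulator with the oscillator for one gate interval,
    # keeping only the modulo-sum (as the divider stage would).
    cycles = int(round(osc_freq_hz * gate_s))
    return cycles % modulus

def correction_word(remainder, modulus, gain=1):
    # Signed deviation from the stored zero-error constant (0),
    # interpreted modulo the divisor; only drifts within
    # +/- modulus/2 of a nominal count are resolvable this way.
    signed = remainder if remainder <= modulus // 2 else remainder - modulus
    return -gain * signed

# Hypothetical 10 MHz oscillator that has drifted 40 Hz high;
# the nominal count over one 1 s gate is an exact multiple of 1000.
nominal, drift = 10_000_000, 40
modulus, gate = 1000, 1.0
osc = nominal + drift
for _ in range(3):
    rem = measure_remainder(osc, gate, modulus)
    osc += correction_word(rem, modulus)
print(osc)  # back at 10000000
```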
NASA Technical Reports Server (NTRS)
Couvillon, L. A., Jr. (Inventor)
1968-01-01
A digital communication system for automatically synchronizing signals for data detection is described. The system consists of biphase-modulating a subcarrier frequency with the binary data and transmitting a carrier phase-modulated by this signal to a receiver, where coherent phase detection is employed to recover the subcarrier. Data detection is achieved by providing, in the receiver, a demodulated reference which is in synchronism with the unmodulated subcarrier in the transmitting system. The output of the detector is passed through a matched filter, where the signal is integrated over a bit period. As a result, random noise components are averaged out, so that the probability of detecting the correct transmitted data is maximized.
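The matched-filter step, integrating the demodulated signal over each bit period so that random noise averages out, can be illustrated with a baseband simulation. The subcarrier and carrier stages are omitted, and the noise level and bit count are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 64)
samples_per_bit = 200

# Biphase-modulated baseband after coherent subcarrier demodulation:
# +1 for a 1-bit, -1 for a 0-bit, buried in heavy random noise.
signal = np.repeat(2 * bits - 1, samples_per_bit).astype(float)
noisy = signal + rng.normal(0.0, 3.0, signal.size)

# Matched-filter detection: integrate over each bit period, then
# decide by sign -- the noise sum grows only as sqrt(N), so the
# per-bit SNR improves with the integration length.
integrated = noisy.reshape(-1, samples_per_bit).sum(axis=1)
decoded = (integrated > 0).astype(int)
print(np.mean(decoded == bits))  # essentially error-free at this SNR
```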
Radar error statistics for the space shuttle
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
Radar error statistics for C-band and S-band radars, recommended for use with the ground-tracking programs that process space shuttle tracking data, are presented. The statistics are divided into two parts: bias error statistics, denoted by the subscript B, and high-frequency error statistics, denoted by the subscript q. Bias errors may range from slowly varying to constant. High-frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correcting for atmospheric refraction effects. High-frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.
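A two-part error model of this kind, a constant bias plus sample-to-sample correlated high-frequency noise, is often sketched as a first-order Gauss-Markov process. The numbers below are illustrative stand-ins, not the report's recommended statistics:

```python
import numpy as np

def radar_range_errors(n, sigma_b, sigma_q, tau, dt, seed=0):
    """Illustrative two-part radar error model: a constant bias drawn
    with standard deviation sigma_b, plus high-frequency noise that is
    correlated from sample to sample (first-order Gauss-Markov process
    with time constant tau and sample spacing dt)."""
    rng = np.random.default_rng(seed)
    bias = rng.normal(0.0, sigma_b)
    phi = np.exp(-dt / tau)              # sample-to-sample correlation
    q = np.empty(n)
    q[0] = rng.normal(0.0, sigma_q)
    for k in range(1, n):
        # Driving noise scaled so the process stays stationary.
        q[k] = phi * q[k - 1] + rng.normal(0.0, sigma_q * np.sqrt(1 - phi**2))
    return bias + q

# Hypothetical range-error track: 10 m bias sigma, 3 m noise sigma,
# 2 s correlation time, 10 Hz samples.
errs = radar_range_errors(n=5000, sigma_b=10.0, sigma_q=3.0, tau=2.0, dt=0.1)
print(errs.shape)
```

Setting `tau` near zero recovers uncorrelated (white) high-frequency noise, the other case the report distinguishes.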
Error diffusion with a more symmetric error distribution
NASA Astrophysics Data System (ADS)
Fan, Zhigang
1994-05-01
In this paper a new error diffusion algorithm is presented that effectively eliminates the `worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the `future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
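For reference, one common standard one-pass method that the new two-pass algorithm improves on can be sketched with the classic Floyd-Steinberg weights; the paper's own two-pass filter coefficients are not reproduced here:

```python
import numpy as np

def floyd_steinberg(img):
    """One-pass error diffusion (a standard method of the kind the paper
    refines): each pixel's quantization error is pushed only to 'future'
    pixels, right and below, which is the asymmetry behind the familiar
    worm artifacts."""
    work = img.astype(float).copy()
    h, w = work.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if work[y, x] >= 0.5 else 0.0
            e = work[y, x] - out[y, x]
            if x + 1 < w:
                work[y, x + 1] += e * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += e * 3 / 16
                work[y + 1, x] += e * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += e * 1 / 16
    return out

gray = np.full((32, 32), 0.4)
halftone = floyd_steinberg(gray)
print(round(halftone.mean(), 2))  # average intensity approximately preserved
```

The proposed algorithm replaces the single forward pass per scanline with a forward pass followed by a backward pass, so error reaches pixels on both sides and the noise-shaping filter becomes mirror-symmetric in magnitude.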
Interpolation Errors in Spectrum Analyzers
NASA Technical Reports Server (NTRS)
Martin, J. L.
1996-01-01
To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
Relative error covariance analysis techniques and application
NASA Technical Reports Server (NTRS)
Wolff, Peter J.; Williams, Bobby G.
1988-01-01
A technique for computing the error covariance of the difference between two estimators derived from different (possibly overlapping) data arcs is presented. The relative error covariance is useful for predicting the achievable consistency between Kalman-Bucy filtered estimates generated from two (not necessarily disjoint) data sets. The relative error covariance analysis technique is then applied to a Venus Orbiter simulation.
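The quantity described above follows a standard identity: for estimation errors e1, e2 with covariances P1, P2 and cross-covariance P12, the covariance of their difference is P1 + P2 - P12 - P12^T. The sketch below shows only this identity; the paper's contribution is how to obtain P12 for overlapping data arcs, which is not reproduced here.

```python
import numpy as np

def relative_error_covariance(P1, P2, P12):
    # Covariance of e1 - e2, given Cov(e1) = P1, Cov(e2) = P2 and the
    # cross-covariance P12 = E[e1 e2^T]. Textbook identity; computing
    # P12 for overlapping arcs is the paper's technique (not shown).
    return P1 + P2 - P12 - P12.T

# Two fully correlated, identical estimates agree exactly...
P1 = np.array([[2.0]])
zero = relative_error_covariance(P1, P1, P1)
# ...while independent estimates (P12 = 0) simply add covariances.
P2 = np.array([[3.0]])
indep = relative_error_covariance(P1, P2, np.zeros((1, 1)))
```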
Khine, Soe Minn; Houra, Tomoya; Tagawa, Masato
2013-04-01
In temperature measurement of non-isothermal fluid flows by a contact-type temperature sensor, heat conduction along the sensor body can cause significant measurement error, called "heat-conduction error." The conventional formula for estimating the heat-conduction error was derived under the condition that the fluid temperature to be measured is uniform. Thus, if we apply the conventional formula to a thermal field with a temperature gradient, the heat-conduction error will be underestimated. In the present study, we introduce a universal physical model of a temperature-measurement system to estimate the heat-conduction error accurately even when a temperature gradient exists in non-isothermal fluid flows. Accordingly, we derive a widely applicable formula for estimating and evaluating the heat-conduction error. We then verify the effectiveness of the proposed formula experimentally using two non-isothermal fields whose fluid-dynamical characteristics are quite different: a wake flow formed behind a heated cylinder and a candle flame. The results confirm that the proposed formula accurately represents the experimental behavior of the heat-conduction error, which cannot be explained appropriately by the existing formula. In addition, we analyze theoretically the effects of the heat-conduction error on fluctuating-temperature measurement of a non-isothermal unsteady fluid flow to derive the frequency response of the temperature sensor to be used. The analysis shows that the heat-conduction error in temperature-fluctuation measurement appears only in a low-frequency range. Therefore, if the power-spectrum distribution of the temperature fluctuations to be measured lies sufficiently far from this low-frequency range, the heat-conduction error has virtually no effect on temperature-fluctuation measurements, even for a temperature sensor subject to heat-conduction error.
Selection of Error-Less Synthetic Genes in Yeast.
Hoshida, Hisashi; Yarimizu, Tohru; Akada, Rinji
2017-01-01
Conventional gene synthesis is usually accompanied by sequence errors, which are often deletions derived from chemically synthesized oligonucleotides. Such deletions lead to frame shifts and mostly result in premature translational terminations. Therefore, in-frame fusion of a marker gene downstream of a synthetic gene is an effective strategy to select for frame-shift-free synthetic genes. Functional expression of the fused marker gene indicates that the synthetic gene is translated without premature termination, i.e., that it is an error-less synthetic gene. A recently developed nonhomologous end joining (NHEJ)-mediated DNA cloning method in the yeast Kluyveromyces marxianus is suitable for the selection of frame-shift-free synthetic genes. Transformation and NHEJ-mediated in-frame joining of a synthetic gene with a selection marker gene enables colony formation only by yeast cells containing synthetic genes without premature termination. This method increased the selection frequency of error-less synthetic genes by 3- to 12-fold. PMID:27671945
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-01
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.
Yang, Guangtao Swaaij, R. A. C. M. M. van; Dobrovolskiy, S.; Zeman, M.
2014-01-21
In this contribution, we demonstrate the application of temperature-dependent capacitance-frequency (C-f) measurements to forward-biased n-i-p hydrogenated amorphous silicon (a-Si:H) solar cells. By using a forward bias, the C-f measurement can detect the density of defect states in a particular energy range of the interface region. We carried out this measurement on n-i-p a-Si:H solar cells whose intrinsic layer was exposed to an H2 plasma before p-type layer deposition. After this treatment, the open-circuit voltage and fill factor increased significantly, as did the blue response of the solar cells, as concluded from external quantum efficiency. For single-junction n-i-p a-Si:H solar cells, the initial efficiency increased from 6.34% to 8.41%. This performance enhancement is believed to be mainly due to a reduction of the defect density in the i-p interface region after the H2-plasma treatment. These results are confirmed by the C-f measurements: after the H2-plasma treatment, the defect density in the intrinsic layer near the i-p interface region is lower and peaks at an energy level deeper in the band gap. These C-f measurements therefore enable us to monitor changes in the defect density in the interface region as a result of a hydrogen plasma. The lower defect density at the i-p interface detected by the C-f measurements is supported by dark current-voltage measurements, which indicate a lower carrier recombination rate.
Prejac, J; Višnjević, V; Drmić, S; Skalny, A A; Mimica, N; Momčilović, B
2014-04-01
Today, iodine deficiency is, after iron, the most common human nutritional deficiency, in developed European countries and underdeveloped third-world countries alike. The current biological indicator of iodine status is urinary iodine, which reflects only very recent iodine exposure; a long-term indicator of iodine status remains to be identified. We analyzed hair iodine in a prospective, observational, cross-sectional, exploratory study involving 870 apparently healthy Croatians (270 men and 600 women). Hair iodine was analyzed with inductively coupled plasma mass spectrometry (ICP-MS). The population (n = 870) hair iodine (IH) median was 0.499 μg g⁻¹ (0.482 and 0.508 μg g⁻¹ for men and women, respectively), suggesting no sex-related difference. We studied hair iodine uptake via the logistic sigmoid saturation curve of the median derivatives to assess iodine deficiency, adequacy, and excess. We estimate that overt iodine deficiency occurs when the hair iodine concentration is below 0.15 μg g⁻¹. There was then a saturation range of about 0.15-2.0 μg g⁻¹ (r² = 0.994). Eventually, the sigmoid curve became saturated at about 2.0 μg g⁻¹ and upward, suggesting excessive iodine exposure. Hair appears to be a valuable and robust long-term biological indicator tissue for assessing iodine body status. We propose that adequate iodine status corresponds to a hair iodine (IH) uptake saturation of 0.565-0.739 μg g⁻¹ (55-65%).
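A logistic saturation curve of the kind used above can be fitted by linearizing the sigmoid. The sketch below uses synthetic data with arbitrary steepness and midpoint parameters; it illustrates the fitting step only and does not reproduce the study's estimates.

```python
import numpy as np

# Synthetic illustration: fraction of uptake saturation vs. hair
# iodine (ug/g). k_true and x0_true are arbitrary assumptions.
x = np.linspace(0.1, 2.0, 50)
k_true, x0_true = 4.0, 0.6
y = 1.0 / (1.0 + np.exp(-k_true * (x - x0_true)))

# Linearize the logistic: log(y / (1 - y)) = k * (x - x0),
# then recover k and x0 by ordinary least squares.
z = np.log(y / (1.0 - y))
k_est, intercept = np.polyfit(x, z, 1)
x0_est = -intercept / k_est
```

On noise-free data the linearized fit recovers the generating parameters exactly; with real measurements one would fit the sigmoid directly by nonlinear least squares instead.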
Operational Interventions to Maintenance Error
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki
1997-01-01
A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
The analysis of a coherent frequency hopped spread spectrum system
NASA Astrophysics Data System (ADS)
Su, Chun-Meng; Milstein, Laurence B.
A digital joint phase/timing tracking loop for a coherent frequency-hopped spread-spectrum system is analyzed for both training mode and tracking performance. Under minor assumptions, the phase error is modeled as a homogeneous finite Markov chain. The length of the training period, the approximate probability of entering the tracking range, the steady-state average error probability, and the mean-time to loss-of-lock are derived. The effects of both nonzero RF phase error and cubic channel phase response are presented. It is shown that the performance of the system can be designed to be close to that of a perfectly synchronized system.
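When the phase error is modeled as a homogeneous finite Markov chain, steady-state quantities such as the average error probability follow from the chain's stationary distribution. The sketch below computes a stationary distribution for a toy three-state phase-error chain; the chain itself is an arbitrary illustration, not the loop model from the paper.

```python
import numpy as np

def stationary_distribution(P):
    # Stationary distribution pi of a finite Markov chain with
    # row-stochastic transition matrix P: the left eigenvector of P
    # (eigenvector of P^T) for eigenvalue 1, normalized to sum to 1.
    vals, vecs = np.linalg.eig(P.T)
    i = np.argmin(np.abs(vals - 1.0))
    pi = np.real(vecs[:, i])
    return pi / pi.sum()

# Toy chain: phase-error states (-1, 0, +1) with a pull toward 0.
P = np.array([[0.2, 0.8, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.8, 0.2]])
pi = stationary_distribution(P)
```

For this chain the stationary distribution is (0.1, 0.8, 0.1), i.e. the loop spends 80% of its time at zero phase error in steady state.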
NASA Technical Reports Server (NTRS)
Blucker, T. J.; Ferry, W. W.
1971-01-01
An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.
NASA Astrophysics Data System (ADS)
Orem, C. A.; Pelletier, J. D.
2015-11-01
Flood-envelope curves (FEC) are useful for constraining the upper limit of possible flood discharges within drainage basins in a particular hydroclimatic region. Their usefulness, however, is limited by their lack of a well-defined recurrence interval. In this study we use radar-derived precipitation estimates to develop an alternative to the FEC method, i.e. the frequency-magnitude-area-curve (FMAC) method, that incorporates recurrence intervals. The FMAC method is demonstrated in two well-studied U.S. drainage basins, i.e. the Upper and Lower Colorado River basins (UCRB and LCRB, respectively), using Stage III Next-Generation-Radar (NEXRAD) gridded products and the diffusion-wave flow-routing algorithm. The FMAC method can be applied worldwide using any radar-derived precipitation estimates. In the FMAC method, idealized basins of similar contributing area are grouped together for frequency-magnitude analysis of precipitation intensity. These data are then routed through the idealized drainage basins of different contributing areas, using contributing-area-specific estimates for channel slope and channel width. Our results show that FMACs of precipitation discharge are power-law functions of contributing area with an average exponent of 0.79 ± 0.07 for recurrence intervals from 10 to 500 years. We compare our FMACs to published FECs and find that for wet antecedent-moisture conditions, the 500-year FMAC of flood discharge in the UCRB is on par with the US FEC for contributing areas of ~10² to 10³ km². FMACs of flood discharge for the LCRB exceed the published FEC for the LCRB for contributing areas in the range of ~10² to 10⁴ km². The FMAC method retains the power of the FEC method for constraining flood hazards in basins that are ungauged or have short flood records, yet it has the added advantage that it includes recurrence interval information necessary for estimating event probabilities.
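The power-law relation between discharge and contributing area reported above is conventionally fitted in log-log space, where Q = c A^b becomes a straight line. A minimal sketch with synthetic data (the exponent mirrors the paper's average of 0.79; the coefficient and the data are arbitrary):

```python
import numpy as np

# Synthetic FMAC-style data: discharge Q as a power law of
# contributing area A, Q = c * A**b.
A = np.logspace(1, 4, 20)      # contributing areas, km^2
Q = 3.0 * A ** 0.79            # synthetic discharges (c = 3 assumed)

# log Q = b * log A + log c, so an ordinary least-squares line in
# log-log space recovers the exponent and coefficient.
b_est, log_c = np.polyfit(np.log(A), np.log(Q), 1)
c_est = np.exp(log_c)
```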
Study of geopotential error models used in orbit determination error analysis
NASA Technical Reports Server (NTRS)
Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.
1991-01-01
The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
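The uncorrelated error modeling approach described above aggregates individual coefficient effects as independent error sources, which amounts to a root-sum-square combination. A minimal sketch of that aggregation step (the effect values are placeholders, not GEM coefficient errors):

```python
import numpy as np

def uncorrelated_aggregate(effects):
    # Treat each spherical-harmonic coefficient's orbit-error
    # contribution as an independent source and aggregate by
    # root-sum-square, as in the uncorrelated error modeling approach.
    effects = np.asarray(effects, dtype=float)
    return float(np.sqrt(np.sum(effects ** 2)))

# Two independent 3 m and 4 m contributions combine to 5 m RSS.
total = uncorrelated_aggregate([3.0, 4.0])
```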
Unforced errors and error reduction in tennis
Brody, H
2006-01-01
Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568
Køllgaard, Tania; Ugurel-Becker, Selma; Idorn, Manja; Andersen, Mads Hald; Becker, Jürgen C; Straten, Per Thor
2015-01-01
Various subsets of immune regulatory cells are suggested to influence the outcome of therapeutic antigen-specific anti-tumor vaccinations. We performed an exploratory analysis of a possible correlation of pre-vaccination Th17 cells, MDSCs, and Tregs with both vaccination-induced T-cell responses as well as clinical outcome in metastatic melanoma patients vaccinated with survivin-derived peptides. Notably, we observed dysfunctional Th1 and cytotoxic T cells, i.e. down-regulation of the CD3ζchain (p=0.001) and an impaired IFNγ-production (p=0.001) in patients compared to healthy donors, suggesting an altered activity of immune regulatory cells. Moreover, the frequencies of Th17 cells (p=0.03) and Tregs (p=0.02) were elevated as compared to healthy donors. IL-17-secreting CD4+ T cells displayed an impact on the immunological and clinical effects of vaccination: Patients characterized by high frequencies of Th17 cells at pre-vaccination were more likely to develop survivin-specific T-cell reactivity post-vaccination (p=0.03). Furthermore, the frequency of Th17 (p=0.09) and Th17/IFNγ+ (p=0.19) cells associated with patient survival after vaccination. In summary, our explorative, hypothesis-generating study demonstrated that immune regulatory cells, in particular Th17 cells, play a relevant role for generation of the vaccine-induced anti-tumor immunity in cancer patients, hence warranting further investigation to test for validity as predictive biomarkers.
Tignanelli, Christopher J; Herrera Loeza, Silvia G; Yeh, Jen Jen
2014-09-01
One obstacle in the translation of advances in cancer research into the clinic is a deficiency of adequate preclinical models that recapitulate human disease. Patient-derived xenograft (PDX) models are established by engrafting patient tumor tissue into mice and are advantageous because they capture tumor heterogeneity. One concern with these models is that selective pressure could lead to mutational drift and thus be an inaccurate reflection of patient tumors. Therefore, we evaluated if mutational frequency in PDX models is reflective of patient populations and if crucial mutations are stable across passages. We examined KRAS and PIK3CA gene mutations from pancreatic ductal adenocarcinoma (PDAC) (n = 30) and colorectal cancer (CRC) (n = 37) PDXs for as many as eight passages. DNA was isolated from tumors and target sequences were amplified by polymerase chain reaction. KRAS codons 12/13 and PIK3CA codons 542/545/1047 were examined using pyrosequencing. Twenty-three of 30 (77%) PDAC PDXs had KRAS mutations and one of 30 (3%) had PIK3CA mutations. Fifteen of 37 (41%) CRC PDXs had KRAS mutations and three of 37 (8%) had PIK3CA mutations. Mutations were 100 per cent preserved across passages. We found that the frequency of KRAS (77%) and PIK3CA (3%) mutations in PDAC PDX was similar to frequencies in patient tumors (71 to 100% KRAS, 0 to 11% PIK3CA). Similarly, KRAS (41%) and PIK3CA (8%) mutations in CRC PDX closely paralleled patient tumors (35 to 51% KRAS, 12 to 21% PIK3CA). The accurate mirroring and stability of genetic changes in PDX models compared with patient tumors suggest that these models are good preclinical surrogates for patient tumors.
Lu, Bing; Pan, Wei; Zou, Xihua; Yan, Xianglei; Yan, Lianshan; Luo, Bin
2015-05-15
A photonic approach for both wideband Doppler frequency shift (DFS) measurement and direction ambiguity resolution is proposed and experimentally demonstrated. In the proposed approach, a light wave from a laser diode is split into two paths. In one path, the DFS information is converted into an optical sideband close to the optical carrier by using two cascaded electro-optic modulators, while in the other path, the optical carrier is up-shifted by a specific value (e.g., from several MHz to hundreds of MHz) using an optical-frequency shift module. Then the optical signals from the two paths are combined and detected by a low-speed photodetector (PD), generating a low-frequency electronic signal. Through a subtraction between the specific optical frequency shift and the measured frequency of the low-frequency signal, the value of DFS is estimated from the derived absolute value, and the direction ambiguity is resolved from the derived sign (i.e., + or -). In the proof-of-concept experiments, DFSs from -90 to 90 kHz are successfully estimated for microwave signals at 10, 15, and 20 GHz, where the estimation errors are lower than ±60 Hz. The estimation errors can be further reduced via the use of a more stable optical frequency shift module.
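The final subtraction step above reduces to simple arithmetic: the known optical frequency offset minus the measured beat frequency yields a signed Doppler shift, whose magnitude is |DFS| and whose sign resolves the direction ambiguity. The sketch below assumes the sign convention (positive shift for a beat below the offset) for illustration.

```python
def doppler_shift(f_offset_hz, f_beat_hz):
    # Signed DFS from the known optical frequency offset and the
    # measured low-frequency beat: DFS = offset - beat. A beat below
    # the offset gives a positive shift, a beat above gives a
    # negative one (sign convention assumed for illustration).
    return f_offset_hz - f_beat_hz

# A 200 kHz offset with a 150 kHz beat implies a +50 kHz shift;
# a 260 kHz beat implies a -60 kHz shift, i.e. the opposite direction.
up = doppler_shift(200e3, 150e3)
down = doppler_shift(200e3, 260e3)
```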
Phase measurement error in summation of electron holography series.
McLeod, Robert A; Bergen, Michael; Malac, Marek
2014-06-01
Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions.
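The drift model named above, a linear drift plus a Brownian random walk, is straightforward to simulate frame by frame. The sketch below uses arbitrary illustrative parameters (rate, step size, frame count), not the fitted values from the experiment.

```python
import numpy as np

def simulate_drift(n_frames, linear_rate, walk_sigma, seed=1):
    # Drift model from the abstract: a deterministic linear component
    # plus a cumulative-sum (Brownian random walk) component.
    # All parameter values here are illustrative assumptions.
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, walk_sigma, size=n_frames)
    return linear_rate * np.arange(n_frames) + np.cumsum(steps)

# e.g. simulated phase drift (rad) over a 900-frame hologram series
drift = simulate_drift(900, 0.002, 0.01)
```

In such a model the random-walk variance grows linearly with the number of frames, which is why summation error depends on the trade-off between per-frame exposure and series length.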
Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J
2001-10-01
The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test.
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies w_{r} in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = w_{r}/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
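The locking condition above reduces to a one-line calculation. A minimal sketch, with purely illustrative values for the mode's real frequency w_r and wavenumber k (neither value is taken from the paper):

```python
# Flow velocity at which the error field locks to the backward
# propagating tearing mode: v = w_r / k. The numbers below are
# hypothetical, chosen only to exercise the formula.
def locking_velocity(w_r, k):
    """Locking velocity for a mode with real frequency w_r (rad/s)
    and wavenumber k (1/m); returns m/s."""
    return w_r / k

v = locking_velocity(w_r=2.0e4, k=10.0)
print(v)  # velocity in m/s for the assumed values
```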
Computing Instantaneous Frequency by normalizing Hilbert Transform
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2005-01-01
This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. This method is designed specifically to circumvent the limitation set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake that has persisted to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.
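The normalization idea can be sketched numerically. The following is not the patented NAHT/NHT algorithm itself, only a minimal illustration of the underlying step: divide out a crude amplitude envelope so the signal approaches unit amplitude, then take the instantaneous frequency from the derivative of the unwrapped Hilbert phase. The test signal and parameters are invented:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: suppress negative frequencies."""
    n = len(x)  # assumed even below
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)               # 1000 samples
am = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t)  # slow amplitude modulation
x = am * np.cos(2 * np.pi * 50.0 * t)         # 50 Hz carrier

envelope = np.abs(analytic_signal(x))         # crude amplitude estimate
phase = np.unwrap(np.angle(analytic_signal(x / envelope)))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
```

For this amplitude-modulated test signal the median of `inst_freq` recovers the 50 Hz carrier frequency closely, because the normalization removes the amplitude variation before the phase is differentiated.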
NASA Astrophysics Data System (ADS)
Takemura, Shunsuke; Furumura, Takashi
2013-04-01
We studied the scattering properties of high-frequency seismic waves due to the distribution of small-scale velocity fluctuations in the crust and upper mantle beneath Japan, based on an analysis of three-component short-period seismograms and a comparison with finite difference method (FDM) simulations of seismic wave propagation using various stochastic random velocity fluctuation models. Using a large number of dense High-Sensitivity Seismograph network waveform data from 310 shallow crustal earthquakes, we examined the P-wave energy partition of the transverse component (PEPT), which is caused by scattering of the seismic wave in heterogeneous structure, as a function of frequency and hypocentral distance. At distances of less than D = 150 km, the PEPT increases with increasing frequency and is approximately constant in the range from D = 50 to 150 km. The PEPT was found to increase suddenly at distances over D = 150 km and was larger in the high-frequency band (f > 4 Hz). Therefore, strong scattering of the P wave may occur along the propagation path (upper crust, lower crust and around the Moho discontinuity) of the P-wave first-arrival phase at distances larger than D = 150 km. We also found a regional difference in the PEPT value: the PEPT is large on the backarc side of northeastern Japan compared with southwestern Japan and the forearc side of northeastern Japan. These PEPT results, which were derived from shallow earthquakes, indicate that the shallow heterogeneous structure on the backarc side of northeastern Japan is stronger and more complex than in other areas. These hypotheses, that is, the depth and regional change of small-scale velocity fluctuations, are examined by 3-D FDM simulation using various heterogeneous structure models. By comparing the observed features of the PEPT with simulation results, we found that strong seismic wave scattering occurs in the lower crust due to relatively higher velocity and stronger heterogeneities
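A PEPT-like measure can be sketched as an energy ratio. The study's exact windowing and definition are not given here, so the following assumes a simple form (transverse-component energy divided by total three-component energy in the P window) and uses made-up sample amplitudes, not real seismograms:

```python
# Assumed PEPT-style measure: share of P-window energy on the
# transverse component relative to all three components.
# All amplitudes below are invented for illustration.
def energy(trace):
    """Sum of squared sample amplitudes."""
    return sum(a * a for a in trace)

radial = [0.9, -0.8, 0.7]
vertical = [1.0, -0.9, 0.8]
transverse = [0.2, -0.1, 0.15]   # small: weak P-to-transverse scattering

pept = energy(transverse) / (energy(radial) + energy(vertical) + energy(transverse))
print(pept)  # larger values would indicate stronger scattering
```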
NASA Astrophysics Data System (ADS)
Brasington, J.; Hicks, M.; Wheaton, J. M.; Williams, R. D.; Vericat, D.
2013-12-01
Repeat surveys of channel morphology provide a means to quantify fluvial sediment storage and enable inferences about changes in long-term sediment supply, watershed delivery and bed level adjustment; information vital to support effective river and land management. Over shorter time-scales, direct differencing of fluvial terrain models may also offer a route to predict reach-averaged sediment transport rates and quantify the patterns of channel morphodynamics and the processes that force them. Recent and rapid advances in geomatics have facilitated these goals by enabling the acquisition of topographic data at spatial resolutions and precisions suitable for characterising river morphology at the scale of individual grains over multi-kilometre reaches. Despite improvements in topographic surveying, inverting the terms of the sediment budget to derive estimates of sediment transport and link these to morphodynamic processes is, nonetheless, often confounded by limited knowledge of either the sediment supply or efflux across a boundary of the control volume, or unobserved cut-and-fill taking place between surveys. This latter problem is particularly poorly constrained, as field logistics frequently preclude surveys at a temporal frequency sufficient to capture changes in sediment storage associated with each competent event, let alone changes during individual floods. In this paper, we attempt to quantify the principal sources of uncertainty in morphologically-derived bedload transport rates for the large, labile, gravel-bed braided Rees River, which drains the Southern Alps of NZ. During the austral summer of 2009-10, a unique time series of 10 high-quality DEMs was derived for a 3 x 0.7 km reach of the Rees, using a combination of mobile terrestrial laser scanning, aDcp soundings and aerial image analysis. Complementary measurements of the forcing flood discharges and estimates of event-based particle step lengths were also acquired during the field campaign
A Review of Errors in the Journal Abstract
ERIC Educational Resources Information Center
Lee, Eunpyo; Kim, Eun-Kyung
2013-01-01
(percentage) of abstracts that involved errors, the most erroneous part of the abstract, and the types and frequency of errors. Also, the purpose was expanded to compare the results with those of the previous…
Errors associated with outpatient computerized prescribing systems
Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G
2011-01-01
Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428
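The headline rates in the abstract follow from simple ratios, reproduced here as a quick arithmetic check using the counts reported above:

```python
# Counts as reported in the abstract.
n_prescriptions = 3850
n_with_error = 452      # prescriptions containing >= 1 error
n_errors = 466          # total errors found
n_potential_ade = 163   # errors with potential for harm

error_rate = 100 * n_with_error / n_prescriptions  # % of prescriptions with errors
ade_share = 100 * n_potential_ade / n_errors       # % of errors that were potential ADEs
print(round(error_rate, 1), round(ade_share, 1))   # matches the reported 11.7% and 35.0%
```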
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Evaluating the impact of genotype errors on rare variant tests of association.
Cook, Kaitlyn; Benitez, Alejandra; Fu, Casey; Tintle, Nathan
2014-01-01
The new class of rare variant tests has usually been evaluated assuming perfect genotype information. In reality, rare variant genotypes may be incorrect, and so rare variant tests should be robust to imperfect data. Errors and uncertainty in SNP genotyping are already known to dramatically impact statistical power for single marker tests on common variants and, in some cases, inflate the type I error rate. Recent results show that uncertainty in genotype calls derived from sequencing reads is dependent on several factors, including read depth, calling algorithm, number of alleles present in the sample, and the frequency at which an allele segregates in the population. We have recently proposed a general framework for the evaluation and investigation of rare variant tests of association, classifying most rare variant tests into one of two broad categories (length or joint tests). We use this framework to relate factors affecting genotype uncertainty to the power and type I error rate of rare variant tests. We find that non-differential genotype errors (an error process that occurs independent of phenotype) decrease power, with larger decreases for extremely rare variants and for the common homozygote to heterozygote error. Differential genotype errors (an error process that is associated with phenotype status) lead to inflated type I error rates, which are more likely to occur at sites with more common homozygote to heterozygote errors than vice versa. Finally, our work suggests that certain rare variant tests and study designs may be more robust to the inclusion of genotype errors. Further work is needed to directly integrate genotype calling algorithm decisions, study costs and test statistic choices to provide comprehensive design and analysis advice which appropriately accounts for the impact of genotype errors.
Zhang, Xiaotong; Liu, Jiaen; Van de Moortele, Pierre-Francois; Schmitter, Sebastian; He, Bin
2014-12-15
Electrical Properties Tomography (EPT) technique utilizes measurable radio frequency (RF) coil induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., complex numbers of B1 distribution towards electric field calculation, can be used to estimate, on a subject-specific basis, local Specific Absorption Rate (SAR). SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil, under a particular multi-channel B1-shim setting (B1-shim I). Using this unique set of B1 calibration, local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific set of parameters for a heating MRI protocol consisting of RF pulses played at 1% duty cycle. Local SAR results, which could not be directly measured with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI thermometry based on the proton chemical shift.
Frequency-domain analysis of absolute gravimeters
NASA Astrophysics Data System (ADS)
Svitlov, S.
2012-12-01
An absolute gravimeter is analysed as a linear time-invariant system in the frequency domain. Frequency responses of absolute gravimeters are derived analytically based on the propagation of the complex exponential signal through their linear measurement functions. Depending on the model of motion and the number of time-distance coordinates, an absolute gravimeter is considered as a second-order (three-level scheme) or third-order (multiple-level scheme) low-pass filter. It is shown that the behaviour of an atom absolute gravimeter in the frequency domain corresponds to that of the three-level corner-cube absolute gravimeter. Theoretical results are applied for evaluation of random and systematic measurement errors and optimization of an experiment. The developed theory agrees with known results of an absolute gravimeter analysis in the time and frequency domains and can be used for measurement uncertainty analyses, building of vibration-isolation systems and synthesis of digital filtering algorithms.
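As a stand-in for the three-level scheme's response, a generic second-order low-pass magnitude curve illustrates the filtering behavior the abstract describes; the corner frequency and the Butterworth-like damping below are illustrative assumptions, not values derived from the paper:

```python
import math

# Generic second-order low-pass magnitude response |H(f)| used as a
# sketch of the gravimeter-as-filter idea; f_c and q are assumptions.
def lowpass2_mag(f, f_c, q=0.707):
    """Magnitude response of a second-order low-pass with corner f_c."""
    r = f / f_c
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (r / q) ** 2)

dc_gain = lowpass2_mag(0.1, 10.0)       # well below the corner: ~1
corner_gain = lowpass2_mag(10.0, 10.0)  # at the corner: ~0.707 (-3 dB)
print(round(dc_gain, 3), round(corner_gain, 3))
```

High-frequency disturbances (e.g., vibration) are attenuated with the characteristic second-order roll-off, which is the property the abstract exploits for vibration-isolation design and digital filtering.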
Adjoint Error Estimation for Linear Advection
Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S
2011-03-30
An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
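The adjoint-weighted-residual idea is easiest to see in its discrete linear form, where it is exact: for A x = b and a functional J = g·x, the error in J from an approximate solution x_h equals lam·(b - A x_h), where A^T lam = g. This is a toy linear-algebra analogue, not the paper's finite-volume setting; all values are invented:

```python
# Toy 2x2 demonstration that the adjoint-weighted residual reproduces
# the functional error exactly for a linear problem.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
g = [1.0, 1.0]          # functional weights: J = x[0] + x[1]

x = [0.8, 1.4]          # exact solution of A x = b
lam = [0.4, 0.2]        # exact solution of A^T lam = g (A symmetric here)
x_h = [0.7, 1.3]        # some approximate solution

def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

residual = [b[i] - dot(A[i], x_h) for i in range(2)]
exact_error = dot(g, x) - dot(g, x_h)   # J(x) - J(x_h)
estimate = dot(lam, residual)           # adjoint-weighted residual
print(exact_error, estimate)
```

The two printed numbers agree to rounding, which mirrors the paper's point that the error formula is exact given a sufficiently well-resolved adjoint.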
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Decoding and synchronization of error correcting codes
NASA Astrophysics Data System (ADS)
Madkour, S. A.
1983-01-01
Decoding devices for hard-quantization and soft-decision error correcting codes are discussed. A Meggitt decoder for Reed-Solomon polynomial codes was implemented and tested; it uses 49 TTL logic ICs, and a maximum binary frequency of 30 Mbit/s is demonstrated. A soft-decision decoding approach was applied to hard-decision decoding, using the principles of threshold decoding. Simulation results indicate that the proposed scheme achieves satisfactory performance using only a small number of parity checks. The combined correction of substitution and synchronization errors is analyzed. The algorithm presented shows the capability of convolutional codes to correct synchronization errors as well as independent additive errors without any additional redundancy.
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators.
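In the simplest special case, where experimental uncertainties are negligible, the maximum-likelihood estimate of the model error reduces to the rms deviation between theory and experiment. A sketch with invented numbers (the full derivation in the paper also accounts for experimental errors):

```python
import math

# ML estimate of a theoretical model's error when experimental
# uncertainties are negligible: sigma^2 = mean squared residual.
# The theory/experiment values below are made up for illustration.
theory = [1.0, 2.1, 2.9, 4.2]
experiment = [1.1, 2.0, 3.1, 4.0]

residuals = [t - e for t, e in zip(theory, experiment)]
sigma_model = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print(round(sigma_model, 3))  # rms theory-experiment deviation
```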
AUTOMATIC FREQUENCY CONTROL SYSTEM
Hansen, C.F.; Salisbury, J.D.
1961-01-10
A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity, and a secondary oscillator is actuated in which the cavity is the frequency-determining element. A low frequency is mixed with the output of the driving oscillator, and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.
NASA Astrophysics Data System (ADS)
James Elliott, C.; McVey, Brian D.; Quimby, David C.
1991-07-01
The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.
ERIC Educational Resources Information Center
Gressang, Jane E.
2010-01-01
Second language (L2) learners notoriously have trouble using articles in their target languages (e.g., "a", "an", "the" in English). However, researchers disagree about the patterns and causes of these errors. Past studies have found that L2 English learners: (1) Predominantly omit articles (White 2003, Robertson 2000), (2) Overuse "the" (Huebner…
Accepting error to make less error.
Einhorn, H J
1986-01-01
In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.
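The probability-learning point can be made concrete: matching prediction frequencies to outcome frequencies is beaten by always predicting the majority outcome, i.e., accepting a systematic error lowers total error. A sketch with an assumed base rate of 0.7:

```python
# Expected accuracy under two prediction policies for a binary outcome
# that occurs with base rate p. The base rate is an assumption chosen
# for illustration.
p = 0.7
matching_accuracy = p * p + (1 - p) * (1 - p)  # predict "yes" a fraction p of the time
maximizing_accuracy = p                        # always predict the majority outcome
print(round(matching_accuracy, 2), maximizing_accuracy)
```

Matching yields an expected accuracy of 0.58, while consistently predicting the majority outcome yields 0.70: the policy that accepts being wrong on every minority trial makes fewer errors overall.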
Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...
Drug Errors in Anaesthesiology
Jain, Rajnish Kumar; Katiyar, Sarika
2009-01-01
Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden to health care systems apart from the patient losses. Common causes of these errors and their prevention is discussed. PMID:20640103
Tiffany, T O; Thayer, P C; Coelho, C M; Manning, G B
1976-09-01
We present a total system error evaluation of random error, based on a propagation-of-error analysis of the expression for the calculation of enzyme activity. A simple expression is derived that contains terms for photometric error, timing uncertainty, temperature-control error, sample and reagent volume errors, and pathlength error. This error expression was developed in general to provide a simple means of evaluating the magnitude of random error in an analytical system, and in particular to provide an error evaluation protocol for the assessment of the error components in a prototype Miniature Centrifugal Analyzer system. Individual system components of error are measured. These measured error components are combined in the error expression to predict performance. Enzyme activity measurements are made to correlate with the projected error data. In conclusion, it is demonstrated that this is one method for permitting the clinical chemist and the instrument manufacturer to establish reasonable error limits. PMID:954193
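For a rate computed as a product or quotient of factors, small independent relative errors combine in quadrature. A sketch of that combination step with invented component magnitudes (not the paper's measured values):

```python
import math

# Quadrature combination of independent relative errors for an
# activity calculation of product/quotient form. All component
# magnitudes are illustrative assumptions.
relative_errors = {
    "photometric": 0.005,
    "timing": 0.002,
    "temperature": 0.003,
    "sample_volume": 0.004,
    "reagent_volume": 0.004,
    "pathlength": 0.002,
}
total = math.sqrt(sum(e * e for e in relative_errors.values()))
print(round(100 * total, 2))  # total relative error, percent
```

This kind of budget makes it clear which component dominates: here the (assumed) photometric term contributes the most, so tightening the smaller terms would barely change the total.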
Medication errors: definitions and classification.
Aronson, Jeffrey K
2009-06-01
1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.
[Medical errors in obstetrics].
Marek, Z
1984-08-01
Errors in medicine fall into three main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as their definition and treatment, fully applies to obstetrics. However, obstetrics differs from other fields of medicine in that an obstetrician usually deals with healthy women. At the same time, professional risk in obstetrics is very high, because errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. The obstetrician should therefore be prepared to use intensive care procedures similar to those used for resuscitation.
The Nature of Error in Adolescent Student Writing
ERIC Educational Resources Information Center
Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang
2014-01-01
This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…
Performance Errors in Weight Training and Their Correction.
ERIC Educational Resources Information Center
Downing, John H.; Lander, Jeffrey E.
2002-01-01
Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…
Regression Calibration with Heteroscedastic Error Variance
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187
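The correction described above can be illustrated with a minimal sketch of standard (homoscedastic) regression calibration on simulated data. This is not the authors' estimator or their study data; the linear outcome model, error variances, and validation setup below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Classical measurement error model: surrogate W = X + U
x = rng.normal(0.0, 1.0, n)                # true covariate (gold standard)
w = x + rng.normal(0.0, 0.5, n)            # error-prone measurement
y = 2.0 * x + rng.normal(0.0, 1.0, n)      # primary regression, true slope 2.0

# Naive slope of y on w is attenuated toward zero
beta_naive = np.polyfit(w, y, 1)[0]

# Regression calibration: estimate the calibration slope of X on W from
# validation data containing the gold standard, then rescale the naive slope
lam = np.polyfit(w, x, 1)[0]               # here ~ var(X)/(var(X)+var(U)) = 0.8
beta_rc = beta_naive / lam                 # corrected estimate, close to 2.0
```

The heteroscedastic version in the paper generalizes this rescaling; the sketch shows only the homoscedastic baseline against which it is compared.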
Analysis of Medication Error Reports
Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.
2004-11-15
In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.
Grammatical Errors Produced by English Majors: The Translation Task
ERIC Educational Resources Information Center
Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad
2011-01-01
This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…
Aircraft system modeling error and control error
NASA Technical Reports Server (NTRS)
Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)
2012-01-01
A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
Lowest-order average effect of turbulence on atmospheric profiles derived from radio occultation
NASA Technical Reports Server (NTRS)
Eshleman, V. R.; Haugstad, B. S.
1977-01-01
Turbulence in planetary atmospheres and ionospheres causes changes in angles of refraction of radio waves used in occultation experiments. Atmospheric temperature and pressure profiles, and ionospheric electron concentration profiles, derived from radio occultation measurements of Doppler frequency contain errors due to such angular offsets. The lowest-order average errors are derived from a geometrical-optics treatment of the radio-wave phase advance caused by the addition of uniform turbulence to an initially homogeneous medium. It is concluded that the average profile errors are small and that precise Doppler frequency measurements at two or more wavelengths could be used to help determine characteristics of the turbulence, as well as accuracy limits and possible correction terms for the profiles. However, a more detailed study of both frequency and intensity characteristics in radio and optical occultation measurements of turbulent planetary atmospheres and ionospheres is required to realize the full potential of such measurements.
External laser frequency stabilizer
Hall, J.L.; Hansch, T.W.
1987-10-13
A frequency transducer for controlling or modulating the frequency of a light radiation system is described comprising: a source of radiation having a predetermined frequency; an electro-optic phase modulator for receiving the radiation and for changing the phase of the radiation in proportion to an applied error voltage; an acousto-optic modulator coupled to the electro-optic modulator for shifting the frequency of the output signal of the electro-optic modulator; a signal source for providing an error voltage representing undesirable fluctuations in the frequency of the light radiation; a first channel including a fast integrator coupled between the signal source and the input circuit of the electro-optic modulator; a second channel including a voltage controlled oscillator coupled between the signal source and the acousto-optic modulator; and a network including an electronic delay circuit coupled between the first and second channels for matching the delay of the acousto-optic modulator.
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32. A large embedded real time electronic warfare command and control system for the ROLM 1606 computer are presented. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinte loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
The Error in Total Error Reduction
Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.
2013-01-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
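The TER/LER contrast can be made concrete with a toy update rule for compound training. This is a minimal sketch, not the authors' model code; the learning rate, trial count, and cue names are arbitrary.

```python
def ter_trial(weights, cues, outcome, lr=0.1):
    """Total error reduction (Rescorla-Wagner style): one shared error term,
    computed from the summed prediction of all cues present on the trial."""
    delta = outcome - sum(weights[c] for c in cues)
    for c in cues:
        weights[c] += lr * delta

def ler_trial(weights, cues, outcome, lr=0.1):
    """Local error reduction: each cue learns from its own prediction error."""
    for c in cues:
        weights[c] += lr * (outcome - weights[c])

ter_w = {"A": 0.0, "B": 0.0}
ler_w = {"A": 0.0, "B": 0.0}
for _ in range(300):                # AB+ compound training
    ter_trial(ter_w, ["A", "B"], 1.0)
    ler_trial(ler_w, ["A", "B"], 1.0)
# TER: the cues share the asymptote (each weight -> ~0.5);
# LER: each cue independently approaches the outcome (each weight -> ~1.0)
```

The divergence after compound training (cue competition under TER, none under LER) is exactly the kind of prediction the paper's model comparison exploits.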
Image defects from surface and alignment errors in grazing incidence telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.
1989-01-01
The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
2010-01-01
Aims: Cardiovascular magnetic resonance (CMR) allows non-invasive phase contrast measurements of flow through planes transecting large vessels. However, some clinically valuable applications are highly sensitive to errors caused by small offsets of measured velocities if these are not adequately corrected, for example by the use of static tissue or static phantom correction of the offset error. We studied the severity of uncorrected velocity offset errors across sites and CMR systems. Methods and Results: In a multi-centre, multi-vendor study, breath-hold through-plane retrospectively ECG-gated phase contrast acquisitions, as are used clinically for aortic and pulmonary flow measurement, were applied to static gelatin phantoms in twelve 1.5 T CMR systems, using a velocity encoding range of 150 cm/s. No post-processing corrections of offsets were implemented. The greatest uncorrected velocity offset, taken as an average over a 'great vessel' region (30 mm diameter) located up to 70 mm in-plane distance from the magnet isocenter, ranged from 0.4 cm/s to 4.9 cm/s. It averaged 2.7 cm/s over all the planes and systems. By theoretical calculation, a velocity offset error of 0.6 cm/s (representing just 0.4% of a 150 cm/s velocity encoding range) is barely acceptable, potentially causing about 5% miscalculation of cardiac output and up to 10% error in shunt measurement. Conclusion: In the absence of hardware or software upgrades able to reduce phase offset errors, all the systems tested appeared to require post-acquisition correction to achieve consistently reliable breath-hold measurements of flow. The effectiveness of offset correction software will still need testing with respect to clinical flow acquisitions. PMID:20074359
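The theoretical calculation quoted in the abstract can be reproduced with back-of-envelope arithmetic, assuming the stated 30 mm diameter great-vessel region and a nominal 5 L/min cardiac output (the latter is an assumed typical value, not from the abstract).

```python
import math

offset_cm_s = 0.6                  # velocity offset, 0.4% of the 150 cm/s VENC
diameter_cm = 3.0                  # 30 mm 'great vessel' region
area_cm2 = math.pi * (diameter_cm / 2) ** 2        # ~7.07 cm^2

# A uniform velocity offset integrates to a spurious flow over the region
flow_error_ml_min = offset_cm_s * area_cm2 * 60.0  # ~254 mL/min
cardiac_output_ml_min = 5000.0                     # assumed nominal 5 L/min
relative_error = flow_error_ml_min / cardiac_output_ml_min   # ~5%
```

This recovers the abstract's figure: even a 0.6 cm/s offset produces roughly a 5% cardiac output error, which is why the larger offsets measured (up to 4.9 cm/s) demand correction.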
Statistics of the residual refraction errors in laser ranging data
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.
New approximating results for data with errors in both variables
NASA Astrophysics Data System (ADS)
Bogdanova, N.; Todorov, S.
2015-05-01
We introduce new data from mineral water probe Lenovo Bulgaria, measured with errors in both variables. We apply our Orthonormal Polynomial Expansion Method (OPEM), based on Forsythe recurrence formula to describe the data in the new error corridor. The development of OPEM gives the approximating curves and their derivatives in optimal orthonormal and usual expansions including the errors in both variables with special criteria.
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation. PMID:19745859
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
NASA Technical Reports Server (NTRS)
Tai, Chang-Kou
1991-01-01
Formulas analogous to the frequency response functions for commonly used filters in orbit error removal are analytically derived to devise observational strategies for the large-scale oceanic variability and to decipher the signal contents of previous results. These include the polynomial orbit error approximations, i.e., the linear, bias-only and quadratic corrections, and the sinusoidal orbit error approximations (the purely sinusoidal correction, and the sinusoid-and-bias correction). It is shown that the frequency response function for a polynomial correction is a function of the ratio of wavelength/track length and to retain 90 percent or more of the signal at a certain wavelength, the ratio must be less than 0.65 (for the quadratic case), 0.90 (linear), and 1.54 (bias-only).
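The abstract's frequency-response idea can be checked numerically: fit the detrending polynomial to a sinusoid of a given wavelength/track-length ratio and measure how much of the signal survives. This is a sketch, not the paper's analytic formulas; the mean-square retention metric and phase averaging below are assumptions, so the numbers need not match the quoted 0.65/0.90/1.54 thresholds exactly.

```python
import numpy as np

def retained_fraction(ratio, degree, n=512, n_phase=64):
    """Fraction of a sinusoid's mean-square signal surviving least-squares
    polynomial detrending over one track, averaged over phase.
    `ratio` is wavelength / track length;
    degree 0 = bias-only, 1 = linear, 2 = quadratic correction."""
    x = np.linspace(0.0, 1.0, n)          # track length normalized to 1
    fracs = []
    for phase in np.linspace(0.0, 2 * np.pi, n_phase, endpoint=False):
        s = np.sin(2 * np.pi * x / ratio + phase)
        resid = s - np.polyval(np.polyfit(x, s, degree), x)
        fracs.append(np.mean(resid**2) / np.mean(s**2))
    return float(np.mean(fracs))
```

Consistent with the abstract, retention falls as the ratio grows, and at a fixed ratio the quadratic correction removes more signal than the bias-only correction.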
Nonlinear amplification of side-modes in frequency combs.
Probst, R A; Steinmetz, T; Wilken, T; Hundertmark, H; Stark, S P; Wong, G K L; Russell, P St J; Hänsch, T W; Holzwarth, R; Udem, Th
2013-05-20
We investigate how suppressed modes in frequency combs are modified upon frequency doubling and self-phase modulation. We find, both experimentally and by using a simplified model, that these side-modes are amplified relative to the principal comb modes. Whereas frequency doubling increases their relative strength by 6 dB, the growth due to self-phase modulation can be much stronger and generally increases with nonlinear propagation length. Upper limits for this effect are derived in this work. This behavior has implications for high-precision calibration of spectrographs with frequency combs used for example in astronomy. For this application, Fabry-Pérot filter cavities are used to increase the mode spacing to exceed the resolution of the spectrograph. Frequency conversion and/or spectral broadening after non-perfect filtering reamplify the suppressed modes, which can lead to calibration errors. PMID:23736390
Error and adjustment of reflecting prisms
NASA Astrophysics Data System (ADS)
Mao, Wenwei
1997-12-01
A manufacturing error in the orientation of the working planes of a reflecting prism, such as an angle error or an edge error, causes the optical axis to deviate and the image to lean; an adjustment (position) error of a reflecting prism has the same effect. A universal method is presented for calculating the optical axis deviation and the image lean caused by the manufacturing error of a reflecting prism; it applies to all types of reflecting prisms. A means of offsetting the position error against the manufacturing error of a reflecting prism and the changes of image orientation is discussed. To make the calculation feasible, a surface named the 'separating surface' is introduced just in front of the real exit face of the prism. It is the image of the entrance face formed by all reflecting surfaces of the real prism, and it separates the image orientation change caused by errors of the prism's reflecting surfaces from that caused by errors of the prism's refracting surfaces. Based on ray tracing, a set of simple and explicit formulas for the optical axis deviation and the image lean of a general optical wedge is derived.
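The geometric fact underlying such error analyses is that tilting a reflecting surface by a small angle ε deviates the reflected optical axis by 2ε. A minimal sketch using Householder reflection matrices (illustrative of the ray-tracing idea, not the paper's formula set):

```python
import numpy as np

def mirror_matrix(normal):
    """Householder reflection matrix for a plane mirror with the given normal."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    return np.eye(3) - 2.0 * np.outer(n, n)

eps = 1e-3                                    # 1 mrad tilt (manufacturing error)
ray = np.array([0.0, 0.0, 1.0])               # incident ray along +z

nominal = mirror_matrix([0.0, 0.0, 1.0]) @ ray            # ideal mirror
tilted = mirror_matrix([0.0, np.sin(eps), np.cos(eps)]) @ ray  # tilted mirror

# Angular deviation of the reflected axis is twice the mirror tilt
deviation = np.arccos(np.clip(nominal @ tilted, -1.0, 1.0))
```

Composing such matrices for each reflecting surface of a prism is what makes the deviation and image-lean formulas tractable for arbitrary prism types.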
Sequencing error correction without a reference genome
2013-01-01
Background: Next (second) generation sequencing is an increasingly important tool for many areas of molecular biology; however, care must be taken when interpreting its output. Even a low error rate can cause a large number of errors due to the high number of nucleotides being sequenced. Distinguishing sequencing errors from true biological variants is a challenging task. For organisms without a reference genome, this task is even more challenging. Results: We have developed a method for the correction of sequencing errors in data from the Illumina Solexa sequencing platforms. It does not require a reference genome and is of relevance for microRNA studies, unsequenced genomes, variant detection in ultra-deep sequencing and even for RNA-Seq studies of organisms with sequenced genomes where RNA editing is being considered. Conclusions: The derived error model is novel in that it allows different error probabilities for each position along the read, in conjunction with different error rates depending on the particular nucleotides involved in the substitution, and does not force these effects to behave in a multiplicative manner. The model provides error rates which capture the complex effects and interactions of the three main known causes of sequencing error associated with the Illumina platforms. PMID:24350580
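The idea of position- and substitution-dependent error rates can be illustrated with a hypothetical counting helper. This is not the paper's model: it assumes aligned (read, truth) pairs are available, and it simply tabulates the joint empirical rates, which, like the paper's model, avoids forcing position and substitution effects to combine multiplicatively.

```python
from collections import Counter

def error_rate_table(read_truth_pairs):
    """Empirical per-position, per-substitution error rates:
    rate[(pos, true_base, called_base)] = miscall count / coverage at (pos, true_base).
    Keeping position and substitution type in one joint table avoids a
    multiplicative position x substitution assumption."""
    miscalls = Counter()
    coverage = Counter()
    for read, truth in read_truth_pairs:
        for pos, (called, true) in enumerate(zip(read, truth)):
            coverage[(pos, true)] += 1
            if called != true:
                miscalls[(pos, true, called)] += 1
    return {k: miscalls[k] / coverage[(k[0], k[1])] for k in miscalls}

# Toy data: four reads against the same true sequence
pairs = [("ACGT", "ACGT"), ("ACGA", "ACGT"), ("ACGT", "ACGT"), ("TCGT", "ACGT")]
rates = error_rate_table(pairs)
# rates[(3, 'T', 'A')] == 0.25  (one T->A miscall at read position 3, coverage 4)
```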
Meteor radar signal processing and error analysis
NASA Astrophysics Data System (ADS)
Kang, Chunmei
Meteor wind radar systems are a powerful tool for studying the horizontal wind field in the mesosphere and lower thermosphere (MLT). While such systems have been operated for many years, virtually no literature has focused on radar system error analysis. Instrumental error may prevent scientists from drawing correct conclusions about geophysical variability. Radar system instrumental error comes from different sources, including hardware, software, and algorithms. Signal processing plays an important role in a radar system, and advanced signal processing algorithms may dramatically reduce radar system errors. In this dissertation, radar system error propagation is analyzed and several advanced signal processing algorithms are proposed to optimize the performance of the radar system without increasing instrument costs. The first part of this dissertation is the development of a time-frequency waveform detector, which is invariant to noise level and stable over a wide range of decay rates. This detector is proposed to discriminate underdense meteor echoes from the background white Gaussian noise. The performance of this detector is examined using Monte Carlo simulations. The resulting probability of detection is shown to outperform the often-used power and energy detectors for the same probability of false alarm. Secondly, estimators to determine the Doppler shift, the decay rate, and the direction of arrival (DOA) of meteors are proposed and evaluated. The performance of these estimators is compared with the analytically derived Cramer-Rao bound (CRB). The results show that the fast maximum likelihood (FML) estimator for determination of the Doppler shift and decay rate and the spatial spectral method for determination of the DOAs perform best among the estimators commonly used on other radar systems. For most cases, the mean square error (MSE) of the estimator meets the CRB above a 10 dB SNR. Thus meteor echoes with an estimated SNR below 10 dB are
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
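The building block of the approximation described above is the first-order (Gauss-)Markov process, whose power spectral density has a single corner frequency; summing several with staggered correlation times can shape a clock-like spectrum. A minimal simulation sketch (parameter values are illustrative, not from the paper):

```python
import numpy as np

def gauss_markov(n, tau, sigma, dt, rng):
    """Stationary first-order Gauss-Markov process: correlation time tau,
    steady-state standard deviation sigma, sample interval dt. A weighted sum
    of several such processes with different tau values can approximate the
    power spectral density implied by an oscillator's Allan variance."""
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)      # driving-noise std for stationarity
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = phi * x[k - 1] + q * rng.standard_normal()
    return x

rng = np.random.default_rng(2)
x = gauss_markov(200_000, tau=10.0, sigma=1.0, dt=1.0, rng=rng)
# Autocorrelation decays as exp(-lag/tau): ~ e^-1 at lag = tau
```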
Calculating the CEP (Circular Error Probable)
NASA Technical Reports Server (NTRS)
1987-01-01
This report compares the probability contained in the Circular Error Probable (CEP) associated with an Elliptical Error Probable (EEP) to that of the EEP at a given confidence level. The levels examined are 50 percent and 95 percent. The CEP is found to be either more or less conservative than the associated EEP, depending on the eccentricity of the ellipse. The formulas used are derived in the appendix.
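For intuition, the CEP of a bivariate normal error distribution can be estimated by Monte Carlo. This is a generic sketch of the CEP definition, not the report's closed-form derivation; in the circular case the exact value is sigma * sqrt(2 ln 2) ≈ 1.1774 * sigma.

```python
import numpy as np

def cep_radius(sigma_x, sigma_y, p=0.5, n=400_000, seed=1):
    """Monte Carlo circular error probable: the radius of the mean-centred
    circle containing probability p of a zero-mean bivariate normal with
    independent axes (the ellipse axes of the associated EEP)."""
    rng = np.random.default_rng(seed)
    miss = np.hypot(sigma_x * rng.standard_normal(n),
                    sigma_y * rng.standard_normal(n))
    return float(np.quantile(miss, p))
```

Varying the eccentricity (the sigma_y/sigma_x ratio) and comparing the circle's containment against the same-level ellipse reproduces the report's finding that neither figure is uniformly the more conservative one.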
Estimating diversity via frequency ratios.
Willis, Amy; Bunge, John
2015-12-01
We wish to estimate the total number of classes in a population based on sample counts, especially in the presence of high latent diversity. Drawing on probability theory that characterizes distributions on the integers by ratios of consecutive probabilities, we construct a nonlinear regression model for the ratios of consecutive frequency counts. This allows us to predict the unobserved count and hence estimate the total diversity. We believe that this is the first approach to depart from the classical mixed Poisson model in this problem. Our method is geometrically intuitive and yields good fits to data with reasonable standard errors. It is especially well-suited to analyzing high diversity datasets derived from next-generation sequencing in microbial ecology. We demonstrate the method's performance in this context and via simulation, and we present a dataset for which our method outperforms all competitors. PMID:26038228
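The ratio idea can be sketched in a deliberately simplified form. The authors' method fits a nonlinear regression to the ratios; the toy below instead assumes a single common Poisson abundance, under which (j+1)·f_{j+1}/f_j is constant and equal to the Poisson mean, so the unseen count f0 can be predicted as f1 divided by that constant. All names and the model choice are illustrative assumptions.

```python
import numpy as np

def unseen_classes_poisson(observed_counts, max_j=5):
    """Toy frequency-ratio estimator of the number of unobserved classes.
    observed_counts: per-class counts for the classes actually seen (>= 1).
    Under a common Poisson(lam) abundance model, (j+1)*f_{j+1}/f_j ~ lam
    for all j, so f0 ~ f1 / lam."""
    freqs = np.bincount(observed_counts)
    f = freqs[1:max_j + 2].astype(float)     # f1 .. f_{max_j+1}
    j = np.arange(1, len(f))
    ratios = (j + 1) * f[1:] / f[:-1]        # each ratio estimates lam
    lam = float(np.mean(ratios))
    return f[0] / lam

# Simulated population: 5000 classes, Poisson(2) sample counts
rng = np.random.default_rng(3)
counts = rng.poisson(2.0, 5000)
est_unseen = unseen_classes_poisson(counts[counts > 0])
true_unseen = int((counts == 0).sum())       # ~ 5000 * e^-2
```

The paper's nonlinear regression replaces the constant-ratio assumption with a flexible curve in j, which is what handles high latent diversity and mixed abundances.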
Preventing errors in laterality.
Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie
2015-04-01
An error in laterality is the reporting of a finding that is present on the right side as on the left, or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few published studies describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms in the report with the images open in PACS and to correct them before the report is finalized. The system was monitored every time an error in laterality was detected. The system detected 32 errors in laterality over a 7-month period (a rate of 0.0007%), with CT showing the highest error detection rate of all modalities. Significantly more errors were detected in male patients than in female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
Dandona, R.; Dandona, L.
2001-01-01
Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669
ERIC Educational Resources Information Center
Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.
2010-01-01
Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-01
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
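A minimal sketch of the two ingredients the abstract names, the Fisher direction and a leave-one-out error estimate. This is not the paper's derivation: the paper's point is a computationally efficient closed form, whereas this sketch refits for every held-out pattern; the data and function names are illustrative.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's direction w = Sw^-1 (m1 - m0) for two classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    return np.linalg.solve(Sw, m1 - m0)

def loo_error(X, y):
    """Brute-force leave-one-out error rate (the paper derives cheaper
    closed-form expressions that avoid refitting for each held-out pattern)."""
    errs = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xt, yt = X[mask], y[mask]
        w = fisher_direction(Xt[yt == 0], Xt[yt == 1])
        # simple threshold: midpoint of the projected class means
        t = 0.5 * ((Xt[yt == 0] @ w).mean() + (Xt[yt == 1] @ w).mean())
        errs += int(int(X[i] @ w > t) != y[i])
    return errs / len(X)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.r_[np.zeros(50, dtype=int), np.ones(50, dtype=int)]
print(loo_error(X, y))  # low for well-separated classes
```

The midpoint threshold stands in for the optimal threshold the paper derives.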
Error catastrophe in populations under similarity-essential recombination.
de Aguiar, Marcus A M; Schneider, David M; do Carmo, Eduardo; Campos, Paulo R A; Martins, Ayana B
2015-06-01
Organisms are often more likely to exchange genetic information with others that are similar to themselves. One of the most widely accepted mechanisms of RNA virus recombination requires substantial sequence similarity between the parental RNAs and is termed similarity-essential recombination. This mechanism may be considered analogous to assortative mating, an important form of non-random mating that can be found in animals and plants. Here we study the dynamics of haplotype frequencies in populations evolving under similarity-essential recombination. Haplotypes are represented by a genome of B biallelic loci, and the Hamming distance between individuals is used as the criterion for recombination. We derive the evolution equations for the haplotype frequencies assuming that recombination does not occur if the genetic distance is larger than a critical value G and that mutation occurs at a rate μ per locus. Additionally, uniform crossover is considered. Although no fitness is directly associated with the haplotypes, we show that frequency-dependent selection emerges dynamically and governs the haplotype distribution. A critical mutation rate μc can be identified as the error threshold transition, beyond which this selective information cannot be stored. For μ < μc the distribution consists of a dominant sequence surrounded by a cloud of closely related sequences, characterizing a quasispecies. For μ > μc the distribution becomes uniform, with all haplotypes having the same frequency. In the case of extreme assortativeness, where individuals only recombine with others identical to themselves (G = 0), the error threshold is μc = 1/4, independent of the genome size. For weak assortativity (G = B-1), μc = 2^{-(B+1)}, and in the case of no assortativity (G = B), μc = 0. We also compute the mutation threshold for 0 < G < B.
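A toy Monte Carlo version of the model's ingredients, not the paper's exact evolution equations: each offspring is produced by a randomly drawn pair that must be within Hamming distance G (here G = 0, the extreme-assortativeness case), with uniform crossover and per-locus mutation. Rejecting incompatible pairs is what makes the emergent frequency-dependent selection visible; population size, generation count, and mutation rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming(a, b):
    return int(np.sum(a != b))

def next_gen(pop, mu, G):
    """One generation: each offspring comes from a random pair that is
    compatible (Hamming distance <= G); uniform crossover, then mutation.
    Rejection of incompatible pairs is what lets selection for common
    haplotypes emerge even though no fitness is assigned."""
    N, B = pop.shape
    out = np.empty_like(pop)
    for k in range(N):
        while True:  # rejection-sample a compatible pair (i == j allowed)
            i, j = rng.integers(N, size=2)
            if hamming(pop[i], pop[j]) <= G:
                break
        mask = rng.integers(0, 2, B).astype(bool)           # uniform crossover
        child = np.where(mask, pop[i], pop[j])
        out[k] = np.logical_xor(child, rng.random(B) < mu)  # per-locus mutation
    return out

B, N = 6, 200
founder = np.zeros(B, dtype=int)
results = {}
# for G = 0 the abstract gives the threshold mu_c = 1/4
for mu, label in [(0.02, "below threshold"), (0.40, "above threshold")]:
    pop = np.tile(founder, (N, 1))
    for _ in range(40):
        pop = next_gen(pop, mu, G=0)
    results[label] = np.mean([hamming(x, founder) == 0 for x in pop])
    print(label, "founder fraction:", results[label])
```

Below the threshold the founder haplotype persists as a dominant sequence with a mutant cloud; above it the founder fraction collapses toward the uniform value 2^-B.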
Frequency synchronization of a frequency-hopped MFSK communication system
NASA Technical Reports Server (NTRS)
Huth, G. K.; Polydoros, A.; Simon, M. K.
1981-01-01
This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.
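For context, a sketch of the textbook bit-error baseline that degradations of this kind are measured against: noncoherent binary FSK under partial-band noise jamming (the paper treats MFSK with synchronization errors, which this does not reproduce). The numbers are illustrative.

```python
import numpy as np

def ber_bfsk(eb_over_n0_db):
    """Noncoherent binary FSK in AWGN: Pb = 0.5 * exp(-Eb / (2 N0))."""
    return 0.5 * np.exp(-10 ** (eb_over_n0_db / 10) / 2)

def ber_partial_band(eb_over_nj_db, rho, eb_over_n0_db=30.0):
    """Partial-band noise jamming: a fraction rho of the hop band is jammed,
    so jammed hops see jammer density N_J / rho on top of thermal noise N_0."""
    ebn0 = 10 ** (eb_over_n0_db / 10)
    ebnj = 10 ** (eb_over_nj_db / 10)
    g_jammed = 1.0 / (1.0 / ebn0 + 1.0 / (rho * ebnj))  # Eb / (N0 + NJ/rho)
    return (1 - rho) * 0.5 * np.exp(-ebn0 / 2) + rho * 0.5 * np.exp(-g_jammed / 2)

# a smart jammer picks the fraction rho that maximizes the average BER
ebnj_db = 13.0
rhos = np.linspace(0.01, 1.0, 500)
worst = max(ber_partial_band(ebnj_db, r) for r in rhos)
print(worst)  # close to the classical worst case e^-1 / (Eb/NJ)
```

Imperfect frequency synchronization would shift energy out of the detected tone, effectively lowering Eb in these expressions.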
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca
2015-09-01
Approximately 4 % of radiologic interpretation in daily practice contains errors, and discrepancies are reported in 2-20 % of reports. Fortunately, most are minor-degree errors or, if serious, are found and corrected with sufficient promptness; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and early in the learning curve, as during residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly with the treatment team.
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755
Reduction of Maintenance Error Through Focused Interventions
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)
1997-01-01
It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
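A small sketch of the forecast/analysis cycle the abstract describes, run to steady state for a linear time-independent system so the eigenvalue spectrum of the analysis error covariance can be inspected. The dynamics (a damped cyclic shift standing in for advection) and all matrix values are illustrative, and the bound-matrix machinery of the paper is not reproduced.

```python
import numpy as np

def steady_analysis_cov(M, H, Q, R, iters=500):
    """Iterate the Kalman forecast/analysis cycle to steady state:
    P_f = M P_a M^T + Q;  K = P_f H^T (H P_f H^T + R)^-1;  P_a = (I - K H) P_f."""
    n = M.shape[0]
    Pa = np.eye(n)
    for _ in range(iters):
        Pf = M @ Pa @ M.T + Q
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
        Pa = (np.eye(n) - K @ H) @ Pf
    return Pa

n = 20
M = 0.95 * np.roll(np.eye(n), 1, axis=0)  # damped cyclic advection
H = np.eye(n)[:5]                         # observe only the first 5 components
Q = 0.01 * np.eye(n)                      # model error covariance
R = 0.10 * np.eye(5)                      # observation error covariance
Pa = steady_analysis_cov(M, H, Q, R)
evals = np.sort(np.linalg.eigvalsh(Pa))[::-1]
print(evals[:5] / evals.sum())  # leading modes carry a disproportionate share
```

Unobserved components accumulate variance until the flow carries them into the observed window, so a handful of leading eigenvectors dominates the steady-state spectrum, which is the situation in which a low-dimensional representation is useful.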
NASA Technical Reports Server (NTRS)
1987-01-01
In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.
NASA Astrophysics Data System (ADS)
Gao, J.
2014-12-01
Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. The existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that the two metrics not only measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error, bias, variance, and noise all increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially explicit comparison of each error component showed that SQ error overstates all error components relative to ABS error, especially the variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
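A minimal Monte Carlo illustration of the SQ-error decomposition at a single evaluation point, E[(y - ŷ)²] = bias² + variance + noise, which is the additive structure the abstract contrasts with ABS error. The underfit linear model and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_f = np.sin              # stand-in for the true geophysical response
x0, sigma, trials = 2.5, 0.3, 3000

def fit_and_predict(x0, n=30, degree=1):
    """Fit a (deliberately underfit) linear model to one noisy sample, predict at x0."""
    xs = rng.uniform(0, np.pi, n)
    ys = true_f(xs) + rng.normal(0, sigma, n)
    return np.polyval(np.polyfit(xs, ys, degree), x0)

preds = np.array([fit_and_predict(x0) for _ in range(trials)])
y_obs = true_f(x0) + rng.normal(0, sigma, trials)   # fresh observations at x0

bias2 = (preds.mean() - true_f(x0)) ** 2   # systematic error
variance = preds.var()                     # model sensitivity to the sample
noise = sigma ** 2                         # observation instability
expected_sq_error = np.mean((y_obs - preds) ** 2)
print(bias2, variance, noise, expected_sq_error)
# under SQ error the three components are purely additive
```

Repeating this decomposition at every pixel of a map is what the paper's spatially explicit comparison amounts to; the ABS-error decomposition derived in the paper contains subtractive terms that this additive identity lacks.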
Error Detection Processes during Observational Learning
ERIC Educational Resources Information Center
Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.
2006-01-01
The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…
Laser frequency offset synthesizer
NASA Astrophysics Data System (ADS)
Lewis, D. A.; Evans, R. M.; Finn, M. A.
1985-01-01
A method is reported for locking the frequency difference of two lasers with an accuracy of 0.5 kHz or less over a one-second interval which is simple, stable, and relatively free from systematic errors. Two 633 nm He-Ne lasers are used, one with a fixed frequency and the other tunable. The beat frequency between the lasers is controlled by a voltage applied to a piezoelectric device which varies the cavity length of the tunable laser. This variable beat frequency, scaled by a computer-controlled modulus, is equivalent to a synthesizer. This approach eliminates the need for a separate external frequency synthesizer; furthermore, the phase detection process occurs at a relatively low frequency, making the required electronics simple and straightforward.
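A toy discrete-time model of the lock loop described above: the piezo voltage tunes the cavity, the beat note is divided by a computer-controlled modulus M, and an integral controller nulls the difference against a fixed low-frequency reference. All numerical values (tuning coefficient, gain, offsets) are illustrative, not the instrument's.

```python
# Toy model of the offset lock: the scaled beat frequency is servoed to a
# low-frequency reference, so the beat settles at M * f_ref.
f_fixed = 473_612_000.0      # kHz: fixed 633 nm He-Ne laser (illustrative value)
f_tunable = f_fixed + 800.0  # kHz: tunable laser starts 800 kHz away
k_piezo = 500.0              # kHz per volt of piezo tuning (illustrative)
M = 100                      # computer-controlled modulus
f_ref = 10.0                 # kHz reference; lock target is M * f_ref = 1000 kHz

v = 0.0                      # piezo control voltage
gain = 0.001
for _ in range(5000):
    beat = abs((f_tunable + k_piezo * v) - f_fixed)  # detected beat note
    err = beat / M - f_ref       # error measured after division by the modulus
    v -= gain * err              # integral control step on the piezo
print(beat)  # settles at the synthesized offset M * f_ref = 1000 kHz
```

Changing the modulus M retunes the locked offset without any external synthesizer, which is the point of the scheme; the division also moves the phase detection down to a low frequency where the electronics stay simple.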
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as its bias error distribution can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
Holroyd, Clay B; Yeung, Nick
2003-08-01
A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and
Chan, Adeline; Yan, Jun; Csurhes, Peter; Greer, Judith; McCombe, Pamela
2015-09-15
The aim of this study was to measure the levels of circulating BDNF and the frequency of BDNF-producing T cells after acute ischaemic stroke. Serum BDNF levels were measured by ELISA. Flow cytometry was used to enumerate peripheral blood leukocytes that were labelled with antibodies against markers of T cells, T regulatory cells (Tregs), and intracellular BDNF. There was a slight increase in serum BDNF levels after stroke. There was no overall difference between stroke patients and controls in the frequency of CD4(+) and CD8(+) BDNF(+) cells, although a subgroup of stroke patients showed high frequencies of these cells. However, there was an increase in the percentage of BDNF(+) Treg cells in the CD4(+) population in stroke patients compared to controls. Patients with high percentages of CD4(+) BDNF(+) Treg cells had a better outcome at 6 months than those with lower levels. These groups did not differ in age, gender or initial stroke severity. Enhancement of BDNF production after stroke could be a useful means of improving neuroprotection and recovery after stroke.
Hosein, Mervyn; Mohiuddin, Sidra; Fatima, Nazish
2015-01-01
Background: Oral submucous fibrosis (OSMF) is a chronic, premalignant condition of the oral mucosa and one of the commonest potentially malignant disorders amongst the Asian population. The objective of this study was to investigate the association of etiologic factors with: age, frequency, duration of consumption of areca nut and its derivatives, and the severity of clinical manifestations. Methods: A cross-sectional, multicentric study was conducted over 8 years on clinically diagnosed OSMF cases (n = 765) from both public and private tertiary care centers. Sample size was determined by the World Health Organization sample size calculator. Consumption of areca nut in different forms, frequency of daily usage, years of chewing, degree of mouth opening and duration of the condition were recorded. The level of significance was kept at P ≤ 0.05. Results: A total of 765 patients with OSMF were examined, of whom 396 (51.8%) were male and 369 (48.2%) female, with a mean age of 29.17 years. Mild OSMF was seen in 61 cases (8.0%), moderate OSMF in 353 (46.1%) and severe OSMF in 417 (54.5%) subjects. Areca nut and other derivatives were most frequently consumed and showed a significant association with the severity of OSMF (P ≤ 0.0001). Age of the sample and duration of chewing years were also significant (P = 0.012). Conclusions: The relative risk of OSMF increased with the duration and frequency of areca nut consumption, especially from an early age of onset. PMID:26473161
Tsvetkov, D.Y.
1983-01-01
Estimates of the frequency of type I and II supernovae occurring in galaxies of different types are derived from observational material acquired by the supernova patrol of the Shternberg Astronomical Institute.
Gear Transmission Error Measurement System Made Operational
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2002-01-01
A system that directly measures the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 μm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.
Surface errors in the course of machining precision optics
NASA Astrophysics Data System (ADS)
Biskup, H.; Haberl, A.; Rascher, R.
2015-08-01
Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The required surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-spatial-frequency errors (MSFE) can accumulate in such zonal processes. This work examines the formation of surface errors from grinding to polishing by analyzing the surfaces at each machining step with non-contact interferometric methods. The errors on the surface can be classified as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed so that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already be formed in the early machining steps like grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools whose lateral size is below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Error awareness as evidence accumulation: effects of speed-accuracy trade-off on error signaling
Steinhauser, Marco; Yeung, Nick
2012-01-01
Errors in choice tasks have been shown to elicit a cascade of characteristic components in the human event-related potential (ERP)—the error-related negativity (Ne/ERN) and the error positivity (Pe). Despite the large number of studies concerned with these components, it is still unclear how they relate to error awareness as measured by overt error signaling responses. In the present study, we considered error awareness as a decision process in which evidence for an error is accumulated until a decision criterion is reached, and hypothesized that the Pe is a correlate of the accumulated decision evidence. To test the prediction that the amplitude of the Pe varies as a function of the strength and latency of the accumulated evidence for an error, we manipulated the speed-accuracy trade-off (SAT) in a brightness discrimination task while participants signaled the occurrence of errors. Based on a previous modeling study, we predicted that lower speed pressure should be associated with weaker evidence for an error and, thus, with smaller Pe amplitudes. As predicted, average Pe amplitude was decreased and error signaling was impaired in a low speed pressure condition compared to a high speed pressure condition. In further analyses, we derived single-trial Pe amplitudes using a logistic regression approach. Single-trial amplitudes robustly predicted the occurrence of signaling responses on a trial-by-trial basis. These results confirm the predictions of the evidence accumulation account, supporting the notion that the Pe reflects accumulated evidence for an error and that this evidence drives the emergence of error awareness. PMID:22905027
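A sketch of the single-trial logistic regression idea: predicting whether an error is overtly signaled from the trial's Pe amplitude. The data here are synthetic (amplitudes and effect size are invented for illustration), and the fit uses plain gradient descent rather than the study's EEG-specific estimation pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic single-trial data: signaled errors tend to carry larger Pe
# amplitudes (all numbers illustrative, not from the study)
n = 400
signaled = rng.integers(0, 2, n)              # 1 = participant signaled the error
pe = rng.normal(2.0 + 3.0 * signaled, 2.0)    # single-trial Pe amplitude (a.u.)

def fit_logistic(x, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression: P(signaled) = sigmoid(a + b*x)."""
    a, b = 0.0, 0.0
    x = (x - x.mean()) / x.std()              # standardize for stable steps
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        a -= lr * np.mean(p - y)              # gradient of the mean log-loss
        b -= lr * np.mean((p - y) * x)
    return a, b, x

a, b, xs = fit_logistic(pe, signaled)
p = 1.0 / (1.0 + np.exp(-(a + b * xs)))
acc = np.mean((p > 0.5) == signaled)
print(b, acc)  # positive slope: larger Pe predicts overt error signaling
```

A reliably positive slope on held-out trials is the kind of evidence the abstract summarizes as amplitudes "robustly predicting" signaling responses.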
Human error in aviation operations
NASA Technical Reports Server (NTRS)
Nagel, David C.
1988-01-01
The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.
Errata: Papers in Error Analysis.
ERIC Educational Resources Information Center
Svartvik, Jan, Ed.
Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…
SMOS SSS uncertainties associated with errors on auxiliary parameters
NASA Astrophysics Data System (ADS)
Yin, Xiaobin; Boutin, Jacqueline; Dinnat, Emmanuel; Martin, Nicolas; Guimbard, Sebastien
2014-05-01
The European Soil Moisture and Ocean Salinity (SMOS) mission, aimed at observing sea surface salinity (SSS) from space, was launched in November 2009. The L-band frequency (1413 MHz) was chosen as a tradeoff between sufficient sensitivity of radiometric measurements to changes in salinity, high sensitivity to soil moisture, and spatial resolution constraints. It is also a band protected against human-made emissions. But even at this frequency, the sensitivity of brightness temperature (TB) to SSS remains low, requiring accurate correction for other sources of error. Two significant sources of error for retrieved SSS are the uncertainties in the corrections for surface roughness and sea surface temperature (SST). One main geophysical source of error in the retrieval of SSS from L-band TB comes from the need to correct for the effect of surface roughness and foam. In the SMOS processing, the wind speed (WS) provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) is used to initialize the retrieval of WS and SSS. This process compensates for the lack of an onboard instrument providing a measure of ocean surface WS independent of the L-band radiometer measurements. Using multi-angular polarimetric SMOS TBs, it is possible to adjust the WS from the initial value in the center of the swath (within ±300 km) by taking advantage of the different sensitivities of L-band H-pol and V-pol TBs to WS and SSS at various incidence angles. As a consequence, the inconsistencies between the MIRAS-sensed roughness and the roughness simulated with the ECMWF WS are reduced by the retrieval scheme, but they still lead to residual biases in the SMOS SSS. We have developed an alternative two-step method for retrieving WS from SMOS TB, with a larger error on the prior ECMWF wind speed in a first step. We show that although it improves SSS in some areas characterized by large currents, it is more sensitive to SMOS TB errors in the
Laplace approximation in measurement error models.
Battauz, Michela
2011-05-01
Likelihood analysis for regression models with measurement errors in explanatory variables typically involves integrals that do not have a closed-form solution. In this case, numerical methods such as Gaussian quadrature are generally employed. However, when the dimension of the integral is large, these methods become computationally demanding or even unfeasible. This paper proposes the use of the Laplace approximation to deal with measurement error problems when the likelihood function involves high-dimensional integrals. The cases considered are generalized linear models with multiple covariates measured with error and generalized linear mixed models with measurement error in the covariates. The asymptotic order of the approximation and the asymptotic properties of the Laplace-based estimator for these models are derived. The method is illustrated using simulations and real-data analysis.
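As a one-dimensional illustration of the idea (not the paper's regression models), the following sketch applies the Laplace approximation to the Gamma-function integral n! = ∫ xⁿ e⁻ˣ dx, whose mode and curvature are available in closed form:

```python
import math

def laplace_gamma(n):
    # Integrand x**n * exp(-x) = exp(h(x)) with h(x) = n*log(x) - x.
    # Mode: h'(x) = n/x - 1 = 0  ->  x0 = n;  curvature: h''(x0) = -1/n.
    # Laplace: integral ~ exp(h(x0)) * sqrt(2*pi / |h''(x0)|)
    return math.exp(n * math.log(n) - n) * math.sqrt(2.0 * math.pi * n)

approx = laplace_gamma(20)
exact = float(math.factorial(20))
rel_err = abs(approx - exact) / exact   # Stirling-type error, roughly 1/(12n)
```

The relative error shrinks as the "sample size" n grows, which is the asymptotic order the paper derives for the general case.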
Protecting weak measurements against systematic errors
NASA Astrophysics Data System (ADS)
Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.
2016-07-01
In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.
A broad-band flux scale for low-frequency radio telescopes
NASA Astrophysics Data System (ADS)
Scaife, Anna M. M.; Heald, George H.
2012-06-01
We present parametrized broad-band spectral models valid at frequencies between 30 and 300 MHz for six bright radio sources selected from the 3C survey, spread in right ascension from 0 to 24 h. For each source, data from the literature are compiled and tied to a common flux density scale. These data are then used to parametrize an analytic polynomial spectral calibration model. The optimal polynomial order in each case is determined using the ratio of the Bayesian evidence for the candidate models. Maximum likelihood parameter values for each model are presented, with associated errors, and the percentage error in each model as a function of frequency is derived. These spectral models are intended as an initial reference for science from the new generation of low-frequency telescopes now coming online, with particular emphasis on the Low Frequency Array (LOFAR).
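The simplest instance of such a parametrized spectral model is a first-order polynomial in log-log space, i.e. a power law. A minimal sketch with made-up flux densities (the 150 MHz reference frequency and the numbers are illustrative, not the paper's values):

```python
import math

def fit_power_law(freqs_mhz, fluxes_jy, ref=150.0):
    # Linear least squares in log-log space:
    # log10 S = log10 A + alpha * log10(nu / ref)
    xs = [math.log10(f / ref) for f in freqs_mhz]
    ys = [math.log10(s) for s in fluxes_jy]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return 10.0 ** (my - alpha * mx), alpha

freqs = [30.0, 74.0, 150.0, 240.0, 300.0]             # MHz
fluxes = [66.2 * (f / 150.0) ** -0.8 for f in freqs]  # exact power law, illustrative
a, alpha = fit_power_law(freqs, fluxes)
```

Higher polynomial orders add curvature terms in log10(nu/ref); the paper selects among such orders via Bayesian evidence ratios.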
Some Surprising Errors in Numerical Differentiation
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2012-01-01
Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
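The flavor of these patterns can be reproduced directly: for the Newton difference-quotient applied to sin x, the leading Taylor term predicts the error almost exactly. A minimal sketch:

```python
import math

def forward_diff(f, x, h):
    # Newton difference-quotient approximation to f'(x)
    return (f(x + h) - f(x)) / h

x, h = 1.2, 1e-4
err = forward_diff(math.sin, x, h) - math.cos(x)

# leading term of the Taylor expansion of the error: -(h/2) * sin(x)
predicted = -(h / 2.0) * math.sin(x)
```

The measured error matches the predicted leading term to a fraction of a percent, which is exactly the kind of pattern the article explains with Taylor polynomials.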
Analyzing human errors in flight mission operations
NASA Technical Reports Server (NTRS)
Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef
1993-01-01
A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
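A Pareto analysis of the kind described, ranking error categories by frequency with cumulative percentages, can be sketched as follows (the category labels and counts are illustrative, chosen so that human errors account for 38 percent):

```python
from collections import Counter

def pareto(categories):
    # rank categories by frequency with per-category and cumulative percentages
    counts = Counter(categories)
    total = sum(counts.values())
    cum, table = 0, []
    for cat, n in counts.most_common():
        cum += n
        table.append((cat, n, 100.0 * n / total, 100.0 * cum / total))
    return table

# illustrative report classifications (not the actual ISA categories)
reports = ["human"] * 38 + ["software"] * 25 + ["hardware"] * 20 + ["procedure"] * 17
table = pareto(reports)
```

Reading the cumulative column off such a table is what identifies the "vital few" categories that dominate the defect population.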
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives: We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design: We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application: Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions: In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
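Two of the three methods can be sketched compactly for a toy function g(b) = exp(b) of a single estimated parameter (the estimate and its standard error below are hypothetical, not from the paper's dataset):

```python
import math
import random
import statistics

# Delta method: se(g(b)) ~ |g'(b)| * se(b); here g(b) = exp(b), so g'(b) = exp(b).
b_hat, se_b = 0.5, 0.1          # hypothetical estimate and standard error
se_delta = math.exp(b_hat) * se_b

# Krinsky-Robb: draw parameters from their estimated sampling distribution
# and take the standard deviation of g over the draws.
random.seed(0)
draws = [math.exp(random.gauss(b_hat, se_b)) for _ in range(20000)]
se_kr = statistics.stdev(draws)
```

With a well-behaved g the two answers agree closely; bootstrapping differs only in that the draws come from resampled data rather than from the estimated parameter distribution.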
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
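The burst and gap statistics described in the first objective reduce to run lengths over a stream of per-byte error flags. A minimal sketch, assuming a flag of 1 marks a byte in error:

```python
from itertools import groupby

def burst_gap_stats(flags):
    # flags: 1 = byte in error, 0 = good byte; collect run lengths of each kind
    bursts, gaps = [], []
    for val, run in groupby(flags):
        (bursts if val else gaps).append(sum(1 for _ in run))
    return bursts, gaps

# a hypothetical stream of per-byte error flags from the read channel
flags = [0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1]
bursts, gaps = burst_gap_stats(flags)
```

Histograms of these run lengths are the raw material for the output-error-rate model the project describes.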
Lu, Xiaowei; Tsukune, Mariko; Watanabe, Hiroki; Yamazaki, Nozomu; Isobe, Yosuke; Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, Masakatsu G
2012-01-01
Recently radiofrequency (RF) ablation has become increasingly important in treating liver cancers. RF ablation is ordinarily conducted using elastographic imaging to monitor the ablation procedure, and the temperature of the electrode needle is displayed on the RF generator. However, the coagulation boundary of liver tissue during RF ablation is unclear and difficult to determine with confidence. This can lead to both excessive and insufficient RF ablation, thereby diminishing the advantages of the procedure. In the present study, we developed a method for determining the coagulation boundary of liver tissue in RF ablation. To investigate this boundary we used the mechanical characteristics of biochemical components as an indicator of coagulation to produce a relational model for viscoelasticity and temperature. This paper presents the data acquisition procedures for the viscoelasticity characteristics and the analytical method used for the coagulation model. We employed a rheometer to measure the viscoelastic characteristics of liver tissue. To determine the model functional relationship between viscoelasticity and temperature, we used a least-squares method, and the root mean square error was minimized to optimize the model functional relations. The functional relation between temperature and viscoelasticity was linear and non-linear in different temperature regions. The boundary between the linear and non-linear functional relation was 58.0°C. PMID:23365863
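The boundary between the linear and non-linear regions can be estimated by fitting two segments and minimizing the root mean square error over candidate boundaries, in the spirit of the paper's least-squares optimization. A sketch on synthetic piecewise-linear data with a true boundary at 58 °C (the data, slopes and units are illustrative):

```python
def linfit(xs, ys):
    # ordinary least-squares intercept and slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rmse_for_break(xs, ys, t):
    left = [(x, y) for x, y in zip(xs, ys) if x <= t]
    right = [(x, y) for x, y in zip(xs, ys) if x > t]
    if len(left) < 2 or len(right) < 2:
        return float("inf")
    sse = 0.0
    for seg in (left, right):
        a, b = linfit([p[0] for p in seg], [p[1] for p in seg])
        sse += sum((y - (a + b * x)) ** 2 for x, y in seg)
    return (sse / len(xs)) ** 0.5

# synthetic viscoelasticity data: shallow linear trend up to 58 C, steep beyond
temps = [float(t) for t in range(40, 76, 2)]
visc = [10.0 + 0.2 * t if t <= 58 else 10.0 + 0.2 * 58 + 2.5 * (t - 58)
        for t in temps]

# exact piecewise data fit perfectly for any split between the samples that
# bracket the kink, so the recovered boundary lands within a sample or two of 58
best = min(range(50, 70), key=lambda t: rmse_for_break(temps, visc, t))
```

On noisy measurements the RMSE curve has a single clear minimum, which is what identifies the coagulation boundary.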
Filter induced errors in laser anemometer measurements using counter processors
NASA Technical Reports Server (NTRS)
Oberle, L. G.; Seasholtz, R. G.
1985-01-01
Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter types and filter cutoff frequencies. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the frequency of the input signal which leads to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated and filters for a specific application which will reduce these errors are chosen.
Dialogues on prediction errors.
Niv, Yael; Schoenbaum, Geoffrey
2008-07-01
The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.
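The central tenet, that the discrepancy between actual and expected outcomes drives learning, is captured by the temporal-difference update. A minimal sketch (the learning rate and discount factor are arbitrary choices):

```python
def td_update(v, reward, v_next, alpha=0.1, gamma=0.9):
    # delta is the prediction error: actual outcome vs. current expectation
    delta = reward + gamma * v_next - v
    return v + alpha * delta, delta

# a state repeatedly followed by reward 1.0 and a terminal state (v_next = 0):
v = 0.0
for _ in range(200):
    v, delta = td_update(v, 1.0, 0.0)
# prediction errors shrink as the value estimate converges to the true value 1.0
```

The shrinking delta over repeated trials is the signature behavior attributed to midbrain dopaminergic neurons in these models.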
Experimental Quantum Error Detection
Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi
2012-01-01
Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047
Diagnostic Errors and Laboratory Medicine - Causes and Strategies.
Plebani, Mario
2015-01-01
While the frequency of laboratory errors varies greatly, depending on the study design and steps of the total testing process (TTP) investigated, a series of papers published in the last two decades drew the attention of laboratory professionals to the pre- and post-analytical phases, which currently appear to be more vulnerable to errors than the analytical phase. In particular, a high frequency of errors and risk of errors that could harm patients has been described in both the pre-pre- and post-post-analytical steps of the cycle that usually are not under the laboratory control. In 2008, the release of a Technical Specification (ISO/TS 22367) by the International Organization for Standardization played a key role in collecting the evidence and changing the perspective on laboratory errors, emphasizing the need for a patient-centred approach to errors in laboratory testing.
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization
LaMar, E; Hamann, B; Joy, K I
2001-10-16
Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
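The acceleration described, tabulating unique (original, approximating) value pairs with their frequencies instead of evaluating the error function per voxel, can be sketched as follows for byte data:

```python
from collections import Counter

def error_from_table(original, approx, err_fn):
    # tabulate unique (original, approximating) voxel-value pairs once ...
    pairs = Counter(zip(original, approx))
    # ... then sum err(pair) weighted by each pair's frequency
    return sum(err_fn(o, a) * n for (o, a), n in pairs.items())

orig = [10, 10, 10, 200, 200, 37]
appr = [12, 12, 12, 198, 198, 37]
sq = lambda o, a: (o - a) ** 2

total = error_from_table(orig, appr, sq)            # 3 unique pairs, not 6 evaluations
direct = sum(sq(o, a) for o, a in zip(orig, appr))  # per-voxel evaluation for comparison
```

For byte data there are at most 256 × 256 unique pairs regardless of volume size, so re-evaluating the error under a new transfer function touches the small table, not the full data set.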
Duan, Yanmin; Zhu, Haiyong; Zhang, Yaoju; Zhang, Ge; Zhang, Jian; Tang, Dingyuan; Kaminskii, A. A.
2016-01-01
An intra-cavity RbTiOPO4 (RTP) cascade Raman laser was demonstrated for efficient multi-order Stokes emission. An acousto-optic Q-switched Nd:YAlO3 laser at 1.08 μm was used as the pump source and a 20-mm-long x-cut RTP crystal was used as the Raman medium to meet the X(Z,Z)X Raman configuration. Multi-order Stokes with multiple Raman shifts (~271, ~559 and ~687 cm−1) were achieved in the output. Under an incident pump power of 9.5 W, a total average output power of 580 mW with a pulse repetition frequency of 10 kHz was obtained. The optical conversion efficiency is 6.1%. The results show that the RTP crystal can enrich laser spectral lines and generate high order Stokes light. PMID:27666829
NASA Technical Reports Server (NTRS)
1985-01-01
A mathematical theory for the development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women involved in the project formed Higher Order Software, Inc., to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES: a user can write in English, and the system converts to computer languages. It is employed by several large corporations.
A nonmystical treatment of tape speed compensation for frequency modulated signals
NASA Astrophysics Data System (ADS)
Solomon, O. M., Jr.
After briefly reviewing frequency modulation and demodulation, tape speed variation is modeled as a distortion of the independent variable of a frequency-modulated signal. This distortion gives rise to an additive amplitude error in the demodulated message, which comprises two terms. Both terms depend on the derivative of time base error, that is, the flutter of the analog tape machine. It is pointed out that the first term depends on the channel's center frequency and frequency deviation constant, as well as on the flutter, and that the second depends solely on the message and flutter. A description is given of the relationship between the additive amplitude error and manufacturer's flutter specification. For the case of a constant message, relative errors and signal-to-noise ratios are discussed to provide insight into when the variation in tape speed will cause significant errors. An algorithm is then developed which theoretically achieves full compensation of tape speed variation. After being confirmed via spectral computations on laboratory data, the algorithm is applied to field data.
Measuring Cyclic Error in Laser Heterodyne Interferometers
NASA Technical Reports Server (NTRS)
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
Optical linear algebra processors: noise and error-source modeling.
Casasent, D; Ghosh, A
1985-06-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
Parental Reports of Children's Scale Errors in Everyday Life
ERIC Educational Resources Information Center
Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.
2009-01-01
Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…
Optical linear algebra processors - Noise and error-source modeling
NASA Technical Reports Server (NTRS)
Casasent, D.; Ghosh, A.
1985-01-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
Speech Errors in Progressive Non-Fluent Aphasia
ERIC Educational Resources Information Center
Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray
2010-01-01
The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…
Friesdorf, Wolfgang; Marsolek, Ingo
2008-01-01
Medical devices define our everyday patient treatment processes. But despite their beneficial effects, every use can also lead to harm. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in complex work processes like those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical work places represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is not only a lack of holistic work place concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model--a key to a safe and efficient healthcare system of the future. PMID:19213452
ERIC Educational Resources Information Center
Julian, Liam
2009-01-01
In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
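The regression step, removing the temperature dependence from the power-difference errors and comparing the residual variance, can be sketched on synthetic data (the coefficients and noise level are made up, not the flight-test values):

```python
import random
import statistics

def linfit(xs, ys):
    # ordinary least-squares intercept and slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

random.seed(1)
temps = [random.uniform(-10.0, 30.0) for _ in range(200)]   # air temperature, deg C
power_err = [0.5 + 0.03 * t + random.gauss(0.0, 0.2)        # synthetic power difference
             for t in temps]

a, b = linfit(temps, power_err)
resid = [y - (a + b * x) for x, y in zip(temps, power_err)]

var_before = statistics.pvariance(power_err)
var_after = statistics.pvariance(resid)   # variance left after temperature correction
```

A large drop in residual variance, as here, is the signature that a measurable atmospheric variable explains part of the error, which is what motivates the recommended recalibration.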
A Numerical Approach for Computing Standard Errors of Linear Equating.
ERIC Educational Resources Information Center
Zeng, Lingjia
1993-01-01
A numerical approach for computing standard errors (SEs) of a linear equating is described in which first partial derivatives of equating functions needed to compute SEs are derived numerically. Numerical and analytical approaches are compared using the Tucker equating method. SEs derived numerically are found indistinguishable from SEs derived…
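The numerical approach, differencing the equating function to obtain the partial derivatives needed for delta-method standard errors, can be sketched as follows (the equating coefficients and covariance matrix are hypothetical):

```python
import math

def numeric_grad(f, params, h=1e-6):
    # central differences for the first partial derivatives of f
    g = []
    for i in range(len(params)):
        up = list(params); up[i] += h
        dn = list(params); dn[i] -= h
        g.append((f(up) - f(dn)) / (2.0 * h))
    return g

def delta_se(f, params, cov, h=1e-6):
    # delta-method SE: sqrt(g' C g) with the gradient g obtained numerically
    g = numeric_grad(f, params, h)
    var = sum(g[i] * cov[i][j] * g[j]
              for i in range(len(g)) for j in range(len(g)))
    return math.sqrt(var)

# linear equating y = a + b*x evaluated at x = 10
equate = lambda p: p[0] + p[1] * 10.0
cov = [[0.04, 0.001], [0.001, 0.0009]]   # hypothetical covariance of (a, b)
se = delta_se(equate, [2.0, 1.1], cov)
```

Because the equating function here is linear in its parameters, the numerical derivatives match the analytical ones essentially exactly, mirroring the paper's finding that the two routes are indistinguishable.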
Challenge and Error: Critical Events and Attention-Related Errors
ERIC Educational Resources Information Center
Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel
2011-01-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…
Ezgu, Fatih
2016-01-01
Inborn errors of metabolism are single gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, although autosomal dominant and X-linked disorders are also present. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the onset of the clinical picture may vary from the newborn period up until adulthood. Hundreds of disorders have been described to date, and there is considerable clinical overlap between certain inborn errors. As a result, the definite diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially during recent years, significant achievements have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These achievements have also enabled newborn screening and prenatal diagnosis. Parallel to the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation are widely used. It is clear that, with the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will most likely become available.
Derivation of a Molecular Mechanics Force Field for Cholesterol
Cournia, Zoe; Vaiana, Andrea C.; Smith, Jeremy C.; Ullmann, G. Matthias M.
2004-01-01
As a necessary step toward realistic cholesterol:biomembrane simulations, we have derived CHARMM molecular mechanics force-field parameters for cholesterol. For the parametrization we use an automated method that involves fitting the molecular mechanics potential to both vibrational frequencies and eigenvector projections derived from quantum chemical calculations. Results for another polycyclic molecule, rhodamine 6G, are also given. The usefulness of the method is thus demonstrated by the use of reference data from two molecules at different levels of theory. The frequency-matching plots for both cholesterol and rhodamine 6G show overall agreement between the CHARMM and quantum chemical normal modes, with frequency matching for both molecules within the error range found in previous benchmark studies.
NASA Astrophysics Data System (ADS)
Raj, A. Dennis; Jeeva, M.; Shankar, M.; Purusothaman, R.; Prabhu, G. Venkatesa; Potheher, I. Vetha
2016-11-01
The 2-naphthol-derived Mannich base 1-((4-methylpiperazin-1-yl)(phenyl)methyl)naphthalen-2-ol (MPN), a nonlinear optical single crystal, was synthesized and successfully grown by the slow evaporation technique at room temperature. The molecular structure was confirmed by single crystal XRD, FT-IR, 1H NMR and 13C NMR spectral studies. The single crystal X-ray diffraction analysis reveals that the crystal belongs to the orthorhombic crystal system with the non-centrosymmetric space group Pna21. The chemical shift of 5.34 ppm (singlet methine CH proton) in 1H NMR and the signal for the CH carbon around δ 70.169 ppm in 13C NMR confirm the formation of the title compound. The crystal growth pattern and dislocations of the crystal were analyzed using the chemical etching technique. The UV cutoff wavelength of the material was found to be 212 nm. The second harmonic generation (SHG) efficiency of MPN was determined by the Kurtz-Perry powder technique and is almost equal to that of the standard KDP crystal. The laser damage threshold was measured by passing an Nd:YAG laser beam through the sample and was found to be 1.1974 GW/cm2. The material is thermally stable up to 142 °C. The relationship between the molecular structure and the optical properties was also studied from quantum chemical calculations using Density Functional Theory (DFT) and is reported for the first time.
McCullen, Seth D; McQuilling, John P; Grossfeld, Robert M; Lubischer, Jane L; Clarke, Laura I; Loboa, Elizabeth G
2010-12-01
Electric stimulation is known to initiate signaling pathways and provides a technique to enhance osteogenic differentiation of stem and/or progenitor cells. There are a variety of in vitro stimulation devices to apply electric fields to such cells. Herein, we describe and highlight the use of interdigitated electrodes to characterize signaling pathways and the effect of electric fields on the proliferation and osteogenic differentiation of human adipose-derived stem cells (hASCs). The advantage of the interdigitated electrode configuration is that cells can be easily imaged during short-term (acute) stimulation, and this identical configuration can be utilized for long-term (chronic) studies. Acute exposure of hASCs to alternating current (AC) sinusoidal electric fields of 1 Hz induced a dose-dependent increase in cytoplasmic calcium in response to electric field magnitude, as observed by fluorescence microscopy. hASCs that were chronically exposed to AC electric field treatment of 1 V/cm (4 h/day for 14 days, cultured in the osteogenic differentiation medium containing dexamethasone, ascorbic acid, and β-glycerol phosphate) displayed a significant increase in mineral deposition relative to unstimulated controls. This is the first study to evaluate the effects of sinusoidal AC electric fields on hASCs and to demonstrate that acute and chronic electric field exposure can significantly increase intracellular calcium signaling and the deposition of accreted calcium under osteogenic stimulation, respectively.
Errors and mistakes in breast ultrasound diagnostics.
Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz
2012-09-01
Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high-frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are unavoidable and those that can potentially be reduced. This article presents the most frequent errors in ultrasound, including those caused by artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper settings of overall gain or of the time-gain-compensation curve or range. Examiner-dependent errors, which result in an incorrect BIRADS-usg classification, are divided into false-negative and false-positive errors, and their sources are listed. Methods for minimizing the number of errors are discussed, including proper examination technique, taking data from the case history into account, and using as many additional options as possible, such as harmonic imaging, color and power Doppler, and elastography. Examples are presented both of errors resulting from the technical conditions of the method and of examiner-dependent errors related to the great diversity and variability of ultrasound images of pathological breast lesions.
Quantifying truncation errors in effective field theory
NASA Astrophysics Data System (ADS)
Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.
2015-08-01
Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Meißner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.
Software errors and complexity: An empirical investigation
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Perricone, Berry T.
1983-01-01
The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.
Tropical errors and convection
NASA Astrophysics Data System (ADS)
Bechtold, P.; Bauer, P.; Engelen, R. J.
2012-12-01
Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, the analysis gives more weight to the underlying forecast model than it does in the middle latitudes. Therefore, special attention is paid to available near-surface observations and to comparisons with analyses from other centres. There is a systematic lack of low-level wind convergence in the Intertropical Convergence Zone (ITCZ) in the IFS, leading to a spin-down of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean, with large interannual variations in forecast errors, and the East Pacific, with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represent the sensitivity of convection to environmental moisture, and to the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian monsoon bias and for too
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
Speech Errors across the Lifespan
ERIC Educational Resources Information Center
Vousden, Janet I.; Maylor, Elizabeth A.
2006-01-01
Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…
Control by model error estimation
NASA Technical Reports Server (NTRS)
Likins, P. W.; Skelton, R. E.
1976-01-01
Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).
Marking Errors: A Simple Strategy
ERIC Educational Resources Information Center
Timmons, Theresa Cullen
1987-01-01
Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
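The interval approach can be sketched with a minimal interval type: each measured quantity is represented by the range it could occupy, and arithmetic on ranges yields guaranteed bounds on the result. This is a dependency-free stand-in for a real interval package such as INTLAB (which also handles division, outward rounding, and the dependency problem):

```python
class Interval:
    """Minimal interval type for automatic error analysis (illustrative only)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        # Endpoint products cover all sign combinations
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

# Propagate measurement uncertainty through f(x, y) = x*y - x
x = Interval(2.0 - 0.01, 2.0 + 0.01)  # x = 2.00 +/- 0.01
y = Interval(3.0 - 0.02, 3.0 + 0.02)  # y = 3.00 +/- 0.02
print(x * y - x)                      # → [3.9202, 4.0802]
```

Unlike first-order propagation, the interval result is a rigorous enclosure, at the cost of some overpessimism when a variable appears more than once in the formula.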
Neural Correlates of Reach Errors
Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza
2005-01-01
Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440
The Insufficiency of Error Analysis
ERIC Educational Resources Information Center
Hammarberg, B.
1974-01-01
The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…
A Simple Approach to Experimental Errors
ERIC Educational Resources Information Center
Phillips, M. D.
1972-01-01
Classifies experimental error into two main groups: systematic error (instrument, personal, inherent, and variational errors) and random errors (reading and setting errors) and presents mathematical treatments for the determination of random errors. (PR)
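For the random-error side, the standard mathematical treatment is the mean of repeated readings together with the standard error of that mean. A minimal sketch (the helper name is ours, not from the article):

```python
import math

def random_error_summary(readings):
    """Mean and standard error of the mean for repeated readings,
    the usual treatment of random reading/setting errors."""
    n = len(readings)
    mean = sum(readings) / n
    # Sample variance with Bessel's correction (n - 1)
    variance = sum((r - mean) ** 2 for r in readings) / (n - 1)
    return mean, math.sqrt(variance / n)

mean, sem = random_error_summary([9.78, 9.82, 9.80, 9.79, 9.81])
print(round(mean, 3), round(sem, 4))  # → 9.8 0.0071
```

Systematic errors (instrument, personal, inherent, variational) are not reduced by this averaging; they must be identified and corrected separately.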
F, Delaporte
2008-09-01
The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation effected by Manson was inseparably linked to a series of errors, and how a major breakthrough can be the result of a series of false proposals; consequently, the history of truth often involves a history of error. PMID:18814729
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
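One plausible reading of the modular idea, sketched for a single 8-bit sample: instead of overwriting the k low-order bits (worst-case error 2^k − 1), move the sample to the nearest value congruent to the payload modulo 2^k, which halves the worst-case error. This illustrates only the error-reduction principle, not the patented algorithm (which additionally permutes the processing order and protects retrieval with a digital key):

```python
def embed_modular(host_value, payload, k=2, max_value=255):
    """Embed a k-bit payload by shifting host_value to the nearest value
    congruent to payload mod 2**k; worst-case error is 2**(k-1)."""
    m = 1 << k
    delta = (payload - host_value) % m
    if delta > m // 2:
        delta -= m  # step down instead of up when that is closer
    # NOTE: clamping at the range edges can break the congruence;
    # a real implementation must handle the boundary samples.
    return min(max(host_value + delta, 0), max_value)

def extract_modular(stego_value, k=2):
    """Recover the payload as the residue modulo 2**k."""
    return stego_value % (1 << k)

v = embed_modular(200, 3, k=2)
print(v, abs(v - 200))  # → 199 1  (payload 3 embedded with error 1)
```

Plain 2-bit replacement of the same sample could change it by up to 3; the modular step changes it by at most 2, consistent with the claim that the method reduces embedding error at a given payload rate.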
NASA Technical Reports Server (NTRS)
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy
NASA Technical Reports Server (NTRS)
Zachor, A. S.; Aaronson, S. M.
1979-01-01
An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.
Errors from Rayleigh-Jeans approximation in satellite microwave radiometer calibration systems.
Weng, Fuzhong; Zou, Xiaolei
2013-01-20
The advanced technology microwave sounder (ATMS) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite is a total power radiometer and scans across the track within a range of ±52.77° from nadir. It has 22 channels and measures the microwave radiation at either quasi-vertical or quasi-horizontal polarization from the Earth's atmosphere. The ATMS sensor data record algorithm employed a commonly used two-point calibration equation that derives the earth-view brightness temperature directly from the counts and temperatures of warm target and cold space, and the earth-scene count. This equation is only valid under Rayleigh-Jeans (RJ) approximation. Impacts of RJ approximation on ATMS calibration biases are evaluated in this study. It is shown that the RJ approximation used in ATMS radiometric calibration results in errors on the order of 1-2 K. The error is also scene count dependent and increases with frequency.
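The size of the underlying effect can be illustrated by comparing a blackbody's physical temperature with its Rayleigh-Jeans-equivalent brightness temperature; the offset approaches hν/2k at microwave frequencies. (The 1-2 K calibration bias quoted above is the smaller, scene-count-dependent residual of this offset inside the two-point equation; the sketch below shows only the raw RJ-vs-Planck difference.)

```python
import math

H = 6.62607015e-34  # Planck constant [J s]
K = 1.380649e-23    # Boltzmann constant [J/K]

def rj_equivalent_temperature(freq_hz, t_phys):
    """RJ-equivalent brightness temperature of a blackbody at t_phys:
    T_RJ = (h*nu/k) / (exp(h*nu/(k*T)) - 1)."""
    x = H * freq_hz / K  # h*nu/k, in kelvin
    return x / math.expm1(x / t_phys)

# Offsets at two ATMS-like channels for a 300 K scene
# (≈ 0.6 K at 23.8 GHz, ≈ 4.4 K at 183 GHz)
for f_ghz in (23.8, 183.31):
    t_rj = rj_equivalent_temperature(f_ghz * 1e9, 300.0)
    print(f"{f_ghz} GHz: offset = {300.0 - t_rj:.2f} K")
```

The growth of the offset with frequency matches the abstract's observation that the calibration error increases with channel frequency.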
Calculation of error bars for laser damage observations
NASA Astrophysics Data System (ADS)
Arenberg, Jonathan W.
2008-10-01
The use of the error bar is a critical means of communicating the quality of individual data points and of a processed result. Understanding the error bar for a processed measurement depends on the measurement technique being used and is the subject of many recent works; as such, this paper confines its scope to the determination of the error bar on a single data point. Many investigators either ignore the error bar altogether or use a "one size fits all" method; both approaches are poor procedure and misleading. It is the goal of this work to lift the veil of mysticism surrounding error bars for damage observations and to make their description, calculation, and use easy and commonplace. This paper rigorously derives the error bar size as a function of the experimental parameters and observed data, concentrating on the dependent variable, the cumulative probability of damage. The paper begins with a discussion of the error bar as a measure of data quality or reliability. The expression for the variance in the parameters is derived via standard methods and converted to a standard deviation. The concept of the coverage factor is introduced to scale the error bar to the desired confidence level, completing the derivation.
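For the common case where n sites are each tested once and d of them damage, the machinery described (variance, standard deviation, coverage factor) reduces to the binomial special case sketched below. The paper derives the general expression from the experimental parameters; treat this as the generic baseline, with names of our choosing:

```python
import math

def damage_probability_error_bar(n_damaged, n_shots, coverage_factor=1.0):
    """Binomial standard error on the observed cumulative damage probability,
    scaled by a coverage factor k (e.g. k = 2 for ~95% confidence)."""
    p = n_damaged / n_shots
    sigma = math.sqrt(p * (1.0 - p) / n_shots)  # binomial one-sigma
    return p, coverage_factor * sigma

p, err = damage_probability_error_bar(7, 20, coverage_factor=2.0)
print(f"p = {p:.2f} +/- {err:.2f} (k=2)")  # → p = 0.35 +/- 0.21 (k=2)
```

Note that this symmetric bar misbehaves at p = 0 or p = 1 (it collapses to zero width); interval methods such as Wilson's score interval are the usual remedy there.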
Pulse Shaping Entangling Gates and Error Suppression
NASA Astrophysics Data System (ADS)
Hucul, D.; Hayes, D.; Clark, S. M.; Debnath, S.; Quraishi, Q.; Monroe, C.
2011-05-01
Control of spin-dependent forces is important for generating entanglement and realizing quantum simulations in trapped-ion systems. Here we propose and implement a composite pulse sequence based on the Mølmer-Sørensen gate to decrease gate infidelity due to frequency and timing errors. The composite pulse sequence uses an optical frequency comb to drive Raman transitions simultaneously detuned from the trapped ions' transverse motional red and blue sideband frequencies. The spin-dependent force displaces the ions in phase space, and the resulting spin-dependent geometric phase depends on the detuning. Voltage noise on the rf electrodes changes the detuning between the trapped ions' motional frequency and the laser, decreasing the fidelity of the gate. The composite pulse sequence consists of successive pulse trains from counter-propagating frequency combs with phase control of the microwave beatnote of the lasers to passively suppress detuning errors. We present the theory and experimental data with one and two ions where a gate is performed with a composite pulse sequence. This work was supported by the U.S. ARO, IARPA, the DARPA OLE program, the MURI program; the NSF PIF Program; the NSF Physics Frontier Center at JQI; the European Commission AQUTE program; and the IC postdoc program administered by the NGA.
A posteriori pointwise error estimates for the boundary element method
Paulino, G.H.; Gray, L.J.; Zarikian, V.
1995-01-01
This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.
NASA Astrophysics Data System (ADS)
Bergmann-Wolf, I.; Dobslaw, H.; Mayer-Gürr, T.
2015-12-01
A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency (Dobslaw et al., 2015) is now available for the years 1995-2006. The data-set contains realizations of (i) errors at large spatial scales assessed individually for periods between 10-30, 3-10, and 1-3 days, the S1 atmospheric tide, and sub-diurnal periods; (ii) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (iii) errors due to physical processes not represented in currently available de-aliasing products. The error magnitudes for each of the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. In order to demonstrate the plausibility of the error magnitudes chosen, we perform a variance component estimation based on daily GRACE normal equations from the ITSG-Grace2014 global gravity field series recently published by the University of Graz. All 12 years of the error model are used to calculate empirical error variance-covariance matrices describing the systematic dependencies of the errors both in time and in space individually for five continental and four oceanic regions, and daily GRACE normal equations are subsequently employed to obtain pre-factors for each of those matrices. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, errors prepared for the updated ESM are found to be largely consistent with noise of a similar stochastic character contained in present-day GRACE solutions. Differences and similarities identified for all of the nine regions considered will be discussed in detail during the presentation.

Dobslaw, H., I. Bergmann-Wolf, R. Dill, E. Forootan, V. Klemann, J. Kusche, and I. Sasgen (2015), The updated ESA Earth System Model for future gravity mission simulation studies, J. Geod., doi:10.1007/s00190-014-0787-8.
NASA Astrophysics Data System (ADS)
Özceylan, Dilek; Aubrecht, Christoph
2013-04-01
The observed change in the nature of climate-related events and the increase in the number and severity of extreme weather events have been changing risk patterns and putting more people at risk. In recent years extreme heat events have caused excess mortality and public concern in many regions of the world (e.g., the 2003 and 2006 Western European heat waves, the 2007 and 2010 Asian heat waves, and the 2006 and most recent 2010-2012 North American heat waves). In the United States, extreme heat events have been consistently reported as the leading cause of weather-related mortality and have attracted the attention of the international scientific community regarding the critical importance of risk assessment and of decoding its components for risk reduction. In order to understand impact potentials and analyze risk in its individual components, both the spatially and temporally varying patterns of heat stress and the multidimensional characteristics of vulnerability have to be considered. In this study we present a composite risk index aggregating these factors and implement it for the U.S. National Capital Region at a high level of spatial detail. The applied measure of heat stress hazard is a novel approach that integrates magnitude, duration, and frequency over time, as opposed to the study of single extreme events and the analysis of mere absolute numbers of heat waves independent of the length of the respective events. On the basis of heat-related vulnerability conceptualization, we select various population and land cover characteristics in our study area and define a composite vulnerability index based on aggregation of three groups of indicators related to demographic, socio-economic, and environmental factors. The study reveals how risk patterns seem to be driven by the vulnerability distribution, generally showing a clear difference between high-risk urban areas and wide areas of low risk in the sub-urban and rural environments. This is
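Schematically, a composite risk surface of this kind multiplies a hazard score by an aggregated vulnerability score per spatial unit. The sketch below uses min-max normalization and equal-weight aggregation of the three indicator groups; the weighting and normalization used in the actual study may differ:

```python
def normalize(values):
    """Min-max normalize a list of indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_risk(hazard, indicator_groups):
    """Risk per spatial unit = hazard x mean of normalized vulnerability groups."""
    norm_groups = [normalize(g) for g in indicator_groups]
    n_units = len(hazard)
    vulnerability = [sum(g[i] for g in norm_groups) / len(norm_groups)
                     for i in range(n_units)]
    return [h * v for h, v in zip(hazard, vulnerability)]

# Three spatial units; demographic / socio-economic / environmental groups
risk = composite_risk(hazard=[0.9, 0.5, 0.2],
                      indicator_groups=[[10, 5, 0],
                                        [1.0, 0.2, 0.0],
                                        [0.8, 0.4, 0.0]])
print(risk)
```

Even this toy version reproduces the qualitative finding: the unit with both high hazard and high vulnerability dominates the risk ranking.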
Frequency division multiplex technique
NASA Technical Reports Server (NTRS)
Brey, H. (Inventor)
1973-01-01
A system for monitoring a plurality of condition responsive devices is described. It consists of a master control station and a remote station. The master control station is capable of transmitting command signals which includes a parity signal to a remote station which transmits the signals back to the command station so that such can be compared with the original signals in order to determine if there are any transmission errors. The system utilizes frequency sources which are 1.21 multiples of each other so that no linear combination of any harmonics will interfere with another frequency.
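The 1.21-ratio claim can be sanity-checked numerically: generate channel frequencies in that ratio and test whether any low-order harmonic of one channel lands on another. This is a rough check written for illustration; it does not cover the intermodulation products a full design analysis would also consider:

```python
def fdm_frequencies(base_hz, count, ratio=1.21):
    """Channel frequencies where each is `ratio` times the previous one."""
    return [base_hz * ratio ** n for n in range(count)]

def harmonic_collisions(freqs, max_harmonic=5, rel_tol=1e-3):
    """Return (channel, harmonic, victim) triples where a low-order
    harmonic of one channel falls within rel_tol of another channel."""
    collisions = []
    for i, fi in enumerate(freqs):
        for m in range(2, max_harmonic + 1):
            for j, fj in enumerate(freqs):
                if i != j and abs(m * fi - fj) / fj < rel_tol:
                    collisions.append((i, m, j))
    return collisions

freqs = fdm_frequencies(1000.0, 6)
print(harmonic_collisions(freqs))  # → []  (no low-order harmonic overlaps)
```

By contrast, octave-spaced channels (ratio 2.0) collide immediately, since the second harmonic of each channel is exactly the next channel.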
Improving OFDR spatial resolution by reducing external clock sampling error
NASA Astrophysics Data System (ADS)
Feng, Bowen; Liu, Kun; Liu, Tiegen; Jiang, Junfeng; Du, Yang
2016-03-01
Utilizing an auxiliary interferometer to produce external clock signals as the data acquisition clock is widely used to compensate for the nonlinearity of the tunable laser source (TLS) in optical frequency domain reflectometry (OFDR). However, this method is not always accurate because of the large optical path length difference between the two arms of the auxiliary interferometer. To investigate the deviation, we study the source and influence of the external clock sampling error in an OFDR system. Based on the model, we find that the sampling error declines as the TLS's optical frequency tuning rate increases. At the minimum sampling error, with an optical frequency tuning rate of 2500 GHz/s, the spatial resolution can be as high as 4.8 cm and the strain sensing location accuracy can reach 0.15 m at a measurement length of 310 m. Hence, the spatial resolution of an OFDR system can be improved by reducing the external clock sampling error.
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
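The law of error propagation referenced here is the linearized map C' = J C Jᵀ, where J is the Jacobian of the predicted position with respect to the orbital elements. A dependency-free sketch with a toy 2×2 stand-in for the full 6×6 element covariance:

```python
def mat_mul(a, b):
    """Plain-Python matrix product (kept dependency-free for the sketch)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def propagate_covariance(jacobian, cov):
    """Linearized law of error propagation: C' = J C J^T."""
    return mat_mul(mat_mul(jacobian, cov), transpose(jacobian))

# Toy example: uncorrelated element errors mapped through a Jacobian
J = [[1.0, 0.5],
     [0.0, 2.0]]
C = [[0.04, 0.0],
     [0.0, 0.01]]
print(propagate_covariance(J, C))  # symmetric, as a covariance must be
```

The propagated covariance defines the positional uncertainty ellipsoid at the target epoch; repeating the mapping over a sequence of epochs traces how the ellipsoid grows along the orbit.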
Open quantum systems and error correction
NASA Astrophysics Data System (ADS)
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in the control forces. Engineering methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods, where a realistic formulation is one that incorporates experimental challenges. The thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover open quantum system theory, since it is essential to first study a noise process before contemplating methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except for a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part comprises Chapters 4, 5, 6 and 7. In Chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite-program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC
Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics
NASA Astrophysics Data System (ADS)
Sarovar, Mohan; Young, Kevin C.
2013-12-01
While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC.
NASA Technical Reports Server (NTRS)
Pei, Jing; Wall, John
2013-01-01
This paper describes the techniques involved in determining the aerodynamic stability derivatives for the frequency-domain analysis of the Space Launch System (SLS) vehicle. Generally, for launch vehicles, determination of the derivatives is fairly straightforward since the aerodynamic data are usually linear through a moderate range of angle of attack. However, if the wind tunnel data lack proper corrections, then nonlinearities and asymmetric behavior may appear in the aerodynamic database coefficients. In this case, computing the derivatives becomes a non-trivial task. Errors in computing the nominal derivatives could lead to improper interpretation of the natural stability of the system and tuning of the controller parameters, which would impact both stability and performance. The aerodynamic derivatives are also provided at off-nominal operating conditions for use in dispersed frequency-domain Monte Carlo analysis. Finally, results are shown to illustrate that the effects of aerodynamic cross-axis coupling can be neglected for the SLS configuration studied.
Position error propagation in the simplex strapdown navigation system
NASA Technical Reports Server (NTRS)
1976-01-01
The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long-term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed errors containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.
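The bounded (Schuler-periodic) and unbounded (gyro-bias-driven) behaviors described above can be illustrated with the textbook single-channel strapdown error equations. The bias magnitudes below are illustrative assumptions, not values from the simplex system analysis:

```python
import numpy as np

# Schuler frequency: omega_s = sqrt(g / R_earth), period ~84.4 minutes
G = 9.81            # m/s^2
R_EARTH = 6.371e6   # m
OMEGA_S = np.sqrt(G / R_EARTH)

def accel_bias_error(b_a, t):
    """Bounded, Schuler-periodic position error from accelerometer bias b_a (m/s^2)."""
    return (b_a / OMEGA_S**2) * (1.0 - np.cos(OMEGA_S * t))

def gyro_bias_error(eps, t):
    """Unbounded position error from gyro bias eps (rad/s): a ramp plus oscillation."""
    return (G * eps / OMEGA_S**2) * (t - np.sin(OMEGA_S * t) / OMEGA_S)

t = np.linspace(0.0, 6 * 3600.0, 2000)   # six hours of flight
e_accel = accel_bias_error(1e-4, t)      # ~10 micro-g accelerometer bias (illustrative)
e_gyro = gyro_bias_error(4.8e-8, t)      # ~0.01 deg/h gyro bias (illustrative)
```

The accelerometer-bias error stays bounded by 2 b_a / omega_s^2, while the gyro-bias error grows without bound, matching the analysis summarized above.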
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/mm, giving a resolution in the time domain of better than 0.1 mm, and discrimination in the frequency domain of better than 0.01 mm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
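The common-mode rejection achieved by subtracting the two light-path signals can be sketched numerically. The signal model (torsional transmission error entering the top and bottom paths with opposite signs while transverse vibration enters both identically), along with all amplitudes and frequencies, is an illustrative assumption, not data from the NASA rig:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.1, 5000)

te = 5e-3 * np.sin(2 * np.pi * 500 * t)   # torsional transmission error (arb. units)
vib = 2e-2 * np.sin(2 * np.pi * 90 * t)   # transverse shaft vibration (common mode)

# Top and bottom light paths: TE enters with opposite signs, vibration identically,
# plus independent photodetector noise in each channel
top = te + vib + 1e-4 * rng.standard_normal(t.size)
bottom = -te + vib + 1e-4 * rng.standard_normal(t.size)

te_recovered = 0.5 * (top - bottom)       # differential amplification cancels vibration
```

The difference signal recovers the transmission error while the much larger vibration component cancels, which is the purpose of the parallel paths described above.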
Transmission errors and forward error correction in embedded differential pulse code modulation
NASA Astrophysics Data System (ADS)
Goodman, D. J.; Sundberg, C.-E.
1983-11-01
Formulas are derived for the combined effects of quantization and transmission errors on embedded Differential Pulse Code Modulation (DPCM) performance. The present analysis, which is both more general and more precise than previous work on transmission errors in the digital communication of analog signals, includes as special cases conventional DPCM and Pulse Code Modulation (PCM). An SNR formula is obtained in which the effects of source characteristics and the effects of transmission characteristics are clearly distinguishable. Also given, in computationally convenient form, are specialized formulas applying to uncoded transmission through a random-error channel, transmission through a slowly fading channel, and transmission with all or part of the DPCM signal protected by an error-correcting code.
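The way source and channel contributions separate can be illustrated with the common additive-noise approximation, in which independent quantization and channel-induced noise powers simply add. This simplified form is a stand-in sketch, not the paper's exact SNR formula:

```python
import math

def combined_snr_db(snr_quant_db, snr_channel_db):
    """Total SNR when independent quantization and channel noise powers add
    (signal power normalized to 1)."""
    nq = 10 ** (-snr_quant_db / 10)    # quantization noise power
    nc = 10 ** (-snr_channel_db / 10)  # channel-induced noise power
    return -10 * math.log10(nq + nc)

# A clean channel leaves the quantizer-limited SNR almost untouched,
# while a noisy channel dominates overall performance:
clean = combined_snr_db(35.0, 60.0)
noisy = combined_snr_db(35.0, 20.0)
```

The dominant (smaller) SNR sets the overall performance, which is why channel protection of the DPCM signal matters on poor channels.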
The feedback-related negativity signals salience prediction errors, not reward prediction errors.
Talmi, Deborah; Atkinson, Ryan; El-Deredy, Wael
2013-05-01
Modulations of the feedback-related negativity (FRN) event-related potential (ERP) have been suggested as a potential biomarker in psychopathology. A dominant theory about this signal contends that it reflects the operation of the neural system underlying reinforcement learning in humans. The theory suggests that this frontocentral negative deflection in the ERP 230-270 ms after the delivery of a probabilistic reward expresses a prediction error signal derived from midbrain dopaminergic projections to the anterior cingulate cortex. We tested this theory by investigating whether FRN will also be observed for an inherently aversive outcome: physical pain. In another session, the outcome was monetary reward instead of pain. As predicted, unexpected reward omissions (a negative reward prediction error) yielded a more negative deflection relative to unexpected reward delivery. Surprisingly, unexpected pain omission (a positive reward prediction error) also yielded a negative deflection relative to unexpected pain delivery. Our data challenge the theory by showing that the FRN expresses aversive prediction errors with the same sign as reward prediction errors. Both FRNs were spatiotemporally and functionally equivalent. We suggest that FRN expresses salience prediction errors rather than reward prediction errors. PMID:23658166
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
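A minimal sketch of the polynomial case described above, using the standard parameter covariance C = s^2 (A^T A)^-1 and the standard error of the fitted function, sqrt(a0^T C a0). The data set and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 2.0 * x - 0.5 * x**2 + sigma * rng.standard_normal(x.size)

deg = 2
A = np.vander(x, deg + 1)                 # design matrix for a quadratic fit
coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)

dof = x.size - (deg + 1)
s2 = float(res[0]) / dof                  # estimated noise variance from residuals
cov = s2 * np.linalg.inv(A.T @ A)         # parameter covariance matrix
param_se = np.sqrt(np.diag(cov))          # standard errors of the coefficients

def fit_se(x0):
    """Standard error of the fitted function at x0: sqrt(a0^T C a0)."""
    a0 = np.vander(np.atleast_1d(x0), deg + 1)
    return np.sqrt(np.einsum('ij,jk,ik->i', a0, cov, a0))
```

As the paper's general expressions imply, the standard error of the fit depends on the evaluation point: it is smallest near the center of the data and grows toward the edges.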
An error bound for instantaneous coverage
NASA Technical Reports Server (NTRS)
White, Allan L.
1991-01-01
An error bound is derived for a reliability model approximation method. The approximation method is appropriate for the semi-Markov models of reconfigurable systems that are designed to achieve extremely high reliability. The semi-Markov models of these systems are complex, and a significant amount of their complexity arises from the detailed descriptions of the reconfiguration processes. The reliability model approximation method consists of replacing a detailed description of a reconfiguration process with the probabilities of the possible outcomes of the reconfiguration process. These probabilities are included in the model as instantaneous jumps from the fault-occurrence state. Since little time is spent in the reconfiguration states, instantaneous jumps are a close approximation to the original model. This approximation procedure is shown to produce an overestimation of the probability of system failure, and an error bound is derived for this overestimation.
Influence of modulation frequency in rubidium cell frequency standards
NASA Technical Reports Server (NTRS)
Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.
1983-01-01
The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.
Outpatient Prescribing Errors and the Impact of Computerized Prescribing
Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W
2005-01-01
Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
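The 95% confidence interval quoted above for the overall prescribing error rate (143 of 1879 prescriptions) can be reproduced with the normal approximation to the binomial proportion:

```python
import math

errors, prescriptions = 143, 1879
p = errors / prescriptions
se = math.sqrt(p * (1 - p) / prescriptions)   # standard error of the proportion
low, high = p - 1.96 * se, p + 1.96 * se      # 95% confidence limits

print(f"{100*p:.1f}% (95% CI {100*low:.1f}% to {100*high:.1f}%)")
# → 7.6% (95% CI 6.4% to 8.8%)
```

This matches the interval reported in the abstract (7.6%; 95% CI 6.4% to 8.8%).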
Quantifying truncation errors in effective field theory
NASA Astrophysics Data System (ADS)
Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.
2015-10-01
Bayesian procedures designed to quantify truncation errors in perturbative calculations of QCD observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions (``priors'') for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We demonstrate the calculation of Bayesian DOB intervals for the EFT truncation error in some representative cases and explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter. Supported in part by the NSF and the DOE.
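The "standard EFT procedure" the authors formalize, estimating the truncation error from the size of the first omitted term, can be sketched as follows. Under a uniform prior |c| <= cbar, with cbar set by the largest observed coefficient, the p% DOB interval for the first omitted term reduces to p * cbar * Q^(k+1). This one-parameter choice is a simplified special case for illustration, not the paper's full hierarchy of priors:

```python
def eft_truncation_dob(coeffs, Q, p=0.68):
    """Degree-of-belief interval half-width for the first omitted term of an
    EFT expansion sum_n c_n Q^n truncated after the orders in `coeffs`.

    Assumes a uniform prior |c| <= cbar, with cbar estimated as the largest
    observed dimensionless coefficient (a simplified one-parameter prior).
    """
    cbar = max(abs(c) for c in coeffs)
    k = len(coeffs) - 1                 # highest retained order
    return p * cbar * Q ** (k + 1)

# Natural-sized coefficients and an illustrative expansion parameter Q = 0.33:
dob68 = eft_truncation_dob([1.0, -0.8, 1.2], 0.33)
```

Widening the requested degree of belief (say to 95%) widens the interval, and a smaller expansion parameter or higher truncation order shrinks it, mirroring the order-by-order convergence argument in the abstract.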
Reducing medication errors in critical care: a multimodal approach
Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad
2014-01-01
The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after inner-code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
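For a linear block code used purely for detection over a binary symmetric channel with crossover probability p, an error goes undetected exactly when the error pattern equals a nonzero codeword, giving P_ud = sum_w A_w p^w (1-p)^(n-w) from the weight distribution {A_w}. A sketch using the (4,3) single-parity-check code, chosen for illustration rather than taken from the NASA telecommand scheme:

```python
def undetected_error_prob(weights, n, p):
    """P(undetected error) on a BSC(p) for a length-n linear code whose
    weight distribution is weights[w] = number of codewords of weight w >= 1."""
    return sum(a * p**w * (1 - p) ** (n - w) for w, a in weights.items())

# (4,3) single parity check: the nonzero codewords are the even-weight words,
# so A_2 = 6 and A_4 = 1; an error pattern is undetected only if it matches one.
spc_weights = {2: 6, 4: 1}
p_ud = undetected_error_prob(spc_weights, n=4, p=1e-3)
```

At p = 1e-3 the undetected-error probability is dominated by the six weight-2 codewords, roughly 6p^2, and it grows rapidly with channel error rate, which motivates the retransmission request described above.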
Generalized error-dependent prediction uncertainty in multivariate calibration.
Allegrini, Franco; Wentzell, Peter D; Olivieri, Alejandro C
2016-01-15
Most of the current expressions used to calculate figures of merit in multivariate calibration have been derived assuming independent and identically distributed (iid) measurement errors. However, it is well known that this condition is not always valid for real data sets, where the existence of many external factors can lead to correlated and/or heteroscedastic noise structures. In this report, the influence of the deviations from the classical iid paradigm is analyzed in the context of error propagation theory. New expressions have been derived to calculate sample dependent prediction standard errors under different scenarios. These expressions allow for a quantitative study of the influence of the different sources of instrumental error affecting the system under analysis. Significant differences are observed when the prediction error is estimated in each of the studied scenarios using the most popular first-order multivariate algorithms, under both simulated and experimental conditions.
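The effect of departing from iid noise can be seen directly in the variance of a linear prediction y_hat = b^T x: for measurement error covariance Sigma, the propagated variance is b^T Sigma b, which collapses to sigma^2 ||b||^2 only in the iid case. A minimal sketch with invented numbers (the regression vector and correlation structure are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 50                                   # number of sensor channels
b = rng.standard_normal(m) / m           # regression vector (e.g. from PCR/PLS)

sigma2 = 0.05
var_iid = sigma2 * (b @ b)               # classical iid expression: sigma^2 ||b||^2

# Correlated noise: AR(1)-style covariance with correlation rho between channels
rho = 0.9
idx = np.arange(m)
Sigma = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])
var_corr = b @ Sigma @ b                 # general error-propagation expression
```

The two variances differ whenever the off-diagonal structure of Sigma is non-negligible, which is exactly the situation the report's sample-dependent prediction-error expressions address.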
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
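The role of the covariance terms can be sketched with elementary error propagation for a signed balance B = s^T x, whose variance is var(B) = s^T Sigma s. All per-term standard deviations and the correlation below are illustrative numbers, not Skylab values:

```python
import numpy as np

# Daily balance B = (drink + food + metabolic) - (urine + evaporative + fecal)
signs = np.array([+1.0, +1.0, +1.0, -1.0, -1.0, -1.0])
sd = np.array([50.0, 40.0, 10.0, 45.0, 60.0, 8.0])  # per-term SDs, g/day (illustrative)

# Assume only the two main intake terms are correlated; all else independent
corr = np.eye(6)
corr[0, 1] = corr[1, 0] = 0.2

cov = np.outer(sd, sd) * corr             # full covariance matrix of the terms
var_balance = signs @ cov @ signs         # propagation including covariance terms
var_no_cov = float(np.sum(sd**2))         # diagonal-only (independent) approximation
```

With these numbers the covariance contribution is under 10% of the total variance, consistent with the finding quoted above that interaction terms were a minor part of the total error.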
[Dealing with errors in medicine].
Schoenenberger, R A; Perruchoud, A P
1998-12-24
Iatrogenic disease is probably more often than assumed the consequence of errors and mistakes committed by physicians and other medical personnel. Traditionally, strategies to prevent errors in medicine focus on inspection and rely on the professional ethos of health care personnel. The increasingly complex nature of medical practice and the multitude of interventions that each patient receives increase the likelihood of error. More efficient approaches to dealing with errors have been developed. The methods include routine identification of errors (critical incident reports), systematic monitoring of multiple-step processes in medical practice, system analysis, and system redesign. A search for underlying causes of errors (rather than distal causes) will enable organizations to learn collectively without denying the inevitable occurrence of human error. Errors and mistakes may thus become precious chances to increase the quality of medical care.
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
Reducing nurse medicine administration errors.
Ofosu, Rose; Jarrett, Patricia
Errors in administering medicines are common and can compromise the safety of patients. This review discusses the causes of drug administration error in hospitals by student and registered nurses, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management, and reduce drug errors.
Error Bounds for Interpolative Approximations.
ERIC Educational Resources Information Center
Gal-Ezer, J.; Zwas, G.
1990-01-01
Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)
Uncertainty quantification and error analysis
Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Hinks, Timothy S C; Dosanjh, Davinder P S; Innes, John A; Pasvol, Geoffrey; Hackforth, Sarah; Varia, Hansa; Millington, Kerry A; Liu, Xiao-Qing; Bakir, Mustafa; Soysal, Ahmet; Davidson, Robert N; Gunatheesan, Rubamalar; Lalvani, Ajit
2009-12-01
The majority of individuals infected with Mycobacterium tuberculosis achieve lifelong immune containment of the bacillus. What constitutes this effective host immune response is poorly understood. We compared the frequencies of gamma interferon (IFN-gamma)-secreting T cells specific for five region of difference 1 (RD1)-encoded antigens and one DosR-encoded antigen in 205 individuals either with active disease (n = 167), whose immune responses had failed to contain the bacillus, or with remotely acquired latent infection (n = 38), who had successfully achieved immune control, and a further 149 individuals with recently acquired asymptomatic infection. When subjects with an IFN-gamma enzyme-linked immunospot (ELISpot) assay response to one or more RD1-encoded antigens were analyzed, T cells from subjects with active disease recognized more pools of peptides from these antigens than T cells from subjects with nonrecent latent infection (P = 0.002). The T-cell frequencies for peptide pools were greater for subjects with active infection than for subjects with nonrecent latent infection for summed RD1 peptide pools (P
NASA Technical Reports Server (NTRS)
1984-01-01
The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces, as intermediate results, the aerosol density and the aerosol backscatter cross-section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties and other errors in the results of the two methods are examined.
Errors inducing radiation overdoses.
Grammaticos, Philip C
2013-01-01
There is no doubt that equipment that delivers radiation for therapeutic purposes should be checked often for possible administration of radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care on this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. A series of radiation overdose cases has recently been reported, and the doctors who were responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures involving radiation. Taxonomy may also help. PMID:24251304
Optimized Paraunitary Filter Banks for Time-Frequency Channel Diagonalization
NASA Astrophysics Data System (ADS)
Ju, Ziyang; Hunziker, Thomas; Dahlhaus, Dirk
2010-12-01
We adopt the concept of channel diagonalization to time-frequency signal expansions obtained by DFT filter banks. As a generalization of the frequency domain channel representation used by conventional orthogonal frequency-division multiplexing receivers, the time-frequency domain channel diagonalization can be applied to time-variant channels and aperiodic signals. An inherent error in the case of doubly dispersive channels can be limited by choosing adequate windows underlying the filter banks. We derive a formula for the mean-squared sample error in the case of wide-sense stationary uncorrelated scattering (WSSUS) channels, which serves as objective function in the window optimization. Furthermore, an enhanced scheme for the parameterization of tight Gabor frames enables us to constrain the window in order to define paraunitary filter banks. We show that the design of windows optimized for WSSUS channels with known statistical properties can be formulated as a convex optimization problem. The performance of the resulting windows is investigated under different channel conditions, for different oversampling factors, and compared against the performance of alternative windows. Finally, a generic matched filter receiver incorporating the proposed channel diagonalization is discussed which may be essential for future reconfigurable radio systems.
Medical error and related factors during internship and residency.
Ahmadipour, Habibeh; Nahid, Mortazavi
2015-01-01
It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. Among residents, the most common error was misdiagnosis; among interns, it was errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent their occurrence. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.
Error-related electrocorticographic activity in humans during continuous movements.
Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten
2012-04-01
Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects' movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.
NASA Astrophysics Data System (ADS)
Zhang, Fan; He, Wen; He, Longbiao; Rong, Zuochao
2015-12-01
The wide concern about absolute pressure calibration of acoustic transducers at low frequencies prompts the development of the pistonphone method. At low frequencies, the acoustic properties of pistonphones are governed by the pressure leakage and heat conduction effects. However, the traditional theory for these two effects applies a linear superposition of two independent correction models, which differs somewhat from their coupled effect at low frequencies. In this paper, the acoustic properties of pistonphones at low frequencies, in full consideration of the pressure leakage and heat conduction effects, have been quantitatively studied, and the explicit expression for the generated sound pressure has been derived. Of more practical significance, a coupled correction expression for these two effects has been derived. For two typical pistonphones, the NPL pistonphone and our developed infrasonic pistonphone, the coupled correction expression was compared with the traditional one. The results reveal that the traditional expression under-corrects by up to about 0.1 dB above the lower limiting frequencies of the two pistonphones, while at lower frequencies it over-corrects by an amount approaching a limit of about 3 dB. The coupled correction expression should therefore be adopted in the absolute pressure calibration of acoustic transducers at low frequencies. Furthermore, it is found that the heat conduction effect introduces a limiting deviation of about 3 dB in the pressure amplitude and a small phase difference as the frequency decreases, while the pressure leakage effect drives the pressure amplitude to attenuate markedly and the phase difference to approach 90° as the frequency decreases. The pressure leakage effect thus plays the more important role in the low-frequency behaviour of pistonphones.
Attitude control with realization of linear error dynamics
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1993-01-01
An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.
Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.
Broch, Laurent; En Naciri, Aotmane; Johann, Luc
2008-06-01
The characterization of anisotropic materials and complex systems by ellipsometry has pushed the design of instruments to require the measurement of the full reflection Mueller matrix of the sample with great precision. Therefore Mueller matrix ellipsometers have emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can induce a distorted analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and by residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors.
Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data
Hahn, Seungsoo; Kim, Dongsup
2015-01-01
Chromosome conformation capture (3C)-based techniques have recently been used to uncover the hidden genomic architecture in the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and different local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparison of the results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, using different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, in the latter case the higher correlation before correction was driven by shared systematic errors, which indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for the maximum reproducibility of contact frequency maps. PMID:26717152
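Removing multiplicative per-locus biases from a contact map is commonly done by matrix balancing. The sketch below uses a simple iterative correction as an illustrative stand-in for the paper's normalization; the toy matrix and iteration count are assumptions:

```python
import numpy as np

def iterative_correction(C, n_iter=50):
    """Divide out multiplicative per-locus biases so that every row and
    column of the (symmetric) contact map sums to the same value."""
    C = C.astype(float).copy()
    bias = np.ones(C.shape[0])
    for _ in range(n_iter):
        s = C.sum(axis=1)
        s /= s.mean()            # normalize so the overall scale is kept
        C /= np.outer(s, s)      # symmetric correction preserves symmetry
        bias *= s
    return C, bias

# Toy symmetric contact map in which locus 0 is over-represented.
raw = np.array([[0.0, 8.0, 6.0],
                [8.0, 0.0, 2.0],
                [6.0, 2.0, 0.0]])
balanced, bias = iterative_correction(raw)
# After balancing, the row sums are (nearly) equal.
```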
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Rapid mapping of volumetric errors
Krulewich, D.; Hale, L.; Yordy, D.
1995-09-13
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
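Step (3), optimizing the model to the particular machine, can be sketched as a linear least-squares fit of an error model to length measurements. The per-axis scale-error model, the simulated displacements, and the noise level below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: the length error of a measured displacement
# d = (dx, dy, dz) comes from per-axis scale errors (sx, sy, sz):
# predicted error = (sx*dx^2 + sy*dy^2 + sz*dz^2) / |d|.
true_scale = np.array([50e-6, -20e-6, 30e-6])      # dimensionless scale errors

D = rng.uniform(-1.0, 1.0, size=(200, 3))          # displacement vectors, m
L = np.linalg.norm(D, axis=1)
A = D**2 / L[:, None]                              # design matrix
measured = A @ true_scale + rng.normal(0.0, 1e-7, 200)  # noisy length errors, m

# Fit the model to the simulated machine: ordinary least squares.
est_scale, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

Once the coefficients are estimated, the same model predicts (and so can compensate) the systematic error anywhere in the work volume.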
Cirrus cloud retrieval using infrared sounding data: Multilevel cloud errors
NASA Technical Reports Server (NTRS)
Baum, Bryan A.; Wielicki, Bruce A.
1994-01-01
In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-micron CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1-1.0) and cloud-top pressures (850-250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300-500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.
Mark, J.G.; Brown, A.K.; Matthews, A.
1987-01-06
A method is described for processing ring laser gyroscope test data comprising the steps of: (a) accumulating the data over a preselected sample period; and (b) filtering the data at a predetermined frequency so that non-time dependent errors are reduced by a substantially greater amount than are time dependent errors; then (c) analyzing the random walk error of the filtered data.
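Step (c), the random-walk analysis, rests on the fact that integrating white angular-rate noise yields an angle whose standard deviation grows as the square root of time. A minimal simulation (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 0.01        # sample period, s (hypothetical)
sigma = 0.05     # white angular-rate noise, deg/s (hypothetical)

# 1000 simulated runs of 4000 rate samples each, integrated to angle.
rate = rng.normal(0.0, sigma, size=(1000, 4000))
angle = np.cumsum(rate, axis=1) * dt

# Random-walk signature: quadrupling the integration time roughly
# doubles the spread of the accumulated angle.
ratio = angle[:, 3999].std() / angle[:, 999].std()
```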
All-frequency reflectionlessness
NASA Astrophysics Data System (ADS)
Philbin, T. G.
2016-01-01
We derive planar permittivity profiles that do not reflect perpendicularly exiting radiation of any frequency. The materials obey the Kramers-Kronig relations and have no regions of gain. Reduction of the Casimir force by means of such materials is also discussed.
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1979-01-01
The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
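The conclusion can be checked numerically with a minimal model: sinusoidal voltage and current pass through first-order lowpass channels before the multiplier, and the indicated average power is compared with the true value. The corner frequencies and signal values below are illustrative:

```python
import cmath
import math

def H(w, wc):
    """First-order lowpass transfer function H(jw) = 1 / (1 + j*w/wc)."""
    return 1.0 / (1.0 + 1j * w / wc)

def indicated_power(Vm, Im, theta, w, wc_v, wc_i):
    """Average power indicated by the multiplier when the voltage and
    current channels are first-order lowpass filters."""
    Hv, Hi = H(w, wc_v), H(w, wc_i)
    return 0.5 * Vm * Im * abs(Hv) * abs(Hi) * math.cos(
        theta + cmath.phase(Hv) - cmath.phase(Hi))

Vm, Im, theta = 10.0, 2.0, math.radians(60.0)
w = 2 * math.pi * 1e3                      # signal frequency, rad/s
P_true = 0.5 * Vm * Im * math.cos(theta)   # true average power

# Mismatched channels: the phase difference distorts the power factor.
P_mismatch = indicated_power(Vm, Im, theta, w, 2 * math.pi * 5e3, 2 * math.pi * 50e3)
# Identical channels: the phase terms cancel, leaving only a gain factor.
P_matched = indicated_power(Vm, Im, theta, w, 2 * math.pi * 5e3, 2 * math.pi * 5e3)
```

With identical channels the residual error is a frequency-dependent gain |H|² that can be calibrated out; the phase-mismatch error cannot, which is why matching the two frequency responses minimizes the measurement error.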
Schiff, G D; Amato, M G; Eguale, T; Boehne, J J; Wright, A; Koppel, R; Rashidee, A H; Elson, R B; Whitney, D L; Thach, T-T; Bates, D W; Seger, A C
2015-01-01
Importance Medication computerised provider order entry (CPOE) has been shown to decrease errors and is being widely adopted. However, CPOE also has potential for introducing or contributing to errors. Objectives The objectives of this study are to (a) analyse medication error reports where CPOE was reported as a ‘contributing cause’ and (b) develop ‘use cases’ based on these reports to test vulnerability of current CPOE systems to these errors. Methods Medication errors reported to the United States Pharmacopeia MEDMARX reporting system were reviewed, and a taxonomy was developed for CPOE-related errors. For each error we evaluated what went wrong and why and identified potential prevention strategies and recurring error scenarios. These scenarios were then used to test vulnerability of leading CPOE systems, asking typical users to enter these erroneous orders to assess the degree to which these problematic orders could be entered. Results Between 2003 and 2010, 1.04 million medication errors were reported to MEDMARX, of which 63 040 were reported as CPOE related. A review of 10 060 CPOE-related cases was used to derive 101 codes describing what went wrong, 67 codes describing reasons why errors occurred, 73 codes describing potential prevention strategies and 21 codes describing recurring error scenarios. Ability to enter these erroneous order scenarios was tested on 13 CPOE systems at 16 sites. Overall, 298 (79.5%) of the erroneous orders could be entered, including 100 (28.0%) placed ‘easily’ and another 101 (28.3%) placed with only minor workarounds and no warnings. Conclusions and relevance Medication error reports provide valuable information for understanding CPOE-related errors. Reports were useful for developing taxonomy and identifying recurring errors to which current CPOE systems are vulnerable. Enhanced monitoring, reporting and testing of CPOE systems are important to improve CPOE safety. PMID:25595599
NASA Astrophysics Data System (ADS)
Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru
2016-11-01
When measuring the surface shape of a transparent sample using wavelength-tuning Fizeau interferometry, the calculated phase is critically determined by not only phase-shift errors, but also by coupling errors between higher harmonics and phase-shift errors. This paper presents the derivation of a 13-sample phase-shifting algorithm that can compensate for miscalibration and first-order nonlinearity of phase shift, coupling errors, and bias modulation of the intensity, and has strong suppression of the second reflective harmonic effect. The characteristics of the 13-sample algorithm are estimated with respect to Fourier representation in the frequency domain. The phase error of measurement performed using the 13-sample algorithm is discussed and compared with those of measurements obtained using other conventional phase-shifting algorithms. Finally, the surface shape of a fused silica wedge plate obtained using a wavelength tuning Fizeau interferometer and the 13-sample algorithm are presented. The experimental results indicate that the surface shape measurement accuracy for a transparent fused silica plate is 3 nm. The accuracy of the measurement is discussed by comparing the amplitudes of the crosstalk noise calculated using other conventional algorithms.
NASA Astrophysics Data System (ADS)
Prabu, K.; Kumar, D. Sriram
2015-05-01
An optical wireless communication system is an alternative to radio frequency communication, but atmospheric-turbulence-induced fading and misalignment fading are the main impairments affecting an optical signal propagating through the turbulence channel. Misalignment fading gives rise to pointing errors, which degrade the bit error rate (BER) performance of the free space optics (FSO) system. In this paper, we study the BER performance of the multiple-input multiple-output (MIMO) FSO system employing coherent binary polarization shift keying (BPOLSK) in a gamma-gamma (G-G) channel with pointing errors. The BER performance of the BPOLSK-based MIMO FSO system is compared with that of the single-input single-output (SISO) system. Also, the average BER performance of the systems is analyzed and compared with and without pointing errors. Novel closed-form expressions for the BER are derived for the MIMO FSO system with maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques. The analytical results show that pointing errors can severely degrade the performance of the system.
Improved Error Thresholds for Measurement-Free Error Correction
NASA Astrophysics Data System (ADS)
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
Developing an Error Model for Ionospheric Phase Distortions in L-Band SAR and InSAR Data
NASA Astrophysics Data System (ADS)
Meyer, F. J.; Agram, P. S.
2014-12-01
Many of the recent and upcoming spaceborne SAR systems are operating in the L-band frequency range. The choice of L-band has a number of advantages especially for InSAR applications. These include deeper penetration into vegetation, higher coherence, and higher sensitivity to soil moisture. While L-band SARs are undoubtedly beneficial for a number of earth science disciplines, their signals are susceptible to path delay effects in the ionosphere. Many recent publications indicate that the ionosphere can have detrimental effects on InSAR coherence and phase. It has also been shown that the magnitude of these effects strongly depends on the time of day and geographic location of the image acquisition as well as on the coincident solar activity. Hence, in order to provide realistic error estimates for geodetic measurements derived from L-band InSAR, an error model needs to be developed that is capable of describing ionospheric noise. With this paper, we present a global ionospheric error model that is currently being developed in support of NASA's future L-band SAR mission NISAR. The system is based on a combination of empirical data analysis and modeling input from the ionospheric model WBMOD, and is capable of predicting ionosphere-induced phase noise as a function of space and time. The error model parameterizes ionospheric noise using a power spectrum model and provides the parameters of this model in a global 1x1 degree raster. From the power law model, ionospheric errors in deformation estimates can be calculated. In Polar Regions, our error model relies on a statistical analysis of ionospheric-phase noise in a large number of SAR data from previous L-band SAR missions such as ALOS PALSAR and JERS-1. The focus on empirical analyses is due to limitations of WBMOD in high latitude areas. Outside of the Polar Regions, the ionospheric model WBMOD is used to derive ionospheric structure parameters as a function of solar activity. The structure parameters are
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
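The gridding and binarization step can be sketched as follows; the coastline orientation, grid values, and agreement measure are illustrative assumptions, not CEM's actual verification metric:

```python
import numpy as np

def binarize_onshore(wind_dir_deg, onshore_normal_deg=90.0):
    """Binarize wind direction: 1 = onshore, 0 = offshore. A wind counts
    as onshore if its direction lies within 90 degrees of a hypothetical
    onshore normal (90 degrees here, i.e. an east-facing onshore direction)."""
    diff = (wind_dir_deg - onshore_normal_deg + 180.0) % 360.0 - 180.0
    return (np.abs(diff) < 90.0).astype(int)

# Hypothetical 2x2 grids of wind direction (degrees) at one 5-minute step:
# D(i,j;n) from the forecast, d(i,j;n) from gridded observations.
D = binarize_onshore(np.array([[80.0, 100.0], [250.0, 95.0]]))   # forecast
d = binarize_onshore(np.array([[85.0, 260.0], [255.0, 90.0]]))   # observed

agreement = (D == d).mean()   # fraction of grid cells that agree
```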
NASA Technical Reports Server (NTRS)
Luers, J. K.
1980-01-01
An initial value of pressure is required to derive the density and pressure profiles of the rocketborne rocketsonde sensor. This tie-on pressure value is obtained from the nearest rawinsonde launch at an altitude where overlapping rawinsonde and rocketsonde measurements occur. An error analysis was performed on the error sources in these sensors that contribute to the error in the tie-on pressure. It was determined that significant tie-on pressure errors result from radiation errors in the rawinsonde rod thermistor and from temperature calibration bias errors. To minimize the effect of these errors, radiation corrections should be made to the rawinsonde temperature, and the tie-on altitude should be chosen at the lowest altitude of overlapping data. Under these conditions the tie-on error, and consequently the resulting error in the Datasonde pressure and density profiles, is less than 1%. The effect of rawinsonde pressure and temperature errors on the wind and temperature versus height profiles of the rawinsonde was also determined.
Surprise and error: common neuronal architecture for the processing of errors and novelty.
Wessel, Jan R; Danielmeier, Claudia; Morton, J Bruce; Ullsperger, Markus
2012-05-30
According to recent accounts, the processing of errors and generally infrequent, surprising (novel) events share a common neuroanatomical substrate. Direct empirical evidence for this common processing network in humans is, however, scarce. To test this hypothesis, we administered a hybrid error-monitoring/novelty-oddball task in which the frequency of novel, surprising trials was dynamically matched to the frequency of errors. Using scalp electroencephalographic recordings and event-related functional magnetic resonance imaging (fMRI), we compared neural responses to errors with neural responses to novel events. In Experiment 1, independent component analysis of scalp ERP data revealed a common neural generator implicated in the generation of both the error-related negativity (ERN) and the novelty-related frontocentral N2. In Experiment 2, this pattern was confirmed by a conjunction analysis of event-related fMRI, which showed significantly elevated BOLD activity following both types of trials in the posterior medial frontal cortex, including the anterior midcingulate cortex (aMCC), the neuronal generator of the ERN. Together, these findings provide direct evidence of a common neural system underlying the processing of errors and novel events. This appears to be at odds with prominent theories of the ERN and aMCC. In particular, the reinforcement learning theory of the ERN may need to be modified because it may not suffice as a fully integrative model of aMCC function. Whenever course and outcome of an action violates expectancies (not necessarily related to reward), the aMCC seems to be engaged in evaluating the necessity of behavioral adaptation.
Sepsis: Medical errors in Poland.
Rorat, Marta; Jurek, Tomasz
2016-01-01
Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions trying to determine types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases of incorrectly recognised sepsis and 37 cases of insufficient diagnosis. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in such cases.
Skills, rules and knowledge in aircraft maintenance: errors in context
NASA Technical Reports Server (NTRS)
Hobbs, Alan; Williamson, Ann
2002-01-01
Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.
Frequency spectrum analyzer with phase-lock
Boland, Thomas J.
1984-01-01
A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal is comprised of a voltage controlled oscillator (VCO) which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator which transfers the input signal to a frequency domain, while providing an accurate absolute amplitude measurement of each frequency component of the input signal.
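The correlator's transfer of the input to the frequency domain with an absolute amplitude estimate can be sketched as quadrature correlation over an integer number of cycles; the sampling parameters here are illustrative:

```python
import math

def component_amplitude(samples, dt, freq):
    """Estimate the absolute amplitude of one frequency component by
    correlating the sampled signal with quadrature references."""
    n = len(samples)
    c = sum(x * math.cos(2 * math.pi * freq * k * dt) for k, x in enumerate(samples))
    s = sum(x * math.sin(2 * math.pi * freq * k * dt) for k, x in enumerate(samples))
    return 2.0 * math.hypot(c, s) / n

dt = 1e-4                                     # sample period, s
sig = [3.0 * math.sin(2 * math.pi * 100.0 * k * dt) for k in range(10000)]
amp = component_amplitude(sig, dt, 100.0)     # recovers the 3.0 amplitude
```

Because the correlation is taken over a whole number of signal periods, the quadrature sums isolate the 100 Hz component exactly, independent of its phase.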
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
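The reward prediction error described above is captured computationally by temporal-difference (TD) learning, where delta = r + gamma*V(s') - V(s). The two-state task, learning rate, and discount factor below are illustrative choices, not parameters from the study:

```python
def td_update(V, s, s_next, reward, gamma=0.9, alpha=0.1):
    """One TD(0) update; returns the prediction error delta."""
    delta = reward + gamma * V[s_next] - V[s]   # prediction error
    V[s] += alpha * delta                        # value update
    return delta

V = {"cue": 0.0, "reward_state": 0.0, "end": 0.0}

# Repeatedly deliver a reward of 1 after the cue: the prediction error is
# large at first (positive surprise) and shrinks toward zero as the reward
# becomes fully predicted, mirroring the dopamine response.
deltas = []
for _ in range(200):
    td_update(V, "cue", "reward_state", 0.0)
    deltas.append(td_update(V, "reward_state", "end", 1.0))
```

The first delta equals the full reward (maximal surprise); after training it approaches zero, the baseline response to a fully predicted reward.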
Congenital errors of folate metabolism.
Zittoun, J
1995-09-01
Congenital errors of folate metabolism can be related either to defective transport of folate through various cells or to defective intracellular utilization of folate due to some enzyme deficiencies. Defective transport of folate across the intestine and the blood-brain barrier was reported in the condition 'Congenital Malabsorption of Folate'. This disease is characterized by a severe megaloblastic anaemia of early appearance associated with mental retardation. Anaemia is folate-responsive, but neurological symptoms are only poorly improved because of the inability to maintain adequate levels of folate in the CSF. A familial defect of cellular uptake was described in a family with a high frequency of aplastic anaemia or leukaemia. An isolated defect in folate transport into CSF was identified in a patient suffering from a cerebellar syndrome and pyramidal tract dysfunction. Among enzyme deficiencies, some are well documented, others still putative. Methylenetetrahydrofolate reductase deficiency is the most common. The main clinical findings are neurological signs (mental retardation, seizures, rarely schizophrenic syndromes) or vascular disease, without any haematological abnormality. Low levels of folate in serum, red blood cells and CSF associated with homocystinuria are constant. Methionine synthase deficiency is characterized by a megaloblastic anaemia occurring early in life that is more or less folate-responsive and associated with mental retardation. Glutamate formiminotransferase-cyclodeaminase deficiency is responsible for massive excretion of formiminoglutamic acid but megaloblastic anaemia is not constant. The clinical findings are a more or less severe mental or physical retardation. Dihydrofolate reductase deficiency was reported in three children presenting with a megaloblastic anaemia a few days or weeks after birth, which responded to folinic acid. The possible relationship between congenital disorders such as neural tube defects or
Influence of satellite geometry, range, clock, and altimeter errors on two-satellite GPS navigation
NASA Astrophysics Data System (ADS)
Bridges, Philip D.
Flight tests were conducted at Yuma Proving Grounds, Yuma, AZ, to determine the performance of a navigation system capable of using only two GPS satellites. The effect of satellite geometry, range error, and altimeter error on the horizontal position solution was analyzed for time- and altitude-aided GPS navigation (two satellites + altimeter + clock). The east and north position errors were expressed as functions of satellite range error, altimeter error, and the east and north Dilution of Precision. The equations for the Dilution of Precision were derived as functions of satellite azimuth and elevation angles for the two-satellite case. The expressions for the position error were then used to analyze the flight test data. The results showed the correlation between satellite geometry and position error, the increase in range error due to clock drift, and the impact of range and altimeter errors on the east and north position errors.
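For the altitude- and clock-aided case described above, the east and north DOP follow from a 2×2 geometry matrix built from the two satellites' azimuth and elevation angles. A minimal sketch (function name and angle conventions are illustrative, not from the paper):

```python
import math

def east_north_dop(az_el_pairs):
    """East/North DOP for altitude- and clock-aided GPS (two satellites).

    With altitude and time aiding, only the horizontal position remains
    unknown, so the geometry matrix G keeps just the east/north components
    of each unit line-of-sight vector. Azimuth is measured clockwise from
    north; both angles are in degrees. Illustrative reconstruction, not
    the paper's exact derivation.
    """
    G = []
    for az, el in az_el_pairs:
        az_r, el_r = math.radians(az), math.radians(el)
        G.append((math.cos(el_r) * math.sin(az_r),   # east component
                  math.cos(el_r) * math.cos(az_r)))  # north component
    # (G^T G)^{-1} for the 2x2 case, written out explicitly.
    a = sum(g[0] * g[0] for g in G)
    b = sum(g[0] * g[1] for g in G)
    d = sum(g[1] * g[1] for g in G)
    det = a * d - b * b
    edop = math.sqrt(d / det)
    ndop = math.sqrt(a / det)
    return edop, ndop
```

Two satellites on the horizon due north and due east give EDOP = NDOP = 1; raising both to 45° elevation inflates each DOP by √2, illustrating the geometry dependence the flight tests exhibited.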
Medication Errors in Outpatient Pediatrics.
Berrier, Kyla
2016-01-01
Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1980-01-01
Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correcting the sources of human error requires reconstructing the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.
Further characterization of the influence of crowding on medication errors
Watts, Hannah; Nasim, Muhammad Umer; Sweis, Rolla; Sikka, Rishi; Kulstad, Erik
2013-01-01
Study Objectives: Our prior analysis suggested that error frequency increases disproportionately with emergency department (ED) crowding. To further characterize this association, we measured it while controlling for the number of charts reviewed and the presence of ambulance diversion status. We hypothesized that errors would occur significantly more frequently as crowding increased, even after controlling for higher patient volumes. Materials and Methods: We performed a prospective, observational study in a large, community hospital ED from May to October of 2009. Our ED has full-time pharmacists who review orders of patients to help identify errors prior to their causing harm. Research volunteers shadowed our ED pharmacists over discrete 4-hour time periods during their reviews of orders on patients in the ED. The total numbers of charts reviewed and errors identified were documented along with details of each error's type, severity, and category. We then measured the correlation between error rate (number of errors divided by total number of charts reviewed) and ED occupancy rate while controlling for diversion status during the observational period. We estimated a sample size requirement of at least 45 errors identified to allow detection of an effect size of 0.6 based on our historical data. Results: During 324 hours of surveillance, 1171 charts were reviewed and 87 errors were identified. Median error rate per 4-hour block was 5.8% of charts reviewed (IQR 0-13). No significant change was seen with ED occupancy rate (Spearman's rho = -0.08, P = .49). Median error rate during times on ambulance diversion was almost twice as large (11%, IQR 0-17), but this rate did not reach statistical significance in univariate or multivariate analysis. Conclusions: Error frequency appears to remain relatively constant across the range of crowding in our ED when controlling for patient volume via the quantity of orders reviewed. Error quantity therefore increases with crowding
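The study's key statistic, a rank correlation between per-block error rate and occupancy, is straightforward to reproduce. A sketch with made-up numbers (the data below are illustrative, not the study's):

```python
def spearman_rho(x, y):
    """Spearman rank correlation (no ties assumed): Pearson correlation
    of the rank-transformed values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Error rate per 4-hour block = errors found / charts reviewed.
charts = [12, 20, 15, 30, 25]          # illustrative block data
errors = [1, 0, 2, 3, 1]
occupancy = [0.6, 0.9, 0.7, 1.2, 1.0]  # ED occupancy rate per block
rates = [e / c for e, c in zip(errors, charts)]
rho = spearman_rho(rates, occupancy)
```

Controlling the rate by the number of charts reviewed, as here, is what lets the study distinguish "more errors because more orders" from "more errors per order".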
Evaluating Spectral Signals to Identify Spectral Error
Bazar, George; Kovacs, Zoltan; Tsenkova, Roumiana
2016-01-01
Since the precision and accuracy of a chemometric model are highly influenced by the quality of the raw spectral data, it is very important to evaluate the recorded spectra and describe the erroneous regions before qualitative and quantitative analyses or detailed band assignment. This paper provides a collection of basic spectral analytical procedures and demonstrates their applicability in detecting errors in near infrared data. Evaluation methods based on standard deviation, coefficient of variation, mean centering and smoothing techniques are presented. Applications of derivatives with various gap sizes, even below the bandpass of the spectrometer, are shown to evaluate the level of spectral errors and find their origin. The possibility of prudent measurement of the third overtone region of water is also highlighted by evaluating a complex dataset recorded with various spectrometers. PMID:26731541
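Two of the screening tools mentioned, gap derivatives and the coefficient of variation across replicate scans, can be sketched in a few lines (illustrative implementations, not the authors' code):

```python
def gap_derivative(spectrum, gap):
    """Central-difference 'gap' derivative used to probe spectral noise:
    large gaps smooth, while gaps below the instrument bandpass emphasize
    point-to-point error. Minimal sketch of the idea."""
    return [(spectrum[i + gap] - spectrum[i - gap]) / (2.0 * gap)
            for i in range(gap, len(spectrum) - gap)]

def coefficient_of_variation(replicates):
    """CV across replicate scans at each wavelength; a high CV flags an
    erroneous spectral region. `replicates` is a list of equal-length
    spectra."""
    n = len(replicates)
    out = []
    for vals in zip(*replicates):
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / (n - 1)
        out.append((var ** 0.5) / mean)
    return out
```

A smooth (e.g., linear) region yields a flat gap derivative at any gap size, so structure that appears only at small gaps points to noise rather than chemistry.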
Measurement Error and Equating Error in Power Analysis
ERIC Educational Resources Information Center
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?
ERIC Educational Resources Information Center
Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.
2007-01-01
This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…
ISA accelerometer onboard the Mercury Planetary Orbiter: error budget
NASA Astrophysics Data System (ADS)
Iafolla, Valerio; Lucchesi, David M.; Nozzoli, Sergio; Santoli, Francesco
2007-03-01
We have estimated a preliminary error budget for the Italian Spring Accelerometer (ISA) that will be allocated onboard the Mercury Planetary Orbiter (MPO) of the European Space Agency (ESA) space mission to Mercury named BepiColombo. The role of the accelerometer is to remove from the list of unknowns the non-gravitational accelerations that perturb the gravitational trajectory followed by the MPO in the strong radiation environment that characterises the orbit of Mercury around the Sun. Such a role is of fundamental importance in the context of the very ambitious goals of the Radio Science Experiments (RSE) of the BepiColombo mission. We have subdivided the errors on the accelerometer measurements into two main families: (i) the pseudo-sinusoidal errors and (ii) the random errors. The former are characterised by a periodic behaviour with the frequency of the satellite mean anomaly and its higher order harmonic components, i.e., they are deterministic errors. The latter are characterised by an unknown frequency distribution and we assumed for them a noise-like spectrum, i.e., they are stochastic errors. Among the pseudo-sinusoidal errors, the main contribution is due to the effects of the gravity gradients and the inertial forces, while among the random-like errors the main disturbing effect is due to the MPO centre-of-mass displacements produced by the onboard High Gain Antenna (HGA) movements and by the fuel consumption and sloshing. Subtler still are the random errors produced by the MPO attitude corrections necessary to guarantee the nadir pointing of the spacecraft. We have therefore formulated the ISA error budget and the requirements for the satellite in order to guarantee an orbit reconstruction for the MPO spacecraft with an along-track accuracy of about 1 m over the orbital period of the satellite around Mercury, so as to satisfy the RSE requirements.
Error estimates and specification parameters for functional renormalization
Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof
2013-07-15
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by truncation depend not only on the choice of the retained information but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS-BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.
Impact of measurement error on testing genetic association with quantitative traits.
Liao, Jiemin; Li, Xiang; Wong, Tien-Yin; Wang, Jie Jin; Khor, Chiea Chuen; Tai, E Shyong; Aung, Tin; Teo, Yik-Ying; Cheng, Ching-Yu
2014-01-01
Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in the presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, measurement error of one standard deviation (SD) in a standard normally distributed trait required a one-fold increase in sample size for comparison of means and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p < 10^-5) for the two cataract grading scales, while replication results for genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable. PMID:24475218
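The headline sample-size result can be reproduced from the variance-attenuation argument: adding measurement error with variance se² inflates the observed trait variance from s² to s² + se², attenuating the standardized effect. A sketch of the stated relationship (the squaring for variance comparisons is inferred from the abstract's numbers, so treat it as an assumption):

```python
def sample_size_inflation(sd_error, sd_trait=1.0):
    """Factor by which sample size must grow to keep power when a trait
    is measured with error. For comparing means the factor is
    (s^2 + se^2) / s^2; for comparing variances it is that factor
    squared. With se = 1 SD of a standard-normal trait this gives 2x
    (a 'one-fold increase') for means and 4x (a 'three-fold increase')
    for variances, matching the abstract. Sketch of the stated
    relationship, not the paper's non-central-F derivation."""
    f = (sd_trait ** 2 + sd_error ** 2) / sd_trait ** 2
    return {"means": f, "variances": f ** 2}
```

The asymmetry is why variability (variance) phenotypes are hit so much harder by noisy measurement than mean phenotypes.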
Improving medication administration error reporting systems. Why do errors occur?
Wakefield, B J; Wakefield, D S; Uden-Holman, T
2000-01-01
Monitoring medication administration errors (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify errors, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of error detection, incident reporting can also be a time-consuming process depending on the complexity or "user-friendliness" of the reporting system. Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize that an error has actually occurred; 2) believe the error is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed an MAE and the fear of punishment for reporting a mistake (either one's own or another's).
The undetected error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Mceliece, Robert J.
1988-01-01
McEliece and Swanson (1986) offered an upper bound on P(E)u, the decoder error probability given that u symbol errors occur. In the present study, using combinatorial techniques such as the principle of inclusion and exclusion, an exact formula for P(E)u is derived. The P(E)u of a maximum distance separable code is observed to approach Q rapidly as u gets large, where Q is the probability that a completely random error pattern will cause decoder error. An upper bound on the deviation P(E)u/Q - 1 is derived and shown to decrease nearly exponentially as u increases. This proves analytically that P(E)u indeed approaches Q as u becomes large, and that a law of large numbers comes into play.
Error-disturbance uncertainty relations studied in neutron optics
NASA Astrophysics Data System (ADS)
Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji
2016-09-01
Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by formulations in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative, universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR) describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin component, to test EDURs. We demonstrate that Heisenberg's original EDUR is violated and that Ozawa's and Branciard's EDURs hold over a wide range of experimental parameters, applying a new measurement procedure referred to as the two-state method.
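The three relations under test can be stated compactly. In the standard notation, ε(A) is the error of the A measurement, η(B) the disturbance imparted to B, σ the state's standard deviation, and C_AB = |⟨[A,B]⟩|/2; this is a common textbook form of the relations, and the paper's conventions may differ slightly:

```latex
% Naive Heisenberg-type error-disturbance relation (experimentally violated):
\epsilon(A)\,\eta(B) \ge C_{AB}
% Ozawa (2003), universally valid but not tight:
\epsilon(A)\,\eta(B) + \epsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \ge C_{AB}
% Branciard (2013), tight:
\epsilon(A)^2\sigma(B)^2 + \eta(B)^2\sigma(A)^2
  + 2\,\epsilon(A)\,\eta(B)\sqrt{\sigma(A)^2\sigma(B)^2 - C_{AB}^2} \ge C_{AB}^2
% where C_{AB} = \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|
```

The extra mixed terms in Ozawa's relation are what allow ε(A)η(B) alone to dip below C_AB without contradiction, which is precisely the violation the neutron experiment records.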
NASA Astrophysics Data System (ADS)
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty in the present and future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when assuming perfect transport in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurements at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers behind it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good indication of the magnitude of the impact of transport and modelling errors on the fluxes estimated with current and future networks. It is shown that transport and modelling errors
NASA Astrophysics Data System (ADS)
Dobslaw, Henryk; Bergmann-Wolf, Inga; Forootan, Ehsan; Dahle, Christoph; Mayer-Gürr, Torsten; Kusche, Jürgen; Flechtner, Frank
2016-05-01
A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency is now available over the period 1995-2006. The dataset contains realizations of (1) errors at large spatial scales assessed individually for periods of 10-30, 3-10, and 1-3 days, the S1 atmospheric tide, and sub-diurnal periods; (2) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (3) errors due to physical processes not represented in currently available de-aliasing products. The model is provided in two separate sets of Stokes coefficients to allow for a flexible re-scaling of the overall error level to account for potential future improvements in atmosphere and ocean mass variability models. Error magnitudes for the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, those error estimates are approximately confirmed from a variance component estimation based on GRACE daily normal equations. Future mission performance simulations based on the updated Earth System Model and the realistically perturbed de-aliasing model indicate that for GRACE-type missions only moderate reductions of de-aliasing errors can be expected from a second satellite pair in a shifted polar orbit. Substantially more accurate global gravity fields are obtained when a second pair of satellites in a moderately inclined orbit is added, which largely stabilizes the global gravity field solutions due to its rotated sampling sensitivity.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
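The innermost (n, n-16) CRC can be sketched with the CCITT generator polynomial x^16 + x^12 + x^5 + 1 commonly used in CCSDS framing. The "CCITT-FALSE" parameters below (0xFFFF preset, no reflection) are an assumption about the simulator's exact configuration:

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF):
    """Bitwise MSB-first CRC-16 with the CCITT generator 0x1021, the
    generator behind the CCSDS (n, n-16) frame check. Parameters follow
    the common 'CCITT-FALSE' variant; the simulator described above may
    differ in initialization details."""
    crc = init
    for byte in data:
        crc ^= byte << 8          # bring the next byte into the register
        for _ in range(8):        # long-divide one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

Appending the 16-bit result to the information bits yields the (n, n-16) codeword; the receiver recomputes the CRC to detect residual errors that escape the RS/convolutional outer layers.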
Joint angle and Doppler frequency estimation of coherent targets in monostatic MIMO radar
NASA Astrophysics Data System (ADS)
Cao, Renzheng; Zhang, Xiaofei
2015-05-01
This paper discusses the problem of joint direction of arrival (DOA) and Doppler frequency estimation of coherent targets in a monostatic multiple-input multiple-output (MIMO) radar. In the proposed algorithm, we first perform a reduced-dimension (RD) transformation on the received signal and then use the forward spatial smoothing (FSS) technique to decorrelate the coherent signals, obtaining joint estimates of DOA and Doppler frequency via the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. The jointly estimated parameters of the proposed RD-FSS-ESPRIT are automatically paired. Compared with the conventional FSS-ESPRIT algorithm, our RD-FSS-ESPRIT algorithm has much lower complexity and better estimation performance for both DOA and frequency. The variance of the estimation error and the Cramér-Rao bound of the DOA and frequency estimation are derived. Simulation results show the effectiveness and improvement of our algorithm.
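The decorrelation step can be illustrated on a small uniform linear array: two fully coherent sources collapse the covariance to rank 1, and forward spatial smoothing restores the rank that subspace methods such as ESPRIT need. Array size, angles, and subarray length below are illustrative, not the paper's simulation settings:

```python
import numpy as np

# 8-element half-wavelength ULA, two fully coherent sources.
M, L = 8, 5                          # sensors, subarray length
angles = np.deg2rad([10.0, -20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
a = A @ np.array([1.0, 1.0])         # coherent: both share one waveform
R = np.outer(a, a.conj())            # noiseless covariance: rank 1

# Forward spatial smoothing: average the covariances of the
# M - L + 1 overlapping forward subarrays of length L.
R_fss = np.zeros((L, L), dtype=complex)
for s in range(M - L + 1):
    sub = a[s:s + L]
    R_fss += np.outer(sub, sub.conj())
R_fss /= (M - L + 1)
```

The subarray phase progression acts like an artificial set of snapshots, so the smoothed covariance regains rank 2 and the two coherent targets become separable, at the cost of a reduced effective aperture (L instead of M).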
Higgins, Jane; Bezjak, Andrea; Hope, Andrew; Panzarella, Tony; Li, Winnie; Cho, John B.C.; Craig, Tim; Brade, Anthony; Sun, Alexander; Bissonnette, Jean-Pierre
2011-08-01
Purpose: To assess the relative effectiveness of five image-guidance (IG) frequencies on reducing patient positioning inaccuracies and setup margins for locally advanced lung cancer patients. Methods and Materials: Daily cone-beam computed tomography data for 100 patients (4,237 scans) were analyzed. Subsequently, four less-than-daily IG protocols were simulated using these data (no IG, first 5-day IG, weekly IG, and alternate-day IG). The frequency and magnitude of residual setup error were determined. The less-than-daily IG protocols were compared against daily IG, the assumed reference standard. Finally, the population-based setup margins were calculated. Results: With the less-than-daily IG protocols, 20-43% of fractions incurred residual setup errors ≥5 mm; daily IG reduced this to 6%. With the exception of the first 5-day IG, reductions in systematic error (Σ) occurred as the imaging frequency increased, and only daily IG provided notable reductions in random error (σ) (Σ = 1.5-2.2 mm, σ = 2.5-3.7 mm; Σ = 1.8-2.6 mm, σ = 2.5-3.7 mm; and Σ = 0.7-1.0 mm, σ = 1.7-2.0 mm for no IG, first 5-day IG, and daily IG, respectively). An overall significant difference in the mean setup error was present between the first 5-day IG and daily IG (p < .0001). The derived setup margins were 5-9 mm for less-than-daily IG and 3-4 mm with daily IG. Conclusion: Daily cone-beam computed tomography substantially reduced the setup error and could permit setup margin reduction and lead to a reduction in normal tissue toxicity for patients undergoing conventionally fractionated lung radiotherapy. Using first 5-day cone-beam computed tomography was suboptimal for lung patients, given the inability to reduce the random error and the potential for the systematic error to increase throughout the treatment course.
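The reported margins are consistent with the widely used van Herk recipe M = 2.5Σ + 0.7σ. The abstract does not name its margin formula, so the recipe below is an assumption, but it reproduces the 3-4 mm daily-IG margins from the quoted Σ and σ:

```python
def ptv_setup_margin(systematic_mm, random_mm):
    """Population setup margin from the van Herk recipe
    M = 2.5*Sigma + 0.7*sigma (an assumed formula; the study may have
    used a different margin model). Systematic error Sigma dominates,
    which is why reducing Sigma via frequent imaging shrinks margins
    far more than reducing sigma does."""
    return 2.5 * systematic_mm + 0.7 * random_mm
```

Plugging in the daily-IG values gives 2.5(0.7) + 0.7(1.7) ≈ 3 mm and 2.5(1.0) + 0.7(2.0) ≈ 4 mm, while the no-IG upper range 2.5(2.2) + 0.7(3.7) ≈ 8 mm sits inside the reported 5-9 mm band.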
Error probabilities in optical PPM receivers with Gaussian mixture densities
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1982-01-01
A Gaussian mixture density arises when a discrete variable (e.g., a photodetector count variable) is added to a continuous Gaussian variable (e.g., thermal noise). Making use of some properties of photomultiplier Gaussian mixture distributions, approximate error probability formulas can be derived. These appear as averages of M-ary orthogonal Gaussian error probabilities. The use of a pure Gaussian assumption is considered and, when properly defined, provides an accurate upper bound on performance.
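The "average of M-ary orthogonal Gaussian error probabilities" can be made concrete: condition on the discrete count, evaluate the Gaussian M-ary error probability at the resulting mean shift, and average over the count distribution. The function names and the count-to-mean gain below are hypothetical, not the paper's notation:

```python
import math

def mary_correct_prob(mu, M, lo=-8.0, hi=8.0, n=4000):
    """P(correct) for M-ary orthogonal signaling in Gaussian noise:
    the integral of phi(x) * Phi(x + mu)**(M-1), where mu is the mean
    separation in noise-sigma units. Simple trapezoidal quadrature."""
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi(x) * Phi(x + mu) ** (M - 1)
    return total * h

def mixture_error_prob(count_probs, gain, M):
    """Gaussian-mixture PPM error probability: the photodetector count k
    shifts the Gaussian decision variable by gain*k, so P(error) is the
    count-weighted average of M-ary Gaussian error probabilities.
    `count_probs` maps k -> P(k); `gain` is a hypothetical parameter."""
    return sum(p * (1.0 - mary_correct_prob(gain * k, M))
               for k, p in count_probs.items())
```

With zero mean shift the M hypotheses are indistinguishable and P(correct) collapses to 1/M, a useful sanity check on the quadrature.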
Sensitivity of feedforward neural networks to weight errors
Stevenson, M.; Widrow, B. (Dept. of Engineering-Economic Systems); Winter, R.
1990-03-01
An important consideration when implementing neural networks with digital or analog hardware of limited precision is the sensitivity of neural networks to weight errors. In this paper, the authors analyze the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. Surprisingly, the probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
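The qualitative findings, error probability growing with the percentage weight change but nearly independent of the number of weights, can be checked by simulation. A Monte-Carlo sketch of the setting (binary ±1 inputs, Gaussian weights), not the paper's analytic approximation:

```python
import random

def adaline_flip_prob(n_weights, pct_change, trials=2000, seed=0):
    """Monte-Carlo estimate of the probability that a single Adaline's
    output sign flips when each weight is perturbed by a zero-mean
    Gaussian error whose standard deviation is pct_change times the
    weight magnitude. Illustrative of the paper's setting, not its
    derivation."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        x = [rng.choice((-1.0, 1.0)) for _ in range(n_weights)]
        w = [rng.gauss(0.0, 1.0) for _ in range(n_weights)]
        s = sum(wi * xi for wi, xi in zip(w, x))
        s_pert = sum((wi + rng.gauss(0.0, pct_change * abs(wi))) * xi
                     for wi, xi in zip(w, x))
        flips += (s > 0) != (s_pert > 0)
    return flips / trials
```

Both the signal sum and the perturbation sum scale with the square root of the number of weights, so their ratio, and hence the flip probability, depends essentially only on the percentage change, mirroring the paper's independence result.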
Overcoming time-integration errors in SINDA's FWDBCK solution routine
NASA Technical Reports Server (NTRS)
Skladany, J. T.; Costello, F. A.
1984-01-01
The FWDBCK time step, which is usually chosen intuitively to achieve adequate accuracy at reasonable computational cost, can in fact lead to large errors. NASA observed such errors in solving cryogenic problems on the COBE spacecraft, but a similar error is also demonstrated for a single node radiating to space. An algorithm has been developed for selecting the time step during the course of the simulation. The error incurred when the time derivative is replaced by the FWDBCK time difference can be estimated from the Taylor-series expression for the temperature. The algorithm selects the time step to keep this error small. The efficacy of the method is demonstrated on the COBE and single-node problems.
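The step-selection idea can be sketched for the single-node radiation problem: estimate the leading Taylor-series error term and choose the step so it stays below a tolerance. For simplicity the sketch uses an explicit first-order update rather than the implicit FWDBCK scheme, so it illustrates only the step-size logic, not SINDA's solver:

```python
def integrate_taylor_stepped(T0, t_end, a, tol=1e-4):
    """Integrate a single node radiating to space, dT/dt = -a*T**4,
    choosing each step from the Taylor-series error estimate
    e ~ (dt**2 / 2) * |T''| with T'' = f'(T) * f(T) along the
    trajectory. Illustrative of the step-selection idea only."""
    T, t = T0, 0.0
    while t < t_end:
        f = -a * T ** 4                  # dT/dt
        fp = -4.0 * a * T ** 3           # df/dT
        curvature = abs(fp * f)          # |T''| along the solution
        dt = (2.0 * tol / curvature) ** 0.5 if curvature > 0 else t_end - t
        dt = min(dt, t_end - t)
        T += dt * f                      # first-order update
        t += dt
    return T

# Radiative cooling admits the exact solution T(t) = (T0**-3 + 3*a*t)**(-1/3),
# so the adaptive result can be checked directly (parameters illustrative).
a, T0, t_end = 1e-9, 300.0, 2.0e4
T_num = integrate_taylor_stepped(T0, t_end, a)
T_exact = (T0 ** -3 + 3 * a * t_end) ** (-1.0 / 3.0)
```

Because the curvature term falls off as T drops, the algorithm automatically takes tiny steps during the fast initial cooling and large steps later, which is exactly the behavior a fixed intuitive step cannot provide.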
Human Error: A Concept Analysis
NASA Technical Reports Server (NTRS)
Hansen, Frederick D.
2007-01-01
Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.
Explaining Errors in Children's Questions
ERIC Educational Resources Information Center
Rowland, Caroline F.
2007-01-01
The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…
Dual Processing and Diagnostic Errors
ERIC Educational Resources Information Center
Norman, Geoff
2009-01-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…
Quantifying error distributions in crowding.
Hanus, Deborah; Vul, Edward
2013-03-22
When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.
Children's Scale Errors with Tools
ERIC Educational Resources Information Center
Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi
2011-01-01
Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…
Challenge and error: critical events and attention-related errors.
Cheyne, James Allan; Carriere, Jonathan S A; Solman, Grayden J F; Smilek, Daniel
2011-12-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses; resource-depleting cognitions interfering with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence often leading to further attention lapses potentially initiating a spiral into more serious errors. We investigated this challenge-induced error↔attention-lapse model using the Sustained Attention to Response Task (SART), a GO-NOGO task requiring continuous attention and response to a number series and withholding of responses to a rare NOGO digit. We found response speed and increased commission errors following task challenges to be a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings to those based on choice paradigms and argue that the present findings have implications for the generality of conflict monitoring and control models.
Human error in recreational boating.
McKnight, A James; Becker, Wayne W; Pettit, Anthony J; McKnight, A Scott
2007-03-01
Each year over 600 people die and more than 4000 are reported injured in recreational boating accidents. As with most other accidents, human error is the major contributor. U.S. Coast Guard reports of 3358 accidents were analyzed to identify errors in each of the boat types by which statistics are compiled: auxiliary (motor) sailboats, cabin motorboats, canoes and kayaks, house boats, personal watercraft, open motorboats, pontoon boats, row boats, sail-only boats. The individual errors were grouped into categories on the basis of similarities in the behavior involved. Those presented here are the categories accounting for at least 5% of all errors when summed across boat types. The most revealing and significant finding is the extent to which the errors vary across types. Since boating is carried out with one or two types of boats for long periods of time, effective accident prevention measures, including safety instruction, need to be geared to individual boat types.
Angle interferometer cross axis errors
Bryan, J.B.; Carter, D.L.; Thompson, S.L.
1994-01-01
Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.
Onorbit IMU alignment error budget
NASA Technical Reports Server (NTRS)
Corson, R. W.
1980-01-01
The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
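The abstract does not spell out the combination rule, but the standard "rational way" to combine independent 1-sigma error sources into a single budget is the root sum of squares. The sketch below assumes that rule; the per-axis contributions are hypothetical, not values from the report.

```python
import math

def rss(errors):
    """Combine independent 1-sigma error sources in quadrature
    (root sum of squares) into a single budget number."""
    return math.sqrt(sum(e * e for e in errors))

# Hypothetical per-axis contributions, in arc seconds:
contributions = [50.0, 40.0, 25.0, 20.0]
print(rss(contributions))  # ~71.6 arc seconds total
```

Correlated sources would instead need the full covariance treatment hinted at by the Monte Carlo verification.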
Errors in velocities determined from EISCAT data
NASA Astrophysics Data System (ADS)
Bromage, B. J. I.
1984-07-01
It has been reported that unusually large errors are observed in velocities derived from EISCAT data in which the power in the returned signal falls to less than 2 percent of that in the background noise. This suggestion has been investigated using a specialised interactive analysis package, and possible causes, both in the data and in the analysis, were considered. Subsequently, an adaptation of the usual procedure for velocity determination was considered. This involved applying the normal method to the raw data without first subtracting the background noise. The advantages of this technique are discussed, in particular for data in which the returned signal is relatively weak.
Linear unbiased prediction of clock errors.
Shmaliy, Yuriy S
2009-09-01
In this paper, we propose a new formula for linear unbiased prediction of the local clock timescales. To predict future errors over all the measurement data, a new gain is derived for the p-step ramp unbiased finite impulse response (FIR) predictor. We then show that this gain gives the best linear unbiased fit suitable for forming the prediction vector. The predictor proposed is consistent with linear regression and best linear unbiased estimator. Applications are given for a crystal clock and the USNO Master Clock.
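The paper's exact p-step ramp FIR gain is not reproduced here, but the idea it is consistent with — a best linear unbiased (straight-line) fit to the recent clock states, extrapolated p steps ahead — can be sketched as follows. The drift data are invented for illustration.

```python
def ramp_predict(x, p):
    """Predict the value p steps past the last sample by fitting a
    straight line (ramp) to the samples in x via least squares."""
    N = len(x)
    t = list(range(N))
    tbar = sum(t) / N
    xbar = sum(x) / N
    slope = sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, x)) / \
            sum((ti - tbar) ** 2 for ti in t)
    intercept = xbar - slope * tbar
    return intercept + slope * (N - 1 + p)

# A clock error drifting linearly at 2 ns per step, starting at 5 ns:
hist = [5.0 + 2.0 * k for k in range(10)]
print(ramp_predict(hist, 3))  # → 29.0 (extrapolated 3 steps past the last sample)
```

The FIR formulation in the paper achieves the same unbiased ramp fit with a fixed convolution gain rather than a per-call regression.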
Space charge enhanced, plasma gradient induced error in satellite electric field measurements
NASA Technical Reports Server (NTRS)
Diebold, D. A.; Hershkowitz, N.; Dekock, J. R.; Intrator, T. P.; Lee, S-G.; Hsieh, M-K.
1994-01-01
In magnetospheric plasmas it is possible for plasma gradients to cause error in electric field measurements made by satellite double probes. The space charge enhanced plasma gradient induced error is discussed in general terms, the results of a laboratory experiment designed to illustrate this error are presented, and a simple expression that quantifies this error in a form that is readily applicable to satellite data is derived. The simple expression indicates that for a given probe bias current there is less error for cylindrical probes than for spherical probes. The expression also suggests that for Viking data the error is negligible.
Errors as allies: error management training in health professions education.
King, Aimee; Holder, Michael G; Ahmed, Rami A
2013-06-01
This paper adopts methods from the organisational team training literature to outline how health professions education can improve patient safety. We argue that health educators can improve training quality by intentionally encouraging errors during simulation-based team training. Preventable medical errors are inevitable, but encouraging errors in low-risk settings like simulation can allow teams to have better emotional control and foresight to manage the situation if it occurs again with live patients. Our paper outlines an innovative approach for delivering team training.
On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion.
Ricci, Luca; Taffoni, Fabrizio; Formica, Domenico
2016-01-01
The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements, that differ in frequency and amplitude of the movement. We found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Absolute and relative errors upper bounds are found respectively in the range [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°]. Alongside dynamic, static accuracy is thoroughly investigated, also with an emphasis on convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for the human motion related application. PMID:27612100
Theory and Simulation of Field Error Transport.
NASA Astrophysics Data System (ADS)
Dubin, D. H. E.
2007-11-01
The rate at which a plasma escapes across an applied magnetic field B due to symmetry-breaking electric or magnetic "field errors" is revisited. Such field errors cause plasma loss (or compression) in stellarators and tokamaks [H. E. Mynick, Phys. Plasmas 13, 058102 (2006)] and in nonneutral plasmas [Eggleston, Phys. Plasmas 14, 012302 (2007); Danielson et al., Phys. Plasmas 13, 055706 (2006)]. We study this process using idealized simulations that follow guiding centers in given trap fields, neglecting their collective effect on the evolution, but including collisions. Also, the Fokker-Planck equation describing the particle distribution is solved, and the predicted transport agrees with simulations in every applicable regime. When a field error of the form δφ(r, θ, z) = ε(r) e^{i(mθ - kz)} is applied to an infinite plasma column, the transport rates fall into the usual banana, plateau and fluid regimes. When the particles are axially confined by applied trap fields, the same three regimes occur. When an added "squeeze" potential produces a separatrix in the axial motion, the transport is enhanced, scaling roughly as (ν/B)^{1/2} δ² when ν < ω. For ω < ν < ω_B (where ω, ν and ω_B are the rotation, collision and axial bounce frequencies) there is also a 1/ν regime similar to that predicted for ripple-enhanced transport [Mynick, op. cit.].
Smit, A B
1993-06-01
The errors on consonant singletons made by children in the Iowa-Nebraska Articulation Norms Project (Smit, Hand, Freilinger, Bernthal, & Bird, 1990) were tabulated by age range and frequency. The prominent error types can usually be described as phonological processes, but there are other common errors as well, especially distortions of liquids and fricatives. Moreover, some of the relevant phonological processes appear to be restricted in the range of consonants or word-positions to which they apply. A metric based on frequency of use is proposed for determining that an error type is or is not atypical. Changes in frequency of error types over the age range are examined to determine if certain atypical error types are likely to be developmental, that is, likely to self-correct as the child matures. Finally, the clinical applications of these data for evaluation and intervention are explored.
Detection of Mendelian Consistent Genotyping Errors in Pedigrees
Cheung, Charles Y. K.; Thompson, Elizabeth A.; Wijsman, Ellen M.
2014-01-01
Detection of genotyping errors is a necessary step to minimize false results in genetic analysis. This is especially important when the rate of genotyping errors is high, as has been reported for high-throughput sequence data. To detect genotyping errors in pedigrees, Mendelian inconsistent (MI) error checks exist, as do multi-point methods that flag Mendelian consistent (MC) errors for sparse multi-allelic markers. However, few methods exist for detecting MC genotyping errors, particularly for dense variants on large pedigrees. Here we introduce an efficient method to detect MC errors even for very dense variants (e.g. SNPs and sequencing data) on pedigrees that may be large. Our method first samples inheritance vectors (IVs) using a moderately sparse but informative set of markers using a Markov chain Monte Carlo-based sampler. Using sampled IVs, we considered two test statistics to detect MC genotyping errors: the percentage of IVs inconsistent with observed genotypes (A1) or the posterior probability of error configurations (A2). Using simulations, we show that this method, even with the simpler A1 statistic, is effective for detecting MC genotyping errors in dense variants, with sensitivity almost as high as the theoretical best sensitivity possible. We also evaluate the effectiveness of this method as a function of parameters, when including the observed pattern for genotype, density of framework markers, error rate, allele frequencies, and number of sampled inheritance vectors. Our approach provides a line of defense against false findings based on the use of dense variants in pedigrees. PMID:24718985
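The inheritance-vector sampling machinery is beyond a short sketch, but the distinction the abstract rests on — Mendelian-inconsistent (MI) errors are trivially detectable from a single trio, Mendelian-consistent (MC) errors are not — can be illustrated with the basic trio check. Genotypes here are hypothetical two-allele pairs.

```python
from itertools import product

def mendelian_consistent(father, mother, child):
    """True if the child's genotype can be formed by taking one allele
    from each parent -- the basic Mendelian-inconsistency (MI) check."""
    return any(sorted((a, b)) == sorted(child)
               for a, b in product(father, mother))

# An MI error is flagged immediately: AA x AA cannot produce AB.
print(mendelian_consistent(("A", "A"), ("A", "A"), ("A", "B")))  # False
# An MC error slips through: AB x AB is compatible with AA, AB and BB,
# so a miscalled AB -> AA child is NOT flagged by this check.
print(mendelian_consistent(("A", "B"), ("A", "B"), ("A", "A")))  # True
```

Catching the second case is exactly what the paper's IV-based A1/A2 statistics are for: they use flanking markers to infer which parental haplotypes were actually transmitted.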
Regional Ionospheric Modelling for Single-Frequency Users
NASA Astrophysics Data System (ADS)
Boisits, Janina; Joldzic, Nina; Weber, Robert
2016-04-01
Ionospheric signal delays are a main error source in GNSS-based positioning. Thus, single-frequency receivers, which are frequently used nowadays, require additional ionospheric information to mitigate these effects. Within the Austrian Research Promotion Agency (FFG) project Regiomontan (Regional Ionospheric Modelling for Single-Frequency Users) a new and as realistic as possible model is used to obtain precise GNSS ionospheric signal delays. These delays will be provided to single-frequency users to significantly increase positioning accuracy. The computational basis is the Thin-Shell Model. For regional modelling a thin electron layer of the underlying model is approximated by a Taylor series up to degree two. The network used includes 22 GNSS reference stations in Austria and nearby. First results were calculated from smoothed code observations by forming the geometry-free linear combination. Satellite and station DCBs were applied. In a least squares adjustment the model parameters, consisting of the VTEC0 at the origin of the investigated area as well as the first and second derivatives of the electron content in longitude and latitude, were obtained with a temporal resolution of 1 hour. The height of the layer was kept fixed. The formal errors of the model parameters suggest an accuracy of the VTEC slightly better than 1 TECU for a user location within Austria. In a further step, the model parameters were derived from phase observations alone by using a levelling approach to mitigate common range biases. The formal errors of this model approach suggest an accuracy of about a few tenths of a TECU. For validation, the Regiomontan VTEC was compared to IGS TEC maps, showing very good agreement. Further, a comparison of pseudoranges was performed to calculate the 'true' error by forming the ionosphere-free linear combination on the one hand and by applying the Regiomontan model to L1 pseudoranges on the other. The resulting differences are mostly
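As a sketch of the first processing steps described above: the geometry-free code combination P2 − P1 cancels geometry and clocks, leaving the dispersive ionospheric term, and a thin-shell mapping function projects the resulting slant TEC to vertical. The shell height, the neglect of DCBs, and the sample pseudoranges are assumptions for illustration, not Regiomontan values.

```python
import math

F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies (Hz)
R_E, H = 6371e3, 450e3          # Earth radius and assumed shell height (m)

def stec_from_code(p1, p2):
    """Slant TEC (TECU) from the geometry-free combination of the L1/L2
    code pseudoranges (m). Geometry and clocks cancel; only the
    dispersive ionospheric delay remains (DCBs ignored in this sketch)."""
    return (p2 - p1) / (40.3 * (1 / F2**2 - 1 / F1**2)) / 1e16

def vtec(stec, zenith_rad):
    """Thin-shell mapping: project slant TEC to vertical at the ionospheric
    pierce point."""
    zp = math.asin(R_E / (R_E + H) * math.sin(zenith_rad))
    return stec * math.cos(zp)

# 3 m of differential delay between the two code observables:
s = stec_from_code(100000.0, 100003.0)
print(s, vtec(s, math.radians(45)))  # slant and vertical TEC, in TECU
```

The project's model then fits VTEC0 and its first and second spatial derivatives to many such VTEC samples in a least squares adjustment.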
Syntactic and Semantic Errors in Radiology Reports Associated With Speech Recognition Software.
Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J
2015-01-01
Speech recognition software (SRS) has many benefits, but also increases the frequency of errors in radiology reports, which could impact patient care. As part of a quality control project, 13 trained medical transcriptionists proofread 213,977 SRS-generated signed reports from 147 different radiologists over a 40-month time interval. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialties, and time periods using χ² analysis and multiple logistic regression, as appropriate. 20,759 (9.7%) reports contained errors; 3,992 (1.9%) contained material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (P<.001). Error proportion varied significantly among radiologists and between imaging subspecialties (P<.001). Errors were more common in cross-sectional reports (vs. plain radiography) (OR, 3.72), reports reinterpreting results of outside examinations (vs. in-house) (OR, 1.55), and procedural studies (vs. diagnostic) (OR, 1.91) (all P<.001). Dictation microphone upgrade did not affect error rate (P=.06). Error rate decreased over time (P<.001). PMID:26262224
Error compensation for thermally induced errors on a machine tool
Krulewich, D.A.
1996-11-08
Heat flow from internal and external sources and the environment create machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is how to locate the temperature sensors and to determine the number of required temperature sensors. This research develops a method to determine the number and location of temperature measurements.
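A minimal instance of the selected model — a linear least-squares relation between a temperature reading and the measured deflection — might look like the sketch below. The calibration data are invented, and the real method addresses several sensors and their placement; this shows only the single-sensor core.

```python
def fit_linear(temps, deflections):
    """Least-squares fit of deflection = a + b*T from calibration runs --
    the simple linear thermal-error model described above."""
    n = len(temps)
    tbar = sum(temps) / n
    dbar = sum(deflections) / n
    b = sum((t - tbar) * (d - dbar) for t, d in zip(temps, deflections)) / \
        sum((t - tbar) ** 2 for t in temps)
    return dbar - b * tbar, b

# Hypothetical calibration: 0.8 um of growth per degree C above 20 C.
T = [20.0, 22.0, 25.0, 27.0]
D = [0.0, 1.6, 4.0, 5.6]
a, b = fit_linear(T, D)
# Predicted thermal error at the current temperature, to subtract
# from the commanded position (≈3.2 um here):
print(a + b * 24.0)
```

Extending this to multiple sensors turns the slope into a coefficient vector, which is where the sensor-count and sensor-placement question in the abstract comes in.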
NASA Astrophysics Data System (ADS)
Ottino-Löffler, Bertrand; Strogatz, Steven H.
2016-09-01
We study the dynamics of coupled phase oscillators on a two-dimensional Kuramoto lattice with periodic boundary conditions. For coupling strengths just below the transition to global phase-locking, we find localized spatiotemporal patterns that we call "frequency spirals." These patterns cannot be seen under time averaging; they become visible only when we examine the spatial variation of the oscillators' instantaneous frequencies, where they manifest themselves as two-armed rotating spirals. In the more familiar phase representation, they appear as wobbly periodic patterns surrounding a phase vortex. Unlike the stationary phase vortices seen in magnetic spin systems, or the rotating spiral waves seen in reaction-diffusion systems, frequency spirals librate: the phases of the oscillators surrounding the central vortex move forward and then backward, executing a periodic motion with zero winding number. We construct the simplest frequency spiral and characterize its properties using analytical and numerical methods. Simulations show that frequency spirals in large lattices behave much like this simple prototype.
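A bare-bones version of such a simulation — identical oscillators on a periodic 2D lattice, with the instantaneous frequencies computed alongside the phases — is sketched below. Lattice size, coupling strength, and step size are arbitrary choices for illustration, not values from the paper.

```python
import math, random

N, K, dt = 16, 0.9, 0.05   # lattice size, coupling, time step (assumed values)
random.seed(1)
theta = [[random.uniform(-math.pi, math.pi) for _ in range(N)] for _ in range(N)]

def step(theta):
    """One Euler step of the 2D Kuramoto lattice with periodic boundaries.
    Returns the new phases and the instantaneous frequencies d(theta)/dt,
    which is the field where frequency spirals would be visible."""
    freq = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            s = sum(math.sin(theta[(i + di) % N][(j + dj) % N] - theta[i][j])
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            freq[i][j] = K * s          # identical oscillators: omega_i = 0
    new = [[theta[i][j] + dt * freq[i][j] for j in range(N)] for i in range(N)]
    return new, freq

for _ in range(200):
    theta, freq = step(theta)
# Spirals live in the spatial pattern of freq, not in time-averaged phases:
print(min(map(min, freq)), max(map(max, freq)))
```

As the abstract stresses, plotting the time average of `freq` would wash the spirals out; it is the instantaneous field that shows the two-armed rotating pattern.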
Error prediction for probes guided by means of fixtures
NASA Astrophysics Data System (ADS)
Fitzpatrick, J. Michael
2012-02-01
Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides. This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.
Laser tracker error determination using a network measurement
NASA Astrophysics Data System (ADS)
Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim
2011-04-01
We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.
Detection and frequency tracking of chirping signals
Elliott, G.R.; Stearns, S.D.
1990-08-01
This paper discusses several methods to detect the presence of and track the frequency of a chirping signal in broadband noise. The dynamic behavior of each of the methods is described and tracking error bounds are investigated in terms of the chirp rate. Frequency tracking and behavior in the presence of varying levels of noise are illustrated in examples. 11 refs., 29 figs.
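One elementary tracker in this family — pick the strongest DFT bin in each short window and watch it climb at the chirp rate — can be sketched as follows. The signal parameters are invented, and a naive O(n²) DFT keeps the example dependency-free.

```python
import math

fs = 1000.0   # sample rate (Hz), assumed
rate = 50.0   # chirp rate (Hz/s), assumed
# Linear chirp with instantaneous frequency f(t) = 100 + rate*t:
x = [math.sin(2 * math.pi * (100 * t + 0.5 * rate * t * t))
     for t in (n / fs for n in range(2000))]

def peak_freq(seg, fs):
    """Frequency (Hz) of the largest DFT magnitude over a coarse grid --
    a minimal per-window detector for a slowly chirping tone."""
    n = len(seg)
    best, best_f = -1.0, 0.0
    for k in range(1, n // 2):
        re = sum(seg[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(seg[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = re * re + im * im
        if mag > best:
            best, best_f = mag, k * fs / n
    return best_f

# Track window by window; the estimate climbs ~12.5 Hz per 0.25 s window:
track = [peak_freq(x[s:s + 250], fs) for s in range(0, 2000, 250)]
print(track)
```

The tracking error of such a scheme is set by the window length: shorter windows follow faster chirps but coarsen the frequency grid, which is the chirp-rate trade-off the paper's error bounds formalize.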
Wind and Load Forecast Error Model for Multiple Geographically Distributed Forecasts
Makarov, Yuri V.; Reyes Spindola, Jorge F.; Samaan, Nader A.; Diao, Ruisheng; Hafen, Ryan P.
2010-11-02
The impact of wind and load forecast errors on power grid operations is frequently evaluated by conducting multi-variant studies, where these errors are simulated repeatedly as random processes based on their known statistical characteristics. To generate these errors correctly, we need to reflect their distributions (which do not necessarily follow a known distribution law), standard deviations, auto- and cross-correlations. For instance, load and wind forecast errors can be closely correlated in different zones of the system. This paper introduces a new methodology for generating multiple cross-correlated random processes to simulate forecast error curves based on a transition probability matrix computed from an empirical error distribution function. The matrix will be used to generate new error time series with statistical features similar to observed errors. We present the derivation of the method and present some experimental results by generating new error forecasts together with their statistics.
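A toy version of the generator — estimate an empirical bin-to-bin transition matrix from an observed error series, then walk the chain to synthesize a new series with similar statistics — might look like this. The "observed" series is synthetic for the demo, and cross-correlation between zones is omitted.

```python
import math, random

def transition_matrix(errors, nbins):
    """Empirical bin-to-bin transition probabilities estimated from an
    observed forecast-error series (bins span the observed range)."""
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / nbins
    binned = [min(int((e - lo) / width), nbins - 1) for e in errors]
    counts = [[0] * nbins for _ in range(nbins)]
    for a, b in zip(binned, binned[1:]):
        counts[a][b] += 1
    P = []
    for row in counts:
        s = sum(row)
        P.append([c / s for c in row] if s else [0.0] * nbins)
    return P, lo, width

def sample_errors(P, lo, width, start_bin, n, rng):
    """Walk the chain, drawing a value uniformly inside each visited bin;
    the synthetic series inherits the observed transition statistics."""
    out, b = [], start_bin
    for _ in range(n):
        out.append(lo + (b + rng.random()) * width)
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[b]):
            acc += p
            if r < acc:
                b = j
                break
    return out

rng = random.Random(7)
observed = [math.sin(0.3 * k) + 0.2 * rng.gauss(0.0, 1.0) for k in range(300)]
P, lo, width = transition_matrix(observed, nbins=10)
synthetic = sample_errors(P, lo, width, start_bin=5, n=200, rng=rng)
print(min(synthetic), max(synthetic))  # stays within the observed error range
```

The paper's method additionally preserves auto- and cross-correlations across zones, which would require a joint state (one chain over tuples of bins) rather than the single chain shown here.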
BFC: correcting Illumina sequencing errors
2015-01-01
Summary: BFC is a free, fast and easy-to-use sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm but still maintains a speed comparable to implementations based on greedy methods. In evaluations on real data, BFC appears to correct more errors with fewer overcorrections in comparison to existing tools. It particularly does well in suppressing systematic sequencing errors, which helps to improve the base accuracy of de novo assemblies. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25953801
Errors Associated With Measurements from Imaging Probes
NASA Astrophysics Data System (ADS)
Heymsfield, A.; Bansemer, A.
2015-12-01
Imaging probes, which measure particles from about 20 or 50 microns to several centimeters, have been collecting data on droplet and ice microphysics for more than 40 years. During that period, a number of problems with the measurements have been identified, including questions about the depth of field of particles within the probes' sample volume and ice shattering, among others. Many different software packages have been developed to process and interpret the data, leading to differences in the particle size distributions and estimates of the extinction, ice water content and radar reflectivity obtained from the same data. Given the numerous complications associated with imaging probe data, we have developed an optical array probe simulation package to explore the errors that can be expected with actual data. We simulate full particle size distributions with known properties, and then process the data with the same software that is used to process real-life data. We show that there are significant errors in the retrieved particle size distributions as well as derived parameters such as liquid/ice water content and total number concentration. Furthermore, the nature of these errors changes as a function of the shape of the simulated size distribution and the physical and electronic characteristics of the instrument. We will introduce some methods to improve the retrieval of particle size distributions from real-life data.

Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggest variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
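The patch-level thresholding the authors recommend reduces to a simple rule: food below the foraging threshold in a patch contributes nothing, regardless of how rich other patches are. The wetland numbers below are hypothetical.

```python
def use_days(patches, threshold, energy_per_kg, daily_need):
    """Energetic carrying capacity (animal-use-days) with the foraging
    threshold applied per patch, as ecological foraging theory suggests:
    patches is a list of (food density kg/ha, area ha) pairs."""
    kg = sum(max(0.0, d - threshold) * a for d, a in patches)
    return kg * energy_per_kg / daily_need

# Hypothetical wetland; the third patch sits below the 5 kg/ha threshold:
patches = [(60.0, 10.0), (20.0, 40.0), (4.0, 30.0)]
print(use_days(patches, threshold=5.0,
               energy_per_kg=11000.0, daily_need=1200.0))  # ≈10542 use-days
```

Applying the threshold to the landscape total instead (1520 kg − 5 kg/ha × 80 ha = 1120 kg available) gives a different answer than the per-patch 1150 kg, which is the kind of bias in past adjustment methods that the abstract describes.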
Assessing the impact of differential genotyping errors on rare variant tests of association.
Mayer-Jochimsen, Morgan; Fast, Shannon; Tintle, Nathan L
2013-01-01
Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential genotyping errors, whereas genotyping errors that occur with different processes in the cases and controls are known as differential genotype errors. For single marker tests, non-differential genotyping errors reduce power, while differential genotyping errors increase the type I error rate. However, little is known about the behavior of the new generation of rare variant tests of association in the presence of genotyping errors. In this manuscript we use a comprehensive simulation study to explore the effects of numerous factors on the type I error rate of rare variant tests of association in the presence of differential genotyping error. We find that increased sample size, decreased minor allele frequency, and an increased number of single nucleotide variants (SNVs) included in the test all increase the type I error rate in the presence of differential genotyping errors. We also find that the greater the relative difference in case-control genotyping error rates the larger the type I error rate. Lastly, as is the case for single marker tests, genotyping errors classifying the common homozygote as the heterozygote inflate the type I error rate significantly more than errors classifying the heterozygote as the common homozygote. In general, our findings are in line with results from single marker tests. To ensure that type I error inflation does not occur when analyzing next-generation sequencing data careful consideration of study design (e.g. use of randomization), caution in meta-analysis and using publicly available controls, and the use of standard quality control metrics is critical.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
The Effects of Computational Modeling Errors on the Estimation of Statistical Mechanical Variables.
Faver, John C; Yang, Wei; Merz, Kenneth M
2012-10-01
Computational models used in the estimation of thermodynamic quantities of large chemical systems often require approximate energy models that rely on parameterization and cancellation of errors to yield agreement with experimental measurements. In this work, we show how energy function errors propagate when computing statistical mechanics-derived thermodynamic quantities. Assuming that each microstate included in a statistical ensemble has a measurable amount of error in its calculated energy, we derive low-order expressions for the propagation of these errors in free energy, average energy, and entropy. Through gedanken experiments we show the expected behavior of these error propagation formulas on hypothetical energy surfaces. For very large microstate energy errors, these low-order formulas disagree with estimates from Monte Carlo simulations of error propagation. Hence, such simulations of error propagation may be required when using poor potential energy functions. Propagated systematic errors predicted by these methods can be removed from computed quantities, while propagated random errors yield uncertainty estimates. Importantly, we find that end-point free energy methods maximize random errors and that local sampling of potential energy wells decreases random error significantly. Hence, end-point methods should be avoided in energy computations and should be replaced by methods that incorporate local sampling. The techniques described herein will be used in future work involving the calculation of free energies of biomolecular processes, where error corrections are expected to yield improved agreement with experiment.
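The flavor of such an error-propagation analysis can be reproduced in a few lines: perturb each microstate energy with a random error, recompute the free energy many times, and read off the systematic shift and random spread. The energies, kT, and error magnitude below are arbitrary illustrative values, not from the paper.

```python
import math, random

kT = 0.6                  # thermal energy, arbitrary units
E = [0.0, 0.4, 0.9, 1.5]  # hypothetical microstate energies

def free_energy(energies):
    """F = -kT * ln( sum_i exp(-E_i / kT) ) over the listed microstates."""
    return -kT * math.log(sum(math.exp(-e / kT) for e in energies))

rng = random.Random(0)
sigma = 0.05              # assumed per-microstate energy error
samples = [free_energy([e + rng.gauss(0.0, sigma) for e in E])
           for _ in range(5000)]
m = sum(samples) / len(samples)
mean_err = m - free_energy(E)                                 # systematic shift
spread = (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5  # random spread
print(mean_err, spread)
# The spread comes out below sigma: Boltzmann weighting averages the
# per-state errors, so independent random errors partially cancel in F.
```

This partial cancellation is the small-error regime where the paper's low-order formulas apply; for large per-state errors, Monte Carlo runs like this one diverge from them.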
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, the stopband and the transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of the various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
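The frequency-domain objective described above can be sketched as follows. This is a minimal illustration under assumed band edges (0.4π and 0.6π) and a mean-square error grid, not the paper's exact formulation or its hybrid optimizer; `qmf_objective` and the windowed-sinc prototype below are hypothetical stand-ins.

```python
import cmath
import math

def qmf_objective(h, wp=0.4 * math.pi, ws=0.6 * math.pi, n_grid=257):
    """Composite design error for a linear-phase QMF prototype lowpass h[n]:
    mean-square passband error (|H| vs 1), mean-square stopband error
    (|H| vs 0), plus the error at the quadrature frequency pi/2, where an
    ideal 2-channel QMF prototype has |H(pi/2)| = 1/sqrt(2)."""
    def mag(w):
        return abs(sum(h[n] * cmath.exp(-1j * w * n) for n in range(len(h))))
    grid = [math.pi * k / (n_grid - 1) for k in range(n_grid)]
    pb = [(mag(w) - 1.0) ** 2 for w in grid if w <= wp]
    sb = [mag(w) ** 2 for w in grid if w >= ws]
    phi_t = (mag(math.pi / 2) - 1.0 / math.sqrt(2)) ** 2
    return sum(pb) / len(pb) + sum(sb) / len(sb) + phi_t

# A windowed-sinc halfband prototype vs. a crude moving-average filter:
N = 32
def hamming(n): return 0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1))
def sinc(x): return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
h_good = [0.5 * sinc((n - (N - 1) / 2) / 2) * hamming(n) for n in range(N)]
h_bad = [1.0 / N] * N
print(qmf_objective(h_good), qmf_objective(h_bad))
```

A real design loop would hand this objective (or its gradient) to the optimizers the paper compares.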
FORCE: FORtran for Cosmic Errors
NASA Astrophysics Data System (ADS)
Colombi, Stéphane; Szapudi, István
We review the theory of cosmic errors we have recently developed for count-in-cells statistics. The corresponding FORCE package provides a simple and useful way to compute cosmic covariance on factorial moments and cumulants measured in galaxy catalogs.
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
Quantile Regression With Measurement Error
Wei, Ying; Carroll, Raymond J.
2010-01-01
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802
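The measurement-error-induced bias motivating the paper can be demonstrated with a toy median regression. This sketch only illustrates the bias, not the proposed joint-estimating-equation correction; the brute-force check-loss minimizer is a hypothetical stand-in for a proper quantile-regression solver.

```python
import random
import statistics

def median_reg_slope(x, y, b_grid):
    """Median (tau = 0.5) regression slope by brute-force check-loss search:
    for a candidate slope b the optimal intercept is median(y - b*x), so the
    objective reduces to the mean absolute deviation of y - b*x."""
    best_b, best_loss = None, float("inf")
    for b in b_grid:
        r = [yi - b * xi for xi, yi in zip(x, y)]
        med = statistics.median(r)
        loss = sum(abs(v - med) for v in r) / len(r)
        if loss < best_loss:
            best_b, best_loss = b, loss
    return best_b

random.seed(1)
n = 2000
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 * xi + random.gauss(0, 0.5) for xi in x_true]   # true slope = 1
w = [xi + random.gauss(0, 1.0) for xi in x_true]         # error-prone covariate

grid = [i / 100 for i in range(0, 201)]
b_clean = median_reg_slope(x_true, y, grid)
b_naive = median_reg_slope(w, y, grid)   # attenuated toward ~0.5 here
print(b_clean, b_naive)
```

With equal signal and noise variances, the naive slope is attenuated by roughly 1/(1+1) = 0.5, which is exactly the kind of bias the paper's method is designed to remove.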
Robust characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph
2016-04-01
Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
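A drastically simplified sketch of extracting a leakage rate from survival data: a fixed per-gate leakage rate with no seepage back into the subspace is assumed, and the decay is fit log-linearly. The actual protocol instead uses randomized gate sequences and fits a decay of the form A + Bλ^m, so treat the numbers below as illustrative only.

```python
import math
import random

random.seed(5)
ell = 0.01                    # assumed per-gate leakage rate
shots = 2000
ms = [1, 5, 10, 20, 50, 100]  # sequence lengths

# "Measure" the probability of remaining in the qubit subspace after m gates.
log_p = []
for m in ms:
    p = (1 - ell) ** m
    hits = sum(random.random() < p for _ in range(shots))
    log_p.append(math.log(hits / shots))

# Log-linear fit: log p_m = m * log(1 - ell), so the slope recovers the rate.
m_bar = sum(ms) / len(ms)
l_bar = sum(log_p) / len(log_p)
slope = (sum((m - m_bar) * (l - l_bar) for m, l in zip(ms, log_p))
         / sum((m - m_bar) ** 2 for m in ms))
rate_hat = 1 - math.exp(slope)
print(f"estimated leakage rate: {rate_hat:.4f}")
```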
Static Detection of Disassembly Errors
Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K
2009-10-13
Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.
Dual processing and diagnostic errors.
Norman, Geoff
2009-09-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these examples remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directing the clinician to explicitly use both strategies can lead to a consistent reduction in error rates.
Prospective errors determine motor learning
Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi
2015-01-01
Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628
Orbital and Geodetic Error Analysis
NASA Technical Reports Server (NTRS)
Felsentreger, T.; Maresca, P.; Estes, R.
1985-01-01
Results that previously required several runs determined in more computer-efficient manner. Multiple runs performed only once with GEODYN and stored on tape. ERODYN then performs matrix partitioning and linear algebra required for each individual error-analysis run.
On the Bayesness, minimaxity and admissibility of point estimators of allelic frequencies.
Martínez, Carlos Alberto; Khare, Kshitij; Elzo, Mauricio A
2015-10-21
In this paper, decision theory was used to derive Bayes and minimax decision rules to estimate allelic frequencies and to explore their admissibility. Decision rules with uniformly smallest risk usually do not exist and one approach to solve this problem is to use the Bayes principle and the minimax principle to find decision rules satisfying some general optimality criterion based on their risk functions. Two cases were considered, the simpler case of biallelic loci and the more complex case of multiallelic loci. For each locus, the sampling model was a multinomial distribution and the prior was a Beta (biallelic case) or a Dirichlet (multiallelic case) distribution. Three loss functions were considered: squared error loss (SEL), Kullback-Leibler loss (KLL) and quadratic error loss (QEL). Bayes estimators were derived under these three loss functions and were subsequently used to find minimax estimators using results from decision theory. The Bayes estimators obtained from SEL and KLL turned out to be the same. Under certain conditions, the Bayes estimator derived from QEL led to an admissible minimax estimator (which was also equal to the maximum likelihood estimator, MLE). The SEL also allowed finding admissible minimax estimators. Some estimators had uniformly smaller variance than the MLE and under suitable conditions the remaining estimators also satisfied this property. In addition to their statistical properties, the estimators derived here allow variation in allelic frequencies, which is closer to the reality of finite populations exposed to evolutionary forces. PMID:26271891
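For the biallelic case, the Beta-Binomial Bayes estimator under squared error loss and the classical constant-risk minimax estimator can be written down directly. A minimal sketch using standard textbook forms, consistent with but not copied from the paper:

```python
import math

def bayes_allele_freq(x, n, alpha=1.0, beta=1.0):
    """Posterior mean of the allele frequency under a Beta(alpha, beta)
    prior and Binomial(n, p) sampling (biallelic locus), i.e. the Bayes
    estimator under squared error loss."""
    return (x + alpha) / (n + alpha + beta)

def minimax_allele_freq(x, n):
    """Classical minimax estimator under squared error loss: the Bayes rule
    for the Beta(sqrt(n)/2, sqrt(n)/2) prior, which has constant risk."""
    s = math.sqrt(n)
    return (x + s / 2) / (n + s)

# 40 copies of the reference allele among 100 sampled gene copies:
print(bayes_allele_freq(40, 100))     # posterior mean, 41/102
print(minimax_allele_freq(40, 100))   # (40 + 5)/110
```

Both rules shrink the MLE x/n toward 1/2, which is what gives them smaller variance than the MLE in the range of frequencies the abstract mentions.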
Relative-Error-Covariance Algorithms
NASA Technical Reports Server (NTRS)
Bierman, Gerald J.; Wolff, Peter J.
1991-01-01
Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.
Addressee Errors in ATC Communications: The Call Sign Problem
NASA Technical Reports Server (NTRS)
Monan, W. P.
1983-01-01
Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or readback, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard mitigating measures that might be taken are considered.
System-related factors contributing to diagnostic errors.
Thammasitboon, Satid; Thammasitboon, Supat; Singhal, Geeta
2013-10-01
Several studies in primary care, internal medicine, and emergency departments show that rates of errors in test requests and result interpretations are unacceptably high and translate into missed, delayed, or erroneous diagnoses. Ineffective follow-up of diagnostic test results could lead to patient harm if appropriate therapeutic interventions are not delivered in a timely manner. The frequency of system-related factors that contribute directly to diagnostic errors depends on the types and sources of errors involved. Recent studies reveal that the errors and patient harm in the diagnostic testing loop have occurred mainly at the pre- and post-analytic phases, which are directed primarily by clinicians who may have limited expertise in the rapidly expanding field of clinical pathology. These errors may include inappropriate test requests, failure/delay in receiving results, and erroneous interpretation and application of test results to patient care. Efforts to address system-related factors often focus on technical errors in laboratory testing or failures in delivery of intended treatment. System-improvement strategies related to diagnostic errors tend to focus on technical aspects of laboratory medicine or delivery of treatment after completion of the diagnostic process. System failures and cognitive errors, more often than not, coexist and together contribute to the incidents of errors in diagnostic process and in laboratory testing. The use of highly structured hand-off procedures and pre-planned follow-up for any diagnostic test could improve efficiency and reliability of the follow-up process. Many feedback pathways should be established so that providers can learn if or when a diagnosis is changed. Patients can participate in the effort to reduce diagnostic errors. Providers should educate their patients about diagnostic probabilities and uncertainties. The patient-safety strategies focusing on the interface between diagnostic system and therapeutic
Quantifying errors without random sampling
Phillips, Carl V; LaPole, Luwanna M
2003-01-01
Background All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. Discussion We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Summary Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research. PMID:12892568
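A minimal Monte Carlo propagation of non-sampling uncertainty, in the spirit of the approach described. The surveillance numbers and correction-factor distributions below are hypothetical, not the paper's foodborne-illness inputs.

```python
import random

def mc_propagate(f, draws, n=50000, seed=0):
    """Propagate input uncertainty through f by Monte Carlo: sample each
    input from its distribution, evaluate f, and report the median and a
    central 95% interval of the output."""
    rng = random.Random(seed)
    out = sorted(f(*[d(rng) for d in draws]) for _ in range(n))
    return out[n // 2], (out[int(0.025 * n)], out[int(0.975 * n)])

# Hypothetical surveillance correction: true cases = reported/(p_report*p_dx),
# with the two correction factors known only as distributions.
reported = 5000
draws = [
    lambda r: r.uniform(0.05, 0.20),          # chance a case is reported
    lambda r: r.triangular(0.4, 0.8, 0.6),    # chance a case is diagnosed
]
est, (lo, hi) = mc_propagate(lambda pr, pd: reported / (pr * pd), draws)
print(f"{est:.0f} cases, 95% interval ({lo:.0f}, {hi:.0f})")
```

The wide interval, driven entirely by systematic (non-sampling) uncertainty, is the kind of honest reporting the abstract argues for.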
Hubbeling, Dieneke
2016-09-01
This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As TV resolution has increased significantly, content consumers have become increasingly sensitive to even the subtlest defects in TV content. This rising quality standard has posed a new challenge now that the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, a familiar tool for quality control agents.
Shi, Di-Fu; Qian, Bao-Liang; Wang, Hong-Gang; Li, Wei
2013-12-15
A field analysis method is used to derive the dispersion relation of a rising-sun magnetron with sectorial and rectangular cavities. This dispersion relation is then extended to the general case in which the rising-sun magnetron has multiple groups of cavities of different shapes and sizes, and from which the dispersion relations of the conventional magnetron, the rising-sun magnetron, and magnetron-like devices can be obtained directly. The results show that the relative errors between the theoretical and simulation values of the dispersion relation are less than 3%, and the relative errors between the theoretical and simulation values of the cutoff frequencies of the π mode are less than 2%. In addition, the influences of each structural parameter of the magnetron on the cutoff frequency of the π mode and on the mode separation are investigated qualitatively and quantitatively, which may be of great interest for designing a frequency-tuning magnetron.
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Yang, Yuekui
2016-01-01
Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that a higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error. Correlations between the true and sampled time series also decrease more rapidly for sampling frequencies coarser than every 4 h.
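The starting-point-spread measure of sampling uncertainty can be mimicked with a toy hourly series. This sketch assumes a synthetic diurnal cycle plus drift rather than Nature Run output; note how once-daily sampling aliases the diurnal cycle badly.

```python
import math

# Hourly "truth": a diurnal cycle plus a slow drift, two weeks long.
truth = [100 + 10 * math.sin(2 * math.pi * h / 24) + 0.1 * h
         for h in range(24 * 14)]

# For each sampling interval, compute the mean from every possible starting
# hour; the spread across starting points measures the sampling uncertainty.
spreads = {}
for step in (1, 4, 8, 24):
    means = [sum(truth[s::step]) / len(truth[s::step]) for s in range(step)]
    spreads[step] = max(means) - min(means)
    print(f"every {step:2d} h: spread of mean estimates = {spreads[step]:.2f}")
```

With this synthetic series, 4-h and 8-h sampling still average out the diurnal cycle almost exactly, while 24-h sampling locks onto one phase of it and the spread jumps by more than an order of magnitude.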
The Study of Prescribing Errors Among General Dentists
Araghi, Solmaz; Sharifi, Rohollah; Ahmadi, Goran; Esfehani, Mahsa; Rezaei, Fatemeh
2016-01-01
Introduction: In dentistry, medicines are often prescribed to relieve pain and treat infections. Incorrect prescriptions can therefore lead to a range of problems, including inadequate pain relief, antimicrobial treatment failure and the development of antibiotic resistance. Materials and Methods: The aim of this cross-sectional study was to evaluate common errors in prescriptions written by general dentists in Kermanshah in 2014. Dentists received a questionnaire describing five hypothetical patients and were asked to write the appropriate prescription for each. Information about age, gender, work experience and mode of university admission was collected. The frequency of errors in the prescriptions was determined. Data were analyzed with SPSS 20 statistical software using t-tests, chi-square tests and Pearson correlation (P < 0.05). Results: A total of 180 dentists (62.6% male and 37.4% female) with a mean age of 39.199 ± 8.23 years participated in this study. Prescription errors included wrong pharmaceutical form (11%), failure to write the therapeutic dose (13%), wrong dose (14%), typos (15%), erroneous prescriptions (23%) and wrong number of drugs (24%). Errors were most frequent in the administration of antiviral drugs (31%), followed by antifungal drugs (30%), analgesics (23%) and antibiotics (16%). Male dentists made errors more frequently than female dentists (P=0.046). Error frequency had a statistically significant relationship with a long work history (P<0.001) and with university admission other than through the entrance examination (P=0.041). Conclusion: This study showed that the prescriptions written by the general dentists examined contained significant errors, and improving prescribing through continuing education of dentists is essential. PMID:26573049
Error-tradeoff and error-disturbance relations for incompatible quantum measurements.
Branciard, Cyril
2013-04-23
Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario. PMID:23564344
Cao, Shengjiao; Yu, Changyuan; Kam, Pooi-Yuen
2013-09-23
We carry out a comprehensive analysis to examine the performance of our recently proposed correlation-based and pilot-tone-assisted frequency offset compensation method in a coherent optical OFDM system. The frequency offset is divided into two parts: the fractional part and the integer part relative to the channel spacing. Our frequency offset scheme includes the correlation-based Schmidl algorithm for fractional part estimation as well as a pilot-tone-assisted method for integer part estimation. In this paper, we analytically derive the error variance of the fractional part estimation methods in the presence of laser phase noise using different correlation-based algorithms: Schmidl, Cox and cyclic-prefix based. This analytical expression is given for the first time in the literature. Furthermore, we give a full derivation for the pilot-tone-assisted integer part estimation method using the OFDM model. PMID:24104171
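The fractional-part estimate from correlating two identical training-symbol halves can be sketched in a few lines. This is a generic Schmidl-style illustration in the noiseless case, not the paper's full scheme (no laser phase noise, no integer-part pilot tones); the signal parameters are made up.

```python
import cmath
import math
import random

def schmidl_frac_cfo(r, half):
    """Schmidl-style fractional frequency-offset estimate from a training
    symbol whose two halves are identical: correlate the halves and read
    the offset (in subcarrier units) from the phase of the correlation."""
    P = sum(r[n].conjugate() * r[n + half] for n in range(half))
    return cmath.phase(P) / math.pi

N = 64            # FFT size; training symbol = two identical halves of N/2
half = N // 2
random.seed(3)
base = [random.choice([1, -1]) + 0j for _ in range(half)]
tx = base + base  # repeated halves

eps = 0.31        # true fractional CFO in subcarrier units (|eps| < 1)
rx = [tx[n] * cmath.exp(2j * math.pi * eps * n / N) for n in range(N)]
print(schmidl_frac_cfo(rx, half))   # recovers eps = 0.31
```

The offset advances the phase by π·ε between samples a half-symbol apart, so the correlation's phase divided by π returns ε exactly in the noiseless case; the paper's contribution is the error variance of this class of estimators once phase noise is present.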
Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2006-01-01
Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
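A scalar toy version of the equation-error idea: regress the (noisily measured) state derivative on the states and inputs and solve the normal equations. This is an illustration of the method class, not the F-16 analysis; the system and input signals below are made up.

```python
import math
import random

# Toy scalar system xdot = a*x + b*u; recover (a, b) by least squares on the
# equation error, i.e. regress the measured derivative on [x, u].
a_true, b_true = -1.5, 2.0
dt = 0.01
random.seed(4)
x = 0.0
xs, us, xd = [], [], []
for k in range(2000):
    u = math.sin(0.05 * k) + 0.5 * math.sin(0.013 * k)  # persistently exciting
    xdot = a_true * x + b_true * u
    xs.append(x); us.append(u)
    xd.append(xdot + random.gauss(0, 0.01))  # noisy derivative "measurement"
    x += dt * xdot                           # Euler propagation

# Solve the 2x2 normal equations for [a, b]:
sxx = sum(v * v for v in xs); suu = sum(v * v for v in us)
sxu = sum(p * q for p, q in zip(xs, us))
sxd = sum(p * q for p, q in zip(xs, xd)); sud = sum(p * q for p, q in zip(us, xd))
det = sxx * suu - sxu * sxu
a_hat = (suu * sxd - sxu * sud) / det
b_hat = (sxx * sud - sxu * sxd) / det
print(a_hat, b_hat)   # close to -1.5 and 2.0
```

Here the noise sits on the dependent variable only, so the estimates are unbiased; the bias issues the paper studies arise when the regressors themselves (differentiated noisy measurements) carry error.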
Li, Xian-Fang; Tang, Guo-Jin; Shen, Zhi-Bin; Lee, Kang Yong
2015-01-01
Free vibration and mass detection of carbon nanotube-based sensors are studied in this paper. Since the mechanical properties of carbon nanotubes exhibit a size effect, the nonlocal beam model is used to characterize flexural vibration of nanosensors carrying a concentrated nanoparticle, where the size effect is reflected by a nonlocal parameter. For the nanocantilever and bridged sensors, frequency equations are derived when a nanoparticle is carried at the free end or at the middle, respectively. Exact resonance frequencies are numerically determined for clamped-free, simply-supported, and clamped-clamped resonators. Alternative closed-form approximations of the fundamental frequency are given with relative errors of less than 0.4%, 0.6%, and 1.4% for the cantilever, simply-supported, and bridged sensors, respectively. Mass identification formulae are derived in terms of the frequency shift. Masses identified via the present approach coincide with those obtained using the molecular mechanics approach and reach as low as 10^(-24) kg. The obtained results indicate that the nonlocal effect decreases the resonance frequency except for the fundamental frequency of the nanocantilever sensor. These results are helpful for the design of micro/nanomechanical zeptogram-scale biosensors.
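The mass-identification-from-frequency-shift idea can be sketched with a lumped spring-mass model, which ignores the nonlocal corrections the paper develops; the stiffness and frequency values below are hypothetical.

```python
import math

def added_mass(f0, f1, k_eff):
    """Point mass added to a resonator modeled as an effective spring-mass:
    f = sqrt(k_eff/m)/(2*pi)  =>  dm = k_eff/(4*pi^2) * (1/f1^2 - 1/f0^2)."""
    return k_eff / (4 * math.pi ** 2) * (1.0 / f1 ** 2 - 1.0 / f0 ** 2)

# Hypothetical nanocantilever: effective stiffness 0.01 N/m, 1 GHz resonance.
k = 0.01
f0 = 1.0e9
m_eff = k / (4 * math.pi ** 2 * f0 ** 2)     # effective mass, ~2.5e-22 kg
dm = 1.0e-24                                  # a zeptogram-scale adsorbate
f1 = math.sqrt(k / (m_eff + dm)) / (2 * math.pi)  # shifted resonance
print(added_mass(f0, f1, k))                  # recovers dm = 1e-24 kg
```

The inversion is exact for this lumped model; the paper's formulae additionally account for where the particle sits on the beam and for the nonlocal size effect.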
Food frequency questionnaires.
Pérez Rodrigo, Carmen; Aranceta, Javier; Salvador, Gemma; Varela-Moreiras, Gregorio
2015-02-26
Food Frequency Questionnaires are dietary assessment tools that have been widely used in epidemiological studies investigating the relationship between dietary intake and disease or risk factors since the early '90s. The three main components of these questionnaires are the list of foods, the frequency of consumption and the portion size consumed. The food list should reflect the food habits of the study population at the time the data are collected. The frequency of consumption may be asked about with open-ended questions or by presenting frequency categories. Qualitative Food Frequency Questionnaires do not ask about the portions consumed; semi-quantitative versions include standard portions, and quantitative questionnaires ask respondents to estimate the portion size consumed either in household measures or in grams. The latter implies a greater participant burden. Some versions include only closed-ended questions in a standardized format, while others add an open section with questions about specific food habits and practices and allow additions to the food list for foods and beverages consumed that are not listed. The method can be self-administered, on paper or web-based, or interview-administered either face-to-face or by telephone. Owing to their standard format, especially the closed-ended versions, and method of administration, FFQs are highly cost-effective, which encourages their widespread use in large-scale epidemiological cohort studies and in other study designs. Coding and processing the data collected is also less costly and requires less nutrition expertise than other dietary intake assessment methods. However, the main limitations are systematic errors and biases in the estimates. Important efforts are being made to improve the quality of the information. The use of FFQs in combination with other methods has been recommended, enabling the required adjustments.
The challenges to transparency in reporting medical errors.
Paterick, Zachary R; Paterick, Barbara B; Waterhouse, Blake E; Paterick, Timothy E
2009-12-01
In an ideal health care environment, physicians and health care organizations would acknowledge and factually report all medical errors and "near misses" in an effort to improve future patient safety by better identifying systemic safety lapses. Truth must permeate the health care system to achieve the goal of transparency. The Institute of Medicine has estimated that 44,000 to 98,000 patients die each year as a result of medical errors. Improving the reporting of medical errors and near misses is essential for better prevention of medical errors and thus increasing patient safety. Higher rates of reporting can permit identification of the root causes of errors and create improved processes that can significantly reduce errors in future patient care. Multiple barriers exist with respect to reporting medical errors, despite the ethical and various professional, regulatory, and legislative expectations and requirements generating this obligation. As long as physicians perceive that they are at risk for sanctions, malpractice claims, and unpredictable compensation of injured patients as determined by the United States' tort law system, legislative or regulative reform is unlikely to affect the underreporting of medical errors, and patient safety cannot benefit from the lessons derived from past medical errors and near misses. A new infrastructure for creating patient safety systems, as identified in the Patient Safety and Quality Improvement Act of 2005 is needed. A patient compensation system guided by an administrative health court that includes some form of no-fault insurance must be studied to identify benefits and risks. Most urgent is the development of a reporting system for medical errors and near misses that is transparent and effectively recognizes the legitimate concerns of physicians and health care providers and improves patient safety. PMID:22130212
An Experimental Determination of the Resonant Frequency of Atoms Moving in a Medium
NASA Astrophysics Data System (ADS)
Beary, Daniel Andrew
The theory of the Doppler-Recoil effect is described. In contrast to previous theories, the theory proposed by Haugan and Kowalski suggests that the frequency of the electromagnetic wave that excites a transition in an atom is a function of the velocity of that atom and the index of refraction of the medium. Following the path of Haugan and Kowalski, the Doppler-Recoil equation is derived under the conditions of a rarefied gas acting as a continuous medium. Next, the theory of saturation spectroscopy is reviewed. This method of spectroscopy uses a pump and a probe beam traveling collinearly in opposite directions. Beams of equal frequency in the lab frame interact with the zero-axial-velocity population within the gas when the beams are on resonance. For pump and probe beams of different frequencies, the atoms that they interact with will have an axial velocity component such that the Doppler shift leads to resonance with both beams. The purpose of this work is to verify the Doppler-Recoil formula proposed by Haugan and Kowalski. In the experiment performed, the resonant frequency of the stationary and moving velocity groups is determined using saturation spectroscopy. The theory predicts an average frequency shift of 307 Hz/°C. The data show a shift of 94 kHz/°C. Because of the unexpected result, possible sources of error such as pressure broadening, power broadening, and potential systematic errors were examined. No explanation was found for these shifts.
Explaining errors in children's questions.
Rowland, Caroline F
2007-07-01
The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
Random errors in interferometry with the least-squares method
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for estimating the standard deviation when only intensity noise is present, and the other for when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe, are also discussed.
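The least-squares phase retrieval this analysis builds on can be sketched numerically. This is a minimal model of phase-shifting interferometry with additive Gaussian intensity noise, written for illustration; it is not the paper's derivation, and the fringe parameters are arbitrary.

```python
# Sketch: least-squares phase estimation from phase-shifted fringe samples,
# with Monte Carlo intensity noise (a minimal model, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def ls_phase(intensities, shifts):
    """Least-squares phase estimate.

    Fringe model: I_k = a0 + a1*cos(shift_k) + a2*sin(shift_k),
    where a1 = b*cos(phi) and a2 = -b*sin(phi), so phi = atan2(-a2, a1).
    """
    A = np.column_stack([np.ones_like(shifts), np.cos(shifts), np.sin(shifts)])
    a0, a1, a2 = np.linalg.lstsq(A, intensities, rcond=None)[0]
    return np.arctan2(-a2, a1)

phi_true = 0.7                       # phase to recover (radians)
shifts = np.arange(8) * np.pi / 4    # eight 45-degree phase shifts
clean = 10.0 + 5.0 * np.cos(phi_true + shifts)

# Noise-free samples recover the phase to numerical precision.
assert abs(ls_phase(clean, shifts) - phi_true) < 1e-10

# Monte Carlo estimate of the phase standard deviation under intensity noise.
sigma_I = 0.1
estimates = [ls_phase(clean + rng.normal(0, sigma_I, 8), shifts)
             for _ in range(2000)]
print(f"phase std under intensity noise: {np.std(estimates):.5f} rad")
```

Repeating the Monte Carlo loop for different noise levels, fringe amplitudes, or numbers of samples reproduces the kind of simulated-versus-theoretical comparison the abstract describes.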
Structured near-optimal channel-adapted quantum error correction
NASA Astrophysics Data System (ADS)
Fletcher, Andrew S.; Shor, Peter W.; Win, Moe Z.
2008-01-01
We present a class of numerical algorithms which adapt a quantum error correction scheme to a channel model. Given an encoding and a channel model, it was previously shown that the quantum operation that maximizes the average entanglement fidelity may be calculated by a semidefinite program (SDP), which is a convex optimization. While optimal, this recovery operation is computationally difficult for long codes. Furthermore, the optimal recovery operation has no structure beyond the completely positive trace-preserving constraint. We derive methods to generate structured channel-adapted error recovery operations. Specifically, each recovery operation begins with a projective error syndrome measurement. The algorithms to compute the structured recovery operations are more scalable than the SDP and yield recovery operations with an intuitive physical form. Using Lagrange duality, we derive performance bounds to certify near-optimality.
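The projective syndrome measurement that begins each structured recovery can be illustrated with the simplest stabilizer code. The sketch below is a classical simulation of syndrome decoding for the 3-qubit bit-flip code, chosen for brevity; it is not the paper's SDP machinery or its channel-adapted recoveries.

```python
# Sketch: projective error-syndrome measurement for the 3-qubit bit-flip
# code. Measuring the parities Z1Z2 and Z2Z3 projects onto a syndrome that
# uniquely identifies any single bit-flip error (simulated classically).

def syndrome(error):
    """Parities (Z1Z2, Z2Z3) for a bit-flip pattern, e.g. (1, 0, 0)."""
    return (error[0] ^ error[1], error[1] ^ error[2])

# Look-up table: syndrome -> correction. The projective measurement outcome
# fully determines the recovery for weight-0 and weight-1 errors.
TABLE = {(0, 0): (0, 0, 0), (1, 0): (1, 0, 0),
         (1, 1): (0, 1, 0), (0, 1): (0, 0, 1)}

# Every single bit-flip (and the no-error case) is corrected exactly.
for error in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    correction = TABLE[syndrome(error)]
    residual = tuple(e ^ c for e, c in zip(error, correction))
    assert residual == (0, 0, 0)
print("all weight<=1 bit-flip errors corrected")
```

In the paper's setting the table entries would instead be channel-adapted recovery operations chosen numerically per syndrome, but the measure-then-correct structure is the same.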
Error-associated behaviors and error rates for robotic geology
NASA Technical Reports Server (NTRS)
Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin
2004-01-01
This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way, we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task using knowledge of only the global state and the fuzzy error.
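The error-distribution step can be sketched in a few lines. This is a deliberately stripped-down illustration of the idea, with made-up rules and a single tunable parameter per conclusion; the authors' controller adapts full membership functions of both antecedents and conclusions.

```python
# Sketch: a fuzzy error is distributed to each rule in proportion to its
# firing strength, and each rule nudges its conclusion accordingly.
# Hypothetical two-rule controller; not the authors' implementation.

def tri(x, center, width):
    """Triangular membership function."""
    return max(0.0, 1.0 - abs(x - center) / width)

# Two illustrative rules: "state negative -> push positive" and vice versa.
rules = [{"ante": (-1.0, 2.0), "concl": 1.0},
         {"ante": (1.0, 2.0), "concl": -1.0}]

def adapt(state, fuzzy_error, lr=0.1):
    """Each rule takes a share of the fuzzy error weighted by its firing
    strength and shifts its conclusion center to reduce that error."""
    for rule in rules:
        strength = tri(state, *rule["ante"])
        rule["concl"] -= lr * strength * fuzzy_error

before = [r["concl"] for r in rules]
adapt(state=-0.5, fuzzy_error=0.4)
after = [r["concl"] for r in rules]
print(before, "->", after)
```

Note how the rule whose antecedent fires more strongly at the current state absorbs the larger share of the error, which is the unsupervised credit-assignment mechanism the abstract describes.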
Sensitivity of feedforward neural networks to weight errors
NASA Technical Reports Server (NTRS)
Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney
1990-01-01
An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
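The quantity being approximated can be estimated empirically. The sketch below is a Monte Carlo estimate of the probability that a single Adaline's ±1 output flips under multiplicative weight noise; the random-weight, random-input model is an assumption of this sketch, not the paper's derivation, and it covers one neuron rather than a layered network.

```python
# Monte Carlo sketch: probability that an Adaline's +/-1 output flips
# when every weight is perturbed by a given relative amount.
import numpy as np

rng = np.random.default_rng(1)

def flip_probability(n_inputs, pct_change, trials=5000):
    """Fraction of trials in which multiplicative weight noise of the given
    relative size flips the sign of a random Adaline's output."""
    flips = 0
    for _ in range(trials):
        w = rng.normal(size=n_inputs)
        x = rng.choice([-1.0, 1.0], size=n_inputs)
        noisy = w * (1.0 + pct_change * rng.normal(size=n_inputs))
        flips += np.sign(w @ x) != np.sign(noisy @ x)
    return flips / trials

# Error probability grows with the size of the weight perturbation.
p_small, p_large = flip_probability(100, 0.05), flip_probability(100, 0.50)
print(f"flip probability: {p_small:.3f} (5% noise) vs {p_large:.3f} (50% noise)")
```

Varying `n_inputs` in this sketch also shows the weak dependence on the number of weights per neuron that the abstract reports for large networks.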
The generalized transmission error of spiral bevel gears
NASA Technical Reports Server (NTRS)
Mark, W. D.
1987-01-01
The traditional definition of the transmission error of parallel-axis gear pairs is reviewed and shown to be unsuitable for characterizing the deviation from conjugate action of bevel gear pairs for vibration excitation characterization purposes. This situation is rectified by generalizing the concept of the transmission error of parallel-axis gears to a three-component transmission error for spiral bevel gears of nominal spherical involute design. A general relationship is derived which expresses the contributions to the three-component transmission error from each gear of a meshing spiral bevel pair as a linear transformation of the six coordinates that describe the deviation of the shaft centerline position of each gear of the pair from the position of its rigid perfect involute counterpart.
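The traditional parallel-axis definition being generalized can be stated in a few lines. This is the textbook single-component transmission error, sketched here for reference; the paper's contribution, the three-component form for spiral bevel gears, is not reproduced.

```python
# Sketch: the classical transmission error of a parallel-axis gear pair,
# i.e. the deviation of the output rotation from perfect conjugate action.

def transmission_error(theta_out, theta_in, n_in, n_out):
    """TE = theta_out - (n_in / n_out) * theta_in  (radians),
    where n_in and n_out are the tooth counts of the meshing pair."""
    return theta_out - (n_in / n_out) * theta_in

# A perfect 20:40 pair: the output turns exactly half as far as the input.
assert transmission_error(0.5, 1.0, 20, 40) == 0.0

# A small lag of the driven gear shows up directly as transmission error.
print(transmission_error(0.499, 1.0, 20, 40))
```

The generalization in the paper replaces this single scalar with three components per gear, driven by the six coordinates describing each gear's deviation from its rigid perfect-involute counterpart.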
Sensitivity of SLR baselines to errors in Earth orientation
NASA Technical Reports Server (NTRS)
Smith, D. E.; Christodoulidis, D. C.
1984-01-01
The sensitivity of interstation distances derived from Satellite Laser Ranging (SLR) to errors in Earth orientation is discussed. An analysis experiment is performed in which a known polar motion error is imposed on all of the arcs used over this interval. The effect of the averaging of the errors over the tracking periods of individual sites is assessed. Baselines between stations that are supported by a global network of tracking stations are only marginally affected by errors in Earth orientation. The global network of stations retains its integrity even in the presence of systematic changes in the coordinate frame. The effect of these coordinate frame changes on the relative locations of the stations is minimal.
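The geometric reason baselines are robust to frame errors can be demonstrated numerically: a rigid rotation of the coordinate frame moves every station but is an isometry, so interstation distances are preserved. The sketch below uses made-up station coordinates and a rotation about one axis as a stand-in for a common orientation error; it is an illustration, not the paper's analysis.

```python
# Sketch: a rigid rotation of the coordinate frame (a stand-in for a
# common Earth-orientation error) changes station coordinates by meters
# but leaves interstation baselines unchanged to round-off.
import numpy as np

def rotation_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

stations = np.array([[6378.0, 0.0, 0.0],       # km, illustrative positions
                     [4000.0, 3000.0, 2000.0],
                     [-2000.0, 5000.0, 3000.0]])

# Apply a 0.1 arcsecond frame rotation.
err = rotation_z(np.deg2rad(0.1 / 3600.0))
rotated = stations @ err.T

def baselines(xyz):
    """Matrix of pairwise interstation distances."""
    return np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)

shift = np.abs(rotated - stations).max()                    # km
delta = np.abs(baselines(rotated) - baselines(stations)).max()
print(f"max coordinate shift: {shift*1e3:.3f} m, "
      f"max baseline change: {delta*1e3:.2e} m")
```

A real polar-motion error is not a single rigid rotation of every arc, which is why the abstract finds small but nonzero baseline effects; the sketch shows only the limiting case that explains their smallness.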