NASA Astrophysics Data System (ADS)
Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang
2018-01-01
The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to reduce the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All the simulation results coincide with the theoretical analysis.
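The mechanism the abstract describes can be illustrated with a rough numeric sketch (not from the paper; the signal models and the mixing weight rho standing in for the correlation level are invented): the cross term between simulated angular motion and flexure grows as the two signals become more correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 10001)
theta = np.sin(2 * np.pi * 0.1 * t)             # ship angular motion (arbitrary units)

for rho in (0.0, 0.5, 0.9):
    noise = rng.standard_normal(t.size)
    flexure = rho * theta + np.sqrt(1 - rho**2) * noise  # more correlated as rho grows
    coupling = np.mean(theta * flexure)         # cross term that couples into the result
    print(f"rho={rho:.1f}  coupling term = {coupling:+.3f}")
```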
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1974-01-01
A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.
On the Correct Analysis of the Foundations of Theoretical Physics
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2007-04-01
The problem of truth in science -- the most urgent problem of our time -- is discussed. A correct theoretical analysis of the foundations of theoretical physics is proposed. The principle of the unity of formal logic and rational dialectics is the methodological basis of the analysis. The main result is as follows: the generally accepted foundations of theoretical physics (i.e., Newtonian mechanics, Maxwell electrodynamics, thermodynamics, statistical physics and physical kinetics, the theory of relativity, quantum mechanics) contain a set of logical errors. These errors are explained by the existence of a global cause: they are a collateral and inevitable result of the inductive way of cognizing Nature, i.e., of the movement from the formation of separate concepts to the formation of a system of concepts. Consequently, theoretical physics has entered its greatest crisis, meaning that physics as a science of phenomena gives way to a science of essence (information). Acknowledgment: The books ``Surprises in Theoretical Physics'' (1979) and ``More Surprises in Theoretical Physics'' (1991) by Sir Rudolf Peierls stimulated my 25-year work.
Reliable absolute analog code retrieval approach for 3D measurement
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun
2017-11-01
The wrapped phase of the phase-shifting approach can be unwrapped by using Gray code, but both the wrapped phase error and the Gray code decoding error can result in period jump errors, which lead to gross measurement errors. Therefore, this paper presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain a low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, so that a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis and further validated experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.
1965-01-01
1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
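The error-coefficient idea can be sketched with a toy spectral "library": for concentrations obtained by least squares, c = pinv(A)·e, independent extinction errors of standard deviation sigma propagate to a concentration error of sigma·||row_i(pinv(A))|| for component i. The library values below are invented for illustration; only the propagation rule is standard.

```python
import numpy as np

A = np.array([[1.00, 0.40, 0.10],    # extinction coefficients, wavelength 1
              [0.60, 0.90, 0.30],    # wavelength 2
              [0.20, 0.50, 0.95],    # wavelength 3
              [0.05, 0.20, 0.70]])   # wavelength 4
G = np.linalg.pinv(A)                # least-squares solution operator: c = G @ e
err_coeff = np.linalg.norm(G, axis=1)
print("error coefficients per component:", err_coeff)
# a poorly conditioned library (similar spectra) inflates these coefficients
```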
A theoretical basis for the analysis of multiversion software subject to coincident errors
NASA Technical Reports Server (NTRS)
Eckhardt, D. E., Jr.; Lee, L. D.
1985-01-01
Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.
Error of the slanted edge method for measuring the modulation transfer function of imaging systems.
Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu
2018-03-01
The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. The theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory, and an error model combining noise and edge angles is proposed. We verify the analyses and the model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results confirm the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
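For orientation, a compact sketch of the standard slanted-edge pipeline the abstract analyzes (assumptions of this sketch: a near-vertical edge with known angle, no lens distortion): project pixels onto the edge normal to build an oversampled edge spread function (ESF), differentiate to the line spread function (LSF), and FFT to the MTF.

```python
import numpy as np

def slanted_edge_mtf(img, angle_deg, bins_per_pixel=4):
    rows, cols = img.shape
    y, x = np.mgrid[0:rows, 0:cols]
    # signed distance of each pixel from an edge line through the image centre
    d = (x - cols / 2) - np.tan(np.radians(angle_deg)) * (y - rows / 2)
    idx = np.round(d.ravel() * bins_per_pixel).astype(int)
    idx -= idx.min()
    counts = np.bincount(idx)
    esf = np.bincount(idx, weights=img.ravel()) / np.maximum(counts, 1)  # edge spread
    lsf = np.gradient(esf) * np.hanning(esf.size)  # line spread, windowed against noise
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                            # normalise to DC

yy, xx = np.mgrid[0:64, 0:64]
edge = (xx - 32 > np.tan(np.radians(5.0)) * (yy - 32)).astype(float)  # ideal 5-deg edge
print(slanted_edge_mtf(edge, 5.0)[:5])             # near-flat MTF for an ideal edge
```

An inaccurate `angle_deg` misaligns the projection and blurs the ESF, which is exactly the edge-angle error mechanism the paper examines.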
Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.
2004-01-01
The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yueqi; Lava, Pascal; Reu, Phillip
2015-12-23
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of the displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
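The stated error drivers can be turned into a back-of-envelope estimator. This is a hedged sketch, not the paper's derivation: the sqrt(2) factor (equal noise in reference and deformed images) and the omission of subpixel-position and interpolation effects are assumptions of the sketch.

```python
import numpy as np

def dic_random_error(subset, noise_std):
    """Approximate 1-sigma random error (pixels) of the x-displacement."""
    gx = np.gradient(subset.astype(float), axis=1)   # intensity gradient in x
    return np.sqrt(2.0) * noise_std / np.sqrt(np.sum(gx**2))

rng = np.random.default_rng(1)
speckle = rng.uniform(0, 255, size=(31, 31))         # synthetic speckle subset
print(dic_random_error(speckle, noise_std=2.0))
# higher noise or a weaker gradient sum -> larger random displacement error
```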
Astigmatism in reflector antennas.
NASA Technical Reports Server (NTRS)
Cogdell, J. R.; Davis, J. H.
1973-01-01
Astigmatic phase error in large parabolic reflector antennas is discussed. A procedure for focusing an antenna and diagnosing the presence and degree of astigmatism is described. Theoretical analysis is conducted to determine the nature of this error in such antennas.
Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert
2009-03-10
In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
Uncertainties of predictions from parton distributions II: theoretical errors
NASA Astrophysics Data System (ADS)
Martin, A. D.; Roberts, R. G.; Stirling, W. J.; Thorne, R. S.
2004-06-01
We study the uncertainties in parton distributions, determined in global fits to deep inelastic and related hard scattering data, due to so-called theoretical errors. Amongst these, we include potential errors due to the change of perturbative order (NLO to NNLO), ln(1/x) and ln(1-x) effects, absorptive corrections and higher-twist contributions. We investigate these uncertainties both by including explicit corrections to our standard global analysis and by examining the sensitivity to changes of the x, Q², W² cuts on the data that are fitted. In this way we expose those kinematic regions where the conventional DGLAP description is inadequate. As a consequence we obtain a set of NLO, and of NNLO, conservative partons where the data are fully consistent with DGLAP evolution, but over a restricted kinematic domain. We also examine the potential effects of such issues as the choice of input parametrisation, heavy target corrections, assumptions about the strange quark sea and isospin violation. Hence we are able to compare the theoretical errors with those uncertainties due to errors on the experimental measurements, which we studied previously. We use W and Higgs boson production at the Tevatron and the LHC as explicit examples of the uncertainties arising from parton distributions. For many observables the theoretical error is dominant, but for the cross section for W production at the Tevatron both the theoretical and experimental uncertainties are small, and hence the NNLO prediction may serve as a valuable luminosity monitor.
Human factors analysis and classification system-HFACS.
DOT National Transportation Integrated Search
2000-02-01
Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident : reporting systems are not designed around any theoretical framework of human error. As a result, most : accident databases are not conduci...
NASA Astrophysics Data System (ADS)
Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin
2015-05-01
The linear time-invariant assumption for determining acoustic source characteristics -- the source strength and the source impedance in the frequency domain -- has been proved reasonable in the design of exhaust systems. Different methods have been proposed for their identification, and the multi-load method is widely used for its convenience in varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator of the accuracy of the results. It was found that, for a certain load length, the load resistance peaks at frequencies where the load length equals odd multiples of a quarter wavelength, which produces the maximum error in source impedance identification. Therefore, load impedances in the frequency ranges around these odd quarter-wavelength points should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
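The multi-load identification itself reduces to a small linear problem at each frequency. A hedged sketch (invented numbers; the standard source model p_i = p_s·Z_li/(Z_s + Z_li) is assumed): each load gives one equation that is linear in the unknowns (Z_s, p_s), and four loads give an overdetermined system solved by least squares, as in the four-load method the abstract builds on.

```python
import numpy as np

rng = np.random.default_rng(1)
Zs_true, ps_true = 3.0 + 4.0j, 10.0 + 0.0j       # "true" source impedance / strength
Z_loads = np.array([1 + 0.5j, 2 - 1j, 5 + 2j, 0.5 + 3j])
p = ps_true * Z_loads / (Zs_true + Z_loads)      # pressures measured with each load
p += 0.01 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))  # noise

# p_i*(Z_s + Z_li) = p_s*Z_li  ->  p_i*Z_s - Z_li*p_s = -p_i*Z_li
A = np.column_stack([p, -Z_loads])               # unknowns: (Z_s, p_s)
b = -p * Z_loads
Zs_est, ps_est = np.linalg.lstsq(A, b, rcond=None)[0]
print(Zs_est, ps_est)
```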
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has recently been reported in experiments to reach a few millimeters in the horizontal components and sub-centimeter in the vertical component when measuring seismic motion, which is several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly surprising performance of high-rate PPP within a short period of time, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations. The theoretical analysis clearly indicates that high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and the simulated results are fully consistent with, and thus unambiguously confirm, the reported high precision of high-rate PPP, which has been further affirmed here by real-data experiments, indicating that high-rate PPP can indeed achieve millimeter-level precision in the horizontal components and sub-centimeter-level precision in the vertical component when measuring motion within a short period of time. The simulation results clearly show that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data also indicate that the precision of PPP solutions can degrade to the centimeter level in both the horizontal and vertical components if the geometry of the satellites is rather poor, with a large DOP value.
[Determination of the error of aerosol extinction coefficient measured by DOAS].
Si, Fu-qi; Liu, Jian-guo; Xie, Pin-hua; Zhang, Yu-jun; Wang, Mian; Liu, Wen-qing; Hiroaki, Kuze; Liu, Cheng; Nobuo, Takeuchi
2006-10-01
The method for determining the error of the aerosol extinction coefficient measured by differential optical absorption spectroscopy (DOAS) is described. Some factors that can introduce errors into the result, such as variation of the source, integral time, atmospheric turbulence, calibration of system parameters, displacement of the system, and back-scattering of particles, are analyzed. The error of the aerosol extinction coefficient, 0.03 km⁻¹, is determined by theoretical analysis and practical measurement.
The Human Factors Analysis and Classification System : HFACS : final report.
DOT National Transportation Integrated Search
2000-02-01
Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive t...
NASA Technical Reports Server (NTRS)
Federhofer, J. A.
1974-01-01
Laboratory data verifying the pulse quaternary modulation (PQM) theoretical predictions are presented. The first laboratory PQM laser communication system was successfully fabricated, integrated, tested, and demonstrated. System bit error rate tests were performed and, in general, indicated approximately a 2 dB degradation from the theoretically predicted results. These tests indicated that no gross errors were made in the initial theoretical analysis of PQM. The relative ease with which the entire PQM laboratory system was integrated and tested indicates that PQM is a viable candidate modulation scheme for an operational 400 Mbps baseband laser communication system.
Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng
2015-08-15
This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. The fluctuation of cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature could be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of scale factor with respect to the corresponding cell temperature changes. Our experimental results show a good agreement with our theoretical analysis. Under our experimental condition, the error has been reduced significantly compared with that when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is the reduction of the scale factor of the magnetometer. However, according to our analysis, it only has a minor effect on the sensitivity under proper operating parameters.
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. The phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
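A simplified two-compartment illustration of the mechanism (the calibration constants below are assumed for illustration, not taken from the paper): DXA %fat follows from linear interpolation of the measured R value between the pure-fat and lean R values, so shifting the lean R value to mimic higher pediatric hydration perturbs the %fat estimate.

```python
R_FAT, R_LEAN_ADULT = 1.21, 1.40                 # illustrative calibration constants

def percent_fat(R, r_lean=R_LEAN_ADULT):
    return 100.0 * (r_lean - R) / (r_lean - R_FAT)

R_measured = 1.35
for r_lean in (1.40, 1.395):                     # adult vs. hydration-shifted lean R
    print(f"lean R = {r_lean}: %fat = {percent_fat(R_measured, r_lean):.1f}")
```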
Digital Photon Correlation Data Processing Techniques
1976-07-01
velocimeter signals. During the conduct of the contract, a complementary theoretical effort with the NASA Langley Research Center was in progress (NAS1-13140). In an earlier, very brief contract with NASA Langley (NAS1-13140), a simplified variability error analysis was performed.
Error analysis of finite element method for Poisson–Nernst–Planck equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yuzhou; Sun, Pengtao; Zheng, Bin
A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H¹) and L²(H¹) norms and suboptimal error estimates in the L∞(L²) norm with linear elements, and optimal error estimates in the L∞(L²) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
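Such theoretical rates are typically checked numerically by computing errors on successively refined meshes and estimating the observed order. A small sketch with invented error values (the rule p = log(e1/e2)/log(h1/h2) is standard):

```python
import numpy as np

h = np.array([1/8, 1/16, 1/32, 1/64])                   # mesh sizes
err = np.array([2.1e-2, 5.4e-3, 1.37e-3, 3.45e-4])      # made-up norm errors

p = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:]) # observed convergence orders
print(p)                                                # approaches ~2 for this data
```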
NASA Astrophysics Data System (ADS)
Gao, Lingyu; Li, Xinghua; Guo, Qianrui; Quan, Jing; Hu, Zhengyue; Su, Zhikun; Zhang, Dong; Liu, Peilu; Li, Haopeng
2018-01-01
The internal structure of an off-axis three-mirror system is typically complex. Mirror installation errors in assembly affect the imaging line of sight and further degrade the image quality. Due to the complexity of the optical path in an off-axis three-mirror optical system, a straightforward theoretical analysis of the variations of the imaging line of sight is extremely difficult. In order to simplify the theoretical analysis, an equivalent single-mirror system is proposed in this paper. In addition, the mathematical model of the single-mirror system is established and accurate expressions for the imaging coordinates are derived. Using the simulation software ZEMAX, the off-axis three-mirror model and the single-mirror model are both established. By adjusting the position of the mirror and simulating the line-of-sight rotation of the optical system, the variations of the imaging coordinates are clearly observed. The final simulation results show that in the off-axis three-mirror system the sensitivity of the imaging coordinate to the rotation of the line of sight is approximately 30 μm/″, while in the single-mirror system it is 31.5 μm/″. Compared with the simulation results of the off-axis three-mirror model, the 5% relative error of the single-mirror model satisfies the requirement of equivalent analysis and verifies its validity. This paper presents a new method for analyzing how mirror installation errors in an off-axis three-mirror system influence the imaging line of sight; in this analysis the off-axis three-mirror model is fully equivalent to the single-mirror model.
Error Analysis and Validation for InSAR Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
The InSAR technique is an important method for large-area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among different factors, which directly characterises the relationship between slant range error and height measurement error. A theory-based analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the InSAR error model induced by slant range was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
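As a minimal first-order illustration of the no-coupling propagation idea (flat-earth geometry and all parameter values assumed, much simpler than the paper's full InSAR model): with h = H - r·cos(theta), a slant-range error dr maps to a height error of -cos(theta)·dr.

```python
import numpy as np

H = 514e3                     # platform altitude in m (TanDEM-X-like, assumed)
theta = np.radians(35.0)      # look angle (assumed)
sigma_r = 1.0                 # 1-sigma slant-range error in m (assumed)

# h = H - r*cos(theta)  =>  dh/dr = -cos(theta)
sigma_h = np.cos(theta) * sigma_r
print(f"{sigma_h:.2f} m of height error per {sigma_r:.1f} m of slant-range error")
```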
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
Frame synchronization performance and analysis
NASA Technical Reports Server (NTRS)
Aguilera, C. S. R.; Swanson, L.; Pitt, G. H., III
1988-01-01
The analysis used to generate the theoretical models of frame synchronizer performance is described for various frame lengths and marker lengths at various signal-to-noise ratios and bit error tolerances.
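A minimal model of the quantities involved (a sketch; the report's actual models may be more detailed): a marker of length n is declared found when at most t of its bits disagree, which gives binomial expressions for both correct detection and false synchronization on random data.

```python
from math import comb

def p_detect(n, p_bit, t):
    """True marker accepted with at most t of n bits in error."""
    return sum(comb(n, k) * p_bit**k * (1 - p_bit)**(n - k) for k in range(t + 1))

def p_false(n, t):
    """Random data mimics the marker (each bit matches with probability 1/2)."""
    return sum(comb(n, k) for k in range(t + 1)) / 2**n

print(p_detect(32, 0.01, 2), p_false(32, 2))
# raising the tolerance t improves detection but raises the false-sync rate
```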
Free space optical ultra-wideband communications over atmospheric turbulence channels.
Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu
2010-08-02
A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.
Stochastic Models of Human Errors
NASA Technical Reports Server (NTRS)
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems; accidents and system failures are more often than not traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process with mathematical models is key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.
NASA Astrophysics Data System (ADS)
Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.
2014-12-01
The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Improved Statistics for Genome-Wide Interaction Analysis
Ueki, Masao; Cordell, Heather J.
2012-01-01
Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an intuitive and systematic error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted, and excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are shown to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Pointing error analysis of Risley-prism-based beam steering system.
Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng
2014-09-01
Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and to evaluate the allowances of the error sources for a given pointing accuracy. It is found that the assembly errors of the second prism result in more pronounced pointing errors than those of the first. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is required. These conclusions provide a theoretical foundation for practical work.
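The vector form of Snell's law that such ray tracing builds on can be written compactly (a sketch; the prism geometry, tilt value, and index are assumptions for illustration): with unit incident direction d, unit surface normal n pointing toward the incident side, and index ratio mu = n1/n2, the refracted direction is mu·d + (mu·cos_i - cos_t)·n.

```python
import numpy as np

def refract(d, n, mu):
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -np.dot(d, n)
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    return mu * d + (mu * cos_i - np.sqrt(1.0 - sin2_t)) * n

# a small prism-face tilt perturbs the normal and hence the exit direction:
d0 = np.array([0.0, 0.0, 1.0])
for tilt in (0.0, 1e-4):                 # radians, assumed tilt error
    n = np.array([np.sin(tilt), 0.0, -np.cos(tilt)])
    print(refract(d0, n, 1.0 / 1.517))   # air -> glass, BK7-like index
```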
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
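One plausible shape for such a model (a hedged sketch, not the paper's actual model; all data values are invented) is a Poisson rate model fitted by maximum likelihood, with the file count as exposure and workload as a driver:

```python
import numpy as np
from scipy.optimize import minimize

files = np.array([120, 200, 90, 150, 300])        # files radiated per period (invented)
workload = np.array([0.2, 0.8, 0.1, 0.5, 0.9])    # subjective workload score (invented)
errors = np.array([1, 5, 0, 2, 8])                # observed command file errors (invented)

def neg_loglik(beta):
    lam = files * np.exp(beta[0] + beta[1] * workload)  # expected errors, files as exposure
    return np.sum(lam - errors * np.log(lam))           # Poisson NLL up to a constant

fit = minimize(neg_loglik, x0=np.zeros(2))
print(fit.x)    # per-file baseline log-rate and workload effect
```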
Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong
2014-01-01
We discuss and analyze an H¹-Galerkin mixed finite element (H¹-GMFE) method to find the numerical solution of the time-fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into a lower-order coupled system and then formulate an H¹-GMFE scheme with two important variables. We discretize the Caputo time-fractional derivatives using finite difference methods and approximate the spatial direction by applying the H¹-GMFE method. Based on the theoretical error analysis in the L²-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive optimal error results for the scalar unknown in the H¹-norm. Moreover, we derive and analyze the stability of the H¹-GMFE scheme and give a priori error estimates in two- and three-dimensional cases. In order to verify our theoretical analysis, we give some numerical results computed with Matlab. PMID:25184148
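The Caputo discretisation step can be illustrated with the textbook L1 finite-difference formula for order 0 < alpha < 1 (a sketch; the paper's specific scheme may differ): D^a u(t_n) ≈ dt^(-a)/Gamma(2-a) · sum_j b_j (u[n-j] - u[n-j-1]) with b_j = (j+1)^(1-a) - j^(1-a).

```python
import numpy as np
from math import gamma

def caputo_L1(u, dt, alpha):
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1)**(1 - alpha) - j**(1 - alpha)    # L1 weights
    diffs = u[n - j] - u[n - j - 1]              # backward differences from t_n
    return np.sum(b * diffs) * dt**(-alpha) / gamma(2 - alpha)

# check against the exact Caputo derivative of u = t^2: 2*t^(2-a)/Gamma(3-a)
alpha, T, N = 0.5, 1.0, 1000
t = np.linspace(0.0, T, N + 1)
print(caputo_L1(t**2, T / N, alpha), 2 * T**(2 - alpha) / gamma(3 - alpha))
```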
NASA Astrophysics Data System (ADS)
He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo
2010-11-01
For measuring the transmittance of large-aperture optical systems, a novel sub-aperture scanning machine with double rotating arms (SSMDA) was designed to obtain a sub-aperture beam spot. Full-aperture transmittance measurements of an optical system can be achieved by applying sub-aperture beam-spot scanning technology. The mathematical model of the SSMDA, based on a homogeneous coordinate transformation matrix, is established to develop a detailed methodology for analyzing the beam-spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) length systematic errors and (2) rotational systematic errors. With the systematic errors of the parameters given beforehand, the computed scanning errors are between -0.007 and 0.028 mm for scanning radii no larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…
Analyzing human errors in flight mission operations
NASA Technical Reports Server (NTRS)
Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef
1993-01-01
A long-term program is in progress at JPL to reduce the cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
NASA Astrophysics Data System (ADS)
Jia, Mei-Hui; Wang, Cheng-Lin; Ren, Bin
2017-07-01
The stress, strain, and vibration characteristics of rotor parts change significantly under high acceleration, and manufacturing error is one of the most important reasons. However, research on this problem has not yet been carried out. A rotor with an acceleration of 150,000 g is considered as the object of study, and the effects of manufacturing errors on the rotor's mechanical properties and dynamic characteristics are examined through the selection of the key affecting factors. By establishing the force balance equation of a rotor infinitesimal unit, a theoretical model of stress calculation based on the slice method is proposed, and a formula for the rotor stress at any point is derived. A finite element model (FEM) of a rotor with holes is established with manufacturing errors. The changes of the stresses and strains of the rotor under parallelism and symmetry errors are analyzed, which verifies the validity of the theoretical model. A pre-stressed modal analysis is performed based on the aforementioned static analysis, and the key dynamic characteristics are analyzed. The results demonstrate that, as the parallelism and symmetry errors increase, the equivalent stresses and strains of the rotor increase slowly and linearly, with the highest growth rate not exceeding 4%, and the maximum change rate of the natural frequency is 0.1%. The rotor vibration mode is not significantly affected. The FEM construction method for a rotor with manufacturing errors can be utilized for quantitative research on rotor characteristics, which will assist in the active control of rotor component reliability under high acceleration.
Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing
Matochko, Wadim L.; Derda, Ratmir
2013-01-01
Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||n_i||, where n_i is the copy number of the ith sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N × N matrix and a stochastic sampling operator (S_a). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of S_a and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = S_a I_N, where I_N is an N × N identity matrix. Any bias in sequencing changes I_N to a non-identity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
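A toy numeric version of this operator framework (diversity N = 5 and all copy numbers invented): the library is a frequency vector, sampling is a stochastic draw, and censorship is a diagonal matrix that downweights specific sequences.

```python
import numpy as np

rng = np.random.default_rng(2)
n = np.array([1000, 500, 200, 50, 5])          # copy numbers of N=5 sequences
reads = 400                                    # sequencing depth

sampled = rng.multinomial(reads, n / n.sum())  # stochastic sampling, S_a acting on n
CEN = np.diag([1.0, 1.0, 0.1, 1.0, 1.0])       # sequence 3 censored in sequencing
observed = CEN @ sampled                       # biased sequencing output
print(sampled, observed)
```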
Mobility and Position Error Analysis of a Complex Planar Mechanism with Redundant Constraints
NASA Astrophysics Data System (ADS)
Sun, Qipeng; Li, Gangyan
2018-03-01
Nowadays, mechanisms with redundant constraints are common and have attracted much attention for their merits. The role of redundant constraints in a mechanical system is analyzed in this paper. An analysis method for planar linkages with a repetitive structure is proposed to obtain the number and type of constraints. According to the differences in applications and constraint characteristics, redundant constraints are divided into theoretical planar redundant constraints and space-planar redundant constraints, and a calculation formula for the number of redundant constraints and a method for judging their type are derived. The influence of redundant constraints on mechanical performance is then analyzed for a complex mechanism with redundant constraints. Combining theoretical derivation and simulation, an analysis method for the position error of complex mechanisms with redundant constraints is put forward, pointing out how to eliminate or reduce the influence of redundant constraints.
Optimally weighted least-squares steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2007-02-01
Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.
Individual differences in political ideology are effects of adaptive error management.
Petersen, Michael Bang; Aarøe, Lene
2014-06-01
We apply error management theory to the analysis of individual differences in the negativity bias and political ideology. Using principles from evolutionary psychology, we propose a coherent theoretical framework for understanding (1) why individuals differ in their political ideology and (2) the conditions under which these individual differences influence and fail to influence the political choices people make.
Error analysis of the Golay3 optical imaging system.
Wu, Quanying; Fan, Junliu; Wu, Feng; Zhao, Jun; Qian, Lin
2013-05-01
We use aberration theory to derive a generalized pupil function of the Golay3 imaging system when astigmatism exists in its submirrors. Theoretical analysis and numerical simulation using ZEMAX show that the point spread function (PSF) and the modulation transfer function (MTF) of the Golay3 sparse aperture system change periodically when there are piston errors. When the peak-to-valley value of the wavefront (PV(tilt)) due to the tilt error increases from zero to λ, the PSF and the MTF change significantly, and the direction of change is determined by the location of the submirror with the tilt error. When PV(tilt) becomes larger than λ, the PSF and the MTF remain unchanged. We calculate the peak signal-to-noise ratio (PSNR) resulting from the piston and tilt errors according to the Strehl ratio, and show that the PSNR decreases as the errors increase.
Model error in covariance structure models: Some implications for power and Type I error
Coffman, Donna L.
2010-01-01
The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302
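The MacCallum, Browne, and Sugawara (1996) power procedure referenced here can be sketched directly (standard formulation, with the conventional alternative RMSEA of .08 assumed for the example): the noncentrality parameter is (N-1)·df·ε², and power follows from the noncentral chi-square distribution.

```python
from scipy.stats import ncx2

def rmsea_power(N, df, eps0=0.05, eps_a=0.08, alpha=0.05):
    """Power of the RMSEA test of close fit (H0: eps = eps0 vs. eps = eps_a)."""
    ncp0 = (N - 1) * df * eps0**2
    ncp_a = (N - 1) * df * eps_a**2
    crit = ncx2.ppf(1 - alpha, df, ncp0)      # rejection point under H0
    return 1 - ncx2.cdf(crit, df, ncp_a)

print(rmsea_power(N=200, df=50))
# for the test of exact fit, set eps0 = 0 so the H0 distribution is central
```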
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon
2013-01-01
We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
NASA Astrophysics Data System (ADS)
Waeldele, F.
1983-01-01
The influence of sample-shape deviations on the measurement uncertainties and the optimization of computer-aided coordinate measurement were investigated for a circle and a cylinder. Using the complete error propagation law in matrix form, the parameter uncertainties are calculated, taking the correlation between the measurement points into account. Theoretical investigations show that the measuring points have to be distributed equidistantly and that, for a cylindrical body, a measuring-point distribution along a cross section is better than one along a helical line. The theoretically obtained expressions for calculating the uncertainties prove to be a good basis for estimation; the simple error theory is not satisfactory for this purpose. The complete statistical data analysis theory helps to avoid serious measurement errors and to adjust the number of measuring points to the required measuring uncertainty.
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error.
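The flavor of such an algorithm can be sketched as follows (a hedged illustration, not the paper's exact update rule; data, kernel, and schedule are assumptions): sample a pair, compute the pairwise least-squares residual, and update the kernel expansion coefficients with a decaying step size and shrinkage from the regularizer.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
y = X[:, 0] + 0.1 * rng.standard_normal(100)     # scores defining the target ranking

K = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # RBF Gram matrix
a = np.zeros(100)                                # coefficients of f = sum_k a_k K(x_k, .)
lam = 1e-3                                       # regularization parameter

for step in range(1, 20001):
    i, j = rng.integers(0, 100, size=2)          # sample a pair
    resid = (K[i] - K[j]) @ a - (y[i] - y[j])    # f(x_i) - f(x_j) minus target gap
    eta = 1.0 / np.sqrt(step)                    # decaying step size
    a *= (1 - eta * lam)                         # shrinkage from the regularizer
    a[i] -= eta * resid
    a[j] += eta * resid
```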
Power Spectral Density Error Analysis of Spectral Subtraction Type of Speech Enhancement Methods
NASA Astrophysics Data System (ADS)
Händel, Peter
2006-12-01
A theoretical framework for analysis of speech enhancement algorithms is introduced for performance assessment of spectral subtraction type of methods. The quality of the enhanced speech is related to physical quantities of the speech and noise (such as stationarity time and spectral flatness), as well as to design variables of the noise suppressor. The derived theoretical results are compared with the outcome of subjective listening tests as well as successful design strategies, performed by independent research groups.
NASA Technical Reports Server (NTRS)
Gates, Ordway B., Jr.; Woodling, C. H.
1959-01-01
A theoretical analysis of the longitudinal behavior of an automatically controlled supersonic interceptor during the attack phase against a nonmaneuvering target is presented. Control of the interceptor's flight path is obtained by use of a pitch rate command system. Topics include lift and pitching moment, effects of initial tracking errors, normal acceleration limiting, limitations on control surface rate and deflection, and effects of neglecting forward velocity changes of the interceptor during the attack phase.
Research the Gait Characteristics of Human Walking Based on a Robot Model and Experiment
NASA Astrophysics Data System (ADS)
He, H. J.; Zhang, D. N.; Yin, Z. W.; Shi, J. H.
2017-02-01
In order to research the gait characteristics of human walking in different ways, a robot model with a single degree of freedom is put forward in this paper. The control models of the robot are established with the Matlab/Simulink toolbox. The gait characteristics of walking straight, uphill, turning, and going up and down stairs are analyzed with these control models. To verify the correctness of the theoretical analysis, an experiment was carried out. The comparison shows that the theoretical results are in good agreement with the experimental ones. The reasons for the amplitude and phase errors are analyzed and improved methods are given. The robot model and the experimental approach can provide a foundation for further research on the various gait characteristics of exoskeleton robots.
Acoustic evidence for phonologically mismatched speech errors.
Gormley, Andrea
2015-04-01
Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated, or mismatched, errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.
Si, Guo-Ning; Chen, Lan; Li, Bao-Guo
2014-04-01
Based on the Kawakita powder compression equation, a general theoretical model for predicting the compression characteristics of multi-component pharmaceutical powders with different mass ratios was developed. Uniaxial flat-face compression tests of powdered lactose, starch, and microcrystalline cellulose were carried out separately, and the Kawakita equation parameters of the powder materials were obtained. Uniaxial flat-face compression tests of powder mixtures of lactose, starch, microcrystalline cellulose, and sodium stearyl fumarate with five mass ratios were then conducted, through which the correlation between mixture density and loading pressure and the Kawakita equation curves were obtained. Finally, the theoretical prediction values were compared with the experimental results. The analysis showed that the errors in predicting mixture densities were less than 5.0% and the errors in the Kawakita vertical coordinate were within 4.6%, which indicated that the theoretical model can be used to predict the direct compaction characteristics of multi-component pharmaceutical powders.
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which includes a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.
Methods to achieve accurate projection of regional and global raster databases
Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.
2002-01-01
This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
Metering error quantification under voltage and current waveform distortion
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran
2017-09-01
With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion results in metering errors in smart meters. Because of the negative effects on metering accuracy and fairness, the combined energy-metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a method for quantifying the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a method for quantifying the metering accuracy error is also proposed. By analyzing the mode error and the accuracy error, a comprehensive error analysis method is presented that is suitable for renewable energy sources and nonlinear loads. The proposed method has been validated by simulation.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (a lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
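As a hedged illustration of the voxel-wise likelihood approach, the sketch below computes a likelihood ratio for a simple Gaussian model with known noise; the model, the evidence threshold, and the simulated data are assumptions chosen to show why the per-comparison Type I error rate can be driven toward zero, not the authors' actual fMRI pipeline.

```python
# A minimal sketch of voxel-wise likelihood-ratio evidence, assuming a
# Gaussian signal model with known variance; LR > 8 is one conventional
# "moderately strong evidence" benchmark in the Likelihood paradigm.
import numpy as np
from scipy.stats import norm

def likelihood_ratio(y_bar, n, sigma, delta):
    """LR for H1: mean = delta versus H0: mean = 0, given a voxel's
    sample mean y_bar over n scans with noise standard deviation sigma."""
    se = sigma / np.sqrt(n)
    return norm.pdf(y_bar, loc=delta, scale=se) / norm.pdf(y_bar, loc=0.0, scale=se)

rng = np.random.default_rng(0)
null_voxels = rng.normal(0.0, 1.0, size=10_000)        # all voxels truly null
lr = likelihood_ratio(null_voxels, n=1, sigma=1.0, delta=1.0)
print("fraction of null voxels with LR > 8:", np.mean(lr > 8))  # small Type I rate
```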
A Theoretical Foundation for the Study of Inferential Error in Decision-Making Groups.
ERIC Educational Resources Information Center
Gouran, Dennis S.
To provide a theoretical base for investigating the influence of inferential error on group decision making, current literature on both inferential error and decision making is reviewed and applied to the Watergate incident. Although groups tend to make fewer inferential errors because members' inferences are generally not biased in the same…
2014-01-01
In adsorption studies, describing the sorption process and evaluating the best-fitting isotherm model is a key analysis for investigating the theoretical hypothesis. Hence, numerous statistical analyses have been used extensively to estimate the validity of experimental equilibrium adsorption values against predicted equilibrium values. In the present study, several statistical error analyses were carried out to evaluate the fitness of the adsorption isotherm models, namely the Pearson correlation, the coefficient of determination, and the Chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for linearised and non-linearised models. The adsorption of phenol onto natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2 °C. To estimate the isotherm parameters and obtain a holistic view of the analysis, linear and non-linear isotherm models were compared. The results revealed which of the above-mentioned error and statistical functions best determine the best-fitting isotherm. PMID:25018878
One way Doppler extractor. Volume 1: Vernier technique
NASA Technical Reports Server (NTRS)
Blasco, R. W.; Klein, S.; Nossen, E. J.; Starner, E. R.; Yanosov, J. A.
1974-01-01
A feasibility analysis, trade-offs, and implementation for a One Way Doppler Extraction system are discussed. A Doppler error analysis shows that quantization error is a primary source of Doppler measurement error. Several competing extraction techniques are compared and a Vernier technique is developed which obtains high Doppler resolution with low speed logic. Parameter trade-offs and sensitivities for the Vernier technique are analyzed, leading to a hardware design configuration. A detailed design, operation, and performance evaluation of the resulting breadboard model is presented which verifies the theoretical performance predictions. Performance tests have verified that the breadboard is capable of extracting Doppler, on an S-band signal, to an accuracy of less than 0.02 Hertz for a one second averaging period. This corresponds to a range rate error of no more than 3 millimeters per second.
Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope
NASA Astrophysics Data System (ADS)
Zheng, Yue; Zhang, Chunxi; Li, Lijing
2018-03-01
The excess relative intensity noise (RIN) generated from broadband sources degrades the angular-random-walk performance of a fiber-optic gyroscope dramatically. Many methods have been proposed and have managed to suppress the excess RIN. However, the properties of the excess RIN under the influences of different optical errors in the fiber-optic gyroscope have not been systematically investigated. Therefore, it is difficult for the existing RIN-suppression methods to achieve the optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by the raised-cosine-type ripples in the optical spectrum are elaborately investigated. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influences of different optical-spectrum errors. Potentially, it can be utilized to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.
Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.
Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth
2016-06-01
Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
Error analysis in inverse scatterometry. I. Modeling.
Al-Assaad, Rayan M; Byrne, Dale M
2007-02-01
Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made using the experimental equilibrium data of basic red 9 sorption by activated carbon. The r(2) was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r(2)), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms as well as the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and the predicted isotherms. In the case of the three-parameter isotherms, r(2) was found to be the best error function for minimizing the error distribution between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data when selecting the optimum isotherm. A coefficient of non-determination, K(2), was explained and was found to be very useful in identifying the best error function when selecting the optimum isotherm.
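For reference, the sketch below implements common textbook forms of the six error functions named above; exact constant factors vary across the adsorption literature, so these definitions (and the variable names q_exp, q_cal, p) should be treated as assumptions rather than the paper's precise formulas.

```python
# A minimal sketch of the six isotherm error functions; q_exp and q_cal are
# experimental and model-predicted uptakes, and p is the number of fitted
# isotherm parameters.
import numpy as np

def error_functions(q_exp, q_cal, p):
    n = len(q_exp)
    resid = q_exp - q_cal
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((q_exp - q_exp.mean())**2)
    return {
        "r2":     1.0 - ss_res / ss_tot,                          # coefficient of determination
        "ERRSQ":  ss_res,                                          # sum of squared errors
        "EABS":   np.sum(np.abs(resid)),                           # sum of absolute errors
        "ARE":    100.0 / n * np.sum(np.abs(resid / q_exp)),       # average relative error
        "HYBRID": 100.0 / (n - p) * np.sum(resid**2 / q_exp),      # hybrid fractional error
        "MPSD":   100.0 * np.sqrt(np.sum((resid / q_exp)**2) / (n - p)),
    }

q_exp = np.array([12.1, 18.4, 22.9, 25.3, 26.8])   # hypothetical data
q_cal = np.array([11.7, 18.9, 23.4, 25.0, 26.5])   # hypothetical model output
print(error_functions(q_exp, q_cal, p=2))          # e.g. a two-parameter isotherm
```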
Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil
Gao, Zhongxing; Zhang, Yonggang; Zhang, Yunhao
2016-01-01
Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environment, especially in vibrational environment, is necessary for its practical applications. This paper presents a mathematical model for IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of quadrupolar fiber coil measured by stress analyzer. The definition of asymmetry of strain distribution (ASD) is given in the paper to evaluate the winding quality of the coil. The established model reveals that the high ASD and the variable fiber elastic modulus in large strain situation are two dominant reasons that give rise to nonreciprocity phase shift in IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that vibration errors of both open-loop and closed-loop IFOG increase with increasing vibrational amplitude, vibrational frequency, and ASD. Finally, an estimation of vibration-induced IFOG errors in aircraft is made according to the proposed model. Our work is meaningful in designing IFOG coils to achieve a better anti-vibration performance. PMID:27455257
Determination of suitable drying curve model for bread moisture loss during baking
NASA Astrophysics Data System (ADS)
Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.
2013-03-01
This study presents mathematical modelling of bread moisture loss, or drying, during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture-loss curve equation, 11 different semi-theoretical and empirical models were applied to the experimental data and compared according to their correlation coefficients, chi-squared values, and root-mean-square errors, as obtained by nonlinear regression analysis. Consequently, of all the drying models, the Page model was selected as the best one, according to its correlation coefficient, chi-squared, and root-mean-square-error values and its simplicity. The mean absolute estimation error of the proposed model by linear regression analysis for the natural and forced convection modes was 2.43% and 4.74%, respectively.
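A minimal sketch of the selection step for the winning model is given below: it fits the Page equation MR = exp(-k t^n) to moisture-ratio data and scores the fit with the three statistics used in the study; the data points and the scipy-based fitting routine are illustrative assumptions.

```python
# A minimal sketch of fitting the Page thin-layer drying model and scoring
# it; the baking-time/moisture-ratio data are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    """Page model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

t  = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)      # time, min
mr = np.array([1.00, 0.93, 0.84, 0.74, 0.65, 0.57, 0.50])   # moisture ratio

(k, n), _ = curve_fit(page, t, mr, p0=(0.01, 1.0))
pred = page(t, k, n)
r    = np.corrcoef(mr, pred)[0, 1]                 # correlation coefficient
chi2 = np.sum((mr - pred)**2) / (len(mr) - 2)      # reduced chi-squared, 2 params
rmse = np.sqrt(np.mean((mr - pred)**2))            # root-mean-square error
print(f"k={k:.4f}, n={n:.3f}, r={r:.4f}, chi2={chi2:.2e}, RMSE={rmse:.4f}")
```

Repeating this for each of the candidate models and ranking by the three statistics reproduces the selection logic described above.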
NASA Astrophysics Data System (ADS)
Liu, Zhigang; Song, Wenguang; Kochan, Orest; Mykyichuk, Mykola; Jun, Su
2017-07-01
A method for the theoretical analysis of the temperature ranges in which the error due to acquired thermoelectric inhomogeneity of thermocouple legs manifests most strongly is proposed in this paper. The drift of the reference function of type K thermocouples in ceramic insulation, consisting of 1.2 mm diameter thermoelements, after exposure to 800°C for 10 000 h in an oxidizing atmosphere (air), is analyzed. The method takes into account various operating conditions to determine the optimal conditions for studying inhomogeneous thermocouples. The method can be applied to other types of thermocouples, taking into account their specific characteristics and the conditions to which they have been exposed.
Zhang, Huisheng; Zhang, Ying; Xu, Dongpo; Liu, Xiaodong
2015-06-01
It has been shown that, by adding a chaotic sequence to the weight update during the training of neural networks, the chaos injection-based gradient method (CIBGM) is superior to the standard backpropagation algorithm. This paper presents the theoretical convergence analysis of CIBGM for training feedforward neural networks. We consider both the case of batch learning as well as the case of online learning. Under mild conditions, we prove the weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, the strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
Statistics of the residual refraction errors in laser ranging data
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.
Investigation of advanced phase-shifting projected fringe profilometry techniques
NASA Astrophysics Data System (ADS)
Liu, Hongyu
1999-11-01
The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurement of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle, with some new approaches, three important problems that severely limit the capability and the accuracy of the PSPFP technique. Chapter 1 briefly introduces background information on the PSPFP technique, including its measurement principles, basic features, and related techniques; the objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of absolute PSPFP measurement. The mathematical formulations and basic requirements of absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique for combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with the two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.
NASA Technical Reports Server (NTRS)
Joiner, J.; Dee, D. P.
1998-01-01
One of the outstanding problems in data assimilation has been and continues to be how best to utilize satellite data while balancing the tradeoff between accuracy and computational cost. A number of weather prediction centers have recently achieved remarkable success in improving their forecast skill by changing the method by which satellite data are assimilated into the forecast model from the traditional approach of assimilating retrievals to the direct assimilation of radiances in a variational framework. The operational implementation of such a substantial change in methodology involves a great number of technical details, e.g., pertaining to quality control procedures, systematic error correction techniques, and tuning of the statistical parameters in the analysis algorithm. Although there are clear theoretical advantages to the direct radiance assimilation approach, it is not obvious at all to what extent the improvements that have been obtained so far can be attributed to the change in methodology, or to various technical aspects of the implementation. The issue is of interest because retrieval assimilation retains many practical and logistical advantages which may become even more significant in the near future when increasingly high-volume data sources become available. The central question we address here is: how much improvement can we expect from assimilating radiances rather than retrievals, all other things being equal? We compare the two approaches in a simplified one-dimensional theoretical framework, in which problems related to quality control and systematic error correction are conveniently absent. By assuming a perfect radiative transfer model and perfect knowledge of radiance and background error covariances, we are able to formulate a nonlinear local error analysis for each assimilation method. Direct radiance assimilation is optimal in this idealized context, while the traditional method of assimilating retrievals is suboptimal because it ignores the cross-covariances between background errors and retrieval errors. We show that interactive retrieval assimilation (where the same background used for assimilation is also used in the retrieval step) is equivalent to direct assimilation of radiances with suboptimal analysis weights. We illustrate and extend these theoretical arguments with several one-dimensional assimilation experiments, where we estimate vertical atmospheric profiles using simulated data from both the High-resolution InfraRed Sounder 2 (HIRS2) and the future Atmospheric InfraRed Sounder (AIRS).
Note: Eddy current displacement sensors independent of target conductivity.
Wang, Hongbo; Li, Wei; Feng, Zhihua
2015-01-01
Eddy current sensors (ECSs) are widely used for non-contact displacement measurement. In this note, the quantitative error of an ECS caused by target conductivity was analyzed using a complex image method. The response curves (L-x) of the ECS with different targets were similar and could be overlapped by shifting the curves in the x direction by √2δ/2. Both finite element analysis and experiments match the theoretical analysis well, which indicates that the measurement error of high-precision ECSs caused by target conductivity can be completely eliminated, and that ECSs can measure different materials precisely without calibration.
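Assuming δ denotes the usual electromagnetic skin depth, δ = sqrt(2/(ωμσ)), the sketch below computes the conductivity-dependent curve shift √2δ/2 for a few target materials; the excitation frequency and material constants are textbook values, not the note's experimental parameters.

```python
# A minimal sketch of the conductivity-dependent curve shift, assuming delta
# is the standard skin depth; all numeric values are illustrative.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma))."""
    omega = 2.0 * np.pi * freq_hz
    return np.sqrt(2.0 / (omega * MU0 * mu_r * sigma))

f = 1e6  # 1 MHz excitation (assumed)
for name, sigma in [("copper", 5.8e7), ("aluminium", 3.5e7), ("brass", 1.6e7)]:
    d = skin_depth(f, sigma)
    print(f"{name}: delta = {d*1e6:.1f} um, "
          f"curve shift sqrt(2)*delta/2 = {np.sqrt(2)/2*d*1e6:.1f} um")
```

Subtracting the material-specific shift from the raw L-x curve is what allows one calibration curve to serve targets of different conductivity.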
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by errors in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method is proposed to quantify the uncertainty of VCs with a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis theories of geographic information science.
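The sketch below illustrates the discrete-error component for the trapezoidal double rule by integrating a Gauss synthetic surface whose exact volume is known analytically; the surface parameters and grid resolution are illustrative assumptions rather than the paper's simulated DEMs.

```python
# A minimal sketch of DEM volume calculation with the trapezoidal double
# rule (TDR), checked against the analytic volume of a Gauss surface.
import numpy as np

def volume_tdr(z, dx, dy):
    """Composite 2D trapezoid rule: corner/edge/interior weights 1/4, 1/2, 1."""
    w_x = np.ones(z.shape[1]); w_x[[0, -1]] = 0.5
    w_y = np.ones(z.shape[0]); w_y[[0, -1]] = 0.5
    return dx * dy * (w_y[:, None] * w_x[None, :] * z).sum()

# Gauss surface z = A*exp(-(x^2+y^2)/(2 s^2)); exact volume = 2*pi*A*s^2
A, s = 10.0, 100.0
x = np.linspace(-500.0, 500.0, 201)
y = np.linspace(-500.0, 500.0, 201)
X, Y = np.meshgrid(x, y)
Z = A * np.exp(-(X**2 + Y**2) / (2.0 * s**2))

v_num  = volume_tdr(Z, dx=x[1] - x[0], dy=y[1] - y[0])
v_true = 2.0 * np.pi * A * s**2
print(f"TDR volume = {v_num:.1f}, exact = {v_true:.1f}, "
      f"relative discrete error = {abs(v_num - v_true) / v_true:.2e}")
```

Coarsening the grid (fewer samples over the same extent) grows this discrete error, which is the DE effect the paper studies across its six simulated DEMs.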
Constraining the mass–richness relationship of redMaPPer clusters with angular clustering
Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...
2016-08-04
The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.
Consistency and convergence for numerical radiation conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1990-01-01
The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.
Analysis of frequency mixing error on heterodyne interferometric ellipsometry
NASA Astrophysics Data System (ADS)
Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan
2007-11-01
A heterodyne interferometric ellipsometer, with no moving parts and a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency and common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation mainly resulting from the frequency mixing error, which is caused by the imperfection of the polarizing beam splitters (PBS) and the elliptical polarization and non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on the measurement are analyzed with the Jones matrix method; the calculation indicates that it results in an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality contributes nothing to the phase difference error when it is relatively small; the elliptical polarization and the imperfection of the PBS have a major effect on the error.
NASA Technical Reports Server (NTRS)
Webb, L. D.; Washington, H. P.
1972-01-01
Static pressure position error calibrations for a compensated and an uncompensated XB-70 nose boom pitot static probe were obtained in flight. The methods (Pacer, acceleration-deceleration, and total temperature) used to obtain the position errors over a Mach number range from 0.5 to 3.0 and an altitude range from 25,000 feet to 70,000 feet are discussed. The error calibrations are compared with the position error determined from wind tunnel tests, theoretical analysis, and a standard NACA pitot static probe. Factors which influence position errors, such as angle of attack, Reynolds number, probe tip geometry, static orifice location, and probe shape, are discussed. Also included are examples showing how the uncertainties caused by position errors can affect the inlet controls and vertical altitude separation of a supersonic transport.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, WanYin; Zhang, Jie; Florita, Anthony
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
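For concreteness, a minimal sketch of the NRMSE metric is given below, normalized here by the 51-kW plant capacity; normalization conventions differ (capacity, mean, or range), so this definition and the sample data are assumptions rather than the study's exact formulation.

```python
# A minimal sketch of a capacity-normalized NRMSE; the hourly power values
# are hypothetical placeholders, not data from the Vermont plant.
import numpy as np

def nrmse(forecast, observed, capacity=51.0):
    """RMSE as a percentage of rated plant capacity (assumed convention)."""
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return 100.0 * rmse / capacity

obs  = np.array([0.0, 5.2, 18.7, 33.1, 40.6, 35.2, 20.3, 4.1])  # kW, observed
fcst = np.array([0.0, 7.0, 15.9, 30.4, 44.0, 38.8, 17.5, 6.0])  # kW, forecast
print(f"NRMSE = {nrmse(fcst, obs):.2f}% of capacity")
```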
An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.
Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui
2016-01-23
As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is easily influenced with frequent cycle slips and loss of lock as a result of higher vehicle dynamics and lower signal-to-noise ratios. With inertial navigation system (INS) aid, PLLs' tracking performance can be improved. However, for harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has some limitations to improve the tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time has been proposed. Through theoretical analysis, the relation between INS-aided PLL phase tracking error and carrier to noise density ratio (C/N₀), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time has been built. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis, and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and integrated GNSS/INS navigation performance. For harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50% and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods.
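As one piece of the tracking-error relation described above, the sketch below evaluates the standard PLL thermal-noise jitter term as a function of noise bandwidth and coherent integration time; the formula is the common textbook expression, and the C/N₀ value, parameter grid, and 15-degree rule of thumb are illustrative assumptions, not the paper's complete model (which also includes dynamics and aiding-update terms).

```python
# A minimal sketch of the textbook PLL thermal-noise jitter,
# sigma = (360/2pi) * sqrt((Bn/cn0) * (1 + 1/(2*T*cn0))) degrees,
# swept over bandwidth Bn and coherent integration time T.
import numpy as np

def pll_thermal_jitter_deg(cn0_dbhz, bn_hz, t_coh_s):
    cn0 = 10.0 ** (cn0_dbhz / 10.0)   # C/N0 as a linear ratio (Hz)
    var = (bn_hz / cn0) * (1.0 + 1.0 / (2.0 * t_coh_s * cn0))
    return (360.0 / (2.0 * np.pi)) * np.sqrt(var)

for bn in (5.0, 10.0, 18.0):              # loop noise bandwidth, Hz
    for t in (0.001, 0.01, 0.02):          # coherent integration time, s
        sigma = pll_thermal_jitter_deg(20.0, bn, t)   # weak signal: 20 dB-Hz
        flag = "ok" if sigma < 15.0 else "loss-of-lock risk"
        print(f"Bn={bn:4.1f} Hz, T={t*1e3:4.0f} ms -> sigma={sigma:5.1f} deg ({flag})")
```

The sweep makes the adaptation logic visible: at low C/N₀, shrinking the bandwidth and lengthening the integration time pulls the jitter back under the lock threshold, which is the trade the adaptive method exploits.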
The NASTRAN theoretical manual
NASA Technical Reports Server (NTRS)
1981-01-01
Designed to accommodate additions and modifications, this commentary on NASTRAN describes the problem solving capabilities of the program in a narrative fashion and presents developments of the analytical and numerical procedures that underlie the program. Seventeen major sections and numerous subsections cover; the organizational aspects of the program, utility matrix routines, static structural analysis, heat transfer, dynamic structural analysis, computer graphics, special structural modeling techniques, error analysis, interaction between structures and fluids, and aeroelastic analysis.
Hooper, Brionny J; O'Hare, David P A
2013-08-01
Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.
Attitude-error compensation for airborne down-looking synthetic-aperture imaging lidar
NASA Astrophysics Data System (ADS)
Li, Guang-yuan; Sun, Jian-feng; Zhou, Yu; Lu, Zhi-yong; Zhang, Guo; Cai, Guang-yu; Liu, Li-ren
2017-11-01
Target-coordinate transformation in the lidar spot of the down-looking synthetic-aperture imaging lidar (SAIL) was performed, and the attitude errors were deduced in the process of imaging, according to the principle of the airborne down-looking SAIL. The influence of the attitude errors on the imaging quality was analyzed theoretically. A compensation method for the attitude errors was proposed and theoretically verified. An airborne down-looking SAIL experiment was performed and yielded the same results. A point-by-point error-compensation method for solving the azimuthal-direction space-dependent attitude errors was also proposed.
NASA Astrophysics Data System (ADS)
Zhao, Dan; Wang, Xiaoman; Cheng, Yuan; Liu, Shaogang; Wu, Yanhong; Chai, Liqin; Liu, Yang; Cheng, Qianju
2018-05-01
A piecewise-linear structure can effectively broaden the working frequency band of the piezoelectric energy harvester, and further research on it can advance the practical application of energy-harvesting devices for powering microelectronic components. In this paper, the incremental harmonic balance (IHB) method is introduced to address the complicated and difficult analysis of the piezoelectric energy harvester. After the nonlinear dynamic equation of the single-degree-of-freedom piecewise-linear energy harvester is obtained by mathematical modeling and solved with the IHB method, the theoretical amplitude-frequency curve of the open-circuit voltage is obtained. Under 0.2 g harmonic excitation, a piecewise-linear energy harvester is experimentally tested by unidirectional frequency-increasing scanning. The results demonstrate that the theoretical and experimental amplitudes have the same trend; the widths of the working band with high voltage output are 4.9 Hz and 4.7 Hz, respectively, with a relative error of 4.08%, and the peak open-circuit output voltages are 21.53 V and 18.25 V, respectively, with a relative error of 15.23%. Since the theoretical values are consistent with the experimental results, the theoretical model and the incremental harmonic balance method used in this paper are suitable for solving the single-degree-of-freedom piecewise-linear piezoelectric energy harvester and can be applied to further parameter-optimized design.
Performance Analysis of HF Band FB-MC-SS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussein Moradi; Stephen Andrew Laraway; Behrouz Farhang-Boroujeny
Abstract—In a recent paper [1] the filter bank multicarrier spread spectrum (FB-MC-SS) waveform was proposed for wideband spread spectrum HF communications. A significant benefit of this waveform is robustness against narrow and partial band interference. Simulation results in [1] demonstrated good performance in a wideband HF channel over a wide range of conditions. In this paper we present a theoretical analysis of the bit error probability for this system. Our analysis tailors the results from [2], where BER performance was analyzed for maximum ratio combining systems, accounting for correlation between subcarriers and channel estimation error. Equations are given for BER that closely match the simulated performance in most situations.
A theoretical basis for the analysis of redundant software subject to coincident errors
NASA Technical Reports Server (NTRS)
Eckhardt, D. E., Jr.; Lee, L. D.
1985-01-01
Fundamental to the development of redundant software techniques, known as fault-tolerant software, is an understanding of the impact of multiple joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and the question is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
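To make the role of coincident-error intensity concrete, the sketch below simulates an N-version majority-vote system in which all versions share a per-input failure intensity drawn from a Beta distribution; this mixing model, and all parameter values, are simplifying assumptions standing in for the paper's intensity function, not its actual formulation.

```python
# A minimal sketch of N-version majority voting under coincident errors:
# mixing a shared per-input failure intensity over inputs induces positive
# correlation between versions, limiting the benefit of larger N.
import numpy as np

rng = np.random.default_rng(1)

def system_failure_rate(n_versions, a, b, n_inputs=100_000):
    theta = rng.beta(a, b, size=n_inputs)        # shared failure intensity per input
    fails = rng.binomial(n_versions, theta)      # versions failing on each input
    return np.mean(fails > n_versions // 2)      # majority of versions wrong

# Beta(0.5, 9.5) has mean 0.05: each version fails on 5% of inputs on average,
# but the skewed intensity concentrates failures on "hard" inputs.
for n in (1, 3, 5, 7):
    print(f"N={n}: system failure probability ~ {system_failure_rate(n, 0.5, 9.5):.4f}")
```

With independent failures the majority-vote failure probability would fall steeply in N; under the skewed shared intensity it falls much more slowly, which is exactly the limiting condition the model is built to expose.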
Theoretical Analysis of Rain Attenuation Probability
NASA Astrophysics Data System (ADS)
Roy, Surendra Kr.; Jha, Santosh Kr.; Jha, Lallan
2007-07-01
Satellite communication technologies are now highly developed, and high-quality, distance-independent services have expanded over a very wide area. For the system design of the Hokkaido integrated telecommunications (HIT) network, outages of satellite links due to rain attenuation in Ka frequency bands must first be overcome. In this paper a theoretical analysis of rain attenuation probability on a slant path has been made. The formula proposed is based on the Weibull distribution and incorporates recent ITU-R recommendations concerning the necessary rain-rate and rain-height inputs. The error behaviour of the model was tested against the rain attenuation prediction model recommended by ITU-R for a large number of experiments at different probability levels. The novel slant-path rain attenuation prediction model exhibits behaviour similar to the ITU-R model at low time percentages and a better root-mean-square error performance for probability levels above 0.02%. The set of presented models has the advantage of implementation with little complexity and is considered useful for educational and back-of-the-envelope computations.
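A minimal sketch of a Weibull-based exceedance computation is given below, using P(A > a) = exp(-(a/β)^α) and inverting it for the attenuation exceeded a given percentage of the time; the shape and scale values are hypothetical, not the paper's fitted coefficients.

```python
# A minimal sketch of Weibull rain-attenuation exceedance and its inversion;
# alpha and beta below are placeholder values for a Ka-band slant path.
import math

def exceedance_prob(a_db, alpha, beta):
    """P(attenuation > a_db) under a Weibull model."""
    return math.exp(-((a_db / beta) ** alpha))

alpha, beta = 0.7, 1.5
for p_pct in (0.1, 0.02, 0.01):                      # percent of an average year
    p = p_pct / 100.0
    a = beta * (-math.log(p)) ** (1.0 / alpha)       # invert the Weibull CCDF
    assert abs(exceedance_prob(a, alpha, beta) - p) < 1e-12
    print(f"attenuation exceeded {p_pct}% of the time: {a:.1f} dB")
```

Link designers read the table in the other direction: given an availability target (e.g., 99.99%), the inverted CCDF gives the fade margin the link budget must carry.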
Tube Bulge Process : Theoretical Analysis and Finite Element Simulations
NASA Astrophysics Data System (ADS)
Velasco, Raphael; Boudeau, Nathalie
2007-05-01
This paper is focused on the determination of the mechanical characteristics of tubular materials using the tube bulge process. A comparative study is made between two different models: a theoretical model and finite element analysis. The theoretical model is fully developed, based first on a geometrical analysis of the tube profile during bulging, which is assumed to deform in arcs of circles. Strain and stress analyses complete the theoretical model, which allows evaluation of the tube thickness and state of stress at any point of the free bulge region. Free bulging of a 304L stainless steel is simulated using Ls-Dyna 970. To validate the FE simulation approach, a comparison between the theoretical and finite element models is carried out on several parameters such as: thickness variation at the pole of the free bulge region with bulge height, tube thickness variation with the z axial coordinate, and von Mises stress variation with plastic strain. Finally, the influence of deviations in geometrical parameters on the flow stress curve is observed using the analytical model: deviations of the tube outer diameter, its initial thickness, and the bulge height measurement are taken into account to obtain the resulting error on plastic strain and von Mises stress.
Fundamental principles of absolute radiometry and the philosophy of this NBS program (1968 to 1971)
NASA Technical Reports Server (NTRS)
Geist, J.
1972-01-01
A description is given of work performed on a program to develop an electrically calibrated detector (also called an absolute radiometer, absolute detector, or electrically calibrated radiometer) that could be used to realize, maintain, and transfer a scale of total irradiance. The program includes a comprehensive investigation of the theoretical basis of absolute detector radiometry, as well as the design and construction of a number of detectors. A theoretical analysis of the sources of error is also included.
1979-01-01
synthesis proceeds by ignoring unacceptable syntax or other errors, protection against subsequent execution of a faulty reaction scheme can be...resulting TAPE9. During subroutine synthesis and reaction processing, a search is made (for each secondary electron collision encountered) to...program library, which can be catalogued and saved if any future specialized modifications (beyond the scope of the synthesis capability of LASER
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
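The core attenuation-and-correction idea can be sketched for the simplest case of classical error in linear regression, as below; the variances, the assumption of a known reliability ratio, and the omission of the autocorrelated and Berkson components are deliberate simplifications of the full model described above.

```python
# A minimal sketch of regression dilution under classical measurement error
# and its method-of-moments correction; all values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, beta_true = 5_000, 0.8
x = rng.normal(0.0, 1.0, n)             # true exposure (e.g., UFP level)
w = x + rng.normal(0.0, 0.7, n)         # observed exposure with classical error
y = beta_true * x + rng.normal(0.0, 1.0, n)

beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)   # attenuated slope
# Reliability ratio lambda = var(x) / (var(x) + var(error)); here computed
# from the simulated truth, in practice estimated from validation data.
lam = np.var(x, ddof=1) / (np.var(x, ddof=1) + 0.7**2)
beta_mom = beta_naive / lam                            # method-of-moments fix
print(f"naive {beta_naive:.3f}, corrected {beta_mom:.3f}, true {beta_true}")
```

Berkson error, by contrast, leaves the slope approximately unbiased but inflates its variance, which is why the mixture of the two error types needs the combined treatment developed in the paper.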
Study on optical 3D angular deformations measurement
NASA Astrophysics Data System (ADS)
Gao, Yang; Wang, Xingshu; Huang, Zongsheng; Yang, Jinliang
2013-12-01
3D angular deformations are inevitable when ships are sailing, owing to changes in environmental temperature and external stresses. The measurement of 3D angular deformations is one of the most critical and difficult issues in navies and the shipbuilding industry around the world. In this paper, we propose an optical method to measure 3D ship angular deformations and discuss the measurement errors in detail. Theoretical analysis shows that the measured errors of the pitching and yawing deformations are induced by the installation errors of the image aperture, and the measured error of the rolling deformation depends on the subpixel location algorithm used in image processing. This indicates that the errors of the optical measurement proposed in this paper are on the order of arcseconds when careful installation and precise image-processing technology are both employed.
A cognitive taxonomy of medical errors.
Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H
2004-06-01
Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.
Research on the novel FBG detection system for temperature and strain field distribution
NASA Astrophysics Data System (ADS)
Liu, Zhi-chao; Yang, Jin-hua
2017-10-01
In order to collect temperature and strain field distribution information, a novel FBG detection system was designed. The system applies a linearly chirped FBG structure to obtain a large bandwidth. The cover of the novel FBG was designed with a linearly changing thickness, so that it has a different response at different locations. The system can thus obtain temperature and strain field distribution information from the reflection spectrum simultaneously. The structure of the novel FBG cover was designed and its theoretical response function calculated, and its solution was derived for the strain field distribution. Simulation analysis examined the trends of the temperature and strain field distributions under different strain strengths and action positions, showing that the strain field distribution can be resolved. FOB100 series equipment was used to test the temperature in the experiment, and JSM-A10 series equipment was used to test the strain field distribution. The average error of the experimental results was better than 1.1% for temperature and better than 1.3% for strain. Individual errors occurred in the test data when the strain was small. The system's feasibility is demonstrated by theoretical analysis, simulation, and experiment, and it is well suited to practical applications.
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-01-01
We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698
Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke
NASA Astrophysics Data System (ADS)
Wang, Shaokai; Tan, Jiubin; Cui, Jiwen
2015-02-01
The reticle stage (RS) is designed to complete scan motion at high speed with nanometer-scale accuracy over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbances are critical and must not be ignored. In this paper, the RS is first introduced in terms of its mechanical structure, forms of motion, and control method. Based on that, the mechanisms by which disturbances transfer to the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large-stroke stage (LS) and the short-stroke stage (SS), and movement of the measurement reference. In particular, different forms of coupling between the SS and LS are discussed in detail. After the theoretical analysis, the contributions of these disturbances to the final error are simulated numerically. The residual positioning error caused by feedforward error in the acceleration process is about 2 nm after the settling time; that caused by the coupling between the SS and LS is about 2.19 nm, and that caused by movement of the measurement reference is about 0.6 nm.
Agreement and Movement: A Syntactic Analysis of Attraction
ERIC Educational Resources Information Center
Franck, Julie; Lassi, Glenda; Frauenfelder, Ulrich H.; Rizzi, Luigi
2006-01-01
This paper links experimental psycholinguistics and theoretical syntax in the study of subject--verb agreement. Three experiments of elicited spoken production making use of specific characteristics of Italian and French are presented. They manipulate and examine its impact on the occurrence of "attraction" errors (i.e. incorrect agreement with a…
An analysis of the adaptability of Loran-C to air navigation
NASA Technical Reports Server (NTRS)
Littlefield, J. A.
1981-01-01
The sources of position errors characteristic of the Loran-C navigation system were identified. Particular emphasis was given to their point of entry as well as their elimination. It is shown that the ratio of realized accuracy to theoretical accuracy of Loran-C is highly receiver dependent.
A stochastic dynamic model for human error analysis in nuclear power plants
NASA Astrophysics Data System (ADS)
Delgado-Loperena, Dharma
Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that historically have independently studied the nature of error and human behavior, including concepts derived from fractal and chaos theory, and suggests re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation serving any other formula used to study human error consequences. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from the simulation of a steam generator tube rupture (SGTR) event provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on understanding the patterns of human error can be gleaned, resulting in the reduction and prevention of undesirable events.
Experimental study on an FBG strain sensor
NASA Astrophysics Data System (ADS)
Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng
2018-01-01
Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. The real-time, early-warning monitoring of landslides is of great significance in reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor is treated as a cantilever beam with one end fixed. According to the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration tests and model slope monitoring experiments. The displacement of the landslide can be calculated by the established theoretical formula using the changes in the FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum errors was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum errors was 5.99%, with a corresponding variance of 0.50. The error between the theoretical and measured displacements decreases gradually, and the variance of the error also decreases gradually, indicating that the theoretical results are increasingly reliable. It also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.
Analysis of Performance of Stereoscopic-Vision Software
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert
2007-01-01
A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range errors, while stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
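The 0.32-pixel disparity noise quoted above translates into down-range error through the usual stereo geometry. The sketch below propagates it to first order; the focal length and baseline are hypothetical stand-ins, not the parameters of the JPL system.

```python
def downrange_sigma(z_m, focal_px, baseline_m, sigma_d_px=0.32):
    """First-order propagation of disparity noise into down-range error.

    Stereo range: z = f*B/d, so dz/dd = -z**2/(f*B) and
    sigma_z ~ z**2 * sigma_d / (f*B)."""
    return (z_m ** 2) * sigma_d_px / (focal_px * baseline_m)

# Hypothetical camera: 1000 px focal length, 0.3 m baseline.
for z in (2.0, 5.0, 10.0):
    print(z, downrange_sigma(z, focal_px=1000.0, baseline_m=0.3))
```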
Statistical image quantification toward optimal scan fusion and change quantification
NASA Astrophysics Data System (ADS)
Potesil, Vaclav; Zhou, Xiang Sean
2007-03-01
Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error, and that there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at the feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
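The claim that optimal linear fusion beats naive averaging is the classic inverse-variance weighting result. A minimal sketch, with per-scan noise levels chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0                        # e.g., a lesion measurement
sigmas = np.array([1.0, 2.5])            # per-scan noise levels (assumed)
scans = true_value + rng.normal(0.0, sigmas, size=(100_000, 2))

naive = scans.mean(axis=1)               # naive averaging
w = 1.0 / sigmas ** 2
w /= w.sum()                             # inverse-variance (optimal linear) weights
fused = scans @ w

print(naive.var(), fused.var())          # fused variance is strictly lower
```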
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) relies on the digital images acquired by digital cameras and X-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effects or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermally induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high-accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, so its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights into our work on thermal error analysis and compensation for DIC/DVC measurements.
Effects of stinger axial dynamics and mass compensation methods on experimental modal analysis
NASA Astrophysics Data System (ADS)
Hu, Ximing
1992-06-01
A longitudinal bar model that includes both stinger elastic and inertia properties is used to analyze the stinger's axial dynamics as well as the mass compensation that is required to obtain accurate input forces when a stinger is installed between the excitation source, force transducer, and the structure under test. Stinger motion transmissibility and force transmissibility, axial resonance, and excitation energy transfer problems are discussed in detail. Stinger mass compensation problems occur when the force transducer is mounted on the exciter end of the stinger. These problems are studied theoretically, numerically, and experimentally. It is found that the measured Frequency Response Function (FRF) can be underestimated if mass compensation is based on the stinger exciter-end acceleration and can be overestimated if the mass compensation is based on the structure-end acceleration, due to the stinger's compliance. A new mass compensation method that is based on two accelerations is introduced and is seen to improve the accuracy considerably. The effects of the force transducer's compliance on the mass compensation are also discussed. A theoretical model is developed that describes the measurement system's FRF around a test structure's resonance. The model shows that very large measurement errors occur when there is a small relative phase shift between the force and acceleration measurements. These errors can reach hundreds of percent for a phase error on the order of one or two degrees. The physical reasons for this unexpected error pattern are explained. This error is currently unknown to the experimental modal analysis community. Two sample structures consisting of a rigid mass and a double cantilever beam are used in the numerical calculations and experiments.
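As a sketch of the idea behind two-acceleration mass compensation: the classic correction F = F_meas - m*a uses a single acceleration, but with a compliant stinger the two ends accelerate differently. The lumped form below, which splits the stinger mass equally between its ends, is an assumption for illustration, not the paper's exact formula.

```python
def compensated_force(f_measured, a_exciter_end, a_structure_end, m_stinger):
    """Two-acceleration mass compensation for a compliant stinger.

    Splits the stinger mass between its two ends (a lumped approximation,
    assumed here); classic single-acceleration compensation would use only
    one of the two accelerations and over- or under-estimate the force.
    """
    return f_measured - 0.5 * m_stinger * (a_exciter_end + a_structure_end)

# Example: 10 N measured, ends accelerating at 50 and 30 m/s^2, 20 g stinger.
print(compensated_force(10.0, 50.0, 30.0, 0.020))
```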
Carrier recovery methods for a dual-mode modem: A design approach
NASA Technical Reports Server (NTRS)
Richards, C. W.; Wilson, S. G.
1984-01-01
A dual-mode modem with selectable QPSK or 16-QASK modulation schemes is discussed. The theoretical reasoning as well as the practical trade-offs made during the development of the modem are presented, with attention given to the carrier recovery method used for coherent demodulation. Particular attention is given to carrier recovery methods that can provide little degradation due to phase error for both QPSK and 16-QASK, while being insensitive to the amplitude characteristic of a 16-QASK modulation scheme. A computer analysis of the degradation in symbol error rate (SER) for QPSK and 16-QASK due to phase error is presented. Results show that an energy increase of roughly 4 dB is needed to maintain a SER of 1 x 10^-5 for QPSK with 20 deg of phase error and 16-QASK with 7 deg of phase error.
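For the QPSK half of that comparison, the SER degradation with a static carrier phase error follows from the standard per-rail error probability. The sketch below evaluates it numerically; the Eb/N0 value is illustrative, and the 16-QASK case, which the paper also analyzes, is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def qpsk_ser(ebn0_db, phase_err_deg):
    """Approximate QPSK symbol error rate with a static carrier phase error.

    Per-rail bit error probability with phase offset phi (standard result):
    0.5*Q(g*(cos phi + sin phi)) + 0.5*Q(g*(cos phi - sin phi)),
    with g = sqrt(2*Eb/N0); SER = 1 - (1 - Pb)^2.
    """
    g = np.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0))
    phi = np.radians(phase_err_deg)
    pb = 0.5 * (norm.sf(g * (np.cos(phi) + np.sin(phi)))
                + norm.sf(g * (np.cos(phi) - np.sin(phi))))
    return 1.0 - (1.0 - pb) ** 2

print(qpsk_ser(9.6, 0.0))    # near-ideal SER
print(qpsk_ser(9.6, 20.0))   # degradation with 20 deg of phase error
```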
Combined proportional and additive residual error models in population pharmacokinetic modelling.
Proost, Johannes H
2017-11-15
In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The different codings of method VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of the method. Both methods are valid approaches to combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during analysis.
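The difference between methods VAR and SD reduces to how the residual weight (standard deviation) is built from the proportional and additive components. A minimal sketch of the two weight functions, with arbitrary parameter values:

```python
import numpy as np

def weight_var(f_pred, theta_prop, theta_add):
    """Method VAR: the variances of the two components add."""
    return np.sqrt((theta_prop * f_pred) ** 2 + theta_add ** 2)

def weight_sd(f_pred, theta_prop, theta_add):
    """Method SD: the standard deviations of the two components add."""
    return theta_prop * f_pred + theta_add

f = np.array([0.1, 1.0, 10.0])          # model predictions (arbitrary units)
print(weight_var(f, 0.2, 0.05))
print(weight_sd(f, 0.2, 0.05))          # SD weights always exceed VAR weights
```

Since sqrt(a^2 + b^2) <= a + b, method SD yields a larger residual standard deviation for the same parameter values, which is consistent with the lower residual-error parameter estimates reported for method SD.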
Wind tunnel seeding particles for laser velocimeter
NASA Technical Reports Server (NTRS)
Ghorieshi, Anthony
1992-01-01
The design of an optimal airfoil has been a major challenge for the aerospace industry. The main objective is to reduce the drag force while increasing the lift force in various environmental air conditions. Experimental verification of theoretical and computational results is a crucial part of the analysis because of errors buried in the solutions due to the assumptions made in theoretical work. Experimental studies are an integral part of a good design procedure; however, empirical data are not always error free, due to environmental obstacles, poor execution, etc. The reduction of errors in empirical data is a major challenge in wind tunnel testing. One of the recent advances of particular interest is the use of a non-intrusive measurement technique known as laser velocimetry (LV), which allows quantitative flow data to be obtained without introducing flow-disturbing probes. The laser velocimeter technique is based on measurement of the light scattered by the particles present in the flow, not on a direct measurement of the flow velocity itself. Therefore, for accurate flow velocity measurement with laser velocimeters, two criteria are investigated: (1) how well the particles track the local flow field, and (2) the light-scattering efficiency required to obtain signals with the LV. In order to demonstrate the concept of predicting the flow velocity from velocity measurements of particle seeding, the theoretical velocity of the gas flow is computed and compared with the experimentally obtained velocity of the particle seeding.
Theoretical Bounds of Direct Binary Search Halftoning.
Liao, Jan-Ray
2015-11-01
Direct binary search (DBS) produces the best image quality among halftoning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations try to find the binary state for each pixel that minimizes the total squared perceived error. This error-energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence had not previously been found. In this paper, we present a proof that the bound exists as conjectured, under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel farther away from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, with the swap operations considered in order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and can produce halftoned images of the same quality as the previous algorithm.
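A brute-force version of the toggle pass conveys the error-energy minimization that the bound argument rests on. The Gaussian filter below is an arbitrary stand-in for the human-visual-system filter, and a real DBS implementation updates the filtered error incrementally rather than recomputing it per toggle.

```python
import numpy as np
from scipy.signal import fftconvolve

def perceived_energy(halftone, gray, psf):
    """Total squared perceived error: ||psf * (halftone - gray)||^2."""
    e = fftconvolve(halftone - gray, psf, mode="same")
    return float(np.sum(e * e))

rng = np.random.default_rng(1)
gray = np.full((32, 32), 0.4)                        # constant gray patch
ht = (rng.random(gray.shape) < gray).astype(float)   # initial random halftone

x = np.arange(-3, 4)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
psf /= psf.sum()                                     # stand-in HVS filter

energy = perceived_energy(ht, gray, psf)
for i in range(ht.shape[0]):                         # one toggle sweep
    for j in range(ht.shape[1]):
        ht[i, j] = 1.0 - ht[i, j]                    # try the toggle
        e_new = perceived_energy(ht, gray, psf)
        if e_new < energy:
            energy = e_new                           # accept
        else:
            ht[i, j] = 1.0 - ht[i, j]                # revert
print(energy)
```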
Density scaling on n = 1 error field penetration in ohmically heated discharges in EAST
NASA Astrophysics Data System (ADS)
Wang, Hui-Hui; Sun, You-Wen; Shi, Tong-Hui; Zang, Qing; Liu, Yue-Qiang; Yang, Xu; Gu, Shuai; He, Kai-Yang; Gu, Xiang; Qian, Jin-Ping; Shen, Biao; Luo, Zheng-Ping; Chu, Nan; Jia, Man-Ni; Sheng, Zhi-Cai; Liu, Hai-Qing; Gong, Xian-Zu; Wan, Bao-Nian; Contributors, EAST
2018-05-01
Density scaling of error field penetration in EAST is investigated with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The density scalings of the error field penetration thresholds under the two magnetic perturbation spectra are b_r ∝ n_e^0.5 and b_r ∝ n_e^0.6, where b_r is the error field and n_e is the line-averaged electron density. One difficulty in understanding the density scaling is that key parameters other than density that determine the field penetration process may also change when the plasma density changes. Therefore, they should be determined from experiments. The theoretical estimate (b_r ∝ n_e^0.54 in the lower density region and b_r ∝ n_e^0.40 in the higher density region), using the density dependence of the viscosity diffusion time, electron temperature and mode frequency measured from the experiments, is consistent with the observed scaling. One of the key points in reproducing the observed scaling in EAST is that the viscosity diffusion time estimated from the energy confinement time is almost constant. This means that the plasma confinement lies in the saturated ohmic confinement regime rather than the linear Neo-Alcator regime that caused the weak density dependence in previous theoretical studies.
Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation
NASA Astrophysics Data System (ADS)
Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu
2016-11-01
Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target; the modulation frequency is swept, and the frequency values at which the transmitted and received signals are in phase are measured, so that the distance can be calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly fabricated and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, and their impact on measuring performance, are analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. According to the system design requirements, element tolerances and an error-correction method for the system are proposed, a ranging system is built, and ranging experiments are performed. Experimental results show that, with the proposed tolerances, the system can satisfy the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
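The core arithmetic of any swept-frequency in-phase ranging scheme (polarization-modulated or otherwise) is that consecutive in-phase modulation frequencies are spaced by c/2d. A minimal sketch; the optics-specific error terms analyzed in the paper are outside its scope.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_inphase(f1_hz, f2_hz):
    """Distance from two consecutive in-phase modulation frequencies.

    In-phase condition: the round trip holds an integer number of modulation
    periods, f_k = k*c/(2*d); consecutive in-phase frequencies are spaced by
    c/(2*d), so d = c / (2*(f2 - f1)).
    """
    return C / (2.0 * (f2_hz - f1_hz))

print(distance_from_inphase(100.0e6, 101.5e6))  # ~100 m for a 1.5 MHz spacing
```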
A reformulation of the Cost Plus Net Value Change (C+NVC) model of wildfire economics
Geoffrey H. Donovan; Douglas B. Rideout
2003-01-01
The Cost plus Net Value Change (C+NVC) model provides the theoretical foundation for wildland fire economics and provides the basis for the National Fire Management Analysis System (NFMAS). The C+NVC model is based on the earlier least Cost plus Loss model (LC+L) expressed by Sparhawk (1925). Mathematical and graphical analysis of the LC+L model illustrates two errors...
Optical phase-locked loop (OPLL) for free-space laser communications with heterodyne detection
NASA Technical Reports Server (NTRS)
Win, Moe Z.; Chen, Chien-Chung; Scholtz, Robert A.
1991-01-01
Several advantages of coherent free-space optical communications are outlined. Theoretical analysis is formulated for an OPLL disturbed by shot noise, modulation noise, and frequency noise consisting of a white component, a 1/f component, and a 1/f-squared component. Each of the noise components is characterized by its associated power spectral density. It is shown that the effect of modulation depends only on the ratio of loop bandwidth and data rate, and is negligible for an OPLL with loop bandwidth smaller than one fourth the data rate. Total phase error variance as a function of loop bandwidth is displayed for several values of carrier signal to noise ratio. Optimal loop bandwidth is also calculated as a function of carrier signal to noise ratio. An OPLL experiment is performed, where it is shown that the measured phase error variance closely matches the theoretical predictions.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments implemented on synthetic examples and a real data set also support our theoretical results.
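A small numerical sketch of the algorithm the abstract studies: functional gradient descent in an RKHS under a robust (here Welsch-type) loss, with early stopping as the regularizer. The kernel bandwidth, scale parameter, step size, and stopping iteration are hand-picked assumptions, not tuned as the paper prescribes.

```python
import numpy as np

def gaussian_kernel(X, Y, bw=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1.0, 1.0, (n, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=n)
y[::20] += 5.0                                  # heavy-tailed outliers

K = gaussian_kernel(X, X)
sigma = 1.0                                     # scale parameter of the robust loss
eta, T = 0.5, 200                               # step size, early-stopping iteration
alpha = np.zeros(n)
for _ in range(T):
    r = K @ alpha - y
    alpha -= (eta / n) * r * np.exp(-r ** 2 / sigma ** 2)  # Welsch loss derivative

mse = np.mean((K @ alpha - np.sin(np.pi * X[:, 0])) ** 2)
print(mse)  # close to the clean target despite the outliers
```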
Research on effects of phase error in phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai
2007-12-01
In phase-shifting interferometry, the phase-shifting error from the phase shifter is the main factor that directly affects the measurement accuracy of the interferometer. In this paper, the sources and types of phase-shifting error are introduced, and some methods to eliminate these errors are reviewed. Based on the theory of phase-shifting interferometry, the effects of phase-shifting error are analyzed in detail. A liquid crystal display (LCD) used as a new type of phase shifter has the advantage that the phase shift can be controlled digitally, without any mechanical moving or rotating element. By changing the coded image displayed on the LCD, the phase shift in the measuring system is induced. The LCD's phase modulation characteristic is analyzed theoretically and tested. Based on the Fourier transform, a model of the effect of the phase error originating from the LCD is established for four-step phase-shifting interferometry, and the error range is obtained. In order to reduce the error, a new error compensation algorithm is put forward. With this method, the error can be obtained by processing the interferogram; the interferogram can then be compensated, and the measurement results obtained from the four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
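For reference, the four-step algorithm that the error model targets is a one-liner, and a synthetic test makes the sensitivity to phase-shifter miscalibration visible. The fringe amplitude and the error eps below are arbitrary illustration values.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four frames shifted by pi/2:
    I_k = A + B*cos(phi + (k-1)*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

phi = np.linspace(-np.pi, np.pi, 256)
eps = 0.05                                       # rad of shifter miscalibration
frames = [1.0 + 0.8 * np.cos(phi + k * (np.pi / 2 + eps)) for k in range(4)]
phi_hat = four_step_phase(*frames)
err = np.angle(np.exp(1j * (phi_hat - phi)))     # wrapped retrieval error
print(np.max(np.abs(err)))                       # systematic error due to eps
```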
Mixed-venous oxygen tension by nitrogen rebreathing - A critical, theoretical analysis.
NASA Technical Reports Server (NTRS)
Kelman, G. R.
1972-01-01
There is dispute about the validity of the nitrogen rebreathing technique for determination of mixed-venous oxygen tension. This theoretical analysis examines the circumstances under which the technique is likely to be applicable. When the plateau method is used, the probable error in mixed-venous oxygen tension is plus or minus 2.5 mm Hg at rest, and of the order of plus or minus 1 mm Hg during exercise. Provided that the rebreathing bag size is reasonably chosen, Denison's (1967) extrapolation technique gives results at least as accurate as those obtained by the plateau method. At rest, however, extrapolation should be to 30 rather than to 20 sec.
NASA Astrophysics Data System (ADS)
Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji
2017-03-01
An effective correction model is proposed to eliminate the refraction error effect caused by an optical window of a furnace in digital image correlation (DIC) deformation measurement under high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model by the correlation calculation results under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of DIC method by decreasing the refraction errors from measured full-field displacements under high-temperature environment.
A Case of Error Disclosure: A Communication Privacy Management Analysis
Petronio, Sandra; Helft, Paul R.; Child, Jeffrey T.
2013-01-01
To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way clinicians make choices in telling patients about a mistake have the potential to address reasons for that resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, an analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices in revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relation to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health rests on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assumptions about ownership and control over information, unwittingly following conversational scripts that convey misleading messages, and the difficulty of regulating privacy boundaries in the stressful circumstances that accompany error disclosures. As a consequence, the potential contribution to public health is the ability to more clearly see the significance of the disclosure process, which has implications for many public health issues. PMID:25170501
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach.
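A compact sketch of the preferred two-stage weighting described above: cluster means are the outcomes, weighted by the inverse of their estimated theoretical variance, with the between-cluster component constrained to be non-negative. The moment estimators and the synthetic data are simplifications for illustration; the paper's comparison uses formal mixed-model machinery.

```python
import numpy as np

def two_stage_effect(y, cluster, arm):
    """Treatment-effect estimate from inverse-variance weighted cluster means.

    Var(cluster mean) is estimated as sigma_b^2 + sigma_w^2 / n_i, with
    sigma_b^2 constrained to be non-negative (crude moment estimates).
    """
    ids = np.unique(cluster)
    means = np.array([y[cluster == c].mean() for c in ids])
    sizes = np.array([(cluster == c).sum() for c in ids])
    arms = np.array([arm[cluster == c][0] for c in ids])
    s2_w = np.mean([y[cluster == c].var(ddof=1) for c in ids])
    s2_b = max(means.var(ddof=1) - s2_w * np.mean(1.0 / sizes), 0.0)
    w = 1.0 / (s2_b + s2_w / sizes)
    return (np.average(means[arms == 1], weights=w[arms == 1])
            - np.average(means[arms == 0], weights=w[arms == 0]))

rng = np.random.default_rng(0)
sizes = rng.integers(5, 30, 12)                     # unbalanced clusters
cl = np.repeat(np.arange(12), sizes)
arm = cl % 2                                        # 6 clusters per arm
y = 0.3 * arm + rng.normal(0, 0.5, 12)[cl] + rng.normal(size=cl.size)
print(two_stage_effect(y, cl, arm))                 # true effect is 0.3
```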
Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.
Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan
2015-08-14
High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.
2014-01-01
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome acts through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mismeasured, the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models, when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results, as well as a numerical study, demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using the method of moments, regression calibration, and SIMEX are compared. We apply the proposed methods to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625
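Of the three corrections compared, regression calibration is the simplest to sketch: replace the mismeasured mediator with its predicted conditional mean before fitting the outcome model. The simulation below is a toy linear case with made-up parameters, and for brevity it fits the calibration model against the true mediator, where in practice validation or replicate data would be used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
a = rng.binomial(1, 0.5, n).astype(float)    # exposure
m = 0.5 * a + rng.normal(size=n)             # true mediator
m_star = m + rng.normal(scale=0.8, size=n)   # mismeasured mediator
yv = 1.0 * a + 2.0 * m + rng.normal(size=n)  # outcome (mediator effect = 2)

# Calibration model E[m | m*, a] (fit here against the true m for brevity).
Z = np.column_stack([np.ones(n), m_star, a])
m_hat = Z @ np.linalg.lstsq(Z, m, rcond=None)[0]

for label, med in (("naive", m_star), ("calibrated", m_hat)):
    Xd = np.column_stack([np.ones(n), a, med])
    b = np.linalg.lstsq(Xd, yv, rcond=None)[0]
    print(label, b[2])   # naive coefficient is attenuated; ~2 after calibration
```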
Enhanced orbit determination filter: Inclusion of ground system errors as filter parameters
NASA Technical Reports Server (NTRS)
Masters, W. C.; Scheeres, D. J.; Thurman, S. W.
1994-01-01
The theoretical aspects of an orbit determination filter that incorporates ground-system error sources as model parameters for use in interplanetary navigation are presented in this article. This filter, which is derived from sequential filtering theory, allows a systematic treatment of errors in calibrations of transmission media, station locations, and earth orientation models associated with ground-based radio metric data, in addition to the modeling of the spacecraft dynamics. The discussion includes a mathematical description of the filter and an analytical comparison of its characteristics with more traditional filtering techniques used in this application. The analysis in this article shows that this filter has the potential to generate navigation products of substantially greater accuracy than more traditional filtering procedures.
Guan, W; Meng, X F; Dong, X M
2014-12-01
Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce constant acceleration and vibration simultaneously, and we tested the rectification error due to the composite accelerations. First, we derived an expression for the rectification error in terms of the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out based on this rectification error model. A detailed experimental procedure and test results are then described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of vibration. The experimental results showed the distinctive characteristics of the rectification error caused by the composite accelerations, and the linear relation between the constant acceleration and the rectification error was demonstrated. The experimental procedure and results presented here can serve as a reference for investigating the characteristics of accelerometers with multiple inputs.
Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.
McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim
2016-09-01
We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism that substantially negates the errors due to patient variability in biomechanics. A modified Goldmann prism, the correcting applanation tonometry surface (CATS) prism, is presented, optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP-referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximately 50% reduction in each of the corneal biomechanical and tear film errors. Manometric IOP-referenced pressure in cadaveric eyes demonstrates substantial equivalence to Goldmann applanation tonometry (GAT) in nominal eyes with the CATS prism, as predicted by modeling theory. A CATS-modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing the Goldmann measurement technique or its interpretation. Clinical validation is needed, but the analysis indicates a reduction in central corneal thickness (CCT) error alone to less than ±2 mm Hg with the CATS prism in 100% of a standard population, compared to only 54% with error less than ±2 mm Hg using the present Goldmann prism.
Zhang, Tisheng; Niu, Xiaoji; Ban, Yalong; Zhang, Hongping; Shi, Chuang; Liu, Jingnan
2015-01-01
A GNSS/INS deeply-coupled system can improve satellite signal tracking performance through INS aiding of the tracking loops under dynamics. However, no literature was available on the complete modeling of the INS branch in the INS-aided tracking loop, leaving no theoretical tool to guide the selection of inertial sensors, parameter optimization, and quantitative analysis of INS-aided PLLs. This paper addresses the modeling of the INS branch and the parameter optimization of phase-locked loops (PLLs) in a scalar-based GNSS/INS deeply-coupled system. It establishes the transfer function between all known error sources and the PLL tracking error, which can be used to quantitatively evaluate how a candidate inertial measurement unit (IMU) affects the carrier phase tracking error. Based on that, a steady-state error model is proposed to design INS-aided PLLs and to analyze their tracking performance. Building on the modeling and error analysis, an integrated deeply-coupled hardware prototype is developed, with optimization of the aiding information. Finally, the performance of the INS-aided PLLs designed with the proposed steady-state error model is evaluated through simulation and road tests of the hardware prototype. PMID:25569751
Evaluation of errors in quantitative determination of asbestos in rock
NASA Astrophysics Data System (ADS)
Baietto, Oliviero; Marini, Paola; Vitaliti, Martina
2016-04-01
The quantitative determination of the asbestos content of rock matrices is a complex operation that is susceptible to significant errors. The principal methodologies for the analysis are scanning electron microscopy (SEM) and phase contrast optical microscopy (PCOM). Although the resolution of PCOM is inferior to that of SEM, PCOM analysis has several advantages, including greater representativeness of the analyzed sample, more effective recognition of chrysotile, and lower cost. The DIATI LAA internal methodology for PCOM analysis is based on mild grinding of a rock sample, its subdivision into 5-6 grain-size classes smaller than 2 mm, and subsequent microscopic analysis of a portion of each class. PCOM is based on the optical properties of asbestos and of liquids of known refractive index in which the particles under analysis are immersed. The error evaluation for the analysis of rock samples, unlike that for airborne filters, cannot be based on a statistical distribution. For airborne filters, a binomial (Poisson) distribution can be applied, which theoretically defines the variation in the fiber count resulting from the observation of analysis fields chosen randomly on the filter. The analysis of rock matrices, by contrast, cannot rely on any statistical distribution, because the most important object of the analysis is the size of the asbestiform fibers and fiber bundles observed, and the resulting ratio between the weight of the fibrous component and that of the granular one. The error estimates generally provided by public and private institutions vary between 50 and 150 percent, but there are no specific studies that discuss the origin of the error or that link it to the asbestos content. Our work aims to provide a reliable estimate of the error in relation to the applied methodologies and to the total asbestos content, especially for values close to the legal limits. The error assessment must be made through repetition of the same analysis on the same sample, to estimate both the error in the representativeness of the sample and the error related to the sensitivity of the operator, in order to provide a sufficiently reliable uncertainty for the method. We used about 30 natural rock samples with different asbestos contents, performing 3 analyses on each sample to obtain a trend sufficiently representative of the percentage. Furthermore, on one chosen sample we performed 10 repetitions of the analysis to define the error of the methodology more specifically.
NASA Technical Reports Server (NTRS)
Hall, D. K.; Foster, J. L.; Salomonson, V. V.; Klein, A. G.; Chien, J. Y. L.
1998-01-01
Following the launch of the Earth Observing System first morning (EOS-AM1) satellite, daily, global snow-cover mapping will be performed automatically at a spatial resolution of 500 m, cloud-cover permitting, using Moderate Resolution Imaging Spectroradiometer (MODIS) data. A technique to calculate theoretical accuracy of the MODIS-derived snow maps is presented. Field studies demonstrate that under cloud-free conditions when snow cover is complete, snow-mapping errors are small (less than 1%) in all land covers studied except forests, where errors are greater and more variable. The theoretical accuracy of MODIS snow-cover maps is largely determined by percent forest cover north of the snowline. Using the 17-class International Geosphere-Biosphere Program (IGBP) land-cover maps of North America and Eurasia, the Northern Hemisphere is classified into seven land-cover classes and water. Snow-mapping errors estimated for each of the seven land-cover classes are extrapolated to the entire Northern Hemisphere for areas north of the average continental snowline for each month. Average monthly errors for the Northern Hemisphere are expected to range from 5-10%, and the theoretical accuracy of the future global snow-cover maps is 92% or higher. Error estimates will be refined after the first full year that MODIS data are available.
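The hemispheric extrapolation amounts to a cover-weighted average of per-class error rates. The fractions and errors below are illustrative placeholders, not the IGBP-derived values used in the paper:

```python
# Per-class snow-mapping error rates and land-cover fractions (illustrative).
fractions = {"forest": 0.30, "tundra": 0.25, "agriculture": 0.25, "other": 0.20}
errors    = {"forest": 0.15, "tundra": 0.02, "agriculture": 0.04, "other": 0.05}

hemisphere_error = sum(fractions[c] * errors[c] for c in fractions)
print(hemisphere_error, 1.0 - hemisphere_error)  # error and implied accuracy
```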
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
Diagnosing and dealing with multicollinearity.
Schroeder, M A
1990-04-01
The purpose of this article was to increase nurse researchers' awareness of the effects of collinear data in developing theoretical models for nursing practice. Collinear data distort the true value of the estimates generated from ordinary least-squares analysis. Theoretical models developed to provide the underpinnings of nursing practice need not be abandoned, however, because they fail to produce consistent estimates over repeated applications. It is also important to realize that multicollinearity is a data problem, not a problem associated with misspecification of a theoretical model. An investigator must first be aware of the problem; then it is possible to develop an educated solution based on the degree of multicollinearity, theoretical considerations, and the sources of error associated with alternative, biased, least-squares regression techniques. Decisions based on theoretical and statistical considerations will further the development of theory-based nursing practice.
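A standard first diagnostic for the problem discussed above is the variance inflation factor. A minimal sketch with plain least squares:

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column: VIF_j = 1 / (1 - R_j^2),
    where R_j^2 comes from regressing column j on the remaining columns.
    Values above ~10 are a common rule-of-thumb flag for multicollinearity."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        yj = X[:, j]
        Z = np.column_stack([np.ones(len(yj)), np.delete(X, j, axis=1)])
        resid = yj - Z @ np.linalg.lstsq(Z, yj, rcond=None)[0]
        r2 = 1.0 - resid.var() / yj.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)      # nearly collinear with x1
x3 = rng.normal(size=200)
print(vif(np.column_stack([x1, x2, x3])))  # large VIFs flag x1 and x2
```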
Ultrasonic density measurement cell design and simulation of non-ideal effects.
Higuti, Ricardo Tokio; Buiochi, Flávio; Adamowski, Júlio Cezar; de Espinosa, Francisco Montero
2006-07-01
This paper presents a theoretical analysis of a density measurement cell using a one-dimensional model composed of acoustic and electroacoustic transmission lines in order to simulate non-ideal effects. The model is implemented using matrix operations and is used to design the cell, considering its geometry, the materials used in sensor assembly, the range of liquid sample properties, and signal analysis techniques. The sensor performance under non-ideal conditions is studied, considering the thicknesses of the adhesive and metallization layers and the effect of residues of the liquid sample that can impregnate the sample chamber surfaces. These layers are taken into account in the model, and their effects are compensated to reduce the error in the density measurement. The results show the contribution of the residue layer thickness to the density error and its behavior when two signal analysis methods are used.
Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi
2018-02-14
This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for non-contact AC or DC measurement, as it is low-cost and light-weight and has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure is highly effective at reducing errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the positions of the current-carrying conductor, including un-centeredness and un-perpendicularity, had not been analyzed in detail until now. In this paper, with the goal of minimizing measurement error, a theoretical analysis is proposed based on vector inner and exterior products. In the presented mathematical model of the relative error, the off-center distance, the un-perpendicularity angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative errors caused by the position of the current-carrying conductor are compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of circular arrays of magnetic sensors for current measurement in practical situations.
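The physics behind the configuration is a discretized Ampère's law, which also makes the conductor-position error easy to reproduce numerically. The radius, offset, and current below are arbitrary, and the model assumes an infinite straight conductor.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def estimated_current(i_true, radius, n_sensors, offset=(0.0, 0.0)):
    """Discretized Ampere's law over N tangential field samples on a circle.

    I_est = (2*pi*R/N) * sum(B_tangential) / mu0. A centered conductor gives
    the exact current; an off-center conductor produces the position error
    studied in the paper.
    """
    ang = 2.0 * np.pi * np.arange(n_sensors) / n_sensors
    px, py = radius * np.cos(ang), radius * np.sin(ang)   # sensor positions
    dx, dy = px - offset[0], py - offset[1]
    r2 = dx ** 2 + dy ** 2
    bx = -MU0 * i_true * dy / (2.0 * np.pi * r2)          # infinite-wire field
    by = MU0 * i_true * dx / (2.0 * np.pi * r2)
    bt = -bx * np.sin(ang) + by * np.cos(ang)             # tangential component
    return (2.0 * np.pi * radius / n_sensors) * bt.sum() / MU0

for n in (4, 8):
    est = estimated_current(100.0, radius=0.05, n_sensors=n, offset=(0.02, 0.0))
    print(n, abs(est - 100.0) / 100.0)   # relative error shrinks from 4 to 8 sensors
```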
NASA Astrophysics Data System (ADS)
Shinoda, Masahisa; Nakatani, Hidehiko
2015-04-01
We theoretically calculate the behavior of the focusing error signal in a land-groove-type optical disk when the objective lens traverses along the radius of the optical disk. The differential astigmatic method is employed instead of the conventional astigmatic method for generating the focusing error signals. The signal behaviors are compared and analyzed in terms of the gain difference in the slope sensitivity of the focusing error signals from the land and the groove. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and conditions favorable for suppressing the gain difference are investigated. The calculation method and results described in this paper will be reflected in the next generation of land-groove-type optical disks.
Coherent detection of position errors in inter-satellite laser communications
NASA Astrophysics Data System (ADS)
Xu, Nan; Liu, Liren; Liu, De'an; Sun, Jianfeng; Luan, Zhu
2007-09-01
Due to its improved receiver sensitivity and wavelength selectivity, coherent detection has become an attractive alternative to direct detection in inter-satellite laser communications. A novel method for the coherent detection of position-error information is proposed. A coherent communication system generally consists of a receive telescope, local oscillator, optical hybrid, photoelectric detector, and optical phase-locked loop (OPLL). Building on this system composition, the proposed method adds a CCD and a computer as a position-error detector. The CCD captures the interference pattern while the transmitted data from the transmitter laser are being detected. After processing and analysis by the computer, the target position information is obtained from the characteristic parameters of the interference pattern. The position errors, used as the control signal of the PAT subsystem, drive the receiver telescope to keep tracking the target. A theoretical derivation and analysis are presented. The application extends to a coherent laser range finder, in which object distance and position information can be obtained simultaneously.
The response function of modulated grid Faraday cup plasma instruments
NASA Technical Reports Server (NTRS)
Barnett, A.; Olbert, S.
1986-01-01
Modulated grid Faraday cup plasma analyzers are a very useful tool for making in situ measurements of space plasmas. One of their great attributes is that their simplicity permits their angular response function to be calculated theoretically. An expression is derived for this response function by computing the trajectories of the charged particles inside the cup. The Voyager Plasma Science (PLS) experiment is used as a specific example. Two approximations to the rigorous response function useful for data analysis are discussed. The theoretical formulas were tested by multi-sensor analysis of solar wind data. The tests indicate that the formulas represent the true cup response function for all angles of incidence with a maximum error of only a few percent.
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1984-01-01
The important algorithm issues necessary to achieve a real-time flutter monitoring system are addressed, namely, the guidelines for choosing appropriate model forms, reduction of the parameter convergence transient, handling multiple modes, the effect of over-parameterization, and estimate accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation, and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.
Theoretical analysis of two nonpolarizing beam splitters in asymmetrical glass cubes.
Shi, Jin Hui; Wang, Zheng Ping
2008-05-01
The design principle for a nonpolarizing beam splitter based on the Brewster condition in a cube is introduced. Nonpolarizing beam splitters in an asymmetrical glass cube are proposed and theoretically investigated, and applied examples are given. To realize 50% reflectance and 50% transmittance at specified wavelengths for both polarization components with an error of less than 2%, two measures are taken in the design procedure: adjusting the refractive index of the substrate material and optimizing the thickness of each film. The simulated results show that the targets are achieved using the method reported here.
Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges
2009-02-13
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable, since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first subdivide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity of the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms in practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals
NASA Astrophysics Data System (ADS)
Huerta, E. A.; Gair, Jonathan R.
2009-04-01
We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal to noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors typically R<3 and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.
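A minimal sketch of the two error estimates compared above, assuming a toy sinusoidal waveform in place of the kludge EMRI model and white noise: statistical errors come from the inverse Fisher matrix, and the systematic (theoretical) bias from an assumed omitted model term follows the standard linearized formula.

    import numpy as np

    def waveform(t, params):
        A, f = params
        return A * np.sin(2 * np.pi * f * t)

    t = np.linspace(0, 10, 4000)
    p0 = np.array([1.0, 1.5])
    sigma = 0.1                                   # noise standard deviation

    # numerical partial derivatives dh/dp_i (central differences)
    derivs = []
    for i in range(p0.size):
        dp = np.zeros_like(p0)
        dp[i] = 1e-6 * max(abs(p0[i]), 1.0)
        derivs.append((waveform(t, p0 + dp) - waveform(t, p0 - dp)) / (2 * dp[i]))

    # Fisher matrix Gamma_ij = (dh_i . dh_j) / sigma^2 and its inverse
    gamma = np.array([[di @ dj for dj in derivs] for di in derivs]) / sigma**2
    cov = np.linalg.inv(gamma)
    stat = np.sqrt(np.diag(cov))

    # systematic bias from an omitted waveform piece dh (assumed here)
    dh = 1e-3 * np.sin(2 * np.pi * 3.0 * t)
    bias = cov @ np.array([d @ dh for d in derivs]) / sigma**2
    print("statistical (Fisher) errors:", stat)
    print("ratio R of model error to Fisher error:", np.abs(bias) / stat)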
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation theory: estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
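A minimal sketch of the output error method named above, assuming a scalar discrete-time system x[k+1] = a*x[k] + b*u[k] with measured output y[k] = x[k] + noise; a and b are estimated by minimizing the sum of squared output residuals, which coincides with maximum likelihood for Gaussian measurement noise.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    u = rng.standard_normal(200)                  # input sequence

    def simulate(a, b, u):
        # propagate the model and return the noise-free output sequence
        x, y = 0.0, np.empty(u.size)
        for k, uk in enumerate(u):
            y[k] = x
            x = a * x + b * uk
        return y

    y_meas = simulate(0.9, 0.5, u) + 0.05 * rng.standard_normal(u.size)

    cost = lambda p: np.sum((y_meas - simulate(p[0], p[1], u)) ** 2)
    res = minimize(cost, x0=[0.5, 0.1], method="Nelder-Mead")
    print("estimated a, b:", res.x)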
Hasan, Mehedi; Guemri, Rabiaa; Maldonado-Basilio, Ramón; Lucarz, Frédéric; de Bougrenet de la Tocnaye, Jean-Louis; Hall, Trevor
2015-12-15
A novel photonic circuit design for implementing frequency 8-tupling and 24-tupling was presented [Opt. Lett. 39, 6950 (2014), 10.1364/OL.39.006950], and although its key message remains unaltered, there were typographical errors in the equations that are corrected in this erratum.
2005-09-09
the properties that the symbol-error probability Ps2 takes the binary (M = 2) form of Eq. (147) and that Ps4 = 1 - (1 - Ps2)^2 for M = 4 (Eq. 148). Interestingly, the bit-error probability from Gray-coded 4PSK is the same as that from 2PSK. It is hoped that these comparisons will motivate the reader to invent efficient systems that can achieve the theoretical possibilities.
Precision Measurement of Black Hole Binary Dynamics: Analyzing the LISA Data Stream
NASA Technical Reports Server (NTRS)
McWilliams, Sean T.; Thorpe, James Ira; Baker, John G.; Arnaud, Keith A.; Kelly, Bernard J.
2008-01-01
One of the richest potential sources of insight into fundamental physics that LISA will be capable of observing is the inspiral of supermassive black hole binaries (BHBs). However, the data analysis challenge presented by the LISA data stream is quite unlike the situation for present day gravitational wave detectors. In order to make the precision measurements necessary to achieve LISA's science goals, the BHB signal must be distinguished from a data stream that not only contains instrumental noise, but potentially thousands of other signals as well, so that the "background" we wish to separate out to focus on the BHB signal is likely to be highly nonstationary and non-Gaussian, as well as being of scientific interest in its own right. In addition, whereas the theoretical templates that we calculate in order to ultimately estimate the parameters can afford to be somewhat inaccurate and still be effective for present day and near future detectors, this is not the case for LISA, and extremely high fidelity of the theoretical templates for high signal-to-noise signals will be required to prevent theoretical errors from dominating the parameter estimates. We will describe efforts in the community of LISA data analysts to address the challenges regarding the specific issue of BHB signals. These efforts include using a Markov Chain Monte Carlo approach with the freedom to model the BHB and the other signals present in the data stream simultaneously, rather than trying to remove other signals and risk biasing the remaining data. The Mock LISA Data Challenge is a community effort in which LISA scientists generate rounds of simulated LISA noise with increasingly difficult signal content, and invite the LISA data analysis community to exercise their methods, or develop new methods, in an attempt to extract the parameters for the signals embedded in the mock data. In addition to practical approaches such as this to assess the level of parameter accuracy, one can apply the Fisher matrix formalism to assess both the statistical errors from noise and the theoretical errors.
The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement
ERIC Educational Resources Information Center
Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.
2012-01-01
This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…
Improving CMD Areal Density Analysis: Algorithms and Strategies
NASA Astrophysics Data System (ADS)
Wilson, R. E.
2014-06-01
Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD generation program computes theoretical datasets with simulated observational error and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A are reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.
Ranking and validation of spallation models for isotopic production cross sections of heavy residua
NASA Astrophysics Data System (ADS)
Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef
2017-07-01
The production cross sections of isotopically identified residual nuclei of spallation reactions induced by 136Xe projectiles at 500A MeV on a hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors: the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models as compared to that obtained from the qualitative inspection of the data reproduction. The disagreement was caused by sensitivity of the deviation factors to large statistical errors present in some of the data. A new deviation factor, the A-factor, was proposed that is not sensitive to the statistical errors of the cross sections. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions in the case when the data cover a large range of values. The quality of data reproduction by the theoretical models is discussed. Some systematic deviations of the theoretical predictions from the experimental results are observed.
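A sketch of two commonly used deviation factors, written in forms common in the spallation literature; the exact definitions used by the authors may differ, and the cross sections below are illustrative.

    import numpy as np

    sig_exp = np.array([12.0, 5.5, 0.80, 0.020])   # measured cross sections (mb)
    dsig    = np.array([ 1.0, 0.6, 0.15, 0.015])   # their statistical errors
    sig_th  = np.array([10.5, 6.1, 0.55, 0.004])   # model predictions

    # H-factor: rms deviation weighted by the quoted statistical errors,
    # hence sensitive to how those errors are distributed over the data
    H = np.sqrt(np.mean(((sig_exp - sig_th) / dsig) ** 2))

    # M-factor: mean absolute log deviation, independent of the quoted errors
    M = np.mean(np.abs(np.log10(sig_th / sig_exp)))

    print("H = %.2f, M = %.2f" % (H, M))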
Theory of sampling: four critical success factors before analysis.
Wagner, Claas; Esbensen, Kim H
2015-01-01
Food and feed materials characterization, risk assessment, and safety evaluations can only be ensured if QC measures are based on valid analytical data, stemming from representative samples. The Theory of Sampling (TOS) is the only comprehensive theoretical framework that fully defines all requirements to ensure sampling correctness and representativity, and to provide the guiding principles for sampling in practice. TOS also defines the concept of material heterogeneity and its impact on the sampling process, including the effects from all potential sampling errors. TOS's primary task is to eliminate bias-generating errors and to minimize sampling variability. Quantitative measures are provided to characterize material heterogeneity, on which an optimal sampling strategy should be based. Four critical success factors preceding analysis to ensure a representative sampling process are presented here.
Effective field theory approach to heavy quark fragmentation
Fickinger, Michael; Fleming, Sean; Kim, Chul; ...
2016-11-17
Using an approach based on Soft Collinear Effective Theory (SCET) and Heavy Quark Effective Theory (HQET) we determine the b-quark fragmentation function from electron-positron annihilation data at the Z-boson peak at next-to-next-to leading order with next-to-next-to leading log resummation of DGLAP logarithms, and next-to-next-to-next-to leading log resummation of endpoint logarithms. This analysis improves, by one order, the previous extraction of the b-quark fragmentation function. We find that while the addition of the next order in the calculation does not much shift the extracted form of the fragmentation function, it does reduce theoretical errors, indicating that the expansion is converging. Using an approach based on effective field theory allows us to systematically control theoretical errors. Furthermore, while the fits of theory to data are generally good, the fits seem to be hinting that higher order corrections from HQET may be needed to explain the b-quark fragmentation function at smaller values of momentum fraction.
Hybrid Transverse Polar Navigation for High-Precision and Long-Term INSs
Wu, Qiuping; Zhang, Rong; Hu, Peida; Li, Haixia
2018-01-01
Transverse navigation has been proposed to help inertial navigation systems (INSs) fill the gap in polar navigation ability. However, as the transverse system does not have the ability to navigate globally, a complicated switch between the transverse and the traditional algorithms is necessary when the system moves across the polar circles. To maintain the inner continuity and consistency of the core algorithm, a hybrid transverse polar navigation is proposed in this research based on a combination of Earth-fixed-frame mechanization and transverse-frame outputs. Furthermore, a thorough analysis of kinematic error characteristics, proper damping technology and corresponding long-term contributions of main error sources is conducted for the high-precision INSs. According to the analytical expressions of the long-term navigation errors in polar areas, the 24-h period symmetrical oscillation with a slowly divergent amplitude dominates the transverse horizontal position errors, and the first-order drift dominates the transverse azimuth error, which results from the g0 gyro drift coefficients that occur in corresponding directions. Simulations are conducted to validate the theoretical analysis and the deduced analytical expressions. The results show that the proposed hybrid transverse navigation can ensure the same accuracy and oscillation characteristics in polar areas as the traditional algorithm in low and mid latitude regions. PMID:29757242
Hybrid Transverse Polar Navigation for High-Precision and Long-Term INSs.
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Zhang, Rong; Hu, Peida; Li, Haixia
2018-05-12
Transverse navigation has been proposed to help inertial navigation systems (INSs) fill the gap in polar navigation ability. However, as the transverse system does not have the ability to navigate globally, a complicated switch between the transverse and the traditional algorithms is necessary when the system moves across the polar circles. To maintain the inner continuity and consistency of the core algorithm, a hybrid transverse polar navigation is proposed in this research based on a combination of Earth-fixed-frame mechanization and transverse-frame outputs. Furthermore, a thorough analysis of kinematic error characteristics, proper damping technology and corresponding long-term contributions of main error sources is conducted for the high-precision INSs. According to the analytical expressions of the long-term navigation errors in polar areas, the 24-h period symmetrical oscillation with a slowly divergent amplitude dominates the transverse horizontal position errors, and the first-order drift dominates the transverse azimuth error, which results from the gyro drift coefficients that occur in corresponding directions. Simulations are conducted to validate the theoretical analysis and the deduced analytical expressions. The results show that the proposed hybrid transverse navigation can ensure the same accuracy and oscillation characteristics in polar areas as the traditional algorithm in low and mid latitude regions.
Allan Cheyne, J; Solman, Grayden J F; Carriere, Jonathan S A; Smilek, Daniel
2009-04-01
We present arguments and evidence for a three-state attentional model of task engagement/disengagement. The model postulates three states of mind-wandering: occurrent task inattention, generic task inattention, and response disengagement. We hypothesize that all three states are both causes and consequences of task performance outcomes and apply across a variety of experimental and real-world tasks. We apply this model to the analysis of a widely used GO/NOGO task, the Sustained Attention to Response Task (SART). We identify three performance characteristics of the SART that map onto the three states of the model: RT variability, anticipations, and omissions. Predictions based on the model are tested, and largely corroborated, via regression and lag-sequential analyses of both successful and unsuccessful withholding on NOGO trials as well as self-reported mind-wandering and everyday cognitive errors. The results revealed theoretically consistent temporal associations among the state indicators and between these and SART errors as well as with self-report measures. Lag analysis was consistent with the hypotheses that temporal transitions among states are often extremely abrupt and that the association between mind-wandering and performance is bidirectional. The bidirectional effects suggest that errors constitute important occasions for reactive mind-wandering. The model also enables concrete phenomenological, behavioral, and physiological predictions for future research.
Error monitoring and empathy: Explorations within a neurophysiological context.
Amiruddin, Azhani; Fueggle, Simone N; Nguyen, An T; Gignac, Gilles E; Clunies-Ross, Karen L; Fox, Allison M
2017-06-01
Past literature has proposed that empathy consists of two components: cognitive and affective empathy. Error monitoring mechanisms indexed by the error-related negativity (ERN) have been associated with empathy. Studies have found that a larger ERN is associated with higher levels of empathy. We aimed to expand upon previous work by investigating how error monitoring relates to the independent theoretical domains of cognitive and affective empathy. Study 1 (N = 24) explored the relationship between error monitoring mechanisms and subcomponents of empathy using the Questionnaire of Cognitive and Affective Empathy and found no relationship. Study 2 (N = 38) explored the relationship between the error monitoring mechanisms and overall empathy. Contrary to past findings, there was no evidence to support a relationship between error monitoring mechanisms and scores on empathy measures. A subsequent meta-analysis (Study 3, N = 125) summarizing the relationship across previously published studies together with the two studies reported in the current paper indicated that overall there was no significant association between ERN and empathy and that there was significant heterogeneity across studies. Future investigations exploring the potential variables that may moderate these relationships are discussed. © 2017 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Dong, Hao; Hu, Yahui
2018-04-01
The bending-torsional coupling dynamic load-sharing model of the helicopter face gear split-torque transmission system is established using the lumped-mass method in order to analyze its dynamic load-sharing characteristics. The mathematical model includes nonlinear support stiffness, time-varying meshing stiffness, damping, and gear backlash. The results show that the errors collectively influence the load-sharing characteristics, and that reducing any single error never fully achieves perfect load sharing. The system load-sharing performance can be improved through floating shaft supports. The above method provides a theoretical basis and data support for the optimization design of the system's dynamic performance.
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1980-01-01
A computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 μm) is discussed. The catalogue was used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.
Comments on stellar boundary cooling and the reality of supermetallicity
NASA Technical Reports Server (NTRS)
Deming, D.
1980-01-01
The paper discusses the 'super-metal-rich' (SMR) stars and reexamines Peterson's analysis of the SMR prototype mu Leo (1978) with regard to a postulated error in continuum placement. Model atmospheres are used to compute theoretical equivalent widths and to explore the sensitivity of these widths to metallicity, temperature, surface gravity, and microturbulence. It is shown that Peterson's results are sensitive to continuum placement, and that her data do not indicate that the temperature gradient is steeper in mu Leo than in normal giants. It is concluded that the SMR stars are very metal rich and are also somewhat boundary cooled, possibly due to high metallicity.
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
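A minimal sketch of the correction step described above, assuming the analog code is expressed in units of the high-frequency fringe period T; ac_high may contain period jump errors, while ac_low is coarser but jump-free. Values and noise levels are illustrative.

    import numpy as np

    T = 16.0                                    # high-frequency fringe period
    ac_true = np.linspace(0, 10 * T, 500)       # ground-truth absolute code
    rng = np.random.default_rng(2)
    ac_high = ac_true + 0.05 * T * rng.standard_normal(500)
    ac_high[200:220] += T                       # inject one-period jump errors
    ac_low = ac_true + 0.1 * T * rng.standard_normal(500)   # coarse, jump-free

    k = np.round((ac_low - ac_high) / T)        # integer number of missed periods
    ac_corrected = ac_high + k * T
    print("max residual after correction: %.2f (in code units)"
          % np.max(np.abs(ac_corrected - ac_true)))

The correction is reliable as long as the combined noise of the two codes stays below half a period, which matches the kind of applicable condition the paper derives.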
ERIC Educational Resources Information Center
Tulis, Maria; Steuer, Gabriele; Dresel, Markus
2018-01-01
Research on learning from errors gives reason to assume that errors provide a high potential to facilitate deep learning if students are willing and able to take these learning opportunities. The first aim of this study was to analyse whether beliefs about errors as learning opportunities can be theoretically and empirically distinguished from…
Analysis and modeling of leakage current sensor under pulsating direct current
NASA Astrophysics Data System (ADS)
Li, Kui; Dai, Yihua; Wang, Yao; Niu, Feng; Chen, Zhao; Huang, Shaopo
2017-05-01
In this paper, the transformation characteristics of a current sensor under pulsating DC leakage current are investigated. A mathematical model of the current sensor is proposed to accurately describe the secondary side current and the excitation current. The transformation process of the current sensor is illustrated in detail and the transformation error is analyzed from multiple aspects. A simulation model is built and a sensor prototype is designed to conduct comparative evaluation, and both simulation and experimental results are presented to verify the correctness of the theoretical analysis.
Theoretical and experimental errors for in situ measurements of plant water potential.
Shackel, K A
1984-07-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.
Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1
Shackel, Kenneth A.
1984-01-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
Fundamentals of Free-Space Optical Communications
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Moision, Bruce; Erkmen, Baris
2012-01-01
Free-space optical communication systems potentially gain many dBs over RF systems. There is no upper limit on the theoretically achievable photon efficiency when the system is quantum-noise-limited: a) Intensity modulations plus photon counting can achieve arbitrarily high photon efficiency, but with sub-optimal spectral efficiency. b) Quantum-ideal number states can achieve the ultimate capacity in the limit of perfect transmissivity. Appropriate error correction codes are needed to communicate reliably near the capacity limits. Poisson-modeled noises, detector losses, and atmospheric effects must all be accounted for: a) Theoretical models are used to analyze performance degradations. b) Mitigation strategies derived from this analysis are applied to minimize these degradations.
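A back-of-envelope illustration of the photon/spectral efficiency trade-off mentioned above for M-ary pulse-position modulation (PPM) with photon counting: each pulse carries log2(M) bits, so photon efficiency grows with M while the rate per slot shrinks. The mean photon number per pulse is an assumption.

    import math

    photons_per_pulse = 1.0                     # assumed mean detected photons
    for M in (4, 16, 64, 256, 1024):
        bits = math.log2(M)
        print("%5d-PPM: %5.2f bits/photon, %7.4f bits/slot"
              % (M, bits / photons_per_pulse, bits / M))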
Three-dimensional touch interface for medical education.
Panchaphongsaphak, Bundit; Burgkart, Rainer; Riener, Robert
2007-05-01
We present the technical principle and evaluation of a multimodal virtual reality (VR) system for medical education, called a touch simulator. This touch simulator comes with an innovative three-dimensional (3-D) touch sensitive input device. The device comprises a six-axis force-torque sensor connected to a tangible object representing the shape of an anatomical structure. Information related to the point of contact is recorded by the sensor, processed, and audiovisually displayed. The touch simulator provides a high level of user-friendliness and fidelity compared to other purely graphically oriented simulation environments. In this paper, the touch simulator has been realized as an interactive neuroanatomical training simulator. The user can visualize and manipulate graphical information of the brain surface or different cross-sectional slices by a finger-touch on a brain-like shaped tangible object. We evaluated the system by theoretical derivations, experiments, and subjective questionnaires. In the theoretical analysis, we could show that the contact point estimation error mainly depends on the accuracy and the noise of the sensor, the amount and direction of the applied force, and the geometry of the tangible object. The theoretical results could be validated by experiments: applying a normal force of 10 N on a 120 mm x 120 mm x 120 mm cube causes a maximum error of 2.5 +/- 0.7 mm. This error becomes smaller when increasing the contact force. Based on the survey results, the touch simulator may be a useful tool for assisting medical schools in the visualization of brain image data and the study of neuroanatomy.
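A sketch of the contact-point estimation that underlies such a device, under the standard rigid-body assumption of a single contact: the measured torque about the sensor origin is tau = p x F, so candidate contact points lie on a line along F, which is then intersected with the known object geometry (a sphere stands in for the tangible shape here; all numbers are illustrative).

    import numpy as np

    R = 0.06                                      # sphere radius (m)
    F = np.array([0.0, 0.0, -10.0])               # applied force (N)
    p_true = np.array([0.03, 0.04, np.sqrt(R**2 - 0.03**2 - 0.04**2)])
    tau = np.cross(p_true, F)                     # torque the sensor would read

    p0 = np.cross(F, tau) / (F @ F)               # line point closest to origin
    lam = np.sqrt((R**2 - p0 @ p0) / (F @ F))     # p0 is perpendicular to F
    cands = [p0 + s * lam * F for s in (+1.0, -1.0)]
    p_est = min(cands, key=lambda p: p @ F)       # force must press into surface
    print("estimated:", np.round(p_est, 4), " true:", np.round(p_true, 4))

The error analysis in the abstract follows from this geometry: noise in F and tau displaces the line, and the displacement shrinks as the applied force grows.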
QTest: Quantitative Testing of Theories of Binary Choice.
Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William
2014-01-01
The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.
Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields
Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne
2015-01-01
Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
Rong, Hao; Tian, Jin
2015-05-01
The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility by using a case of the Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that the methods using a sequential accident model are inadequate to analyze human error within a sociotechnical system. System-theoretic accident model and processes (STAMP) was used to develop a universal framework of human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors are related to a broad view of sociotechnical systems, and more comprehensive than the causation presented in the accident investigation report issued officially. Recommendations regarding both technical and managerial improvement for a lower risk of the accident are proposed. The interests of an interdisciplinary approach provide complementary support between system safety and human factors. The integrated method based on STAMP and SD model contributes to HRA effectively. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.
A multiloop generalization of the circle criterion for stability margin analysis
NASA Technical Reports Server (NTRS)
Safonov, M. G.; Athans, M.
1979-01-01
In order to provide a theoretical tool suited for characterizing the stability margins of multiloop feedback systems, multiloop input-output stability results generalizing the circle stability criterion are considered. Generalized conic sectors with 'centers' and 'radii' determined by linear dynamical operators are employed to specify the stability margins as a frequency dependent convex set of modeling errors (including nonlinearities, gain variations and phase variations) which the system must be able to tolerate in each feedback loop without instability. The resulting stability criterion gives sufficient conditions for closed loop stability in the presence of frequency dependent modeling errors, even when the modeling errors occur simultaneously in all loops. The stability conditions yield an easily interpreted scalar measure of the amount by which a multiloop system exceeds, or falls short of, its stability margin specifications.
A mass-energy preserving Galerkin FEM for the coupled nonlinear fractional Schrödinger equations
NASA Astrophysics Data System (ADS)
Zhang, Guoyu; Huang, Chengming; Li, Meng
2018-04-01
We consider the numerical simulation of the coupled nonlinear space fractional Schrödinger equations. Based on the Galerkin finite element method in space and the Crank-Nicolson (CN) difference method in time, a fully discrete scheme is constructed. Firstly, we focus on a rigorous analysis of conservation laws for the discrete system. The definitions of discrete mass and energy here correspond with the original ones in physics. Then, we prove that the fully discrete system is uniquely solvable. Moreover, we consider the unconditional convergence properties (that is to say, we complete the error estimates without any mesh ratio restriction). We derive L2-norm error estimates for the nonlinear equations and L∞-norm error estimates for the linear equations. Finally, some numerical experiments are included showing results in agreement with the theoretical predictions.
NASA Astrophysics Data System (ADS)
Glover, Paul W. J.
2016-07-01
When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
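A minimal sketch of the two fits discussed above on synthetic core data carrying a small systematic porosity error; all values are illustrative. The theoretical form forces a = 1, while the Winsauer et al. form lets a float.

    import numpy as np

    phi_true = np.linspace(0.08, 0.30, 20)
    m_true = 2.0
    F = phi_true ** (-m_true)                     # error-free formation factor
    phi_meas = phi_true + 0.01                    # systematic porosity error

    # theoretical Archie form: log F = -m log phi (a forced to 1)
    m_fix = -np.sum(np.log(F) * np.log(phi_meas)) / np.sum(np.log(phi_meas) ** 2)

    # Winsauer form: log F = log a - m log phi
    A = np.column_stack([np.ones_like(phi_meas), -np.log(phi_meas)])
    loga, m_free = np.linalg.lstsq(A, np.log(F), rcond=None)[0]

    print("m with a = 1: %.3f;  m with a free: %.3f (a = %.3f); true m: %.1f"
          % (m_fix, m_free, np.exp(loga), m_true))

The free parameter a absorbs much of the systematic error, which is the paper's explanation for why the Winsauer form often fits better.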
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
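A minimal sketch of the elliptical-area correction applied above, using the average geometry quoted in the abstract; the mean velocity is an assumption for illustration, and the paper's off-axis correction model is not reproduced here.

    import numpy as np

    a = 10.2e-3 / 2                     # semi-major axis (m), average fistula
    b = a / 1.086                       # major axis is 8.6% larger than minor
    v_mean = 0.5                        # assumed time-averaged velocity (m/s)

    area_circle = np.pi * a * a         # circular assumption from major axis
    area_ellipse = np.pi * a * b
    print("area overestimation if assumed circular: %.1f%%"
          % (100 * (area_circle / area_ellipse - 1)))

    flow = v_mean * area_ellipse * 6e4  # m^3/s -> l/min
    print("volume flow with elliptical area: %.2f l/min" % flow)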
Improved method for implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, F. B.; Martin, W. R.
2001-01-01
The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed in Reference [3], two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.
Bayesian learning for spatial filtering in an EEG-based brain-computer interface.
Zhang, Haihong; Yang, Huijuan; Guan, Cuntai
2013-07-01
Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
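A minimal sketch of the quantity at the center of the analysis: the Rayleigh quotient R(w) = (w'C1w)/(w'C2w) of the two class covariance matrices, whose extremal spatial filters are generalized eigenvectors (as in common spatial patterns); the data below are synthetic.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(4)
    X1 = rng.standard_normal((1000, 4)) @ np.diag([3.0, 1.0, 1.0, 0.5])
    X2 = rng.standard_normal((1000, 4)) @ np.diag([0.5, 1.0, 1.0, 3.0])
    C1, C2 = np.cov(X1.T), np.cov(X2.T)

    vals, vecs = eigh(C1, C2)           # generalized eigendecomposition
    w = vecs[:, 0]                      # filter with the smallest quotient
    R = (w @ C1 @ w) / (w @ C2 @ w)
    print("smallest Rayleigh quotient: %.3f (eigenvalue %.3f)" % (R, vals[0]))

The paper's contribution is the link from this quotient to the Bayes error of the resulting power features, which the sketch does not attempt to reproduce.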
NASA Astrophysics Data System (ADS)
Shinoda, Masahisa; Nakatani, Hidehiko; Nakai, Kenya; Ohmaki, Masayuki
2015-09-01
We theoretically calculate behaviors of focusing error signals generated by an astigmatic method in a land-groove-type optical disk. The focusing error signal from the land does not coincide with that from the groove. This behavior is enhanced when a focused spot of an optical pickup moves beyond the radius of the optical disk. A gain difference between the slope sensitivities of focusing error signals from the land and the groove is an important factor with respect to stable focusing servo control. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and the dependences of the gain difference on various factors are investigated. The gain difference strongly depends on the optical intensity distribution of the laser beam in the optical pickup. The calculation method and results in this paper will be reflected in newly developed land-groove-type optical disks.
Analysis of GRACE Range-rate Residuals with Emphasis on Reprocessed Star-Camera Datasets
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.; Naeimi, M.; Bandikova, T.; Guerr, T. M.; Klinger, B.
2015-12-01
Since March 2002 the two GRACE satellites orbit the Earth at relatively low altitude. Determination of the gravity field of the Earth, including its temporal variations, from the satellites' orbits and the inter-satellite measurements is the goal of the mission. Yet, the time-variable gravity signal has not been fully exploited. This can be seen better in the computed post-fit range-rate residuals. The errors reflected in the range-rate residuals are due to different sources, such as systematic errors, mismodelling errors and tone errors. Here, we analyse the effect of three different star-camera data sets on the post-fit range-rate residuals. On the one hand, we consider the available attitude data, and on the other hand we take the two different data sets which have been reprocessed at the Institute of Geodesy, Hannover, and the Institute of Theoretical Geodesy and Satellite Geodesy, TU Graz, Austria, respectively. The differences in the range-rate residuals computed from the different attitude datasets are then analyzed in this study. Details will be given and results will be discussed.
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956
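A sketch of the standard roughness parameterization whose error propagation the paper studies: rms height s and correlation length l computed from a detrended profile; the synthetic profile, sampling step, and 1/e criterion are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(5)
    dx = 0.005                                   # horizontal sampling step (m)
    x = np.arange(400) * dx                      # 2 m profile
    z = np.cumsum(0.002 * rng.standard_normal(400))

    z = z - np.polyval(np.polyfit(x, z, 1), x)   # remove linear trend

    s = np.std(z, ddof=1)                        # rms height
    acf = np.correlate(z, z, mode="full")[z.size - 1:]
    acf /= acf[0]
    l = np.argmax(acf < np.exp(-1.0)) * dx       # correlation length (1/e point)
    print("s = %.4f m, l = %.3f m" % (s, l))

Shorter profiles, fewer profiles, and coarser vertical accuracy all perturb s and l, and the paper propagates exactly these perturbations through the IEM to a soil moisture retrieval error.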
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.
2013-01-01
We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios. This test technique gains importance as modern receivers provide soft decision symbol synchronization and as radio links are challenged to push more data and more protocol overhead through noisier channels, with software-defined radios (SDRs) using error-correction codes that approach Shannon's theoretical limit of performance.
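A minimal sketch of the kind of statistic a soft-decision analyzer can extract beyond the hard-decision BER: after stripping the data modulation, the soft-symbol histogram yields an SNR estimate. Antipodal signaling and Gaussian noise are assumed; levels are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    bits = rng.integers(0, 2, 100000)
    soft = (2.0 * bits - 1.0) + 0.5 * rng.standard_normal(bits.size)

    ber = np.mean((soft > 0).astype(int) != bits)   # hard-decision BER
    folded = soft * (2.0 * bits - 1.0)              # remove data modulation
    snr_db = 10 * np.log10(np.mean(folded) ** 2 / np.var(folded))
    print("BER = %.2e, estimated SNR = %.1f dB" % (ber, snr_db))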
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions in the presence of large position errors since it requires the joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independent of position errors. Finally, position errors are estimated based on the Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verified the effectiveness of the modified method.
NASA Astrophysics Data System (ADS)
Benedetti, A.; Morcrette, J.-J.; Boucher, O.; Dethof, A.; Engelen, R. J.; Fisher, M.; Flentje, H.; Huneeus, N.; Jones, L.; Kaiser, J. W.; Kinne, S.; Mangold, A.; Razinger, M.; Simmons, A. J.; Suttie, M.
2009-07-01
This study presents the new aerosol assimilation system, developed at the European Centre for Medium-Range Weather Forecasts, for the Global and regional Earth-system Monitoring using Satellite and in-situ data (GEMS) project. The aerosol modeling and analysis system is fully integrated in the operational four-dimensional assimilation apparatus. Its purpose is to produce aerosol forecasts and reanalyses of aerosol fields using optical depth data from satellite sensors. This paper is the second of a series which describes the GEMS aerosol effort. It focuses on the theoretical architecture and practical implementation of the aerosol assimilation system. It also provides a discussion of the background errors and observations errors for the aerosol fields, and presents a subset of results from the 2-year reanalysis which has been run for 2003 and 2004 using data from the Moderate Resolution Imaging Spectroradiometer on the Aqua and Terra satellites. Independent data sets are used to show that despite some compromises that have been made for feasibility reasons in regards to the choice of control variable and error characteristics, the analysis is very skillful in drawing to the observations and in improving the forecasts of aerosol optical depth.
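A one-variable illustration of the analysis step underlying any such assimilation system: a background aerosol optical depth with error variance B is combined with an observation with error variance R. The numbers are illustrative, and the operational system solves the equivalent four-dimensional variational problem.

    B, R = 0.04**2, 0.03**2     # assumed background/observation error variances
    x_b, y = 0.18, 0.25         # background and observed aerosol optical depth

    K = B / (B + R)             # optimal weight (Kalman gain)
    x_a = x_b + K * (y - x_b)   # analysis draws toward the observation
    print("analysis AOD = %.3f with gain %.2f" % (x_a, K))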
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
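The divergence between the classical and inverse predictors is easy to reproduce numerically. The sketch below is my own illustration, not code from the paper: it builds two-component Gaussian "spectra", adds device measurement error, and compares a classical least-squares projection onto the known pure spectra with an inverse least-squares regression trained on the noisy calibration set; the two estimates of the analyte concentration systematically differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_chan, noise = 200, 50, 0.05
s_analyte = np.exp(-0.5 * ((np.arange(n_chan) - 20) / 4.0) ** 2)
s_interf = np.exp(-0.5 * ((np.arange(n_chan) - 30) / 6.0) ** 2)
C = rng.uniform(0, 1, size=(n_train, 2))          # true concentrations
X = C @ np.vstack([s_analyte, s_interf])          # ideal mixture spectra
X += noise * rng.standard_normal(X.shape)         # device measurement error

# classical least squares: project a new spectrum onto the known pure spectra
S = np.vstack([s_analyte, s_interf]).T
def cls_predict(x):
    return np.linalg.lstsq(S, x, rcond=None)[0][0]

# inverse least squares: regress analyte concentration on the noisy spectra
b = np.linalg.pinv(X) @ C[:, 0]
def ils_predict(x):
    return x @ b

x_new = 0.5 * s_analyte + 0.3 * s_interf + noise * rng.standard_normal(n_chan)
print(cls_predict(x_new), ils_predict(x_new))     # the two estimates differ
```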
Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2007-01-01
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) the development of an accurate AIRS-only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.
Error analysis of satellite attitude determination using a vision-based approach
NASA Astrophysics Data System (ADS)
Carozza, Ludovico; Bevilacqua, Alessandro
2013-09-01
Improvements in communication and processing technologies have opened the doors to exploiting on-board cameras to compute an object's spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources in vision-based approaches, including numerical ones and possible drift effects, and their influence on the overall accuracy. The method focuses in particular on the analysis of the image registration algorithm, carried out through on-purpose simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. The approach we present is also of general interest for related application domains that require accurate estimation of three-dimensional orientation parameters (e.g., robotics, airborne stabilization).
Falaggis, Konstantinos; Towers, David P; Towers, Catherine E
2012-09-20
Multiwavelength interferometry (MWI) is a well established technique in the field of optical metrology. Previously, we have reported a theoretical analysis of the method of excess fractions that describes the mutual dependence of unambiguous measurement range, reliability, and the measurement wavelengths. In this paper, wavelength selection strategies are introduced that build on the theoretical description and maximize the reliability of the calculated fringe order for a given measurement range, number of wavelengths, and level of phase noise. Practical implementation issues for an MWI interferometer are analyzed theoretically. It is shown that dispersion compensation is best implemented by use of reference measurements around absolute zero in the interferometer. Furthermore, the effects of wavelength uncertainty allow the ultimate performance of an MWI interferometer to be estimated.
NASA Technical Reports Server (NTRS)
Gejji, Raghvendra, R.
1992-01-01
Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
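As one way to picture such a threshold (the report's actual statistics are not reproduced here), the sketch below assumes collision counts in a fixed window are approximately Poisson and flags a fault when the observed count exceeds the smallest value whose tail probability falls below a chosen false-alarm rate.

```python
import math

def collision_threshold(expected_rate, window_s, p_false_alarm=1e-3):
    # Poisson model (assumed): find the smallest k with P(X > k) < p_false_alarm.
    mean = expected_rate * window_s
    k, cdf = 0, 0.0
    while True:
        log_pmf = -mean + k * math.log(mean) - math.lgamma(k + 1)
        cdf += math.exp(log_pmf)
        if 1.0 - cdf < p_false_alarm:
            return k + 1  # counts at or above this level suggest a real fault
        k += 1

# e.g. 2 collisions/s on average, 60 s window:
# collision_threshold(2.0, 60) gives the alarm count for that window
```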
NASA Astrophysics Data System (ADS)
Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun
2018-07-01
A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining the Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic of the expansion coefficients. A theoretical error analysis of the proposed method is provided briefly, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM keeps good accuracy in vibration prediction for the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance for other transient dynamic mechanical systems with uncertainties.
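The interval step of such a method can be illustrated in a few lines. Assuming, as in Chebyshev interval methods generally, that the response at each time instant is expanded as f(α) = c0 + Σ ck Tk(α) with the uncertain parameter α scaled to [-1, 1], the bounds follow from |Tk| ≤ 1; the snippet below shows only that generic bounding step, not the authors' full TMM/PIM pipeline.

```python
import numpy as np

def chebyshev_interval_bounds(coeffs):
    # coeffs[0] is the constant term; higher Chebyshev polynomials satisfy
    # |T_k(alpha)| <= 1 on [-1, 1], so interval arithmetic on the expansion
    # gives [c0 - sum|c_k|, c0 + sum|c_k|] as guaranteed response bounds.
    c = np.asarray(coeffs, dtype=float)
    radius = np.abs(c[1:]).sum()
    return c[0] - radius, c[0] + radius

# e.g. bounds on the response at one time step of a transient simulation:
lo, hi = chebyshev_interval_bounds([1.2e-3, 2.0e-4, -5.0e-5])
```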
Procedures for experimental measurement and theoretical analysis of large plastic deformations
NASA Technical Reports Server (NTRS)
Morris, R. E.
1974-01-01
Theoretical equations are derived and analytical procedures are presented for the interpretation of experimental measurements of large plastic strains in the surface of a plate. Orthogonal gage lengths established on the metal surface are measured before and after deformation. The change in orthogonality after deformation is also measured. Equations yield the principal strains, deviatoric stresses in the absence of surface friction forces, true stresses if the stress normal to the surface is known, and the orientation angle between the deformed gage line and the principal stress-strain axes. Errors in the measurement of nominal strains greater than 3 percent are within engineering accuracy. Applications suggested for this strain measurement system include the large-strain-stress analysis of impact test models, burst tests of spherical or cylindrical pressure vessels, and to augment small-strain instrumentation tests where large strains are anticipated.
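For orientation, the small-strain analogue of this reduction is compact: two normal strains along the initially orthogonal gage lines, plus the shear implied by the loss of orthogonality, determine the principal strains via Mohr's circle. The sketch below shows that textbook form; the report's large-plastic-strain equations use true (logarithmic) strains and differ in detail.

```python
import math

def principal_strains(exx, eyy, gamma_xy):
    # exx, eyy: normal strains along the two initially orthogonal gage lines;
    # gamma_xy: engineering shear strain inferred from the change in orthogonality.
    center = 0.5 * (exx + eyy)
    radius = math.hypot(0.5 * (exx - eyy), 0.5 * gamma_xy)
    e1, e2 = center + radius, center - radius
    theta = 0.5 * math.atan2(gamma_xy, exx - eyy)  # angle to the principal axis
    return e1, e2, theta
```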
Identifiability Of Systems With Modeling Errors
NASA Technical Reports Server (NTRS)
Hadaegh, Yadolah "Fred"
1988-01-01
Advances in theory of modeling errors reported. Recent paper on errors in mathematical models of deterministic linear or weakly nonlinear systems. Extends theoretical work described in NPO-16661 and NPO-16785. Presents concrete way of accounting for difference in structure between mathematical model and physical process or system that it represents.
Evaluation of the Analysis Influence on Transport in Reanalysis Regional Water Cycles
NASA Technical Reports Server (NTRS)
Bosilovich, M. G.; Chen, J.; Robertson, F. R.
2011-01-01
Regional water cycles of reanalyses do not follow theoretical assumptions applicable to pure simulated budgets. The data analysis changes the wind, temperature and moisture, perturbing the theoretical balance. Of course, the analysis is correcting the model forecast error, so that the state fields should be more aligned with observations. Recently, it has been reported that the moisture convergence over continental regions, even those with significant quantities of radiosonde profiles present, can produce long-term values not consistent with theoretical bounds. Specifically, long averages over continents produce some regions of moisture divergence. This implies that the observational analysis leads to a source of water in the region. One such region is the United States Great Plains, where many radiosonde and lidar wind observations are assimilated. We will utilize a new ancillary data set from the MERRA reanalysis called the Gridded Innovations and Observations (GIO), which provides the assimilated observations on MERRA's native grid, allowing more thorough consideration of their impact on regional and global climatology. Included with the GIO data are the observation minus forecast (OmF) and observation minus analysis (OmA). Using OmF and OmA, we can identify the bias of the analysis against each observing system and gain a better understanding of the observations that are controlling the regional analysis. In this study we will focus on the wind and moisture assimilation.
Planning for Coupling Effects in Bitoric Mixed Astigmatism Ablative Treatments.
Alpins, Noel; Ong, James K Y; Stamatelatos, George
2017-08-01
To demonstrate how to determine the historical coupling adjustments of bitoric mixed astigmatism ablative treatments and how to use these historical coupling adjustments to adjust future bitoric treatments. The individual coupling adjustments of the myopic and hyperopic cylindrical components of a bitoric treatment were derived empirically from a retrospective study where the theoretical combined treatment effect on spherical equivalent was compared to the actual change in refractive spherical equivalent. The coupling adjustments that provided the best fit in both mean and standard deviation were determined to be the historical coupling adjustments. Theoretical treatments that incorporated the historical coupling adjustments were then calculated. The actual distribution of postoperative spherical equivalent errors was compared to the theoretically adjusted distribution. The study group comprised 242 eyes and included 118 virgin right eyes and 124 virgin left eyes of 155 individuals. For the laser used, the myopic coupling adjustment was -0.02 and the hyperopic coupling adjustment was 0.30, as derived by global nonlinear optimization. This implies that almost no adjustment of the myopic component of the bitoric treatment is necessary, but that the hyperopic component of the bitoric treatment generates a large amount of unintended spherical shift. The theoretically adjusted treatments targeted zero mean spherical equivalent error, as intended, and the distribution of the theoretical spherical equivalent errors had the same spread as the distribution of actual postoperative spherical equivalent errors. Bitoric mixed astigmatism ablative treatments may display non-trivial coupling effects. Historical coupling adjustments should be taken into consideration when planning mixed astigmatism treatments to improve surgical outcomes. [J Refract Surg. 2017;33(8):545-551.]. Copyright 2017, SLACK Incorporated.
Coil motion effects in watt balances: a theoretical check
NASA Astrophysics Data System (ADS)
Li, Shisong; Schlamminger, Stephan; Haddad, Darine; Seifert, Frank; Chao, Leon; Pratt, Jon R.
2016-04-01
A watt balance is a precision apparatus for the measurement of the Planck constant that has been proposed as a primary method for realizing the unit of mass in a revised International System of Units. In contrast to an ampere balance, which was historically used to realize the unit of current in terms of the kilogram, the watt balance relates electrical and mechanical units through a virtual power measurement and has far greater precision. However, because the virtual power measurement requires the execution of a prescribed motion of a coil in a fixed magnetic field, systematic errors introduced by horizontal and rotational deviations of the coil from its prescribed path will compromise the accuracy. We model these potential errors using an analysis that accounts for the fringing field in the magnet, creating a framework for assessing the impact of this class of errors on the uncertainty of watt balance results.
Cao, Haifeng; Zhang, Jingxu; Yang, Fei; An, Qichang; Zhao, Hongchao; Guo, Peng
2018-05-01
The Thirty Meter Telescope (TMT) project will design and build a 30-m-diameter telescope for research in astronomy at visible and infrared wavelengths. The primary mirror of TMT is made up of 492 hexagonal mirror segments under active control. The highly segmented primary mirror will utilize edge sensors to align and stabilize the relative piston, tip, and tilt degrees of freedom of the segments. The support system assembly (SSA) of the segmented mirror utilizes a guide flexure to decouple the axial support from the lateral support, but its deformation causes measurement error in the edge sensors. We have analyzed the theoretical relationship between the segment movement and the measurement value of the edge sensor. Further, we have proposed a matrix-based error correction method. The correction process and the simulation results for the edge sensor are described in this paper.
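In the simplest linear setting, a matrix-based correction of this kind reduces to fitting and inverting a sensitivity matrix. The sketch below is a generic stand-in (the TMT geometry and flexure terms are not reproduced, and the variable names are hypothetical): fit B from simulated motion/reading pairs, then recover segment motion with a pseudoinverse.

```python
import numpy as np

def fit_sensitivity(motions, readings):
    # Least-squares fit of the linear model readings ~= motions @ B, where B
    # (n_dof x n_sensors) maps segment piston/tip/tilt to edge-sensor output,
    # including any flexure-induced cross-coupling present in the data.
    B, *_ = np.linalg.lstsq(motions, readings, rcond=None)
    return B

def correct_readings(readings, B):
    # Invert the fitted model to estimate the true segment motion
    # from the (flexure-distorted) edge-sensor readings.
    return readings @ np.linalg.pinv(B)
```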
Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models
NASA Astrophysics Data System (ADS)
Southworth, John
2010-11-01
I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components and the effective temperature of the star. I find that the correlations of planetary surface gravity and mass with orbital period have significance levels of only 3.1σ and 2.3σ, respectively. The significance of the latter has not increased with the addition of new data since Paper II. The division of planets into two classes based on Safronov number is increasingly blurred. Most of the objects studied here would benefit from improved photometric and spectroscopic observations, as well as improvements in our understanding of low-mass stars and their effective temperature scale.
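A one-at-a-time perturbation budget of the kind described can be written generically as below (an illustration of the principle, not the JKTEBOP machinery): shift each input by its 1σ uncertainty, record the response of the output quantity, and combine the contributions in quadrature.

```python
import numpy as np

def error_budget(f, params, sigmas):
    # f maps a dict of input parameters (e.g. light-curve parameters, stellar
    # temperature, velocity amplitude) to one output quantity (e.g. the planet
    # mass); sigmas holds the 1-sigma uncertainty of each input.
    base = f(params)
    contrib = {}
    for name, sig in sigmas.items():
        bumped = dict(params)
        bumped[name] += sig
        contrib[name] = f(bumped) - base   # per-parameter error contribution
    total = float(np.sqrt(sum(v * v for v in contrib.values())))
    return contrib, total
```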
Westgate, Philip M
2013-07-20
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
Review of Nearshore Morphologic Prediction
NASA Astrophysics Data System (ADS)
Plant, N. G.; Dalyander, S.; Long, J.
2014-12-01
The evolution of the world's erodible coastlines will determine the balance between the benefits and costs associated with human and ecological utilization of shores, beaches, dunes, barrier islands, wetlands, and estuaries. So, we would like to predict coastal evolution to guide management and planning of human and ecological response to coastal changes. After decades of research investment in data collection, theoretical and statistical analysis, and model development we have a number of empirical, statistical, and deterministic models that can predict the evolution of the shoreline, beaches, dunes, and wetlands over time scales of hours to decades, and even predict the evolution of geologic strata over the course of millennia. Comparisons of predictions to data have demonstrated that these models can have meaningful predictive skill. But these comparisons also highlight the deficiencies in fundamental understanding, formulations, or data that are responsible for prediction errors and uncertainty. Here, we review a subset of predictive models of the nearshore to illustrate tradeoffs in complexity, predictive skill, and sensitivity to input data and parameterization errors. We identify where future improvements in prediction skill will result from improved theoretical understanding, data collection, and model-data assimilation.
Game theoretic power allocation and waveform selection for satellite communications
NASA Astrophysics Data System (ADS)
Shu, Zhihui; Wang, Gang; Tian, Xin; Shen, Dan; Pham, Khanh; Blasch, Erik; Chen, Genshe
2015-05-01
Game theory is a useful method to model interactions between agents with conflicting interests. In this paper, we set up a Game Theoretic Model for Satellite Communications (SATCOM) to solve the interaction between the transmission pair (blue side) and the jammer (red side) to reach a Nash Equilibrium (NE). First, the IFT Game Application Model (iGAM) for SATCOM is formulated to improve the utility of the transmission pair while considering the interference from a jammer. Specifically, in our framework, the frame error rate performance of different modulation and coding schemes is used in the game theoretic solution. Next, the game theoretic analysis shows that the transmission pair can choose the optimal waveform and power given the received power from the jammer. We also describe how the jammer chooses the optimal power given the waveform and power allocation from the transmission pair. Finally, simulations are implemented for the iGAM and the simulation results show the effectiveness of the SATCOM power allocation, waveform selection scheme, and jamming mitigation.
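The equilibrium logic can be prototyped with plain payoff matrices. In the sketch below (illustrative only; the iGAM utilities built from frame-error-rate curves are not reproduced), rows index the transmitter's waveform/power choices, columns the jammer's power levels, and a cell is a pure-strategy Nash equilibrium when neither side can gain by deviating unilaterally.

```python
import numpy as np

def pure_nash(U_tx, U_jam):
    # (i, j) is an equilibrium if row i is a best response to column j for the
    # transmitter, and column j a best response to row i for the jammer.
    eq = []
    rows, cols = U_tx.shape
    for i in range(rows):
        for j in range(cols):
            if U_tx[i, j] >= U_tx[:, j].max() and U_jam[i, j] >= U_jam[i, :].max():
                eq.append((i, j))
    return eq

# toy utilities: 3 transmitter strategies vs 2 jammer power levels
U_tx = np.array([[2.0, 0.5], [1.5, 1.0], [1.0, 0.9]])
print(pure_nash(U_tx, -U_tx))   # zero-sum case: jammer utility is -U_tx
```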
Precise and Scalable Static Program Analysis of NASA Flight Software
NASA Technical Reports Server (NTRS)
Brat, G.; Venet, A.
2005-01-01
Recent NASA mission failures (e.g., Mars Polar Lander and Mars Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds or pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars PathFinder to Mars Exploration Rover) and on the International Space Station.
NASA Astrophysics Data System (ADS)
Yamamoto, Tetsuya; Takeda, Kazuki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. To further improve the BER performance, cyclic delay transmit diversity (CDTD) can be used. CDTD simultaneously transmits the same signal from different antennas after adding different cyclic delays to increase the number of equivalent propagation paths. Although a joint use of CDTD and MMSE-FDE for direct sequence code division multiple access (DS-CDMA) achieves larger frequency diversity gain, the BER performance improvement is limited by the residual inter-chip interference (ICI) after FDE. In this paper, we propose joint FDE and despreading for DS-CDMA using CDTD. Equalization and despreading are simultaneously performed in the frequency-domain to suppress the residual ICI after FDE. A theoretical conditional BER analysis is presented for the given channel condition. The BER analysis is confirmed by computer simulation.
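For reference, the per-frequency MMSE weight at the heart of such receivers takes the familiar form w(k) = H*(k) / (|H(k)|^2 + 1/(Es/N0)); the joint step proposed here additionally folds despreading into the same frequency-domain operation. The snippet below shows only the generic weight and its application, under assumed variable names.

```python
import numpy as np

def mmse_fde(R, H, es_n0):
    # R: received frequency-domain block (after FFT); H: channel frequency
    # response on the same bins (with CDTD, H is the sum of the per-antenna
    # responses including their cyclic-delay phase ramps); es_n0: symbol SNR.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / es_n0)
    return W * R  # equalized spectrum, ready for frequency-domain despreading
```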
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, J.J. Jr.; Hyder, Z.
The Nguyen and Pinder method is one of four techniques commonly used for analysis of response data from slug tests. Limited field research has raised questions about the reliability of the parameter estimates obtained with this method. A theoretical evaluation of this technique reveals that errors were made in the derivation of the analytical solution upon which the technique is based. Simulation and field examples show that the errors result in parameter estimates that can differ from actual values by orders of magnitude. These findings indicate that the Nguyen and Pinder method should no longer be a tool in the repertoire of the field hydrogeologist. If data from a slug test performed in a partially penetrating well in a confined aquifer need to be analyzed, recent work has shown that the Hvorslev method is the best alternative among the commonly used techniques.
Ernst, Dominique; Köhler, Jürgen
2013-01-21
We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10^5 data points and decomposed these long trajectories into shorter segments providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet, the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy in the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
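The core of such an MSD analysis fits in a few lines. The sketch below is generic (it assumes 2-D Brownian motion and a hypothetical choice of four fitting points, echoing the paper's finding that a small, length-independent number of lags is optimal): compute the time-averaged MSD of a trajectory and extract D from the slope of its first lags.

```python
import numpy as np

def msd(track, max_lag):
    # time-averaged mean squared displacement of a 2-D trajectory (N x 2 array)
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def diffusion_coefficient(track, dt, n_fit=4):
    # For 2-D Brownian motion MSD(t) = 4 D t; fit only the first n_fit lags.
    m = msd(track, n_fit)
    lags = dt * np.arange(1, n_fit + 1)
    slope = np.polyfit(lags, m, 1)[0]
    return slope / 4.0
```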
Learning from Errors in Dual Vocational Education: Video-Enhanced Instructional Strategies
ERIC Educational Resources Information Center
Cattaneo, Alberto A. P.; Boldrini, Elena
2017-01-01
Purpose: Starting from the identification of some theoretically driven instructional principles, this paper presents a set of empirical cases based on strategies to learn from errors. The purpose of this paper is to provide first evidence about the feasibility and the effectiveness for learning of video-enhanced error-based strategies in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudiarta, I. Wayan; Angraini, Lily Maysari, E-mail: lilyangraini@unram.ac.id
We have applied the finite difference time domain (FDTD) method with the supersymmetric quantum mechanics (SUSY-QM) procedure to determine excited-state energies of one-dimensional quantum systems. The theoretical basis of FDTD and SUSY-QM, a numerical algorithm, and an illustrative example for a particle in a one-dimensional square-well potential are given in this paper. It is shown that the numerical results are in excellent agreement with theoretical results. The numerical errors produced by the SUSY-QM procedure were due to errors in the estimation of the superpotentials and the supersymmetric partner potentials.
Global Methods for Image Motion Analysis
1992-10-01
a variant of the same error function as in Adiv [2]. Another related approach was presented by Maybank [46, 45]. Nearly all researchers in motion... with an application to stereo vision. In Proc. 7th Intern. Joint Conference on AI, pages 674-679, Vancouver, 1981. [45] S. J. Maybank. Algorithm for... analysing optical flow based on the least-squares method. Image and Vision Computing, 4:38-42, 1986. [46] S. J. Maybank. A Theoretical Study of Optical
NASA Astrophysics Data System (ADS)
Gupta, A. P.; Shanker, Jai
1980-02-01
The relation between long wavelength optical mode frequencies and the Anderson-Gruneisen parameter δ for alkali halides studied by Madan suffers from a mathematical error, which is rectified in the present communication. A theoretical analysis of δ is presented adopting six potential functions for the short-range repulsion energy. Values of δ and γTO calculated from the Varshni-Shukla potential are found to be in closest agreement with experimental data.
1979-11-01
Science Aeronautique, Vol. 6, pp. 38-49, 1950. 9. Anon.: "Methods of testing at constant attitude", ICAO Circular 16-AN/13, 1951. 10. H.L. Jonkers... spectral density analysis, it was determined that a notch filter at 17.7 hertz and a third-order Butterworth low-pass filter with a break frequency of 20... of the effects of specific errors, they are circular in nature and do not address the basic theoretical problem. Therefore, the Cramer-Rao bound
Fractal Point Process and Queueing Theory and Application to Communication Networks
1999-12-31
use of nonlinear dynamics and chaos in the design of innovative analog error-protection codes for communications applications. In the chaos... the following theses, patent, and papers. 1. A. Narula, M. D. Trott, and G. W. Wornell, "Information-Theoretic Analysis of Multiple-Antenna... Bounds," in Proc. Int. Conf. Dec. Control, (Japan), Dec. 1996. 5. G. W. Wornell and M. D. Trott, "Efficient Signal Processing Techniques for...
Stochastic stability of sigma-point Unscented Predictive Filter.
Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong
2015-07-01
In this paper, the Unscented Predictive Filter (UPF) is derived based on the unscented transformation for nonlinear estimation; it goes beyond conventional sigma-point filters, which employ the Kalman filter as the only subject of investigation. To facilitate the new method, the algorithm flow of the UPF is given first. Theoretical analyses then demonstrate that the estimation accuracy of the model error and of the system state is higher for the UPF than for the conventional PF. Moreover, the authors analyze the stochastic boundedness and the error behavior of the UPF for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance stays stable if the system's initial estimation error, the disturbing noise terms, and the model error are small enough; this is the core of the UPF theory. All of the results have been demonstrated by numerical simulations for a nonlinear example system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Temperature Dependence of Faraday Effect-Induced Bias Error in a Fiber Optic Gyroscope.
Li, Xuyou; Liu, Pan; Guang, Xingxing; Xu, Zhenlong; Guan, Lianwu; Li, Guangchun
2017-09-07
Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, such as magnetic field and temperature field variation, is necessary for its practical applications. This paper presents an investigation of Faraday effect-induced bias error of IFOG under varying temperature. Jones matrix method is utilized to formulize the temperature dependence of Faraday effect-induced bias error. Theoretical results show that the Faraday effect-induced bias error changes with the temperature in the non-skeleton polarization maintaining (PM) fiber coil. This phenomenon is caused by the temperature dependence of linear birefringence and Verdet constant of PM fiber. Particularly, Faraday effect-induced bias errors of two polarizations always have opposite signs that can be compensated optically regardless of the changes of the temperature. Two experiments with a 1000 m non-skeleton PM fiber coil are performed, and the experimental results support these theoretical predictions. This study is promising for improving the bias stability of IFOG.
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate. The high-order numerical methods give much smaller numerical errors than the low-order methods. For the stability analysis, the high-order numerical methods predict the stability map almost perfectly, while the low-order numerical methods fail to do so: they predict all theoretically unstable cases to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.
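The benefit claimed for high-order integration is the generic one illustrated below (a standalone demonstration, not the paper's scheme or the Welander loop): on a smooth oscillatory test problem, halving the step cuts the forward-Euler error by roughly 2 but the classical RK4 error by roughly 16.

```python
import numpy as np

def euler(f, y, t, dt, n):
    # first-order explicit Euler
    for _ in range(n):
        y = y + dt * f(t, y)
        t += dt
    return y

def rk4(f, y, t, dt, n):
    # classical fourth-order Runge-Kutta
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y

f = lambda t, y: 1j * y            # y' = i y, exact solution exp(i t)
exact = np.exp(1j * 2 * np.pi)
for n in (100, 200, 400):
    dt = 2 * np.pi / n
    print(n, abs(euler(f, 1.0 + 0j, 0.0, dt, n) - exact),
             abs(rk4(f, 1.0 + 0j, 0.0, dt, n) - exact))
```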
Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties
NASA Astrophysics Data System (ADS)
Fichet, Sylvain; Moreau, Grégory
2016-04-01
The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…). All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.
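In the simplest Gaussian realization of this combination, a Gaussian prior on a relative theory error δ with width σ_th marginalizes analytically, so the experimental and theoretical variances add (with the theory piece scaled by the predicted signal). The snippet below encodes just that textbook case; the paper's full treatment (channel correlations, non-Gaussian priors, the bias approach) goes well beyond it.

```python
import numpy as np

def marginal_chi2(mu_obs, sigma_exp, mu_th, sigma_th_rel):
    # Observed signal strength mu_obs ~ N(mu_th * (1 + delta), sigma_exp^2)
    # with delta ~ N(0, sigma_th_rel^2): integrating delta out gives a
    # Gaussian whose variances add in quadrature.
    var = sigma_exp ** 2 + (sigma_th_rel * mu_th) ** 2
    return (mu_obs - mu_th) ** 2 / var + np.log(2 * np.pi * var)
```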
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GA's.
Effect of neoclassical toroidal viscosity on error-field penetration thresholds in tokamak plasmas.
Cole, A J; Hegna, C C; Callen, J D
2007-08-10
A model for field-error penetration is developed that includes nonresonant as well as the usual resonant field-error effects. The nonresonant components cause a neoclassical toroidal viscous torque that keeps the plasma rotating at a rate comparable to the ion diamagnetic frequency. The new theory is used to examine resonant error-field penetration threshold scaling in Ohmic tokamak plasmas. Compared to previous theoretical results, we find the plasma is less susceptible to error-field penetration and locking, by a factor that depends on the nonresonant error-field amplitude.
Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J
2012-10-01
The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the stable and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.
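One common realization of such an error transformation (whether it matches the paper's exact function is an assumption) uses an exponentially shrinking performance envelope and a logarithmic map that blows up as the error approaches the envelope, so that bounding the transformed error enforces the prescribed overshoot, settling time, and final error:

```python
import numpy as np

def envelope(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    # rho0 bounds the initial/transient error, decay sets the settling rate,
    # rho_inf the final tracking accuracy (all values here are illustrative).
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    # Defined only while |e| < rho; it diverges at the envelope, so a controller
    # keeping this quantity bounded keeps e inside the prescribed bounds.
    z = np.clip(e / rho, -0.999999, 0.999999)
    return 0.5 * np.log((1 + z) / (1 - z))
```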
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-07
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
Standardising analysis of carbon monoxide rebreathing for application in anti-doping.
Alexander, Anthony C; Garvican, Laura A; Burge, Caroline M; Clark, Sally A; Plowman, James S; Gore, Christopher J
2011-03-01
Determination of total haemoglobin mass (Hbmass) via carbon monoxide (CO) depends critically on repeatable measurement of percent carboxyhaemoglobin (%HbCO) in blood with a hemoximeter. The main aim of this study was to determine, for an OSM3 hemoximeter, the number of replicate measures as well as the theoretical change in percent carboxyhaemoglobin required to yield a random error of analysis (Analyser Error) of ≤1%. Before and after inhalation of CO, nine participants provided a total of 576 blood samples that were each analysed five times for percent carboxyhaemoglobin on one of three OSM3 hemoximeters, with approximately one-third of blood samples analysed on each OSM3. The Analyser Error was calculated for the first two (duplicate), first three (triplicate) and first four (quadruplicate) measures on each OSM3, as well as for all five measures (quintuplicates). Two methods of CO rebreathing, a 2-min and a 10-min procedure, were evaluated for Analyser Error. For duplicate analyses of blood, the Analyser Error for the 2-min method was 3.7, 4.0 and 5.0% for the three OSM3s when the percent carboxyhaemoglobin increased by two above resting values. With quintuplicate analyses of blood, the corresponding errors reduced to 0.8, 0.9 and 1.0% for the 2-min method when the percent carboxyhaemoglobin increased by 5.5 above resting values. In summary, to minimise the Analyser Error to approximately ≤1% on an OSM3 hemoximeter, researchers should make ≥5 replicates of percent carboxyhaemoglobin and the volume of CO administered should be sufficient to increase percent carboxyhaemoglobin by ≥5.5 above baseline levels. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
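A plausible formulation of the replicate arithmetic (my reading, not the authors' exact equations) is that averaging n replicates shrinks the hemoximeter's random error by √n, and that the Analyser Error expresses the resulting uncertainty of the pre-to-post difference relative to the induced change in %HbCO:

```python
import math

def analyser_error_pct(sd_replicate, n_replicates, delta_hbco):
    # sd_replicate: SD of repeated %HbCO readings of one blood sample;
    # delta_hbco: rise in %HbCO produced by the CO dose.
    se_mean = sd_replicate / math.sqrt(n_replicates)
    # the measured change is a difference of two means (pre and post), so its
    # SE picks up a factor sqrt(2), assuming similar scatter in both samples
    return 100.0 * math.sqrt(2.0) * se_mean / delta_hbco
```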
Howling, D. H.; Fitzgerald, P. J.
1959-01-01
The Schwarzschild-Villiger effect has been experimentally demonstrated with the optical system used in this laboratory. Using a photographic mosaic specimen as a model, it has been shown that the conclusions of Naora are substantiated and that the SV effect, in large or small magnitude, is always present in optical systems. The theoretical transmission error arising from the presence of the SV effect has been derived for various optical conditions of measurement. The results have been experimentally confirmed. The SV contribution of the substage optics of microspectrophotometers has also been considered. A simple method of evaluating a flare function f(A) is advanced which provides a measure of the SV error present in a system. It is demonstrated that measurements of specimens of optical density less than unity can be made with less than 1 per cent error, when using illuminating beam diameter/specimen diameter ratios of unity and uncoated optical surfaces. For denser specimens it is shown that care must be taken to reduce the illuminating beam/specimen diameter ratio to a value dictated by the magnitude of a flare function f(A), evaluated for a particular optical system, in order to avoid excessive transmission error. It is emphasized that observed densities (transmissions) are not necessarily true densities (transmissions) because of the possibility of SV error. The ambiguity associated with an estimation of stray-light error by means of an opaque object has also been demonstrated. The errors illustrated are not necessarily restricted to microspectrophotometry but may possibly be found in such fields as spectral analysis, the interpretation of x-ray diffraction patterns, the determination of ionizing particle tracks and particle densities in photographic emulsions, and in many other types of photometric analysis. PMID:14403512
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.
NASA Astrophysics Data System (ADS)
Luo, Minghua; Shimizu, Etsuro; Zhang, Feifei; Ito, Masanori
This paper describes a six-axis force/tactile sensor for robot fingers, and a mathematical model of this sensor is proposed. With this model, the grasping force and its moments, as well as the touching position of a robot finger holding an object, can be calculated. A new sensor is fabricated based on this model, in which the elastic sensing unit is made of a brass plate. A new compensating method for decreasing error is proposed. Furthermore, the performance of this sensor is examined. The test results show approximate agreement between the theoretical and measured outputs of the sensor. It is evident that the performance of the new sensor is better than that of the sensor without compensation.
A channel dynamics model for real-time flood forecasting
Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.
1989-01-01
A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to a maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process noise to measurement noise ratio.
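For context, the classical Muskingum scheme that ASPIRE is compared against routes an inflow hydrograph through a reach with the standard three-coefficient recursion; the sketch below implements that textbook form (the parameter values in the usage line are illustrative).

```python
def muskingum_route(inflow, K, x, dt, O0=None):
    # Standard Muskingum storage routing: S = K [x I + (1 - x) O], with the
    # usual coefficients c0, c1, c2 (which sum to 1).
    denom = K * (1.0 - x) + 0.5 * dt
    c0 = (0.5 * dt - K * x) / denom
    c1 = (0.5 * dt + K * x) / denom
    c2 = (K * (1.0 - x) - 0.5 * dt) / denom
    out = [inflow[0] if O0 is None else O0]
    for I1, I2 in zip(inflow[:-1], inflow[1:]):
        out.append(c0 * I2 + c1 * I1 + c2 * out[-1])
    return out

# e.g. muskingum_route([10, 30, 60, 45, 25, 15], K=12.0, x=0.2, dt=6.0)
```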
A fast determination method for transverse relaxation of spin-exchange-relaxation-free magnetometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng
2015-04-15
We propose a fast and accurate determination method for transverse relaxation of the spin-exchange-relaxation-free (SERF) magnetometer. This method is based on the measurement of magnetic resonance linewidth via a chirped magnetic field excitation and amplitude spectrum analysis. Compared with frequency sweeping via separate sinusoidal excitation, our method can achieve linewidth determination within only a few seconds while obtaining good frequency resolution. Therefore, it can avoid the drift error in long-term measurement and improve the accuracy of the determination. As the magnetic resonance frequency of the SERF magnetometer is very low, we include the effect of the negative resonance frequency caused by the chirp and achieve a coefficient of determination of the fitting results better than 0.998, with 95% confidence bounds, relative to the theoretical equation. The experimental results are in good agreement with our theoretical analysis.
NASA Astrophysics Data System (ADS)
Qin, Le; Xie, HuiMin; Zhu, RongHua; Wu, Dan; Che, ZhiGang; Zou, ShiKun
2014-04-01
This paper investigates the effect of the location of the testing area in residual stress measurement by Moiré interferometry combined with the hole-drilling method. The selection of the location of the testing area is analyzed both theoretically and experimentally. In the theoretical study, the factors which affect the surface released radial strain εr were analyzed on the basis of the formulae of the hole-drilling method, and the relations between those factors and εr were established. By combining Moiré interferometry with the hole-drilling method, the residual stress of an interference-fit specimen was measured to verify the theoretical analysis. According to the analysis results, the testing area that minimizes the error of the strain measurement is determined. Moreover, if the orientation of the maximum principal stress is known, the strain can be measured with higher precision by the Moiré interferometry method.
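The underlying hole-drilling reduction referred to here is, in its classical three-gage form, the ASTM E837-type relation sketched below; the Moiré variant replaces the strain-gage rosette with full-field fringe data, so the snippet (with assumed calibration constants A and B, whose sign conventions vary between references) is only the conceptual core.

```python
import math

def hole_drilling_principal_stresses(e1, e2, e3, A, B):
    # e1, e2, e3: relieved strains at 0, 45 and 90 degrees around the hole;
    # A, B: calibration constants for the material and hole geometry.
    p = (e1 + e3) / (4.0 * A)                                # isotropic part
    q = math.hypot(e3 - e1, e1 + e3 - 2.0 * e2) / (4.0 * B)  # deviatoric part
    beta = 0.5 * math.atan2(e1 - 2.0 * e2 + e3, e3 - e1)     # principal angle
    return p + q, p - q, beta
```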
Spelling errors among children with ADHD symptoms: the role of working memory.
Re, Anna Maria; Mirandola, Chiara; Esposito, Stefania Sara; Capodieci, Agnese
2014-09-01
Research has shown that children with attention deficit/hyperactivity disorder (ADHD) may present a series of academic difficulties, including spelling errors. Given that correct spelling is supported by the phonological component of working memory (PWM), the present study examined whether the spelling difficulties of children with ADHD are emphasized when children's PWM is overloaded. A group of 19 children with ADHD symptoms (between 8 and 11 years of age), and a group of typically developing children matched for age, schooling, gender, rated intellectual abilities, and socioeconomic status, were administered two dictation texts: one under typical conditions and one under a pre-load condition that required the participants to remember a series of digits while writing. The results confirmed that children with ADHD symptoms have spelling difficulties and produce a higher percentage of errors compared to the control-group children, and that these difficulties are enhanced under a higher PWM load. An analysis of errors showed that this holds true especially for phonological errors. The increase in errors under the pre-load condition was not due to a tradeoff between working memory and writing, as children with ADHD also performed more poorly on the PWM task. The theoretical and practical implications are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
QTest: Quantitative Testing of Theories of Binary Choice
Regenwetter, Michel; Davis-Stober, Clintin P.; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William
2014-01-01
The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of “Random Cumulative Prospect Theory.” A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences. PMID:24999495
Dopamine prediction errors in reward learning and addiction: from theory to neural circuitry
Keiflin, Ronald; Janak, Patricia H.
2015-01-01
Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error signaling and addiction can be formulated and tested. PMID:26494275
A theoretical approach to measuring pilot workload
NASA Technical Reports Server (NTRS)
Kantowitz, B. H.
1984-01-01
Theoretical assumptions used by researchers in the area of attention were studied, with emphasis upon errors and inconsistent assumptions made by some researchers. Two GAT experiments, two laboratory studies, and one field experiment were conducted.
Improved methods for the measurement and analysis of stellar magnetic fields
NASA Technical Reports Server (NTRS)
Saar, Steven H.
1988-01-01
The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.
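A heavily simplified sketch of the kind of two-component fit described above: the observed profile is modeled as a weighted sum of a non-magnetic line and a Zeeman-split line, and least squares recovers the field strength B and filling factor f together with their random errors from the fit covariance. The Gaussian line shape, wavelength grid, and splitting coefficient are illustrative assumptions, not the paper's radiative-transfer treatment.

```python
import numpy as np
from scipy.optimize import curve_fit

wl = np.linspace(-0.5, 0.5, 200)   # wavelength offset from line center (Angstrom)
C_ZEEMAN = 0.1                     # illustrative Zeeman splitting per kG (Angstrom/kG)

def gauss(wl, center, depth=0.5, width=0.08):
    return depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

def profile(wl, B, f):
    """(1-f) x unsplit component + f x component split by +/- C_ZEEMAN*B."""
    quiet = gauss(wl, 0.0)
    magnetic = 0.5 * (gauss(wl, -C_ZEEMAN * B) + gauss(wl, +C_ZEEMAN * B))
    return 1.0 - ((1 - f) * quiet + f * magnetic)

# Synthetic "observation": B = 2 kG over 30% of the surface, plus noise.
rng = np.random.default_rng(1)
obs = profile(wl, 2.0, 0.3) + rng.normal(0, 0.002, wl.size)

(B_fit, f_fit), cov = curve_fit(profile, wl, obs, p0=[1.0, 0.5])
errs = np.sqrt(np.diag(cov))       # random errors from the fit covariance
print(f"B = {B_fit:.2f} +/- {errs[0]:.2f} kG, f = {f_fit:.2f} +/- {errs[1]:.2f}")
```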
Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna
2013-05-01
Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) the reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kesler, Steven R.
The lifting line theory was first developed by Prandtl and was used primarily in the analysis of airplane wings. Though the theory is about one hundred years old, it is still used in initial calculations to find the lift of a wing. The question that guided this thesis was, "How close does Prandtl's lifting line theory predict the thrust of a propeller?" In order to answer this question, an experiment was designed that measured the thrust of a propeller at different speeds, and the measured thrust was compared to what the theory predicted. A walnut wood ultralight propeller was chosen that had a 1.30 meter (51 inch) length from tip to tip. In this thesis, Prandtl's lifting line theory was modified to account for the incoming velocity varying with the radial position of the airfoil section, and a working code was developed based on this modified equation. A testing rig was built that allowed the propeller to be rotated at high speeds while measuring the thrust. During testing, the rotational speed of the propeller ranged from 13-43 rotations per second, and the measured thrust ranged from 16-33 newtons. The test data were then compared to the theoretical results obtained from the lifting line code. A plot in Chapter 5 (the results section) shows the theoretical vs. actual thrust for different rotational speeds. The theory overpredicted the actual thrust of the propeller: depending on the rotational speed, the error was 36% at low speeds, 84% at low to moderate speeds, and increased to 195% at high speeds. Different reasons for these errors are discussed.
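A toy blade-element version of the kind of calculation the thesis describes, with the local onset velocity varying with radial station; the chord, pitch, and lift slope are placeholder values (the thesis's own geometry and modified lifting-line equation are not reproduced here), and induced velocities are ignored, which is one reason such simple theory overpredicts static thrust.

```python
import numpy as np

RHO = 1.225                  # air density, kg/m^3
B_BLADES = 2                 # blade count
R_TIP, R_HUB = 0.65, 0.08    # blade radii, m (a 1.30 m tip-to-tip propeller)

def thrust(n_rps, chord=0.04, pitch_deg=4.0, a0=2 * np.pi):
    """Integrate blade-element lift over the radius for a static thrust test.

    With no inflow, the local section speed is purely rotational, W = 2*pi*n*r,
    and the angle of attack equals the geometric pitch. Placeholder geometry.
    """
    r = np.linspace(R_HUB, R_TIP, 200)
    W = 2 * np.pi * n_rps * r                 # local section speed, m/s
    cl = a0 * np.radians(pitch_deg)           # thin-airfoil lift coefficient
    dT = 0.5 * RHO * W**2 * chord * cl        # lift per unit span, N/m
    return B_BLADES * np.trapz(dT, r)

for n in (13, 28, 43):                        # rotational speeds from the tests
    print(f"{n:2d} rev/s -> predicted thrust ~ {thrust(n):6.1f} N")
```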
Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor.
Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie
2017-09-29
By combining a parallel mechanism with integrated flexible joints, a large measurement range and high accuracy sensor is realized. However, the main errors of the sensor involve not only assembly errors, but also deformation errors of its flexible leg. Based on a flexible joint 6-UPUR (a kind of mechanism configuration where U-universal joint, P-prismatic joint, R-revolute joint) parallel six-axis force sensor developed during the preliminary phase, assembly and deformation error modeling and analysis of the resulting sensors with a large measurement range and high accuracy are presented in this paper. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, and the deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix when the synthetic error is taken into account is solved. Finally, measurement and calibration experiments of the sensor, composed of the hardware and software system, are performed. Forced deformation of the force-measuring platform is detected by laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix in actual circumstances is calculated. By comparing the condition numbers and square norms of the coefficient matrices, the conclusion is drawn theoretically that it is very important to take the synthetic error into account in the design stage of the sensor, and helpful to improve the performance of the sensor in order to meet the needs of actual working environments.
2016-01-01
Background: It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results: A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed an increase in the standard deviation that became exponentially greater with the magnitude of the error. The potential magnitudes of the resulting errors in the reported prevalence of malnutrition were compared with published international data and found to be sufficient to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions: The effect of random error in public health surveys, and on the data upon which diagnostic cut-off points are derived to define “health”, has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training & supervision, routine estimation of the likely magnitude of errors using standardization tests, use of the statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
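A minimal sketch of the Monte Carlo experiment described: z-scores are drawn for a notional population, random measurement error of increasing magnitude is added, and prevalence is counted below the conventional -2 SD cut-off. The population mean, sample size, and error magnitudes are illustrative, not the survey values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
true_z = rng.normal(-0.5, 1.0, N)   # illustrative population of anthropometric z-scores
CUTOFF = -2.0                        # conventional malnutrition cut-off

print("error SD  observed SD  prevalence below -2")
for err_sd in (0.0, 0.25, 0.50, 0.75):
    observed = true_z + rng.normal(0.0, err_sd, N)   # add random measurement error
    print(f"  {err_sd:4.2f}     {observed.std():5.3f}       {np.mean(observed < CUTOFF):6.2%}")
```

With these illustrative numbers the standard deviation inflates from 1.00 to 1.25 and the reported prevalence rises from about 6.7% to about 11.5%, mirroring the paper's point that even modest random errors substantially inflate reported prevalence.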
Theoretical studies of system performance and adaptive optics design parameters
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-08-01
The ultimate performance of an adaptive optics (AO) system can be sensitive to specific design parameters of individual components. The type and configuration of a wavefront sensor or the shape of individual deformable-mirror actuator influence functions can have a profound effect on the correctability of the AO system. This paper discusses the results of a theoretical study which employed both closed-form analytic solutions and computer models. In a parametric analysis, wavefront sensor characteristics, noise, and subaperture geometry are independently evaluated against system response to an aberrated wave characteristic of atmospheric turbulence. Similarly, the shape and extent of the deformable-mirror influence function and the placement and number of actuators are evaluated to characterize the effects of fitting error and coupling.
NASA Technical Reports Server (NTRS)
Rossow, Vernon J
1951-01-01
The analysis of Technical Note 2250, 1950, is extended to include the effects of flow rotation. It is found that the theoretical pressure distributions over ogive cylinders can be related by the hypersonic similarity rule with sufficient accuracy for most engineering purposes. The error introduced into the pressure distributions and drag of ogive cylinders by ignoring the rotation term in the characteristic equations is investigated.
Information Theoretic Studies and Assessment of Space Object Identification
2014-03-24
Details of the localization analysis are contained in Ref. [5]. Sec. 1.7.1, "A Bayesian MPE Based Analysis of 2D Point-Source-Pair Superresolution": a second recently submitted paper [6] treats the related problem of the optical superresolution (OSR) of a pair of equal-brightness point sources separated spatially by a distance (or angle) smaller than the diffraction limit. [Cited: arXiv:1403.4897 [physics.optics] (19 March 2014); 6. S. Prasad, "Asymptotics of Bayesian error probability and 2D pair superresolution," submitted to Opt. Express.]
Simulation studies of the application of SEASAT data in weather and state of sea forecasting models
NASA Technical Reports Server (NTRS)
Cardone, V. J.; Greenwood, J. A.
1979-01-01
The design and analysis of SEASAT simulation studies in which the error structure of conventional analyses and forecasts is modeled realistically are presented. The development and computer implementation of a global spectral ocean wave model is described. The design of algorithms for the assimilation of simulated wind data into the models, and for the utilization of real wind and wave height data in a coupled computer system, is presented.
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
The spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine-resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine-resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine-resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
Cosmographic analysis with Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
The limits of standard cosmography are here revised addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To show this, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which the convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
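A small numerical illustration, not the paper's cosmographic pipeline, of why Chebyshev-based approximants remain stable where a Taylor expansion degrades: both approximate a toy function whose Taylor series about z = 0 has convergence radius 1, over a wide "redshift" interval.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda z: 1.0 / (1.0 + z)       # toy function; Taylor radius of convergence is 1

z = np.linspace(0.0, 2.0, 500)      # interval extending beyond the Taylor radius
taylor = sum((-z) ** k for k in range(6))                  # 5th-order Taylor about z=0
cheb = C.Chebyshev.fit(z, f(z), deg=5, domain=[0.0, 2.0])  # same-order Chebyshev fit

print(f"max |Taylor error|    on [0,2]: {np.max(np.abs(taylor - f(z))):.2e}")
print(f"max |Chebyshev error| on [0,2]: {np.max(np.abs(cheb(z) - f(z))):.2e}")
```

The Taylor partial sum blows up past z = 1 while the Chebyshev approximant keeps a uniformly small error over the whole interval, which is the behavior exploited for high-redshift data.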
Numerical study of signal propagation in corrugated coaxial cables
Li, Jichun; Machorro, Eric A.; Shields, Sidney
2017-01-01
This article focuses on high-fidelity modeling of signal propagation in corrugated coaxial cables. Taking advantage of the axisymmetry, we reduce the 3-D problem to a 2-D problem by solving the time-dependent Maxwell's equations in cylindrical coordinates. We then develop a nodal discontinuous Galerkin method for solving the model equations, and we prove stability and error estimates for the semi-discrete scheme. Our numerical results demonstrate that the algorithm not only converges as the theoretical analysis predicts, but is also very effective in solving a variety of signal propagation problems in practical corrugated coaxial cables.
Target Uncertainty Mediates Sensorimotor Error Correction.
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.
ERIC Educational Resources Information Center
Polli, Frida E.; Barton, Jason J. S.; Thakkar, Katharine N.; Greve, Douglas N.; Goff, Donald C.; Rauch, Scott L.; Manoach, Dara S.
2008-01-01
To perform well on any challenging task, it is necessary to evaluate your performance so that you can learn from errors. Recent theoretical and experimental work suggests that the neural sequelae of error commission in a dorsal anterior cingulate circuit index a type of contingency- or reinforcement-based learning, while activation in a rostral…
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once-familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low-frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.
Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR
Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao
2016-01-01
The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and then, the focusing is analyzed theoretically and numerically by using the Systems Tool Kit (STK) software. The accurate GEO SAR slant range histories can be calculated according to the perturbed orbit positions in STK. The perturbed slant range errors are mainly the first and second derivatives, leading to image drifts and defocusing. Simulations of the point target imaging are performed to validate the aforementioned analysis. In the GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time are different and dependent on the geometry configurations. Thus, the influences are varying at different orbit positions: at the equator, the first-order phase errors should be mainly considered; at the perigee and apogee, the second-order phase errors should be mainly considered; at other positions, first-order and second-order exist simultaneously. PMID:27598168
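A back-of-the-envelope sketch of the focusing criterion used in this kind of analysis: the first derivative of the slant-range error produces a linear phase ramp (image drift), while the second derivative produces a quadratic phase error that defocuses the image once it exceeds the customary pi/4 tolerance. The wavelength, integration time, and derivative values below are illustrative, not from the paper.

```python
import numpy as np

WAVELENGTH = 0.24   # L-band wavelength, m (illustrative)
T_INT = 120.0       # synthetic-aperture integration time, s (illustrative)

def phase_errors(dr1, dr2):
    """Phase errors from slant-range error derivatives dr1 (m/s) and dr2 (m/s^2)."""
    t = T_INT / 2.0                                      # edge of the aperture
    linear = 4 * np.pi / WAVELENGTH * dr1 * t            # first order: image drift
    quadratic = 4 * np.pi / WAVELENGTH * dr2 * t**2 / 2  # second order: defocusing
    return linear, quadratic

lin, quad = phase_errors(dr1=1e-4, dr2=1e-5)
print(f"linear phase error:    {lin:6.3f} rad (shifts the image)")
print(f"quadratic phase error: {quad:6.3f} rad "
      f"({'defocuses' if abs(quad) > np.pi / 4 else 'tolerable'} by the pi/4 rule)")
```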
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and that the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.
Theoretical uncertainties on the radius of low- and very-low-mass stars
NASA Astrophysics Data System (ADS)
Tognelli, E.; Prada Moroni, P. G.; Degl'Innocenti, S.
2018-05-01
We performed an analysis of the main theoretical uncertainties that affect the radius of low- and very-low-mass stars predicted by current stellar models. We focused on stars in the mass range 0.1-1 M⊙, on both the zero-age main sequence (ZAMS) and on 1, 2, and 5 Gyr isochrones. First, we quantified the impact on the radius of the uncertainty of several quantities, namely the equation of state, radiative opacity, atmospheric models, convection efficiency, and initial chemical composition. Then, we computed the cumulative radius error stripe obtained by adding the radius variation due to all the analysed quantities. As a general trend, the radius uncertainty increases with the stellar mass. For ZAMS structures the cumulative error stripe of very-low-mass stars is about ±2 and ±3 per cent, while at larger masses it increases up to ±4 and ±5 per cent. The radius uncertainty gets larger and age dependent if isochrones are considered, reaching for M ˜ 1 M⊙ about +12(-15) per cent at an age of 5 Gyr. We also investigated the radius uncertainty at a fixed luminosity. In this case, the cumulative error stripe is the same for both ZAMS and isochrone models and it ranges from about ±4 to +7 and +9(-5) per cent. We also showed that the sole uncertainty on the chemical composition plays an important role in determining the radius error stripe, producing a radius variation that ranges between about ±1 and ±2 per cent on ZAMS models with fixed mass and about ±3 and ±5 per cent at a fixed luminosity.
Rußig, Lorenz L; Schulze, Ralf K W
2013-12-01
The goal of the present study was to develop a theoretical analysis of the errors in implant position which can occur owing to minute registration errors of a reference marker in a cone beam computed tomography volume when inserting an implant with a surgical stent. A virtual dental-arch model was created using anatomic data derived from the literature. Basic trigonometry was used to compute the effects of defined minute registration errors of only one voxel in size. The errors occurring at the implant's neck and apex, in both the horizontal and vertical directions, were computed for the mean ±95% confidence intervals of jaw width and length and for typical implant lengths (8, 10 and 12 mm). The largest errors occur in the vertical direction, for larger voxel sizes and for greater arch dimensions. For a 10 mm implant in the frontal region, these can amount to a mean of 0.716 mm (range: 0.201-1.533 mm). Horizontal errors at the neck are negligible, with a mean overall deviation of 0.009 mm (range: 0.001-0.034 mm). Errors increase with distance to the registration marker and with voxel size, and are affected by implant length. Our study shows that minute and realistic errors occurring in the automated registration of a reference object have an impact on the implant's position and angulation. These errors occur in the fundamental initial step of the long planning chain; thus, they are critical, and users of these systems should be made aware of them. © 2012 John Wiley & Sons A/S.
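A minimal sketch of the trigonometric propagation described: a registration offset of one voxel across the reference marker tilts the registered axis by a small angle, which the lever arm from marker to implant, plus the implant length, then amplifies. All geometry values below are illustrative assumptions, not the study's virtual-arch data.

```python
import math

def implant_errors(voxel_mm, marker_to_implant_mm, implant_len_mm, marker_width_mm=10.0):
    """Deviations at the implant neck and apex from a one-voxel registration offset."""
    theta = math.atan(voxel_mm / marker_width_mm)   # tilt of the registered axis
    neck = marker_to_implant_mm * math.tan(theta)   # deviation at the implant neck
    apex = (marker_to_implant_mm + implant_len_mm) * math.tan(theta)
    return math.degrees(theta), neck, apex

deg, neck, apex = implant_errors(voxel_mm=0.3, marker_to_implant_mm=40.0, implant_len_mm=10.0)
print(f"axis tilt {deg:.2f} deg -> deviation {neck:.2f} mm at neck, {apex:.2f} mm at apex")
```

Even this crude model reproduces the qualitative findings: the error grows with voxel size, with distance from the marker, and with implant length.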
High-Accuracy Measurement of Small Movement of an Object behind Cloth Using Airborne Ultrasound
NASA Astrophysics Data System (ADS)
Hoshiba, Kotaro; Hirata, Shinnosuke; Hachiya, Hiroyuki
2013-07-01
The acoustic measurement of vital information such as breathing and heartbeat in the standing position, whilst the subject is wearing clothes, is a difficult problem. In this paper, we present basic experimental results on measuring the small movement of an object behind cloth. We measured the acoustic characteristics of various types of cloth to obtain the transmission loss through cloth. To observe the relationship between measurement error and target speed under a low signal-to-noise ratio (SNR), we measured the movement of an object behind cloth. The target was placed apart from the cloth to separate the target reflection from the cloth reflection. We found that a small movement of less than 6 mm/s could be observed using the M-sequence, a moving target indicator (MTI) filter, and phase-difference tracking, when the SNR was less than 0 dB. We also present the results of a theoretical error analysis of the MTI filter and phase tracking for high-accuracy measurement. The characteristics of the systematic error were clarified.
A bounding-based solution approach for the continuous arc covering problem
NASA Astrophysics Data System (ADS)
Wei, Ran; Murray, Alan T.; Batta, Rajan
2014-04-01
Road segments, telecommunication wiring, water and sewer pipelines, canals and the like are important features of the urban environment. They are often conceived of and represented as network-based arcs. As a result of the usefulness and significance of arc-based features, there is a need to site facilities along arcs to serve demand. Examples of such facilities include surveillance equipment, cellular towers, refueling centers and emergency response stations, with the intent of being economically efficient as well as providing good service along the arcs. While this amounts to a continuous location problem by nature, various discretizations are generally relied upon to solve such problems. The result is potential for representation errors that negatively impact analysis and decision making. This paper develops a solution approach for the continuous arc covering problem that theoretically eliminates representation errors. The developed approach is applied to optimally place acoustic sensors and cellular base stations along a road network. The results demonstrate the effectiveness of this approach for ameliorating any error and uncertainty in the modeling process.
NASA Astrophysics Data System (ADS)
Hellwagner, Johannes; Sharma, Kshama; Tan, Kong Ooi; Wittmann, Johannes J.; Meier, Beat H.; Madhu, P. K.; Ernst, Matthias
2017-06-01
Pulse imperfections like pulse transients and radio-frequency field maladjustment or inhomogeneity are the main sources of performance degradation and limited reproducibility in solid-state nuclear magnetic resonance experiments. We quantitatively analyze the influence of such imperfections on the performance of symmetry-based pulse sequences and describe how they can be compensated. Based on a triple-mode Floquet analysis, we develop a theoretical description of symmetry-based dipolar recoupling sequences, in particular R26_4^11, calculating first- and second-order effective Hamiltonians using real pulse shapes. We discuss the various origins of effective fields, namely pulse transients, deviation from the ideal flip angle, and fictitious fields, and develop strategies to counteract them for the restoration of full transfer efficiency. We compare experimental applications of transient-compensated pulses and an asynchronous implementation of the sequence to a supercycle, SR26, which is known to be efficient in compensating higher-order error terms. We are able to show the superiority of R26_4^11 compared to the supercycle SR26, given the ability to reduce experimental error on the pulse sequence by pulse-transient compensation and a complete theoretical understanding of the sequence.
Adaptive control of dynamical synchronization on evolving networks with noise disturbances
NASA Astrophysics Data System (ADS)
Yuan, Wu-Jie; Zhou, Jian-Fang; Sendiña-Nadal, Irene; Boccaletti, Stefano; Wang, Zhen
2018-02-01
In real-world networked systems, the underlying structure is often affected by external and internal unforeseen factors, making its evolution typically inaccessible. An adaptive strategy was introduced for maintaining synchronization on unpredictably evolving networks [Sorrentino and Ott, Phys. Rev. Lett. 100, 114101 (2008), 10.1103/PhysRevLett.100.114101], which, however, does not consider the noise disturbances that widely exist in network environments. We provide here strategies to control dynamical synchronization on slowly and unpredictably evolving networks subjected to noise disturbances observed at the node and at the communication-channel level. With our strategy, the nodes' coupling strength is adaptively adjusted with the aim of controlling synchronization, according only to their received signal and the noise disturbances. We first provide a theoretical analysis of the control scheme by introducing an error potential function to seek the minimization of the synchronization error. Then, we show numerical experiments which verify our theoretical results. In particular, it is found that our adaptive strategy is effective even for the case in which the dynamics of the uncontrolled network would be explosive (i.e., the states of all the nodes would diverge to infinity).
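A compact sketch in the spirit of the scheme described: a response Lorenz system adjusts its coupling gain using only the locally received (noisy) drive signal, with the standard error-squared adaptation law. The gains, noise level, and integration settings are illustrative choices, not the paper's controller.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8 / 3):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

rng = np.random.default_rng(0)
dt, steps = 1e-3, 60_000
drive = np.array([1.0, 1.0, 1.0])
resp = np.array([-5.0, 7.0, 20.0])
k, gamma, noise_sd = 0.0, 5.0, 0.05   # coupling gain, adaptation rate, channel noise

for _ in range(steps):
    received = drive[0] + rng.normal(0.0, noise_sd)  # noisy communication channel
    e = received - resp[0]                           # locally measurable sync error
    drive += dt * lorenz(drive)
    resp += dt * (lorenz(resp) + np.array([k * e, 0.0, 0.0]))  # couple on x only
    k += dt * gamma * e * e                          # adapt gain from squared error

print(f"final gain k = {k:.2f}, residual error |x_d - x_r| = {abs(drive[0] - resp[0]):.3f}")
```

The gain grows while the error is large and settles once synchronization is reached, after which the residual error is set by the channel noise.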
Verifying the body tide at the Canary Islands using tidal gravimetry observations
NASA Astrophysics Data System (ADS)
Arnoso, J.; Benavent, M.; Bos, M. S.; Montesinos, F. G.; Vieira, R.
2011-05-01
Gravity tide records from El Hierro, Tenerife and Lanzarote Islands (Canarian Archipelago) have been analyzed and compared to the theoretical body tide model (DDW) of Dehant et al. (1999). The use of a more stringent tidal analysis criterion with the VAV program allowed us to reduce the error bars of the gravimetric factors at Tenerife and Lanzarote by a factor of two compared with previously published values. The calibration values at those sites have also been revisited. Precise ocean tide loading (OTL) corrections based on up-to-date global ocean models and an improved regional ocean model have been obtained for the main tidal harmonics O1, K1, M2, S2. We also point out the importance of using the most accurate coastline definition for OTL calculations in the Canaries. The remaining observational errors depend on the accuracy of the calibration of the gravimeters and/or on the length of the observed data series. Finally, the comparison of the tidal observations with the theoretical body tide models has been done at an accuracy level of 0.1% at El Hierro, 0.4% at Tenerife and 0.5% at Lanzarote.
Generalized algebraic scene-based nonuniformity correction algorithm.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2005-02-01
A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
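A toy one-dimensional rendering of the algebraic idea (not the authors' full two-dimensional algorithm): with a known one-pixel shift between two frames, the scene cancels in a suitably aligned frame difference, leaving bias differences that can be chained along the shift direction to recover the fixed-pattern bias up to a constant.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
scene = np.cumsum(rng.normal(0, 1, n + 1))   # 1-D scene, one extra sample for the shift
bias = rng.normal(0, 5, n)                   # fixed-pattern bias to be estimated

# Two frames of the same scene, translated by exactly one pixel between exposures.
frame1 = scene[1:] + bias                    # frame1[i] = s[i+1] + b[i]
frame2 = scene[:-1] + bias                   # frame2[i] = s[i]   + b[i]

# The scene cancels: frame2[i+1] - frame1[i] = b[i+1] - b[i].
db = frame2[1:] - frame1[:-1]
b_est = np.concatenate([[0.0], np.cumsum(db)])   # bias up to the unknown constant b[0]

residual = (bias - bias[0]) - b_est
print(f"max bias estimation error: {np.max(np.abs(residual)):.2e}")  # ~ round-off only
```

The full algorithm handles arbitrary translational motion in two dimensions, but the scene cancellation above is the core of the bias compensator.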
Suzuki, Y
2001-11-01
A methodology for selecting the measurement conditions in the dye-binding method for determining serum protein has been studied by a theoretical calculation. This calculation was based on the fact that a protein error occurs because of a reaction between the side chains of a positively charged amino acid residue in a protein molecule and a dissociated dye anion. The calculated characteristics of this method are summarized as follows: (1) Although the reaction between the dye and the protein occurs up to about pH 12, a change in the color shade, called protein error, is observed only in a pH region restricted within narrow limits. (2) Although the apparent absorbance (the absorbance of the test solution measured against a reagent blank) is lower than the true absorbance indicated by the formed dye-protein complex, the apparent absorbance correlates with the true absorbance with a correlation coefficient of 1.0. (3) At a higher dye concentration, the calibration curve is more linear at a higher pH than at a lower pH. Most of these characteristics were similarly observed experimentally in the reactions of BPB, BCG and BCP with human and bovine albumins. It is concluded that in order to ensure the linearity of the calibration curve, the measurement should be performed at a higher dye concentration and sufficiently high pH where the detection sensitivity is satisfied.
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
2016-10-20
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
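A minimal linear illustration of the contrast drawn here, for one backward-Euler step of dy/dt = Ay with a reduced basis V: Galerkin enforces V^T r = 0, while LSPG minimizes the time-discrete residual in the least-squares sense. The operator and basis are random placeholders, so this shows the mechanics of the two projections only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, dt = 200, 10, 0.1
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))   # placeholder (roughly stable) operator
V, _ = np.linalg.qr(rng.normal(size=(n, r)))     # orthonormal reduced basis
y_prev = rng.normal(size=n)

# Backward-Euler residual: r(y) = (I - dt*A) y - y_prev, with y = V @ yhat.
J = (np.eye(n) - dt * A) @ V

# Galerkin: project the residual onto the basis, V^T r(V yhat) = 0.
yhat_g = np.linalg.solve(V.T @ J, V.T @ y_prev)

# LSPG: minimize || r(V yhat) ||_2, a time-discrete least-squares problem.
yhat_lspg, *_ = np.linalg.lstsq(J, y_prev, rcond=None)

y_full = np.linalg.solve(np.eye(n) - dt * A, y_prev)   # full-order reference step
for name, yh in (("Galerkin", yhat_g), ("LSPG", yhat_lspg)):
    print(f"{name:8s} step error vs full order: {np.linalg.norm(V @ yh - y_full):.4f}")
```

Note how the time step dt enters the LSPG least-squares operator J, which is the mechanism behind the paper's observation that the LSPG error depends non-monotonically on dt.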
Triple-frequency radar retrievals of snowfall properties from the OLYMPEX field campaign
NASA Astrophysics Data System (ADS)
Leinonen, J. S.; Lebsock, M. D.; Sy, O. O.; Tanelli, S.
2017-12-01
Retrieval of snowfall properties with radar is subject to significant errors arising from the uncertainties in the size and structure of snowflakes. Recent modeling and theoretical studies have shown that multi-frequency radars can potentially constrain the microphysical properties and thus reduce the uncertainties in the retrieved snow water content. So far, there have only been limited efforts to leverage the theoretical advances in actual snowfall retrievals. In this study, we have implemented an algorithm that retrieves the snowfall properties from triple-frequency radar data using the radar scattering properties from a combination of snowflake scattering databases, which were derived using numerical scattering methods. Snowflake number concentration, characteristic size and density are derived using a combination of optimal estimation and Kalman smoothing; the snow water content and other bulk properties are then derived from these. The retrieval framework is probabilistic and thus naturally provides error estimates for the retrieved quantities. We tested the retrieval algorithm using data from the APR3 airborne radar flown onboard the NASA DC-8 aircraft during the Olympic Mountain Experiment (OLYMPEX) in late 2015. We demonstrated consistent retrieval of snow properties and smooth transition from single- and dual-frequency retrievals to using all three frequencies simultaneously. The error analysis shows that the retrieval accuracy is improved when additional frequencies are introduced. We also compare the findings to in situ measurements of snow properties as well as measurements by polarimetric ground-based radar.
NASA Technical Reports Server (NTRS)
Chang, Ching L.; Jiang, Bo-Nan
1990-01-01
A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formula. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.
Error analysis for a spaceborne laser ranging system
NASA Technical Reports Server (NTRS)
Pavlis, E. C.
1979-01-01
The dependence (or independence) of baseline accuracies, obtained from a typical mission of a spaceborne ranging system, on several factors is investigated. The emphasis is placed on a priori station information, but factors such as the elevation cut-off angle, the geometry of the network, the mean orbital height, and to a limited extent geopotential modeling are also examined. The results are obtained through simulations, but some theoretical justification is also given. Guidelines for freeing the results from these dependencies are suggested for most of the factors.
Studies of asteroids, comets, and Jupiter's outer satellites
NASA Technical Reports Server (NTRS)
Bowell, Edward
1991-01-01
Observational, theoretical, and computational research was performed, mainly on asteroids. Two principal areas of research, centering on astrometry and photometry, are interrelated in their aim to study the overall structure of the asteroid belt and the physical and orbital properties of individual asteroids. Two highlights are: detection of CN emission from Chiron; and realization that 1990 MB is the first known Trojan type asteroid of a planet other than Jupiter. A new method of asteroid orbital error analysis, based on Bayesian theory, was developed.
Study of an instrument for sensing errors in a telescope wavefront
NASA Technical Reports Server (NTRS)
Golden, L. J.; Shack, R. V.; Slater, D. N.
1973-01-01
Partial results of theoretical and experimental investigations of different focal-plane sensor configurations for determining the error in a telescope wavefront are presented. Coarse-range and fine-range sensors were used in the experimentation. The design of a wavefront error simulator is presented, along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.
Accuracy of a class of concurrent algorithms for transient finite element analysis
NASA Technical Reports Server (NTRS)
Ortiz, Michael; Sotelino, Elisa D.; Nour-Omid, Bahram
1988-01-01
The accuracy of a new class of concurrent procedures for transient finite element analysis is examined. A phase error analysis is carried out which shows that wave retardation leading to unacceptable loss of accuracy may occur if a Courant condition based on the dimensions of the subdomains is violated. Numerical tests suggest that this Courant condition is conservative for typical structural applications and may lead to a marked increase in accuracy as the number of subdomains is increased. Theoretical speed-up ratios are derived which suggest that the algorithms under consideration can be expected to exhibit a performance superior to that of globally implicit methods when implemented on parallel machines.
Four Bootstrap Confidence Intervals for the Binomial-Error Model.
ERIC Educational Resources Information Center
Lin, Miao-Hsiang; Hsiung, Chao A.
1992-01-01
Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which similar results are obtained is examined, and the theoretical foundation of each method, its relevance, and the ranges over which it models the true-score uncertainty are discussed. (SLD)
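One of the simplest of the four interval types, the percentile bootstrap, in a minimal sketch for a binomial proportion; the sample values are invented for illustration since the abstract gives none.

```python
import numpy as np

rng = np.random.default_rng(42)
n_items, score = 40, 29                  # invented: 29 of 40 items answered correctly
outcomes = np.array([1] * score + [0] * (n_items - score))

# Percentile bootstrap: resample items with replacement, take empirical quantiles.
boot = rng.choice(outcomes, size=(10_000, n_items), replace=True).mean(axis=1)
lo, hi = np.quantile(boot, [0.025, 0.975])

print(f"p_hat = {score / n_items:.3f}, 95% percentile bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```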
Thermodynamic analysis of onset characteristics in a miniature thermoacoustic Stirling engine
NASA Astrophysics Data System (ADS)
Huang, Xin; Zhou, Gang; Li, Qing
2013-06-01
This paper analyzes the onset characteristics of a miniature thermoacoustic Stirling heat engine using the thermodynamic analysis method. The governing equations of the components are derived from the basic thermodynamic relations and the linear thermoacoustic theory. By solving the governing equation group numerically, the oscillation frequencies and onset temperatures are obtained. The dependence of the oscillation frequency on the kind of working gas and on the length and diameter of the resonator tube is calculated. Meanwhile, the influences of hydraulic radius and mean pressure on the onset temperature for different working gases are also presented. The calculation results indicate that there exists an optimal dimensionless hydraulic radius for obtaining the lowest onset temperature, whose value lies in the range of 0.30-0.35 for different working gases. Furthermore, the amplitude and phase relationships of the pressures and volume flows are analyzed in the time domain. Experiments have been performed to validate the calculations, and the calculation results agree well with the experimental values. Finally, an error analysis is made, explaining the sources of error in the theoretical calculations.
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
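A sketch of the flavor of conjoint rule the paper describes, run on simulated two-condition data: a gene is called induced only when fold-change and raw fluorescence jointly clear thresholds. Here the thresholds are fixed by hand purely for illustration, whereas PRIM infers them by hold-out cross-validation; all simulation parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_genes, n_true = 5000, 500
base = rng.lognormal(6, 1, n_genes)                      # raw fluorescence, condition A
fold = np.ones(n_genes)
fold[:n_true] = rng.uniform(1.5, 3.0, n_true)            # true 1.5-3x inductions
treated = base * fold * rng.lognormal(0, 0.25, n_genes)  # condition B with noise

log_ratio = np.log2(treated / base)
# Boolean conjunction: require fold-change AND brightness to clear thresholds.
called = (log_ratio > 0.8) & (np.minimum(base, treated) > 200.0)

truth = np.zeros(n_genes, dtype=bool)
truth[:n_true] = True
tp, fp = np.sum(called & truth), np.sum(called & ~truth)
print(f"recovered {tp}/{n_true} true inductions with {fp} false positives")
```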
Model parameter-related optimal perturbations and their contributions to El Niño prediction errors
NASA Astrophysics Data System (ADS)
Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua
2018-04-01
Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and restricted to a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas, based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement of numerical models to reduce the systematic bias and the SPB phenomenon in ENSO predictions.
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
2016-11-16
In this study, the classical Welander's oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate. The high-order numerical methods give much smaller numerical errors than the low-order methods. For stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so. For all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when applied to simulate the natural circulation phenomenon, which has gained increasing interest in many future nuclear reactor designs.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
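A schematic of the contrast drawn here, under simple stated assumptions (linear model, known design, mismodeled noise): the formal covariance (H^T W H)^(-1) trusts the assumed weights, while a residual-based empirical matrix reflects the errors actually present. This generic residual-scaling construction is for illustration and is not necessarily the specific reinterpretation the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 100, 2                                  # observations, estimated states
H = np.column_stack([np.ones(m), np.linspace(0, 1, m)])
sigma_assumed, sigma_true = 1.0, 3.0           # noise is 3x larger than modeled
W = np.eye(m) / sigma_assumed**2

x_true = np.array([1.0, -2.0])
y = H @ x_true + rng.normal(0, sigma_true, m)

N = H.T @ W @ H
x_hat = np.linalg.solve(N, H.T @ W @ y)        # weighted least squares estimate

P_theory = np.linalg.inv(N)                    # formal covariance: trusts sigma_assumed
resid = y - H @ x_hat
s2 = (resid @ W @ resid) / (m - n)             # variance factor from actual residuals
P_empirical = s2 * P_theory                    # covariance inflated by real errors

print("formal    state sigmas:", np.sqrt(np.diag(P_theory)))
print("empirical state sigmas:", np.sqrt(np.diag(P_empirical)))
```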
NASA Technical Reports Server (NTRS)
Allman, Mark; Ostermann, Shawn; Kruse, Hans
1996-01-01
In several experiments using NASA's Advanced Communications Technology Satellite (ACTS), investigators have reported disappointing throughput using the transmission control protocol/Internet protocol (TCP/IP) protocol suite over 1.536 Mbit/sec (T1) satellite circuits. A detailed analysis of file transfer protocol (FTP) file transfers reveals that both the TCP window size and the TCP 'slow start' algorithm contribute to the observed limits in throughput. In this paper we summarize the experimental and theoretical analysis of the throughput limit imposed by TCP on the satellite circuit. We then discuss in detail the implementation of a multi-socket FTP client and server, XFTP. XFTP has been tested using the ACTS system. Finally, we discuss a preliminary set of tests on a link with non-zero bit error rates. XFTP shows promising performance under these conditions, suggesting the possibility that a multi-socket application may be less affected by bit errors than a single, large-window TCP connection.
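The window limit at issue can be reproduced with one line of arithmetic: a single TCP connection cannot move more than one window of data per round trip, so throughput is bounded by window/RTT. The round-trip time below is a typical geostationary figure assumed for illustration, not a measured ACTS value.

```python
WINDOW_BYTES = 65_535   # maximum unscaled TCP window
RTT_S = 0.56            # assumed geostationary round-trip time, s
T1_BPS = 1_536_000

ceiling = WINDOW_BYTES * 8 / RTT_S   # window-limited throughput, bits/s
print(f"single-connection ceiling: {ceiling / 1e6:.2f} Mbit/s "
      f"({ceiling / T1_BPS:.0%} of a T1)")
print(f"parallel sockets needed to fill the T1: {int(-(-T1_BPS // ceiling))}")
```

This is the rationale for XFTP's multi-socket design: several connections, each window-limited, can jointly fill the circuit.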
NASA Astrophysics Data System (ADS)
Zhang, Shuqing; Wang, Yongquan; Zhi, Xiyang
2017-05-01
A method of diminishing the shape error of a membrane mirror is proposed in this paper. The inner inflating pressure is considerably decreased by adopting a pre-shaped membrane, and small deformation of the membrane mirror with greatly reduced shape error is thereby achieved. First, a finite element model of the pre-shaped membrane is built on the basis of its mechanical properties. Accurate shape data under different pressures are then acquired by iteratively calculating the node displacements of the model. The shape data are used to build deformed reflecting surfaces for simulative analysis in ZEMAX. Finally, ground-based imaging experiments with 4-bar targets and a natural scene are conducted. Experimental results indicate that the MTF of the infrared system can reach 0.3 at a spatial frequency of 10 lp/mm, and texture details of the natural scene are well presented. The method can provide a theoretical basis and technical support for applications in lightweight optical components with ultra-large apertures.
Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.
Prochazka, Ivan; Panek, Petr
2009-07-01
A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We present the experiments and results of an analysis of the nonlinear effects in such a time interpolator. The analysis shows that nonlinear distortion in the time interpolator circuits causes a deterministic measurement error which can be understood as the time interpolation nonlinearity. The dependence of this error on the arrival time of the measured events can be expressed as a sparse Fourier series, and thus it usually oscillates rapidly in comparison to the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved an interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.
Analysis and improvement of the quantum image matching
NASA Astrophysics Data System (ADS)
Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin
2017-11-01
We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper left corner of the matched area plays a part in the following steps. That is to say, the paper only matched one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.
Experimental verification of Theodorsen's theoretical jet-boundary correction factors
NASA Technical Reports Server (NTRS)
Schliestett, George Van
1934-01-01
Prandtl's suggested use of a doubly infinite arrangement of airfoil images in the theoretical determination of wind-tunnel jet-boundary corrections was first adapted by Glauert to the case of closed rectangular jets. More recently, Theodorsen, using the same image arrangement but a different analytical treatment, has extended this work to include not only closed but also partly closed and open tunnels. This report presents the results of wind-tunnel tests conducted at the Georgia School of Technology for the purpose of verifying the five cases analyzed by Theodorsen. The tests were conducted in a square tunnel and the results constitute a satisfactory verification of his general method of analysis. During the preparation of the data two minor errors were discovered in the theory and these have been rectified.
NASA Astrophysics Data System (ADS)
Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang
2018-07-01
A manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both whole-tooth-profile and single-tooth-profile measurements. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of the cycloid-pin gear transmission are established. Through digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of the precision measurement principle and error compensation theory, a mathematical model for the accurate calculation and data processing of the manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by an iterative optimization solution. Finally, measurement experiments on the cycloidal gear tooth profile are carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and effective control of the manufacturing precision of the cycloidal gear in a robot RV reducer.
Waffle mode error in the AEOS adaptive optics point-spread function
NASA Astrophysics Data System (ADS)
Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.
2003-02-01
Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5 m class ground-based telescopes. However, given sufficiently detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance are already in existence. However, in terms of AO system performance improvements, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems, and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band, and show that the magnitude of the 'waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error, and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially-illuminated WFS subapertures.
Haldar, Justin P.; Leahy, Richard M.
2013-01-01
This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603
NASA Technical Reports Server (NTRS)
Bartenwerfer, M.
1982-01-01
When measuring velocities in turbulent gas flow, approximate signal analysis with hot-wire anemometers having one- and two-wire probes is commonly used. A numerical test of the standard analyses shows that the resulting systematic error increases quickly with increasing turbulence intensity. Since this error also depends on the turbulence structure, it cannot be corrected, so the use of such probes is restricted to low-turbulence flows. With three-wire probes (or X-wire probes in two-dimensional flows), instantaneous velocity values can in principle be determined, and an asymmetric arrangement of the wires offers a theoretical advantage.
NASA Astrophysics Data System (ADS)
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2017-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet in centrifugal coordinates is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. Centrifugal coordinates are adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in global-coordinate calculations. Consequently, the duration of calculation stability has increased to 30 times that of the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.
A critique of supernova data analysis in cosmology
NASA Astrophysics Data System (ADS)
Gopal Vishwakarma, Ram; Narlikar, Jayant V.
2010-12-01
Observational astronomy has shown significant growth over the last decade and has made important contributions to cosmology. A major paradigm shift in cosmology was brought about by observations of Type Ia supernovae. The notion that the universe is accelerating has led to several theoretical challenges. Unfortunately, although high-quality supernovae data-sets are being produced, their statistical analysis leaves much to be desired. Instead of using the data to directly test the model, several studies seem to concentrate on assuming the model to be correct and limiting themselves to estimating model parameters and internal errors. As shown here, the important purpose of testing a cosmological theory is thereby vitiated.
Response function of modulated grid Faraday cup plasma instruments
NASA Technical Reports Server (NTRS)
Barnett, A.; Olbert, S.
1986-01-01
Modulated grid Faraday cup plasma analyzers are a very useful tool for making in situ measurements of space plasmas. One of their great attributes is that their simplicity permits their angular response function to be calculated theoretically. An expression is derived for this response function by computing the trajectories of the charged particles inside the cup. The Voyager plasma science experiment is used as a specific example. Two approximations to the rigorous response function useful for data analysis are discussed. Multisensor analysis of solar wind data indicates that the formulas represent the true cup response function for all angles of incidence with a maximum error of only a few percent.
Östling, Robert; Börstell, Carl; Courtaux, Servane
2018-01-01
We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions by non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis. PMID:29867684
A Self-Referencing Intensity-Based Fiber Optic Sensor with Multipoint Sensing Characteristics
Choi, Sang-Jin; Kim, Young-Chon; Song, Minho; Pan, Jae-Kyung
2014-01-01
A self-referencing, intensity-based fiber optic sensor (FOS) is proposed and demonstrated. The theoretical analysis for the proposed design is given, and the validity of the theoretical analysis is confirmed via experiments. We define the measurement parameter, X, and the calibration factor, β, to find the transfer function, Hm,n, of the intensity-based FOS head. The self-referencing and multipoint sensing characteristics of the proposed system are validated by showing the measured Hm,n^2 and relative error versus the optical power attenuation of the sensor head for four cases: optical source fluctuation, various remote sensing point distances, fiber Bragg gratings (FBGs) with different characteristics, and multiple sensor heads in cascade and/or parallel forms. The power-budget analysis and limitations of the measurement rates are discussed, and the measurement results of fiber-reinforced plastic (FRP) coupon strain using the proposed FOS are given as an actual measurement. The proposed FOS has several benefits, including a self-referencing characteristic, the flexibility to determine FBGs, and a simple structure in terms of the number of devices and the measuring procedure. PMID:25046010
Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures
Moore, Brian R.; Höhna, Sebastian; May, Michael R.; Rannala, Bruce; Huelsenbeck, John P.
2016-01-01
Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM. PMID:27512038
A Quantum Theoretical Explanation for Probability Judgment Errors
ERIC Educational Resources Information Center
Busemeyer, Jerome R.; Pothos, Emmanuel M.; Franco, Riccardo; Trueblood, Jennifer S.
2011-01-01
A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector…
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
2018-04-01
Reports an error in "Robust, replicable, and theoretically-grounded: A response to Brown and Coyne's (2017) commentary on the relationship between emodiversity and health" by Jordi Quoidbach, Moïra Mikolajczak, June Gruber, Ilios Kotsou, Aleksandr Kogan and Michael I. Norton ( Journal of Experimental Psychology: General , 2018[Mar], Vol 147[3], 451-458). In the article, there is an error in the byline for the first author due to a printer error. The complete, correct institutional affiliation for Jordi Quoidbach is ESADE Business School, Ramon Llull University. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2018-06787-002.) In 2014 in the Journal of Experimental Psychology: General , we reported 2 studies demonstrating that the diversity of emotions that people experience-as measured by the Shannon-Wiener entropy index-was an independent predictor of mental and physical health, over and above the effect of mean levels of emotion. Brown and Coyne (2017) questioned both our use of Shannon's entropy and our analytic approach. We thank Brown and Coyne for their interest in our research; however, both their theoretical and empirical critiques do not undermine the central theoretical tenets and empirical findings of our research. We present an in-depth examination that reveals that our findings are statistically robust, replicable, and reflect a theoretically grounded phenomenon with real-world implications. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
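For a concrete feel of the boundary-probability error bound, consider the first example network, the birth and death model, truncated with a reflecting boundary. The sketch below (plain Python, not the authors' code; the rates are arbitrary) solves the truncated steady state and reports the probability mass sitting on the reflecting boundary, which the theory uses to bound the truncation error.

```python
import numpy as np

def birth_death_steady_state(k_b, k_d, n_max):
    """Steady state of a truncated birth-death dCME with a reflecting
    boundary at n_max; boundary mass serves as a truncation-error proxy."""
    n = np.arange(n_max + 1)
    A = np.zeros((n_max + 1, n_max + 1))
    A[n[:-1] + 1, n[:-1]] += k_b             # birth: n -> n + 1
    A[n[1:] - 1, n[1:]] += k_d * n[1:]       # death: n -> n - 1
    A -= np.diag(A.sum(axis=0))              # generator: columns sum to 0
    w, v = np.linalg.eig(A)                  # steady state = null vector
    p = np.real(v[:, np.argmin(np.abs(w))])
    return p / p.sum()

for n_max in (10, 20, 40):                   # shrinking boundary mass
    p = birth_death_steady_state(k_b=5.0, k_d=1.0, n_max=n_max)
    print(n_max, "boundary probability:", p[-1])
```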
Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources
NASA Astrophysics Data System (ADS)
Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.
2011-05-01
The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a negligible level, as well as a fluorescence error correction method based on simple matrix algebra, are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.
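The correction "based on simple matrix algebra" suggests a linear mixing picture; a hypothetical, minimal version is sketched below, where a matrix F (assumed here, not from the paper) moves excitation light in one spectral band into phosphor re-emission in another, and the correction is a linear solve.

```python
import numpy as np

# Hypothetical illustration: treat the sphere measurement as a linear
# mixing of the true spectral flux s with a re-emission term,
# m = (I + F) s, where F[i, j] moves excitation light absorbed in band j
# into fluorescence emitted in band i. Correction inverts the mixing.
n_bands = 4
F = np.zeros((n_bands, n_bands))
F[2, 0] = 0.08   # blue excitation re-emitted in the phosphor band (assumed)
F[3, 0] = 0.03

s_true = np.array([1.0, 0.2, 0.6, 0.3])      # true band fluxes (arbitrary)
m = (np.eye(n_bands) + F) @ s_true           # what the sphere reports

s_corrected = np.linalg.solve(np.eye(n_bands) + F, m)
print(np.allclose(s_corrected, s_true))      # True
```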
Study on analysis from sources of error for Airborne LIDAR
NASA Astrophysics Data System (ADS)
Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.
2016-11-01
With the advancement of aerial photogrammetry, the ability to obtain geo-spatial information of high spatial and temporal resolution has given airborne LIDAR measurement techniques a new technical role, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of Earth-observation technology: mounted on an aviation platform, it transmits and receives laser pulses to obtain high-precision, high-density three-dimensional point cloud coordinates and intensity information. In this paper, we briefly describe airborne laser radar systems, analyze in detail the error sources of airborne LIDAR data, and put forward corresponding methods to avoid or eliminate these errors. Taking practical engineering applications into account, recommendations are developed for these designs, which has crucial theoretical and practical significance in the field of airborne LIDAR data processing.
Analysis of dynamic behavior of multiple-stage planetary gear train used in wind driven generator.
Wang, Jungang; Wang, Yong; Huo, Zhipu
2014-01-01
A dynamic model of a multiple-stage planetary gear train, composed of a two-stage planetary gear train and a one-stage parallel axis gear, is proposed for use in a wind driven generator to analyze the influence of revolution speed and mesh error on the dynamic load sharing characteristic, based on lumped parameter theory. The dynamic equation of the model is solved using a numerical method to analyze the uniformity of load distribution in the system. It is shown that the load sharing property of the system is significantly affected by mesh error and rotational speed, and that the load sharing coefficient and its rate of change differ markedly between the internal and external meshing of the system. The study provides a useful theoretical guideline for the design of the multiple-stage planetary gear train of a wind driven generator.
NASA Technical Reports Server (NTRS)
Li, Jing; Hylton, Alan; Budinger, James; Nappier, Jennifer; Downey, Joseph; Raible, Daniel
2012-01-01
Due to its simplicity and robustness against wavefront distortion, pulse position modulation (PPM) with a photon counting detector has been seriously considered for long-haul optical wireless systems. This paper evaluates the dual-pulse case and compares it with the conventional single-pulse case. Analytical expressions for the symbol error rate and bit error rate are first derived and numerically evaluated for the strong, negative-exponential turbulent atmosphere; bandwidth efficiency and throughput are subsequently assessed. It is shown that, under a set of practical constraints including pulse width and pulse repetition frequency (PRF), dual-pulse PPM enables better channel utilization and hence a higher throughput than its single-pulse counterpart. This result is new and differs from previous idealistic studies, which showed that multi-pulse PPM provided no essential information-theoretic gains over single-pulse PPM.
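The analytical expressions are not reproduced here, but a Monte Carlo sketch of photon-counting PPM under negative-exponential fading conveys the single-pulse versus dual-pulse comparison; the photon counts and the per-pulse energy convention below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def ppm_ser(n_slots, n_pulses, ks, kb, trials=20000):
    """Monte Carlo symbol error rate for photon-counting PPM with
    negative-exponential fading on the signal (illustrative setup).
    Decoder picks the n_pulses largest slot counts; ideal sync assumed.
    Note: ks is the mean signal count per pulsed slot, so total symbol
    energy doubles for the dual-pulse case under this convention."""
    sig = np.arange(n_pulses)
    errors = 0
    for _ in range(trials):
        h = rng.exponential(1.0)                  # turbulence fade
        counts = rng.poisson(kb, n_slots)         # background counts
        counts[sig] += rng.poisson(ks * h, n_pulses)
        errors += set(np.argsort(counts)[-n_pulses:]) != set(sig)
    return errors / trials

print("single-pulse 16-PPM SER:", ppm_ser(16, 1, ks=8.0, kb=0.2))
print("dual-pulse   16-PPM SER:", ppm_ser(16, 2, ks=8.0, kb=0.2))
```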
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin; Steele, Glen; Zucha, Joan; Schlesinger, Adam
2013-01-01
We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios. This test technique gains importance as modern receivers provide soft-decision symbol synchronization and as radio links are challenged to push more data and more protocol overhead through noisier channels, with software-defined radios (SDRs) using error-correction codes that approach Shannon's theoretical limit of performance.
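One simple ingredient of soft-decision link analysis can be sketched: estimating the operating SNR directly from the soft symbols and comparing it against the hard-decision BER. The moment-based estimator below is a generic textbook device under assumed BPSK signaling, not the SDA's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 200_000)
es_n0 = 10 ** (4.0 / 10)                       # true channel Es/N0 of 4 dB
soft = (2.0 * bits - 1.0) + rng.normal(
    scale=np.sqrt(1.0 / (2.0 * es_n0)), size=bits.size)

# crude moment-based Es/N0 estimate from the soft symbols alone
m1 = np.abs(soft).mean()
m2 = (soft ** 2).mean()
es_n0_est = m1 ** 2 / (2.0 * (m2 - m1 ** 2))

ber = np.mean((soft > 0).astype(int) != bits)  # hard-decision BER
print("Es/N0 estimate (dB):", 10 * np.log10(es_n0_est), " BER:", ber)
```

The gap between the soft-symbol SNR estimate and the SNR implied by the measured BER is one way to quantify implementation loss.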
Testing jumps via false discovery rate control.
Yen, Yu-Min
2013-01-01
Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such multiple hypothesis tests, it is well known that controlling the type I error often produces a large proportion of erroneous rejections, and the situation becomes even worse when jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test via the Barndorff-Nielsen and Shephard (BNS) test statistic, and control the FDR with the Benjamini and Hochberg (BH) procedure. We provide asymptotic results for the FDR control. From simulations, we examine the relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to an empirical analysis of two benchmark stock indices with high frequency data.
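The BH step-up procedure itself is standard and compact; a minimal implementation is sketched below, with illustrative p-values standing in for the per-test BNS jump statistics.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    rejected hypotheses, controlling FDR at level q (independent tests)."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m     # q * k / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                      # reject the k smallest
    return reject

# e.g. p-values from per-day jump statistics (illustrative numbers)
pvals = [0.001, 0.21, 0.03, 0.004, 0.56, 0.049, 0.0005]
print(benjamini_hochberg(pvals, q=0.05))
```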
Understanding the Doppler effect by analysing spectrograms of the sound of a passing vehicle
NASA Astrophysics Data System (ADS)
Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey
2017-11-01
The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a classroom, both theoretically and experimentally, to deepen students’ understanding of the Doppler effect. Included are our own experimental data (48 sound recordings) to allow others to reproduce the analysis, if they cannot repeat the whole experiment themselves. In addition to its educational purpose, this paper examines the percentage errors in our results. This enabled us to determine sources of error, allowing those conducting similar future investigations to optimize their accuracy.
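For a source emitting a steady tone at f0, the approach and recession plateau frequencies read off the spectrogram determine the speed in closed form: solving f_a = f0 c/(c - v) and f_r = f0 c/(c + v) gives v = c(f_a - f_r)/(f_a + f_r) and f0 = 2 f_a f_r/(f_a + f_r). A minimal sketch, with illustrative frequencies and the speed of sound assumed to be 343 m/s:

```python
C_SOUND = 343.0  # m/s near 20 C (assumed)

def speed_from_plateaus(f_approach, f_recede, c=C_SOUND):
    """Source speed from the two Doppler plateau frequencies."""
    return c * (f_approach - f_recede) / (f_approach + f_recede)

def source_frequency(f_approach, f_recede):
    """Rest-frame emission frequency from the two plateaus."""
    return 2.0 * f_approach * f_recede / (f_approach + f_recede)

f_a, f_r = 415.0, 385.0                 # Hz, illustrative spectrogram reads
print(speed_from_plateaus(f_a, f_r))    # ~12.9 m/s (~46 km/h)
print(source_frequency(f_a, f_r))       # ~399.4 Hz
```

The distance of closest approach follows from how quickly the frequency sweeps between the plateaus, which requires fitting the full f(t) curve rather than just its asymptotes.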
Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem
NASA Astrophysics Data System (ADS)
Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad
2013-12-01
In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded, if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from the correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are the flipped version of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. The joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, so far as the correlation ρ of the complex fading variation is |ρ| < 1, the 2nd order diversity can be achieved only if the two bit streams are fully correlated. This indicates that the diversity order exhibited in the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved to be smaller than that of the 2nd order maximum ratio combining (MRC) diversity, if the average SNRs of the two independent links are the same. Exact as well as asymptotic expressions of the outage probability are theoretically derived in the article. In addition, the theoretical outage results are compared with the frame-error-rate (FER) curves, obtained by a series of simulations for the Slepian-Wolf relay system based on bit-interleaved coded modulation with iterative detection (BICM-ID). It is shown that the FER curves exhibit the same tendency as the theoretical results.
Increasing reliability of Gauss-Kronrod quadrature by Eratosthenes' sieve method
NASA Astrophysics Data System (ADS)
Adam, Gh.; Adam, S.
2001-04-01
The reliability of the local error estimates returned by the Gauss-Kronrod quadrature rules can be raised up to the theoretical 100% rate of success, under error estimate sharpening, provided a number of natural validating conditions are required. The self-validating scheme of the local error estimates, which is easy to implement and adds little supplementary computing effort, strengthens considerably the correctness of the decisions within the automatic adaptive quadrature.
Single-sample method for the estimation of glomerular filtration rate in children
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tauxe, W.N.; Bagchi, A.; Tepe, P.G.
1987-03-01
A method for the determination of the glomerular filtration rate (GFR) in children has been developed which involves the use of a single plasma sample (SPS) after the injection of a radioactive indicator such as radioiodine-labeled diatrizoate (Hypaque). This is analogous to previously published SPS techniques for effective renal plasma flow (ERPF) in adults and children and SPS GFR techniques in adults. As a reference standard, GFR was calculated from compartment analysis of the injected radiopharmaceuticals (Sapirstein method). Theoretical volumes of distribution (Vt), expressed in liters, were calculated at various times after injection by dividing the total injected counts (I) by the plasma concentration (Ct), determined by counting an aliquot of plasma in a well-type scintillation counter. Errors in predicting GFR from the various Vt values were determined as the standard error of estimate (Sy.x) in ml/min. They were found to be relatively high early after injection and to fall to a nadir of 3.9 ml/min at 91 min. The Sy.x-Vt relationship was examined in linear, quadratic, and exponential forms, but the simpler linear relationship was found to yield the lowest error. Other data calculated from the compartment analysis of the reference plasma disappearance curves are presented, but at this time they have apparently little clinical relevance.
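The computational core of the single-sample method is small: an apparent volume of distribution Vt = I/Ct at the sampling time, followed by a linear calibration from Vt to GFR. The sketch below uses placeholder calibration coefficients, since the paper's fitted values are not given in the abstract.

```python
def apparent_volume(injected_counts, plasma_counts_per_ml):
    """Apparent volume of distribution V_t = I / C_t, in liters."""
    counts_per_liter = plasma_counts_per_ml * 1000.0
    return injected_counts / counts_per_liter

def gfr_single_sample(v_t, a=10.0, b=-5.0):
    """Linear single-sample prediction GFR = a * V_t + b in ml/min.
    The coefficients a, b are hypothetical placeholders; in practice
    they come from regression against the compartmental reference."""
    return a * v_t + b

# illustrative 91-minute sample
v91 = apparent_volume(injected_counts=5.0e6, plasma_counts_per_ml=400.0)
print(v91, "L ->", gfr_single_sample(v91), "ml/min")
```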
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
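A bare-bones version of the Green-Kubo estimate, without the on-the-fly stationarity and error machinery that is the paper's actual contribution, can be sketched as follows; the synthetic flux series, prefactor form, and cutoff below are assumptions for illustration only.

```python
import numpy as np

def green_kubo_estimate(flux, dt, volume, kB_T2, t_cut):
    """Green-Kubo coefficient from a scalar flux time series:
    lambda = V / (kB T^2) * integral_0^t_cut <J(0) J(t)> dt
    (thermal-conductivity form; the prefactor depends on the property)."""
    flux = flux - flux.mean()
    n = len(flux)
    lags = int(t_cut / dt)
    acf = np.array([np.dot(flux[:n - k], flux[k:]) / (n - k)
                    for k in range(lags)])       # unbiased autocorrelation
    return volume / kB_T2 * np.trapz(acf, dx=dt)

# replica averaging in the spirit of the paper (synthetic fluxes here;
# real input would come from a molecular dynamics code)
rng = np.random.default_rng(2)
estimates = [green_kubo_estimate(rng.normal(size=50_000), dt=1e-3,
                                 volume=1.0, kB_T2=1.0, t_cut=0.5)
             for _ in range(8)]
print(np.mean(estimates), np.std(estimates) / np.sqrt(len(estimates)))
```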
NASA Astrophysics Data System (ADS)
Zhang, Wei
2011-07-01
The longitudinal dispersion coefficient, DL, is a fundamental parameter of longitudinal solute transport models: the advection-dispersion (AD) model and various deadzone models. Since DL cannot be measured directly, and since its calibration using tracer test data is quite expensive and not always available, researchers have developed various methods, theoretical or empirical, for estimating DL from more easily available cross-sectional hydraulic measurements (i.e., the transverse velocity profile, etc.). However, for known and unknown reasons, DL cannot be satisfactorily predicted using these theoretical/empirical formulae: either there is very large prediction error for the theoretical methods, or there is a lack of generality for the empirical formulae. Here, numerical experiments using Mike21, a software package that implements one of the most rigorous two-dimensional hydrodynamic and solute transport equations, for longitudinal solute transport in hypothetical streams, are presented. An analysis of the evolution of simulated solute clouds indicates that the two fundamental assumptions in Fischer's longitudinal transport analysis may not be reasonable. The transverse solute concentration distribution, and hence the longitudinal transport, appears to be controlled by a dimensionless number ε, where Q is the average volumetric flowrate, Dt is a cross-sectional average transverse dispersion coefficient, and W is the channel flow width. A simple empirical ε relationship may be established. Analysis and a revision of Fischer's theoretical formula suggest that ε influences the efficiency of transverse mixing and hence has a restraining effect on longitudinal spreading. The findings presented here would improve and expand our understanding of longitudinal solute transport in open channel flow.
Associative adjustments to reduce errors in document searching.
ERIC Educational Resources Information Center
BRYANT, EDWARD C.; AND OTHERS
Associative adjustments to a document file are considered as a means for improving retrieval. A theoretical investigation of the statistical properties of a generalized mismatch measure was carried out, and improvements in retrieval resulting from performing associative regression adjustments on the data file were examined both from the theoretical and…
NASA Astrophysics Data System (ADS)
Mao, Cuili; Lu, Rongsheng; Liu, Zhijian
2018-07-01
In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of the periodic phase error is analyzed. The periodic phase errors can be adaptively compensated in the wrapped maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodic phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.
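One common realization of the inverse-phase idea, sketched below under assumed N-step phase shifting, captures a second fringe set whose initial phase is offset by π/N: the dominant gamma-induced phase ripple then flips sign between the two wrapped maps and largely cancels in a wrap-aware average. This is a generic sketch, not necessarily the paper's exact compensation scheme.

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: phi = -atan2(sum I_k sin d_k, sum I_k cos d_k)."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    s = sum(I * np.sin(d) for I, d in zip(images, deltas))
    c = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return -np.arctan2(s, c)

def inverse_phase_compensated(images_normal, images_offset):
    """Average a normal fringe set with one offset by pi/N in initial
    phase; the dominant periodic (gamma) error flips sign and cancels."""
    n = len(images_normal)
    phi1 = wrapped_phase(images_normal)
    phi2 = wrapped_phase(images_offset) - np.pi / n
    return np.angle(0.5 * (np.exp(1j * phi1) + np.exp(1j * phi2)))

# synthetic demo: 4-step fringes distorted by an assumed projector gamma
x = np.linspace(0, 4 * np.pi, 512)
make = lambda off: [(0.5 + 0.5 * np.cos(x + 2 * np.pi * k / 4 + off)) ** 2.2
                    for k in range(4)]
phi = inverse_phase_compensated(make(0.0), make(np.pi / 4))
print(np.abs(np.angle(np.exp(1j * (phi - x)))).max())  # residual phase error
```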
Research of laser echo signal simulator
NASA Astrophysics Data System (ADS)
Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou
2015-11-01
A laser echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. A system model and a time series model of the laser echo signal simulator are established. Factors that could induce fixed and random errors in the simulated return signals are analyzed, and these system insertion errors are then quantified. Using this theoretical model, the simulation system is investigated experimentally. The results, corrected by subtracting the fixed error, indicate that the range error of the simulated laser return signal is less than 0.25 m, and the distance range that the system can simulate is from 50 m to 20 km.
Measurements of Reynolds stress profiles in unstratified tidal flow
Stacey, M.T.; Monismith, Stephen G.; Burau, J.R.
1999-01-01
In this paper we present a method for measuring profiles of turbulence quantities using a broadband acoustic Doppler current profiler (ADCP). The method follows previous work on the continental shelf and extends the analysis to develop estimates of the errors associated with the estimation methods. ADCP data were collected in an unstratified channel and the results of the analysis are compared to theory. This comparison shows that the method provides an estimate of the Reynolds stresses which is unbiased by Doppler noise, and an estimate of the turbulent kinetic energy (TKE) which is biased by an amount proportional to the Doppler noise. The noise in each of these quantities, as well as the bias in the TKE, matches well with the theoretical values produced by the error analysis. The quantification of profiles of Reynolds stresses simultaneously with the measurement of mean velocity profiles allows for extensive analysis of the turbulence of the flow. In this paper, we examine the relation between the turbulence and the mean flow through the calculation of u*, the friction velocity, and Cd, the coefficient of drag. Finally, we calculate quantities of particular interest in turbulence modeling and analysis: the characteristic lengthscales, including a lengthscale which represents the stream-wise scale of the eddies that dominate the Reynolds stresses. Copyright 1999 by the American Geophysical Union.
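The standard way to obtain noise-unbiased Reynolds stresses from a Janus-configuration ADCP, consistent with the abstract's description, is the variance (beam-pair) method: Doppler noise inflates each beam's variance equally and cancels in the difference, while it remains in the summed variances used for TKE. The sketch below assumes a four-beam instrument with beam angle θ; sign conventions vary between instruments.

```python
import numpy as np

def reynolds_stress_profile(b1, b2, theta_deg=20.0):
    """Variance method for an opposing ADCP beam pair. b1, b2 are
    along-beam velocity series with shape (time, depth_cell); the common
    Doppler noise variance cancels in the beam-variance difference."""
    th = np.radians(theta_deg)
    var1 = b1.var(axis=0, ddof=1)
    var2 = b2.var(axis=0, ddof=1)
    return -(var1 - var2) / (4.0 * np.sin(th) * np.cos(th))  # <u'w'>

def tke_proxy(b1, b2, b3, b4):
    """Beam-variance TKE estimate; this one stays biased upward by the
    Doppler noise variance, as the abstract notes."""
    return 0.5 * sum(b.var(axis=0, ddof=1) for b in (b1, b2, b3, b4))
```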
Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast
NASA Astrophysics Data System (ADS)
Chu, Tianli; Xiong, Zixiang
2003-12-01
This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
Quantum error correction of continuous-variable states against Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ralph, T. C.
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
Artificial Intelligence and Second Language Learning: An Efficient Approach to Error Remediation
ERIC Educational Resources Information Center
Dodigovic, Marina
2007-01-01
While theoretical approaches to error correction vary in the second language acquisition (SLA) literature, most sources agree that such correction is useful and leads to learning. While some point out the relevance of the communicative context in which the correction takes place, others stress the value of consciousness-raising. Trying to…
Energy dissipation of slot-type flip buckets
NASA Astrophysics Data System (ADS)
Wu, Jian-hua; Li, Shu-fang; Ma, Fei
2018-04-01
The energy dissipation is a key index in the evaluation of energy dissipation elements. In the present work, a flip bucket with a slot, called the slot-type flip bucket, is theoretically and experimentally investigated by the method of estimating the energy dissipation. The theoretical analysis shows that, in order to obtain the energy dissipation, it is necessary to determine the sequent flow depth h1 and the flow speed V1 at the corresponding position through the flow depth h2 after the hydraulic jump. The relative flow depth h2/h_o is a function of the approach flow Froude number Fr_o, the relative slot width b/B_o, and the relative slot angle θ/β. An expression for estimating the energy dissipation is developed, and its maximum error is not larger than 9.21%.
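The hydraulic-jump bookkeeping referred to here can be illustrated with the classical open-channel relations (standard textbook formulas, not taken from the paper itself): the sequent-depth ratio follows from the approach Froude number, and the head loss across the jump measures the dissipated energy.

```python
import numpy as np

def sequent_depth_ratio(fr1):
    """Belanger relation: h2/h1 = 0.5 * (sqrt(1 + 8 Fr1^2) - 1)."""
    return 0.5 * (np.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

def jump_head_loss(h1, fr1):
    """Energy head loss across a hydraulic jump: (h2 - h1)^3 / (4 h1 h2)."""
    h2 = h1 * sequent_depth_ratio(fr1)
    return (h2 - h1) ** 3 / (4.0 * h1 * h2)

h1, fr1 = 0.12, 6.0                       # illustrative depth (m) and Froude
print(sequent_depth_ratio(fr1))           # 8.0
print(jump_head_loss(h1, fr1), "m")       # ~1.29 m of head dissipated
```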
Analysis and design of a second-order digital phase-locked loop
NASA Technical Reports Server (NTRS)
Blasche, P. R.
1979-01-01
A specific second-order digital phase-locked loop (DPLL) was modeled as a first-order Markov chain with alternatives. From the matrix of transition probabilities of the Markov chain, the steady-state phase error of the DPLL was determined. In a similar manner the loop's response was calculated for a fading input. Additionally, a hardware DPLL was constructed and tested to provide a comparison to the results obtained from the Markov chain model. In all cases tested, good agreement was found between the theoretical predictions and the experimental data.
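Once the chain's transition probabilities are in hand, the steady-state phase error follows directly from the stationary distribution. A toy sketch with hypothetical transition probabilities and phase-error bins (not the paper's loop parameters):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of a finite Markov chain with
    row-stochastic transition matrix P, satisfying pi = pi P."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

# hypothetical 5-state quantized phase-error chain
P = np.array([[0.6, 0.4, 0.0, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0, 0.0],
              [0.0, 0.3, 0.4, 0.3, 0.0],
              [0.0, 0.0, 0.3, 0.5, 0.2],
              [0.0, 0.0, 0.0, 0.4, 0.6]])
phase_states = np.linspace(-0.5, 0.5, 5)      # phase error per state (rad)
pi = stationary_distribution(P)
print("steady-state rms phase error:", np.sqrt(pi @ phase_states ** 2))
```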
Frequency noise measurement of diode-pumped Nd:YAG ring lasers
NASA Technical Reports Server (NTRS)
Chen, Chien-Chung; Win, Moe Zaw
1990-01-01
The combined frequency noise spectrum of two model 120-01A nonplanar ring oscillator lasers was measured by first heterodyne detecting the IF signal and then measuring the IF frequency noise using an RF frequency discriminator. The results indicated the presence of a 1/f-squared noise component in the power spectral density of the frequency fluctuations between 1 Hz and 1 kHz. After incorporating this 1/f-squared noise into the analysis of the optical phase tracking loop, the measured phase error variance closely matches the theoretical predictions.
A novel variable baseline visibility detection system and its measurement method
NASA Astrophysics Data System (ADS)
Li, Meng; Jiang, Li-hui; Xiong, Xing-long; Zhang, Guizhong; Yao, JianQuan
2017-10-01
As an important meteorological observation instrument, the visibility meter helps ensure the safety of traffic operations. However, due to optical system contamination as well as sampling error, the accuracy and stability of such equipment are difficult to maintain in low-visibility environments. To address this problem, a novel measurement instrument was designed based upon multiple baselines; it essentially acts as an atmospheric transmission meter with a movable optical receiver, applying a weighted least-squares method to process the signal. Theoretical analysis and experiments in a real atmospheric environment support this technique.
Low pressure gas flow analysis through an effusive inlet using mass spectrometry
NASA Technical Reports Server (NTRS)
Brown, David R.; Brown, Kenneth G.
1988-01-01
A mass spectrometric method for analyzing flow past and through an effusive inlet designed for use on the tethered satellite and other entering vehicles is discussed. Source stream concentrations of species in a gaseous mixture are determined using a calibration of measured mass spectral intensities versus source stream pressure for standard gas mixtures and pure gases. Concentrations are shown to be accurate within experimental error. Theoretical explanations for observed mass discrimination effects as they relate to the various flow situations in the effusive inlet and the experimental apparatus are discussed.
Evidence for the color-octet mechanism from CERN LEP2 gamma gamma --> J/psi + X Data.
Klasen, Michael; Kniehl, Bernd A; Mihaila, Luminiţa N; Steinhauser, Matthias
2002-07-15
We present theoretical predictions for the transverse-momentum distribution of J/psi mesons promptly produced in gammagamma collisions within the factorization formalism of nonrelativistic quantum chromodynamics, including the contributions from both direct and resolved photons, and we perform a conservative error analysis. The fraction of J/psi mesons from decays of bottom-flavored hadrons is estimated to be negligibly small. New data taken by the DELPHI Collaboration at LEP2 nicely confirm these predictions, while they disfavor those obtained within the traditional color-singlet model.
Giblin, Jay; Syed, Muhammad; Banning, Michael T; Kuno, Masaru; Hartland, Greg
2010-01-26
Absorption cross sections (σ_abs) of single branched CdSe nanowires (NWs) have been measured by photothermal heterodyne imaging (PHI). Specifically, PHI signals from isolated gold nanoparticles (NPs) with known cross sections were compared to those of individual CdSe NWs excited at 532 nm. This allowed us to determine average NW absorption cross sections at 532 nm of σ_abs = (3.17 +/- 0.44) x 10^-11 cm^2/μm (standard error reported). This agrees well with a theoretical value obtained using a classical electromagnetic analysis (σ_abs = 5.00 x 10^-11 cm^2/μm) and also with prior ensemble estimates. Furthermore, NWs exhibit significant absorption polarization sensitivities consistent with prior NW excitation polarization anisotropy measurements. This has enabled additional estimates of the absorption cross sections parallel (σ_abs,∥) and perpendicular (σ_abs,⊥) to the NW growth axis, as well as the corresponding NW absorption anisotropy (ρ_abs). Resulting values of σ_abs,∥ = (5.6 +/- 1.1) x 10^-11 cm^2/μm, σ_abs,⊥ = (1.26 +/- 0.21) x 10^-11 cm^2/μm, and ρ_abs = 0.63 +/- 0.04 (standard errors reported) are again in good agreement with theoretical predictions. These measurements all indicate sizable NW absorption cross sections and ultimately suggest the possibility of future direct single-NW absorption studies.
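The quoted parallel and perpendicular cross sections are internally consistent with the reported anisotropy under the usual definition ρ = (σ∥ - σ⊥)/(σ∥ + σ⊥), as a two-line check confirms:

```python
# Consistency check of the quoted numbers against the standard
# polarization-anisotropy definition rho = (s_par - s_perp)/(s_par + s_perp).
s_par, s_perp = 5.6e-11, 1.26e-11      # cm^2/um, values from the text
rho = (s_par - s_perp) / (s_par + s_perp)
print(round(rho, 2))                   # 0.63, matching the reported value
```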
NASA Astrophysics Data System (ADS)
Abe, M.; Prasannaa, V. S.; Das, B. P.
2018-03-01
Heavy polar diatomic molecules are currently among the most promising probes of fundamental physics. Constraining the electric dipole moment of the electron (eEDM), in order to explore physics beyond the standard model, requires a synergy of molecular experiment and theory. Recent advances in experiment in this field have motivated us to implement a finite-field coupled-cluster (FFCC) approach. This work has distinct advantages over the theoretical methods that we had used earlier in the analysis of eEDM searches. We used relativistic FFCC to calculate molecular properties of interest to eEDM experiments, that is, the effective electric field (E_eff) and the permanent electric dipole moment (PDM). We theoretically determine these quantities for the alkaline-earth monofluorides (AEMs), the mercury monohalides (HgX), and PbF. The latter two systems, as well as BaF from the AEMs, are of interest to eEDM searches. We also report the calculation of these properties using a relativistic FFCC approach with single, double, and partial triple excitations, which is considered to be the gold standard of electronic structure calculations. We also present a detailed error estimate, including errors that stem from our choice of basis sets, and higher-order correlation effects.
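The essence of a finite-field approach can be sketched in one formula (a hedged outline; the authors' exact implementation is not specified in the abstract): the property is obtained as a numerical derivative of the energy with respect to a small perturbation strength λ multiplying the relevant interaction added to the Hamiltonian,

```latex
E_{\mathrm{eff}} \;\propto\;
\left.\frac{\partial E(\lambda)}{\partial \lambda}\right|_{\lambda = 0}
\;\approx\; \frac{E(+\lambda) - E(-\lambda)}{2\lambda},
```

with the analogous derivative taken with respect to an external electric field yielding the PDM.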
The Dolinar Receiver in an Information Theoretic Framework
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Birnbaum, Kevin M.; Moision, Bruce E.; Dolinar, Samuel J.
2011-01-01
Optical communication at the quantum limit requires that measurements on the optical field be maximally informative, but devising physical measurements that accomplish this objective has proven challenging. The Dolinar receiver exemplifies a rare instance of success in distinguishing between two coherent states: an adaptive local oscillator is mixed with the signal prior to photodetection, which yields an error probability that meets the Helstrom lower bound with equality. Here we apply the same local-oscillator-based architecture with an information-theoretic optimization criterion. We begin with an analysis of this receiver in a general framework for an arbitrary coherent-state modulation alphabet, and then we concentrate on two relevant examples. First, we study a binary antipodal alphabet and show that the Dolinar receiver's feedback function not only minimizes the probability of error, but also maximizes the mutual information. Next, we study ternary modulation consisting of antipodal coherent states and the vacuum state. We derive an analytic expression for a near-optimal local oscillator feedback function, and, via simulation, we determine its photon information efficiency (PIE). We provide the PIE versus dimensional information efficiency (DIE) trade-off curve and show that the combination of this modulation and our receiver performs universally better than (generalized) on-off keying plus photon counting, although the advantage vanishes asymptotically as the number of bits per photon diverges toward infinity.
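For context, the Helstrom bound that the Dolinar receiver attains for equiprobable binary antipodal coherent states |±α⟩ is the standard expression

```latex
P_{e} = \frac{1}{2}\left(1 - \sqrt{1 - e^{-4|\alpha|^{2}}}\right),
```

where e^{-4|α|²} = |⟨α|−α⟩|² is the squared overlap of the two states.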
Theoretical model for design and analysis of protectional eyewear.
Zelzer, B; Speck, A; Langenbucher, A; Eppig, T
2013-05-01
Protectional eyewear has to pass both mechanical and optical stress tests. To pass the optical tests, the surfaces of safety spectacles have to be optimized to minimize optical aberrations. Starting with the surface data of three measured safety spectacles, a theoretical spectacle model (four spherical surfaces) is first recalculated and then optimized while keeping the front surface unchanged. In addition to spherical power, astigmatic power, and prism imbalance, we used the wavefront error (five different viewing directions) to simulate the optical performance and to optimize the safety-spectacle geometries. All surfaces were spherical (maximum global 'peak-to-valley' deviation between the measured surface and the best-fit sphere: 0.132 mm). Except for the spherical power of the Axcont model (-0.07 m^-1), all simulated optical performance before optimization was better than the limits defined by standards. The optimization reduced the wavefront error by 1% to 0.150 λ (Windor/Infield), by 63% to 0.194 λ (Axcont/Bolle), and by 55% to 0.199 λ (2720/3M), without dropping below the measured thickness. The simulated optical performance of spectacle designs could be improved by smart optimization. A good optical design counteracts degradation caused by parameter variation throughout the manufacturing process. Copyright © 2013. Published by Elsevier GmbH.
Mitigating Photon Jitter in Optical PPM Communication
NASA Technical Reports Server (NTRS)
Moision, Bruce
2008-01-01
A theoretical analysis of photon-arrival jitter in an optical pulse-position-modulation (PPM) communication channel has been performed, and now constitutes the basis of a methodology for designing receivers that compensate for photon-arrival jitter so that the resulting errors are minimized or nearly minimized. Photon-arrival jitter is an uncertainty in the estimated time of arrival of a photon relative to the boundaries of a PPM time slot. It is attributable to two main causes: (1) receiver synchronization error (error in the receiver's partitioning of time into PPM slots) and (2) random delay between the arrival of a photon at a detector and the generation, by the detector circuitry, of a pulse in response to the photon. For channels with sufficiently long time slots, photon-arrival jitter is negligible. However, as the durations of PPM time slots are reduced in efforts to increase the throughputs of optical PPM communication channels, photon-arrival jitter becomes a significant source of error, leading to significant degradation of performance if not taken into account in design. For the purpose of the analysis, the receiver was assumed to operate in a photon-starved regime, in which photon counts follow a Poisson distribution. The analysis included derivation of exact equations for symbol likelihoods in the presence of photon-arrival jitter. These equations describe what is well known in the art as a matched filter for a channel containing Gaussian noise, and they would yield an optimum receiver if they could be implemented in practice. Because the exact equations may be too complex to implement in practice, approximations that would yield suboptimal receivers were also derived.
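A minimal sketch of the Poisson symbol-likelihood computation in the presence of jitter (Python; the jitter kernel, rates, and function name are illustrative assumptions, not the paper's exact derived equations):

```python
import numpy as np
from scipy.special import gammaln

def ppm_symbol_loglikes(counts, lam_s, lam_b, jitter_kernel):
    """Poisson log-likelihood of each candidate PPM symbol given per-slot
    photon counts; jitter smears a fraction of the signal energy into
    neighbouring slots according to jitter_kernel (assumed known)."""
    M = len(counts)
    half = len(jitter_kernel) // 2
    logL = np.empty(M)
    for m in range(M):                       # hypothesis: pulse in slot m
        mu = np.full(M, lam_b, dtype=float)  # background everywhere
        for d, frac in enumerate(jitter_kernel):
            j = m + d - half                 # kernel centred on slot m
            if 0 <= j < M:
                mu[j] += lam_s * frac
        logL[m] = np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))
    return logL

counts = np.array([0, 1, 3, 1, 0, 0, 0, 0])
ll = ppm_symbol_loglikes(counts, lam_s=4.0, lam_b=0.2,
                         jitter_kernel=[0.1, 0.8, 0.1])
print("most likely pulse slot:", int(np.argmax(ll)))
```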
León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa
2018-01-01
This work aimed to determine whether artificial neural networks (ANN) implementing backpropagation algorithms with default settings can generate better predictive models than multiple linear regression (MLR) analysis. The study was based on timolol-loaded liposomes. Causal factors were used as training data for the ANN and fed into the computer program. The number of training cycles was tuned to optimize the performance of the ANN, by minimizing the error between the predicted and real response values in the training step. The results showed that training was stopped at 10 000 training cycles with 80% of the pattern values, because at this point the ANN generalizes best. The minimum validation error was achieved with 12 hidden neurons in a single layer. The ANN showed great prediction ability, with errors between predicted and real values lower than 1% for some of the parameters evaluated. Its performance was therefore compared to that of the MLR using a factorial design. Optimal formulations were identified by minimizing the distance between measured and theoretical parameters and estimating the prediction errors. The results indicate that the ANN has much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of combining ANN with design of experiments, compared to conventional MLR modeling techniques.
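To make the ANN-versus-MLR comparison concrete, here is a minimal sketch using scikit-learn on synthetic data (the real study used liposome formulation data; nothing below reproduces its numbers):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 3))              # stand-in causal factors
y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) - X[:, 2] ** 2 \
    + rng.normal(0, 0.05, 100)              # nonlinear synthetic response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8,
                                          random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(12,),  # 12 hidden neurons, as in the study
                   max_iter=10_000, random_state=0).fit(X_tr, y_tr)

print("MLR test R^2:", round(mlr.score(X_te, y_te), 3))
print("ANN test R^2:", round(ann.score(X_te, y_te), 3))
```

On a response with nonlinear terms such as this one, the network typically achieves the higher test score, which is the pattern the study reports for its formulation data.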
Why GPS makes distances bigger than they are
Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried
2016-01-01
Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is, on average, bigger than the true distance between these points. This systematic 'overestimation of distance' becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar errors, which cancel out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian and car trajectories, find that the measurement error in the data was strongly spatially and temporally autocorrelated, and give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone: the systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
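A small simulation makes the bias and the role of autocorrelation tangible (Python; all parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, step, sigma = 100_000, 1.0, 0.5          # points, true step (m), GPS noise (m)

truth = np.column_stack([np.arange(n) * step, np.zeros(n)])
iid = truth + rng.normal(0, sigma, size=(n, 2))       # uncorrelated error, C ~ 0
seg = np.linalg.norm(np.diff(iid, axis=0), axis=1)
print("true step 1.0, mean measured:", seg.mean())     # systematically > 1.0

# Strongly autocorrelated error (consecutive fixes share almost the same
# error) largely cancels in the step lengths:
walk = np.cumsum(rng.normal(0, 0.005, size=(n, 2)), axis=0)
seg2 = np.linalg.norm(np.diff(truth + walk, axis=0), axis=1)
print("mean measured with autocorrelated error:", seg2.mean())  # ~ 1.0
```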
Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián
2016-08-01
S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends that work to situations where the assumption of homogeneity of the errors across groups is likely not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of the statistically processed indices. The main conclusion of the study is the accuracy of the proposed method for calculating sample size in the described situations under the stipulated power criteria.
A T-Type Capacitive Sensor Capable of Measuring 5-DOF Error Motions of Precision Spindles
Xiang, Kui; Qiu, Rongbo; Mei, Deqing; Chen, Zichen
2017-01-01
The precision spindle is a core component of high-precision machine tools, and accurate measurement of its error motions is important for improving its rotation accuracy as well as the work performance of the machine. This paper presents a T-type capacitive sensor (T-type CS) with an integrated structure. The proposed sensor can measure the 5-degree-of-freedom (5-DOF) error motions of a spindle in situ and simultaneously, by integrating electrode groups in the cylindrical bore of the stator and on the outer end face of its flange, respectively. Simulation analysis and experimental results show that the sensing electrode groups, in a differential measurement configuration, have near-linear output for the different types of rotor displacement. Moreover, the additional capacitance generated by fringe effects is reduced by about 90% when the sensing electrode groups are fabricated using flexible printed circuit board (FPCB) and related processing technologies. The improved signal-processing circuit also doubles the measurement performance and brings the measured differential output capacitance up to 93% of the theoretical values. PMID:28846631
NASA Technical Reports Server (NTRS)
Doggett, Leroy E.; Schaefer, Bradley E.
1994-01-01
We report the results of five Moonwatches, in which more than 2000 observers throughout North America attempted to sight the thin lunar crescent. For each Moonwatch we were able to determine the position of the Lunar Date Line (LDL), the line along which a normal observer has a 50% probability of spotting the Moon. The observational LDLs were then compared with predicted LDLs derived from crescent visibility prediction algorithms. We find that ancient and medieval rules are highly unreliable. More recent empirical criteria, based on the relative altitude and azimuth of the Moon at the time of sunset, have reasonable accuracy, with the best specific formulation being due to Yallop. The modern theoretical model by Schaefer (based on the physiology of the human eye and the local observing conditions) is found to have the least systematic error, the least average error, and the least maximum error of all models tested. Analysis of the observations also provided information about atmospheric, optical and human factors that affect the observations. We show that observational lunar calendars have a natural bias to begin early.
Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA
NASA Astrophysics Data System (ADS)
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2018-04-01
External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation for estimating a more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce reliable estimates. The aim of this preliminary work is to compare the performance of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF,error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF,error); (b) AP on the training set (APS,error); and (c) ET on the respective test set (ETS,error). A good PLS2-DA model is expected to produce APS,error and ETS,error values similar to APF,error. Bearing that in mind, the similarities between (a) APS,error vs. APF,error; (b) ETS,error vs. APF,error; and (c) APS,error vs. ETS,error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests) on series of PLS2-DA models computed from the KS-set and IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
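For readers unfamiliar with it, a minimal Kennard-Stone implementation looks like this (Python sketch on random stand-in data; the study applied it to ATR-FTIR spectra):

```python
import numpy as np

def kennard_stone(X, k):
    """Kennard-Stone selection: start from the two mutually most distant
    samples, then repeatedly add the candidate whose minimum distance to
    the already-selected set is largest (max-min criterion)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    chosen = [i, j]
    while len(chosen) < k:
        rest = [p for p in range(len(X)) if p not in chosen]
        chosen.append(max(rest, key=lambda p: d[p, chosen].min()))
    return np.array(chosen)

X = np.random.default_rng(0).normal(size=(20, 5))  # stand-in for spectra
train_idx = kennard_stone(X, k=14)                 # ~7:3 split of 20 samples
```

Because the selection is deterministic and spreads the training set over the data's extremes, repeated runs give no variability, which is one reason the iterative random alternative can yield more realistic error estimates.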
Quantum-Theoretical Methods and Studies Relating to Properties of Materials
1989-12-19
particularly sensitive to the behavior of the electron distribution close to the nuclei, which contributes only to E(l). Although the above results were...other condensed phases. So it was a useful test case to test the behavior of the theoretical computations for the gas phase relative to that in the...increasingly complicated and time-consuming electron-correlation approximations should assure a small error in the theoretically computed enthalpy for a
Stochastic Surface Mesh Reconstruction
NASA Astrophysics Data System (ADS)
Ozendi, M.; Akca, D.; Topan, H.
2018-05-01
A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.
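A sketch of the first step, assuming each point's error ellipsoid comes from a 3×3 coordinate covariance matrix via eigendecomposition (the covariance values and the scalar quality measure below are illustrative assumptions):

```python
import numpy as np

def error_ellipsoid(cov, scale=1.0):
    """Semi-axis lengths and axis directions of a point's error ellipsoid
    from its 3x3 coordinate covariance matrix (symmetric, PSD)."""
    eigval, eigvec = np.linalg.eigh(cov)
    return scale * np.sqrt(eigval), eigvec

cov = np.array([[4e-6, 1e-6, 0.0],     # assumed TLS point covariance (m^2)
                [1e-6, 2e-6, 0.0],
                [0.0,  0.0,  9e-6]])
axes, directions = error_ellipsoid(cov)
quality = np.linalg.norm(axes)         # one possible point-wise quality measure
```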
Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan
2015-07-22
Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before global PPP service comes into operation.
Gordon, Morris; Parakh, Dillan
2017-10-01
Errors in healthcare are a major patient safety issue, with incident reporting a key solution. An incident reporting system has been integrated within a new medical curriculum, encouraging medical students to take part in this key safety process. The aim of this study was to describe the system and assess how students perceived the reporting system with regard to its role in enhancing safety. Employing thematic analysis, this study used interviews with medical students at the end of their first year. Thematic indices were developed according to the information emerging from the data. Through open, axial and then selective stages of coding, an understanding of how the system was perceived was established. Analysis of the interviews identified five core themes: (1) aims of the incident reporting system; (2) internalized cognition of the system; (3) the impact of the reporting system; (4) threshold for reporting; (5) feedback on the system's operation. Selective analysis revealed three overriding findings: lack of error awareness and error wisdom as underpinned by key theoretical constructs, student support for the principle of safety, and perceptions of a blame culture. Students did not interpret reporting as a means to support institutional learning and safety; rather, many perceived it as a tool of a blame culture. The impact reporting had on students was unexpected and may give insight into how other undergraduates and early graduates interpret such a system. Future studies should aim to produce interventions that can support a reporting culture.
Shappell, Scott; Detwiler, Cristy; Holcomb, Kali; Hackworth, Carla; Boquet, Albert; Wiegmann, Douglas A
2007-04-01
The aim of this study was to extend previous examinations of aviation accidents to include specific aircrew, environmental, supervisory, and organizational factors associated with two types of commercial aviation (air carrier and commuter/on-demand) accidents using the Human Factors Analysis and Classification System (HFACS). HFACS is a theoretically based tool for investigating and analyzing human error associated with accidents and incidents. Previous research has shown that HFACS can be reliably used to identify human factors trends associated with military and general aviation accidents. Using data obtained from both the National Transportation Safety Board and the Federal Aviation Administration, 6 pilot-raters classified aircrew, supervisory, organizational, and environmental causal factors associated with 1020 commercial aviation accidents that occurred over a 13-year period. The majority of accident causal factors were attributed to aircrew and the environment, with decidedly fewer associated with supervisory and organizational causes. Comparisons were made between HFACS causal categories and traditional situational variables such as visual conditions, injury severity, and regional differences. These data will provide support for the continuation, modification, and/or development of interventions aimed at commercial aviation safety. HFACS provides a tool for assessing human factors associated with accidents and incidents.
Intelligent control system for continuous technological process of alkylation
NASA Astrophysics Data System (ADS)
Gebel, E. S.; Hakimov, R. A.
2018-01-01
The relevance of intelligent control for complex dynamic objects and processes is shown in this paper. A model of a virtual analyzer based on a neural network is proposed. Comparative analysis of mathematical models implemented in MATLAB showed that the most effective model, from the point of view of reproducibility of results, is the one with seven neurons in the hidden layer, trained using the scaled conjugate gradient method. Comparison of the laboratory analysis data with the theoretical model showed that the root-mean-square error does not exceed 3.5, and that the calculated value of the correlation coefficient corresponds to a "strong" relationship between the values.
The deuteron-radius puzzle is alive: A new analysis of nuclear structure uncertainties
NASA Astrophysics Data System (ADS)
Hernandez, O. J.; Ekström, A.; Nevo Dinur, N.; Ji, C.; Bacca, S.; Barnea, N.
2018-03-01
To shed light on the deuteron radius puzzle we analyze the theoretical uncertainties of the nuclear structure corrections to the Lamb shift in muonic deuterium. We find that the discrepancy between the calculated two-photon exchange correction and the corresponding value inferred experimentally by Pohl et al. [1] remains. The present result is consistent with our previous estimate, although the discrepancy is reduced from 2.6 σ to about 2 σ. The error analysis includes statistical as well as systematic uncertainties stemming from the use of nucleon-nucleon interactions derived from chiral effective field theory at various orders. We therefore conclude that nuclear theory uncertainty is unlikely to be the source of the discrepancy.
2013-01-01
A new approach, the projective system approach, is proposed to realize modified projective synchronization between two different chaotic systems. By simple analysis of trajectories in the phase space, a projective system of the original chaotic systems is obtained to replace the error system in judging the occurrence of modified projective synchronization. Theoretical analysis and numerical simulations show that, although the projective system may not be unique, modified projective synchronization can be achieved provided that the origin of any of the projective systems is asymptotically stable. Furthermore, an example is presented to illustrate that even a necessary and sufficient condition for modified projective synchronization can be derived by using the projective system approach. PMID:24187522
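In the usual formulation (assumed here, since the abstract does not spell it out), modified projective synchronization of a drive system x and a response system y with control u is defined through the error system

```latex
\dot{x} = f(x), \qquad \dot{y} = g(y) + u, \qquad
e_{i} = y_{i} - \lambda_{i} x_{i}, \quad i = 1, \dots, n,
```

with synchronization achieved when e(t) → 0 as t → ∞ for the prescribed scaling factors λ_i.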
Krueger, Joachim I; Funder, David C
2004-06-01
Mainstream social psychology focuses on how people characteristically violate norms of action through social misbehaviors such as conformity with false majority judgments, destructive obedience, and failures to help those in need. Likewise, they are seen to violate norms of reasoning through cognitive errors such as misuse of social information, self-enhancement, and an over-readiness to attribute dispositional characteristics. The causes of this negative research emphasis include the apparent informativeness of norm violation, the status of good behavior and judgment as unconfirmable null hypotheses, and the allure of counter-intuitive findings. The shortcomings of this orientation include frequently erroneous imputations of error, findings of mutually contradictory errors, incoherent interpretations of error, an inability to explain the sources of behavioral or cognitive achievement, and the inhibition of generalized theory. Possible remedies include increased attention to the complete range of behavior and judgmental accomplishment, analytic reforms emphasizing effect sizes and Bayesian inference, and a theoretical paradigm able to account for both the sources of accomplishment and of error. A more balanced social psychology would yield not only a more positive view of human nature, but also an improved understanding of the bases of good behavior and accurate judgment, coherent explanations of occasional lapses, and theoretically grounded suggestions for improvement.
NASA Astrophysics Data System (ADS)
Kadaj, Roman
2016-12-01
The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. Network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. Our analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example to the vector of geodesic parameters. The problem is developed theoretically and tested numerically. As an example, the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS observations.
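A minimal sketch of the functional idea, expressing a modelled GNSS Cartesian vector directly through the geodetic coordinates of its endpoints (Python; the WGS84 constants are standard, the station coordinates are invented, and the linearization used in the actual adjustment is omitted):

```python
import numpy as np

def geodetic_to_ecef(lat, lon, h, a=6378137.0, f=1 / 298.257223563):
    """Geodetic coordinates (rad, rad, m) to Cartesian ECEF (WGS84)."""
    e2 = f * (2 - f)
    N = a / np.sqrt(1 - e2 * np.sin(lat) ** 2)   # prime-vertical radius
    return np.array([(N + h) * np.cos(lat) * np.cos(lon),
                     (N + h) * np.cos(lat) * np.sin(lon),
                     (N * (1 - e2) + h) * np.sin(lat)])

# The modelled GNSS vector for stations A and B is the difference of their
# Cartesian positions, written as a function of geodetic coordinates:
pA = geodetic_to_ecef(np.radians(50.0), np.radians(21.0), 210.0)
pB = geodetic_to_ecef(np.radians(50.1), np.radians(21.1), 230.0)
v_model = pB - pA   # compared against the observed Cartesian GNSS vector
```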
Haldar, Justin P; Leahy, Richard M
2013-05-01
This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. Copyright © 2013 Elsevier Inc. All rights reserved.
Seward, Kirsty; Wolfenden, Luke; Wiggers, John; Finch, Meghan; Wyse, Rebecca; Oldmeadow, Christopher; Presseau, Justin; Clinton-McHarg, Tara; Yoong, Sze Lin
2017-04-04
While there are a number of frameworks that focus on supporting the implementation of evidence-based approaches, few psychometrically valid measures exist to assess the constructs within these frameworks. This study aimed to develop and psychometrically assess a scale measuring each domain of the Theoretical Domains Framework for use in assessing the implementation of dietary guidelines within a non-health-care setting (childcare services). A 75-item, 14-domain Theoretical Domains Framework Questionnaire (TDFQ) was developed and administered via telephone interview to 202 centre-based childcare service cooks who had a role in planning the service menu. Confirmatory factor analysis (CFA) was undertaken to assess the reliability, discriminant validity and goodness of fit of the 14-domain Theoretical Domains Framework measure. For the CFA, five iterative processes of adjustment were undertaken in which 14 items were removed, resulting in a final measure consisting of 14 domains and 61 items. For the final measure, the chi-square goodness-of-fit statistic was 3447.19; the Standardized Root Mean Square Residual (SRMR) was 0.070; the Root Mean Square Error of Approximation (RMSEA) was 0.072; and the Comparative Fit Index (CFI) had a value of 0.78. While only one of the three indices supports goodness of fit of the measurement model tested, the 14-domain model with 61 items showed good discriminant validity and internally consistent items. Future research should aim to assess the psychometric properties of the developed TDFQ in other community-based settings.
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1984-01-01
This report describes a computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10000 GHz (i.e., wavelengths longer than 30 micrometers). The catalogue can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue has been constructed using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (151 species) as new data appear. The catalogue is available from the authors as a magnetic tape recorded in card images and as a set of microfiche records.
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1981-01-01
A computer accessible catalogue of submillimeter, millimeter and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 micrometers) is presented, which can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (133 species) as new data appear. The catalogue is available as a magnetic tape recorded in card images and as a set of microfiche records.
Validation of MODIS Aerosol Optical Depth Retrieval Over Land
NASA Technical Reports Server (NTRS)
Chu, D. A.; Kaufman, Y. J.; Ichoku, C.; Remer, L. A.; Tanre, D.; Holben, B. N.; Einaudi, Franco (Technical Monitor)
2001-01-01
Aerosol optical depths are derived operationally for the first time over land in the visible wavelengths by MODIS (Moderate Resolution Imaging Spectroradiometer) onboard the EOS Terra spacecraft. More than 300 Sun photometer data points from more than 30 AERONET (Aerosol Robotic Network) sites globally were used in validating the aerosol optical depths obtained during July - September 2000. Excellent agreement is found, with retrieval errors within Δτ = ±0.05 ± 0.20τ, as predicted, over (partially) vegetated surfaces, consistent with pre-launch theoretical analysis and aircraft field experiments. In coastal and semi-arid regions, larger errors are caused predominantly by the uncertainty in evaluating the surface reflectance. The excellent fit was achieved despite the ongoing improvements in instrument characterization and calibration. These results show that MODIS-derived aerosol optical depths can be used quantitatively in many applications, with caution regarding residual cloud, snow/ice, and water contamination.
The Higgs transverse momentum distribution at NNLL and its theoretical errors
Neill, Duff; Rothstein, Ira Z.; Vaidya, Varun
2015-12-15
In this letter, we present the NNLL-NNLO transverse momentum Higgs distribution arising from gluon fusion. In the regime p⊥ ≪ m_h we include the resummation of the large logarithms at next-to-next-to-leading order and then match onto the α_s^2 fixed-order result near p⊥ ~ m_h. By utilizing the rapidity renormalization group (RRG) we are able to match smoothly between the resummed small-p⊥ regime and the fixed-order regime. We give a detailed discussion of the scale dependence of the result, including an analysis of the rapidity scale dependence. Our central value differs from previous results, in the transition region as well as the tail, by an amount that is outside the error band. This difference is due to the fact that the RRG profile allows us to smoothly turn off the resummation.
NASA Astrophysics Data System (ADS)
Muguet, Francis F.; Robinson, G. Wilse; Bassez-Muguet, M. Palmyre
1995-03-01
With the help of a new scheme to correct for the basis set superposition error (BSSE), we find that an eclipsed nonlinear geometry becomes energetically favored over the eclipsed linear hydrogen-bonded geometry. From a normal mode analysis of the potential energy surface (PES) in the vicinity of the nonlinear geometry, we suggest that several dynamical interchange pathways must be taken into account. The minimal molecular symmetry group to be considered should be the double group of G36, but still larger multiple groups may be required. An interpretation of experimental vibration-rotation-tunneling (VRT) data in terms of the G144 group, which implies monomer inversions, may not be the only alternative. It appears that group theoretical considerations alone are insufficient for understanding the complex VRT dynamics of the ammonia dimer.
ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve
Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk
2014-01-01
In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
Shabbir, Javid
2018-01-01
In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
Conical Probe Calibration and Wind Tunnel Data Analysis of the Channeled Centerbody Inlet Experiment
NASA Technical Reports Server (NTRS)
Truong, Samson Siu
2011-01-01
For a multi-hole test probe undergoing wind tunnel tests, the resulting data need to be analyzed for significant trends, which include relating the pressure distributions, the geometric orientation, and the local velocity vector to one another. However, experimental runs always involve some error; in this case, the misalignment bias angles resulting from the distortion associated with the angularity of the test probe or the local velocity vector. A calibration procedure is therefore required to compensate for it. Through the series of calibration steps presented here, the angular biases are determined and removed from the data sets. Removing the misalignment yields smoother pressure distributions and thus more accurate experimental results, which can then be compared to theoretical and actual in-flight results to identify any similarities. Error analyses are also performed to verify the accuracy of the calibration error reduction. The resulting calibrated data will be implemented in an in-flight RTF script that will output critical flight parameters during future CCIE experimental test runs. All of these tasks contribute to the Small Business Innovation Research Channeled Centerbody Inlet Experiment on NASA Dryden Flight Research Center's F-15B Research Testbed.
Momentum distributions for 2H(e,e'p)
Ford, William P.; Jeschonnek, Sabine; Van Orden, J. W.
2014-12-29
[Background] A primary goal of deuteron electrodisintegration is the possibility of extracting the deuteron momentum distribution. This extraction is inherently fraught with difficulty, as the momentum distribution is not an observable and the extraction relies on theoretical models that depend on other models as input. [Purpose] We present a new method for extracting the momentum distribution which takes into account a wide variety of model inputs, thus providing a theoretical uncertainty due to the various model constituents. [Method] The calculations presented here use a Bethe-Salpeter-like formalism with a wide variety of bound state wave functions, form factors, and final state interactions. We present a method to extract the momentum distributions from experimental cross sections which takes into account the theoretical uncertainty from the various model constituents entering the calculation. [Results] To test the extraction, pseudo-data were generated, and the extracted "experimental" distribution, which carries theoretical uncertainty from the various model inputs, was compared with the theoretical distribution used to generate the pseudo-data. [Conclusions] In the examples we compared, the original distribution was typically within the error band of the extracted distribution. The input wave functions do contain some outliers, which are discussed in the text, but at the least this process can provide an upper bound on the deuteron momentum distribution. Because of the reliance on theoretical calculation to obtain this quantity, any extraction method should account for the theoretical error inherent in these calculations due to model inputs.
A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process
NASA Technical Reports Server (NTRS)
Wang, Yi; Tamai, Tetsuo
2009-01-01
Since the complexity of software systems continues to grow, most engineers face two serious problems: the state space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game style model checking that generates counter-examples can guide refinement or identify validated formulas, which solves the system debugging problem. Furthermore, output of our game style method will give significant information to engineers in detecting where errors have occurred and what the causes of the errors are.
Increasing the statistical significance of entanglement detection in experiments.
Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei
2010-05-28
Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.
Positioning performance analysis of the time sum of arrival algorithm with error features
NASA Astrophysics Data System (ADS)
Gong, Feng-xun; Ma, Yan-qiu
2018-03-01
The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high; however, some problems arise in practical applications. Here we analyze the location performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed and used to present the distribution of the location ambiguity region with 4 base stations. The location performance analysis then starts from the 4-base-station case, calculating the variation of the RMSE and GDOP. Subsequently, as the location parameters are changed, in the number of base stations, the base station layout and so on, the corresponding performance patterns of the TSOA location algorithm are shown, revealing the TSOA location characteristics and performance. The trends of the RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm, which can be used to reduce the blind zone and the false location rate of MLAT systems.
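For reference, a compact GDOP computation from station geometry (Python sketch; the station layout and target are invented, and the appended column of ones models a common offset, clock-like unknown):

```python
import numpy as np

def gdop(stations, target):
    """GDOP from unit line-of-sight vectors between target and stations."""
    los = stations - target
    u = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([u, np.ones((len(u), 1))])   # geometry matrix
    return np.sqrt(np.trace(np.linalg.inv(G.T @ G)))

stations = np.array([[0.0, 0.0, 0.0], [40e3, 0.0, 0.0],
                     [0.0, 40e3, 0.0], [20e3, 20e3, 5e3]])
print(gdop(stations, np.array([20e3, 20e3, 10e3])))
```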
Hozo, Iztok; Schell, Michael J; Djulbegovic, Benjamin
2008-07-01
The absolute truth in research is unobtainable, as no evidence or research hypothesis is ever 100% conclusive. Therefore, all data and inferences can in principle be considered as "inconclusive." Scientific inference and decision-making need to take into account errors, which are unavoidable in the research enterprise. The errors can occur at the level of conclusions that aim to discern the truthfulness of research hypothesis based on the accuracy of research evidence and hypothesis, and decisions, the goal of which is to enable optimal decision-making under present and specific circumstances. To optimize the chance of both correct conclusions and correct decisions, the synthesis of all major statistical approaches to clinical research is needed. The integration of these approaches (frequentist, Bayesian, and decision-analytic) can be accomplished through formal risk:benefit (R:B) analysis. This chapter illustrates the rational choice of a research hypothesis using R:B analysis based on decision-theoretic expected utility theory framework and the concept of "acceptable regret" to calculate the threshold probability of the "truth" above which the benefit of accepting a research hypothesis outweighs its risks.
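Under a plain expected-utility reading (a sketch of the framework, not the chapter's exact derivation), accepting a hypothesis that is true with probability p is optimal when pB > (1-p)R, where B is the benefit of correctly accepting and R the harm of wrongly accepting, giving the threshold probability

```latex
p^{*} = \frac{R}{R + B},
```

above which the benefit of accepting the research hypothesis outweighs its risks; the acceptable-regret concept then refines where this threshold is set.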
The failure analysis and lifetime prediction for the solder joint of the magnetic head
NASA Astrophysics Data System (ADS)
Xiao, Xianghui; Peng, Minfang; Cardoso, Jaime S.; Tang, Rongjun; Zhou, YingLiang
2015-02-01
Micro-solder joint (MSJ) lifetime prediction and failure analysis (FA) assess reliability through a fatigue model combined with theoretical calculation, numerical simulation and experiment. Because the solder joints spend only a short time at high temperature, high-frequency sampling errors that are not allowed in production, including round-off error, may exist in the various models. Combining intermetallic compound (IMC) growth theory with FA technology for the magnetic head in actual production, this paper puts forward a new growth model to predict the life expectancy of magnetic-head solder joints. The impact of the IMC, generated by the interface reaction between the slider (the magnetic head is usually called the slider) and the bonding pad, on mechanical performance during the aging process is also analyzed. Through further research on the FA of solder ball bonding, the AuSn4 growth model, which has the least effect on the mechanical properties of the solder joint, is chosen to show that the IMC methodology is suitable for forecasting solder lifetime. The diffusion constant under the 60 °C working condition is 0.015354, and the predicted solder lifetime t is 14.46 years.
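A commonly used diffusion-controlled description of IMC layer growth (a hedged sketch; the paper's AuSn4 model may differ in detail) is the parabolic law

```latex
x(t) = x_{0} + \sqrt{D\,t}, \qquad D = D_{0}\,\exp\!\left(-\frac{Q}{RT}\right),
```

where x is the IMC thickness and D a temperature-dependent growth constant; with the quoted D at 60 °C, the lifetime is the time for x to reach a critical thickness.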
Modeling of Rolling Element Bearing Mechanics: Computer Program Updates
NASA Technical Reports Server (NTRS)
Ryan, S. G.
1997-01-01
The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.
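The difference between nested sequential convergence and simultaneous iteration is essentially Newton's method applied to the full residual vector at once. A toy sketch (Python; the residuals merely stand in for the actual bearing equilibrium equations):

```python
import numpy as np

def newton_all_at_once(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Iterate all unknowns simultaneously: solve F(x) = 0 with a
    finite-difference Jacobian, so each update uses the effect of every
    variable on every residual (unlike nested sequential loops)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(f), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - f) / h
        x = x - np.linalg.solve(J, f)
    return x

# Toy coupled residuals standing in for bearing equilibrium equations:
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(newton_all_at_once(F, [1.0, 1.0]))   # converges to (1, 2)
```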
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem, for which model error is large, to test the robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2015-04-01
An analysis of the foundations of the theory of negative numbers is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. The statement of the problem is as follows. As is known, point O in the Cartesian coordinate system XOY determines the position of zero on the scale. The number "zero" belongs to both the scale of positive numbers and the scale of negative numbers. In this case, the following formal-logical contradiction arises: the number 0 is both a positive number and a negative number; or, equivalently, the number 0 is neither a positive number nor a negative number, i.e. the number 0 has no sign. Then the following question arises: Do negative numbers exist in science and practice? A detailed analysis of the problem shows that negative numbers do not exist, because the foundations of the theory of negative numbers contradict the formal-logical laws. It is proved that: (a) all numbers have no signs; (b) the concepts "negative number" and "negative sign of number" represent a formal-logical error; (c) the signs "plus" and "minus" are only symbols of mathematical operations. These logical errors determine the essence of the theory of negative numbers: the theory of negative numbers is a false theory.
Simulation of water-table aquifers using specified saturated thickness
Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.
2014-01-01
Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.
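As a toy illustration of the approximation (not taken from the article), the sketch below compares the exact one-dimensional Dupuit water-table profile between two fixed heads with the linear profile obtained when a saturated thickness is specified a priori; all hydraulic values are invented.

```python
import numpy as np

K, L = 10.0, 1000.0                 # hydraulic conductivity (m/d), domain length (m)
h1, h2 = 50.0, 40.0                 # fixed boundary heads (m)
x = np.linspace(0.0, L, 101)

# exact unconfined (Dupuit) solution: h**2 varies linearly with x
h_exact = np.sqrt(h1**2 - (h1**2 - h2**2) * x / L)

# specified-thickness ("confined") approximation: head varies linearly with x
b = 0.5 * (h1 + h2)                 # specified saturated thickness (m)
h_spec = h1 - (h1 - h2) * x / L

err = np.max(np.abs(h_spec - h_exact))
print(f"max head error {err:.3f} m = {100 * err / b:.2f}% of specified thickness")

# choosing b as the mean thickness even reproduces the exact Dupuit discharge
q_exact = K * (h1**2 - h2**2) / (2 * L)   # per unit width, m^2/d
q_spec = K * b * (h1 - h2) / L
print(f"discharge: exact {q_exact:.3f}, specified-thickness {q_spec:.3f} m^2/d")
```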
NASA Astrophysics Data System (ADS)
Zhao, Chen-Guang; Tan, Jiu-Bin; Liu, Tao
2010-09-01
The mechanism by which a non-polarizing beam splitter (NPBS) with asymmetrical transfer coefficients causes rotation of the polarization direction is explained in principle, and the nonlinear measurement error caused by the NPBS is analyzed based on Jones matrix theory. Theoretical calculations show that the nonlinear error changes periodically, and that the error period and peak values increase with the deviation between the transmissivities of the p-polarization and s-polarization states. When the transmissivity of p-polarization is 53% and that of s-polarization is 48%, the maximum error reaches 2.7 nm. The imperfection of the NPBS is one of the main error sources in a simultaneous phase-shifting polarization interferometer, and its influence cannot be neglected in nanoscale ultra-precision measurement.
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
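The arithmetic of the estimate can be sketched as follows; the sequence-decay model A·p^m + B and the ratio formula are standard in randomized benchmarking, but the survival probabilities below are synthetic stand-ins, not the paper's data, and the rigorous bounds quoted in the abstract are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    # randomized-benchmarking sequence decay: survival = A * p**m + B
    return A * p**m + B

d = 2                                   # Hilbert-space dimension for one qubit
m = np.arange(1, 100, 5).astype(float)
rng = np.random.default_rng(1)
y_ref = decay(m, 0.5, 0.995, 0.5) + rng.normal(0, 0.003, m.size)  # reference RB
y_int = decay(m, 0.5, 0.989, 0.5) + rng.normal(0, 0.003, m.size)  # interleaved RB

p_ref = curve_fit(decay, m, y_ref, p0=(0.5, 0.99, 0.5))[0][1]
p_int = curve_fit(decay, m, y_int, p0=(0.5, 0.99, 0.5))[0][1]

# point estimate of the average error of the interleaved gate
r_gate = (d - 1) * (1 - p_int / p_ref) / d
print(f"estimated average gate error: {r_gate:.4f}")
```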
NASA Technical Reports Server (NTRS)
Schafer, Louis J; Stepka, Francis S; Brown, W Byron
1953-01-01
An analysis was made to permit the calculation of the effectiveness of oxide coatings in retarding the transient heat flow into turbine blades when the combustion gas temperature of a turbojet engine is suddenly changed. The analysis is checked with experimental data obtained from a turbojet engine whose blades were coated with two different coating materials (silicon dioxide and boric oxide) by adding silicone oil and tributyl borate to the engine fuel. The very thin coatings (approximately 0.001 in.) that formed on the blades produced a negligible effect on the turbine-blade transient temperature response. With the analysis discussed here, it was possible to predict the turbine rotor-blade temperature response with a maximum error of 40 °F.
Model and particle-in-cell simulation of ion energy distribution in collisionless sheath
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Zhuwen, E-mail: zzwwdxy@gznc.edu.cn; Key Laboratory of Photoelectron Materials Design and Simulation in Guizhou Province, Guiyang 550018; Scientific Research Innovation Team in Plasma and Functional Thin Film Materials in Guizhou Province, Guiyang 550018
2015-06-15
In this paper, we propose a self-consistent theoretical model for the ion energy distributions (IEDs) in collisionless sheaths, and the analytical results for different combined dc/radio-frequency (rf) capacitively coupled plasma discharge cases, including an analysis of sheath voltage errors, are compared with the results of numerical simulations using a one-dimensional plane-parallel particle-in-cell (PIC) code. The IEDs in collisionless sheaths are obtained for discharges driven by combined dc/rf voltage sources on the electrodes, using argon as the process gas. The incident ions on the grounded electrode are separated according to their radio frequencies and the dc voltages on a separate electrode; the IEDs, the widths of the energy distributions in the sheath, and the plasma sheath thickness are discussed. The IEDs, the IED widths, and the sheath voltages given by the theoretical model are investigated and show good agreement with the PIC simulations.
Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation
NASA Astrophysics Data System (ADS)
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan
2018-01-01
It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon the speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, with a relative difference that can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the proposed theoretical model remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.
Hybrid density-functional calculations of phonons in LaCoO3
NASA Astrophysics Data System (ADS)
Gryaznov, Denis; Evarestov, Robert A.; Maier, Joachim
2010-12-01
Phonon frequencies at the Γ point in the nonmagnetic rhombohedral phase of LaCoO3 were calculated using density-functional theory with the hybrid exchange-correlation functional PBE0. The calculations involved a comparison of results for two types of basis functions commonly used in ab initio calculations, namely, the plane-wave approach and the linear combination of atomic orbitals, as implemented in the VASP and CRYSTAL computer codes, respectively. Good qualitative agreement, and quantitative agreement within an error margin of less than 30%, was observed not only between the two formalisms but also between theoretical and experimental phonon frequencies. Moreover, the correlation between the phonon symmetries in the cubic and rhombohedral phases is discussed in detail on the basis of group-theoretical analysis. It is concluded that the hybrid PBE0 functional is able to predict correctly the phonon properties of LaCoO3.
Coherent beam combination of fiber lasers with a strongly confined waveguide: numerical model.
Tao, Rumao; Si, Lei; Ma, Yanxing; Zhou, Pu; Liu, Zejin
2012-08-20
Self-imaging properties of fiber lasers in a strongly confined waveguide (SCW) and their application in coherent beam combination (CBC) are studied theoretically. Analytical formulas are derived for the positions, amplitudes, and phases of the N images at the end of an SCW, which is important for quantitative analysis of waveguide CBC. The formulas are verified with experimental results and numerical simulation using a finite difference beam propagation method (BPM). The error of our analytical formulas is less than 6%, which can be reduced to less than 1.5% when the Goos-Hänchen penetration depth is considered. Based on the theoretical model and BPM, we studied the combination of two laser beams based on an SCW. The effects of the waveguide refractive index and Gaussian beam waist are studied. We also simulated the CBC of nine and 16 fiber lasers, and a single beam without side lobes was achieved.
A novel vibration structure for dynamic balancing measurement
NASA Astrophysics Data System (ADS)
Qin, Peng; Cai, Ping; Hu, Qinghan; Li, Yingxia
2006-11-01
Based on the concept of the instantaneous center of motion in theoretical mechanics, this paper presents a novel virtual vibration structure for high-precision dynamic balancing measurement. The structural features and the unbalance response characteristics of this vibration structure are analyzed in depth, and the relation between the real measuring system and the virtual one is expounded. Theoretical analysis indicates that the flexibly hinged integrated plate-spring sets hold a fixed vibration center, so that this vibration system achieves excellent plane separation. In addition, the sensors are mounted on the same longitudinal section, which eliminates the influence of phase error on the primary unbalance reduction ratio. Furthermore, performance changes in the sensors caused by environmental factors have less influence on the accuracy of the measurement. The result is a more accurate measurement with a reduced need for a second correction run.
Globular Cluster Abundances from High-Resolution Integrated-Light Spectra. I. 47 Tuc
NASA Astrophysics Data System (ADS)
McWilliam, Andrew; Bernstein, Rebecca A.
2008-09-01
We describe the detailed chemical abundance analysis of a high-resolution (R ~ 35,000), integrated-light (IL) spectrum of the core of the Galactic globular cluster 47 Tuc, obtained using the du Pont echelle at Las Campanas. We develop an abundance analysis strategy that can be applied to spatially unresolved extragalactic clusters. We have computed abundances for Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Y, Zr, Ba, La, Nd, and Eu. For an analysis with the known color-magnitude diagram (CMD) for 47 Tuc we obtain a mean [Fe/H] value of -0.75 +/- 0.026 +/- 0.045 dex (random and systematic error), in good agreement with the mean of five recent high-resolution abundance studies, at -0.70 dex. Typical random errors on our mean [X/Fe] ratios are 0.07-0.10 dex, similar to studies of individual stars in 47 Tuc. Na and Al appear enhanced, perhaps due to proton burning in the most luminous cluster stars. Our IL abundance analysis with an unknown CMD employed theoretical Teramo isochrones; however, we apply zero-point abundance corrections to account for the factor of 3 underprediction of stars at the AGB bump luminosity. While line diagnostics alone provide only mild constraints on the cluster age (ruling out ages younger than ~2 Gyr), when theoretical IL B - V colors are combined with the metallicity derived from the Fe I lines, the age is constrained to 10-15 Gyr and we obtain [Fe/H] = -0.70 +/- 0.021 +/- 0.052 dex. We find that Fe I line diagnostics may also be used to constrain the horizontal-branch morphology of an unresolved cluster. Lastly, our spectrum synthesis of 5.4 million TiO lines indicates that the 7300-7600 Å TiO window should be useful for estimating the effect of M giants on the IL abundances, and important for clusters more metal-rich than 47 Tuc.
Bosco, Francesca M; Angeleri, Romina; Sacco, Katiuscia; Bara, Bruno G
2015-01-01
The purpose of this study is to investigate the pragmatic abilities of individuals with traumatic brain injury (TBI). Several studies in the literature have previously reported communicative deficits in individuals with TBI; however, such research has focused principally on communicative deficits in general, without providing an analysis of the errors committed in understanding and expressing communicative acts. Within the theoretical framework of Cognitive Pragmatics theory and the Cooperative principle, we focused on intermediate communicative errors that occur in both the comprehension and the production of various pragmatic phenomena, expressed through both linguistic and extralinguistic communicative modalities. A group of 30 individuals with TBI and a matched control group took part in the experiment. They were presented with a series of videotaped vignettes depicting everyday communicative exchanges, and were tested on the comprehension and production of various kinds of communicative acts (standard communicative acts, deceit and irony). The participants' answers were evaluated as correct or incorrect. Incorrect answers were then further evaluated with regard to the presence of different intermediate errors. Individuals with TBI performed worse than control participants on all the tasks investigated when considering correct versus incorrect answers. Furthermore, a series of logistic regression analyses showed that group membership (TBI versus controls) significantly predicted the occurrence of intermediate errors. This result holds in both the comprehension and production tasks, and in both linguistic and extralinguistic modalities. Participants with TBI tend to have difficulty in managing different types of communicative acts, and they make more intermediate errors than the control participants. Intermediate errors concern the comprehension and production of the expression act, the comprehension of the actor's meaning, as well as respect of the Cooperative principle. © 2014 Royal College of Speech and Language Therapists.
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite the Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from an off-line database and computing gravity vectors directly from the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and the computing time delay. It is recommended for marine navigation that a gravity vector be calculated within 1 s and updated at least every 100 s. To meet this demand, the time needed to calculate the current gravity vector using EGM2008 was reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector were removed, and the Schuler oscillation was attenuated effectively. In rugged terrain, the horizontal position error could be reduced by at best 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs.
NASA Astrophysics Data System (ADS)
Phung, D.-H.; Samain, E.; Maurice, N.; Albanesse, D.; Mariey, H.; Aimar, M.; Martinot-Lagarde, G.; Artaud, G.; Issler, J.-L.; Vedrenne, N.; Velluet, M.-T.; Toyoshima, M.; Akioka, M.; Kolev, D.; Munemasa, Y.; Takenaka, H.; Iwakiri, N.
2016-03-01
In a collaboration between CNES, NICT and Geoazur, the first successful lasercom link between the micro-satellite SOCRATES and an OGS in Europe has been established. This paper presents results of a first analysis of the telecom and scintillation data for four successful links in June and July 2015 between the SOTA terminal and the MeO optical ground station (OGS) at Caussols, France. The telecom and scintillation data were recorded continuously during the passes using a detector developed at the laboratory. Irradiances of 190 nW/m2 and 430 nW/m2 were detected for the 1549 nm and 976 nm downlinks at 35° elevation. Spectra of the power fluctuation measured at the OGS are analyzed at different elevation angles and different telescope aperture diameters to determine the fluctuations caused by pointing error (due to satellite and OGS telescope vibrations) and by atmospheric turbulence. Downlink and uplink budgets are analyzed; the theoretical estimates match the measured power levels well. Telecom signal forms and bit error rates (BER) of the 1549 nm and 976 nm downlinks are also shown for different telescope aperture diameters. The BER is error-free with the full 1.5 m telescope aperture, and almost within the 'good channel' regime with a 0.4 m sub-aperture. We also show the comparison between the expected and measured BER distributions.
The mathematical origins of the kinetic compensation effect: 2. The effect of systematic errors.
Barrie, Patrick J
2012-01-07
The kinetic compensation effect states that there is a linear relationship between Arrhenius parameters ln A and E for a family of related processes. It is a widely observed phenomenon in many areas of science, notably heterogeneous catalysis. This paper explores mathematical, rather than physicochemical, explanations for the compensation effect in certain situations. Three different topics are covered theoretically and illustrated by examples. Firstly, the effect of systematic errors in experimental kinetic data is explored, and it is shown that these create apparent compensation effects. Secondly, analysis of kinetic data when the Arrhenius parameters depend on another parameter is examined. In the case of temperature programmed desorption (TPD) experiments when the activation energy depends on surface coverage, it is shown that a common analysis method induces a systematic error, causing an apparent compensation effect. Thirdly, the effect of analysing the temperature dependence of an overall rate of reaction, rather than a rate constant, is investigated. It is shown that this can create an apparent compensation effect, but only under some conditions. This result is illustrated by a case study for a unimolecular reaction on a catalyst surface. Overall, the work highlights the fact that, whenever a kinetic compensation effect is observed experimentally, the possibility of it having a mathematical origin should be carefully considered before any physicochemical conclusions are drawn.
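A minimal simulation of the first effect (our own construction, with invented numbers): a single true Arrhenius process analysed many times, each with a different systematic temperature offset, yields fitted (ln A, E) pairs that fall on a straight line, an apparent compensation effect of purely mathematical origin.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 8.314
T = np.linspace(500.0, 600.0, 8)          # nominal measurement temperatures, K
lnA_true, E_true = 20.0, 1.5e5            # one true process (invented values)
lnk = lnA_true - E_true / (R * T)         # rates generated at the true T

lnA_fit, E_fit = [], []
for _ in range(50):
    dT = rng.uniform(-10.0, 10.0)         # each data set's systematic T error
    # Arrhenius fit against the erroneous temperatures T + dT
    slope, intercept = np.polyfit(1.0 / (T + dT), lnk, 1)
    E_fit.append(-slope * R)
    lnA_fit.append(intercept)

# fitted pairs line up: ln A is approximately E/(R*T_mean) + const
print("corr(fitted ln A, fitted E) =", np.corrcoef(E_fit, lnA_fit)[0, 1])
```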
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Cess, Robert D.; Charlock, Thomas P.; Coakley, James A.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 1 provides both summarized and detailed overviews of the CERES Release 1 data analysis system. CERES will produce global shortwave and longwave radiative fluxes at the top of the atmosphere, at the surface, and within the atmosphere by using a combination of a large variety of measurements and models. The CERES processing system includes radiance observations from CERES scanning radiometers, cloud properties derived from coincident satellite imaging radiometers, temperature and humidity fields from meteorological analysis models, and high-temporal-resolution geostationary satellite radiances to account for unobserved times. CERES will provide a continuation of the ERBE record and the lowest-error climatology of consistent cloud properties and radiation fields. CERES will also substantially improve our knowledge of the Earth's surface radiation budget.
NASA Astrophysics Data System (ADS)
Vargas-Magaña, Mariana; Ho, Shirley; Cuesta, Antonio J.; O'Connell, Ross; Ross, Ashley J.; Eisenstein, Daniel J.; Percival, Will J.; Grieb, Jan Niklas; Sánchez, Ariel G.; Tinker, Jeremy L.; Tojeiro, Rita; Beutler, Florian; Chuang, Chia-Hsun; Kitaura, Francisco-Shu; Prada, Francisco; Rodríguez-Torres, Sergio A.; Rossi, Graziano; Seo, Hee-Jong; Brownstein, Joel R.; Olmstead, Matthew; Thomas, Daniel
2018-06-01
We investigate the potential sources of theoretical systematics in the anisotropic Baryon Acoustic Oscillation (BAO) distance scale measurements from the clustering of galaxies in configuration space using the final Data Release (DR12) of the Baryon Oscillation Spectroscopic Survey (BOSS). We perform a detailed study of the impact on BAO measurements of choices in the methodology such as fiducial cosmology, clustering estimators, random catalogues, fitting templates, and covariance matrices. The theoretical systematic uncertainties in BAO parameters are found to be 0.002 in the isotropic dilation α and 0.003 in the quadrupolar dilation ε. The leading source of systematic uncertainty is related to the reconstruction techniques. Theoretical uncertainties are sub-dominant compared with the statistical uncertainties for the BOSS survey, amounting to 0.2σstat for α and 0.25σstat for ε (σα,stat ≈ 0.010 and σε,stat ≈ 0.012, respectively). We also present BAO-only distance scale constraints from the anisotropic analysis of the correlation function. Our constraints on the angular diameter distance DA(z) and the Hubble parameter H(z), including both statistical and theoretical systematic uncertainties, are 1.5 per cent and 2.8 per cent at zeff = 0.38, 1.4 per cent and 2.4 per cent at zeff = 0.51, and 1.7 per cent and 2.6 per cent at zeff = 0.61. This paper is part of a set that analyses the final galaxy clustering data set from BOSS. The measurements and likelihoods presented here are cross-checked with other BAO analyses in Alam et al. The systematic error budget concerning the methodology of post-reconstruction BAO analysis presented here is used in Alam et al. to produce the final cosmological constraints from BOSS.
ERIC Educational Resources Information Center
Leighton, Jacqueline P.; Bustos Gómez, María Clara
2018-01-01
Formative assessments and feedback are vital to enhancing learning outcomes but require that learners feel at ease identifying their errors, and receiving feedback from a trusted source--teachers. An experimental test of a new theoretical framework was conducted to cultivate a pedagogical alliance to enhance students' (a) trust in the teacher, (b)…
Kienle, A; Patterson, M S
1997-09-01
We investigate theoretically the errors in determining the reduced scattering and absorption coefficients of semi-infinite turbid media from frequency-domain reflectance measurements made at small distances between the source and the detector(s). The errors are due to the uncertainties in the measurement of the phase, the modulation and the steady-state reflectance as well as to the diffusion approximation which is used as a theoretical model to describe light propagation in tissue. Configurations using one and two detectors are examined for the measurement of the phase and the modulation and for the measurement of the phase and the steady-state reflectance. Three solutions of the diffusion equation are investigated. We show that measurements of the phase and the steady-state reflectance at two different distances are best suited for the determination of the optical properties close to the source. For this arrangement the errors in the absorption coefficient due to typical uncertainties in the measurement are greater than those resulting from the application of the diffusion approximation at a modulation frequency of 200 MHz. A Monte Carlo approach is also examined; this avoids the errors due to the diffusion approximation.
Multiple imputation of missing fMRI data in whole brain analysis
Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.
2012-01-01
Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group-level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets, for both real and simulated data. Available case analysis, neighbor replacement, and regression-based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and the number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and the interpretation of results.
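A toy version of regression-based multiple imputation with Rubin's pooling rules, simplified to one target voxel predicted from one fully observed neighbour (our construction; the study's GLM setting and real data are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
neighbor = rng.normal(0, 1, n)                  # fully observed neighbour voxel
voxel = 0.8 * neighbor + rng.normal(0, 0.5, n)  # target voxel, correlated
miss = rng.random(n) < 0.25                     # subjects with missing data
obs = ~miss

b, a = np.polyfit(neighbor[obs], voxel[obs], 1) # regression on observed cases
resid_sd = np.std(voxel[obs] - (b * neighbor[obs] + a), ddof=2)

M = 20                                          # number of imputations
means, variances = [], []
for _ in range(M):
    filled = voxel.copy()
    # proper imputation: draw from the predictive distribution, noise included
    filled[miss] = b * neighbor[miss] + a + rng.normal(0, resid_sd, miss.sum())
    means.append(filled.mean())
    variances.append(filled.var(ddof=1) / n)    # within-imputation variance

# Rubin's rules: total variance = within + (1 + 1/M) * between
qbar = np.mean(means)
total_var = np.mean(variances) + (1 + 1 / M) * np.var(means, ddof=1)
print(f"pooled voxel mean {qbar:.3f}, total variance {total_var:.4f}")
```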
Skylab S-193 radar altimeter experiment analyses and results
NASA Technical Reports Server (NTRS)
Brown, G. S. (Editor)
1977-01-01
The design of optimum filtering procedures for geoid recovery is discussed. Statistical error bounds are obtained for pointing-angle estimates using average waveform data. A correlation of tracking-loop bandwidth with magnitude of pointing error is established. The impacts of ocean currents and precipitation on the received power are shown to be measurable effects. For large sea-state conditions, measurements of σ° indicate a distinct saturation level of about 8 dB. Near-nadir (less than 15°) values of σ° are also presented and compared with theoretical models. Examination of Great Salt Lake Desert scattering data leads to rejection of a previously hypothesized specularly reflecting surface. Pulse-to-pulse correlation results are in agreement with quasi-monochromatic optics theoretical predictions and indicate a means for estimating the direction of pointing error. Pulse compression techniques for, and results of, estimating significant waveheight from waveform data are presented and are shown to be in good agreement with surface truth data. A number of results pertaining to system performance are presented.
Love, Peter E D; Smith, Jim; Teo, Pauline
2018-05-01
Error management theory is drawn upon to examine how a project-based organization, which took the form of a program alliance, was able to change its established error prevention mindset to one that enacted a learning mindfulness that provided an avenue to curtail its action errors. The program alliance was required to unlearn its existing routines and beliefs to accommodate the practices required to embrace error management. As a result of establishing an error management culture the program alliance was able to create a collective mindfulness that nurtured learning and supported innovation. The findings provide a much-needed context to demonstrate the relevance of error management theory to effectively address rework and safety problems in construction projects. The robust theoretical underpinning that is grounded in practice and presented in this paper provides a mechanism to engender learning from errors, which can be utilized by construction organizations to improve the productivity and performance of their projects. Copyright © 2018 Elsevier Ltd. All rights reserved.
Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek
2016-11-01
The aims of this study were to quantify the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE) and to explore any differences between respondents. A cross-sectional survey of patient-facing doctors, nurses and pharmacists within three major hospitals of Abu Dhabi, the UAE. An online questionnaire was developed based on the Theoretical Domains Framework (TDF, a framework of behaviour change theories). Principal component analysis (PCA) was used to identify components and internal reliability determined. Ethical approval was obtained from a UK university and all hospital ethics committees. Two hundred and ninety-four responses were received. Questionnaire items clustered into six components of knowledge and skills, feedback and support, action and impact, motivation, effort and emotions. Respondents generally gave positive responses for knowledge and skills, feedback and support and action and impact components. Responses were more neutral for the motivation and effort components. In terms of emotions, the component with the most negative scores, there were significant differences in terms of years registered as health professional (those registered longest most positive, p = 0.002) and age (older most positive, p < 0.001) with no differences for gender and health profession. Emotional-related issues are the dominant barrier to reporting and are common to all professions. There is a need to develop, test and implement an intervention to impact health professionals' emotions. Such an intervention should focus on evidence-based behaviour change techniques of reducing negative emotions, focusing on emotional consequences and providing social support. • This research used the Theoretical Domains Framework to quantify the behavioural determinants of health professional reporting of medication errors. • Questionnaire items relating to emotions surrounding reporting generated the most negative responses with significant differences in terms of years registered as health professional (those registered longest most positive) and age (older most positive) with no differences for gender and health profession. • Interventions based on behaviour change techniques mapped to emotions should be prioritised for development.
Space charge enhanced plasma gradient effects on satellite electric field measurements
NASA Technical Reports Server (NTRS)
Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.
1991-01-01
It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. Space-charge-enhanced plasma gradient induced error (PGIE) is discussed in general terms; the results of a laboratory experiment designed to demonstrate this error are presented, and a simple expression that quantifies the error is derived. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure that the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors in space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady-state space-charge-enhanced PGIE incurred by two identical current-biased probes.
On adaptive modified projective synchronization of a supply chain management system
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed
2017-12-01
In this paper, the synchronization problem of a chaotic supply chain management system is studied. A novel adaptive modified projective synchronization method is introduced to control the behaviour of the leader supply chain system by a follower chaotic system and to adjust the leader system parameters until the measurable errors of the system parameters converge to zero. The stability evaluation and convergence analysis are carried out by the Lyapunov stability theorem. The proposed synchronization and antisynchronization techniques are studied for identical supply chain chaotic systems. Finally, some numerical simulations are presented to verify the effectiveness of the theoretical discussions.
Testing of models of VVH particle sources and propagation
NASA Technical Reports Server (NTRS)
Blanford, G. E., Jr.; Friedlander, M. W.; Hoppe, M.; Klarmann, J.; Walker, R. M.; Wefel, J. P.
1974-01-01
For comparisons between theoretical and observed charge spectra of VVH particles to be meaningful, at least two conditions must be met. First, charge resolution must be adequate to separate important groups of nuclei, and there should be no significant systematic errors in the charge scale developed. Second, there must be adequate rejection of slower particles of smaller Z, which have been observed in several flights. Within these conditions, it has been shown that observed features of the charge spectrum are not accidents of the analysis but reflect real variations in the relative abundances that must be explained by any successful model.
Pilot self-coding applied in optical OFDM systems
NASA Astrophysics Data System (ADS)
Li, Changping; Yi, Ying; Lee, Kyesan
2015-04-01
This paper studies a frequency offset correction technique that can be applied in optical OFDM systems. Through theoretical analysis and computer simulations, we observe that the proposed scheme, named pilot self-coding (PSC), is distinctly effective in rectifying frequency offset and thereby mitigates the OFDM performance deterioration caused by inter-carrier interference and common phase error. The main approach is to assign a pilot subcarrier before the data subcarriers and to copy this subcarrier sequence to the symmetric side. The simulation results verify that the proposed PSC is indeed effective against a high degree of frequency offset.
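A rough sketch of our reading of the pilot idea: a known pilot is placed ahead of the data subcarriers and mirrored on the symmetric side of the spectrum, and the common phase error is estimated from the average rotation of the pilot pair. The FFT size, pilot positions, noise level, and CPE value below are all invented.

```python
import numpy as np

rng = np.random.default_rng(10)
N = 64
X = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, N))  # QPSK data symbols
pilots = np.array([8, N - 8])                          # symmetric pilot pair
X[pilots] = 1.0 + 0.0j                                 # known pilot values

cpe_true = 0.2                                         # common phase error (rad)
noise = 0.02 * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = X * np.exp(1j * cpe_true) + noise                  # received subcarriers

cpe_hat = np.angle(np.mean(Y[pilots] / X[pilots]))     # average pilot rotation
Y_corrected = Y * np.exp(-1j * cpe_hat)                # de-rotate all subcarriers
print(f"estimated CPE: {cpe_hat:.3f} rad (true: {cpe_true} rad)")
```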
Numerical solution of the time fractional reaction-diffusion equation with a moving boundary
NASA Astrophysics Data System (ADS)
Zheng, Minling; Liu, Fawang; Liu, Qingxia; Burrage, Kevin; Simpson, Matthew J.
2017-06-01
A fractional reaction-diffusion model with a moving boundary is presented in this paper. An efficient numerical method is constructed to solve this moving boundary problem. Our method makes use of a finite difference approximation for the temporal discretization, and spectral approximation for the spatial discretization. The stability and convergence of the method is studied, and the errors of both the semi-discrete and fully-discrete schemes are derived. Numerical examples, motivated by problems from developmental biology, show a good agreement with the theoretical analysis and illustrate the efficiency of our method.
NASA Astrophysics Data System (ADS)
Obozov, A. A.; Serpik, I. N.; Mihalchenko, G. S.; Fedyaeva, G. A.
2017-01-01
In this article, the application of pattern recognition (a relatively young area of engineering cybernetics) to the analysis of complicated technical systems is examined. It is shown that a statistical approach could be the most effective for hard-to-distinguish situations. The recognition algorithms considered are based on the Bayes approach, which estimates the posterior probability of a certain event and the assumed error. Application of the statistical approach to pattern recognition is possible for solving the problem of technical diagnosis of complicated systems, and in particular of large high-powered marine diesel engines.
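The Bayes decision rule the article refers to can be stated in a few lines; the class priors, feature means, and spread below are invented diagnostic values for a two-class (normal versus faulty) example.

```python
import numpy as np
from scipy.stats import norm

p_ok, p_fault = 0.95, 0.05             # prior probabilities of the conditions
mu_ok, mu_fault, sd = 10.0, 14.0, 1.5  # Gaussian feature models (invented)

def posterior_fault(x):
    # Bayes rule: posterior proportional to prior times likelihood
    post_ok = p_ok * norm.pdf(x, mu_ok, sd)
    post_fault = p_fault * norm.pdf(x, mu_fault, sd)
    return post_fault / (post_ok + post_fault)

for x in (11.0, 13.0, 15.0):
    flag = "fault" if posterior_fault(x) > 0.5 else "ok"
    print(f"feature {x:5.1f}: P(fault | x) = {posterior_fault(x):.3f} -> {flag}")
```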
Method for a quantitative investigation of the frozen flow hypothesis
Schock; Spillar
2000-09-01
We present a technique to test the frozen flow hypothesis quantitatively, using data from wave-front sensors such as those found in adaptive optics systems. Detailed treatments of the theoretical background of the method and of the error analysis are presented. Analyzing data from the 1.5-m and 3.5-m telescopes at the Starfire Optical Range, we find that the frozen flow hypothesis is an accurate description of the temporal development of atmospheric turbulence on time scales of the order of 1-10 ms but that significant deviations from the frozen flow behavior are present for longer time scales.
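The essence of the test can be illustrated with a toy translation experiment (not the paper's estimator or its error analysis): a frozen random screen advected between two frames produces a cross-correlation peak at the wind displacement.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 64
white = rng.normal(size=(n, n))
kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
# low-pass filter the white noise to mimic a spatially correlated phase screen
screen = np.real(np.fft.ifft2(np.fft.fft2(white) / (1.0 + 64.0 * np.hypot(kx, ky))))

shift = 5                                       # "wind" displacement in pixels
frame2 = np.roll(screen, shift, axis=1)         # frozen screen blown sideways

# FFT-based cross-correlation; its peak recovers the displacement
xc = np.real(np.fft.ifft2(np.conj(np.fft.fft2(screen)) * np.fft.fft2(frame2)))
print("recovered displacement (row, col):", np.unravel_index(np.argmax(xc), xc.shape))
# expected output: (0, 5)
```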
[Innovative training for enhancing patient safety. Safety culture and integrated concepts].
Rall, M; Schaedle, B; Zieger, J; Naef, W; Weinlich, M
2002-11-01
Patient safety is determined by the performance safety of the medical team. Errors in medicine are amongst the leading causes of death of hospitalized patients. These numbers call for action. Backgrounds, methods and new forms of training are introduced in this article. Concepts from safety research are transferred to the field of emergency medical treatment, and strategies from realistic patient-simulator training sessions and innovative training concepts are discussed. The high numbers of errors in medicine are not due to a lack of medical knowledge, but to human factors and organisational circumstances. A first step towards improved patient safety is to accept this: we always need to be prepared that errors will occur. A next step would be to separate "error" from guilt (the culture of blame), allowing for a real analysis of accidents and the establishment of meaningful incident reporting systems. Concepts with a good success record from aviation, such as crew resource management (CRM) training, have been adapted by medicine and are ready to use. These concepts require theoretical education as well as practical training. Innovative team training sessions using realistic patient simulator systems with video recording (for self-reflection) and interactive debriefing following the sessions are very promising. As the need to reduce error rates in medicine is very high, and the reasons, methods and training concepts are known, we are urged to implement these new training concepts widely and consistently. To err is human - failing to counteract errors is not.
Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan
2017-01-01
In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field (B1+) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1+ map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1+ map propagated noise into the T1 maps at levels comparable to either of the two SPGR images. Improving the precision of the B1+ measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in the T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We conclude that for T1 mapping experiments the error propagated from the B1+ map must be considered: optimizing the SPGR signals while neglecting to improve the precision of the B1+ map may result in grossly overestimating the precision of the estimated T1 values.
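A compact Monte Carlo illustration of this error budget (sequence parameters, noise levels, and the B1+ uncertainty are invented; the paper's analytical propagation is not reproduced): T1 is fitted from two SPGR signals via the usual linearised variable-flip-angle relation, with the flip angles scaled by a noisy B1+ factor.

```python
import numpy as np

TR, T1_true, M0 = 0.02, 1.0, 1000.0             # s, s, arbitrary units
alpha_nom = np.deg2rad(np.array([4.0, 18.0]))   # nominal flip angles

def spgr(alpha, T1):
    # steady-state spoiled gradient echo signal
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

def fit_T1(S, alpha):
    # linearised VFA: S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1)
    E1 = np.polyfit(S / np.tan(alpha), S / np.sin(alpha), 1)[0]
    return -TR / np.log(E1)

rng = np.random.default_rng(4)
B1_true = 0.95                                  # relative transmit-field scale
S_true = spgr(B1_true * alpha_nom, T1_true)

T1_est = []
for _ in range(5000):
    S = S_true + rng.normal(0.0, 2.0, 2)        # noise in the two SPGR images
    B1 = B1_true + rng.normal(0.0, 0.02)        # noise in the B1+ map
    T1_est.append(fit_T1(S, B1 * alpha_nom))
print(f"T1 = {np.mean(T1_est):.3f} +/- {np.std(T1_est):.3f} s")
```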
Yang, Sheng-Sung; Ho, Chia-Lu; Siu, Sammy
2010-12-01
In this paper, we propose an algorithm based on the central limit theorem to compute the sensitivity of the multilayer perceptron (MLP) due to the errors of the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independently identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons adopted in each layer. To prove the reliability of the proposed algorithm, some experimental results of the sensitivity are also presented, and they match the theoretical ones. The good agreement between the theoretical results and the experimental results verifies the reliability and feasibility of the proposed algorithm. Furthermore, the proposed algorithm can also be applied to compute precisely the sensitivity of the MLP with any available activation functions and any types of i.i.d. inputs and weights.
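The theoretical sensitivities can always be cross-checked by direct Monte Carlo, as in this sketch: i.i.d. perturbations are added to the inputs and weights of a small MLP and the spread of the output deviation is measured (layer sizes, activation, and noise level are arbitrary choices, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(5)
sizes = [10, 20, 20, 1]                      # input, two hidden layers, output
Ws = [rng.normal(0, 1 / np.sqrt(m), (m, n))  # random fixed weights
      for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(x, weights):
    for W in weights[:-1]:
        x = np.tanh(x @ W)                   # any activation could be studied
    return x @ weights[-1]

x0 = rng.normal(0, 1, sizes[0])
y0 = mlp(x0, Ws)

sigma = 0.01                                 # i.i.d. input/weight error level
dev = []
for _ in range(2000):
    x = x0 + rng.normal(0, sigma, x0.shape)
    Wp = [W + rng.normal(0, sigma, W.shape) for W in Ws]
    dev.append(mlp(x, Wp) - y0)
print("output sensitivity (SD of deviation):", np.std(dev))
```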
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.
Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo
2013-11-13
Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban loge(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background loge(NO2) and 38% for rural loge(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural loge(NO2) but more marked for urban loge(NO2). Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
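The attenuation mechanism is easy to reproduce in a few lines: a Poisson regression of simulated daily counts against an exposure measured with increasing additive classical error shows the coefficient shrinking towards zero (all parameter values are invented, not the study's).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
days = 3 * 365
z = rng.normal(50.0, 10.0, days)                 # true daily exposure
beta = 0.004                                     # true log-relative-risk per unit
y = rng.poisson(np.exp(np.log(20.0) + beta * z)) # daily death counts

for err_sd in (0.0, 5.0, 15.0):                  # additive classical error levels
    x = z + rng.normal(0.0, err_sd, days)        # error-prone surrogate exposure
    fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()
    print(f"error SD {err_sd:4.1f}: beta_hat = {fit.params[1]:.4f}")
# beta_hat is attenuated towards zero as the measurement error grows
```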
Probing the Cosmological Principle in the counts of radio galaxies at different frequencies
NASA Astrophysics Data System (ADS)
Bengaly, Carlos A. P.; Maartens, Roy; Santos, Mario G.
2018-04-01
According to the Cosmological Principle, the matter distribution on very large scales should have a kinematic dipole that is aligned with that of the CMB. We determine the dipole anisotropy in the number counts of two all-sky surveys of radio galaxies. For the first time, this analysis is presented for the TGSS survey, allowing us to check consistency of the radio dipole at low and high frequencies by comparing the results with the well-known NVSS survey. We match the flux thresholds of the catalogues, with flux limits chosen to minimise systematics, and adopt a strict masking scheme. We find dipole directions that are in good agreement with each other and with the CMB dipole. In order to compare the amplitude of the dipoles with theoretical predictions, we produce sets of lognormal realisations. Our realisations include the theoretical kinematic dipole, galaxy clustering, Poisson noise, simulated redshift distributions which fit the NVSS and TGSS source counts, and errors in flux calibration. The measured dipole for NVSS is ~2 times larger than predicted by the mock data. For TGSS, the dipole is almost ~ 5 times larger than predicted, even after checking for completeness and taking account of errors in source fluxes and in flux calibration. Further work is required to understand the nature of the systematics that are the likely cause of the anomalously large TGSS dipole amplitude.
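A minimal sketch of a linear number-count dipole estimator of the kind used on such catalogues: with unit vectors n_i to the sources, D = 3 * mean(n_i) is an unbiased estimator of the underlying dipole. Masking, flux calibration errors, and clustering, all central to the paper, are ignored here, and the injected amplitude is invented.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000                                   # sources kept after cuts
d_amp = 0.05                                  # injected dipole amplitude
d_dir = np.array([0.0, 0.0, 1.0])             # injected dipole direction

# rejection-sample isotropic directions with probability ~ 1 + d.n
n = rng.normal(size=(4 * N, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
keep = rng.random(len(n)) < (1.0 + d_amp * (n @ d_dir)) / (1.0 + d_amp)
n = n[keep][:N]

D = 3.0 * n.mean(axis=0)                      # linear dipole estimator
print("recovered amplitude:", np.linalg.norm(D))
print("cos(angle to input):", D @ d_dir / np.linalg.norm(D))
```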
NASA Astrophysics Data System (ADS)
Li, Pengzhan; Zhang, Tianjue; Ji, Bin; Hou, Shigang; Guo, Juanjuan; Yin, Meng; Xing, Jiansheng; Lv, Yinlong; Guan, Fengping; Lin, Jun
2017-01-01
A new project, the 230 MeV proton superconducting synchrocyclotron for cancer therapy, was proposed at CIAE in 2013. A model cavity was designed to verify the frequency modulation trimming algorithm, featuring a half-wave structure and eight sets of rotating blades for 1 kHz frequency modulation. Based on the electromagnetic (EM) field distribution analysis of the model cavity, the variable capacitance is a function of time and the frequency can be written as a Maclaurin series. Curve fitting is applied to the theoretical frequency and the original simulated frequency; the second-order fit gives the best approximation, having the minimum variance. Constant equivalent inductance is taken as an important condition in the calculation, and the equivalent parameters of the theoretical frequency can be obtained through this conversion. The trimming formula for the rotor blade outer radius is then found by discretization in the time domain. Simulation verification has been performed, and the results show that reducing the calculated radius by 0.012 m yields an acceptable result. The trimming amendment in the time range of 0.328-0.4 ms, with an increment of 0.075 mm per 0.001 ms, helps to reduce the frequency error to 0.69% in Simulation C, which is half of the error in Simulation A (constant radius in 0.328-0.4 ms). The verification confirms the feasibility of the trimming algorithm for synchrocyclotron frequency modulation.
Practical considerations for a second-order directional hearing aid microphone system
NASA Astrophysics Data System (ADS)
Thompson, Stephen C.
2003-04-01
First-order directional microphone systems for hearing aids have been available for several years. Such a system uses two microphones and has a theoretical maximum free-field directivity index (DI) of 6.0 dB. A second-order microphone system using three microphones could increase the theoretical maximum free-field DI to 9.5 dB. These theoretical maximum DI values assume that the microphones have exactly matched sensitivities at all frequencies of interest. In practice, the individual microphones in the hearing aid always have slightly different sensitivities. For the small microphone separation necessary to fit in a hearing aid, these sensitivity matching errors degrade the directivity from the theoretical values, especially at low frequencies. This paper shows that, for first-order systems, the directivity degradation due to sensitivity errors is relatively small. However, for second-order systems with practical microphone sensitivity matching specifications, the directivity degradation below 1 kHz is not tolerable. A hybrid-order directional system is therefore proposed that uses first-order processing at low frequencies and second-order processing at higher frequencies. This hybrid system is suggested as an alternative that could provide an improved directivity index in the frequency regions that are important to speech intelligibility.
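The low-frequency degradation for the first-order case can be checked numerically; the sketch below integrates the free-field pattern of a two-port cardioid-type element with a small gain mismatch between the microphones (port spacing, frequencies, and the 0.5 dB mismatch are assumed values).

```python
import numpy as np

c, d = 343.0, 0.01                  # sound speed (m/s), port spacing (m)
T = d / c                           # rear-port delay chosen for a cardioid
theta = np.linspace(0.0, np.pi, 2001)

def di_db(f, mismatch_db):
    g = 10.0 ** (mismatch_db / 20.0)     # gain error of the second microphone
    w = 2.0 * np.pi * f
    H = 1.0 - g * np.exp(-1j * w * (T + d * np.cos(theta) / c))
    # free-field average of |H|^2 over the sphere (axisymmetric pattern)
    mean_pow = 0.5 * np.trapz(np.abs(H) ** 2 * np.sin(theta), theta)
    return 10.0 * np.log10(np.abs(H[0]) ** 2 / mean_pow)

for f in (250.0, 1000.0, 4000.0):
    print(f"{f:6.0f} Hz: DI matched {di_db(f, 0.0):5.2f} dB,"
          f" 0.5 dB mismatch {di_db(f, 0.5):5.2f} dB")
# the mismatch costs the most DI at the lowest frequency
```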
SU-E-T-484: In Vivo Dosimetry Tolerances in External Beam Fast Neutron Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, L; Gopan, O
Purpose: Optical stimulated luminescence (OSL) dosimetry with Landauer Al2O3:C nanodots was developed at our institution as a passive in vivo dosimetry (IVD) system for patients treated with fast neutron therapy. The purpose of this study was to establish clinically relevant tolerance limits for detecting treatment errors requiring further investigation. Methods: Tolerance levels were estimated by conducting a series of IVD expected dose calculations for square field sizes ranging between 2.8 and 28.8 cm. For each field size evaluated, doses were calculated for open fields and for internal wedged fields with angles of 30°, 45°, or 60°. Theoretical errors were computed for variations of incorrect beam configurations. Dose errors, defined as the percent difference from the expected dose calculation, were measured with groups of three nanodots placed in a 30 x 30 cm solid water phantom at beam isocenter (150 cm SAD, 1.7 cm Dmax). The tolerances were applied to IVD patient measurements. Results: The overall accuracy of the nanodot measurements is 2-3% for open fields. Measurement errors agreed with calculated errors to within 3%. Theoretical estimates of dosimetric errors showed that IVD measurements with OSL nanodots will detect the absence of an internal wedge or a wrong wedge angle. Incorrect nanodot placement on a wedged field is more likely to be caught if the offset is in the direction of the "toe" of the wedge, where the dose difference is about 12%. Errors caused by an incorrect flattening filter size produced a 2% measurement error that is not detectable by IVD measurement alone. Conclusion: IVD with nanodots will detect treatment errors associated with the incorrect implementation of the internal wedge. The results of this study will streamline the physicists' investigations in determining the root cause of an IVD reading that is outside normally accepted tolerances.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image as a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
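A direct Monte Carlo of this kind of measurement (camera model reduced to Poisson shot noise plus a flat background; the PSF width, background, and photon numbers are invented, and the paper's analytical expression is not reproduced) shows how the spread of the measured standard deviation tightens with the number of detected photons:

```python
import numpy as np

rng = np.random.default_rng(11)
pix = np.arange(-8, 9)                        # 17 x 17 pixel region, 1 px units
X, Y = np.meshgrid(pix, pix)
sigma_true, bg = 1.3, 2.0                     # PSF width (px), background per px

def measured_sigma(n_photons):
    psf = np.exp(-(X**2 + Y**2) / (2.0 * sigma_true**2))
    img = rng.poisson(n_photons * psf / psf.sum() + bg)  # shot-noise image
    w = np.clip(img - bg, 0.0, None)          # background-subtracted weights
    # second-moment estimate of the profile width: E[x^2 + y^2] = 2 sigma^2
    return np.sqrt((w * (X**2 + Y**2)).sum() / w.sum() / 2.0)

for n in (300, 3000, 30000):
    s = [measured_sigma(n) for _ in range(500)]
    print(f"{n:6d} photons: sigma = {np.mean(s):.3f} +/- {np.std(s):.3f} px")
```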
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
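The layered randomness is straightforward to simulate (burst rate, Doppler frequency, envelope width, and intensity scale are invented): Poisson burst arrivals set a time-varying optical intensity, and photon detections are then drawn from that intensity by thinning, giving the compound Poisson photocurrent the paper models.

```python
import numpy as np

rng = np.random.default_rng(9)
T, dt = 5e-3, 1e-7                           # record length (s), time step (s)
t = np.arange(0.0, T, dt)

burst_rate, f_D, tau = 2e3, 1e6, 2e-5        # bursts/s, Doppler (Hz), width (s)
n_bursts = rng.poisson(burst_rate * T)       # slow process: particle arrivals
arrivals = rng.uniform(0.0, T, n_bursts)

rate = np.zeros_like(t)                      # intermediate process: intensity
for t0 in arrivals:
    env = np.exp(-(((t - t0) / tau) ** 2))
    rate += 1e6 * env * (1.0 + np.cos(2.0 * np.pi * f_D * (t - t0)))

# fast process: photon detections by per-bin thinning of the rate
photons = rng.random(t.size) < rate * dt
print(f"{n_bursts} bursts -> {photons.sum()} detected photons")
```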
Development and Initial Validation of the Multicultural Personality Inventory (MPI).
Ponterotto, Joseph G; Fietzer, Alexander W; Fingerhut, Esther C; Woerner, Scott; Stack, Lauren; Magaldi-Dopman, Danielle; Rust, Jonathan; Nakao, Gen; Tsai, Yu-Ting; Black, Natasha; Alba, Renaldo; Desai, Miraj; Frazier, Chantel; LaRue, Alyse; Liao, Pei-Wen
2014-01-01
Two studies summarize the development and initial validation of the Multicultural Personality Inventory (MPI). In Study 1, the 115-item prototype MPI was administered to 415 university students; exploratory factor analysis resulted in a 70-item, 7-factor model. In Study 2, the 70-item MPI and theoretically related companion instruments were administered to a multisite sample of 576 university students. Confirmatory factor analysis found the 7-factor structure to be a relatively good fit to the data (Comparative Fit Index = .954; root mean square error of approximation = .057), and MPI factors predicted variance in criterion variables above and beyond the variance accounted for by broad personality traits (i.e., the Big Five). Study limitations and directions for further validation research are specified.
NASA Astrophysics Data System (ADS)
Sikder, Somali; Ghosh, Shila
2018-02-01
This paper presents the construction of unipolar transposed modified Walsh code (TMWC) and analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that the wavelength-hopping time-spreading system using TMWC was robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes. In particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.
Zhang, Juanjuan; Collins, Steven H.
2017-01-01
This study uses theory and experiments to investigate the relationship between the passive stiffness of series elastic actuators and torque tracking performance in lower-limb exoskeletons during human walking. Through theoretical analysis with our simplified system model, we found that the optimal passive stiffness matches the slope of the desired torque-angle relationship. We also conjectured that a bandwidth limit results in a maximum rate of change in torque error that can be commanded through the control input, which is fixed across desired and passive stiffness conditions. This led to hypotheses about the interactions among optimal control gains, passive stiffness, and desired quasi-stiffness. Walking experiments were conducted with multiple angle-based desired torque curves. The lowest torque tracking errors observed for each combination of desired and passive stiffness were shown to be linearly proportional to the magnitude of the difference between the two stiffnesses. The proportional gains corresponding to the lowest observed errors were seen to be inversely proportional to the passive stiffness values and to the desired stiffness. These findings supported our hypotheses and provide guidance for application-specific hardware customization as well as controller design for torque-controlled robotic legged locomotion. PMID:29326580
NASA Astrophysics Data System (ADS)
Chen, Dongju; Huo, Chen; Cui, Xianxian; Pan, Ri; Fan, Jinwei; An, Chenhui
2018-05-01
The objective of this work is to study the influence of errors induced by the gas film at the micro scale on the static and dynamic behavior of a shaft supported by aerostatic bearings. The static and dynamic balance models of the aerostatic bearing are built from the stiffness and damping calculated at the micro scale. The static simulation shows that the deformation of the aerostatic spindle system is decreased at the micro scale. For the dynamic behavior, both the stiffness and the damping in the axial and radial directions are increased at the micro scale. Experiments on the stiffness and rotation error of the spindle show that the shaft deflection computed with the micro-scale parameters is very close to the measured deviation of the spindle system. The frequency content in the transient analysis is similar to the actual test, and both are higher than the results from the traditional case that does not consider the micro factor. Therefore, it can be concluded that the values that consider the micro factor are closer to the actual working conditions of the aerostatic spindle system. These results can provide a theoretical basis for the design and machining processes of machine tools.
NASA Astrophysics Data System (ADS)
Vieira, Daniel; Krems, Roman
2017-04-01
Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
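A hedged sketch of the sensitivity-analysis idea follows: fit a Gaussian Process emulator to (potential-scaling, rate) pairs, then probe its gradients and spread. The training data here are synthetic stand-ins, not the actual ab initio potentials or scattering results.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)

# Hypothetical training set: each row scales three adiabatic potentials
# by +/-20 %; y is a made-up log rate coefficient from a scattering code.
X = rng.uniform(0.8, 1.2, size=(60, 3))
y = 2.0 * X[:, 0] - 0.3 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.01, 60)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Finite-difference sensitivities of the emulator at the nominal potentials.
x0, eps = np.ones((1, 3)), 1e-3
for i in range(3):
    dx = np.zeros((1, 3)); dx[0, i] = eps
    grad = (gp.predict(x0 + dx) - gp.predict(x0 - dx)) / (2 * eps)
    print(f"sensitivity to potential {i}: {grad[0]:+.3f}")

# Error bar from the +/-20 % variation propagated through the emulator.
samples = gp.predict(rng.uniform(0.8, 1.2, size=(2000, 3)))
print(f"rate spread (1 sigma): {samples.std():.3f}")
```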
A New Method for Calculating Counts in Cells
NASA Astrophysics Data System (ADS)
Szapudi, István
1998-04-01
In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect could be circumvented by using an infinite number of cells. This paper presents an algorithm that in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
He, Pingan; Jagannathan, S
2007-04-01
A novel adaptive-critic-based neural network (NN) controller in discrete time is designed to deliver a desired tracking performance for a class of nonlinear systems in the presence of actuator constraints. The constraints of the actuator are treated in the controller design as a saturation nonlinearity. The adaptive critic NN controller architecture based on state feedback includes two NNs: the critic NN is used to approximate the "strategic" utility function, whereas the action NN is employed to minimize both the strategic utility function and the unknown nonlinear dynamic estimation errors. The critic and action NN weight updates are derived by minimizing certain quadratic performance indexes. Using the Lyapunov approach and the novel weight updates, the uniform ultimate boundedness of the closed-loop tracking error and weight estimates is shown in the presence of NN approximation errors and bounded unknown disturbances. The proposed NN controller works in the presence of multiple nonlinearities, unlike other schemes that normally approximate one nonlinearity. Moreover, the adaptive critic NN controller does not require an explicit offline training phase, and the NN weights can be initialized at zero or randomly. Simulation results justify the theoretical analysis.
How infants' reaches reveal principles of sensorimotor decision making
NASA Astrophysics Data System (ADS)
Dineva, Evelina; Schöner, Gregor
2018-01-01
In Piaget's classical A-not-B task, infants repeatedly make a sensorimotor decision to reach to one of two cued targets. Perseverative errors are induced by switching the cue from A to B, while spontaneous errors are unsolicited reaches to B when only A is cued. We argue that theoretical accounts of sensorimotor decision making fail to address how motor decisions leave a memory trace that may impact future sensorimotor decisions. Instead, in extant neural models, perseveration is caused solely by the history of stimulation. We present a neural dynamic model of sensorimotor decision making within the framework of Dynamic Field Theory, in which a dynamic instability amplifies fluctuations in neural activation into macroscopic, stable neural activation states that leave memory traces. The model predicts perseveration, but also a tendency to repeat spontaneous errors. To test the account, we pool data from several A-not-B experiments. A conditional-probabilities analysis accounts quantitatively for how motor decisions depend on the history of reaching. The results provide evidence for the interdependence among subsequent reaching decisions that is explained by the model, showing that by amplifying small differences in activation and affecting learning, decisions have consequences beyond the individual behavioural act.
McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W
2015-03-27
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
NASA Astrophysics Data System (ADS)
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude, or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site devoid of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A
2007-02-01
The reliable change index (RCI) expresses change relative to its associated error and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that has no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively, using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the within-subject standard deviation (WSD), which expresses the effects of random error, is a theoretically appropriate error denominator when a constant correction, removing the effects of systematic error, is deducted from the numerator of an RCI.
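A minimal sketch of the favored construction follows, assuming the usual RCI form with a constant practice-effect correction in the numerator and the controls' within-subject SD as the denominator; the scores and cutoff below are illustrative, not the study's data.

```python
import numpy as np

def rci_scores(pre, post, practice_effect, error_sd):
    """
    Reliable change index with a constant practice-effect correction:
    RCI = (post - pre - practice_effect) / error_sd.
    `practice_effect` is the mean pre-to-post gain in healthy controls;
    `error_sd` may be, e.g., the controls' within-subject SD (WSD).
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre - practice_effect) / error_sd

# Illustrative numbers: controls improve 2 points on retest with a
# within-subject SD of 3 points.
pre = [50, 48, 55, 60]
post = [51, 41, 56, 58]
z = rci_scores(pre, post, practice_effect=2.0, error_sd=3.0)
declined = z < -1.645  # one-tailed 5 % criterion for flagging POCD
print(np.round(z, 2), declined)
```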
Errors Associated with IOLMaster Biometry as a Function of Internal Ocular Dimensions
Faria-Ribeiro, Miguel; Lopes-Ferreira, Daniela; López-Gil, Norberto; Jorge, Jorge; González-Méijome, José Manuel
2014-01-01
Purpose To evaluate the error in the estimation of axial length (AL) with the IOLMaster partial coherence interferometry (PCI) biometer and obtain a correction factor that varies as a function of AL and crystalline lens thickness (LT). Methods Optical simulations were produced for theoretical eyes using Zemax-EE software. Thirty-three combinations including eleven different AL (from 20 mm to 30 mm in 1 mm steps) and three different LT (3.6 mm, 4.2 mm and 4.8 mm) were used. Errors were obtained comparing the AL measured for a constant equivalent refractive index of 1.3549 and for the actual combinations of indices and intra-ocular dimensions of LT and AL in each model eye. Results In the range from 20 mm to 30 mm AL and 3.6–4.8 mm LT, the instrument measurements yielded an error between −0.043 mm and +0.089 mm. Regression analyses for the three LT conditions were combined in order to derive a correction factor as a function of the instrument-measured AL for each combination of AL and LT in the theoretical eye. Conclusions The assumption of a single "average" refractive index in the estimation of AL by the IOLMaster PCI biometer only induces very small errors in a wide range of combinations of ocular dimensions. Even so, the accurate estimation of those errors may help to improve the accuracy of intra-ocular lens calculations through exact ray tracing, particularly in longer eyes and eyes with thicker or thinner crystalline lenses. PMID:24766863
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
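Since the additivity property reduces the network model to positivity-constrained linear regression, the empirical-standard-error idea can be sketched with a non-negative least squares fit plus a bootstrap. The design matrix and noise level below are invented for the example, and the bootstrap stands in for the paper's Monte Carlo evaluation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Hypothetical design: rows are object pairs, columns are known features;
# X[i, k] = 1 if feature k distinguishes pair i (additive network distance).
n_pairs, n_feat = 45, 6
X = rng.integers(0, 2, size=(n_pairs, n_feat)).astype(float)
beta_true = np.array([0.5, 1.2, 0.0, 0.8, 0.3, 1.0])
d = X @ beta_true + rng.normal(0, 0.1, n_pairs)   # observed proximities

beta_hat, _ = nnls(X, d)                          # positivity-restricted fit

# Empirical standard errors via a nonparametric bootstrap of the pairs.
boots = np.empty((500, n_feat))
for b in range(500):
    idx = rng.integers(0, n_pairs, n_pairs)
    boots[b], _ = nnls(X[idx], d[idx])
se = boots.std(axis=0)
print(np.round(beta_hat, 2), np.round(se, 3))
```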
Implementation of an experimental fault-tolerant memory system
NASA Technical Reports Server (NTRS)
Carter, W. C.; Mccarthy, C. E.
1976-01-01
The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.
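For illustration, here is a toy SEC/DED code of the kind referred to: an extended Hamming (8,4) encoder/decoder that corrects any single bit error and detects, without miscorrecting, any double error. It is a generic textbook construction, not the system's actual restricted code.

```python
# Extended Hamming (8,4) SEC/DED code: positions 1..7 carry Hamming(7,4),
# bit 0 is an overall parity bit.

G_POS = {3: 1, 5: 2, 6: 3, 7: 4}  # codeword position -> data bit index

def encode(nibble):
    cw = [0] * 8                          # cw[1..7] = Hamming, cw[0] = parity
    for pos, k in G_POS.items():
        cw[pos] = (nibble >> (k - 1)) & 1
    for p in (1, 2, 4):                   # parity bit p covers positions with i & p
        cw[p] = sum(cw[i] for i in range(1, 8) if i & p) % 2
    cw[0] = sum(cw) % 2                   # overall parity over all eight bits
    return cw

def decode(cw):
    syndrome = 0
    for p in (1, 2, 4):
        if sum(cw[i] for i in range(1, 8) if i & p) % 2:
            syndrome |= p
    overall = sum(cw) % 2
    if syndrome and overall:              # single error: syndrome names the bit
        cw = cw.copy(); cw[syndrome] ^= 1
        status = "corrected"
    elif syndrome and not overall:        # two errors: detect, never correct
        return None, "double error detected"
    elif not syndrome and overall:        # error in the overall parity bit only
        status = "parity bit corrected"
    else:
        status = "clean"
    data = sum(cw[pos] << (k - 1) for pos, k in G_POS.items())
    return data, status

cw = encode(0b1011)
cw[6] ^= 1                                # inject a single bit error
print(decode(cw))                         # -> (11, 'corrected')
```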
Liquid crystal point diffraction interferometer. Ph.D. Thesis - Arizona Univ., 1995
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.
1995-01-01
A new instrument, the liquid crystal point diffraction interferometer (LCPDI), has been developed for the measurement of phase objects. This instrument maintains the compact, robust design of Linnik's point diffraction interferometer (PDI) and adds to it phase stepping capability for quantitative interferogram analysis. The result is a compact, simple-to-align, environmentally insensitive interferometer capable of accurately measuring optical wavefronts with very high data density and with automated data reduction. This dissertation describes the theory of both the PDI and liquid crystal phase control. The design considerations for the LCPDI are presented, including manufacturing considerations. The operation and performance of the LCPDI are discussed, including sections regarding alignment, calibration, and amplitude modulation effects. The LCPDI is then demonstrated using two phase objects: a defocus difference wavefront and a temperature distribution across a heated chamber filled with silicone oil. The measured results are compared to theoretical or independently measured results and show excellent agreement. A computer simulation of the LCPDI was performed to verify the source of an observed periodic phase measurement error. The error stems from intensity variations caused by dye molecules rotating within the liquid crystal layer. Methods are discussed for reducing this error, and algorithms are presented which reduce it; these are also useful for any phase-stepping interferometer that has unwanted intensity fluctuations, such as those caused by unregulated lasers.
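The phase-stepping capability is what enables quantitative interferogram analysis; a minimal sketch of the standard four-step reconstruction on a synthetic defocus wavefront is given below. The LCPDI's dye-induced intensity modulation and the error-reducing algorithms mentioned above are not modeled here.

```python
import numpy as np

# Four-step phase-shifting reconstruction on a synthetic defocus phase.
N = 256
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
phi_true = 6.0 * (x**2 + y**2)              # defocus phase, radians

# Four interferograms with pi/2 phase steps (unit bias, 0.9 visibility).
I0, I1, I2, I3 = (1.0 + 0.9 * np.cos(phi_true + k * np.pi / 2)
                  for k in range(4))

phi_wrapped = np.arctan2(I3 - I1, I0 - I2)  # standard 4-step formula
err = np.angle(np.exp(1j * (phi_wrapped - phi_true)))
print(f"rms reconstruction error: {err.std():.2e} rad")
```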
Dual-wavelengths photoacoustic temperature measurement
NASA Astrophysics Data System (ADS)
Liao, Yu; Jian, Xiaohua; Dong, Fenglin; Cui, Yaoyao
2017-02-01
Thermal therapy is an approach applied in cancer treatment that heats local tissue to kill tumor cells, and it requires highly sensitive temperature monitoring during therapy. Current clinical methods for temperature measurement, such as fMRI, near infrared, or ultrasound, still have limitations in penetration depth or sensitivity. Photoacoustic temperature sensing is a newly developed method with the potential to be applied in thermal therapy; it usually employs a single-wavelength laser for signal generation and temperature detection. Because of system disturbances, including laser intensity, ambient temperature, and the complexity of the target, accidental measurement errors are unavoidable. To address these problems, we propose in this paper a new method of photoacoustic temperature sensing that uses two wavelengths to reduce random error and increase measurement accuracy. First, a brief theoretical analysis is presented. Then, in the experiment, a temperature measurement resolution of about 1° in the range of 23-48° in ex vivo pig blood was achieved, and an obvious decrease of absolute error was observed: on average about 1.7° in the single-wavelength mode versus nearly 1° in the dual-wavelength mode. The obtained results indicate that dual-wavelength photoacoustic sensing of temperature is able to reduce random error and improve measurement accuracy, and it could be a more efficient method for photoacoustic temperature sensing in the thermal therapy of tumors.
NASA Astrophysics Data System (ADS)
Wang, Yupeng; Chang, Kyunghi
In this paper, we analyze the coexistence issues of M-WiMAX TDD and WCDMA FDD systems. Smart antenna techniques are applied to mitigate the performance loss induced by adjacent channel interference (ACI) in the scenarios where performance is heavily degraded. In addition, an ACI model is proposed to capture the effect of transmit beamforming at the M-WiMAX base station. Furthermore, an MCS-based throughput analysis is proposed to jointly consider the effects of ACI, the system packet error rate requirement, and the available modulation and coding schemes, which is not possible with the conventional Shannon-equation-based analysis. From the results, we find that the proposed MCS-based analysis method is well suited to analyzing the theoretical system throughput in a practical manner.
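A hedged sketch of what an MCS-based throughput analysis looks like in practice: add the ACI power to the noise floor, select the best modulation and coding scheme whose SINR requirement is met, and derate by the packet-error-rate target. The MCS table and link-budget numbers are illustrative, not those of the paper.

```python
import math

MCS = [  # (required SINR in dB, spectral efficiency in bits/s/Hz)
    (5.0, 0.5), (8.0, 1.0), (11.0, 1.5), (14.0, 2.0), (17.0, 3.0), (20.0, 4.5),
]

def throughput(bandwidth_hz, signal_dbm, noise_dbm, aci_dbm, per_target=0.01):
    # Total interference-plus-noise power in mW, then SINR in dB.
    interference = 10 ** (noise_dbm / 10) + 10 ** (aci_dbm / 10)
    sinr_db = signal_dbm - 10 * math.log10(interference)
    # Highest MCS whose SINR requirement is met (0 if none).
    eff = max((e for s, e in MCS if sinr_db >= s), default=0.0)
    return bandwidth_hz * eff * (1 - per_target)

print(f"{throughput(10e6, -85, -100, -95) / 1e6:.1f} Mbit/s with ACI")
print(f"{throughput(10e6, -85, -100, -200) / 1e6:.1f} Mbit/s without ACI")
```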
The Measurement of Gravitomagnetism: A Challenging Enterprise
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2007-11-01
This book is intended to give an updated overview of the state of the art of the theoretical and experimental efforts aimed at detecting the elusive Lense-Thirring effect in the gravitational field of the Earth. The reader, after a robust introduction to the historical (Chapter 2) and theoretical (Chapters 3-5) aspects of the subject, will get acquainted with the subtleties required to design suitable observables which are able to sufficiently enhance the signal-to-noise ratio. Moreover, he or she should be able to follow autonomously the exciting developments which, hopefully, will take place in the near future if and when reliable few-percent tests of this prediction of general relativity become available. In an Earth-space-based experiment with artificial satellites, a good compromise must be found between the need to reduce the impact of the systematic errors of gravitational origin and of non-gravitational origin; this is not an easy task because such requirements often conflict with each other. Consequently, great attention is paid to elucidating many classical perturbing effects which, if not carefully modelled and accounted for in the data analysis, may alias the recovery of the gravitomagnetic signature. Indeed, we are dealing with a fundamental test of general relativity which must be honest, robust, and based on solid error analysis. A critical and detailed discussion of the latest test with the LAGEOS satellites is included. The book will also be useful for better understanding the interplay among the various geodetic, geophysical, general relativistic, astronomical, and matter-wave interferometric effects which occur in the weak-field and slow-motion approximation and which will become increasingly important in the near future thanks to improvements in the accuracy of the orbital reconstruction process.
Accuracy of finite-difference modeling of seismic waves : Simulation versus laboratory measurements
NASA Astrophysics Data System (ADS)
Arntsen, B.
2017-12-01
The finite-difference technique for numerical modeling of seismic waves is still important and, in some areas, extensively used. For exploration purposes, finite-difference simulation is at the core of both traditional imaging techniques such as reverse-time migration and more elaborate full-waveform inversion techniques. The accuracy and fidelity of finite-difference simulation of seismic waves are hard to quantify, and meaningful error analysis is really only readily available for simplistic media. A possible alternative to theoretical error analysis is provided by comparing finite-difference simulated data with laboratory data created using a scale model. The advantage of this approach is the accurate knowledge of the model, within measurement precision, and of the location of sources and receivers. We use a model made of PVC immersed in water and containing horizontal and tilted interfaces together with several spherical objects to generate ultrasonic pressure reflection measurements. The physical dimensions of the model are of the order of a meter, which after scaling represents a model with dimensions of the order of 10 kilometers and frequencies in the range of one to thirty hertz. We find that for plane horizontal interfaces the laboratory data can be reproduced by the finite-difference scheme with relatively small error, but for steeply tilted interfaces the error increases. For spherical interfaces the discrepancy between laboratory data and simulated data is sometimes much more severe, to the extent that it is not possible to simulate reflections from parts of highly curved bodies. The results are important in view of the fact that finite-difference modeling is often at the core of imaging and inversion algorithms tackling complicated geological areas with highly curved interfaces.
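For reference, the core of such a simulation is compact; below is a minimal second-order 1-D acoustic finite-difference scheme with a Ricker source over a two-layer model. The parameters are illustrative, not those of the scaled-PVC experiment.

```python
import numpy as np

nx, dx, dt, nt = 1000, 5.0, 1e-3, 1500           # grid spacing m, step s
c = np.full(nx, 2000.0); c[nx // 2:] = 3000.0    # two-layer velocity, m/s
assert c.max() * dt / dx <= 1.0                  # CFL stability condition

p_prev, p, src_i = np.zeros(nx), np.zeros(nx), 100
for it in range(nt):
    # Second-order spatial Laplacian (fixed boundaries for simplicity).
    lap = np.zeros(nx)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
    # Second-order leapfrog time update.
    p_next = 2 * p - p_prev + (c * dt) ** 2 * lap
    # Inject a 25 Hz Ricker wavelet at the source node.
    t = it * dt
    a = (np.pi * 25.0 * (t - 0.04)) ** 2
    p_next[src_i] += (1 - 2 * a) * np.exp(-a)
    p_prev, p = p, p_next

print(f"max amplitude after {nt * dt:.2f} s: {np.abs(p).max():.3e}")
```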
NASA Astrophysics Data System (ADS)
Fojtíková, Lucia; Kristeková, Miriam; Málek, Jiří; Sokos, Efthimios; Csicsay, Kristián; Zahradník, Jiří
2016-01-01
Extension of permanent seismic networks is usually governed by a number of technical, economic, logistic, and other factors. A planned upgrade of a network can be justified by a theoretical assessment of the network's capability in terms of reliable estimation of the key earthquake parameters (e.g., location and focal mechanisms). This could be useful not only for scientific purposes but also as concrete proof during the process of acquiring the funding needed for the upgrade and operation of the network. Moreover, the theoretical assessment can also identify configurations where no improvement can be achieved with additional stations, establishing a tradeoff between the improvement and additional expenses. This paper suggests a combination of suitable methods and applies them to the Little Carpathians local seismic network (Slovakia, Central Europe), which monitors an epicentral zone important from the standpoint of seismic hazard. Three configurations of the network are considered: 13 stations existing before 2011, 3 stations already added in 2011, and 7 newly planned stations. Theoretical errors of the relative location are estimated by a new method specifically developed in this paper. The resolvability of focal mechanisms determined by waveform inversion is analyzed by a recent approach based on 6D moment-tensor error ellipsoids. We consider potential seismic events situated anywhere in the studied region, thus enabling "mapping" of the expected errors. Results clearly demonstrate that the network extension remarkably decreases the errors, mainly in the planned 23-station configuration. The three-station extension of the network already made in 2011 allowed for a few real-data examples. Free software made available by the authors enables similar applications in any other existing or planned networks.
Phase stabilization of multidimensional amplification architectures for ultrashort pulses
NASA Astrophysics Data System (ADS)
Müller, M.; Kienel, M.; Klenke, A.; Eidam, T.; Limpert, J.; Tünnermann, A.
2015-03-01
The active phase stabilization of spatially and temporally combined ultrashort pulses is investigated theoretically and experimentally. In particular, for a combining scheme with two amplifier channels and four divided-pulse replicas, a bistable behavior is observed. The reason is the mutual influence of the optical error signals, which is intrinsic to temporal polarization beam combining. A successful mitigation strategy is proposed and analyzed theoretically and experimentally.
Theoretical and experimental investigation of millimeter-wave TED's in cross-waveguide oscillators
NASA Astrophysics Data System (ADS)
Rydberg, A.
1985-07-01
Theoretical and experimental investigations of millimeter-wave GaAs second-harmonic transferred electron device (TED) oscillators using separate circuits for frequency and power optimization are described. The theory predicts the oscillation frequency with less than 2 percent error for the second harmonic. Apart from the 2nd and 3rd harmonics, a 4th harmonic from the TED was observed up to 130 GHz.
Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices
NASA Astrophysics Data System (ADS)
Ma, Bao-Feng; Jiang, Hong-Gang
2018-06-01
Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIV techniques have been developed. It has long been recognized that perspective errors arise in velocity fields when the 2D2C PIV is used to measure three-dimensional (3D) flows, the magnitude of which depends on the out-of-plane velocity and the geometric layout of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on the gradients of the out-of-plane velocity along the measurement plane, instead of on the out-of-plane velocity itself. More importantly, an approach for estimating the perspective error in 3D vortex measurements is proposed, based on a theoretical vortex model and an analysis of the physical characteristics of the vortices, in which the gradient of the out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to the maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is set to its upper limit, the perspective error depends only on the geometric layout of the PIV, which is known in practical measurements. Using this approach, the upper limits of the perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can all be zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition of an oscillatory vortex indicates that the perspective errors of each DMD mode likewise depend only on the gradient of the out-of-plane velocity if the modes are represented by vorticity.
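A rough sketch of the resulting upper-bound estimate, using the simple pinhole relation du = w·x/z between out-of-plane velocity and apparent in-plane error; the velocity-ratio cap and geometry below are illustrative assumptions, not the paper's vortex model.

```python
def perspective_error_bound(swirl_max, ratio_cap, x_off, z_cam):
    """
    Upper bound on the apparent in-plane velocity error caused by
    out-of-plane motion in 2D2C PIV, via the pinhole relation du = w * x / z.
    `ratio_cap` is the assumed upper limit of the axial-to-swirl velocity
    ratio for naturally formed vortices.
    """
    w_max = ratio_cap * swirl_max       # worst-case out-of-plane velocity
    return w_max * x_off / z_cam

# Illustrative numbers: 10 m/s peak swirl, ratio capped at 1.5, point
# 0.1 m off-axis, camera 1.0 m from the light sheet.
err = perspective_error_bound(swirl_max=10.0, ratio_cap=1.5, x_off=0.1, z_cam=1.0)
print(f"velocity error upper bound: {err:.2f} m/s ({err / 10.0:.0%} of peak swirl)")
```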
NASA Astrophysics Data System (ADS)
Forsyth, J. M.
1983-02-01
In this document the authors summarize their investigation of the reflecting properties of X-ray multilayers. The breadth of this investigation indicates the utility of the difference-equation formalism in the analysis of such structures. The formalism is particularly useful in analyzing multilayers whose structure is not a simple periodic bilayer. The complexity in structure can be either intentional, as in multilayers made by in-situ reflectance monitoring, or a consequence of a degradation mechanism, such as random thickness errors or interlayer diffusion. Both the analysis of thickness errors and the analysis of interlayer diffusion are conceptually simple, effectively one-dimensional problems that are straightforward to pose. In their analysis of in-situ reflectance monitoring, the authors provide a quantitative understanding of an experimentally successful process that has not previously been treated theoretically. As X-ray multilayers come into wider use, there will undoubtedly be an increasing need for a more precise understanding of their reflecting properties. Thus, it is expected that in the future more detailed modeling will be undertaken of less easily specified structures than those above. The authors believe that their formalism will continue to prove useful in the modeling of these more complex structures. One such structure that may be of interest is that of a multilayer degraded by interfacial roughness.
Neural network uncertainty assessment using Bayesian statistics: a remote sensing application
NASA Technical Reports Server (NTRS)
Aires, F.; Prigent, C.; Rossow, W. B.
2004-01-01
Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of instantaneous discharge of 7.2%.
Kozintseva, Elena; Skvortsov, Anatoliy
2016-03-01
The aim of our study was to advance views on writing disorders in Wernicke's agraphia by comparing group data with the analysis of a single patient. We show how a single-case study can be useful in obtaining essential results that can be hidden by averaging group data. The analysis of a single patient proved important for resolving contradictions between the "holistic" and "elementaristic" paradigms of psychology and for the development of theoretical knowledge, using a writing disorder as an example. The holistic approach was implemented by presenting tasks differing in the functions that writing has performed since its appearance in human culture (communicative, mnestic, and regulatory). In spite of the identical composition of the involved psychological components, differences were identified when certain types of errors were analyzed in the single subject. The results are discussed in terms of the writing strategy used, which shapes the operation of the involved components and leads to qualitative and quantitative changes in writing errors within the syndrome of Wernicke's agraphia. © 2016 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
Deep neural networks for texture classification-A theoretical analysis.
Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant
2018-01-01
We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional as compared to handwritten digits or other object recognition datasets and hence more difficult to be shattered by neural networks. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the relative contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity. Copyright © 2017 Elsevier Ltd. All rights reserved.
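The vanishing-relative-contrast claim is easy to reproduce numerically; the short demonstration below computes (d_max − d_min)/d_min for distances from a random query point to i.i.d. uniform samples as the dimension grows.

```python
import numpy as np

# As dimensionality grows, the relative contrast (d_max - d_min) / d_min
# of distances from a query to i.i.d. sample points tends to zero.
rng = np.random.default_rng(4)
for dim in (2, 10, 100, 1000, 10000):
    pts = rng.uniform(size=(1000, dim))
    q = rng.uniform(size=dim)
    d = np.linalg.norm(pts - q, axis=1)
    print(f"dim={dim:6d}  relative contrast={(d.max() - d.min()) / d.min():.3f}")
```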
Performance Analysis of Local Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
Tong, Xin T.
2018-03-01
Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithm of the state dimension and a constant that depends only on the local radius; (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter error will be accurate in the long-time limit even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted for the operation of LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are also validated by numerical implementation of LEnKF on a simple stochastic turbulence in two dynamical regimes.
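A toy sketch of an LEnKF analysis step with a hard localization cutoff on a periodic 1-D grid is given below; it illustrates the local-update mechanism the analysis concerns, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def lenkf_update(ens, y, obs_idx, r_var, radius):
    """
    One LEnKF analysis step with a hard localization cutoff on a periodic
    1-D grid: each state component is updated only from observations within
    `radius` grid points. ens: (n_ens, n_state); y: values observed at the
    grid points in obs_idx (observation operator = selection).
    """
    n_ens, n = ens.shape
    ens_a = ens.copy()
    X = ens - ens.mean(axis=0)                    # forecast anomalies
    obs = np.asarray(obs_idx)
    for j in range(n):
        dist = np.minimum(np.abs(obs - j), n - np.abs(obs - j))
        sel = np.where(dist <= radius)[0]
        if sel.size == 0:
            continue
        idx = obs[sel]
        Y = X[:, idx]                             # obs-space anomalies
        C_yy = Y.T @ Y / (n_ens - 1) + r_var * np.eye(sel.size)
        c_jy = X[:, j] @ Y / (n_ens - 1)          # cov(state_j, local obs)
        gain = np.linalg.solve(C_yy, c_jy)
        pert_obs = y[sel] + rng.normal(0, np.sqrt(r_var), (n_ens, sel.size))
        ens_a[:, j] = ens[:, j] + (pert_obs - ens[:, idx]) @ gain
    return ens_a

ens = rng.normal(0.0, 1.0, (20, 40))              # 20 members, 40 grid points
obs_idx = np.arange(0, 40, 2)
y = rng.normal(0.0, 0.1, obs_idx.size)            # observations of a zero truth
ens_a = lenkf_update(ens, y, obs_idx, r_var=0.01, radius=3)
print(f"prior spread {ens.std():.2f} -> posterior spread {ens_a.std():.2f}")
```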
Wear Improvement of Tools in the Cold Forging Process for Long Hex Flange Nuts.
Hsia, Shao-Yi; Shih, Po-Yueh
2015-09-25
Cold forging has played a critical role in fasteners and has been widely used in automotive production, manufacturing, aviation and 3C (Computer, Communication, and Consumer electronics). Despite its extensive use in fastener forming and die design, operator experience and trial and error make it subjective and unreliable owing to the difficulty of controlling the development schedule. This study used finite element analysis to establish and simulate wear in automotive repair fastener manufacturing dies based on actual process conditions. The places on a die that wore most quickly were forecast, with the stress levels obtained being substituted into the Archard equation to calculate die wear. A 19.87% improvement in wear optimization occurred by applying the Taguchi quality method to the new design. Additionally, a comparison of actual manufacturing data to simulations revealed a nut forging size error within 2%, thereby demonstrating the accuracy of this theoretical analysis. Finally, SEM micrographs of the worn surfaces on the upper punch indicate that the primary wear mechanism on the cold forging die for long hex flange nuts was adhesive wear. The results can simplify the development schedule, reduce the number of trials and further enhance production quality and die life.
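The post-processing step described, substituting contact stresses into the Archard equation, amounts to a one-line wear-depth estimate; the sketch below uses illustrative inputs, not the study's finite-element outputs.

```python
def archard_wear_depth(k_coeff, pressure_mpa, slide_mm, hardness_hv):
    """Local wear depth h = K * p * s / H (Archard, per-stroke estimate)."""
    hardness_mpa = hardness_hv * 9.807          # HV -> MPa (approximate)
    return k_coeff * pressure_mpa * slide_mm / hardness_mpa

# Contact pressure and sliding distance as might be sampled at the die
# surface node with the highest predicted wear (invented values).
h = archard_wear_depth(k_coeff=1e-4, pressure_mpa=1800.0,
                       slide_mm=2.5, hardness_hv=750.0)
print(f"wear depth per forging stroke: {h * 1000:.3f} um")
```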
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one of the concerns is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.
Underwater Stirling engine design with modified one-dimensional model
NASA Astrophysics Data System (ADS)
Li, Daijin; Qin, Kan; Luo, Kai
2015-09-01
Stirling engines are regarded as an efficient and promising power system for underwater devices. Currently, many studies use one-dimensional models to evaluate the thermodynamic performance of Stirling engines, but some aspects, such as mechanical loss and auxiliary power, still cannot be represented with proper mathematical models. In this paper, a four-cylinder double-acting Stirling engine for Unmanned Underwater Vehicles (UUVs) is discussed, and a one-dimensional model incorporating empirical equations for mechanical loss and auxiliary power obtained from experiments is derived with reference to the Stirling engine computer model of the National Aeronautics and Space Administration (NASA). The P-40 Stirling engine, for which sufficient test results from NASA are available, is used to validate the accuracy of this one-dimensional model. The maximum error of the predicted output power is less than 18% relative to the test results, and the maximum error of the input power is no more than 9%. Finally, a Stirling engine for UUVs is designed with the Schmidt analysis method and the modified one-dimensional model, and the results indicate that the designed engine is capable of delivering the desired output power.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1978-01-01
Navigation error data from these flights are presented in a format utilizing three independent axes: horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made with other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates smoothed with inertial velocities, dual DME radio updates smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation, with dual DME updates smoothed with inertial velocities, resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.
Menicucci, Nicolas C
2014-03-28
A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.
Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei
2017-04-01
Because the standard lattice Boltzmann (LB) method was proposed for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method to represent the axisymmetric effects. Therefore, the accuracy and applicability of axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium-rule-based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scaling. In particular, the finite-difference interpretation of the standard LB method is extended to LB equations with source terms, and then the accuracy of the different forcing schemes is evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with a general force term (i.e., when only the source terms in the momentum equation are considered), but they lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium-rule-based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied to validate the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, which indicates that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Lee, Seung-Woo; Lee, Chang-Woo; Jeong, Hyunseok
2015-01-01
We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state, assisted with an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to 1. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "demon-like algorithmic cooling (DLAC)" recently proposed in Xu (Nat Photonics 8:113, 2014). The result shows a good agreement with our theoretical analysis, exhibiting behavior comparable to the best 'cooling' with the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show that the total number of iterations is governed by the difference between the two lowest eigenvalues and by an error defined as the probability that the finally obtained system state is in an unexpected (i.e., not the lowest) eigenstate.
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
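One concrete point of contact among these estimators is the variance of spherical GLM errors, where ML divides the residual sum of squares by n and is biased low, while ReML divides by n − p and is unbiased; the short simulation below illustrates this standard textbook fact (it is not a result of the paper).

```python
import numpy as np

rng = np.random.default_rng(6)

n, p, sigma2 = 20, 5, 4.0
X = rng.normal(size=(n, p))
ml, reml = [], []
for _ in range(5000):
    y = X @ np.ones(p) + rng.normal(0, np.sqrt(sigma2), n)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta) ** 2)
    ml.append(rss / n)          # ML variance estimate: biased low
    reml.append(rss / (n - p))  # ReML variance estimate: unbiased
print(f"true {sigma2:.2f} | ML {np.mean(ml):.2f} | ReML {np.mean(reml):.2f}")
```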
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple-degree-of-freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements.
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
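A minimal numerical sketch in the spirit of this idea, assuming a plain linear weighted least squares problem whose actual noise is larger than the assumed noise; the empirical covariance is obtained by rescaling the theoretical one by the average weighted residual variance. Dimensions, model, and noise levels are illustrative assumptions, not the paper's orbit-determination setup.

```python
import numpy as np

# Sketch: weighted least squares with an empirical state error covariance
# built from the actual measurement residuals (hypothetical linear model).
rng = np.random.default_rng(1)

m, n = 200, 3                      # measurements, state dimension
H = rng.normal(size=(m, n))        # measurement partials
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed = 0.1                # assumed observation noise
sigma_actual = 0.3                 # actual (mismodeled) noise
y = H @ x_true + rng.normal(scale=sigma_actual, size=m)

W = np.eye(m) / sigma_assumed**2   # weights from the *assumed* statistics
N = H.T @ W @ H                    # normal matrix
x_hat = np.linalg.solve(N, H.T @ W @ y)

# Traditional covariance: maps only the assumed observation errors.
P_theoretical = np.linalg.inv(N)

# Empirical covariance: rescale by the average weighted residual variance,
# so the actual residuals (all error sources) enter the state covariance.
r = y - H @ x_hat
s2 = (r @ W @ r) / (m - n)         # average weighted residual variance
P_empirical = s2 * P_theoretical

print(np.sqrt(np.diag(P_theoretical)))  # too optimistic here
print(np.sqrt(np.diag(P_empirical)))    # reflects the actual error level
```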
NASA Astrophysics Data System (ADS)
Mishenina, T.; Pignatari, M.; Côté, B.; Thielemann, F.-K.; Soubiran, C.; Basak, N.; Gorbaneva, T.; Korotin, S. A.; Kovtyukh, V. V.; Wehmeyer, B.; Bisterzo, S.; Travaglio, C.; Gibson, B. K.; Jordan, C.; Paul, A.; Ritter, C.; Herwig, F.; NuGrid Collaboration
2017-08-01
Atmospheric parameters and chemical compositions for 10 stars with metallicities in the range -2.2 < [Fe/H] < -0.6 were precisely determined using high-resolution, high signal-to-noise-ratio spectra. For each star, abundances for 14-27 elements were derived using both local thermodynamic equilibrium (LTE) and non-LTE (NLTE) approaches. In particular, the differences between LTE and NLTE abundances are about 0.10 dex, depending on [Fe/H], Teff, gravity, and the element lines used in the analysis. We find that the O abundance has the largest error, ranging from 0.10 to 0.2 dex. The best-measured elements are Cr, Fe, and Mn, with errors between 0.03 and 0.11 dex. The stars in our sample have been included in several previous observational studies; here we provide a consistent data analysis. The dispersion introduced in the literature by the different techniques and assumptions used by different authors is within the observational errors, except for HD 103095. We compare these results with stellar observations from different data sets and a number of theoretical galactic chemical evolution (GCE) simulations. We find a large scatter in the GCE results used to study the origin of the elements. Within this scatter, as in previous GCE simulations, we cannot reproduce the evolution of the elemental ratios [Sc/Fe], [Ti/Fe], and [V/Fe] at different metallicities. The stellar yields from core-collapse supernovae are likely primarily responsible for this discrepancy. Possible solutions and open problems are discussed.
Health information systems: a survey of frameworks for developing countries.
Marcelo, A B
2010-01-01
The objective of this paper is to survey leading research on health information systems (HIS) analysis and design and their underlying theoretical frameworks. It classifies these frameworks along major themes and analyzes the different approaches to HIS development that are practical in resource-constrained environments. The method is a literature review based on PubMed citations and conference proceedings, as well as Internet searches on information systems in general and health information systems in particular. The field of health information systems development has been studied extensively; despite this, failed implementations are still common. Theoretical frameworks for HIS development are available that can guide implementers. As awareness, acceptance, and demand for health information systems increase globally, the variety of approaches and strategies will also grow. For developing countries with scarce resources, a trial-and-error approach can be very costly. Lessons from the successes and failures of initial HIS implementations have been abstracted into theoretical frameworks that organize complex HIS concepts into methodologies standardizing implementation techniques. As globalization continues to affect healthcare in the developing world, demand for more responsive health systems will become urgent. More comprehensive frameworks and practical tools to guide HIS implementers will be imperative.
Kiran, Swathi; Thompson, Cynthia K
2003-06-01
The effect of typicality of category exemplars on naming was investigated using a single-subject experimental design across participants and behaviors in four patients with fluent aphasia. Participants received a semantic feature treatment to improve naming of either typical or atypical items within semantic categories, while generalization was tested to untrained items of the category. The order of typicality and category trained was counterbalanced across participants. Results indicated that patients trained on naming of atypical exemplars demonstrated generalization to naming of intermediate and typical items. However, patients trained on typical items demonstrated no generalized naming effect to intermediate or atypical exemplars. Furthermore, analysis of errors indicated an evolution of errors throughout training, from those with no apparent relationship to the target to primarily semantic and phonemic paraphasias. Performance on standardized language tests also showed changes as a function of treatment. Theoretical and clinical implications regarding the impact of considering semantic complexity in the rehabilitation of naming deficits in aphasia are discussed.
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
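A small illustrative sketch of the EVM metric at the heart of the proposed methodology: an ideal QPSK constellation, a decaying complex distortion standing in for a transient, and the resulting RMS error vector magnitude. All waveform parameters are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Sketch: error vector magnitude (EVM) of a QPSK constellation with a
# transient-like distortion added to a few symbols (illustrative stand-in
# for an SET; noise and transient amplitudes are assumed values).
rng = np.random.default_rng(0)
nsym = 1000
bits = rng.integers(0, 4, nsym)
ideal = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-energy QPSK

# Thermal noise plus a decaying transient hitting 10 consecutive symbols
rx = ideal + rng.normal(scale=0.02, size=nsym) \
           + 1j * rng.normal(scale=0.02, size=nsym)
rx[100:110] += 0.5 * np.exp(-np.arange(10) / 3.0)

# RMS EVM, normalized to the RMS reference symbol magnitude
evm = np.sqrt(np.mean(np.abs(rx - ideal) ** 2) /
              np.mean(np.abs(ideal) ** 2))
print(f"EVM = {100 * evm:.2f}%")
```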
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
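For readers unfamiliar with triangle-based objective analysis, the following sketch shows the basic linear (plane-fit) estimate of horizontal wind gradients from three stations, with simple error propagation from the wind-component errors; positions and winds are made-up numbers, not the paper's data, and the paper's uncertainty parameterization is not reproduced here.

```python
import numpy as np

# Sketch: horizontal gradient estimation from three profiler sites by
# fitting u(x, y) = u0 + (du/dx) x + (du/dy) y exactly through the three
# observations (the linear fit underlying triangle objective analysis).
xy = np.array([[0.0, 0.0], [150e3, 10e3], [60e3, 130e3]])  # positions, m
u = np.array([8.2, 9.6, 7.1])                              # zonal wind, m/s

A = np.column_stack([np.ones(3), xy])      # [1, x, y] design matrix
u0, dudx, dudy = np.linalg.solve(A, u)
print(dudx, dudy)                          # gradients, s^-1

# Error propagation: with independent wind errors of std sigma_u, the
# gradient uncertainties follow from the rows of A^-1.
sigma_u = 1.0
Ainv = np.linalg.inv(A)
sigma_grad = sigma_u * np.sqrt((Ainv ** 2).sum(axis=1))
print(sigma_grad[1:])                      # std of du/dx, du/dy
```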
NASA Technical Reports Server (NTRS)
Varnai, Tamas; Marshak, Alexander
2000-01-01
This paper presents a simple approach to estimate the uncertainties that arise in satellite retrievals of cloud optical depth when the retrievals use one-dimensional radiative transfer theory for heterogeneous clouds that have variations in all three dimensions. For the first time, preliminary error bounds are set to estimate the uncertainty of cloud optical depth retrievals. These estimates can help us better understand the nature of uncertainties that three-dimensional effects can introduce into retrievals of this important product of the MODIS instrument. The probability distribution of resulting retrieval errors is examined through theoretical simulations of shortwave cloud reflection for a wide variety of cloud fields. The results are used to illustrate how retrieval uncertainties change with observable and known parameters, such as solar elevation or cloud brightness. Furthermore, the results indicate that a tendency observed in an earlier study, clouds appearing thicker for oblique sun, is indeed caused by three-dimensional radiative effects.
NASA Astrophysics Data System (ADS)
Li, Xinying; Xiao, Jiangnan
2015-06-01
We propose a novel scheme for optical frequency-locked multi-carrier generation based on one electro-absorption modulated laser (EML) and one phase modulator (PM) in cascade, driven by different sinusoidal radio-frequency (RF) clocks. The optimal operating zone for the cascaded EML and PM is identified through theoretical analysis and numerical simulation. We experimentally demonstrate that 25 optical subcarriers with a frequency spacing of 12.5 GHz and a power difference of less than 5 dB can be generated with the cascaded EML and PM operating in the optimal zone, in good agreement with the numerical simulation. We also experimentally demonstrate 28-Gbaud polarization-division-multiplexing quadrature phase shift keying (PDM-QPSK) coherent optical transmission based on the cascaded EML and PM. The bit error ratio (BER) can be kept below the pre-forward-error-correction (pre-FEC) threshold of 3.8 × 10^-3 after 80-km single-mode fiber-28 (SMF-28) transmission.
Noise facilitation in associative memories of exponential capacity.
Karbasi, Amin; Salavati, Amir Hesam; Shokrollahi, Amin; Varshney, Lav R
2014-11-01
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns that satisfy certain subspace constraints. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively, such as the hippocampus and olfactory cortex. Here we consider associative memories with boundedly noisy internal computations and analytically characterize their performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise improves the performance of the recall phase while the pattern retrieval capacity remains intact: the number of stored patterns does not decrease with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
Pirsiavash, Ali; Broumandan, Ali; Lachapelle, Gérard
2017-07-05
The performance of Signal Quality Monitoring (SQM) techniques under different multipath scenarios is analyzed. First, SQM variation profiles are investigated as critical requirements in evaluating the theoretical performance of SQM metrics. The sensitivity and effectiveness of SQM approaches for multipath detection and mitigation are then defined and analyzed by comparing SQM profiles and multipath error envelopes for different discriminators. The analytical discussion includes two discriminator strategies, namely narrow and high-resolution correlator techniques, for the BPSK(1) and BOC(1,1) signaling schemes. Data analysis is also carried out for static and kinematic scenarios to validate the SQM profiles and examine SQM performance in actual multipath environments. Results show that although SQM is sensitive to medium- and long-delay multipath, its effectiveness in mitigating these ranges of multipath errors varies with tracking strategy and signaling scheme. For short-delay multipath scenarios, the multipath effect on pseudorange measurements remains mostly undetected due to the low sensitivity of the SQM metrics.
Standard deviation and standard error of the mean.
Lee, Dong Kyu; In, Junyong; Lee, Sangseok
2015-06-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally confuse the SD and the SEM in the medical literature. Because the processes of calculating the SD and the SEM involve different statistical inferences, each has its own meaning. The SD describes the dispersion of data in a normal distribution; in other words, the SD indicates how accurately the mean represents the sample data. The meaning of the SEM, however, includes statistical inference based on the sampling distribution: the SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either the SD or the SEM can be applied to describe data and statistical results, one should be aware of the appropriate usage of each. We aim to elucidate the distinctions between the SD and the SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
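A short sketch of the two quantities and their distinct behavior, on arbitrary simulated data:

```python
import numpy as np

# Sketch: the SD describes the spread of the data; the SEM (= SD/sqrt(n))
# describes the uncertainty of the sample mean. Values are arbitrary.
rng = np.random.default_rng(42)
x = rng.normal(loc=120.0, scale=15.0, size=50)   # e.g. 50 measurements

n = x.size
sd = x.std(ddof=1)           # sample standard deviation
sem = sd / np.sqrt(n)        # standard error of the mean

print(f"mean = {x.mean():.1f}, SD = {sd:.1f}, SEM = {sem:.1f}")
# Doubling n leaves the SD roughly unchanged but shrinks the SEM by ~1/sqrt(2).
```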
A silicon avalanche photodiode detector circuit for Nd:YAG laser scattering
NASA Astrophysics Data System (ADS)
Hsieh, C.-L.; Haskovec, J.; Carlstrom, T. N.; Deboo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.
1990-06-01
A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature-controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge-sensitive preamplifier was developed to minimize the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N = 1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low-frequency background light component. The background signal is amplified with a computer-controlled variable gain amplifier and is used for an estimate of the measurement error, for calibration, and for Zeff measurements of the plasma. The signal processing was analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.
NASA Astrophysics Data System (ADS)
Yoo, Sung Jin
2016-11-01
This paper presents a theoretical design approach for output-feedback formation tracking of multiple mobile robots under wheel perturbations. These perturbations are assumed unknown, and the linear and angular velocities of the robots are assumed unmeasurable. First, adaptive state observers for estimating the unmeasurable velocities are developed from the robots' kinematics and dynamics, including wheel perturbation effects. Then, a virtual-structure-based formation tracker scheme is derived following the observer-based dynamic surface design procedure. The main difficulty of the output-feedback control design is managing the coupling between the unmeasurable velocities and the unknown wheel perturbation effects; these problems are avoided by using the adaptive technique and the function approximation property of fuzzy logic systems. Lyapunov stability analysis shows that the point tracking errors of each robot and the synchronisation errors for the desired formation converge to an adjustable neighbourhood of the origin, while all signals in the controlled closed-loop system remain semiglobally uniformly ultimately bounded.
Influence of modulation frequency in rubidium cell frequency standards
NASA Technical Reports Server (NTRS)
Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.
1983-01-01
The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurkin, N V; Konyshev, V A; Novikov, A G
2015-01-31
We have studied, experimentally and using numerical simulations and a phenomenological analytical model, the dependence of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit/s DP-QPSK format). In comparing the data of the experiment, the numerical simulations, and the theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependence of the BER on the optical signal power at the fibre line input, as well as the admissible input signal power range for communication lines with lengths from 30-50 km up to a maximum of 250 km.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at NASA-MSFC and that is similar, but not identical, to the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared curvature matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
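The two error-estimation approaches can be illustrated on a toy time-of-arrival retrieval. The sketch below compares a Monte Carlo estimate against the curvature-matrix (Fisher) estimate for a 2-D source located from five stations; only the 50 ns rms timing error is taken from the abstract, and the station layout, source position, and simplified model (known emission time) are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: Monte Carlo vs curvature-matrix error estimates for a toy
# time-of-arrival source retrieval (not the LMA algorithms themselves).
c = 3.0e8
rng = np.random.default_rng(7)
stations = np.array([[0, 0], [20e3, 0], [0, 20e3],
                     [20e3, 20e3], [10e3, 10e3]], dtype=float)
src = np.array([12e3, 7e3])
sigma_t = 50e-9                       # rms timing error from the abstract

def toa(p):
    return np.hypot(*(stations - p).T) / c

# Monte Carlo: perturb arrival times, re-solve for the source each time.
fits = []
for _ in range(500):
    t_obs = toa(src) + rng.normal(scale=sigma_t, size=len(stations))
    sol = least_squares(lambda p: toa(p) - t_obs, x0=[10e3, 10e3])
    fits.append(sol.x)
mc_std = np.std(fits, axis=0)

# Curvature-matrix estimate: sigma^2 (J^T J)^-1 at the true location.
d = stations - src
J = -d / (c * np.hypot(*d.T)[:, None])
cov = sigma_t**2 * np.linalg.inv(J.T @ J)
print(mc_std, np.sqrt(np.diag(cov)))  # the two estimates should agree
```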
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects quantum states against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state; i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series, and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test were analyzed. Statistical significance tests were performed to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of chi-square based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and the nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
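For reference, a chi-square based confidence interval of the kind examined here has a simple closed form; the sketch below computes one for an assumed spectral estimate with an assumed number of equivalent degrees of freedom (neither value is from the paper).

```python
import numpy as np
from scipy.stats import chi2

# Sketch: chi-square confidence interval for a spectral estimate with nu
# equivalent degrees of freedom (nu and P_hat are illustrative values).
nu = 32                     # e.g. 16 averaged periodograms, 2 dof each
P_hat = 1.0e-3              # spectral estimate at some frequency
alpha = 0.05

lo = nu * P_hat / chi2.ppf(1 - alpha / 2, nu)
hi = nu * P_hat / chi2.ppf(alpha / 2, nu)
print(f"95% CI: [{lo:.2e}, {hi:.2e}]")
# The interval is valid only where the estimate behaves chi-square,
# e.g. at nontonal frequencies; bias at tones breaks the assumption.
```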
Brandão, Eric; Mareze, Paulo; Lenzi, Arcanjo; da Silva, Andrey R
2013-05-01
In this paper, the measurement of the absorption coefficient of non-locally reactive sample layers of thickness d1 backed by a rigid wall is investigated. The investigation is carried out with the aid of real and theoretical experiments, which assume a monopole sound source radiating sound above an infinite non-locally reactive layer. A literature search revealed that the number of papers devoted to this matter is rather limited in comparison to those which address the measurement of locally reactive samples. Furthermore, the majority of papers published describe the use of two or more microphones whereas this paper focuses on the measurement with the pressure-particle velocity sensor (PU technique). For these reasons, the assumption that the sample is locally reactive is initially explored, so that the associated measurement errors can be quantified. Measurements in the impedance tube and in a semi-anechoic room are presented to validate the theoretical experiment. For samples with a high non-local reaction behavior, for which the measurement errors tend to be high, two different algorithms are proposed in order to minimize the associated errors.
Coarse-graining errors and numerical optimization using a relative entropy framework
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2011-03-01
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
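A toy illustration of relative-entropy minimization (not the authors' implementation): a harmonic coarse-grained model is fitted to double-well "all-atom" samples by gradient descent on Srel. For this one-parameter family the gradient reduces to a difference of mean squared displacements, so the optimum matches the second moment; all distributions and step sizes below are assumptions.

```python
import numpy as np

# Sketch: S_rel coarse-graining for a toy 1D system. The CG model is
# harmonic, U(x) = k x^2 / 2, for which
#   dS_rel/dk = beta * (<x^2/2>_AA - <x^2/2>_CG),
# with <x^2>_CG = 1/(beta k) for the Gaussian CG ensemble.
rng = np.random.default_rng(0)
beta = 1.0

# Reference ("all-atom") ensemble: mixture of two Gaussians (double well).
x_aa = np.concatenate([rng.normal(-1.0, 0.3, 50_000),
                       rng.normal(+1.0, 0.3, 50_000)])
msd_aa = np.mean(x_aa ** 2)

k = 2.0                                   # initial CG stiffness
for step in range(200):
    msd_cg = 1.0 / (beta * k)             # <x^2> for the harmonic model
    grad = 0.5 * beta * (msd_aa - msd_cg)
    k -= 0.5 * grad                       # simple gradient descent
print(k, 1.0 / (beta * msd_aa))           # converges to moment matching
```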
Effect of twist on single-mode fiber-optic 3 × 3 couplers
NASA Astrophysics Data System (ADS)
Chen, Dandan; Ji, Minning; Peng, Lei
2018-01-01
In the fabrication process of a 3 × 3 fused tapered coupler, the three fibers are usually twisted into close contact. The effect of twist on 3 × 3 fused tapered couplers is investigated in this paper. It is found that although a linear 3 × 3 coupler can in theory achieve an equal power-splitting ratio by twisting through a specific angle, it is difficult to fabricate in practice because the twist angle and the coupler length must be determined in advance. An equilateral 3 × 3 coupler, by contrast, can not only achieve an approximately equal power-splitting ratio in theory but can also be fabricated simply by controlling the elongation length. The effect of twist on the equilateral 3 × 3 coupler appears in the relationship between the equal-ratio error and the twist angle: the larger the twist angle, the larger the equal-ratio error may be. The twist angle should usually be no larger than 90° over one coupling-period length in order to keep the equal-ratio error small enough. The simulation results agree well with the experimental data.
Asymmetric affective forecasting errors and their correlation with subjective well-being
2018-01-01
Aims: Social scientists have postulated that the discrepancy between achievements and expectations affects individuals' subjective well-being. Still, little has been done to qualify and quantify such a psychological effect. Our empirical analysis assesses the consequences of positive and negative affective forecasting errors—the difference between realized and expected subjective well-being—on the subsequent level of subjective well-being. Data: We use longitudinal data on a representative sample of 13,431 individuals from the German Socio-Economic Panel. In our sample, 52% of individuals are females, average age is 43 years, average years of education is 11.4, and 27% of our sample lives in East Germany. Subjective well-being (measured by self-reported life satisfaction) is assessed on a 0-10 discrete scale, and its sample average is equal to 6.75 points. Methods: We develop a simple theoretical framework to assess the consequences of positive and negative affective forecasting errors on the subsequent level of subjective well-being, properly accounting for the endogenous adjustment of expectations to positive and negative affective forecasting errors, and use it to derive testable predictions. Given the theoretical framework, we estimate two panel-data equations, the first depicting the association between positive and negative affective forecasting errors and the successive level of subjective well-being, and the second describing the correlation between subjective well-being expectations for the future and hedonic failures and successes. Our models control for individual fixed effects and a large battery of time-varying demographic characteristics, health, and socio-economic status. Results and conclusions: While surpassing expectations is uncorrelated with subjective well-being, failing to match expectations is negatively associated with subsequent realizations of subjective well-being. Expectations are positively (negatively) correlated to positive (negative) forecasting errors. We speculate that in the first case the positive adjustment in expectations is strong enough to cancel out the potential positive effects on subjective well-being of beaten expectations, while in the second case it is not, and individuals persistently bear the negative emotional consequences of not achieving expectations. PMID:29513685
Low sidelobe level low-cost earth station antennas for the 12 GHz broadcasting satellite service
NASA Technical Reports Server (NTRS)
Collin, R. E.; Gabel, L. R.
1979-01-01
An experimental investigation of the performance of 1.22 m and 1.83 m diameter paraboloid antennas with an f/D ratio of 0.38 and using a feed developed by Kumar is reported. It is found that sidelobes below 30 dB can be obtained only if the paraboloids are relatively free of surface errors. A theoretical analysis of clam shell distortion shows that this is a limiting factor in achieving low sidelobe levels with many commercially available low cost paraboloids. The use of absorbing pads and small reflecting plates for sidelobe reduction is also considered.
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlensinger, Adam M.
2012-01-01
Modern receivers are providing soft-decision symbol synchronization as radio links are challenged to push more data and more overhead through noisier channels, and software-defined radios use error-correction techniques that approach Shannon's theoretical limit of performance. The authors describe the benefit of closed-loop measurements for a receiver when paired with a counterpart transmitter under representative channel conditions. We also describe a real-time Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios.
Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks.
Li, Xiao-Jian; Yang, Guang-Hong
2017-02-01
This paper is concerned with the adaptive pinning synchronization problem of stochastic complex dynamical networks (CDNs). Based on algebraic graph theory and Lyapunov theory, pinning controller design conditions are derived, and a rigorous convergence analysis of the synchronization errors in the probability sense is conducted. Compared with existing results, the topology structures of the stochastic CDNs are allowed to be unknown due to the use of graph theory. In particular, it is shown that the selection of nodes for pinning depends on the unknown lower bounds of the coupling strengths. Finally, an example based on a Chua's circuit network is given to validate the effectiveness of the theoretical results.
Analysis of the discontinuous Galerkin method applied to the European option pricing problem
NASA Astrophysics Data System (ADS)
Hozman, J.
2013-12-01
In this paper we deal with the numerical solution of a one-dimensional Black-Scholes partial differential equation, an important scalar nonstationary linear convection-diffusion-reaction equation describing the pricing of European vanilla options. We present a derivation of the numerical scheme based on the space semidiscretization of the model problem by the discontinuous Galerkin method with nonsymmetric stabilization of diffusion terms and with interior and boundary penalty. The main attention is paid to the investigation of a priori error estimates for the proposed scheme. The appended numerical experiments illustrate the theoretical results and demonstrate the potency of the method.
The synchronisation of fractional-order hyperchaos compound system
NASA Astrophysics Data System (ADS)
Noghredani, Naeimadeen; Riahi, Aminreza; Pariz, Naser; Karimpour, Ali
2018-02-01
This paper presents a new compound synchronisation scheme among four hyperchaotic memristor systems with incommensurate fractional-order derivatives. First, a new controller is designed based on an adaptive technique to minimise the errors and guarantee compound synchronisation of the four fractional-order memristor chaotic systems. Given the suitability of compound synchronisation as a reliable solution for secure communication, we then examine the application of the proposed adaptive compound synchronisation scheme in the presence of noise for secure communication. In addition, the unpredictability and complexity of the drive systems enhance the security of the communication. The corresponding theoretical analysis and simulation results, obtained using MATLAB, validate the effectiveness of the proposed synchronisation scheme.
NASA Astrophysics Data System (ADS)
Shaw, Jeremy A.; Daescu, Dacian N.
2017-08-01
This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
Synthesis, spectroscopic analysis and theoretical study of new pyrrole-isoxazoline derivatives
NASA Astrophysics Data System (ADS)
Rawat, Poonam; Singh, R. N.; Baboo, Vikas; Niranjan, Priydarshni; Rani, Himanshu; Saxena, Rajat; Ahmad, Sartaj
2017-02-01
In the present work, we have efficiently synthesized the pyrrole-isoxazoline derivatives (4a-d) by cyclization of substituted 4-chalconylpyrroles (3a-d) with hydroxylamine hydrochloride. The reactivity of the substituted 4-chalconylpyrroles (3a-d) towards the nucleophile hydroxylamine hydrochloride was evaluated on the basis of electrophilic reactivity descriptors (fk+, sk+, ωk+), which were found to be high at the unsaturated β-carbon of the chalconylpyrrole, indicating its greater susceptibility to nucleophilic attack and thereby favoring the formation of the reported new pyrrole-isoxazoline compounds (4a-d). The structures of the newly synthesized pyrrole-isoxazoline derivatives were derived from IR, 1H NMR, mass, UV-Vis and elemental analysis. All experimental spectral data corroborate well with the calculated spectral data. The FT-IR analysis shows red shifts in the νN-H and νC=O stretching modes due to dimer formation through intermolecular hydrogen bonding. With basis set superposition error correction, the intermolecular interaction energies for (4a-d) are found to be 10.10, 9.99, 10.18, 11.01 and 11.19 kcal/mol, respectively. The calculated first hyperpolarizability (β0) values of the (4a-d) molecules are in the range of 7.40-9.05 × 10^-30 esu, indicating their suitability for non-linear optical (NLO) applications. The experimental spectral results, theoretical data, and analysis of the chalcone intermediates and pyrrole-isoxazolines are useful for the advancement of pyrrole-azole chemistry.
Atomic clock ensemble in space (ACES) data analysis
NASA Astrophysics Data System (ADS)
Meynadier, F.; Delva, P.; le Poncin-Lafitte, C.; Guerlin, C.; Wolf, P.
2018-02-01
The Atomic Clocks Ensemble in Space (ACES/PHARAO mission, ESA & CNES) will be installed on board the International Space Station (ISS) next year. A crucial part of this experiment is its two-way microwave link (MWL), which will compare the timescale generated on board with those provided by several ground stations disseminated on the Earth. A dedicated data analysis center is being implemented at SYRTE - Observatoire de Paris, where our team currently develops theoretical modelling, numerical simulations and the data analysis software itself. In this paper, we present some key aspects of the MWL measurement method and the associated algorithms for simulations and data analysis. We show the results of tests using simulated data with fully realistic effects such as fundamental measurement noise, Doppler, atmospheric delays, or cycle ambiguities. We demonstrate satisfactory performance of the software with respect to the specifications of the ACES mission. The main scientific product of our analysis is the clock desynchronisation between ground and space clocks, i.e. the difference of proper times between the space clocks and ground clocks at participating institutes. While in flight, this measurement will allow for tests of general relativity and Lorentz invariance at unprecedented levels, e.g. measurement of the gravitational redshift at the 3 × 10^-6 level. As a specific example, we use real ISS orbit data with estimated errors at the 10 m level to study the effect of such errors on the clock desynchronisation obtained from MWL data. We demonstrate that the resulting effects are totally negligible.
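The essence of a two-way clock comparison can be shown in a few lines. The sketch below implements only the idealized symmetric-path combination, in which the one-way delay cancels to first order; the actual MWL analysis adds Doppler, atmospheric, and relativistic corrections on top of this, and the numbers used are illustrative.

```python
# Sketch: the core two-way combination behind a microwave-link clock
# comparison (idealized, symmetric-path case only).
def clock_offset(t1, t2, t3, t4):
    """t1: ground emission (ground clock), t2: space reception (space clock),
    t3: space emission (space clock), t4: ground reception (ground clock).
    Returns the space-minus-ground desynchronisation; the path delay
    cancels to first order when up- and downlink delays are equal."""
    return 0.5 * ((t2 - t1) - (t4 - t3))

# Example: true offset 3e-6 s, one-way delay 1.3e-3 s
tau, theta = 1.3e-3, 3e-6
print(clock_offset(0.0, tau + theta, 1.0, 1.0 + tau - theta))  # ~3e-6
```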
Simulation and analysis of atmospheric transmission performance in airborne Terahertz communication
NASA Astrophysics Data System (ADS)
Pan, Chengsheng; Shi, Xin; Liu, Chengyang; Wang, Xue; Ding, Yuanming
2018-02-01
For the particular meteorological conditions of high-altitude transmission, the influence of atmospheric turbulence on terahertz wireless communication is first analyzed, and a model of the atmospheric structure constant as a function of height is given. On this basis, the relationship between the scintillation index and the high-altitude horizontal transmission distance of the terahertz wave is analyzed by simulation. Then, through analysis of the high-altitude path loss and noise, a high-altitude wireless link model is built. Finally, the link loss budget is given according to current terahertz device parameters, and the bit error rate (BER) performance of on-off keying (OOK) and pulse position modulation (PPM) in four terahertz frequency bands is compared and analyzed. These results provide a theoretical reference for high-altitude terahertz wireless communication transmission.
The design and analysis of channel transmission communication system of XCTD profiler
NASA Astrophysics Data System (ADS)
Zheng, Yu; Wang, Xiao-Rui; Jin, Xiang-Yu; Song, Guo-Min; Shang, Ying-Sheng; Li, Hong-Zhi
2016-10-01
In this paper, a channel transmission communication system for an expendable conductivity-temperature-depth (XCTD) profiler is established in accordance with the operating characteristics of the transmission line, in order to more accurately assess the characteristics of the expendable deep-sea profiler channel. The winding inductance is eliminated to the maximum extent through the winding patterns of the underwater and above-water spools and the calculation of the winding diameter. The feasibility of the proposed channel transmission communication system is verified through theoretical analysis and practical measurement of the transmitted-signal error rate under amplitude shift keying (ASK) modulation. The proposed design provides a new research method for the channel assessment of complex expendable measuring instruments and important experimental evidence for the rapid development of expendable deep-sea measuring instruments.
Geophysical parameters from the analysis of laser ranging to Starlette
NASA Technical Reports Server (NTRS)
Schutz, B. E.; Shum, C. K.; Tapley, B. D.
1991-01-01
The University of Texas Center for Space Research (UT/CSR) research efforts covering the time period from August 1, 1990 through January 31, 1991 have concentrated on the following areas: (1) Laser Data Processing (more than 15 years of Starlette data (1975-90) have been processed and cataloged); (2) Seasonal Variation of Zonal Tides (the observed Starlette time series has been compared with meteorological data-derived time series); (3) Ocean Tide Solutions (error analysis has been performed using Starlette and other tide solutions); and (4) Lunar Deceleration (the formulation to compute theoretical lunar deceleration has been verified and applied to several tidal solutions). Concise descriptions of the research achievements for each of the above areas are given. Copies of abstracts for some of the publications and conference presentations are included in the appendices.
Boltalin, A I; Korenev, Yu M; Sipachev, V A
2007-07-19
Molecular constants of MPbF3 (M = Li, Na, K, Rb, and Cs) were calculated theoretically at the MP2(full) and B3LYP levels with the SDD (Pb, K, Rb, and Cs) and aug-cc-pVQZ (F, Li, and Na) basis sets to determine the thermochemical characteristics of the substances. Satisfactory agreement with experiment was obtained, including the unexpected nonmonotonic dependence of the dissociation energies on the alkali metal atomic number. The bond lengths of the theoretical CsPbF3 model were substantially elongated compared with experimental estimates, likely because of errors in both the theoretical calculations and the electron diffraction data processing.
Correcting For Seed-Particle Lag In LV Measurements
NASA Technical Reports Server (NTRS)
Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.
1994-01-01
Two experiments were conducted to evaluate the effects of seed-particle size on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods were used to evaluate the errors. The first experiment focused on measurement of the decelerating stagnation streamline of low-speed flow around a circular cylinder with a two-dimensional afterbody. The second was performed in transonic flow and involved measurement of the decelerating stagnation streamline of a hemisphere with a cylindrical afterbody. It was concluded that mean-quantity LV measurements are subject to large errors directly attributable to particle size. Predictions of particle-response theory showed good agreement with the experimental results, indicating that the velocity-error-correction technique used in the study is viable for increasing the accuracy of laser velocimetry measurements. The technique is simple and useful in any research facility in which flow velocities are measured.
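A first-order version of such a velocity-error correction can be sketched as follows, assuming Stokes drag and a steady one-dimensional deceleration; all numbers are illustrative, and the paper's exact correction procedure is not reproduced.

```python
import numpy as np

# Sketch: first-order seed-particle lag correction. For Stokes drag the
# particle response time is tau_p = rho_p d^2 / (18 mu), and the fluid
# velocity can be recovered from the measured particle velocity as
# u_f ~= u_p + tau_p * du_p/dt.
rho_p = 1000.0        # particle density, kg/m^3
d = 1.0e-6            # particle diameter, m
mu = 1.8e-5           # air dynamic viscosity, Pa.s
tau_p = rho_p * d**2 / (18.0 * mu)    # ~3 microseconds

# Measured particle velocity along a decelerating stagnation streamline
x = np.linspace(0.0, 0.1, 200)        # position, m
u_p = 50.0 * (1.0 - x / 0.12)         # m/s, made-up decelerating profile
dupdt = u_p * np.gradient(u_p, x)     # du/dt = u * du/dx (steady flow)
u_f = u_p + tau_p * dupdt             # corrected fluid velocity
print(tau_p, u_f[0] - u_p[0])         # response time, correction at x=0
```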
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix that fully includes all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. By its formulation, this matrix contains the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem, a truth model is used that includes gravity with spherical, J2, and J4 terms plus a standard exponential atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
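The chi-square comparison described for the four scenarios can be illustrated with a toy consistency check: if the claimed covariance underestimates the actual errors, the mean chi-square exceeds the state dimension. Dimensions and scales below are assumptions, not the paper's orbit-determination values.

```python
import numpy as np

# Sketch: chi-square consistency check of a claimed covariance. If P
# correctly describes the error e = x_hat - x_true, then e^T P^-1 e
# should average to the state dimension n.
rng = np.random.default_rng(11)
n, trials = 6, 2000
P_claimed = np.eye(n)                         # claimed covariance
L_actual = np.diag([2.0] * 3 + [1.0] * 3)     # actual error scale (mismatch)

chi2 = []
for _ in range(trials):
    e = L_actual @ rng.normal(size=n)
    chi2.append(e @ np.linalg.solve(P_claimed, e))
print(np.mean(chi2), "vs expected", n)        # >> n flags an optimistic P
```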
Multiwavelength Thermometry at High Temperature: Why It is Advantageous to Work in the Ultraviolet
NASA Astrophysics Data System (ADS)
Girard, F.; Battuello, M.; Florio, M.
2014-07-01
In principle, multiwavelength radiation thermometry allows one to correctly measure the temperature of surfaces of unknown and varying emissivity. Unfortunately, none of the practical realizations proposed in the past has proved sufficiently reliable, because of the strong influence of errors arising from incorrect modeling of the emissivity and the limited number of operating wavelengths. The use of array detectors allows a high degree of flexibility both in the number and in the spectral position of the working wavelength bands. For applications at high temperatures, i.e., near 2000 °C or above, an analysis of the theoretical measuring principles of multiwavelength thermometry suggests investigating the possible advantages of extending the operating wavelengths toward the ultraviolet region. To this end, a simulation program was developed which allows investigation of the effect of different influencing parameters. This paper presents a brief theoretical introduction and a practical analysis of the method. The best choices are derived in terms of the different influencing parameters, and data from the simulation of both real materials and fictitious emissivity curves have been studied and analyzed with different emissivity models to check the robustness of the method.
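A minimal sketch of the simulation idea: synthesize multiwavelength radiances from Planck's law with a linear emissivity model and recover the temperature by least squares. The band placement, emissivity coefficients, and noise level are assumptions chosen to illustrate the ultraviolet case, not the paper's simulation program.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: multiwavelength thermometry as a least-squares fit of Planck's
# law times a linear emissivity model, eps(lam) = a + b*lam.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def model(lam, T, a, b):
    return (a + b * lam * 1e6) * planck(lam, T)   # b per micrometre

T_true = 2300.0                                   # ~2000 C region
lam_uv = np.linspace(0.25e-6, 0.40e-6, 12)        # UV-blue bands (assumed)
rad = model(lam_uv, T_true, 0.45, 0.10)
rad *= 1 + 0.01 * np.random.default_rng(3).normal(size=lam_uv.size)

popt, pcov = curve_fit(model, lam_uv, rad, p0=[2000.0, 0.5, 0.0])
print(popt[0], np.sqrt(pcov[0, 0]))   # recovered T and its uncertainty
# Repeating with near-IR bands typically inflates the T uncertainty: at
# short wavelengths the Planck function is far more sensitive to T.
```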
[Cognitive Reserve Scale: testing the theoretical model and norms].
Leon-Estrada, I; Garcia-Garcia, J; Roldan-Tapia, L
2017-01-01
The cognitive reserve theory may help explain differences in cognitive performance among individuals with similar cognitive decline and among healthy ones. However, more psychometric analyses are needed to support the use of tests for assessing cognitive reserve. The aims were to study validity evidence related to the structure of the Cognitive Reserve Scale (CRS) and to create reference norms for interpreting the scores. A total of 172 participants completed the scale and were classified into two age groups: 36-64 years (n = 110) and 65-88 years (n = 62). The exploratory factor analysis using ESEM revealed that the data fitted the proposed model. Overall, the discriminative indices were acceptable (between 0.21 and 0.50), and congruence was observed in the periods of young adulthood, adulthood, and late adulthood in both age groups. In addition, the reliability index (Cronbach's alpha: 0.80) and the standard error of measurement (mean: 51.40 ± 11.11) showed adequate values for this type of instrument. The CRS appears to conform to the hypothesized theoretical model, and the scores may be interpreted using the norms presented. This study provides support for the use of the CRS in research.
ERIC Educational Resources Information Center
Matthews, Danielle E.; Theakston, Anna L.
2006-01-01
How do English-speaking children inflect nouns for plurality and verbs for the past tense? We assess theoretical answers to this question by considering errors of omission, which occur when children produce a stem in place of its inflected counterpart (e.g., saying "dress" to refer to 5 dresses). A total of 307 children (aged 3;11-9;9)…
Abraham, Sara A; Kearfott, Kimberlee J
2018-06-15
Optically stimulated luminescent dosimeters are devices that, when stimulated with light, emit light in proportion to the integrated ionizing radiation dose. The stimulation of optically stimulated luminescent material results in the loss of a small fraction of signal stored within the dosimetric traps. Previous studies have investigated the signal loss due to readout stimulation and the optical annealing of optically stimulated luminescent dosimeters. This study builds on former research by examining the behavior of optically stimulated luminescent signals after annealing, exploring the functionality of a previously developed signal loss model, and comparing uncertainties for dosimeters reused with or without annealing. For a completely annealed dosimeter, the minimum signal level was 56 ± 8 counts, and readings followed a Gaussian distribution. For dosimeters above this signal level, the fractional signal loss due to the reading process has a linear relationship with the calculated signal. At low signal levels (below 20,000 counts) in this optically stimulated luminescent dosimeter system, calculated signal percent errors increase significantly but otherwise are on average 0.72 ± 0.27%, 0.40 ± 0.19%, 0.33 ± 0.12%, and 0.24 ± 0.07% for 30, 75, 150, and 300 readings, respectively. Theoretical calculations of uncertainties showed that annealing before reusing dosimeters allows for dose errors below 1% with as few as 30 readings. Reusing dosimeters multiple times increases the dose errors especially with low numbers of readouts, so theoretically around 300 readings would be necessary to achieve errors around 1% or below in most scenarios. Note that these dose errors do not include the error associated with the signal-to-dose conversion factor.
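A sketch of the constant-fraction depletion model that such studies commonly use, with an assumed per-readout loss fraction rather than a value measured in this study:

```python
import numpy as np

# Sketch: per-readout signal depletion in an OSL dosimeter. If each
# stimulation removes a constant fraction f of the stored signal, the
# signal after n readouts is S_n = S_0 * (1 - f)^n.
S0 = 200_000.0         # initial counts (illustrative)
f = 0.0005             # assumed fractional loss per readout

n = np.arange(0, 301)
S = S0 * (1.0 - f) ** n
print(S[30], S[300])   # remaining signal after 30 and 300 readouts

# Averaging N readings reduces the relative (Poisson-like) counting error
# roughly as 1/sqrt(N), consistent with 300 readings outperforming 30.
for N in (30, 75, 150, 300):
    print(N, 1.0 / np.sqrt(N))
```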
Camps, Vicente J; Piñero, David P; Mateo, Veronica; Ribera, David; de Fez, Dolores; Blanes-Mompó, Francisco J; Alzamora-Rodríguez, Antonio
2013-11-01
To calculate theoretically the errors in the estimation of corneal power when using the keratometric index (nk) in eyes that underwent laser refractive surgery for the correction of myopia and to define and validate clinically an algorithm for minimizing such errors. Differences between corneal power estimation by using the classical nk and by using the Gaussian equation in eyes that underwent laser myopic refractive surgery were simulated and evaluated theoretically. Additionally, an adjusted keratometric index (nkadj) model dependent on r1c was developed for minimizing these differences. The model was validated clinically by retrospectively using the data from 32 myopic eyes [range, -1.00 to -6.00 diopters (D)] that had undergone laser in situ keratomileusis using a solid-state laser platform. The agreement between Gaussian (Pc) and adjusted keratometric (Pkadj) corneal powers in such eyes was evaluated. It was found that overestimations of corneal power up to 3.5 D were possible for nk = 1.3375 according to our simulations. The nk value to avoid the keratometric error ranged between 1.2984 and 1.3297. The following nkadj models were obtained: nkadj = -0.0064286r1c + 1.37688 (Gullstrand eye model) and nkadj = -0.0063804r1c + 1.37806 (Le Grand). The mean difference between Pkadj and Pc was 0.00 D, with limits of agreement of -0.45 and +0.46 D. This difference correlated significantly with the posterior corneal radius (r = -0.94, P < 0.01). The use of a single nk for estimating the corneal power in eyes that underwent a laser myopic refractive surgery can lead to significant errors. These errors can be minimized by using a variable nk dependent on r1c.
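The reported adjusted-index formula can be applied directly. A sketch follows, assuming, as is usual in keratometry, that r1c enters the fitted nkadj expression in millimetres; the example radius is hypothetical.

```python
# Sketch: corneal power from the adjusted keratometric index reported in
# the abstract, P = (nkadj - 1) / r1c, with the Gullstrand-variant fit
# nkadj = -0.0064286*r1c + 1.37688 and r1c converted to metres for P.

def adjusted_corneal_power(r1c_mm: float) -> float:
    nk_adj = -0.0064286 * r1c_mm + 1.37688   # adjusted keratometric index
    return (nk_adj - 1.0) / (r1c_mm * 1e-3)  # dioptres

def classical_corneal_power(r1c_mm: float, nk: float = 1.3375) -> float:
    return (nk - 1.0) / (r1c_mm * 1e-3)      # dioptres

r1c = 8.6   # example post-LASIK flat anterior radius, mm (hypothetical)
print(classical_corneal_power(r1c), adjusted_corneal_power(r1c))
# The classical nk = 1.3375 overestimates power in post-myopic-LASIK eyes.
```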
DOE Office of Scientific and Technical Information (OSTI.GOV)
Streets, W.E.
As the need for rapid and more accurate determinations of gamma-emitting radionuclides in environmental and mixed waste samples grows, there is continued interest in the development of theoretical tools to eliminate the need for some laboratory analyses and to enhance the quality of information from necessary analyses. In gamma spectrometry, the use of theoretical self-absorption coefficients (SACs) can eliminate the need to determine the SAC empirically by counting a known source through each sample. This empirical approach requires extra counting time and introduces another source of counting error, which must be included in the calculation of results. The empirical determination of SACs is routinely used when the nuclides of interest are specified; theoretical determination of the SAC can enhance the information for the analysis of true unknowns, where there may be no prior knowledge about the radionuclides present in a sample. Determination of an exact SAC does require knowledge of the total composition of a sample. In support of the Department of Energy's (DOE) Environmental Survey Program, the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory developed theoretical self-absorption models to estimate SACs for the determination of non-specified radionuclides in samples of unknown, widely varying composition. Subsequently, another SAC model, in a different counting geometry and for specified nuclides, was developed for another application. These two models are now used routinely for the determination of gamma-emitting radionuclides in a wide variety of environmental and mixed waste samples.
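The ACL models themselves are not reproduced in the abstract; as a baseline, the textbook slab-geometry self-absorption factor illustrates the kind of theoretical SAC involved. The attenuation coefficient, density, and thickness below are illustrative assumptions.

```python
import math

def slab_self_absorption(mu_mass, density, thickness):
    """Textbook slab-geometry self-absorption factor.

    mu_mass   : mass attenuation coefficient (cm^2/g) at the gamma energy
    density   : sample bulk density (g/cm^3)
    thickness : sample thickness along the detector axis (cm)

    Returns the ratio of attenuated to unattenuated full-energy counts.
    """
    x = mu_mass * density * thickness        # optical depth mu*rho*t
    if x < 1e-9:
        return 1.0                           # negligible absorption
    return (1.0 - math.exp(-x)) / x

# Illustrative: a 2 cm soil-like sample (rho = 1.6 g/cm^3) counted at
# 662 keV, where mu/rho is roughly 0.077 cm^2/g for SiO2-like matrices.
print(f"SAC ≈ {slab_self_absorption(0.077, 1.6, 2.0):.3f}")
```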
NASA Astrophysics Data System (ADS)
Wang, Guochao; Xie, Xuedong; Yan, Shuhua
2010-10-01
The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision, and good stability is presented. Because the measurement targets nanometer-level precision, the errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the non-linear error caused by elliptic polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and on the assumption that Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the non-linear error caused by elliptic polarization are 1.49 nm, 2.99 nm, and 4.49 nm when the non-orthogonal angle is 1°, 2°, and 3°, respectively. The law of the error variation is analyzed for different values of Tx and Ty.
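The paper's error formulas are not reproduced in the abstract; the sketch below evaluates two generic stand-in relations, clearly assumptions here rather than the authors' derivations: a linear error d·(Δλ/λ) inherited from wavelength drift over a measured displacement d, and a first-order periodic nonlinearity of amplitude roughly (p/2π)·ε for a signal period p and polarization-leakage ratio ε.

```python
import math

# Generic order-of-magnitude relations, not the paper's derivations.
def linear_error(displacement_nm, dlambda_over_lambda):
    """Wavelength-drift error for a fringe-counting measurement."""
    return displacement_nm * dlambda_over_lambda

def periodic_nonlinearity(period_nm, eps):
    """First-order periodic nonlinearity from polarization leakage eps."""
    return (period_nm / (2 * math.pi)) * eps

# Illustrative numbers: 2 mm travel, 1e-6 relative wavelength drift,
# 316.5 nm signal period (lambda/2 for a 633 nm laser), 3% leakage.
print(f"linear error  ≈ {linear_error(2e6, 1e-6):.2f} nm")
print(f"nonlinearity  ≈ {periodic_nonlinearity(316.5, 0.03):.2f} nm")
```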
Primordial power spectrum: a complete analysis with the WMAP nine-year data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazra, Dhiraj Kumar; Shafieloo, Arman; Souradeep, Tarun, E-mail: dhiraj@apctp.org, E-mail: arman@apctp.org, E-mail: tarun@iucaa.ernet.in
2013-07-01
We have further improved the error-sensitive Richardson-Lucy deconvolution algorithm, making it applicable directly to the un-binned measured angular power spectrum of Cosmic Microwave Background observations to reconstruct the form of the primordial power spectrum. This improvement makes the application of the method significantly more straightforward by removing some intermediate stages of analysis, allowing a reconstruction of the primordial spectrum with higher efficiency and precision and with lower computational expense. Applying the modified algorithm, we fit the WMAP nine-year data using the optimized reconstructed form of the primordial spectrum, with an improvement of more than 300 in χ²_eff with respect to the best-fit power law. This is clearly beyond the reach of other alternative approaches and reflects the efficiency of the proposed method in the reconstruction process, allowing us to look for any possible feature in the primordial spectrum projected in the CMB data. Though the proposed method allows us to look at various possibilities for the form of the primordial spectrum, all having a good fit to the data, proper error analysis is needed to test the consistency of theoretical models since, along with possible physical artefacts, most of the features in the reconstructed spectrum might arise from fitting noise in the CMB data. The reconstructed error band for the form of the primordial spectrum, using many bootstrapped realizations of the WMAP nine-year data, shows proper consistency of the power-law form of the primordial spectrum with the WMAP nine-year data at all wave numbers. Including WMAP polarization data in the analysis has not improved our results much, owing to its low quality, but we expect Planck data to allow a full analysis of CMB observations in both temperature and polarization, separately and in combination.
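The authors' error-sensitive modification is not described in enough detail here to reproduce; the following sketch shows only the textbook Richardson-Lucy iteration on a toy response matrix, to illustrate the multiplicative, positivity-preserving update the method builds on.

```python
import numpy as np

def richardson_lucy(data, kernel_matrix, n_iter=100):
    """Plain Richardson-Lucy deconvolution for d = K @ s.

    This is the textbook iteration, not the authors' error-sensitive
    modification; the multiplicative update keeps the reconstructed
    spectrum positive at every step.
    """
    K = kernel_matrix
    norm = K.T @ np.ones(K.shape[0])             # column sums of K
    s = np.full(K.shape[1], data.mean())         # flat initial guess
    for _ in range(n_iter):
        prediction = K @ s
        ratio = data / np.maximum(prediction, 1e-30)
        s *= (K.T @ ratio) / norm                # multiplicative update
    return s

# Toy example: recover a two-spike 'spectrum' blurred by a smoothing kernel.
rng = np.random.default_rng(0)
n = 50
K = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
K /= K.sum(axis=0)
truth = np.zeros(n); truth[15] = 5.0; truth[35] = 3.0
observed = np.clip(K @ truth + 0.01 * rng.standard_normal(n), 1e-12, None)
recovered = richardson_lucy(observed, K, n_iter=500)
print("recovered peak locations:", sorted(np.argsort(recovered)[-2:]))
```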
NASA Astrophysics Data System (ADS)
Walker, Ernest L.
1994-05-01
This paper presents results of a theoretical investigation to evaluate the performance of code division multiple access communications over multimode optical fiber channels in an asynchronous, multiuser communication network environment. The system is evaluated using Gold sequences for spectral spreading of the baseband signal from each user employing direct-sequence biphase shift keying and intensity modulation techniques. The transmission channel model employed is a lossless linear system approximation of the field transfer function for the alpha-profile multimode optical fiber. Due to channel model complexity, a correlation receiver model employing a suboptimal receive filter was used in calculating the peak output signal at the ith receiver. In Part 1, the performance measures for the system, i.e., signal-to-noise ratio and bit error probability for the ith receiver, are derived as functions of channel characteristics, spectral spreading, number of active users, and the bit energy to noise (white) spectral density ratio. In Part 2, the overall system performance is evaluated.
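For orientation, the standard Gaussian approximation for asynchronous DS/BPSK CDMA with random spreading sequences (a Pursley-style result) gives the idealized baseline below; the fiber-channel transfer function and the suboptimal receive filter of the paper are not modelled, and the parameter values are illustrative.

```python
import math

def ber_async_cdma(k_users, n_chips, ebn0_db):
    """Standard Gaussian approximation for the bit error probability of
    asynchronous DS/BPSK CDMA with random spreading codes; an idealized
    baseline, not the paper's fiber-channel result."""
    ebn0 = 10 ** (ebn0_db / 10)
    # Multiple-access-interference term + thermal-noise term.
    snr = 1.0 / ((k_users - 1) / (3 * n_chips) + 1.0 / (2 * ebn0))
    return 0.5 * math.erfc(math.sqrt(snr / 2))   # Q(sqrt(snr))

# Illustrative: 127-chip Gold codes, 10 dB Eb/N0, varying active users.
for k in (1, 5, 10, 20):
    print(f"K = {k:2d}: BER ≈ {ber_async_cdma(k, 127, 10.0):.2e}")
```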
Psychophysical Laws and the Superorganism.
Reina, Andreagiovanni; Bose, Thomas; Trianni, Vito; Marshall, James A R
2018-03-12
Through theoretical analysis, we show how a superorganism may react to stimulus variations according to psychophysical laws observed in humans and other animals. We investigate an empirically-motivated honeybee house-hunting model, which describes a value-sensitive decision process over potential nest-sites, at the level of the colony. In this study, we show how colony decision time increases with the number of available nests, in agreement with the Hick-Hyman law of psychophysics, and decreases with mean nest quality, in agreement with Piéron's law. We also show that colony error rate depends on mean nest quality, and difference in quality, in agreement with Weber's law. Psychophysical laws, particularly Weber's law, have been found in diverse species, including unicellular organisms. Our theoretical results predict that superorganisms may also exhibit such behaviour, suggesting that these laws arise from fundamental mechanisms of information processing and decision-making. Finally, we propose a combined psychophysical law which unifies Hick-Hyman's law and Piéron's law, traditionally studied independently; this unified law makes predictions that can be empirically tested.
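The three laws can be written down in their usual functional forms; the coefficients in this sketch are made up for illustration and are not the colony-level parameters of the model.

```python
import math

def hick_hyman_time(n_options, a=0.2, b=0.6):
    """Decision time grows logarithmically with the number of options."""
    return a + b * math.log2(n_options + 1)

def pieron_time(intensity, t0=0.3, k=1.5, beta=0.8):
    """Reaction time falls as a power law of stimulus intensity (quality)."""
    return t0 + k * intensity ** (-beta)

def weber_discriminable(i1, i2, w=0.1):
    """Weber's law: two stimuli are discriminable when their difference
    exceeds a constant fraction w of their magnitude."""
    return abs(i1 - i2) >= w * min(i1, i2)

print(f"{hick_hyman_time(8):.2f} s decision time for 8 nest sites")
print(f"{pieron_time(5.0):.2f} s at mean nest quality 5")
print("qualities 5.0 vs 5.3 discriminable:", weber_discriminable(5.0, 5.3))
```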
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method exhibits a very small truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
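ETMOL itself is not specified in the abstract; the sketch below illustrates the underlying idea of combining a method-of-lines space discretization with an exponential-in-time update, which is exact in time for the linear semi-discrete system du/dt = Au arising from the 1D diffusion equation.

```python
import numpy as np
from scipy.linalg import expm

# 1D diffusion u_t = alpha * u_xx on (0, 1) with Dirichlet boundaries,
# discretized in space (method of lines) and advanced with the exact
# exponential propagator exp(A * dt); only spatial error remains.
n, alpha, dt = 50, 1.0, 0.01
h = 1.0 / (n + 1)

A = (alpha / h**2) * (np.diag(-2 * np.ones(n))
                      + np.diag(np.ones(n - 1), 1)
                      + np.diag(np.ones(n - 1), -1))

x = np.linspace(h, 1 - h, n)
u = np.sin(np.pi * x)                    # initial condition

propagator = expm(A * dt)                # exact one-step solution operator
for _ in range(10):
    u = propagator @ u                   # u(t+dt) = exp(A dt) u(t)

exact = np.exp(-np.pi**2 * alpha * 0.1) * np.sin(np.pi * x)
print(f"max error after t = 0.1: {np.abs(u - exact).max():.2e}")
```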
Bayesian randomized clinical trials: From fixed to adaptive design.
Yin, Guosheng; Lam, Chi Kin; Shi, Haolun
2017-08-01
Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have dominated phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complementary rather than a competing approach to the frequentist methods. For the fixed Bayesian design, hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which lead to a more comprehensive utilization of information from both historical and longitudinal data. Moving from fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
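As a minimal illustration of posterior-probability monitoring in a fixed Bayesian design (not the backward-induction machinery of the sequential case), the sketch below computes P(p_t > p_c | data) for two binomial arms under Beta priors; all counts and thresholds are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_treatment_better(x_t, n_t, x_c, n_c, a=0.5, b=0.5, draws=100_000):
    """Posterior probability P(p_t > p_c | data) under independent
    Beta(a, b) priors on the response rates of the two arms; a minimal
    fixed-design Bayesian comparison via Monte Carlo."""
    p_t = rng.beta(a + x_t, b + n_t - x_t, draws)
    p_c = rng.beta(a + x_c, b + n_c - x_c, draws)
    return (p_t > p_c).mean()

# Interim look: 24/40 responders on treatment vs 15/40 on control.
post = prob_treatment_better(24, 40, 15, 40)
print(f"P(treatment better) ≈ {post:.3f}")
# A simple rule might declare efficacy when this exceeds, say, 0.975.
```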
Integrated source and channel encoded digital communication system design study
NASA Technical Reports Server (NTRS)
Udalov, S.; Huth, G. K.
1977-01-01
The analysis of the forward link signal structure for the shuttle orbiter Ku-band communication system was carried out, based on the assumption of a 3.03 Mcps PN code. It is shown that acquisition requirements for the forward link can be met at an acquisition threshold C/N0 value of about 60 dB-Hz, which corresponds to a bit error rate (BER) of about 0.001. It is also shown that the tracking threshold for the forward link is at about 57 dB-Hz. The analysis of the bent-pipe concept for the orbiter was carried out, along with a comparative analysis of the empirical data. The complexity of the analytical approach warrants further investigation to reconcile the empirical and theoretical results. Techniques for incorporating a text and graphics capability into the forward link data stream are considered and a baseline configuration is described.
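The quoted thresholds can be sanity-checked with an idealized link calculation; the 216 kbps data rate assumed below is illustrative, and implementation losses, despreading loss, and coding are ignored.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber_from_cn0(cn0_dbhz, bit_rate_hz):
    """BER of ideal coherent BPSK given carrier-to-noise-density C/N0.

    Assumes Eb/N0 = C/N0 divided by the bit rate; the 216 kbps rate
    used below is an assumed forward-link data rate, and all losses
    are ignored.
    """
    ebn0_db = cn0_dbhz - 10 * math.log10(bit_rate_hz)
    ebn0 = 10 ** (ebn0_db / 10)
    return q_function(math.sqrt(2 * ebn0))

print(f"BER at 60 dB-Hz ≈ {bpsk_ber_from_cn0(60.0, 216e3):.1e}")
```

With these assumptions the result lands near 1e-3, consistent with the BER quoted at the 60 dB-Hz acquisition threshold.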
Environmental dynamics at orbital altitudes
NASA Technical Reports Server (NTRS)
Karr, G. R.
1976-01-01
The influence of real satellite aerodynamics on the determination of upper atmospheric density was investigated. A method of analysis of satellite drag data is presented which includes the effect of satellite lift and the variation in aerodynamic properties around the orbit. The studies indicate that satellite lift may be responsible for the observed orbit precession rather than a super-rotation of the upper atmosphere. The influence of simplifying assumptions concerning the aerodynamics of objects in falling sphere analysis was evaluated and an improved method of analysis was developed. Wind tunnel data were used to develop more accurate drag coefficient relationships for studying altitudes between 80 and 120 km. The improved drag coefficient relationships revealed a considerable error in previous falling sphere drag interpretation. These data were reanalyzed using the more accurate relationships. Theoretical investigations of the drag coefficient in the very low speed ratio region were also conducted.
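For orientation, the classical free-molecular drag coefficient of a sphere with diffuse reflection and full accommodation (the Schaaf-Chambre form) shows how strongly Cd varies at low speed ratio; the paper's improved wind-tunnel-based relationships are not reproduced here.

```python
import math

def cd_sphere_free_molecular(s, tw_over_tinf=1.0):
    """Free-molecular drag coefficient of a sphere with diffuse
    reflection and full accommodation (Schaaf-Chambre form), as a
    function of the speed ratio s = V / sqrt(2 R T_inf); a textbook
    relation for orientation only."""
    return (math.exp(-s**2) * (2 * s**2 + 1) / (math.sqrt(math.pi) * s**3)
            + math.erf(s) * (4 * s**4 + 4 * s**2 - 1) / (2 * s**4)
            + (2 * math.sqrt(math.pi) / (3 * s)) * math.sqrt(tw_over_tinf))

for s in (0.5, 1.0, 2.0, 5.0, 8.0):
    print(f"s = {s:3.1f}: Cd ≈ {cd_sphere_free_molecular(s):.2f}")
```

The steep growth of Cd as the speed ratio falls is exactly why low-speed-ratio falling sphere interpretation is sensitive to the drag coefficient relationship used.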
NASA Astrophysics Data System (ADS)
Chen, Jing-Bo
2014-06-01
By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that, within a relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments confirm the theoretical analysis.
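A minimal real-frequency version of such a dispersion analysis for the classical five-point scheme is sketched below; the paper's Laplace-Fourier-domain analysis follows the same pattern with a complex wavenumber, so the error levels printed here are illustrative only and differ from the 23-gridpoint figure quoted above.

```python
import numpy as np

def dispersion_error(points_per_wavelength, theta):
    """Relative wavenumber error of the classical five-point stencil for
    a 2D plane wave travelling at angle theta to the grid; a standard
    real-frequency dispersion measure."""
    kh = 2 * np.pi / points_per_wavelength           # true k times h
    kxh, kyh = kh * np.cos(theta), kh * np.sin(theta)
    # Five-point scheme: k_num^2 = (4/h^2)(sin^2(kxh/2) + sin^2(kyh/2))
    knum_over_k = (2.0 / kh) * np.sqrt(np.sin(kxh / 2) ** 2
                                       + np.sin(kyh / 2) ** 2)
    return abs(knum_over_k - 1.0)

for g in (4, 8, 16, 23):
    errs = [dispersion_error(g, th) for th in np.linspace(0, np.pi / 4, 10)]
    print(f"G = {g:2d} points/wavelength: max error = {max(errs) * 100:.2f}%")
```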
Modelling vertical error in LiDAR-derived digital elevation models
NASA Astrophysics Data System (ADS)
Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.
2010-01-01
A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R² = 0.9856; p < 0.001). In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
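The theoretical component can be illustrated by propagating an assumed i.i.d. sample data error through the five-neighbour IDW weights; the slope- and density-dependent information-loss term is empirical in the paper and is only noted here as an additive variance. The distances and SDE below are made up.

```python
import numpy as np

def idw_interpolation_variance(distances, sigma_sde, power=2):
    """Variance of an IDW estimate propagated from the sample data error,
    using the five nearest neighbours as in the paper.

    var(z_hat) = sum(w_i^2) * sigma_sde^2 for weights w_i summing to 1,
    assuming uncorrelated errors of equal variance at the sample points.
    """
    d = np.asarray(distances, dtype=float)[:5]   # five closest neighbours
    w = d ** (-power)
    w /= w.sum()
    return (w ** 2).sum() * sigma_sde ** 2

# Illustrative: 0.10 m SDE, neighbours at 1-3 m from the interpolated node.
sigma = np.sqrt(idw_interpolation_variance([1.0, 1.5, 2.0, 2.5, 3.0], 0.10))
print(f"propagated SDE at interpolated point ≈ {sigma:.3f} m")
# The empirical 'information loss' term (slope- and density-dependent)
# would be added in quadrature to this propagated variance.
```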
Analysis of pre-flight modulator voltage calibration data for the Voyager plasma science experiment
NASA Technical Reports Server (NTRS)
Nastov, Ognen
1988-01-01
The Voyager Plasma Science (PLS) modulator calibration (MVM) data analysis was undertaken in order to check the correctness of the fast A/D converter formulas that connect low voltage monitor signals (MV) with digital outputs (DN), to determine the proportionality constants between the actual modulator grid potential (V) and the monitor voltage (MV), and to establish an algorithm to link the digitized readouts (DN) with the actual grid potential (V). The analysis results are surprising in that the derived conversion constants deviate by fairly significant amounts from their nominal values. However, it must be kept in mind that the test results used for the analysis may be very imprecise. Even if it is assumed that the test result errors are very large, they do not appear capable of accounting for all discrepancies between the theoretical expectations and the results of the analysis. Measurements with the flight spare instrument appear to be the only means of investigating these effects further.
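The conversion-constant check described amounts to a straight-line least-squares fit of DN against MV; the sketch below uses made-up data points purely to show the procedure.

```python
import numpy as np

# Fit DN = slope * MV + intercept; the slope and intercept are the
# conversion constants to compare against their nominal values.
# All data points here are made up for illustration.
mv = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # monitor voltage (V)
dn = np.array([2, 53, 101, 155, 204, 252, 305])       # digital output counts

slope, intercept = np.polyfit(mv, dn, 1)
residuals = dn - (slope * mv + intercept)
print(f"DN ≈ {slope:.2f} * MV + {intercept:.2f}")
print(f"rms residual = {residuals.std(ddof=2):.2f} counts")
```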