#### Sample records for numerical error analysis

1. Minimizing Errors in Numerical Analysis of Chemical Data.

ERIC Educational Resources Information Center

Rusling, James F.

1988-01-01

Investigates minimizing errors in computational methods commonly used in chemistry. Provides a series of examples illustrating the propagation of errors, finite difference methods, and nonlinear regression analysis. Includes illustrations to explain these concepts. (MVL)
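The propagation-of-errors examples this record mentions follow the standard first-order rule; a minimal Python sketch for a quotient of two measured quantities (the function name and sample values are ours, not from the article):

```python
import math

def propagated_error(x, sx, y, sy):
    """First-order (Gaussian) error propagation for f = x / y:
    relative uncertainties add in quadrature."""
    f = x / y
    sf = abs(f) * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
    return f, sf

# Two measured quantities with absolute uncertainties sx and sy:
f, sf = propagated_error(2.50, 0.02, 1.25, 0.01)
```

The same quadrature rule, with partial derivatives as weights, extends to any differentiable function of the measurements.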

2. Numerical errors in the real-height analysis of ionograms at high latitudes

SciTech Connect

Titheridge, J.E.

1987-10-01

A simple dual-range integration method for maintaining accuracy in the analysis of real-height ionograms at high latitudes up to a dip angle of 89 deg is presented. Numerical errors are reduced to zero for the start and valley calculations at all dip angles up to 89.9 deg. It is noted that the extreme errors which occur at high latitudes can alternatively be reduced by using a decreased value for the dip angle. An expression for the optimum dip angle for different integration orders and frequency intervals is given. 17 references.

3. Error Analysis

Scherer, Philipp O. J.

Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
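The two effects described here, rounding on input and error growth in composite operations, can be seen in a few lines of Python (our illustration, not the chapter's own example):

```python
import math

# Machine numbers are a finite subset of the reals, so decimal inputs
# are already rounded on entry; the familiar example:
a = 0.1 + 0.2          # a is 0.30000000000000004, not 0.3

# Larger errors arise when nearly equal numbers are subtracted
# (catastrophic cancellation). The two expressions below are
# algebraically identical, but only the second is numerically stable.
x = 1e-8
naive = (1 - math.cos(x)) / x**2         # nearly all digits lost
stable = 2 * math.sin(x / 2)**2 / x**2   # close to the true value 0.5
```

Rewriting an expression to avoid the subtraction of nearly equal quantities, as in the second form, is the standard cure.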

4. Numerical Error Estimation with UQ

Ackmann, Jan; Korn, Peter; Marotzke, Jochem

2014-05-01

Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics) which are frequently used in geophysical fluid dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". Its basic idea is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from computational fluid dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not treated deterministically but are interpreted as local model uncertainty and described stochastically by a random process. The parameters of the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was evaluated only for inviscid flows without lateral boundaries in a shallow-water framework and is hence of limited use in a numerical ocean model. Our work extends the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information depends on the viscosity parameter, making our uncertainty measures viscosity-dependent. We

5. Numerical errors in the presence of steep topography: analysis and alternatives

SciTech Connect

Lundquist, K A; Chow, F K; Lundquist, J K

2010-04-15

It is well known in computational fluid dynamics that grid quality affects the accuracy of numerical solutions. When assessing grid quality, properties such as aspect ratio, orthogonality of coordinate surfaces, and cell volume are considered. Mesoscale atmospheric models generally use terrain-following coordinates with large aspect ratios near the surface. As high-resolution numerical simulations are increasingly used to study topographically forced flows, a high degree of non-orthogonality is introduced, especially in the vicinity of steep terrain slopes. Numerical errors associated with the use of terrain-following coordinates can adversely affect the accuracy of the solution in steep terrain. Inaccuracies from the coordinate transformation are present in each spatially discretized term of the Navier-Stokes equations, as well as in the conservation equations for scalars. In particular, errors in the computation of horizontal pressure gradients, diffusion, and horizontal advection terms have been noted in the presence of sloping coordinate surfaces and steep topography. In this work we study the effects of these spatial discretization errors on the flow solution for three canonical cases: scalar advection over a mountain, an atmosphere at rest over a hill, and forced advection over a hill. This study is completed using the Weather Research and Forecasting (WRF) model. Simulations with terrain-following coordinates are compared to those using a flat coordinate, where terrain is represented with the immersed boundary method. The immersed boundary method is used as a tool which allows us to eliminate the terrain-following coordinate transformation, and quantify numerical errors through a direct comparison of the two solutions. Additionally, the effects of related issues such as the steepness of terrain slope and grid aspect ratio are studied in an effort to gain an understanding of numerical domains where terrain-following coordinates can successfully be used and

6. Some Surprising Errors in Numerical Differentiation

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2012-01-01

Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
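The error pattern the abstract alludes to is easy to reproduce; a minimal sketch (our example, not the article's data) of the Newton difference-quotient for the sine function:

```python
import math

# Forward difference-quotient approximation to d/dx sin(x) at x = 1.
# Truncation error shrinks like h while rounding error grows like 1/h,
# so the total error falls and then rises again as h decreases.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

errors = {h: abs(forward_diff(math.sin, 1.0, h) - math.cos(1.0))
          for h in (1e-1, 1e-8, 1e-15)}
```

The intermediate step size is far more accurate than either the large one (dominated by truncation) or the tiny one (dominated by cancellation).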

8. Automatic Error Analysis Using Intervals

ERIC Educational Resources Information Center

Rothwell, E. J.; Cloud, M. J.

2012-01-01

A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
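The core idea can be sketched without INTLAB: each arithmetic operation returns an interval guaranteed to contain the exact result, so the width of the final interval bounds the accumulated error. A toy Python version (which, unlike a real package such as INTLAB, omits the outward rounding needed for rigorous bounds):

```python
# A minimal interval type: operations return enclosures of the exact
# result, so the final width bounds the accumulated error.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo

x = Interval(1.9, 2.1)   # a measurement 2.0 +/- 0.1
y = Interval(2.9, 3.1)   # a measurement 3.0 +/- 0.1
z = x * y + x            # guaranteed enclosure of x*y + x
```

A single pass through the formula yields the bound, with no partial derivatives to compute, which is the "much less effort" advantage the abstract notes.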

10. Analysis of errors introduced by geographic coordinate systems on weather numeric prediction modeling

Cao, Yanni; Cervone, Guido; Barkley, Zachary; Lauvaux, Thomas; Deng, Aijun; Taylor, Alan

2017-09-01

Most atmospheric models, including the Weather Research and Forecasting (WRF) model, use a spherical geographic coordinate system to internally represent input data and perform computations. However, most geographic information system (GIS) input data used by the models are based on a spheroid datum because it better represents the actual geometry of the earth. WRF and other atmospheric models use these GIS input layers as if they were in a spherical coordinate system without accounting for the difference in datum. When GIS layers are not properly reprojected, latitudinal errors of up to 21 km in the midlatitudes are introduced. Recent studies have suggested that for very high-resolution applications, the difference in datum in the GIS input data (e.g., terrain, land use, orography) should be taken into account. However, the magnitude of errors introduced by the difference in coordinate systems remains unclear. This research quantifies the effect of using a spherical vs. a spheroid datum for the input GIS layers used by WRF to study greenhouse gas transport and dispersion in northeast Pennsylvania.
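The scale of the datum error can be sketched directly: reading a WGS84 geodetic latitude as if it were a spherical (geocentric) latitude shifts positions north-south. The snippet below is our illustration of that sphere-vs-spheroid offset, not a reproduction of WRF's projection code:

```python
import math

F = 1 / 298.257223563   # WGS84 flattening

def offset_km(lat_deg):
    """North-south offset (km) from treating a WGS84 geodetic
    latitude as a spherical/geocentric latitude."""
    lat = math.radians(lat_deg)
    geocentric = math.atan((1 - F) ** 2 * math.tan(lat))
    return math.degrees(lat - geocentric) * 111.32   # ~km per degree

# The offset vanishes at the equator and peaks near 45 degrees
# latitude, where it is roughly 21 km.
```

That midlatitude maximum of about 21 km matches the error scale reported in the abstract.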

11. Online machining error estimation method of numerical control gear grinding machine tool based on data analysis of internal sensors

Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin

2016-12-01

This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of a numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model of cutting error estimation was proposed to compute the relative position of the cutting point and tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. To verify the effectiveness of the proposed model, it was evaluated in simulations and experiments on a gear generating grinding process. The cutting error of the gear was estimated and the factors which induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.

12. Correcting numerical integration errors caused by small aliasing errors

SciTech Connect

Smallwood, D.O.

1997-11-01

Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.

13. Error Analysis of Quadrature Rules. Classroom Notes

ERIC Educational Resources Information Center

Glaister, P.

2004-01-01

Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
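The truncation-error behaviour discussed here can be checked numerically; the sketch below (our example, not the article's) verifies the O(h^4) error of composite Simpson's rule on a smooth integrand:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

exact = math.e - 1   # integral of exp(x) over [0, 1]
e1 = abs(simpson(math.exp, 0.0, 1.0, 8) - exact)
e2 = abs(simpson(math.exp, 0.0, 1.0, 16) - exact)
ratio = e1 / e2      # halving h cuts the error ~16x, i.e. O(h**4)
```

The error ratio close to 2^4 = 16 is exactly the signature a truncation-error analysis of the rule predicts.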

15. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

SciTech Connect

Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

2004-07-26

We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

16. Error analysis of analytic solutions for self-excited near-symmetric rigid bodies - A numerical study

NASA Technical Reports Server (NTRS)

Kia, T.; Longuski, J. M.

1984-01-01

Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.

17. Errata: Papers in Error Analysis.

ERIC Educational Resources Information Center

Svartvik, Jan, Ed.

Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

18. Error Estimates for Numerical Integration Rules

ERIC Educational Resources Information Center

Mercer, Peter R.

2005-01-01

The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.

19. Residents' numeric inputting error in computerized physician order entry prescription.

PubMed

Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

2016-04-01

Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human computer interaction (HCI), produce different error rates and types, but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors of prescription, as well as categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting methods (numeric row in the main keyboard vs. numeric keypad) and urgency levels (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were also measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportion of transposition and intrusion error types was significantly higher than in previous research. Among the numbers 3, 8, and 9, which are the less common digits used in prescriptions, the error rate was higher, posing a great risk to patient safety. Urgency played a more important role in CPOE numeric typing error-making than typing skills and typing habits. Inputting with the numeric keypad was recommended because it had lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial

20. Uncertainty quantification and error analysis

SciTech Connect

Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

2010-01-01

UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

1. Error estimates of numerical solutions for a cyclic plasticity problem

Han, W.

A cyclic plasticity problem is numerically analyzed in [13], where a sub-optimal order error estimate is shown for a spatially discrete scheme. In this note, we prove an optimal order error estimate for the spatially discrete scheme under the same solution regularity condition. We also derive an error estimate for a fully discrete scheme for solving the plasticity problem.

2. Beta systems error analysis

NASA Technical Reports Server (NTRS)

1984-01-01

The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode, is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

3. Error Analysis in Mathematics Education.

ERIC Educational Resources Information Center

Rittner, Max

1982-01-01

The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

4. Skylab water balance error analysis

NASA Technical Reports Server (NTRS)

Leonard, J. I.

1977-01-01

Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

5. Analysis of discretization errors in LES

NASA Technical Reports Server (NTRS)

Ghosal, Sandip

1995-01-01

All numerical simulations of turbulence (DNS or LES) involve some discretization errors. The integrity of such simulations therefore depends on our ability to quantify and control such errors. In the classical literature on analysis of errors in partial differential equations, one typically studies simple linear equations (such as the wave equation or Laplace's equation). The qualitative insight gained from studying such simple situations is then used to design numerical methods for more complex problems such as the Navier-Stokes equations. Though such an approach may seem reasonable as a first approximation, it should be recognized that strongly nonlinear problems, such as turbulence, have a feature that is absent in linear problems. This feature is the simultaneous presence of a continuum of space and time scales. Thus, in an analysis of errors in the one-dimensional wave equation, one may, without loss of generality, rescale the equations so that the dependent variable is always of order unity. This is not possible in the turbulence problem since the amplitudes of the Fourier modes of the velocity field have a continuous distribution. The objective of the present research is to provide some quantitative measures of numerical errors in such situations. Though the focus of this work is LES, the methods introduced here can be just as easily applied to DNS. Errors due to discretization of the time-variable are neglected for the purpose of this analysis.

6. Error analysis in laparoscopic surgery

Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

1998-06-01

Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, which classifies errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are often brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

7. An Ensemble-type Approach to Numerical Error Estimation

Ackmann, J.; Marotzke, J.; Korn, P.

2015-12-01

The estimation of the numerical error in a specific physical quantity of interest (goal) is of key importance in geophysical modelling. Towards this aim, we have formulated an algorithm that combines elements of the classical dual-weighted error estimation with stochastic methods. Our algorithm is based on the Dual-weighted Residual method in which the residual of the model solution is weighed by the adjoint solution, i.e. by the sensitivities of the goal towards the residual. We extend this method by modelling the residual as a stochastic process. Parameterizing the residual by a stochastic process was motivated by the Mori-Zwanzig formalism from statistical mechanics. Here, we apply our approach to two-dimensional shallow-water flows with lateral boundaries and an eddy viscosity parameterization. We employ different parameters of the stochastic process for different dynamical regimes in different regions. We find that for each region the temporal fluctuations of local truncation errors (discrete residuals) can be interpreted stochastically by a Laplace-distributed random variable. Assuming that these random variables are fully correlated in time leads to a stochastic process that parameterizes a problem-dependent temporal evolution of local truncation errors. The parameters of this stochastic process are estimated from short, near-initial, high-resolution simulations. Under the assumption that the estimated parameters can be extrapolated to the full time window of the error estimation, the estimated stochastic process is proven to be a valid surrogate for the local truncation errors. Replacing the local truncation errors by a stochastic process puts our method within the class of ensemble methods and makes the resulting error estimator a random variable. The result of our error estimator is thus a confidence interval on the error in the respective goal. We will show error estimates for two 2D ocean-type experiments and provide an outlook for the 3D case.

8. A Classroom Note on: Building on Errors in Numerical Integration

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2011-01-01

In both baseball and mathematics education, the conventional wisdom is to avoid errors at all costs. That advice might be on target in baseball, but in mathematics, it is not always the best strategy. Sometimes an analysis of errors provides much deeper insights into mathematical ideas and, rather than something to eschew, certain types of errors…

10. Errors from Image Analysis

SciTech Connect

Wood, William Monford

2015-02-23

Presenting a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and making suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

11. Human Error: A Concept Analysis

NASA Technical Reports Server (NTRS)

Hansen, Frederick D.

2007-01-01

Human error is the subject of research in almost every industry and profession of our times. The term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

12. SUS Source Level Error Analysis

DTIC Science & Technology

1978-01-20

Keywords: Fast Fourier Transform (FFT); SUS signal model. The report provides an analysis of major terms which contribute to signal analysis error in a proposed experiment to calibrate source levels of SUS (Signal Underwater Sound).

13. Analysis of Medication Error Reports

SciTech Connect

Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

2004-11-15

In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

14. Antenna trajectory error analysis in backprojection-based SAR images

Wang, Ling; Yazıcı, Birsen; Yanik, H. Cagri

2014-06-01

We present an analysis of the positioning errors in Backprojection (BP)-based Synthetic Aperture Radar (SAR) images due to antenna trajectory errors for a monostatic SAR traversing a straight linear trajectory. Our analysis is developed using microlocal analysis, which can provide an explicit quantitative relationship between the trajectory error and the positioning error in BP-based SAR images. The analysis is applicable to arbitrary trajectory errors in the antenna and can be extended to arbitrary imaging geometries. We present numerical simulations to demonstrate our analysis.

15. Orbit IMU alignment: Error analysis

NASA Technical Reports Server (NTRS)

Corson, R. W.

1980-01-01

A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

16. Having Fun with Error Analysis

ERIC Educational Resources Information Center

Siegel, Peter

2007-01-01

We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

17. Error Analysis and Remedial Teaching.

ERIC Educational Resources Information Center

Corder, S. Pit

The purpose of this paper is to analyze the role of error analysis in specifying and planning remedial treatment in second language learning. Part 1 discusses situations that demand remedial action. This is a quantitative assessment that requires measurement of the varying degrees of disparity between the learner's knowledge and the demands of the…

19. Condition and Error Estimates in Numerical Matrix Computations

SciTech Connect

Konstantinov, M. M.; Petkov, P. H.

2008-10-30

This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of results computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

20. Measurement Error and Equating Error in Power Analysis

ERIC Educational Resources Information Center

Phillips, Gary W.; Jiang, Tao

2016-01-01

Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

2. Static Analysis of Numerical Algorithms

DTIC Science & Technology

2016-04-01

…and Honeywell Aerospace Advanced Technology to combine model-based development of complex avionics control software with static analysis of the…numerical algorithms. Keywords: numerical algorithms, software verification, formal methods.

3. Numerical study of an error model for a strap-down INS

Grigorie, T. L.; Sandu, D. G.; Corcau, C. L.

2016-10-01

The paper presents a numerical study of a mathematical error model developed for a strap-down inertial navigation system. The study aims to validate the error model by using Matlab/Simulink software models implementing the inertial navigator and the error model mathematics. To generate the inputs for the evaluation, Matlab/Simulink models of the inertial sensors are used. The sensor models were developed based on the IEEE equivalent models for the inertial sensors and on analysis of the data sheets of real inertial sensors. The paper successively presents the inertial navigation equations (attitude, position, and speed), the mathematics of the inertial navigator error model, the software implementations, and the numerical evaluation results.

4. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

USGS Publications Warehouse

1987-01-01

Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

5. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

PubMed

Yan, Ying; Yi, Grace Y

2016-07-01

Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
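
The regression calibration idea mentioned in the abstract can be illustrated in a deliberately simplified linear-model toy setting (not the authors' additive-hazards estimator); all names and parameter values below are illustrative:

```python
import random
import statistics

def slope(xs, ys):
    """Ordinary least-squares slope of y on x (single covariate)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
    sxx = sum((xi - mx) ** 2 for xi in xs)
    return sxy / sxx

random.seed(0)
n, beta, sd_u = 5000, 1.0, 0.5
x = [random.gauss(0.0, 1.0) for _ in range(n)]   # true covariate (unobserved in practice)
w = [xi + random.gauss(0.0, sd_u) for xi in x]   # error-prone surrogate actually measured
y = [beta * xi + random.gauss(0.0, 0.2) for xi in x]

naive = slope(w, y)            # attenuated toward zero, roughly beta / (1 + sd_u**2)
# reliability ratio; computed from the true x here only for illustration --
# in practice it comes from replicate or validation data
lam = statistics.variance(x) / statistics.variance(w)
mw = statistics.fmean(w)
x_hat = [mw + lam * (wi - mw) for wi in w]       # calibration proxy for E[X | W]
corrected = slope(x_hat, y)    # approximately recovers beta
```

The naive fit on the surrogate is biased toward zero; regressing on the calibrated proxy removes most of that attenuation.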

6. Error analysis of optical correlators

NASA Technical Reports Server (NTRS)

Ma, Paul W.; Reid, Max B.; Downie, John D.

1992-01-01

With the growing interest in using binary phase-only filters (BPOFs) in optical correlators implemented on magnetooptic spatial light modulators, an understanding of the effect of errors in system alignment and optical components is critical to obtaining optimal system performance. We present simulations of optical correlator performance degradation in the presence of eight errors. We break these eight errors into three groups: (1) alignment errors, (2) errors due to a combination of component imperfections and alignment errors, and (3) errors which result solely from non-ideal components. Under the first group, we simulate errors in the distance from the object to the first principal plane of the transform lens, the distance from the second principal plane of the transform lens to the filter plane, and rotational misalignment of the input mask with the filter mask. Next we consider errors which result from a combination of alignment and component imperfections. These include errors in the transform lens, the phase compensation lens, and the inverse Fourier transform lens. Lastly, we have the component errors resulting from the choice of spatial light modulator. These include contrast error and phase errors caused by the non-uniform flatness of the masks. The effects of each individual error are discussed, and the result of combining all eight errors under assumptions of reasonable tolerances and system parameters is also presented. Conclusions are drawn as to which tolerances are most critical for optimal system performance.

7. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

NASA Technical Reports Server (NTRS)

Prive, Nikki C.; Errico, Ronald M.

2013-01-01

A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

8. Numerical Analysis Objects

Henderson, Michael

1997-08-01

The Numerical Analysis Objects project (NAO) is a project in the Mathematics Department of IBM's TJ Watson Research Center. While there are plenty of numerical tools available today, it is not an easy task to combine them into a custom application. NAO is directed at the dual problems of building applications from a set of tools, and creating those tools. There are several "reuse" projects, which focus on the problems of identifying and cataloging tools. NAO is directed at the specific context of scientific computing. Because the type of tools is restricted, problems such as tools with incompatible data structures for input and output, and dissimilar interfaces to tools which solve similar problems can be addressed. The approach we've taken is to define interfaces to those objects used in numerical analysis, such as geometries, functions and operators, and to start collecting (and building) a set of tools which use these interfaces. We have written a class library (a set of abstract classes and implementations) in C++ which demonstrates the approach. Besides the classes, the class library includes "stub" routines which allow the library to be used from C or Fortran, and an interface to a Visual Programming Language. The library has been used to build a simulator for petroleum reservoirs, using a set of tools for discretizing nonlinear differential equations that we have written, and includes "wrapped" versions of packages from the Netlib repository. Documentation can be found on the Web at "http://www.research.ibm.com/nao". I will describe the objects and their interfaces, and give examples ranging from mesh generation to solving differential equations.
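
The interface idea described above can be sketched in Python (a hypothetical analogue, not the NAO C++ library itself): functions and operators share abstract interfaces, so tools that consume them become interchangeable.

```python
from abc import ABC, abstractmethod

class Function(ABC):
    """Shared interface: a tool consuming a Function need not know
    how a particular implementation computes its values."""
    @abstractmethod
    def __call__(self, x: float) -> float: ...

class Operator(ABC):
    """An Operator maps Functions to Functions, so discretizations
    and solvers can be swapped behind one interface."""
    @abstractmethod
    def apply(self, f: Function) -> Function: ...

class Square(Function):
    def __call__(self, x: float) -> float:
        return x * x

class CentralDifference(Operator):
    """One interchangeable differentiation tool behind the Operator interface."""
    def __init__(self, h: float = 1e-6):
        self.h = h

    def apply(self, f: Function) -> Function:
        h = self.h
        class Derivative(Function):
            def __call__(self, x: float) -> float:
                return (f(x + h) - f(x - h)) / (2.0 * h)
        return Derivative()

d = CentralDifference().apply(Square())   # d(3.0) is close to 6.0
```

Any other Operator implementation (a spectral differentiator, say) could be dropped in without touching the calling code, which is the reuse point the project makes.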

9. GP-B error modeling and analysis

NASA Technical Reports Server (NTRS)

Hung, J. C.

1982-01-01

Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

10. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

USGS Publications Warehouse

1985-01-01

Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

11. Error Analysis and the EFL Classroom Teaching

ERIC Educational Resources Information Center

Xie, Fang; Jiang, Xue-mei

2007-01-01

This paper makes a study of error analysis and its implementation in the EFL (English as Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis), the various reasons causing errors are comprehensively explored. The author proposes that teachers should employ…

12. Harmless error analysis: How do judges respond to confession errors?

PubMed

Wallace, D Brian; Kassin, Saul M

2012-04-01

In Arizona v. Fulminante (1991), the U.S. Supreme Court opened the door for appellate judges to conduct a harmless error analysis of erroneously admitted, coerced confessions. In this study, 132 judges from three states read a murder case summary, evaluated the defendant's guilt, assessed the voluntariness of his confession, and responded to implicit and explicit measures of harmless error. Results indicated that judges found a high-pressure confession to be coerced and hence improperly admitted into evidence. As in studies with mock jurors, however, the improper confession significantly increased their conviction rate in the absence of other evidence. On the harmless error measures, judges successfully overruled the confession when required to do so, indicating that they are capable of this analysis.

13. Error Analysis in Mathematics. Technical Report #1012

ERIC Educational Resources Information Center

Lai, Cheng-Fei

2012-01-01

Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

14. Analysis and classification of human error

NASA Technical Reports Server (NTRS)

Rouse, W. B.; Rouse, S. H.

1983-01-01

The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.

16. Numerical Relativity meets Data Analysis

Schmidt, Patricia

2016-03-01

Gravitational waveforms (GW) from coalescing black hole binaries obtained by Numerical Relativity (NR) play a crucial role in the construction and validation of waveform models used as templates in GW matched filter searches and parameter estimation. In previous efforts, notably the NINJA and NINJA-2 collaborations, NR groups and data analysts worked closely together to use NR waveforms as mock GW signals to test the search and parameter estimation pipelines employed by LIGO. Recently, however, NR groups have been able to simulate hundreds of different binary black hole systems. It is desirable to directly use these waveforms in GW data analysis, for example to assess systematic errors in waveform models, to test general relativity or to appraise the limitations of aligned-spin searches among many other applications. In this talk, I will introduce recent developments that aim to fully integrate NR waveforms into the data analysis pipelines through a standardized interface. I will highlight a number of select applications for this new infrastructure.

17. Synthetic aperture interferometry: error analysis

SciTech Connect

Biswas, Amiya; Coupland, Jeremy

2010-07-10

Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

18. Error analysis using organizational simulation.

PubMed Central

Fridsma, D. B.

2000-01-01

Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

19. ISMP Medication Error Report Analysis.

PubMed

2013-10-01

20. ISMP Medication Error Report Analysis.

PubMed

2014-01-01

1. ISMP Medication Error Report Analysis.

PubMed

2013-05-01

2. ISMP Medication Error Report Analysis.

PubMed

2013-12-01

3. ISMP Medication Error Report Analysis.

PubMed

2013-11-01

4. ISMP Medication error report analysis.

PubMed

2013-04-01

5. ISMP Medication Error Report Analysis.

PubMed

2013-06-01

6. ISMP Medication Error Report Analysis.

PubMed

2013-01-01

7. ISMP Medication Error Report Analysis.

PubMed

2013-02-01

8. ISMP Medication Error Report Analysis.

PubMed

2013-03-01

9. ISMP Medication Error Report Analysis.

PubMed

2013-09-01

10. ISMP Medication Error Report Analysis.

PubMed

2013-07-01

11. Error Analysis: Past, Present, and Future

ERIC Educational Resources Information Center

McCloskey, George

2017-01-01

This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

13. GP-B error modeling and analysis

NASA Technical Reports Server (NTRS)

1984-01-01

The analysis and modeling for the Gravity Probe B (GP-B) experiment are reported. The finite-wordlength induced errors in the Kalman filtering computation were refined. Errors in the crude result were corrected, improved derivation steps were taken, and better justifications are given. The errors associated with the suppression of the 1/f noise were analyzed by rolling the spacecraft and then performing a derolling operation in computation.

14. Error analysis of corner cutting algorithms

Mainar, E.; Peña, J. M.

1999-10-01

Corner cutting algorithms are used in different fields and, in particular, play a relevant role in Computer Aided Geometric Design. Evaluation algorithms such as the de Casteljau algorithm for polynomials and the de Boor-Cox algorithm for B-splines are examples of corner cutting algorithms. Here, backward and forward error analyses of corner cutting algorithms are performed. The running error is also analyzed, and as a consequence the general algorithm is modified to include the computation of an error bound.
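
The running-error idea can be sketched for the de Casteljau algorithm: carry an error accumulator through the same corner-cutting recurrence as the value itself. The per-step constant below is an illustrative first-order bound, not the paper's exact one.

```python
import math

def de_casteljau_with_error(coeffs, t):
    """Evaluate a polynomial in Bernstein form at t in [0, 1] by corner
    cutting, accumulating a running rounding-error bound alongside it."""
    u = math.ulp(1.0) / 2          # unit roundoff of IEEE double precision
    b = list(coeffs)               # Bernstein coefficients b_0 .. b_n
    e = [0.0] * len(b)             # running error estimates per coefficient
    n = len(b) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            b_new = (1.0 - t) * b[i] + t * b[i + 1]
            # convex combination: propagate old errors and add a small
            # multiple of u*|result| for the new rounding errors
            e[i] = (1.0 - t) * e[i] + t * e[i + 1] + 3.0 * u * abs(b_new)
            b[i] = b_new
    return b[0], e[0]
```

Because every intermediate value is a convex combination of the inputs, the accumulated bound stays proportional to the magnitude of the data, which is why corner cutting evaluation is so well behaved numerically.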

15. Error analysis of finite element solutions for postbuckled cylinders

NASA Technical Reports Server (NTRS)

Sistla, Rajaram; Thurston, Gaylen A.

1989-01-01

A general method of error analysis and correction is investigated for the discrete finite-element results for cylindrical shell structures. The method for error analysis is an adaptation of the method of successive approximation. When applied to the equilibrium equations of shell theory, successive approximations derive an approximate continuous solution from the discrete finite-element results. The advantage of this continuous solution is that it contains continuous partial derivatives of an order higher than the basis functions of the finite-element solution. Preliminary numerical results are presented in this paper for the error analysis of finite-element results for a postbuckled stiffened cylindrical panel modeled by a general purpose shell code. Numerical results from the method have previously been reported for postbuckled stiffened plates. A procedure for correcting the continuous approximate solution by Newton's method is outlined.

16. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

ERIC Educational Resources Information Center

Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

2010-01-01

This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

18. Integrated analysis of error detection and recovery

NASA Technical Reports Server (NTRS)

Shin, K. G.; Lee, Y. H.

1985-01-01

An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.

19. Symbolic Error Analysis and Robot Planning,

DTIC Science & Technology

1982-09-01

Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo No. 685, September 1982. Symbolic Error Analysis and Robot Planning, Rodney A. Brooks. Abstract: A program to control a robot manipulator…a human robot programmer. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology.

20. Error analysis of finite element solutions for postbuckled plates

NASA Technical Reports Server (NTRS)

Sistla, Rajaram; Thurston, Gaylen A.

1988-01-01

An error analysis of results from finite-element solutions of problems in shell structures is further developed, incorporating the results of an additional numerical analysis by which oscillatory behavior is eliminated. The theory is extended to plates with initial geometric imperfections, and this novel analysis is programmed as a postprocessor for a general-purpose finite-element code. Numerical results are given for the case of a stiffened panel in compression and a plate loaded in shear by a 'picture-frame' test fixture.

1. Phonological error analysis, development and empirical evaluation.

PubMed

Roeltgen, D P

1992-08-01

A method of error analysis, designed to examine phonological and nonphonological reading and spelling processes, was developed from preliminary studies and theoretical background, including a linguistic model and the relationships between articulatory features of phonemes. The usefulness of this method as an assessment tool for phonological ability was tested on a group of normal subjects. The results from the error analysis helped clarify similarities and differences in phonological performance among the subjects and helped delineate differences between phonological performance in spelling (oral and written) and reading within the group of subjects. These results support the usefulness of this method of error analysis in assessing phonological ability. Also, these results support the position that phonological approximation of responses is an important diagnostic feature and merely cataloging errors as phonologically accurate or inaccurate is inadequate for assessing phonological ability.

2. Empirical Error Analysis of GPS RO Atmospheric Profiles

Scherllin-Pirscher, B.; Steiner, A. K.; Foelsche, U.; Kirchengast, G.; Kuo, Y.

2010-12-01

In the upper troposphere and lower stratosphere (UTLS) region the radio occultation (RO) technique provides accurate profiles of atmospheric parameters. These profiles can be used in operational meteorology (i.e., numerical weather prediction), atmospheric and climate research. We present results of an empirical error analysis of GPS RO data retrieved at UCAR and at WEGC and compare data characteristics of CHAMP, GRACE-A, and Formosat-3/COSMIC. Retrieved atmospheric profiles of bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature are compared to reference profiles extracted from ECMWF analysis fields. This statistical error characterization yields a combined (RO observational plus ECMWF model) error. We restrict our analysis to the years 2007 to 2009 due to known ECMWF deficiencies prior to 2007 (e.g., deficiencies in the representation of the austral polar vortex or the weak representation of tropopause height variability). The GPS RO observational error is determined by subtracting the estimated ECMWF error from the combined error in terms of variances. Our results indicate that the estimated ECMWF error and the GPS RO observational error are approximately of the same order of magnitude. Differences between different satellites are small below 35 km. The GPS RO observational error features latitudinal and seasonal variations, which are most pronounced at stratospheric altitudes at high latitudes. We present simplified models for the observational error, which depend on a few parameters only (Steiner and Kirchengast, JGR 110, D15307, 2005). These global error models are derived from fitting simple analytical functions to the GPS RO observational error. From the lower troposphere up to the tropopause, the model error decreases closely proportional to an inverse height law. Within a core "tropopause region" of the upper troposphere/lower stratosphere the model error is constant and above this region it increases exponentially with
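
The piecewise shape of such a simplified observational-error model (inverse-height decay below the tropopause region, a constant core, exponential growth aloft) can be sketched as follows; the parameter values are hypothetical placeholders, not the fitted values of Steiner and Kirchengast:

```python
import math

def ro_error_model(z_km, s_core=0.2, z_bot=10.0, z_top=20.0, h_scale=7.5):
    """Illustrative piecewise RO observational-error model (all parameter
    values hypothetical): inverse-height decay below the core region,
    constant inside it, exponential growth above it."""
    if z_km < z_bot:
        return s_core * z_bot / z_km                 # ~ 1/z decay toward s_core
    if z_km <= z_top:
        return s_core                                # constant core region
    return s_core * math.exp((z_km - z_top) / h_scale)  # exponential growth aloft
```

The three branches meet continuously at `z_bot` and `z_top`, so a single smooth-looking profile results even though only a handful of parameters are fitted.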

3. Measurement error analysis of taxi meter

He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

2011-12-01

The error test of the taximeter covers two aspects: (1) a test of the timing error of the taximeter, and (2) a test of the distance (usage) error of the machine. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taxi meter, and the detection methods for time error and distance error are discussed as well. Type A standard uncertainty components are evaluated from repeated measurements under the same conditions, while Type B standard uncertainty components are evaluated under differing conditions. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, improving accuracy and efficiency. In practice, the meter not only compensates for limited accuracy but also ensures a fair transaction between drivers and passengers, enhancing the value of the taxi as a means of transportation.
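
For the Type A (called "Class A" in the abstract) evaluation, the GUM-style standard uncertainty of the mean of repeated readings can be sketched as:

```python
import math
import statistics

def type_a_uncertainty(readings):
    """Mean and Type A standard uncertainty of the mean of repeated
    readings: the experimental standard deviation divided by sqrt(n)."""
    n = len(readings)
    mean = statistics.fmean(readings)
    u = statistics.stdev(readings) / math.sqrt(n)   # s / sqrt(n)
    return mean, u
```

For a taximeter this would be applied to repeated timing or distance readings taken under identical conditions; Type B components (from calibration certificates, resolution limits, and so on) are combined with it separately.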

4. Numerical errors and chaotic behavior in docking simulations.

PubMed

Feher, Miklos; Williams, Christopher I

2012-03-26

This work examines the sensitivity of docking programs to tiny changes in ligand input files. The results show that nearly identical ligand input structures can produce dramatically different top-scoring docked poses. Even changing the atom order in a ligand input file can produce significantly different poses and scores. In well-behaved cases the docking variations are small and follow a normal distribution around a central pose and score, but in many cases the variations are large and reflect wildly different top scores and binding modes. The docking variations are characterized by statistical methods, and the sensitivities of high-throughput and more precise docking methods are compared. The results demonstrate that part of the docking variation is due to numerical sensitivity and potentially chaotic effects in current docking algorithms, and not solely due to incomplete ligand conformation and pose searching. These results have major implications for the way docking is currently used for pose prediction, ranking, and virtual screening. © 2012 American Chemical Society

5. Error analysis of quartz crystal resonator applications

SciTech Connect

Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.

1996-12-31

Quartz crystal resonators in chemical sensing applications are usually configured as the frequency determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating needs a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure to determine shear parameters.

6. Numerical analysis of engine instability

Habiballah, M.; Dubois, I.

Following a literature review on numerical analyses of combustion instability, to give the state of the art in the area, the paper describes the ONERA methodology used to analyze the combustion instability in liquid propellant engines. Attention is also given to a model (named Phedre) which describes the unsteady turbulent two-phase reacting flow in a liquid rocket engine combustion chamber. The model formulation includes axial or radial propellant injection, baffles, and acoustic resonators modeling, and makes it possible to treat different engine types. A numerical analysis of a cryogenic engine stability is presented, and the results of the analysis are compared with results of tests of the Viking engine and the gas generator of the Vulcain engine, showing good qualitative agreement and some general trends between experiments and numerical analysis.

7. A numerical algorithm to propagate navigation error covariance matrices associated with generalized strapdown inertial measurement units

NASA Technical Reports Server (NTRS)

Weir, Kent A.; Wells, Eugene M.

1990-01-01

The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
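The mapping-matrix style of covariance propagation mentioned above can be sketched generically. This is a minimal textbook illustration of one discrete-time propagation step, not SNAP's formulation; the transition matrix and noise values are assumptions.

```python
import numpy as np

# One step of linear covariance propagation: the error covariance P is
# mapped through a state transition matrix Phi and inflated by process
# noise Q. All numbers are illustrative.
def propagate_covariance(P, Phi, Q):
    """Discrete-time covariance propagation: P' = Phi P Phi^T + Q."""
    return Phi @ P @ Phi.T + Q

P0  = np.diag([1.0, 0.5])            # initial position/velocity error variances
Phi = np.array([[1.0, 0.1],          # constant-velocity transition, dt = 0.1
                [0.0, 1.0]])
Q   = 1e-4 * np.eye(2)               # process noise

P1 = propagate_covariance(P0, Phi, Q)
```

A Monte Carlo check, as in the abstract, would propagate an ensemble of sampled error states through the same dynamics and compare the sample covariance with P1.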

8. TOA/FOA geolocation error analysis.

SciTech Connect

Mason, John Jeffrey

2008-08-01

This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
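The idea of combining multiple position fixes to reduce location error can be sketched with inverse-covariance (information) weighting. This is a standard minimum-variance fusion sketch under an independence assumption, not necessarily the paper's exact algorithm; the fixes and covariances are made up.

```python
import numpy as np

# Minimum-variance fusion of independent position estimates: weight
# each fix by its inverse covariance. Illustrative 2D example.
def fuse_fixes(positions, covariances):
    """Combine independent position fixes into one estimate."""
    info = sum(np.linalg.inv(C) for C in covariances)          # total information
    weighted = sum(np.linalg.inv(C) @ p
                   for p, C in zip(positions, covariances))
    C_fused = np.linalg.inv(info)
    return C_fused @ weighted, C_fused

fixes = [np.array([10.0, 5.0]), np.array([10.4, 4.8])]
covs  = [np.diag([4.0, 1.0]), np.diag([1.0, 4.0])]
pos, cov = fuse_fixes(fixes, covs)
```

Note how the fused covariance (0.8 on each axis) is smaller than either input variance, which is exactly the error-reduction effect the abstract describes.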

9. Numerical Experiments in Error Control for Sound Propagation Using a Damping Layer Boundary Treatment

NASA Technical Reports Server (NTRS)

Goodrich, John W.

2017-01-01

This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O[10^-2] to O[10^-7].
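A polynomial damping profile of the kind described above can be sketched as a simple function of distance into the layer. The function name, parameters, and values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Polynomial damping coefficient: zero outside the layer, rising to
# `amplitude` at the outer edge with the given power (2, 4, 6, or 8
# in the experiments described above).
def damping_profile(x, layer_start, width, amplitude, power):
    """Damping coefficient sigma(x) for a layer of given width."""
    d = np.clip((x - layer_start) / width, 0.0, 1.0)   # fractional depth in layer
    return amplitude * d**power

x = np.linspace(0.0, 1.2, 7)
sigma = damping_profile(x, layer_start=1.0, width=0.2, amplitude=10.0, power=2)
```

In a time-accurate implementation the solution in the layer would be relaxed toward zero each step at a rate proportional to sigma, and the outermost values set to zero, matching the treatment described in the abstract.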

10. Error analysis in nuclear density functional theory

Schunck, Nicolas; McDonnell, Jordan D.; Sarich, Jason; Wild, Stefan M.; Higdon, Dave

2015-03-01

Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the Universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.

11. Error Analysis in Nuclear Density Functional Theory

SciTech Connect

Schunck, Nicolas; McDonnell, Jordan D.; Sarich, Jason; Wild, Stefan M.; Higdon, Dave

2015-03-01

Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the Universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.

12. Numeracy, Literacy and Newman's Error Analysis

ERIC Educational Resources Information Center

White, Allan Leslie

2010-01-01

Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…

13. Study of geopotential error models used in orbit determination error analysis

NASA Technical Reports Server (NTRS)

Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

1991-01-01

The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic

14. Accumulation of errors in numerical simulations of chemically reacting gas dynamics

Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.

2015-12-01

The aim of the present study is to investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, as was the accumulation of errors for simulations employing different computation strategies.

15. Error Analysis and Propagation in Metabolomics Data Analysis.

PubMed

Moseley, Hunter N B

2013-01-01

Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
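Of the methodologies listed, Monte Carlo error analysis is the simplest to sketch: propagate measurement uncertainty through a function by resampling its inputs. The function and values below are illustrative placeholders, not drawn from the review.

```python
import numpy as np

# Monte Carlo error propagation: resample uncertain inputs and take the
# spread of the outputs. Here f is an illustrative ratio of two
# measured quantities (e.g. two metabolite concentrations).
rng = np.random.default_rng(0)

def f(a, b):
    return a / b

a_samples = rng.normal(10.0, 0.5, size=100_000)   # mean 10, sd 0.5
b_samples = rng.normal(2.0, 0.1, size=100_000)    # mean 2, sd 0.1

ratio = f(a_samples, b_samples)
est, err = ratio.mean(), ratio.std()
```

For this ratio, the first-order analytical propagation gives a relative error of sqrt((0.5/10)^2 + (0.1/2)^2) ≈ 7.1%, i.e. an absolute error near 0.35; the Monte Carlo estimate agrees, and unlike the analytical formula it remains valid when f is strongly nonlinear.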

16. Error Analysis of Stochastic Gradient Descent Ranking.

PubMed

Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

2012-12-31

Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.

17. Error analysis of stochastic gradient descent ranking.

PubMed

Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

2013-06-01

Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.

18. A numerical method for multigroup slab-geometry discrete ordinates problems with no spatial truncation error

SciTech Connect

Barros, R.C. de; Larsen, E.W.

1991-01-01

A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (S_N) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup S_N equations. Numerical results are given to illustrate the method's accuracy.

19. A constrained-gradient method to control divergence errors in numerical MHD

Hopkins, Philip F.

2016-10-01

In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence-free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
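The ∇·B diagnostic that such schemes try to minimize can be sketched on a uniform periodic grid with central differences. This is a generic finite-difference illustration, not GIZMO's meshless operator; the test field is made up.

```python
import numpy as np

# Discrete divergence of a 2D field B = (Bx, By) on a periodic grid,
# via central differences. Cleaning/CG schemes aim to keep this near zero.
def div_B(Bx, By, h):
    dBx_dx = (np.roll(Bx, -1, axis=0) - np.roll(Bx, 1, axis=0)) / (2 * h)
    dBy_dy = (np.roll(By, -1, axis=1) - np.roll(By, 1, axis=1)) / (2 * h)
    return dBx_dx + dBy_dy

n, h = 32, 1.0 / 32
x, y = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
# An analytically divergence-free test field: B = (-sin(2*pi*y), sin(2*pi*x))
Bx, By = -np.sin(2 * np.pi * y), np.sin(2 * np.pi * x)
err = np.abs(div_B(Bx, By, h)).max()   # ~0 for this divergence-free field
```

In a real MHD code the same diagnostic is usually reported in dimensionless form, e.g. h·|∇·B|/|B|, which is the quantity the abstract's order-of-magnitude comparisons refer to.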

20. Scientific visualization for solar concentrating systems: Computer simulation program for visual and numerical analysis

SciTech Connect

Baum, I.V.

1996-11-01

The Computer Simulation Program for Visual and Numerical Analysis of Solar Concentrators (VNASC) and specified scientific visualization methods for computer modeling of solar concentrating systems are described. The program has two code versions (FORTRAN and C++). It visualizes a concentrating process and takes into account geometrical factors and errors, including Gauss errors of reflecting surfaces, facets alignment errors, and suntracking errors.

1. Error propagation in the numerical solutions of the differential equations of orbital mechanics

NASA Technical Reports Server (NTRS)

Bond, V. R.

1982-01-01

The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell, Encke, and Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast an element formulation has zero eigenvalues and is numerically stable.
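The amplification effect described above can be shown with a toy scalar model. This is not the paper's orbital formulation: it simply tracks how a small perturbation grows under forward Euler when the linearized system has a real positive eigenvalue lam (all values illustrative).

```python
import math

# A perturbation delta in a mode with real positive eigenvalue lam
# grows roughly as exp(lam * t); a numerical integrator reproduces
# this growth, amplifying any error seeded into that mode.
lam, dt, steps = 0.5, 0.01, 1000     # illustrative eigenvalue and step size

delta = 1e-10                        # initial numerical error
for _ in range(steps):
    delta *= (1.0 + lam * dt)        # forward-Euler growth of the error mode

growth = delta / 1e-10               # approaches exp(lam * t) with t = 10
```

An element formulation with zero eigenvalues, as the abstract notes, would leave delta unamplified: the same loop with lam = 0 keeps the error constant.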

2. Error analysis of aspheric surface with reference datum.

PubMed

Peng, Yanglin; Dai, Yifan; Chen, Shanyong; Song, Ci; Shi, Feng

2015-07-20

Severe requirements of location tolerance pose new challenges for optical component measurement, evaluation, and manufacture. Form error, location error, and the relationship between them need to be analyzed together during error analysis of an aspheric surface with a reference datum. Based on the least-squares optimization method, we develop a least-squares local optimization method to evaluate the form error of an aspheric surface with a reference datum, and then calculate the location error. From the error analysis of a machined aspheric surface, the relationship between form error and location error is revealed, and its influence on the machining process is stated. For aspheric surfaces of different radii and apertures, the variation laws are simulated by superimposing normally distributed random noise on an ideal surface. The method establishes linkages between machining and error analysis, and provides an effective guideline for error correction.

3. Error begat error: design error analysis and prevention in social infrastructure projects.

PubMed

Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

2012-09-01

Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is presented and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are improved.

4. Using PASCAL for numerical analysis

NASA Technical Reports Server (NTRS)

Volper, D.; Miller, T. C.

1978-01-01

The data structures and control structures of PASCAL enhance the coding ability of the programmer. Proposed extensions to the language further increase its usefulness in writing numeric programs and support packages for numeric programs.

5. Using PASCAL for numerical analysis

NASA Technical Reports Server (NTRS)

Volper, D.; Miller, T. C.

1978-01-01

The data structures and control structures of PASCAL enhance the coding ability of the programmer. Proposed extensions to the language further increase its usefulness in writing numeric programs and support packages for numeric programs.

6. Performance analysis of ARQ error controls under Markovian block error pattern

Cho, Young Jong; Un, Chong Kwan

1994-02-01

In this paper, we investigate the effect of forward/backward channel memory (statistical dependence in the occurrence of transmission errors) on ARQ error controls. To take into account the effect of backward channel errors in the performance analysis, we consider modified ARQ schemes with an effective retransmission strategy that prevents the deadlock incurred by errors on acknowledgments. In the study, we consider two modified go-back-N schemes, with timer control and with buffer control.

7. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2013-01-01

The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
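The data-analysis idea in this abstract — inferring the order of convergence from successive errors — can be sketched directly. For a method of order p, e_{n+1} ≈ C·e_n^p, so p is the slope of log(e_{n+1}) against log(e_n). The example below uses Newton's method on an equation with a known root; it is an illustration in the spirit of the article, not its code.

```python
import numpy as np

# Estimate convergence order from successive iteration errors.
# Newton's method on f(x) = x^2 - 2 should show p ≈ 2 (quadratic).
root = np.sqrt(2.0)
x, errors = 3.0, []
for _ in range(5):
    x = x - (x**2 - 2) / (2 * x)     # Newton step
    errors.append(abs(x - root))

e = np.array(errors[:-1])            # drop the last error (near machine precision)
p = np.polyfit(np.log(e[:-1]), np.log(e[1:]), 1)[0]   # fitted order
```

Running the same fit on bisection errors would yield p ≈ 1, and on Halley's method p ≈ 3, which is exactly the linear/quadratic/cubic distinction the article conveys.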

8. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2013-01-01

The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…

9. A Factored Implicit Scheme for Numerical Weather Prediction with Small Factorization Error

NASA Technical Reports Server (NTRS)

Augenbaum, J. M.; Cohn, S. E.; Marchesin, D.

1985-01-01

Numerical results show that, for large time steps, the factorization error can be significant, even for the slowly propagating Rossby modes. A new scheme is formulated based on a more accurate factorization of the equations. By grouping separately the terms of the equations which give rise to the fast and slow motions, the equations are factored more accurately. The fast-slow factorization eliminates the factorization error. If each of the fast and slow factors is factored again according to spatial components, the resulting scheme involves only the solution of one-dimensional linear systems and is computationally efficient. It is shown that the factorization error for the slow mode component is negligible for this new scheme.

10. Error analysis of tissue resistivity measurement.

PubMed

Tsai, Jang-Zern; Will, James A; Hubbard-Van Stelle, Scott; Cao, Hong; Tungjitkusolmun, Supan; Choy, Young Bin; Haemmerich, Dieter; Vorperian, Vicken R; Webster, John G

2002-05-01

We identified the error sources in a system for measuring tissue resistivity at eight frequencies from 1 Hz to 1 MHz using the four-terminal method. We expressed the measured resistivity with an analytical formula containing all error terms. We conducted practical error measurements with in-vivo and bench-top experiments. We averaged errors at all frequencies for all measurements. The standard deviations of error of the quantization error of the 8-bit digital oscilloscope with voltage averaging, the nonideality of the circuit, the in-vivo motion artifact and electrical interference combined to yield an error of +/- 1.19%. The dimension error in measuring the syringe tube for measuring the reference saline resistivity added +/- 1.32% error. The estimation of the working probe constant by interpolating a set of probe constants measured in reference saline solutions added +/- 0.48% error. The difference in the current magnitudes used during the probe calibration and that during the tissue resistivity measurement caused +/- 0.14% error. Variation of the electrode spacing, alignment, and electrode surface property due to the insertion of electrodes into the tissue caused +/- 0.61% error. We combined the above errors to yield an overall standard deviation error of the measured tissue resistivity of +/- 1.96%.
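The way the component errors above combine into the overall figure can be sketched with a root-sum-square, the standard rule for independent error sources; the component values are the ones reported in the abstract.

```python
import math

# Independent percentage errors add in quadrature (root-sum-square).
# Components are the per-source standard deviations from the abstract.
components = [1.19, 1.32, 0.48, 0.14, 0.61]   # percent
overall = math.sqrt(sum(e**2 for e in components))
```

The quadrature sum of the listed components comes out near the reported ±1.96%; the small residual difference is consistent with rounding of the individual components.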

11. Nonlinear grid error effects on numerical solution of partial differential equations

NASA Technical Reports Server (NTRS)

Dey, S. K.

1980-01-01

Finite difference solutions of nonlinear partial differential equations require discretizations and consequently grid errors are generated. These errors strongly affect stability and convergence properties of difference models. Previously such errors were analyzed by linearizing the difference equations for solutions. Properties of mappings of decadence were used to analyze nonlinear instabilities. Such an analysis is directly affected by initial/boundary conditions. An algorithm was developed, applied to nonlinear Burgers equations, and verified computationally. A preliminary test shows that Navier-Stokes equations may be treated similarly.

12. Localization algorithm and error analysis for micro radio-localizer

Li, Xudong; Wang, Xiaohao; Li, Qiang; Zhao, Huijie

2006-11-01

After more than ten years' research efforts on the Micro Aerial Vehicle (MAV) since it was proposed in 1990s, the stable flying platform has been matured. The next reasonable goal is to implement more practical applications for MAVs. Equipped with a micro radio-localizer, MAVs have the ability of localizing a target that transmitting radio signals, and further can be a novel promising Anti-Radiation device. A micro radio-localizer prototype and its localization principle and localization algorithm are proposed. The error analysis of the algorithm is also discussed. On the basis of the comparison of the often-used radio localization method, considering the MAVs' inherent limitation on the dimension of the antennas, a signal intensity and guidance information based localization method is proposed. Under the assumption that the electromagnetic wave obeys the free-space spreading model and the signal's power keeps unchanged, the measuring equations under different target motions are established. Localization algorithm is derived. The determination of several factors such as the number of measuring positions, numerical solving method and initial solution is discussed. Error analysis of the localization algorithm is also proposed by utilizing error analysis theory. A radio-localizer prototype is developed and experiment results are shown as well.

13. Biomedical model fitting and error analysis.

PubMed

Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri

2011-09-20

This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
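Step (ii) of the six-step process — defining a "figure-of-merit" function — can be sketched generically. The lecture's programs are MATLAB-based; the Python below is an assumed stand-in with an illustrative exponential-decay model, not the resource's own code.

```python
import numpy as np

# A figure-of-merit quantifying model-data error: the sum of squared
# residuals between a nonlinear model and measurements.
def model(t, k, n0):
    """Illustrative exponential-decay model (e.g. labeled-cell counts)."""
    return n0 * np.exp(-k * t)

def figure_of_merit(params, t, data):
    k, n0 = params
    return np.sum((data - model(t, k, n0)) ** 2)

t = np.array([0.0, 1.0, 2.0, 3.0])
data = model(t, 0.5, 100.0)                    # noise-free synthetic data
best = figure_of_merit([0.5, 100.0], t, data)  # exact parameters give zero error
```

Steps (iii)-(vi) then amount to minimizing this function over the parameters, inspecting the residuals, and estimating parameter uncertainty; fitting the nonlinear model directly, as the lecture stresses, avoids the distortions of log-linearization.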

14. Error Analysis and Propagation in Metabolomics Data Analysis

PubMed Central

Moseley, Hunter N.B.

2013-01-01

Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of ‘omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed. PMID:23667718

15. The Vertical Error Characteristics of GOES-derived Winds: Description and Impact on Numerical Weather Prediction

NASA Technical Reports Server (NTRS)

Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)

2001-01-01

Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the error of the GOES winds range from approx. 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A

16. Trends in MODIS Geolocation Error Analysis

NASA Technical Reports Server (NTRS)

Wolfe, R. E.; Nishihama, Masahiro

2009-01-01

Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.

17. Sieve Estimation of Constant and Time-Varying Coefficients in Nonlinear Ordinary Differential Equation Models by Considering Both Numerical Error and Measurement Error.

PubMed

Xue, Hongqi; Miao, Hongyu; Wu, Hulin

2010-01-01

This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^(-1/(p∧4)), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance for selecting the step size in numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as in the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics.
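
The step-size rule above can be illustrated with a toy linear ODE (an assumed stand-in, not the article's HIV model): a fourth-order method's global error shrinks roughly 16-fold when the step is halved, so the step need only be small enough that the solver error falls below the assumed measurement-noise level.

```python
import math

def rk4(f, y0, t0, t1, h):
    """Integrate dy/dt = f(t, y) from t0 to t1 with the classical RK4 method."""
    y, t = y0, t0
    for _ in range(int(round((t1 - t0) / h))):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Toy ODE dy/dt = -2y with known solution y(1) = exp(-2).
f = lambda t, y: -2.0 * y
exact = math.exp(-2.0)

err_h = abs(rk4(f, 1.0, 0.0, 1.0, 0.10) - exact)   # numerical error at step h
err_h2 = abs(rk4(f, 1.0, 0.0, 1.0, 0.05) - exact)  # ... and at step h/2

sigma = 1e-3  # hypothetical standard deviation of the measurement noise
# For a 4th-order method the error drops ~16x per halving of h, so h = 0.05
# already makes the solver error negligible next to sigma, as the
# article's step-size criterion requires.
```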

19. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

NASA Technical Reports Server (NTRS)

Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

2017-01-01

This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
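
The flavor of such bounds can be suggested with an informal sketch (this is only an illustration of the kind of estimate PRECiSA certifies formally, not its method): evaluate a dot product in double precision, recompute it exactly over the rationals, and compare the true rounding error against the classical a priori bound.

```python
from fractions import Fraction

def dot_float(xs, ys):
    """Dot product in IEEE double precision (rounded arithmetic)."""
    s = 0.0
    for x, y in zip(xs, ys):
        s += x * y
    return s

def dot_exact(xs, ys):
    """Same dot product, exact over the rationals (no rounding at all)."""
    s = Fraction(0)
    for x, y in zip(xs, ys):
        s += Fraction(x) * Fraction(y)  # Fraction(float) is exact
    return s

xs = [0.1, 0.2, 0.3, 0.4]
ys = [4.0, 3.0, 2.0, 1.0]

actual_err = abs(Fraction(dot_float(xs, ys)) - dot_exact(xs, ys))

# Classical a priori bound (Higham): |err| <= gamma_n * sum|x_i * y_i|,
# with gamma_n = n*u/(1 - n*u) and unit roundoff u = 2^-53 for doubles.
u = 2.0 ** -53
n = len(xs)
bound = (n * u / (1.0 - n * u)) * sum(abs(x * y) for x, y in zip(xs, ys))
```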

20. Numerical likelihood analysis of cosmic ray anisotropies

SciTech Connect

Carlos Hojvat et al.

2003-07-02

A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.

1. Second Language Learning: Contrastive Analysis, Error Analysis, and Related Aspects.

ERIC Educational Resources Information Center

Robinett, Betty Wallace, Ed.; Schachter, Jacquelyn, Ed.

This graduate level text on second language learning is divided into three sections. The first two sections provide a survey of the historical underpinnings of second language research in contrastive analysis and error analysis. The third section includes discussions of recent developments in the field. The first section contains articles on the…

2. Numerical Package in Computer Supported Numeric Analysis Teaching

ERIC Educational Resources Information Center

Tezer, Murat

2007-01-01

At universities in the faculties of Engineering, Sciences, Business and Economics together with higher education in Computing, it is stated that because of the difficulty, calculators and computers can be used in Numerical Analysis (NA). In this study, the learning computer supported NA will be discussed together with important usage of the…

3. On the continuum-scale simulation of gravity-driven fingers with hysteretic Richards equation: Truncation error induced numerical artifacts

SciTech Connect

ELIASSI,MEHDI; GLASS JR.,ROBERT J.

2000-03-08

The authors consider the ability of the numerical solution of Richards equation to model gravity-driven fingers. Although gravity-driven fingers can be easily simulated using a partial downwind averaging method, they find that the fingers are purely artificial, generated by the combined effects of truncation error induced oscillations and capillary hysteresis. Since Richards equation can only yield a monotonic solution for standard constitutive relations and constant flux boundary conditions, it is not a valid governing equation to model gravity-driven fingers, and therefore is also suspect for unsaturated flow in initially dry, highly nonlinear, and hysteretic media where these fingers occur. However, analysis of truncation error at the wetting front for the partial downwind method suggests the required mathematical behavior of a more comprehensive and physically based modeling approach for this region of parameter space.

4. Enthalpy-Entropy Compensation Effect in Chemical Kinetics and Experimental Errors: A Numerical Simulation Approach.

PubMed

Perez-Benito, Joaquin F; Mulero-Raichs, Mar

2016-10-06

Many kinetic studies concerning homologous reaction series report the existence of an activation enthalpy-entropy linear correlation (compensation plot), its slope being the temperature at which all the members of the series have the same rate constant (isokinetic temperature). Unfortunately, it has been demonstrated by statistical methods that the experimental errors associated with the activation enthalpy and entropy are mutually interdependent. Therefore, the possibility that some of those correlations might be caused by accidental errors has been explored by numerical simulations. As a result of this study, a computer program has been developed to evaluate the probability that experimental errors might lead to a linear compensation plot parting from an initial randomly scattered set of activation parameters (p-test). Application of this program to kinetic data for 100 homologous reaction series extracted from bibliographic sources has allowed concluding that most of the reported compensation plots can hardly be explained by the accumulation of experimental errors, thus requiring the existence of a previously existing, physically meaningful correlation.
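
The statistical artifact the authors test for can be reproduced in a few lines. In this Monte Carlo sketch (a minimal stand-in for their p-test program, with invented temperatures and noise levels), every replicate has the same true activation parameters, yet the fitted (Ea, ln A) pairs fall on a nearly perfect "compensation" line purely because the slope and intercept errors of an Arrhenius fit are mutually interdependent.

```python
import random, math

random.seed(1)
R = 8.314                              # gas constant, J/(mol K)
temps = [290.0, 300.0, 310.0, 320.0]   # narrow hypothetical T range
xs = [1.0 / T for T in temps]
true_lnA, true_Ea = 30.0, 80e3         # one "true" reaction, no compensation

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

ea_hats, lna_hats = [], []
for _ in range(2000):                  # replicate noisy Arrhenius experiments
    ys = [true_lnA - true_Ea / (R * T) + random.gauss(0.0, 0.05)
          for T in temps]
    intercept, slope = fit_line(xs, ys)
    lna_hats.append(intercept)
    ea_hats.append(-R * slope)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cab / math.sqrt(va * vb)

# Although the true parameters are identical in every replicate, the fitted
# pairs show a near-perfect apparent enthalpy-entropy compensation.
r = corr(ea_hats, lna_hats)
```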

5. Robustness of the cluster expansion: Assessing the roles of relaxation and numerical error

Nguyen, Andrew H.; Rosenbrock, Conrad W.; Reese, C. Shane; Hart, Gus L. W.

2017-07-01

Cluster expansion (CE) is effective in modeling the stability of metallic alloys, but sometimes cluster expansions fail. Failures are often attributed to atomic relaxation in the DFT-calculated data, but there is no metric for quantifying the degree of relaxation. Additionally, numerical errors can also be responsible for slow CE convergence. We studied over one hundred different Hamiltonians and identified a heuristic, based on a normalized mean-squared displacement of atomic positions in a crystal, to determine if the effects of relaxation in CE data are too severe to build a reliable CE model. Using this heuristic, CE practitioners can determine a priori whether or not an alloy system can be reliably expanded in the cluster basis. We also examined the error distributions of the fitting data. We find no clear relationship between the type of error distribution and CE prediction ability, but there are clear correlations between CE formalism reliability, model complexity, and the number of significant terms in the model. Our results show that the size of the errors is much more important than their distribution.
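
A heuristic of the kind described might be sketched as follows; the exact normalization used by the authors may differ, and the lattice sites, relaxed positions, and threshold here are all hypothetical.

```python
import math

def normalized_msd(ideal, relaxed, nn_dist):
    """Mean-squared displacement of relaxed atoms from their ideal lattice
    sites, normalized by the square of the nearest-neighbor distance."""
    n = len(ideal)
    msd = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
              for (ax, ay, az), (bx, by, bz) in zip(ideal, relaxed)) / n
    return msd / nn_dist ** 2

# Ideal fcc sites (unit cube) and hypothetical DFT-relaxed positions.
ideal = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]
relaxed = [(0.02, 0.0, 0.0), (0.5, 0.48, 0.01),
           (0.51, 0.0, 0.5), (0.0, 0.5, 0.47)]

score = normalized_msd(ideal, relaxed, nn_dist=0.5 * math.sqrt(2))
# A practitioner would reject the CE fit a priori if score exceeded some
# empirically chosen threshold (severe relaxation).
```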

6. Errors, correlations and fidelity for noisy Hamilton flows. Theory and numerical examples

Turchetti, G.; Sinigardi, S.; Servizi, G.; Panichi, F.; Vaienti, S.

2017-02-01

We analyse the asymptotic growth of the error for Hamiltonian flows due to small random perturbations. We compare the forward error with the reversibility error, showing their equivalence for linear flows on a compact phase space. The forward error, given by the root mean square deviation σ(t) of the noisy flow, grows according to a power law if the system is integrable and according to an exponential law if it is chaotic. The autocorrelation and the fidelity, defined as the correlation of the perturbed flow with respect to the unperturbed one, exhibit an exponential decay as exp(-2π²σ²(t)). Some numerical examples such as the anharmonic oscillator and the Hénon-Heiles model confirm these results. We finally consider the effect of the observational noise on an integrable system, and show that the decay of correlations can only be observed after a sequence of measurements and that the multiplicative noise is more effective if the delay between two measurements is large.
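
The contrast between slow and fast error growth can be demonstrated on the standard map (a stand-in for the paper's anharmonic-oscillator and Hénon-Heiles examples; parameters and noise size are invented): a noisy twin orbit stays close to the noiseless one in the near-integrable regime but separates to order one in the chaotic regime.

```python
import math, random

def twin_deviation(K, n, eps, seed=2):
    """Distance after n kicks of the standard map between a noiseless orbit
    and a twin perturbed by Gaussian noise of size eps at every step."""
    rng = random.Random(seed)
    x1, p1 = 1.0, 0.3          # noiseless reference orbit
    x2, p2 = 1.0, 0.3          # noisy twin, same initial condition
    for _ in range(n):
        p1 = p1 + K * math.sin(x1)
        x1 = (x1 + p1) % (2.0 * math.pi)
        p2 = p2 + K * math.sin(x2) + rng.gauss(0.0, eps)
        x2 = (x2 + p2) % (2.0 * math.pi)
    dx = (x1 - x2 + math.pi) % (2.0 * math.pi) - math.pi  # wrap the angle
    return math.hypot(dx, p1 - p2)

dev_regular = twin_deviation(K=0.05, n=100, eps=1e-8)   # near-integrable
dev_chaotic = twin_deviation(K=7.0, n=100, eps=1e-8)    # strongly chaotic
# The chaotic twin amplifies the noise exponentially (Lyapunov ~ ln(K/2))
# and decorrelates; the near-integrable twin drifts only polynomially.
```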

7. Numerical errors in the computation of subfilter scalar variance in large eddy simulations

Kaul, C. M.; Raman, V.; Balarac, G.; Pitsch, H.

2009-05-01

Subfilter scalar variance is a key quantity for scalar mixing at the small scales of a turbulent flow and thus plays a crucial role in large eddy simulation of combustion. While prior studies have mainly focused on the physical aspects of modeling subfilter variance, the current work discusses variance models in conjunction with the numerical errors due to their implementation using finite-difference methods. A priori tests on data from direct numerical simulation of homogeneous turbulence are performed to evaluate the numerical implications of specific model forms. Like other subfilter quantities, such as kinetic energy, subfilter variance can be modeled according to one of two general methodologies. In the first of these, an algebraic equation relating the variance to gradients of the filtered scalar field is coupled with a dynamic procedure for coefficient estimation. Although finite-difference methods substantially underpredict the gradient of the filtered scalar field, the dynamic method is shown to mitigate this error through overestimation of the model coefficient. The second group of models utilizes a transport equation for the subfilter variance itself or for the second moment of the scalar. Here, it is shown that the model formulation based on the variance transport equation is consistently biased toward underprediction of the subfilter variance. The numerical issues in the variance transport equation stem from discrete approximations to chain-rule manipulations used to derive convection, diffusion, and production terms associated with the square of the filtered scalar. These approximations can be avoided by solving the equation for the second moment of the scalar, suggesting the numerical superiority of that formulation.
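
The gradient underprediction noted above is the familiar modified-wavenumber effect: for f(x) = sin(kx), a second-order central difference returns an effective wavenumber sin(kh)/h < k. A minimal sketch (the wave resolution chosen here is hypothetical, not taken from the paper):

```python
import math

# A wave resolved by 8 grid points per period: k*h = pi/4.
k = 2.0 * math.pi / 8.0
h = 1.0
x = 0.0

exact = k * math.cos(k * x)                                   # true df/dx
fd = (math.sin(k * (x + h)) - math.sin(k * (x - h))) / (2.0 * h)

# At x = 0 the ratio is exactly sin(k*h)/(k*h) < 1: the discrete
# operator systematically attenuates the gradient.
attenuation = fd / exact
```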

8. Error analysis and correction for laser speckle photography

SciTech Connect

Song, Y.Z.; Kulenovic, R.; Groll, M.

1995-12-31

This paper deals with error analysis of experimental data from a laser speckle photography (LSP) application which measures the temperature field of natural convection around a heated cylindrical tube. A method for error correction is proposed and presented in detail. Experimental and theoretical investigations have shown that the measurement errors are induced by four causes. These error sources are discussed and suggestions to avoid the errors are given. Owing to the error analysis and the introduced correction methods, the temperature distribution, and hence the temperature gradient in a thermal boundary layer, can be obtained more accurately.

9. Naming in aphasic children: analysis of paraphasic errors.

PubMed

van Dongen, H R; Visch-Brink, E G

1988-01-01

In the spontaneous speech of aphasic children paraphasias have been described. This analysis of naming errors during recovery showed that neologisms, literal and verbal paraphasias occurred. The etiology affected the recovery course of neologisms, but not other errors.

10. A technique for human error analysis (ATHEANA)

SciTech Connect

Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.

1996-05-01

Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

11. Analytic standard errors for exploratory process factor analysis.

PubMed

Zhang, Guangjian; Browne, Michael W; Ong, Anthony D; Chow, Sy Miin

2014-07-01

Exploratory process factor analysis (EPFA) is a data-driven latent variable model for multivariate time series. This article presents analytic standard errors for EPFA. Unlike standard errors for exploratory factor analysis with independent data, the analytic standard errors for EPFA take into account the time dependency in time series data. In addition, factor rotation is treated as the imposition of equality constraints on model parameters. Properties of the analytic standard errors are demonstrated using empirical and simulated data.

12. Nuclear numerical range and quantum error correction codes for non-unitary noise models

Lipka-Bartosik, Patryk; Życzkowski, Karol

2017-01-01

We introduce a notion of nuclear numerical range defined as the set of expectation values of a given operator A among normalized pure states, which belong to the nucleus of an auxiliary operator Z. This notion proves to be applicable to investigate models of quantum noise with block-diagonal structure of the corresponding Kraus operators. The problem of constructing a suitable quantum error correction code for this model can be restated as a geometric problem of finding intersection points of certain sets in the complex plane. This technique, worked out in the case of two-qubit systems, can be generalized for larger dimensions.

13. Numerical method to solve Cauchy type singular integral equation with error bounds

Setia, Amit; Sharma, Vaishali; Liu, Yucheng

2017-01-01

Cauchy type singular integral equations with index zero naturally occur in the field of aerodynamics. The literature on these equations is well developed, and Chebyshev polynomials are most frequently used to solve them. In this paper, a residual-based Galerkin method using Legendre polynomials as basis functions is proposed to solve Cauchy singular integral equations of index zero. It converts the Cauchy singular integral equation into a system of equations which can be easily solved. Test examples are given to illustrate the proposed numerical method. Error bounds are derived and applied in all the test examples.

14. Analysis of Pronominal Errors: A Case Study.

ERIC Educational Resources Information Center

Oshima-Takane, Yuriko

1992-01-01

Reports on a study of a normally developing boy who made pronominal errors for about 10 months. Comprehension and production data clearly indicate that the child persistently made pronominal errors because of semantic confusion in the use of first- and second-person pronouns. (28 references) (GLR)

15. Assessment of SIRGAS Ionospheric Maps errors based on a numerical simulation

Brunini, Claudio; Emilio, Camilion; Francisco, Azpilicueta

2010-05-01

SIRGAS (Sistema de Referencia Geocéntrico para las Américas) is responsible for the densification of the International Terrestrial Reference Frame in Latin America and the Caribbean, which is realized and maintained by means of a continuously operating GNSS network with more than 200 receivers. In addition, SIRGAS uses this network for computing regional maps of the vertical Total Electron Content (TEC), which are released to the community through the SIRGAS web page (www.sirgas.org). Like other similar products (e.g., the Global Ionospheric Maps (GIM) computed by the International GNSS Service), SIRGAS Ionospheric Maps (SIM) are based on a thin-layer ionospheric model, in which the whole ionosphere is represented by one spherical layer of infinitesimal thickness and equivalent vertical TEC, located at a fixed height above the Earth's surface (typically between 350 and 450 km). This contribution aims to characterize the errors introduced in the thin-layer ionospheric model by the use of a fixed and sometimes inappropriate ionospheric layer height. Particular attention is paid to the propagation of these errors into the estimation of the vertical TEC and of the GNSS satellite and receiver Inter-Frequency Biases (IFB). The work relies upon a numerical simulation performed with an empirical model of the Earth's ionosphere, which allows creating a realistic but controlled ionospheric scenario, and then evaluates the errors that are produced when the thin-layer model is used to reproduce those ionospheric scenarios. The error assessment is performed for the central and northern parts of the South American continent, where the largest errors are expected because of the combined actions of the Appleton anomaly of the ionosphere and the South Atlantic anomaly of the geomagnetic field.
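
The kind of error being assessed can be sketched with the thin-shell mapping function itself (all numbers here are hypothetical; the SIRGAS processing is far more elaborate): converting a slant TEC to vertical with the wrong shell height biases the result, increasingly so at low elevation.

```python
import math

R_E = 6371.0  # km, mean Earth radius

def mapping(z_deg, h_km):
    """Thin-shell mapping function: slant-to-vertical TEC conversion factor
    for zenith angle z at the receiver and shell height h."""
    z = math.radians(z_deg)
    sin_zp = R_E / (R_E + h_km) * math.sin(z)   # zenith angle at the shell
    return 1.0 / math.cos(math.asin(sin_zp))

stec = 100.0                    # hypothetical slant TEC observation (TECu)
true_h, assumed_h = 300.0, 450.0  # km: real vs. assumed shell height

vtec_true = stec / mapping(70.0, true_h)     # what we should recover
vtec_est = stec / mapping(70.0, assumed_h)   # what the fixed height yields
rel_err = abs(vtec_est - vtec_true) / vtec_true
# At 70 deg zenith a 150 km height error already biases the vertical TEC
# by several percent.
```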

16. An Error Analysis of Elementary School Children's Number Production Abilities

ERIC Educational Resources Information Center

Skwarchuk, Sheri-Lynn; Betts, Paul

2006-01-01

Translating numerals into number words is a tacit task requiring linguistic and mathematical knowledge. This project expanded on previous number production models by examining developmental differences in children's number naming errors. Ninety-six children from grades one, three, five, and seven translated a random set of numerals into number…

17. Analysis of the statistical error in umbrella sampling simulations by umbrella integration

Kästner, Johannes; Thiel, Walter

2006-06-01

Umbrella sampling simulations, or biased molecular dynamics, can be used to calculate the free-energy change of a chemical reaction. We investigate the sources of different sampling errors and derive approximate expressions for the statistical errors when using harmonic restraints and umbrella integration analysis. This leads to generally applicable rules for the choice of the bias potential and the sampling parameters. Numerical results for simulations on an analytical model potential are presented for validation. While the derivations are based on umbrella integration analysis, the final error estimate is evaluated from the raw simulation data, and it may therefore be generally applicable as indicated by tests using the weighted histogram analysis method.

18. Solar tracking error analysis of Fresnel reflector.

PubMed

Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie

2014-01-01

Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analyzing the influence of the main factors on the sun-tracking rotation angle error, the change rule and extent of the influence were revealed. It is concluded that the tracking error caused by the difference between the rotation axis and the true north meridian is, under certain conditions, largest at noon and decreases gradually through the morning and afternoon. The tracking error caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon.

20. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

PubMed

Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

2014-01-27

Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase lost by detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.

1. Application of Interval Analysis to Error Control.

DTIC Science & Technology

1976-09-01

We give simple examples of ways in which interval arithmetic can be used to detect instabilities in computer algorithms, roundoff error accumulation, and even the effects of hardware inadequacies. This paper is primarily tutorial. (Author)
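
A minimal version of the idea can be sketched as follows (assuming Python 3.9+ for math.nextafter; a real interval library rounds every operation outward in hardware): the enclosure is guaranteed to contain the true result, so a wide final interval flags round-off accumulation such as catastrophic cancellation.

```python
import math

class Interval:
    """Toy interval arithmetic with outward rounding via nextafter."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __sub__(self, other):
        return Interval(math.nextafter(self.lo - other.hi, -math.inf),
                        math.nextafter(self.hi - other.lo, math.inf))

    @property
    def width(self):
        return self.hi - self.lo

# Catastrophic cancellation: (x + 1e16) - 1e16 destroys the digits of x.
x = Interval(1.0)
big = Interval(1e16)
result = (x + big) - big
# The enclosure still contains the true value 1.0, and its width
# exposes how much precision the floating-point computation lost.
```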

2. Error Analysis of Terrestrial Laser Scanning Data by Means of Spherical Statistics and 3D Graphs

PubMed Central

Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G.; Arias, Pedro

2010-01-01

This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A study case is presented and discussed in detail. Errors were calculated using 53 check points (CPs) whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were written to produce the graphics automatically. The results indicate that the proposed method is advantageous because it offers a more complete analysis of the positional accuracy, covering the angular error component, the uniformity of the vector distribution, and error isotropy, in addition to the modular error component given by linear statistics. PMID:22163461
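
The decomposition used above can be sketched in a few lines (the check-point errors here are invented, and the R packages are not reproduced): each 3D error vector yields a module and two angles, and the mean resultant length of the error directions indicates whether the errors are isotropic or clustered.

```python
import math

def spherical_components(err):
    """Decompose a 3D error vector into module, azimuth, and colatitude."""
    dx, dy, dz = err
    module = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.atan2(dy, dx)                       # azimuth
    phi = math.acos(dz / module) if module else 0.0  # colatitude
    return module, theta, phi

# Hypothetical check-point errors in metres (measured minus reference).
errors = [(0.002, 0.001, -0.003), (0.001, -0.002, -0.004),
          (0.003, 0.002, -0.002), (-0.001, 0.001, -0.005)]

modules = [spherical_components(e)[0] for e in errors]
rms = math.sqrt(sum(m * m for m in modules) / len(modules))

# Mean resultant length R in [0, 1]: near 1 means the error directions
# cluster (anisotropy); near 0 they are roughly uniform on the sphere.
sx = sum(e[0] for e in errors)
sy = sum(e[1] for e in errors)
sz = sum(e[2] for e in errors)
R = math.sqrt(sx * sx + sy * sy + sz * sz) / sum(modules)
```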

3. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

NASA Technical Reports Server (NTRS)

Prive, N. C.; Errico, Ronald M.

2015-01-01

The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

4. Analysis of thematic map classification error matrices.

USGS Publications Warehouse

Rosenfield, G.H.

1986-01-01

The classification error matrix expresses the counts of agreement and disagreement between the classified categories and their verification. Thematic mapping experiments compare variables such as multiple photointerpretation or scales of mapping, and produce one or more classification error matrices. This paper presents a tutorial to implement a typical problem of a remotely sensed data experiment for solution by the linear model method.-from Author

5. Numerical Analysis in Fracture Mechanics.

DTIC Science & Technology

1983-01-20

in the following. A. 2-D Elastic-Plastic Crack Problem In 1975, ASTM Committee E24.01.09 undertook a task to compare numerical solutions to elastic... Penalty Function and Superposition Method", Fracture Mechanics, 12th Symposium, ed. by P. C. Paris, ASTM STP 700, p. 439, 1980. [44] Barsoum, R... Landes, J. A. Begley and G. A. Clarke, ASTM STP 668, p. 65, 1979. [46] Benzley, S., "Nonlinear Calculations With a Quadratic Quarter-point Crack Tip

6. Generalized numerical pressure distribution model for smoothing polishing of irregular midspatial frequency errors.

PubMed

Nie, Xuqing; Li, Shengyi; Shi, Feng; Hu, Hao

2014-02-20

The smoothing effect of the rigid lap plays an important role in controlling midspatial frequency errors (MSFRs). At present, the pressure distribution between the polishing pad and processed surface is mainly calculated by Mehta's bridging model. However, this classic model does not work for the irregular MSFR. In this paper, a generalized numerical model based on the finite element method (FEM) is proposed to solve this problem. First, the smoothing polishing (SP) process is transformed to a 3D elastic structural FEM model, and the governing matrix equation is gained. By virtue of the boundary conditions applied to the governing matrix equation, the nodal displacement vector and nodal force vector of the pad can be attained, from which the pressure distribution can be extracted. In the partial contact condition, the iterative method is needed. The algorithmic routine is shown, and the applicability of the generalized numerical model is discussed. The detailed simulation is given when the lap is in contact with the irregular surface of different morphologies. A well-designed SP experiment is conducted in our lab to verify the model. A small difference between the experimental data and simulated result shows that the model is totally practicable. The generalized numerical model is applied on a Φ500 mm parabolic surface. The calculated result and measured data after the SP process have been compared, which indicates that the model established in this paper is an effective method to predict the SP process.

7. Asteroid orbital error analysis: Theory and application

NASA Technical Reports Server (NTRS)

Muinonen, K.; Bowell, Edward

1992-01-01

We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
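
The law of error propagation mentioned in the abstract can be sketched numerically: for a linearized mapping with Jacobian J, a Gaussian covariance Σ maps to JΣJᵀ. The Jacobian and covariance below are invented 2-D stand-ins, not values from the paper.

```python
import numpy as np

# Linearized (Gaussian) error propagation: cov' = J @ cov @ J.T.
# J plays the role of the element-to-position Jacobian; cov is a
# hypothetical orbital-element covariance at epoch.
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
cov = np.diag([1e-6, 4e-6])

cov_prop = J @ cov @ J.T  # propagated covariance remains symmetric
print(cov_prop)
```

The propagated covariance defines the positional uncertainty ellipsoid at the new epoch.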

8. Error analysis of system mass properties

NASA Technical Reports Server (NTRS)

Brayshaw, J.

1984-01-01

An attempt is made to verify the margin of system mass properties relative to the values required to support other critical system requirements, such as those of dynamic control. System nominal mass properties are designed on the basis of an imperfect understanding of the mass and location of constituent elements; the effect of such element errors is to introduce net errors into calculated system mass properties. The direct measurement of system mass properties is, however, impractical. Attention is given to these issues in the case of the Galileo spacecraft.

9. A Numerical Study of Some Potential Sources of Error in Side-by-Side Seismometer Evaluations

USGS Publications Warehouse

Holcomb, L. Gary

1990-01-01

INTRODUCTION This report presents the results of a series of computer simulations of potential errors in test data, which might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNR's). This report extends this analysis to high SNR's to determine the limitations of the direct method for calculating the noise levels at signal-to-noise levels which are much higher than presented previously (see Holcomb 1989). Next, the method is used to analyze a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of bandshaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real world data because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and bandshaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that

10. Hybrid S{sub N} numerical method free from spatial truncation error for slab lattice calculations

SciTech Connect

Barros, R.C.

1994-12-31

In typical lattice cells where a highly absorbing, small fuel element is embedded in the moderator (a large weakly absorbing medium), high-order transport methods become unnecessary. In this paper we describe a hybrid discrete ordinates (S{sub N}) nodal method for slab lattice calculations. This hybrid S{sub N} method combines the convenience of a low-order S{sub N} method in the moderator with a high-order S{sub N} method in the fuel. The idea is based on the fact that in weakly absorbing media whose physical size is several neutron mean free paths in extent, even the S{sub 2} method (P{sub 1} approximation) leads to an accurate result. We use special fuel-moderator interface conditions and the spectral Green's function (SGF) numerical nodal method, completely free from spatial truncation error, to calculate the neutron flux distribution and the disadvantage factor.

11. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

NASA Technical Reports Server (NTRS)

Fiske, David R.

2004-01-01

In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

12. Dose error analysis for a scanned proton beam delivery system

Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

2010-12-01

All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
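
The rms-per-voxel procedure described in the abstract can be sketched as follows; the voxel count, treatment count, and error magnitude are invented for illustration, not taken from the paper.

```python
import numpy as np

# Simulate repeated deliveries to the same volume and compute the rms
# dose variation per voxel as a percentage of the prescribed dose.
# Array sizes and the 0.02 Gy error scale are hypothetical.
rng = np.random.default_rng(0)
prescribed = 2.0                      # Gy, as in the abstract
n_treatments, n_voxels = 50, 1000

# Each row is one simulated treatment with random per-voxel errors
# (position, energy, and intensity fluctuations lumped together).
doses = prescribed + rng.normal(0.0, 0.02, size=(n_treatments, n_voxels))

rms = doses.std(axis=0)               # rms variation in each voxel
percent = 100.0 * rms / prescribed
print(percent.mean())                 # on the order of 1% here
```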

13. L'analyse des erreurs: etat actuel de la recherche (Error Analysis: Present State of Research). Errors: A New Perspective.

ERIC Educational Resources Information Center

Lange, Michel

This paper raises questions about the significance of errors made by language learners. The discussion is divided into four parts: (1) definition of error analysis, (2) the present status of error analysis research, including an overview of the theories of Lado, Skinner, Chomsky, Corder, Nemser, and Selinker; (3) the subdivisions of error analysis…

14. Analysis of interfacial error in saturated unsaturated flow models

Pei, Yuansheng; Wang, Jinsheng; Tian, Zhaohui; Yu, Jianning

2006-04-01

Interfacial error results from estimation of interblock conductivities related to the saturated-unsaturated interface. Both interfacial conductivity error (IEK) and interfacial pressure error (IEh) were analyzed under the arithmetic mean scheme, while IEK was numerically investigated under the arithmetic, geometric, and harmonic averaging schemes. IEK, dependent on the media pore size, is regularly less than zero, while IEh, associated with the height of the capillary fringe, may be greater than zero. An interfacial discretization technique was developed to add two complementary equations into the saturated-unsaturated model with respect to the interface. The proposed interfacial approach may eliminate interfacial error from the approximations of interblock conductivities. Underestimation of the water-table response to infiltration is related to the negative IEK. The water-table response error reaches -5.13% in our investigation, which is an accumulated result from IEK.

15. Preventing medication errors in community pharmacy: root‐cause analysis of transcription errors

PubMed Central

Knudsen, P; Herborg, H; Mortensen, A R; Knudsen, M; Hellebek, A

2007-01-01

Background Medication errors can have serious consequences for patients, and medication safety is essential to pharmaceutical care. Insight is needed into the vulnerability of the working process at community pharmacies to identify what causes error incidents, so that the system can be improved to enhance patient safety. Methods 40 randomly selected Danish community pharmacies collected data on medication errors. Cases that reached patients were analysed, and the most serious cases were selected for root‐cause analyses by an interdisciplinary analysis team. Results 401 cases had reached patients and a substantial number of them had possible clinical significance. Most of these errors were made in the transcription stage, and the most serious were errors in strength and dosage. The analysis team identified four root causes: handwritten prescriptions; “traps” such as similarities in packaging or names, or strength and dosage stated in misleading ways; lack of effective control of prescription label and medicine; and lack of concentration caused by interruptions. Conclusion A substantial number of the medication errors identified at pharmacies that reach patients have possible clinical significance. Root‐cause analysis shows potential for identifying the underlying causes of the incidents and for providing a basis for action to improve patient safety. PMID:17693677

16. Empirical Analysis of Systematic Communication Errors.

DTIC Science & Technology

1981-09-01

human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links...) phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission...communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share

17. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

SciTech Connect

Beckerman, M.; Jones, J.P.

1999-02-01

We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.

18. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

NASA Technical Reports Server (NTRS)

Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

1997-01-01

We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a

19. Size and Shape Analysis of Error-Prone Shape Data

PubMed Central

Du, Jiejun; Dryden, Ian L.; Huang, Xianzheng

2015-01-01

We consider the problem of comparing sizes and shapes of objects when landmark data are prone to measurement error. We show that naive implementation of ordinary Procrustes analysis that ignores measurement error can compromise inference. To account for measurement error, we propose the conditional score method for matching configurations, which guarantees consistent inference under mild model assumptions. The effects of measurement error on inference from naive Procrustes analysis and the performance of the proposed method are illustrated via simulation and application in three real data examples. Supplementary materials for this article are available online. PMID:26109745
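
The ordinary Procrustes matching that the abstract takes as its starting point can be sketched with an SVD. The landmark data here are invented, and this naive noiseless version is exactly the matching the authors show can compromise inference once measurement error is present.

```python
import numpy as np

# Orthogonal Procrustes: find the rotation R minimizing ||A - B R||_F
# via the SVD of B.T @ A. B is an exact rotation of the invented
# landmark set A, so the recovered rotation aligns them perfectly.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))                # 5 landmarks in 2-D
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
B = A @ R_true.T                           # rotated configuration

U, _, Vt = np.linalg.svd(B.T @ A)
R_hat = U @ Vt                             # optimal rotation
print(np.allclose(B @ R_hat, A))
```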

20. Bibliometric Analysis of Medication Errors and Adverse Drug Events Studies.

PubMed

Huang, Hung-Chi; Wang, Cheng-Hua; Chen, Pi-Ching; Lee, Yen-Der

2015-07-31

1. Numerical solutions and error estimations for the space fractional diffusion equation with variable coefficients via Fibonacci collocation method.

PubMed

Bahşı, Ayşe Kurt; Yalçınbaş, Salih

2016-01-01

In this study, the Fibonacci collocation method, based on Fibonacci polynomials, is presented to solve fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; the fractional equation can then be reduced to a set of linear algebraic equations. An error estimation algorithm based on the residual functions is also presented, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function can be approximately computed by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more efficient than direct numerical solutions. Numerical examples, figures, tables, and comparisons are presented to show the efficiency and usability of the proposed method.
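
A minimal sketch of the collocation idea (with ordinary monomials rather than Fibonacci polynomials, and an integer-order ODE rather than the paper's fractional diffusion equation): expand the solution in a polynomial basis and force the residual to vanish at collocation points, which reduces the problem to a linear algebraic system.

```python
import numpy as np

# Collocation sketch: solve u'(x) = u(x), u(0) = 1 on [0, 1] with a
# degree-8 monomial expansion, forcing the residual u' - u to vanish
# at 8 collocation points (plus one row for the initial condition).
N = 8
xs = np.linspace(0.0, 1.0, N)

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for i, x in enumerate(xs):
    for k in range(N + 1):
        deriv = k * x**(k - 1) if k > 0 else 0.0   # d/dx of x**k
        A[i, k] = deriv - x**k                     # residual row at x
A[N, 0] = 1.0                                      # u(0) = 1
b[N] = 1.0

c = np.linalg.solve(A, b)      # coefficients of the polynomial solution
u1 = c.sum()                   # u(1) = sum_k c_k; the exact answer is e
print(abs(u1 - np.e))          # spectral accuracy: a very small error
```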

2. The Use of Trigram Analysis for Spelling Error Detection.

ERIC Educational Resources Information Center

Zamora, E. M.; And Others

1981-01-01

Describes work performed under the Spelling Error Detection Correction Project (SPEEDCOP) at Chemical Abstracts Service to devise effective automatic methods of detecting and correcting misspellings in scholarly and scientific text. The trigram analysis technique developed determined sites but not types of errors. Thirteen references are listed.…
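
The trigram technique can be sketched in a few lines: build a table of letter trigrams from a reference corpus and flag trigrams of a candidate word that never occur there, which locates the site of a misspelling without identifying its type. The corpus and words below are invented; SPEEDCOP's actual dictionaries were far larger.

```python
from collections import Counter

# Tiny invented reference corpus standing in for a large dictionary.
corpus = ["analysis", "chemical", "spelling", "detection", "error",
          "correction", "scientific", "abstract", "service", "method"]

def trigrams(word):
    padded = f"  {word} "           # pad so word edges form trigrams too
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

known = Counter(t for w in corpus for t in trigrams(w))

def suspicious(word):
    """Trigrams of `word` never seen in the corpus: likely error sites."""
    return [t for t in trigrams(word) if known[t] == 0]

print(suspicious("detektion"))      # unseen trigrams cluster at the error
```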

3. Implications of Error Analysis Studies for Academic Interventions

ERIC Educational Resources Information Center

Mather, Nancy; Wendling, Barbara J.

2017-01-01

We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…

5. Exploratory Factor Analysis of Reading, Spelling, and Math Errors

ERIC Educational Resources Information Center

O'Brien, Rebecca; Pan, Xingyu; Courville, Troy; Bray, Melissa A.; Breaux, Kristina; Avitia, Maria; Choi, Dowon

2017-01-01

Norm-referenced error analysis is useful for understanding individual differences in students' academic skill development and for identifying areas of skill strength and weakness. The purpose of the present study was to identify underlying connections between error categories across five language and math subtests of the Kaufman Test of…

7. Simple numerical analysis of longboard speedometer data

Hare, Jonathan

2013-11-01

Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 Phys. Educ. 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as simple numerical differentiation and integration. This is an interesting, fun and instructive way to start to explore data manipulation at GCSE and A-level—analysis and skills so essential for the engineer and scientist.
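
The spreadsheet-style processing described above can be sketched as follows; the calibration factor and the voltage trace are invented stand-ins for real speedometer data.

```python
import numpy as np

# Scale voltage samples to speed, integrate (trapezoid rule) for
# distance, differentiate for acceleration. The trace and the
# 2.0 (m/s)/V calibration factor are hypothetical.
t = np.linspace(0.0, 10.0, 101)            # time, s
volts = 0.5 * t * np.exp(-t / 5.0)         # fake speedometer voltages
K = 2.0                                    # assumed calibration, (m/s)/V

speed = K * volts                          # m/s
distance = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))  # trapezoids
accel = np.gradient(speed, t)              # finite-difference derivative
print(distance)                            # metres travelled
```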

8. Error analysis for a laser differential confocal radius measurement system.

PubMed

Wang, Xu; Qiu, Lirong; Zhao, Weiqian; Xiao, Yang; Wang, Zhongyu

2015-02-10

In order to further improve the measurement accuracy of the laser differential confocal radius measurement system (DCRMS) developed previously, a DCRMS error compensation model is established for the error sources, including laser source offset, test sphere position adjustment offset, test sphere figure, and motion error, based on analyzing the influences of these errors on the measurement accuracy of radius of curvature. Theoretical analyses and experiments indicate that the expanded uncertainty of the DCRMS is reduced to U=0.13  μm+0.9  ppm·R (k=2) through the error compensation model. The error analysis and compensation model established in this study can provide the theoretical foundation for improving the measurement accuracy of the DCRMS.

9. Orbital error analysis for comet Encke, 1980

NASA Technical Reports Server (NTRS)

Yeomans, D. K.

1976-01-01

Before a particular comet is selected as a flyby target, the following criteria should be considered in determining its ephemeris uncertainty: (1) A target comet should have good observability during the apparition of the proposed intercept; and (2) A target comet should have a good observational history. Several well observed and consecutive apparitions allow an accurate determination of a comet's mean motion and nongravitational parameters. Using these criteria, along with statistical and empirical error analyses, it has been demonstrated that the 1980 apparition of comet Encke is an excellent opportunity for a cometary flyby space probe. For this particular apparition, a flyby to within 1,000 km of comet Encke seems possible without the use of sophisticated and expensive onboard navigation instrumentation.

10. Simple Numerical Analysis of Longboard Speedometer Data

ERIC Educational Resources Information Center

Hare, Jonathan

2013-01-01

Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…

12. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

PubMed Central

Small, J R

1993-01-01

This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

13. SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE

NASA Technical Reports Server (NTRS)

Costello, F. A.

1994-01-01

The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third component. The computations follow for the rest of the system, back to the first component

14. Attitude determination error analysis - General model and specific application

NASA Technical Reports Server (NTRS)

Markley, F. Landis; Seidewitz, ED; Deutschmann, Julie

1990-01-01

This paper presents a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The discussion includes models for both batch least-squares and sequential estimators, a specific dynamic model for attitude determination error analysis of a three-axis stabilized spacecraft equipped with strapdown gyros, and the incorporation of general attitude sensor observations. An analyst using this approach to perform an error analysis chooses a subset of the spacecraft parameters to be 'solve-for' parameters, which are to be estimated, and another subset to be 'consider' parameters, which are assumed to have errors but not to be estimated. The result of the error analysis is an indication of overall uncertainties in the 'solve-for' parameters, as well as the contributions of the various error sources to these uncertainties, including those of errors in the a priori 'solve-for' estimates, of measurement noise, of dynamic noise (also known as process noise or plant noise), and of 'consider' parameter uncertainties. The analysis of attitude, star tracker alignment, and gyro bias uncertainties for the Gamma Ray Observatory spacecraft provide a specific example of the use of a general-purpose software package incorporating these models.

15. Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks

Johnson, Joseph

2016-03-01

We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called "metanumbers") support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a "supernet" of all numerical information, supporting new initiatives in AI.
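
The eigenvector-clustering idea can be sketched on a toy network: build a column-stochastic (Markov) matrix from a proximity matrix containing two weakly linked groups and read the grouping off the sign pattern of the second eigenvector. The proximity values are invented.

```python
import numpy as np

# Two tight groups of three nodes joined by a single weak (0.05) link.
A = np.array([[0, 1, 1, 0,    0, 0],
              [1, 0, 1, 0,    0, 0],
              [1, 1, 0, 0.05, 0, 0],
              [0, 0, 0.05, 0, 1, 1],
              [0, 0, 0,    1, 0, 1],
              [0, 0, 0,    1, 1, 0]], dtype=float)

M = A / A.sum(axis=0)                  # column-stochastic Markov matrix
w, V = np.linalg.eig(M)
order = np.argsort(-w.real)            # leading eigenvalue is 1
second = V[:, order[1]].real           # next eigenvector splits the groups
labels = second > 0                    # sign pattern = cluster membership
print(labels)
```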

16. Analysis and Numerical Treatment of Elliptic Equations with Stochastic Data

Cheng, Shi

Many science and engineering applications are impacted by a significant amount of uncertainty in the model. Examples include groundwater flow, microscopic biological systems, material science, and chemical engineering systems. Common mathematical problems in these applications are elliptic equations with stochastic data. In this dissertation, we examine two types of stochastic elliptic partial differential equations (SPDEs), namely nonlinear stochastic diffusion reaction equations and general linearized elastostatic problems in random media. We begin with the construction of an analysis framework for this class of SPDEs, extending prior work of Babuska in 2010. We then use the framework both for establishing well-posedness of the continuous problems and for posing Galerkin-type numerical methods. In order to solve these two types of problems, single integral weak formulations and stochastic collocation methods are applied. Moreover, a priori error estimates for stochastic collocation methods are derived, which imply that the rate of convergence is exponential as the polynomial order in the space of random variables increases. As expected, numerical experiments show the exponential rate of convergence, verified by a posteriori error analysis. Finally, an adaptive strategy driven by a posteriori error indicators is designed.

17. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

NASA Technical Reports Server (NTRS)

Kaneko, Hideaki; Bey, Kim S.

2004-01-01

The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

18. Classification error analysis in stereo vision

Gross, Eitan

2015-07-01

Depth perception in humans is obtained by comparing images generated by the two eyes to each other. Given the highly stochastic nature of neurons in the brain, this comparison requires maximizing the mutual information (MI) between the neuronal responses in the two eyes by distributing the coding information across a large number of neurons. Unfortunately, MI is not an extensive quantity, making it very difficult to predict how the accuracy of depth perception will vary with the number of neurons (N) in each eye. To address this question we present a two-arm, distributed decentralized sensors detection model. We demonstrate how the system can extract depth information from a pair of discrete valued stimuli represented here by a pair of random dot-matrix stereograms. Using the theory of large deviations, we calculated the rate at which the global average error probability of our detector, and the MI between the two arms' outputs, vary with N. We found that MI saturates exponentially with N at a rate which decays as 1 / N. The rate function approached the Chernoff distance between the two probability distributions asymptotically. Our results may have implications in computer stereo vision that uses Hebbian-based algorithms for terrestrial navigation.
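
The Chernoff distance that the error-rate exponent approaches can be computed directly for a pair of discrete distributions; P and Q below are invented two-point distributions, not data from the paper.

```python
import numpy as np

# Chernoff information: C(P, Q) = -min over lambda in (0, 1) of
# log(sum_i P_i**lambda * Q_i**(1 - lambda)), the asymptotic exponent
# of the average error probability in binary hypothesis testing.
P = np.array([0.8, 0.2])
Q = np.array([0.3, 0.7])

lams = np.linspace(1e-3, 1.0 - 1e-3, 999)   # grid search over lambda
vals = [np.log(np.sum(P**l * Q**(1.0 - l))) for l in lams]
chernoff = -min(vals)
print(chernoff)
```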

19. An Introduction to Error Analysis for Quantitative Chemistry

ERIC Educational Resources Information Center

Neman, R. L.

1972-01-01

Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric volumetric analysis and indicates their possible applications to other fields of science. (CC)

20. Error analysis of large aperture static interference imaging spectrometer

Li, Fan; Zhang, Guo

2015-12-01

The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux, and a wide spectral range. It overcomes the contradiction between high flux and high stability, which gives it important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, LASIS exhibits different error laws in the imaging process, and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographic surface features, the error laws of LASIS imaging must be understood. In this paper, LASIS errors are classified as interferogram error, radiometric correction error, and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined temporal and spatial modulation is experimentally analyzed, along with the errors from the radiometric correction and spectral inversion processes.

1. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

NASA Technical Reports Server (NTRS)

Nicholson, Mark; Markley, F.; Seidewitz, E.

1988-01-01

The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is described, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

2. Data Analysis & Statistical Methods for Command File Errors

NASA Technical Reports Server (NTRS)

Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

2014-01-01

This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

4. Error Analysis and Trajectory Correction Maneuvers of Lunar Transfer Orbit

Zhao, Yu-hui; Hou, Xi-yun; Liu, Lin

2013-10-01

For a returnable lunar probe, this paper studies the characteristics of both the Earth-Moon transfer orbit and the return orbit. On the basis of the error propagation matrix, the linear equation to estimate the first midcourse trajectory correction maneuver (TCM) is figured out. Numerical simulations are performed, and the features of error propagation in lunar transfer orbit are given. The advantages, disadvantages, and applications of two TCM strategies are discussed, and the computation of the second TCM of the return orbit is also simulated under the conditions at the reentry time.

5. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

NASA Technical Reports Server (NTRS)

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

2013-01-01

7. Numerical Analysis of Robust Phase Estimation

Rudinger, Kenneth; Kimmel, Shelby

Robust phase estimation (RPE) is a new technique for estimating rotation angles and axes of single-qubit operations, steps necessary for developing useful quantum gates [arXiv:1502.02677]. As RPE only diagnoses a few parameters of a set of gate operations while at the same time achieving Heisenberg scaling, it requires relatively few resources compared to traditional tomographic procedures. In this talk, we present numerical simulations of RPE that show both Heisenberg scaling and robustness against state preparation and measurement errors, while also demonstrating numerical bounds on the procedure's efficacy. We additionally compare RPE to gate set tomography (GST), another Heisenberg-limited tomographic procedure. While GST provides a full gate set description, it is more resource-intensive than RPE, leading to potential tradeoffs between the procedures. We explore these tradeoffs and numerically establish criteria to guide experimentalists in deciding when to use RPE or GST to characterize their gate sets. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

8. Abundance recovery error analysis using simulated AVIRIS data

NASA Technical Reports Server (NTRS)

Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.

1992-01-01

Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed-form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario which is modeled as being derived from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data.
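The contrast between the two approaches can be sketched for linear spectral unmixing. Under a least-squares unmixing model with endmember matrix E and noise variance sigma^2, the abundance covariance has the closed form sigma^2 (E^T E)^(-1), which a Monte Carlo loop only recovers after thousands of trials (the spectra below are random stand-ins, not AVIRIS data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical endmember spectra (50 bands x 3 materials) and true abundances.
E = rng.uniform(0.1, 1.0, size=(50, 3))
a_true = np.array([0.5, 0.3, 0.2])
sigma = 0.01  # per-band measurement noise standard deviation

# Covariance analysis: for least-squares unmixing a_hat = (E^T E)^-1 E^T y,
# the abundance covariance is sigma^2 (E^T E)^-1 -- a closed-form result.
cov = sigma**2 * np.linalg.inv(E.T @ E)
analytic_std = np.sqrt(np.diag(cov))

# Monte Carlo: thousands of noisy trials to estimate the same spread.
trials = 5000
y = E @ a_true
noise = rng.normal(0.0, sigma, size=(trials, E.shape[0]))
a_hat = (np.linalg.pinv(E) @ (y[None, :] + noise).T).T
mc_std = a_hat.std(axis=0)

print(analytic_std)
print(mc_std)  # agrees with the closed form, at far greater cost
```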

9. The use of error analysis to assess resident performance.

PubMed

D'Angelo, Anne-Lise D; Law, Katherine E; Cohen, Elaine R; Greenberg, Jacob A; Kwan, Calvin; Greenberg, Caprice; Wiegmann, Douglas A; Pugh, Carla M

2015-11-01

10. The Use of Error Analysis to Assess Resident Performance

PubMed Central

D’Angelo, Anne-Lise D.; Law, Katherine E.; Cohen, Elaine R.; Greenberg, Jacob A.; Kwan, Calvin; Greenberg, Caprice; Wiegmann, Douglas A.; Pugh, Carla M.

2015-01-01

Background The aim of this study is to assess validity of a human factors error assessment method for evaluating resident performance during a simulated operative procedure. Methods Seven PGY4-5 residents had 30 minutes to complete a simulated laparoscopic ventral hernia (LVH) repair on Day 1 of a national, advanced laparoscopic course. Faculty provided immediate feedback on operative errors and residents participated in a final product analysis of their repairs. Residents then received didactic and hands-on training regarding several advanced laparoscopic procedures during a lecture session and animate lab. On Day 2, residents performed a nonequivalent LVH repair using a simulator. Three investigators reviewed and coded videos of the repairs using previously developed human error classification systems. Results Residents committed 121 total errors on Day 1 compared to 146 on Day 2. One of seven residents successfully completed the LVH repair on Day 1 compared to all seven residents on Day 2 (p=.001). The majority of errors (85%) committed on Day 2 were technical and occurred during the last two steps of the procedure. There were significant differences in error type (p<.001) and level (p=.019) from Day 1 to Day 2. The proportion of omission errors decreased from Day 1 (33%) to Day 2 (14%). In addition, there were more technical and commission errors on Day 2. Conclusion The error assessment tool was successful in categorizing performance errors, supporting known-groups validity evidence. Evaluating resident performance through error classification has great potential in facilitating our understanding of operative readiness. PMID:26003910

11. Sensitivity analysis of geometric errors in additive manufacturing medical models.

PubMed

Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

2015-03-01

Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

12. Error Analysis of Variations on Larsen's Benchmark Problem

SciTech Connect

Azmy, YY

2001-06-27

Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem differ in the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; and unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N) and of the Characteristic type (AHOT-C). The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, and then the L1, L2, and L-infinity error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L-infinity norm does not, due to solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods in spite of the fact that AHOT-C is locally exact, suggesting that numerical diffusion across the singular characteristic is the major source of error on the global scale. However, AHOT-C attains a given accuracy in a larger fraction of computational cells than DD.
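The norm behavior described here (integral norms converge, the pointwise norm does not) can be reproduced with a toy model of numerical diffusion: an exact step discontinuity that the numerical scheme smears over one cell. This is a sketch of the norm computations only, not of the transport methods themselves:

```python
import numpy as np

def error_norms(n):
    """Cell-averaged error norms for a step discontinuity smeared over
    one mesh cell. The exact solution jumps from 0 to 1 at x0; the
    'numerical' solution replaces the jump by a one-cell-wide linear
    ramp, mimicking numerical diffusion across a singular line."""
    x0 = 1.0 / 3.0
    h = 1.0 / n
    xc = (np.arange(n) + 0.5) * h                      # cell centers
    exact = (xc > x0).astype(float)
    numer = np.clip((xc - x0) / h + 0.5, 0.0, 1.0)     # smeared jump
    err = np.abs(numer - exact)
    l1 = h * err.sum()                                 # integral norm
    l2 = np.sqrt(h * (err**2).sum())                   # integral norm
    linf = err.max()                                   # pointwise norm
    return l1, l2, linf

for n in (16, 64, 256):
    print(n, error_norms(n))
```

Refining the mesh drives L1 and L2 toward zero, but the L-infinity norm stays O(1) because the cell containing the discontinuity always carries a finite error.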

13. Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods

2013-03-01

In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to sharpen the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.

14. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

Sarojkumar, K.; Krishna, S.

2016-08-01

Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computational requirement is reduced. Screening identifies those contingencies which are certain not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.
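The idea of using a conserved energy function to grade numerical methods can be illustrated on a much simpler conservative system than a power network: a harmonic oscillator, whose total energy should stay constant. The drift in the energy function then serves as an error measure for comparing integrators (a sketch under that stand-in assumption, not the paper's power-system model):

```python
import numpy as np

def euler_step(x, v, dt):
    """Forward Euler for the oscillator x'' = -x (energy grows each step)."""
    return x + dt * v, v - dt * x

def rk4_step(x, v, dt):
    """Classical fourth-order Runge-Kutta for the same oscillator."""
    f = lambda s: np.array([s[1], -s[0]])
    s = np.array([x, v])
    k1 = f(s)
    k2 = f(s + dt / 2 * k1)
    k3 = f(s + dt / 2 * k2)
    k4 = f(s + dt * k3)
    s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s[0], s[1]

def energy_error(step, dt=0.1, steps=200):
    """Drift in the energy function E = (v^2 + x^2)/2 as an error measure."""
    x, v = 1.0, 0.0
    e0 = 0.5 * (x * x + v * v)
    for _ in range(steps):
        x, v = step(x, v, dt)
    return abs(0.5 * (x * x + v * v) - e0)

print(energy_error(euler_step), energy_error(rk4_step))
```

The cheap method shows a large energy drift while the higher-order method keeps it tiny, which is exactly the kind of ranking the screening step needs without knowing the true trajectory.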

15. Error analysis of two methods for range-images registration

Liu, Xiaoli; Yin, Yongkai; Li, Ameng; He, Dong; Peng, Xiang

2010-08-01

With the improvements in range image registration techniques, this paper focuses on the error analysis of two registration methods generally applied in industrial metrology, covering algorithm comparison, matching error, computational complexity, and application areas. One method is iterative closest point (ICP), which achieves accurate matching results with small error; however, some limitations restrict its application in automatic and fast metrology. The other method is based on landmarks. We also present an algorithm for registering multiple range images with non-coding landmarks, including automatic landmark identification and sub-pixel location, 3D rigid motion, point pattern matching, and global iterative optimization techniques. The registration results of the two methods are illustrated and a thorough error analysis is performed.

16. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

PubMed

McLaughlin, Douglas B

2012-01-01

The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
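The decision-error framework can be sketched with a simulation: lakes get a chlorophyll a response from a log-linear stressor-response relation with scatter, and a single total P criterion is then scored against the 20 µg/L chlorophyll threshold. All coefficients below are illustrative placeholders, not USEPA's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical log-linear stressor-response model with lognormal scatter:
#   log10(chl-a geomean) = b0 + b1 * log10(TP geomean) + noise
b0, b1, scatter = -0.5, 1.0, 0.25      # illustrative coefficients
chl_threshold = 20.0                   # µg/L chlorophyll a impairment threshold
tp_criterion = 60.0                    # µg/L candidate total P criterion

# Simulated population of lakes spanning 10-316 µg/L TP
tp = 10 ** rng.uniform(1.0, 2.5, size=100_000)
chl = 10 ** (b0 + b1 * np.log10(tp) + rng.normal(0.0, scatter, tp.size))

impaired = chl > chl_threshold         # true designated-use status
exceeds = tp > tp_criterion            # what the TP criterion indicates

type1 = np.mean(exceeds & ~impaired)   # criterion flags an attaining lake
type2 = np.mean(~exceeds & impaired)   # criterion misses an impaired lake
print(type1, type2)
```

Sweeping `tp_criterion` trades the two error rates against each other, which is the balancing act the abstract describes.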

17. Mutual alignment errors analysis based on wavelet due to antenna deformations in inter-satellite laser communications

Xie, Wanqing; Tan, Liying; Ma, Jing

2012-02-01

Wavelet analysis is employed in this paper to model diversified optical antenna deformations in inter-satellite laser communications. Mutual alignment errors, which comprise pointing and tracking errors, caused by the deformations are investigated with the model. Theoretical and numerical analyses show that both errors increase with the dilation factor of the model. The tracking error increases monotonically with the shift factor of the model, while the pointing error first increases and then decreases. When the deformation can be well approximated by a constant, both errors fluctuate periodically with the coefficient of the model. Otherwise, there is no obvious regularity in either error with the increase of the coefficient. A reference for the machining precision of optical antennas is presented, and a method to reduce the effect of deformations is recommended. It is hoped that the study can contribute to improving the performance of inter-satellite laser communication systems.

18. Analysis of Australian newspaper coverage of medication errors.

PubMed

Hinchcliff, Reece; Westbrook, Johanna; Greenfield, David; Baysari, Melissa; Moldovan, Max; Braithwaite, Jeffrey

2012-02-01

To investigate the frequency, style and reliability of newspaper reporting of medication errors, a content analysis was performed of articles discussing medication errors that were published in the 10 most widely read Australian daily newspapers between January 2005 and January 2010. The main outcome measures were newspaper source, article type, article topic, leading news actors, identified causes and solutions of medication errors, and cited references. Ninety-two articles included discussion of medication errors, with the one national newspaper, The Australian, the main source of articles (n = 24). News items were the most frequent type of article (n = 73), with the majority (n = 55) primarily focused on broader hospital problems. Government representatives, advocacy groups, researchers, health service staff and private industry groups were prominent news actors. A shortage of hospital resources was identified as the central cause of medication errors (n = 38), with efficient error reporting systems most frequently identified as a solution (n = 25). Government reports were cited on 39 occasions, with peer-reviewed publications infrequently cited (n = 4). Australian newspaper reporting of medication errors was relatively limited. Given the high prevalence of errors and the potential role consumers can play in identifying and preventing errors, there is a clear argument for increasing public awareness and understanding of issues relating to medication safety. Existing coverage of this issue is unrelated to research evidence. This suggests the need for patient safety researchers and advocacy groups to engage more strongly with the media as a strategy to increase the productive public discourse concerning medication errors and gain support for evidence-based interventions.

19. Manufacturing in space: Fluid dynamics numerical analysis

NASA Technical Reports Server (NTRS)

Robertson, S. J.; Nicholson, L. A.; Spradley, L. W.

1981-01-01

Natural convection in a spherical container with cooling at the center was numerically simulated using the Lockheed-developed General Interpolants Method (GIM) numerical fluid dynamics computer program. The numerical analysis was simplified by assuming axisymmetric flow in the spherical container, with the symmetry axis being a sphere diagonal parallel to the gravity vector. This axisymmetric spherical geometry was intended as an idealization of the proposed Lal/Kroes growing experiments to be performed on board Spacelab. Results were obtained for a range of Rayleigh numbers from 25 to 10,000. For a temperature difference of 10 C from the cooling sting at the center to the container surface, and a gravitational loading of 10^-6 g, a computed maximum fluid velocity of about 2.4 x 10^-5 cm/sec was reached after about 250 sec. The computed velocities were found to be approximately proportional to the Rayleigh number over the range of Rayleigh numbers investigated.

20. A simple and efficient error analysis for multi-step solution of the Navier-Stokes equations

Fithen, R. M.

2002-02-01

A simple error analysis is used within the context of a segregated finite element solution scheme to solve incompressible fluid flow. An error indicator is defined based on the difference between a numerical solution on an original mesh and an approximated solution on a related mesh. This error indicator is based on satisfying the steady-state momentum equations. The advantages of this error indicator are simplicity of implementation (a post-processing step), the ability to show regions of high and/or low error, and the fact that as the indicator approaches zero the solution approaches convergence. Two examples are chosen for solution: first, the lid-driven cavity problem, followed by the solution of flow over a backward-facing step. The solutions are compared to previously published data for validation purposes. It is shown that this rather simple error estimate, when used as a re-meshing guide, can be very effective in obtaining accurate numerical solutions.
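A two-mesh error indicator of this flavor can be sketched on a 1D model problem in place of the momentum equations: solve on an original mesh, solve on a related (refined) mesh, and take their pointwise difference as the indicator. The Poisson problem below is a stand-in assumption, not the paper's Navier-Stokes scheme:

```python
import numpy as np

def solve_fd(n):
    """Finite-difference solve of -u'' = f on (0,1), u(0)=u(1)=0,
    with f chosen so the exact solution is u(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f)

# Original mesh and a related mesh with half the spacing
x_c, u_c = solve_fd(20)
x_f, u_f = solve_fd(41)

# Interpolate the coarse solution onto the fine mesh; the pointwise
# difference between the two numerical solutions is the error indicator.
u_c_on_f = np.interp(x_f, np.concatenate(([0.0], x_c, [1.0])),
                     np.concatenate(([0.0], u_c, [0.0])))
indicator = np.abs(u_f - u_c_on_f)
print(indicator.max())  # shrinks as both meshes are refined
```

Regions where the indicator is large are the natural targets for re-meshing, and as the indicator tends to zero the two solutions (and hence the discretization) have converged.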

1. Numerical error in electron orbits with large ωceΔt

SciTech Connect

Parker, S.E.; Birdsall, C.K.

1989-12-20

We have found that running electrostatic particle codes with relatively large ωceΔt in some circumstances does not significantly affect the physical results. We first present results from a single-particle mover, finding the correct first-order drifts for large ωceΔt. We then characterize the numerical orbit of the Boris algorithm for rotation when ωceΔt >> 1. Next, an analysis of the guiding center motion is given, showing why the first-order drift is retained at large ωceΔt. Lastly, we present a plasma simulation of a one-dimensional cross-field sheath, with large and small ωceΔt, with very little difference in the results. 15 refs., 7 figs., 1 tab.
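The Boris rotation referred to here has a well-known closed form: with no electric field, one step rotates the velocity by exactly 2 arctan(ωceΔt/2) while conserving speed exactly, so at large ωceΔt the phase is wrong but the orbit does not grow. A minimal sketch of that single rotation step:

```python
import numpy as np

def boris_rotate(v, omega_dt):
    """One Boris rotation step for B along z with no E field.
    t = tan-half-angle vector, s = 2t/(1+|t|^2); the two cross
    products implement an exact (energy-conserving) rotation."""
    t = np.array([0.0, 0.0, omega_dt / 2.0])
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v + np.cross(v, t)
    return v + np.cross(v_prime, s)

for omega_dt in (0.1, 1.0, 10.0):
    v_new = boris_rotate(np.array([1.0, 0.0, 0.0]), omega_dt)
    speed_err = abs(np.linalg.norm(v_new) - 1.0)
    angle = np.arctan2(-v_new[1], v_new[0])       # rotation this step
    expected = 2.0 * np.arctan(omega_dt / 2.0)    # Boris effective angle
    print(omega_dt, speed_err, angle, expected)
```

For ωceΔt = 10 the effective angle saturates near pi instead of advancing by 10 radians, which is the numerical-orbit distortion at large ωceΔt, while the speed error stays at machine precision.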

2. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

PubMed

Zollanvari, Amin; Genton, Marc G

2013-08-01

We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are the plug-in and smoothed resubstitution error estimators, neither of which has been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to recover several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the resulting finite-sample approximations in situations where the number of dimensions is comparable to or even larger than the sample size.
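The quantities being compared here, a resubstitution estimate versus the actual error rate of a trained LDA rule, can be illustrated with a small simulation under the same model assumptions (two Gaussian classes, common known identity covariance); the dimensions and sample sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two Gaussian classes with common known covariance I (LDA setting)
p, n = 5, 30
mu0, mu1 = np.zeros(p), np.full(p, 0.5)
X0 = rng.normal(mu0, 1.0, size=(n, p))
X1 = rng.normal(mu1, 1.0, size=(n, p))

# LDA discriminant with known covariance I: classify by sign of w.x + b
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
w = m1 - m0
b = -0.5 * (m1 + m0) @ w

def classify(X):
    return (X @ w + b > 0).astype(int)

# Resubstitution estimator: error rate measured on the training data itself
resub = 0.5 * (classify(X0).mean() + (1 - classify(X1)).mean())

# Actual error rate of this trained rule, estimated on large fresh samples
T0 = rng.normal(mu0, 1.0, size=(100_000, p))
T1 = rng.normal(mu1, 1.0, size=(100_000, p))
actual = 0.5 * (classify(T0).mean() + (1 - classify(T1)).mean())

print(resub, actual)
```

Averaged over many training sets, resubstitution is optimistically biased relative to the actual error, and quantifying that bias (its first moment) under Kolmogorov asymptotics is exactly what the paper's theorem targets.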

3. Error control in the GCF: An information-theoretic model for error analysis and coding

NASA Technical Reports Server (NTRS)

1974-01-01

The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

4. Sensitivity analysis of DOA estimation algorithms to sensor errors

Li, Fu; Vaccaro, Richard J.

1992-07-01

A unified statistical performance analysis using subspace perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in the presence of sensor errors. In particular, the multiple signal classification (MUSIC), min-norm, state-space realization (TAM and DDA) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms are analyzed. This analysis assumes that only a finite amount of data is available. An analytical expression for the mean-squared error of the DOA estimates is developed for theoretical comparison in a simple and self-contained fashion. The tractable formulas provide insight into the algorithms. Simulation results verify the analysis.

5. Application of human error analysis to aviation and space operations

SciTech Connect

Nelson, W.R.

1998-03-01

For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

6. Enhanced orbit determination filter sensitivity analysis: Error budget development

NASA Technical Reports Server (NTRS)

Estefan, J. A.; Burkhart, P. D.

1994-01-01

An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

7. From Monroe to Moreau: an analysis of face naming errors.

PubMed

Brédart, S; Valentine, T

1992-12-01

Functional models of face recognition and speech production have developed separately. However, naming a familiar face is, of course, an act of speech production. In this paper we propose a revision of Bruce and Young's (1986) model of face processing, which incorporates two features of Levelt's (1989) model of speech production. In particular, the proposed model includes two stages of lexical access for names and monitoring of face naming based on a "perceptual loop". Two predictions were derived from the perceptual loop hypothesis of speech monitoring: (1) naming errors in which a (correct) rare surname is erroneously replaced by a common surname should occur more frequently than the reverse substitution (the error asymmetry effect); (2) naming errors in which a common surname is articulated are more likely to be repaired than errors which result in articulation of a rare surname (the error-repairing effect). Both predictions were supported by an analysis of face naming errors in a laboratory face naming task. In a further experiment we considered the possibility that the effects of surname frequency observed in face naming errors could be explained by the frequency sensitivity of lexical access in speech production. However, no effect of the frequency of the surname of the faces used in the previous experiment was found on face naming latencies. Therefore, it is concluded that the perceptual loop hypothesis provides the more parsimonious account of the entire pattern of the results.

8. The Use of Contrastive and Error Analysis to Practicing Teachers.

ERIC Educational Resources Information Center

Filipovic, Rudolf

A major problem in learning a second language is the interference of a structurally different native language. Contrastive analysis (CA) combined with learner error analysis (EA) provides an excellent basis for the preparation of language instructional materials. The Yugoslav Serbo-Croatian-English Contrastive Project proved that full application of CA…

9. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

ERIC Educational Resources Information Center

Zhang, Guangjian; Browne, Michael W.

2010-01-01

Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

10. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

ERIC Educational Resources Information Center

Sarcevic, Aleksandra

2009-01-01

An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

13. Using Microcomputers for Assessment and Error Analysis. Monograph #23.

ERIC Educational Resources Information Center

Hasselbring, Ted S.; And Others

This monograph provides an overview of computer-based assessment and error analysis in the instruction of elementary students with complex medical, learning, and/or behavioral problems. Information on generating and scoring tests using the microcomputer is offered, as are ideas for using computers in the analysis of mathematical strategies and…

14. Error Analysis: Its Contribution to Second Language Teaching

ERIC Educational Resources Information Center

Ennis, Faye

1977-01-01

Research on error analysis indicates that the learner develops an ordered system of language which is frequently erroneous, but which represents a transitional stage in his progress towards mastery. A brief analysis of some textbooks provides information about the selection and presentation of material to the learner. (SW)

15. Linear error analysis of slope-area discharge determinations

USGS Publications Warehouse

Kirby, W.H.

1987-01-01

The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (α) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
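
The Taylor-series error propagation described above can be sketched in a few lines. The Manning-type discharge formula, the fixed channel properties, and the uncertainty figures below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def discharge(n, fall, A=200.0, R=3.0, L=500.0):
    """Manning-type slope-area discharge (US units); area A, hydraulic radius R,
    and reach length L are held fixed for this illustration."""
    return (1.486 / n) * A * R ** (2.0 / 3.0) * np.sqrt(fall / L)

def propagated_variance(f, x, cov, h=1e-6):
    """First-order (Taylor-series) variance of f(x) given the covariance of x."""
    x = np.asarray(x, dtype=float)
    w = np.empty_like(x)                        # sensitivity weights df/dx_i
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h * max(abs(x[i]), 1.0)
        w[i] = (f(*(x + dx)) - f(*(x - dx))) / (2.0 * dx[i])
    return float(w @ cov @ w)                   # weighted sum of covariances

x0 = np.array([0.035, 1.2])                     # Manning n, water-surface fall (ft)
cov = np.diag([(0.2 * 0.035) ** 2, 0.1 ** 2])   # assumed 20% and 0.1-ft std errors
var_Q = propagated_variance(lambda n, fall: discharge(n, fall), x0, cov)
rel_err = np.sqrt(var_Q) / discharge(*x0)
```

The weights `w` are exactly the sensitivity weights in the "weighted sum of covariances" above; the covariance matrix is diagonal here (uncorrelated errors), but correlated errors only require filling in the off-diagonal terms.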

16. Geometric error analysis for shuttle imaging spectrometer experiment

NASA Technical Reports Server (NTRS)

Wang, S. J.; Ih, C. H.

1984-01-01

The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area-array detectors, high-resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

17. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

NASA Technical Reports Server (NTRS)

Prive, N. C.; Errico, R. M.; Tai, K.-S.

2013-01-01

The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.

18. The influence of observation errors on analysis error and forecast skill investigated with an observing system simulation experiment

Privé, N. C.; Errico, R. M.; Tai, K.-S.

2013-06-01

The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.

19. Analysis of case-only studies accounting for genotyping error.

PubMed

Cheng, K F

2007-03-01

The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.
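
The case-only interaction estimate reduces to a cross-product ratio over a 2x2 table of cases. A minimal sketch with hypothetical counts (not data from the paper); under the gene-environment independence assumption the ratio estimates the multiplicative interaction effect:

```python
import math

# Hypothetical case-only 2x2 counts: rows = genotype (G+, G-), cols = exposure (E+, E-)
cases = {("G+", "E+"): 60, ("G+", "E-"): 40,
         ("G-", "E+"): 90, ("G-", "E-"): 210}

a, b = cases[("G+", "E+")], cases[("G+", "E-")]
c, d = cases[("G-", "E+")], cases[("G-", "E-")]

or_case_only = (a * d) / (b * c)               # case-only interaction odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # large-sample SE of log OR
ci = (math.exp(math.log(or_case_only) - 1.96 * se_log_or),
      math.exp(math.log(or_case_only) + 1.96 * se_log_or))
```

Genotyping error perturbs the counts a-d directly, which is why the abstract's validation-data adjustment matters: the cross-product ratio has no built-in protection against misclassification.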

20. Numerical Analysis Of Flows With FIDAP

NASA Technical Reports Server (NTRS)

Sohn, Jeong L.

1990-01-01

Report presents an evaluation of accuracy of Fluid Dynamics Package (FIDAP) computer program. Finite-element code for analysis of flows of incompressible fluids and transfers of heat in multidimensional domains. Includes both available methods for treatment of spurious numerical coupling between simulated velocity and simulated pressure; namely, penalty method and mixed-interpolation method with variable choices of interpolation polynomials for velocity and pressure. Streamwise upwind (STU) method included as option for flows dominated by convection.

1. Experimental and numerical analysis of convergent nozzle

Srinivas, G.; Rakham, Bhupal

2017-05-01

This paper focuses on the convergent nozzle, for which both experimental and numerical calculations were carried out with the support of the standard literature. In recent years, the performance of both air-breathing and non-air-breathing engines has increased significantly. The nozzle is one of the components that plays a vital role in enhancing the performance of both engine types; in particular, the choice of nozzle type depends on the vehicle's speed requirement and aerodynamic behavior, both of central importance in propulsion. The convergent-nozzle flow experiments were performed on a scaled apparatus, and a similar setup was reproduced in the ANSYS software for the flow analysis across the nozzle. Calculations based on the published literature were used to validate the experimental and numerical simulation results. Together, the two approaches yield best-fit results that meet the design requirements, and the comparison also establishes the reliability of the design criteria for convergent nozzles in propulsion applications.

2. Error analysis for momentum conservation in Atomic-Continuum Coupled Model

Yang, Yantao; Cui, Junzhi; Han, Tiansi

2016-08-01

The Atomic-Continuum Coupled Model (ACCM) is a multiscale computational model proposed by Xiang et al. (in IOP Conference Series: Materials Science and Engineering, 2010), used to study and simulate the dynamics and thermal-mechanical coupling behavior of crystalline materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, implementing the computation of ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error in the momentum conservation equation introduced by ACCM, and derive a sequence of inequalities that bound the error. A numerical experiment is carried out to verify our results.

3. The notion of error in Langevin dynamics. I. Linear analysis

Mishra, Bimal; Schlick, Tamar

1996-07-01

The notion of error in practical molecular and Langevin dynamics simulations of large biomolecules is far from understood because of the relatively large value of the timestep used, the short simulation length, and the low-order methods employed. We begin to examine this issue with respect to equilibrium and dynamic time-correlation functions by analyzing the behavior of selected implicit and explicit finite-difference algorithms for the Langevin equation. We derive: local stability criteria for these integrators; analytical expressions for the averages of the potential, kinetic, and total energy; and various limiting cases (e.g., timestep and damping constant approaching zero), for a system of coupled harmonic oscillators. These results are then compared to the corresponding exact solutions for the continuous problem, and their implications to molecular dynamics simulations are discussed. New concepts of practical and theoretical importance are introduced: scheme-dependent perturbative damping and perturbative frequency functions. Interesting differences in the asymptotic behavior among the algorithms become apparent through this analysis, and two symplectic algorithms, "LIM2" (implicit) and "BBK" (explicit), appear most promising on theoretical grounds. One result of theoretical interest is that for the Langevin/implicit-Euler algorithm ("LI") there exist timesteps for which there is neither numerical damping nor shift in frequency for a harmonic oscillator. However, this idea is not practical for more complex systems because these special timesteps can account only for one frequency of the system, and a large damping constant is required. We therefore devise a more practical, delay-function approach to remove the artificial damping and frequency perturbation from LI. Indeed, a simple MD implementation for a system of coupled harmonic oscillators demonstrates very satisfactory results in comparison with the velocity-Verlet scheme. We also define a…
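
The explicit "BBK" scheme singled out above can be sketched for a single harmonic oscillator. This is a minimal illustration with assumed parameters, not the authors' implementation; the half-step friction factors (1 ∓ γΔt/2) in the two velocity updates are what distinguish BBK from plain velocity Verlet:

```python
import numpy as np

def bbk(x0, v0, steps, dt=0.01, m=1.0, k=1.0, gamma=0.5, kT=1.0, seed=0):
    """Explicit BBK integrator for m x'' = -k x - gamma m x' + R(t)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * gamma * m * kT / dt)   # std dev of the random force
    x, v = x0, v0
    R = sigma * rng.standard_normal()
    xs = np.empty(steps)
    for i in range(steps):
        vh = (1.0 - 0.5 * gamma * dt) * v + 0.5 * dt * (-k * x + R) / m
        x = x + dt * vh
        R_new = sigma * rng.standard_normal()
        v = (vh + 0.5 * dt * (-k * x + R_new) / m) / (1.0 + 0.5 * gamma * dt)
        R = R_new
        xs[i] = x
    return xs

traj = bbk(1.0, 0.0, 50_000)
# Equipartition check: <k x^2> should approach kT for small timestep,
# up to statistical error and the scheme-dependent perturbation discussed above.
kx2 = np.mean(traj[5_000:] ** 2)
```

Measuring how `kx2` deviates from kT as `dt` grows is exactly the kind of scheme-dependent perturbation the abstract analyzes analytically.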

4. A case of error disclosure: a communication privacy management analysis.

PubMed

Petronio, Sandra; Helft, Paul R; Child, Jeffrey T

2013-12-01

To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insight into the way clinicians choose to tell patients about a mistake has the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, an analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices when revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that affect how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relation to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information…

5. Unbiased bootstrap error estimation for linear discriminant analysis.

PubMed

Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

2014-12-01

Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
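
The convex combination at issue can be illustrated with the classic fixed-weight 0.632 estimator. This sketch uses a nearest-class-mean rule as a stand-in for LDA and synthetic Gaussian data; the classifier, data, and parameters are all illustrative assumptions:

```python
import numpy as np

def nearest_mean_classifier(Xtr, ytr):
    """LDA-like rule for equal spherical covariances: assign to the nearer class mean."""
    m0, m1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    def predict(X):
        d0 = ((X - m0) ** 2).sum(axis=1)
        d1 = ((X - m1) ** 2).sum(axis=1)
        return (d1 < d0).astype(int)
    return predict

def bootstrap_632(X, y, B=100, seed=0):
    """Fixed-weight convex bootstrap: 0.368 * resubstitution + 0.632 * zero-bootstrap."""
    rng = np.random.default_rng(seed)
    n = len(y)
    resub = np.mean(nearest_mean_classifier(X, y)(X) != y)
    errs = []
    for _ in range(B):
        idx = rng.integers(0, n, n)               # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), idx)     # held-out ("zero") points
        if oob.size == 0 or len(np.unique(y[idx])) < 2:
            continue
        pred = nearest_mean_classifier(X[idx], y[idx])(X[oob])
        errs.append(np.mean(pred != y[oob]))
    return 0.368 * resub + 0.632 * np.mean(errs)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (25, 2)), rng.normal(1.5, 1.0, (25, 2))])
y = np.repeat([0, 1], 25)
err_632 = bootstrap_632(X, y)
```

The paper's point is that the 0.632 weight is only asymptotically justified; replacing the two constants with a sample-size- and Bayes-error-dependent weight is what their exact finite-sample analysis provides.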

6. Numerical flow analysis for axial flow turbine

Sato, T.; Aoki, S.

Some numerical flow analysis methods adopted in the gas turbine interactive design system, TDSYS, are described. In the TDSYS, a streamline curvature program for axisymmetric flows, quasi 3-D and fully 3-D time marching programs are used respectively for blade to blade flows and annular cascade flows. The streamline curvature method has some advantages in that it can include the effect of coolant mixing and choking flow conditions. Comparison of the experimental results with calculated results shows that the overall accuracy is determined more by the empirical correlations used for loss and deviation than by the numerical scheme. The time marching methods are the best choice for the analysis of turbine cascade flows because they can handle mixed subsonic-supersonic flows with automatic inclusion of shock waves in a single calculation. Some experimental results show that a time marching method can predict the airfoil surface Mach number distribution more accurately than a finite difference method. One weak point of the time marching methods is a long computer time; they usually require several times as much CPU time as other methods. But reductions in computer costs and improvements in numerical methods have made the quasi 3-D and fully 3-D time marching methods usable as design tools, and they are now used in TDSYS.

7. Shape Error Analysis of Functional Surface Based on Isogeometrical Approach

YUAN, Pei; LIU, Zhenyu; TAN, Jianrong

2017-05-01

The construction of traditional finite element geometry (i.e., the meshing procedure) is time consuming and creates geometric errors. These drawbacks can be overcome by Isogeometric Analysis (IGA), which integrates computer-aided design and structural analysis in a unified way. A new IGA beam element is developed by integrating the displacement field of the element, which is approximated by the NURBS basis, with the internal work formula of Euler-Bernoulli beam theory under the small-deformation and elastic assumptions. Two cases of the strong coupling of IGA elements, "beam to beam" and "beam to shell", are also discussed. The maximum relative errors of the deformation in the three directions of the cantilever beam benchmark problem between analytical solutions and IGA solutions are less than 0.1%, which illustrates the good performance of the developed IGA beam element. In addition, the application of the developed IGA beam element to the root mean square (RMS) error analysis of a reflector antenna surface, a typical functional surface whose precision is closely related to the product's performance, indicates that no matter how coarse the discretization is, the IGA method is able to achieve an accurate solution with fewer degrees of freedom than standard finite element analysis (FEA). The proposed research provides an effective alternative to standard FEA for shape error analysis of functional surfaces.
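
The NURBS basis underlying IGA reduces to B-splines when all weights are unity. A minimal Cox-de Boor evaluation (illustrative, not the paper's element code) shows the partition-of-unity property that the displacement approximation relies on:

```python
def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline at u on knot vector U."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    right = 0.0
    if U[i + p + 1] != U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

# Open (clamped) quadratic knot vector with 4 basis functions on [0, 1].
U = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
p, nbasis = 2, 4
u = 0.3
vals = [bspline_basis(i, p, u, U) for i in range(nbasis)]
total = sum(vals)   # B-spline bases form a partition of unity
```

Because the same basis describes both the CAD geometry and the analysis field, the meshing step, and the geometric error it introduces, disappears; that is the integration the abstract refers to.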

8. Error analysis of flux limiter schemes at extrema

Kriel, A. J.

2017-01-01

Total variation diminishing (TVD) schemes have been an invaluable tool for the solution of hyperbolic conservation laws. One of the major shortcomings of commonly used TVD methods is the loss of accuracy near extrema. Although large amounts of anti-diffusion usually benefit the resolution of discontinuities, a balanced limiter such as Van Leer's performs better at extrema. Reliable criteria, however, for the performance of a limiter near extrema are not readily apparent. This work provides theoretical quantitative estimates for the local truncation errors of flux limiter schemes at extrema for a uniform grid. Moreover, the component of the error attributed to the flux limiter was obtained. This component is independent of the problem and grid spacing, and may be considered a property of the limiter that reflects the performance at extrema. Numerical test problems validate the results.
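
For reference, the limiters discussed above are one-line functions of the gradient ratio r; a sketch of the standard textbook forms:

```python
def van_leer(r):
    """Van Leer's balanced limiter: smooth, phi(1) = 1, phi(r <= 0) = 0."""
    return (r + abs(r)) / (1.0 + abs(r))

def minmod(r):
    """Most diffusive TVD limiter."""
    return max(0.0, min(1.0, r))

def superbee(r):
    """Most anti-diffusive TVD limiter."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))
```

At a smooth extremum the gradient ratio satisfies r <= 0, so every TVD limiter returns 0 and the scheme locally degenerates to first order; that clipping is precisely the loss of accuracy whose truncation error the paper quantifies.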

9. Explicit a posteriori error estimates for eigenvalue analysis of heterogeneous elastic structures.

SciTech Connect

Walsh, Timothy Francis; Reese, Garth M.; Hetmaniuk, Ulrich L.

2005-07-01

An a posteriori error estimator is developed for the eigenvalue analysis of three-dimensional heterogeneous elastic structures. It constitutes an extension of a well-known explicit estimator to heterogeneous structures. We prove that our estimates are independent of the variations in material properties and independent of the polynomial degree of finite elements. Finally, we study numerically the effectivity of this estimator on several model problems.

10. Numerical Analysis of Rocket Exhaust Cratering

NASA Technical Reports Server (NTRS)

2008-01-01

Supersonic jet exhaust impinging onto a flat surface is a fundamental flow encountered in space or with a missile launch vehicle system. The flow is important because it can endanger launch operations. The purpose of this study is to evaluate the effect of a landing rocket's exhaust on soils. From numerical simulations and analysis, we developed characteristic expressions and curves, which we can use, along with rocket nozzle performance, to predict cratering effects during a soft-soil landing. We conducted a series of multiphase flow simulations with two phases: exhaust gas and sand particles. The main objective of the simulation was to obtain numerical results as close to the experimental results as possible. After several simulation test runs, the results showed that the packing limit and the angle of internal friction are the two critical and dominant factors in the simulations.

11. An error analysis of the dynamic mode decomposition

Duke, Daniel; Soria, Julio; Honnery, Damon

2012-02-01

Dynamic mode decomposition (DMD) is a new diagnostic technique in fluid mechanics which is growing in popularity. A powerful analysis tool, it has great potential for measuring the spatial and temporal dynamics of coherent structures in experimental fluid flows. To aid interpretation of experimental data, error-bars on the measured growth rates are needed. In this article, we undertake a massively parallel error analysis of the DMD algorithm using synthetic waveforms that are shown to be representative of the canonical instabilities observed in shear flows. We show that the waveform of the instability has a marked impact on the error of the measured growth rate. Sawtooth and square waves may have an order of magnitude larger error than sine waves under the same conditions. We also show that the effects of data quantity and quality are of critical importance in determining the error in the growth or decay rate, and that the effect of the key parametric variables are modulated by the growth rate itself. We further demonstrate methods by which ensemble and orthogonal data may be introduced to improve the noise response. With regard for the important variables, precise measurement of the growth rates of instabilities may be supplemented with an accurately estimated uncertainty. This opens many new possibilities for the measurement of coherent structure in shear flows.

12. Error analysis of sub-aperture stitching interferometry

Jia, Xin; Xu, Fuchao; Xie, Weimin; Xing, Tingwen

2012-10-01

Large-aperture optical elements are widely employed in high-power laser systems, astronomy, and outer-space technology. Sub-aperture stitching is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. To assess the accuracy of such equipment, this paper simulates the stitching algorithm and analyzes its errors. The selection of the stitching mode and the setting of the number of sub-apertures are given, and stitching is performed according to the programmed algorithms to test them; the simulations are implemented in Matlab. The sub-aperture stitching method can also be used to test free-form surfaces, which here are generated from Zernike polynomials. The accuracy depends on tilt and positioning errors, and the medium spatial frequencies of the surface can be tested through stitching. The Matlab error analysis shows how tilt and positioning errors influence the testing accuracy; the analysis can also be applied to other interferometer systems.

13. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.

PubMed

Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

2016-09-01

The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in the setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
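
The ANOVA-based estimation can be sketched directly. The tiny balanced data set below is invented for illustration; with k patients and n fractions each, the random component is MS_within and the systematic component is (MS_between - MS_within)/n:

```python
import numpy as np

# Hypothetical setup errors (mm): k patients (rows), n fractions each (balanced design).
data = np.array([[ 1.5,  0.5],   # patient A
                 [-0.5, -1.5],   # patient B
                 [ 0.5, -0.5]])  # patient C
k, n = data.shape

grand = data.mean()
pmeans = data.mean(axis=1)
ms_between = n * ((pmeans - grand) ** 2).sum() / (k - 1)
ms_within = ((data - pmeans[:, None]) ** 2).sum() / (k * (n - 1))

var_random = ms_within                          # within-patient (random) component
var_systematic = (ms_between - ms_within) / n   # between-patient (systematic) component
```

Note the subtraction of MS_within: estimating the systematic component by the raw variance of the patient means (the "conventional method" above) leaves the random contribution inside it, which is exactly the overestimation the note points out, worst when n is small, as in hypofractionation.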

14. Numerical analysis for finite Fresnel transform

Aoyagi, Tomohiro; Ohtsubo, Kouichi; Aoyagi, Nobuo

2016-10-01

The Fresnel transform is a bounded, linear, additive, and unitary operator in Hilbert space and is applied in many applications. In this study, a sampling theorem for a Fresnel transform pair in polar coordinate systems is derived. According to the sampling theorem, any function in the complex plane can be expressed by taking the products of the values of a function and sampling function systems. The sampling function systems are constituted by Bessel functions and their zeros. By computer simulations, we consider the application of the sampling theorem to the problem of approximating a function to demonstrate its validity. Our approximating function is a circularly symmetric function defined in the complex plane. Counting the number of sampling points requires the calculation of the zeros of Bessel functions, which are obtained from an approximation formula and numerical tables; our sampling points are therefore nonuniform. The number of sampling points, the normalized mean-square error between the original function and its approximation, and the phases are calculated, and the relationships among them are revealed.
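
The "approximation formula" for Bessel-function zeros mentioned above is typically McMahon's asymptotic expansion. A sketch for J0 with a single correction term (the paper's exact formula may differ), checked against tabulated zeros:

```python
import math

def j0_zero_mcmahon(k):
    """McMahon's large-k approximation to the k-th positive zero of J0,
    keeping only the first correction term."""
    beta = (k - 0.25) * math.pi
    return beta + 1.0 / (8.0 * beta)

approx = [j0_zero_mcmahon(k) for k in range(1, 4)]
exact = [2.404826, 5.520078, 8.653728]   # tabulated zeros of J0
```

Even at k = 1 the one-term expansion is accurate to a few parts in a thousand, and the error shrinks rapidly with k, which is why such formulas suffice for placing the nonuniform sampling points.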

15. Comparing numerical error and visual quality in reconstructions from compressed digital holograms

Lehtimäki, Taina M.; Sääskilahti, Kirsti; Pitkäaho, Tomi; Naughton, Thomas J.

2010-04-01

Digital holography is a well-known technique for both sensing and displaying real-world three-dimensional objects. Compression of digital holograms has been studied extensively, and the errors introduced by lossy compression are routinely evaluated in a reconstruction domain. Mean-square error predominates in the evaluation of reconstruction quality. However, it is not known how well this metric corresponds to what a viewer would regard as perceived error, nor how consistently it functions across different holograms and different viewers. In this study, we evaluate how each of seventeen viewers compared the visual quality of compressed and uncompressed holograms' reconstructions. Holograms from five different three-dimensional objects were used in the study, captured using a phase-shift digital holography setup. We applied two different lossy compression techniques to the complex-valued hologram pixels: uniform quantization, and removal and quantization of the Fourier coefficients, and used seven different compression levels with each.
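
Uniform quantization of the complex-valued hologram pixels, one of the two lossy schemes used, can be sketched as follows; synthetic Gaussian data stands in for a real phase-shift hologram:

```python
import numpy as np

def quantize_uniform(z, bits):
    """Uniformly quantize the real and imaginary parts of complex pixels
    to 2**bits levels each."""
    levels = 2 ** bits
    def q(x):
        lo, hi = x.min(), x.max()
        step = (hi - lo) / (levels - 1)
        return lo + np.round((x - lo) / step) * step
    return q(z.real) + 1j * q(z.imag)

rng = np.random.default_rng(0)
holo = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)  # stand-in hologram
mse = {b: np.mean(np.abs(holo - quantize_uniform(holo, b)) ** 2) for b in (2, 4, 6)}
```

Mean-square error falls monotonically with bit depth by construction; the study's question is whether this numerical metric, evaluated in the reconstruction domain, tracks what viewers actually perceive.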

16. Application of human reliability analysis to nursing errors in hospitals.

PubMed

Inoue, Kayoko; Koizumi, Akio

2004-12-01

Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it determines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. Such mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports of each module practice by the annual number of the corresponding module practice. The weight of a given element was calculated by the summation of incident report error rates for an element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports with a total of 63,294,144 module practices conducted were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and reproducibility of analysis of medical incident reports. For both items, QA guaranteed legitimacy of our model. Error rates for all module practices were approximately of the order 10^(-4) in all hospitals. Three major organizational factors were found to underlie medical…

ERIC Educational Resources Information Center

Sass, Daniel A.

2010-01-01

Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

19. Analysis of possible systematic errors in the Oslo method

SciTech Connect

Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

2011-03-15

In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and {gamma}-ray transmission coefficient from a set of particle-{gamma} coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

20. Listening Comprehension Strategies and Autonomy: Why Error Analysis?

ERIC Educational Resources Information Center

Henner-Stanchina, Carolyn

An experiment combining listening comprehension training and error analysis was conducted with students at the English Language Institute, Queens College, the City University of New York. The purpose of the study was to investigate how to take learners who were primarily dependent on perceptive skills for comprehension and widen their…

1. Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR

PubMed Central

Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao

2016-01-01

The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and then, the focusing is analyzed theoretically and numerically by using the Systems Tool Kit (STK) software. The accurate GEO SAR slant range histories can be calculated according to the perturbed orbit positions in STK. The perturbed slant range errors are mainly the first and second derivatives, leading to image drifts and defocusing. Simulations of the point target imaging are performed to validate the aforementioned analysis. In the GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time are different and dependent on the geometry configurations. Thus, the influences are varying at different orbit positions: at the equator, the first-order phase errors should be mainly considered; at the perigee and apogee, the second-order phase errors should be mainly considered; at other positions, first-order and second-order exist simultaneously. PMID:27598168
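The connection between slant-range error derivatives and image degradation can be made concrete with a small sketch (not the authors' code): expanding the range error as dr(t) = dr0 + dr1·t + ½·dr2·t², the two-way phase error is 4π·dr(t)/λ, so the first-derivative term drifts the image and the second-derivative term defocuses it. The wavelength in the usage note is an assumption for illustration.

```python
import math

def phase_error(dr0, dr1, dr2, t, wavelength):
    """Two-way phase error from a slant-range error expanded as
    dr(t) = dr0 + dr1*t + 0.5*dr2*t**2.  The linear (dr1) term
    shifts the image (drift); the quadratic (dr2) term defocuses."""
    dr = dr0 + dr1 * t + 0.5 * dr2 * t ** 2
    return 4.0 * math.pi / wavelength * dr
```

For example, an assumed L-band wavelength of 0.24 m and a 1 mm/s range-rate error give a phase ramp of about 0.05 rad per second of integration time.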

3. A numerical model and spreadsheet interface for pumping test analysis.

PubMed

Johnson, G S; Cosgrove, D M; Frederick, D B

2001-01-01

Curve-matching techniques have been the standard method of aquifer test analysis for several decades. A variety of techniques provide the capability of evaluating test data from confined, unconfined, leaky aquitard, and other conditions. Each technique, however, is accompanied by a set of assumptions, and evaluation of a combination of conditions can be complicated or impossible due to intractable mathematics or nonuniqueness of the solution. Numerical modeling of pumping tests provides two major advantages: (1) the user can choose which properties to calibrate and what assumptions to make; and (2) in the calibration process the user is gaining insights into the conceptual model of the flow system and uncertainties in the analysis. Routine numerical modeling of pumping tests is now practical due to computer hardware and software advances of the last decade. The RADFLOW model and spreadsheet interface presented in this paper is an easy-to-use numerical model for estimation of aquifer properties from pumping test data. Layered conceptual models and their properties are evaluated in a trial-and-error estimation procedure. The RADFLOW model can treat most combinations of confined, unconfined, leaky aquitard, partial penetration, and borehole storage conditions. RADFLOW is especially useful in stratified aquifer systems with no identifiable lateral boundaries. It has been verified to several analytical solutions and has been applied in the Snake River Plain Aquifer to develop and test conceptual models and provide estimates of aquifer properties. Because the model assumes axially symmetrical flow, it is limited to representing multiple aquifer layers that are laterally continuous.
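For context, the analytical baseline that curve matching rests on (and that numerical codes like RADFLOW are verified against) is the Theis solution. A minimal stdlib-only sketch of its well function and drawdown, not the RADFLOW model itself:

```python
import math

def theis_W(u, terms=80):
    """Theis well function W(u) = E1(u), evaluated from its
    convergent series W(u) = -gamma - ln u + sum (-1)^(n+1) u^n/(n n!)."""
    s = -0.5772156649015329 - math.log(u)   # -Euler gamma - ln(u)
    term = 1.0
    for n in range(1, terms + 1):
        term *= -u / n          # term is now (-u)^n / n!
        s -= term / n           # adds (-1)^(n+1) u^n / (n * n!)
    return s

def drawdown(Q, T, S, r, t):
    """Theis drawdown s = Q/(4*pi*T) * W(u) with u = r^2 S / (4 T t)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * theis_W(u)
```

The series converges quickly for the small u values typical of the curve-matching regime; tabulated values such as W(1) ≈ 0.2194 can be used as a spot check.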

4. Numerical analysis of Horvath-Kawazoe equation.

PubMed

Kowalczyk, P; Terzyk, A P; Gauden, P A; Solarz, L

2002-01-01

A new Determining Horvath-Kawazoe (DHK) program for the evaluation of pore-size distribution curve based on the HK method (J. Chem. Eng. Jpn. 16 (1983) 470) is described. The standard bisection procedure (Gerald, C.F., 1977. Applied Numerical Analysis, 2nd ed. Addison-Wesley, CA) is used as a kernel in the proposed algorithm. The calculation of the effective pore-size distribution and the comparative analysis with the previous data published by Horvath and co-workers (J. Chem. Eng. Jpn. 16 (1983) 470; Fraissard, J., Conner, C.W., 1997. Physical Adsorption: Experiment, Theory and Applications. Kluwer Academic, London) and recalculated by Do (Do, D.D., 1998. Adsorption Analysis: Equilibria and Kinetics. Imperial College Press, London) were done.
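The bisection kernel referred to above is simple enough to sketch in full; the objective function that the DHK program would solve for the effective pore width is left abstract here.

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Standard bisection root finder, the kind of kernel used by
    the DHK algorithm: halves the bracketing interval [a, b] until
    it is shorter than tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed by [a, b]")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)
```

Bisection halves the error at every step, so roughly 40 iterations suffice for 12-digit accuracy on a unit-length bracket.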

5. Star tracker error analysis: Roll-to-pitch nonorthogonality

NASA Technical Reports Server (NTRS)

Corson, R. W.

1979-01-01

An error analysis is described for an anomaly isolated in the star tracker software line-of-sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases, which implied that one or both of the star-tracker-measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of the error.
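A common numerical guard against exactly this failure mode (a cosine exceeding unity because nominally unit vectors are slightly too long) is to renormalize and clamp before taking the arccosine. This is a generic sketch, not the IMU software itself:

```python
import math

def los_cosine(u, v):
    """Cosine of the angle between two measured 'unit' vectors.
    Calibration errors (e.g. an uncorrected non-orthogonality
    matrix) can leave their lengths slightly above 1, so the raw
    dot product may exceed 1; renormalizing and clamping keeps
    acos() well defined."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    c = sum(a * b for a, b in zip(u, v)) / (nu * nv)
    return max(-1.0, min(1.0, c))
```

Without the normalization, two vectors of length 1.0001 pointing the same way would give a "cosine" of about 1.0002 and make acos() raise a domain error.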

6. An analysis of pilot error-related aircraft accidents

NASA Technical Reports Server (NTRS)

Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.

1974-01-01

A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: Critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable. Cluster analysis, which was an exploratory research tool that will lead to increased understanding and improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses. Pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.

7. Numerical analysis method for linear induction machines.

NASA Technical Reports Server (NTRS)

Elliott, D. G.

1972-01-01

A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.

8. Numerical and experimental analysis of spallation phenomena

Martin, Alexandre; Bailey, Sean C. C.; Panerai, Francesco; Davuluri, Raghava S. C.; Zhang, Huaibao; Vazsonyi, Alexander R.; Lippay, Zachary S.; Mansour, Nagi N.; Inman, Jennifer A.; Bathel, Brett F.; Splinter, Scott C.; Danehy, Paul M.

2016-12-01

The spallation phenomenon was studied through numerical analysis using a coupled Lagrangian particle tracking code and a hypersonic aerothermodynamics computational fluid dynamics solver. The results show that carbon emission from spalled particles results in a significant modification of the gas composition of the post-shock layer. Results from a test campaign at the NASA Langley HYMETS facility are presented. Using an automated image processing of short exposure images, two-dimensional velocity vectors of the spalled particles were calculated. In a 30-s test at 100 W/cm2 of cold-wall heat flux, more than 722 particles were detected, with an average velocity of 110 m/s.

10. Hybrid Experimental-Numerical Stress Analysis.

DTIC Science & Technology

1983-04-01

tension testings of excised strips of the cornea [17] and the sclera [18] yielded erroneous modulus of elasticity and Poisson's ratio by the... Clarke, ASTM STP 668, 1979, pp. 121-150. J. D. Landes, J. A. Begley and G. A. Clarke, ASTM STP 668, 1979, pp. 65-120. 28. Wilson, W. K., and Osias, J. R... Paris, ASTM STP 700, 1980, pp. 174-188. 33. Nishioka, T. and Atluri, S. N., "Numerical Analysis of Dynamic Crack Propagation: Generation and

11. Treatment of numerical overflow in simulating error performance of free-space optical communication

Li, Fei; Hou, Zaihong; Wu, Yi

2012-11-01

The gamma-gamma distribution model is widely used in numerical simulations of free-space optical communication systems. These simulations are often interrupted by numerical overflow exceptions when the distribution parameters are large. Based on former research, two modified models are presented using mathematical calculation software and a computer program. By means of substitution and recurrence, factors of the original model are transformed into corresponding logarithmic forms, and the potential overflow in the calculation is eliminated. Numerical verification confirms the practicability and accuracy of the modified models, and their advantages and disadvantages are listed; the proper model should be selected according to practical conditions. The two models are also applicable to other numerical simulations based on the gamma-gamma distribution, such as the outage probability and mean fade time of free-space optical communication.
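The logarithmic substitution can be illustrated on the leading factor of the gamma-gamma probability density, 2(αβ)^((α+β)/2)/(Γ(α)Γ(β)): computed directly, the gamma functions overflow for large α and β, while the log-domain form stays finite. This is a sketch of the general technique, not the paper's exact models:

```python
import math

def log_gamma_factor(alpha, beta):
    """Log of the gamma-gamma pdf prefactor
    2*(alpha*beta)^((alpha+beta)/2) / (Gamma(alpha)*Gamma(beta)),
    computed entirely in the log domain via lgamma so that large
    turbulence parameters never overflow a float."""
    return (math.log(2.0)
            + 0.5 * (alpha + beta) * math.log(alpha * beta)
            - math.lgamma(alpha) - math.lgamma(beta))
```

For alpha = beta = 300 the direct route would need Γ(300), which overflows double precision (math.gamma raises OverflowError above about 171), while the log-domain value is an unremarkable finite number.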

12. Gamma Ray Observatory (GRO) OBC attitude error analysis

NASA Technical Reports Server (NTRS)

Harman, R. R.

1990-01-01

This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of TRW error analysis and necessary ground simulations to understand the onboard attitude determination process are performed. In addition, a plan is generated for the in-flight calibration and validation of OBC computed attitudes. Pre-mission expected accuracies are summarized and sensitivity of onboard algorithms to sensor anomalies and filter tuning parameters are addressed.

13. Minimization of the numerical phase velocity error in Particle-In-Cell simulations for relativistic charged particle systems

Meyers, Michael; Huang, Chengkun; Albright, B. J.

2013-10-01

The microbunching instability arises when GeV electrons interact with their coherent synchrotron radiation (CSR). Accurate particle-in-cell (PIC) modeling of this instability requires a method where the numerical phase velocity of light is very close to its physical value. This is also advantageous for mitigating the effects of Numerical Cherenkov Radiation (NCR), arising when simulating highly relativistic particles in astrophysical and high energy density laboratory settings. It has been shown that the use of a weighted stencil when calculating fields from the Ampere and Faraday laws affords a solver with a tunable phase velocity. A numerical dispersion relation appropriate to the PIC algorithm with the 3D FV24 scheme has been derived. Stencil weights that minimize the phase velocity error for the CSR and NCR problems will be presented along with simulations demonstrating the comparative advantages of this approach. Work performed under the auspices of DOE by LANL and supported by LDRD.
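As background to phase-velocity tuning, the untuned baseline is instructive: the standard 1-D leapfrog (Yee-type) scheme obeys the dispersion relation sin(ωΔt/2) = (cΔt/Δx)·sin(kΔx/2), from which its numerical phase velocity follows. The sketch below shows only that baseline; it does not implement the weighted FV24 stencil of the abstract.

```python
import math

def numerical_phase_velocity(k, dx, dt, c=1.0):
    """Numerical phase velocity w/k of the standard 1-D Yee
    leapfrog scheme, from sin(w*dt/2) = (c*dt/dx)*sin(k*dx/2).
    Weighted-stencil schemes (e.g. FV24) tune this toward c."""
    s = (c * dt / dx) * math.sin(0.5 * k * dx)
    w = (2.0 / dt) * math.asin(s)
    return w / k
```

At the "magic" time step dt = dx/c the 1-D scheme is dispersion-free (phase velocity exactly c); for smaller dt the numerical phase velocity dips below c, which is the root of numerical Cherenkov radiation for highly relativistic particles.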

14. How psychotherapists handle treatment errors – an ethical analysis

PubMed Central

2013-01-01

Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

15. Numerical analysis of discrete fractional integrodifferential structural dampers

NASA Technical Reports Server (NTRS)

1987-01-01

This paper develops solution algorithms for handling the dynamic response of nonlinear structures containing discretely attached dampers modeled by fractional integrodifferential operators of the Grunwald-Liouville-Riemann type. The development consists of two levels of formulation: (1) numerical approximation of the fractional operators, and (2) the establishment of global-level implicit schemes enabling the solution of nonlinear structural formulations. To generalize the overall results, error estimates are derived for the fractional operator approximation algorithm. These enable an ongoing optimization of solution efficiency for a given error tolerance. To benchmark the scheme, the results of several numerical experiments are presented, illustrating the numerical characteristics of the overall formulation.
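The Grunwald-type numerical approximation of a fractional operator mentioned above can be sketched with the standard coefficient recurrence w_k = w_{k-1}·(1 - (α+1)/k); this is the generic textbook scheme, not the paper's specific algorithm:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    via the standard recurrence w_k = w_{k-1} * (1 - (alpha+1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f_vals, alpha, h):
    """Approximate the fractional derivative D^alpha f at the last
    point of a uniform grid with spacing h, from samples f_vals."""
    n = len(f_vals)
    w = gl_weights(alpha, n - 1)
    return sum(w[k] * f_vals[n - 1 - k] for k in range(n)) / h ** alpha
```

As a sanity check, alpha = 1 collapses the weights to (1, -1, 0, 0, ...), i.e. the ordinary backward difference.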

16. Low noise propeller design using numerical analysis

Humpert, Bryce

17. Error Estimation and h-Adaptivity for Optimal Finite Element Analysis

NASA Technical Reports Server (NTRS)

Cwik, Tom; Lou, John

1997-01-01

The objective of adaptive meshing and automatic error control in finite element analysis is to eliminate the need for the application engineer to re-mesh and re-run design simulations to verify numerical accuracy. The user should only need to enter the component geometry and a coarse finite element mesh. The software will then autonomously and adaptively refine this mesh where needed, reducing the error in the fields to a user-prescribed value. The ideal end result of the simulation is a measurable quantity (e.g. scattered field, input impedance), calculated to a prescribed error, in less time and less machine memory than if the user applied typical uniform mesh refinement by hand. It would also allow for the simulation of larger objects, since an optimal mesh is created.
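The adaptive loop described above, estimate the per-element error, refine where needed, repeat until the prescribed tolerance is met, can be sketched generically; the error estimator and refinement rule below are toy stand-ins for the real finite element machinery.

```python
def adapt(estimate_error, refine, mesh, tol, max_passes=20):
    """Generic h-adaptivity loop: estimate per-element error,
    refine the elements above tolerance, repeat until the whole
    mesh meets the user-prescribed error (or passes run out)."""
    for _ in range(max_passes):
        errs = estimate_error(mesh)
        if max(errs) <= tol:
            return mesh
        flagged = [i for i, e in enumerate(errs) if e > tol]
        mesh = refine(mesh, flagged)
    return mesh

# Toy 1-D stand-ins: a mesh is a list of element widths, the
# "error" of an element is width**2, refinement halves an element.
est = lambda m: [w * w for w in m]
ref = lambda m, idxs: [h for i, w in enumerate(m)
                       for h in ((w / 2, w / 2) if i in set(idxs) else (w,))]
out = adapt(est, ref, [1.0], 0.01)
```

Only over-tolerance elements are split, which is the point: refinement effort concentrates where the estimator says the error lives, instead of uniform refinement by hand.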

18. Doctors' duty to disclose error: a deontological or Kantian ethical analysis.

PubMed

Bernstein, Mark; Brown, Barry

2004-05-01

Medical (surgical) error is being talked about more openly and, besides being the subject of retrospective reviews, is now the subject of prospective research. Disclosure of error has been a difficult issue because of fear of embarrassment for doctors in the eyes of their peers, and fear of punitive action by patients, consisting of medicolegal action and/or complaints to doctors' governing bodies. This paper examines physicians' and surgeons' duty to disclose error from an ethical standpoint, specifically by applying the moral philosophical theory espoused by Immanuel Kant (i.e., deontology). The purpose of this discourse is to apply moral philosophical analysis to a delicate but important issue, a matter that all physicians and surgeons will have to confront, probably numerous times, in their professional careers.

19. Position determination and measurement error analysis for the spherical proof mass with optical shadow sensing

Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin

2016-09-01

To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to provide the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass is addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution for the three-dimensional position can be attained. Third, under the assumption of Gaussian beams, error propagation models are given for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of beam direction. Finally, numerical simulations that take into account the model uncertainty of beam divergence, the spherical edge, and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, a simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error in the output of each individual sensor.

20. The impact of observation errors on analysis error and forecast skill investigated with an Observing System Simulation Experiment

Prive, N.; Errico, R. M.; Tai, K.

2012-12-01

A global observing system simulation experiment (OSSE) has been developed at the NASA Global Modeling and Assimilation Office using the Global Earth Observing System (GEOS-5) forecast model and Gridpoint Statistical Interpolation data assimilation. A 13-month integration of the European Centre for Medium-Range Weather Forecasts operational forecast model is used as the Nature Run. Synthetic observations for conventional and radiance data types are interpolated from the Nature Run, with calibrated observation errors added to reproduce realistic statistics of analysis increment and observation innovation. It is found that correlated observation errors are necessary in order to replicate the statistics of analysis increment and observation innovation found with real data. The impact of these observation errors is explored in a series of OSSE experiments in which the magnitude of the applied observation error is varied from zero to double the calibrated values while the observation error covariances of the GSI are held fixed. Increased observation error has a strong effect on the variance of the analysis increment and observation innovation fields, but a much weaker impact on the root mean square (RMS) analysis error. For the 120 hour forecast, only slight degradation of forecast skill in terms of anomaly correlation and RMS forecast error is observed in the midlatitudes, and there is no appreciable impact of observation error on forecast skill in the tropics.
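Correlated observation errors of the kind found necessary here are conventionally generated by coloring independent Gaussian draws with a Cholesky factor of the desired error covariance. A small stdlib-only sketch of that standard technique (not the GMAO code):

```python
import math
import random

def cholesky(cov):
    """Lower-triangular L with L @ L^T = cov (plain-Python version
    for a small symmetric positive-definite matrix)."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def correlated_errors(cov, rng):
    """One draw of correlated observation errors: color independent
    N(0,1) samples z with the Cholesky factor, giving e = L z."""
    L = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in cov]
    return [sum(L[i][k] * z[k] for k in range(i + 1))
            for i in range(len(cov))]
```

For a 2x2 covariance with unit variances and correlation 0.8, the factor is L = [[1, 0], [0.8, 0.6]], so the second error shares 80% of the first draw, which is exactly the cross-channel structure a calibrated OSSE needs to inject.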

1. Nozzle Numerical Analysis Of The Scimitar Engine

Battista, F.; Marini, M.; Cutrone, L.

2011-05-01

This work describes part of the activities on the LAPCAT-II A2 vehicle, in which, starting from the available conceptual vehicle design and the related pre-cooled turbo-ramjet engine called SCIMITAR, the assumptions made for the performance figures of different components during the LAPCAT-I iteration process will be assessed in more detail. This paper presents a numerical analysis aimed at the design optimization of the nozzle contour of the LAPCAT A2 SCIMITAR engine designed by Reaction Engines Ltd. (REL) (see Figure 1). In particular, the nozzle shape optimization process is presented for cruise conditions. All the computations have been carried out with the CIRA C3NS code in non-equilibrium conditions. The effect of considering detailed or reduced chemical kinetic schemes has been analyzed, with a particular focus on the production of pollutants. An analysis of engine performance parameters, such as thrust and combustion efficiency, has been carried out.

2. Numerical Analysis of Convection/Transpiration Cooling

NASA Technical Reports Server (NTRS)

Glass, David E.; Dilley, Arthur D.; Kelly, H. Neale

1999-01-01

An innovative concept utilizing the natural porosity of refractory-composite materials and hydrogen coolant to provide CONvective and TRANspiration (CONTRAN) cooling and oxidation protection has been numerically studied for surfaces exposed to a high heat flux high temperature environment such as hypersonic vehicle engine combustor walls. A boundary layer code and a porous media finite difference code were utilized to analyze the effect of convection and transpiration cooling on surface heat flux and temperature. The boundary layer code determined that transpiration flow is able to provide blocking of the surface heat flux only if it is above a minimum level due to heat addition from combustion of the hydrogen transpirant. The porous media analysis indicated that cooling of the surface is attained with coolant flow rates that are in the same range as those required for blocking, indicating that a coupled analysis would be beneficial.

4. Dynamic Numerical Analysis of Steel Footbridge

Major, Maciej; Minda, Izabela; Major, Izabela

2017-06-01

The study presents a numerical analysis of an arched footbridge designed in two variants, made of steel and of aluminium. The first part presents the criteria for evaluating the comfort of use of footbridges. The study examined an arched footbridge with a span of 24 m along the axis and a width of 1.4 m. The arch geometry was formed as part of a circle with radius r = 20 m, cut off by a chord with length equal to the calculation length of the girders. The model of the analysed footbridge was subjected to the dynamic effects of wind and of pedestrian traffic with variable flexibility. The analyses used Robot Structural Analysis software.

5. Numerical analysis of ellipsometric critical adsorption data

Smith, Dan S. P.; Law, Bruce M.; Smock, Martin; Landau, David P.

1997-01-01

A recent study [Dan S. P. Smith and Bruce M. Law, Phys. Rev. E 54, 2727 (1996)] presented measurements of the ellipsometric coefficient at the Brewster angle ρ-bar on the liquid-vapor surface of four different binary liquid mixtures in the vicinity of their liquid-liquid critical point and analyzed the data analytically for large reduced temperatures t. In the current report we analyze this (ρ-bar, t) data numerically over the entire range of t. Theoretical universal surface scaling functions P±(x) from a Monte Carlo (MC) simulation [M. Smock, H. W. Diehl, and D. P. Landau, Ber. Bunsenges. Phys. Chem. 98, 486 (1994)] and a renormalization-group (RG) calculation [H. W. Diehl and M. Smock, Phys. Rev. B 47, 5841 (1993); 48, 6470(E) (1993)] are used in the numerical integration of Maxwell's equations to provide theoretical (ρ-bar, t) curves that can be compared directly with the experimental data. While both the MC and RG curves are in qualitative agreement with the experimental data, the agreement is generally found to be better for the MC curves. However, systematic discrepancies are found in the quantitative comparison between the MC and experimental (ρ-bar, t) curves, and it is determined that these discrepancies are too large to be due to experimental error. Finally, it is demonstrated that ρ-bar can be rescaled to produce an approximately universal ellipsometric curve as a function of the single variable ξ±/λ, where ξ is the correlation length and λ is the wavelength of light. The position of the maximum of this curve in the one-phase region, (ξ+/λ)peak, is approximately a universal number. It is determined that (ξ+/λ)peak depends primarily on the ratio c+/P∞,+, where P+(x) ≅ c+ x^(-β/ν) for x << 1 and P+(x) ≅ P∞,+ e^(-x) for x >> 1. This enables the experimental estimate c+/P∞,+ = 0.90 ± 0.24, which is significantly larger than the MC and RG values of 0.577 and 0.442, respectively.

6. Doppler imaging of chemical spots on magnetic Ap/Bp stars. Numerical tests and assessment of systematic errors

Kochukhov, O.

2017-01-01

Context. Doppler imaging (DI) is a powerful spectroscopic inversion technique that enables conversion of a line profile time series into a two-dimensional map of the stellar surface inhomogeneities. DI has been repeatedly applied to reconstruct chemical spot topologies of magnetic Ap/Bp stars with the goal of understanding variability of these objects and gaining an insight into the physical processes responsible for spot formation. Aims: In this paper we investigate the accuracy of chemical abundance DI and assess the impact of several different systematic errors on the reconstructed spot maps. Methods: We have simulated spectroscopic observational data for two different Fe spot distributions with a surface abundance contrast of 1.5 dex in the presence of a moderately strong dipolar magnetic field. We then reconstructed chemical maps using different sets of spectral lines and making different assumptions about line formation in the inversion calculations. Results: Our numerical experiments demonstrate that a modern DI code successfully recovers the input chemical spot distributions comprised of multiple circular spots at different latitudes or an element overabundance belt at the magnetic equator. For the optimal reconstruction based on half a dozen spectral intervals, the average reconstruction errors do not exceed 0.10 dex. The errors increase to about 0.15 dex when abundance distributions are recovered from a few and/or blended spectral lines. Ignoring a 2.5 kG dipolar magnetic field in chemical abundance DI leads to an average relative error of 0.2 dex and maximum errors of 0.3 dex. Similar errors are encountered if a DI inversion is carried out neglecting a non-uniform continuum brightness distribution and variation of the local atmospheric structure. None of the considered systematic effects lead to major spurious features in the recovered abundance maps. Conclusions: This series of numerical DI simulations proves that inversions based on one or two spectral

7. Numerical Analysis of a Finite Element/Volume Penalty Method

Maury, Bertrand

The penalty method makes it possible to incorporate a large class of constraints in general purpose Finite Element solvers like freeFEM++. We present here some contributions to the numerical analysis of this method. We propose an abstract framework for this approach, together with some general error estimates based on the penalty parameter ɛ and the space discretization parameter h. As this work is motivated by the possibility of handling constraints like rigid motion for fluid-particle flows, we pay special attention to a model problem of this kind, where the constraint is prescribed over a subdomain. We show how the abstract estimate can be applied to this situation in the case where a non-body-fitted mesh is used. In addition, we describe how this method provides an approximation of the Lagrange multiplier associated with the constraint.
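The flavor of an error estimate in ɛ can be seen on a scalar caricature of my own (not the paper's model problem): minimizing (u-1)² subject to u = 0, replaced by the penalized problem (u-1)² + (1/ɛ)u², has the closed-form minimizer u_ɛ = ɛ/(1+ɛ), so the constraint violation vanishes linearly in ɛ, and (2/ɛ)u_ɛ → 2 recovers the Lagrange multiplier of the constrained problem.

```python
def penalized_minimizer(eps):
    """Minimizer of (u - 1)**2 + (1/eps) * u**2, a scalar caricature
    of penalizing the constraint u = 0.  The constrained solution is
    u = 0; the penalized one is u_eps = eps/(1 + eps), so the error
    decays linearly in eps."""
    return eps / (1.0 + eps)

def multiplier_estimate(eps):
    """(2/eps) * u_eps approximates the Lagrange multiplier (= 2)."""
    return 2.0 / eps * penalized_minimizer(eps)

# Constraint violation for eps = 1e-1 ... 1e-4: shrinks by ~10x each step.
errors = [penalized_minimizer(10.0 ** -k) for k in range(1, 5)]
```

In the full method the same trade-off appears with the additional space discretization error in h, which is why the abstract estimates couple ɛ and h.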

8. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

NASA Technical Reports Server (NTRS)

Putney, B.

1994-01-01

The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and
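
The decomposition described above, a noise component from the adjusted parameters plus a "consider" component from the unadjusted ones, can be sketched in a toy linear analysis (the model, parameter names, and uncertainties below are invented for illustration, not ORAN's models):

```python
import numpy as np

# Toy consider-covariance analysis: measurements y = A @ x + C @ p + noise,
# where x is estimated (adjusted) and p is held at an assumed value
# (unadjusted) with covariance P_pp.
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.ones_like(t), t])  # adjusted parameters: bias, rate
C = (t ** 2)[:, None]                      # unadjusted parameter: quadratic term
sigma = 0.01                               # measurement noise, 1-sigma
P_pp = np.array([[0.05 ** 2]])             # assumed uncertainty of p

W = np.eye(t.size) / sigma ** 2
P_noise = np.linalg.inv(A.T @ W @ A)       # component due to measurement noise
S = P_noise @ A.T @ W @ C                  # sensitivity of the estimate to p
P_consider = S @ P_pp @ S.T                # component due to unadjusted-parameter error
P_total = P_noise + P_consider
print(np.sqrt(np.diag(P_total)))           # total 1-sigma errors on bias and rate
```

The first term is what an orbit determination program normally reports; the second is the additional error budget a simulation tool of this kind evaluates.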

10. Numerical modeling techniques for flood analysis

Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

2016-12-01

Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models have recently been developed for open channel flows but not yet for floodplains. Hence, it is suggested that a 3D floodplain model should be developed, considering all the hydrological and high-resolution topographic parameters discussed in this review, to enhance understanding of the causes and effects of flooding.

11. Error analysis for NMR polymer microstructure measurement without calibration standards.

PubMed

Qiu, XiaoHua; Zhou, Zhe; Gobbi, Gian; Redwine, Oscar D

2009-10-15

We report an error analysis method for primary analytical methods in the absence of calibration standards. Quantitative (13)C NMR analysis of ethylene/1-octene (E/O) copolymers is given as an example. Because the method is based on a self-calibration scheme established by counting, it is a measure of accuracy rather than precision. We demonstrate that it is self-consistent and neither underestimates nor excessively overestimates the experimental errors. We also show that the method identified previously unknown systematic biases in an NMR instrument. The method can eliminate unnecessary data averaging, saving valuable NMR resources. The accuracy estimate proposed is not unique to (13)C NMR spectroscopy of E/O copolymers but should be applicable to all other measurement systems where the accuracy of a subset of the measured responses can be established.

12. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

USGS Publications Warehouse

Hill, M.C.

1989-01-01

Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author
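
The evaluation step, checking an approximate (linearized) interval against Monte Carlo runs, can be sketched with a made-up one-parameter head model h = H0 - qL/K with lognormal conductivity K (all numbers are invented for illustration):

```python
import numpy as np

H0, q, L = 100.0, 2e-5, 500.0     # head (m), specific flux, distance: toy values
mu, s = np.log(1e-3), 0.3         # ln K ~ N(mu, s^2), K in m/s

rng = np.random.default_rng(42)
K = np.exp(rng.normal(mu, s, 100_000))
h = H0 - q * L / K                # Monte Carlo sample of simulated heads

# Linearized (approximate) 95% interval, propagating dh/d(ln K) at K = e^mu:
h_lin = H0 - q * L / np.exp(mu)
dh = (q * L / np.exp(mu)) * s
lin_lo, lin_hi = h_lin - 1.96 * dh, h_lin + 1.96 * dh
mc_lo, mc_hi = np.percentile(h, [2.5, 97.5])   # Monte Carlo reference interval
print((lin_lo, lin_hi), (mc_lo, mc_hi))
```

The Monte Carlo interval is asymmetric about the nominal head while the linearized one is not, which is exactly the kind of discrepancy an accuracy evaluation of approximate intervals is designed to expose.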

13. Efficient Reduction and Analysis of Model Predictive Error

Doherty, J.

2006-12-01

dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of that dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analysis is demonstrated using a number of real-world and synthetic examples.

14. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

NASA Technical Reports Server (NTRS)

Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

2009-01-01

The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun li ne are achieved. During a limited time period in the early part of co mmissioning, five maneuvers are performed to raise the perigee radius to 1.2 R E, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to p rovide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from bo th the Deep Space Network and Space Network. This paper summarizes th e results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraf t state, plan each perigee raising maneuver, and support thruster cal ibration during this phase. The primary focus of this study is the na vigation accuracy required to plan the first and the final perigee ra ising maneuvers. Absolute and relative position and velocity error hi stories are generated for all cases and summarized in terms of the ma ximum root-sum-square consider and measurement noise error contributi ons over the definitive and predictive arcs and at discrete times inc luding the maneuver planning and execution times. Details of the meth odology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

15. Eigenvector method for umbrella sampling enables error analysis

Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

2016-08-01

Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence.
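
The combination step can be sketched as a small eigenproblem: in eigenvector formulations of this kind, the window weights z solve z F = z for a row-stochastic overlap matrix F estimated from the sampled data. The 3-window matrix below is invented for illustration, not estimated from any simulation:

```python
import numpy as np

F = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])

vals, vecs = np.linalg.eig(F.T)        # left eigenvectors of F
k = np.argmin(np.abs(vals - 1.0))      # stochastic matrix: eigenvalue 1 exists
z = np.real(vecs[:, k])
z /= z.sum()                           # normalize the window weights
print(z, np.allclose(z @ F, z))
```

With the weights in hand, per-window error contributions can then be estimated by perturbing the entries of F, which is what makes the error analysis tractable in this formulation.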

Marengo, Massimo; Karovska, Margarita; Sasselov, Dimitar D.; Sanchez, Mayly

2004-03-01

We derive an analytic solution for the minimization problem in the geometric Baade-Wesselink method. This solution allows deriving the distance and mean radius of a pulsating star by fitting its velocity curve and angular diameter measured interferometrically. The method also provides analytic solutions for the confidence levels of the best-fit parameters and accurate error estimates for the Baade-Wesselink solution. Special care is taken in the analysis of the various error sources in the final solution, among which are the uncertainties due to the projection factor, the limb darkening, and the velocity curve. We also discuss the importance of the phase shift between the stellar light curve and the velocity curve as a potential error source in the geometric Baade-Wesselink method. We finally discuss the case of the classical Cepheid ζ Gem, applying our method to the measurements derived with the Palomar Testbed Interferometer. We show how a careful treatment of the measurement errors can be used to discriminate between different models of limb darkening by using interferometric techniques.
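
The geometric idea, an angular diameter that is linear in the radius change integrated from the velocity curve, can be sketched with synthetic data (all stellar numbers below are made up, and the projection factor is folded into the velocity):

```python
import numpy as np

# theta(t) = 2 * (R0 + dR(t)) / d, so fitting theta against dR(t) gives the
# distance d from the slope and the mean radius R0 from the intercept.
d_true, R0_true = 3.0e18, 4.0e12          # cm, toy Cepheid-like values
P = 8.64e5                                # s, a 10-day pulsation period
phase = np.linspace(0.0, 1.0, 40)
v = -3.0e6 * np.sin(2.0 * np.pi * phase)  # cm/s, pulsation velocity
dR = np.cumsum(v) * (phase[1] - phase[0]) * P   # crude integral of v dt
theta = 2.0 * (R0_true + dR) / d_true     # "observed" angular diameters, rad

slope, intercept = np.polyfit(dR, theta, 1)
d_fit, R0_fit = 2.0 / slope, intercept / slope
print(d_fit / d_true, R0_fit / R0_true)   # both close to 1
```

In the noise-free toy case the fit recovers both parameters; the paper's contribution is the analytic treatment of what happens when the velocity curve, projection factor, limb darkening, and phase shift each carry errors.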

18. Structure function analysis of mirror fabrication and support errors

Hvisc, Anastacia M.; Burge, James H.

2007-09-01

Telescopes are ultimately limited by atmospheric turbulence, which is commonly characterized by a structure function. The telescope optics will not further degrade the performance if their errors are small compared to the atmospheric effects. Any further improvement to the mirrors is not economical since there is no increased benefit to performance. Typically the telescope specification is written in terms of an image size or encircled energy and is derived from the best seeing that is expected at the site. Ideally, the fabrication and support errors should never exceed atmospheric turbulence at any spatial scale, so it is instructive to look at how these errors affect the structure function of the telescope. The fabrication and support errors are most naturally described by Zernike polynomials or by bending modes for the active mirrors. This paper illustrates an efficient technique for relating this modal analysis to wavefront structure functions. Data is provided for efficient calculation of structure function given coefficients for Zernike annular polynomials. An example of this procedure for the Giant Magellan Telescope primary mirror is described.

20. Repeated measurement sampling in genetic association analysis with genotyping errors.

PubMed

Lai, Renzhen; Zhang, Hong; Yang, Yaning

2007-02-01

Genotype misclassification occurs frequently in human genetic association studies. When cases and controls are subject to the same misclassification model, Pearson's chi-square test has the correct type I error but may lose power. Most current methods adjusting for genotyping errors assume that the misclassification model is known a priori or can be assessed by a gold standard instrument. But in practical applications, the misclassification probabilities may not be completely known or the gold standard method can be too costly to be available. The repeated measurement design provides an alternative approach for identifying misclassification probabilities. With this design, a proportion of the subjects are measured repeatedly (five or more repeats) for the genotypes when the error model is completely unknown. We investigate the applications of the repeated measurement method in genetic association analysis. Cost-effectiveness study shows that if the phenotyping-to-genotyping cost ratio or the misclassification rates are relatively large, the repeat sampling can gain power over the regular case-control design. We also show that the power gain is not sensitive to the genetic model, genetic relative risk and the population high-risk allele frequency, all of which are typically important ingredients in association studies. An important implication of this result is that whatever the genetic factors are, the repeated measurement method can be applied if the genotyping errors must be accounted for or the phenotyping cost is high.

1. Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.

PubMed

Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing

2013-09-20

Because the wavefront error of a KH(2)PO(4) (KDP) crystal is difficult to control in the face fly cutting process owing to surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. Then a three-axis servo technique is utilized to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as error in the straightness of guide ways, spindle rotation error, and error caused by ambient environment variance, three other errors, the in situ measurement error, the position deviation error, and the servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with a size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error did not worsen when the frequency of the cutting tool trajectory was controlled by use of a low-pass filter.

2. Effect of rawinsonde errors on rocketsonde density and pressure profiles: An error analysis of the Rawinsonde System

NASA Technical Reports Server (NTRS)

Luers, J. K.

1980-01-01

An initial value of pressure is required to derive the density and pressure profiles of the rocketborne rocketsonde sensor. This tie-on pressure value is obtained from the nearest rawinsonde launch at an altitude where overlapping rawinsonde and rocketsonde measurements occur. An error analysis was performed of the error sources in these sensors that contribute to the error in the tie-on pressure. It was determined that significant tie-on pressure errors result from radiation errors in the rawinsonde rod thermistor and from temperature calibration bias errors. To minimize the effect of these errors, radiation corrections should be made to the rawinsonde temperature and the tie-on altitude should be chosen at the lowest altitude of overlapping data. Under these conditions the tie-on error, and consequently the resulting error in the Datasonde pressure and density profiles, is less than 1%. The effect of rawinsonde pressure and temperature errors on the wind and temperature versus height profiles of the rawinsonde was also determined.
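
Why the tie-on error propagates essentially unchanged can be seen from the hydrostatic integration: the pressure at every altitude scales linearly with the tie-on value, so a 1% tie-on error stays a 1% error in pressure and, through the gas law, in density. A toy isothermal-layer sketch (all profile numbers are invented):

```python
import numpy as np

g, R = 9.81, 287.05                        # m/s^2, J/(kg K)
h = np.linspace(20e3, 60e3, 200)           # m, altitudes above the tie-on level
T = np.full_like(h, 250.0)                 # K, toy temperature profile
p_tie = 5474.0                             # Pa, tie-on pressure at 20 km

# integrate d(ln p) = -g/(R T) dh upward from the tie-on altitude
dlnp = np.cumsum(-g * np.gradient(h) / (R * T))
p = p_tie * np.exp(dlnp - dlnp[0])
p_biased = 1.01 * p_tie * np.exp(dlnp - dlnp[0])   # 1% tie-on pressure error
rho = p / (R * T)                          # density from the gas law
print(np.abs(p_biased / p - 1.0).max())    # relative error stays 1% everywhere
```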

3. Landmarking the brain for geometric morphometric analysis: an error study.

PubMed

Chollet, Madeleine B; Aldridge, Kristina; Pangborn, Nicole; Weinberg, Seth M; Deleon, Valerie B

2014-01-01

Neuroanatomic phenotypes are often assessed using volumetric analysis. Although powerful and versatile, this approach is limited in that it is unable to quantify changes in shape, to describe how regions are interrelated, or to determine whether changes in size are global or local. Statistical shape analysis using coordinate data from biologically relevant landmarks is the preferred method for testing these aspects of phenotype. To date, approximately fifty landmarks have been used to study brain shape. Of the studies that have used landmark-based statistical shape analysis of the brain, most have not published protocols for landmark identification or the results of reliability studies on these landmarks. The primary aims of this study were two-fold: (1) to collaboratively develop detailed data collection protocols for a set of brain landmarks, and (2) to complete an intra- and inter-observer validation study of the set of landmarks. Detailed protocols were developed for 29 cortical and subcortical landmarks using a sample of 10 boys aged 12 years old. Average intra-observer error for the final set of landmarks was 1.9 mm with a range of 0.72 mm-5.6 mm. Average inter-observer error was 1.1 mm with a range of 0.40 mm-3.4 mm. This study successfully establishes landmark protocols with a minimal level of error that can be used by other researchers in the assessment of neuroanatomic phenotypes.
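
The observer-error metric used in studies of this kind can be sketched as the mean Euclidean distance between repeated placements of the same landmarks (the coordinates below are made up, in mm):

```python
import numpy as np

# Two placement trials of three landmarks by the same observer (mm).
trial1 = np.array([[10.2, 33.1, 5.0], [42.7, 8.8, 19.4], [25.0, 25.0, 25.0]])
trial2 = np.array([[10.9, 33.5, 5.2], [43.1, 9.0, 19.0], [25.6, 24.2, 25.3]])

per_landmark = np.linalg.norm(trial1 - trial2, axis=1)  # error per landmark, mm
print(per_landmark, per_landmark.mean())                # intra-observer error
```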

4. Error threshold in optimal coding, numerical criteria, and classes of universalities for complexity

Saakian, David B.

2005-01-01

The free energy of the random energy model at the transition point between the ferromagnetic and spin glass phases is calculated. At this point, equivalent to the decoding error threshold in optimal codes, the free energy has finite-size corrections proportional to the square root of the number of degrees of freedom. The response of the magnetization to an external ferromagnetic field is maximal at values of magnetization equal to one-half. We give several criteria of complexity and define different universality classes. According to our classification, at the lowest class of complexity are random graphs, Markov models, and hidden Markov models. At the next level is the Sherrington-Kirkpatrick spin glass, connected to neuron-network models. On a higher level are critical theories, the spin glass phase of the random energy model, percolation, and self-organized criticality. The top level class involves highly optimized tolerance design, error thresholds in optimal coding, language, and, maybe, financial markets. Living systems are also related to the last class. The concept of antiresonance is suggested for complex systems.

5. Verifying the error bound of numerical computation implemented in computer systems

DOEpatents

2013-03-12

A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
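
The split-and-bound step can be sketched with a crude per-segment interval bound (the polynomial and segments below are illustrative; a real verification tool uses much tighter bounds and handles the bounded-function terms):

```python
# Bound p(x) = 1 - 2x + x^3 on [0, 2] by splitting the domain into
# non-overlapping segments and bounding each monomial on each segment.
def monomial_bounds(lo, hi, k):
    # range of x^k on [lo, hi], assuming 0 <= lo <= hi
    return lo**k, hi**k

def poly_upper_bound(coeffs, lo, hi):
    # coeffs[k] multiplies x^k; pick the worst-case end of each monomial range
    ub = 0.0
    for k, c in enumerate(coeffs):
        mlo, mhi = monomial_bounds(lo, hi, k)
        ub += c * (mhi if c >= 0 else mlo)
    return ub

coeffs = [1.0, -2.0, 0.0, 1.0]            # 1 - 2x + x^3
segments = [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5), (1.5, 2.0)]
bounds = [poly_upper_bound(coeffs, a, b) for a, b in segments]
print(bounds)   # a valid upper bound for p on each segment
```

Splitting matters because the bound on each short segment is far tighter than a single bound over the whole domain, which is what lets the tool flag only the segments that actually violate the bounding condition.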

6. On the use of stability regions in the numerical analysis of initial value problems

Lenferink, H. W. J.; Spijker, M. N.

1991-07-01

This paper deals with the stability analysis of one-step methods in the numerical solution of initial (-boundary) value problems for linear, ordinary, and partial differential equations. Restrictions on the stepsize are derived which guarantee the rate of error growth in these methods to be of moderate size. These restrictions are related to the stability region of the method and to numerical ranges of matrices stemming from the differential equation under consideration. The errors in the one-step methods are measured in arbitrary norms (not necessarily generated by an inner product). The theory is illustrated in the numerical solution of the heat equation and some other differential equations, where the error growth is measured in the maximum norm.
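
The flavor of such stepsize restrictions can be reproduced with the standard textbook case (not the paper's general one-step analysis): explicit Euler for the heat equation, where the mesh ratio dt/h^2 <= 1/2 separates moderate error growth from blow-up.

```python
import numpy as np

def max_after_steps(r, n=50, steps=400):
    # Explicit Euler for u_t = u_xx with zero Dirichlet boundaries and
    # mesh ratio r = dt/h^2. The initial data carry a tiny highest-frequency
    # component so any instability shows up deterministically.
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]
    u = np.sin(np.pi * x) + 1e-8 * (-1.0) ** np.arange(n)
    for _ in range(steps):
        up = np.pad(u, 1)                      # zero boundary values
        u = u + r * (up[2:] - 2.0 * u + up[:-2])
    return np.abs(u).max()

print(max_after_steps(0.45), max_after_steps(0.55))  # stable vs explosive
```

Just below the threshold the solution decays smoothly; just above it, the highest grid mode is amplified every step and the error growth is anything but moderate.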

7. Fourier analysis of numerical algorithms for the Maxwell equations

NASA Technical Reports Server (NTRS)

Liu, Yen

1993-01-01

The Fourier method is used to analyze the dispersive, dissipative, and isotropy errors of various spatial and time discretizations applied to the Maxwell equations on multi-dimensional grids. Both Cartesian grids and non-Cartesian grids based on hexagons and tetradecahedra are studied and compared. The numerical errors are quantitatively determined in terms of phase speed, wave number, propagation direction, grid spacings, and CFL number. The study shows that centered schemes are more efficient than upwind schemes. The non-Cartesian grids yield superior isotropy and higher accuracy than the Cartesian ones. For the centered schemes, the staggered grids produce smaller errors than the unstaggered ones. A new unstaggered scheme, which has all the best properties, is introduced. The study also demonstrates that a proper choice of time discretization can reduce the overall numerical errors due to the spatial discretization.
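
The kind of quantitative result such a Fourier analysis yields can be sketched in the simplest setting, 1D advection with second-order centered differences, for which a mode exp(ikx) travels at the numerical phase speed c sin(kh)/(kh): long waves move at nearly the correct speed and short waves lag.

```python
import numpy as np

c, h = 1.0, 0.1                            # wave speed and grid spacing
k = np.linspace(0.1, np.pi / h, 200)       # resolvable wavenumbers
phase_speed = c * np.sin(k * h) / (k * h)  # exact result for this scheme
print(phase_speed[0], phase_speed[-1])     # ~c for kh << 1, ~0 at kh = pi
```

The cited study performs the analogous calculation for the Maxwell equations, where the error additionally depends on the propagation direction and the grid topology.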

8. Computing the surveillance error grid analysis: procedure and examples.

PubMed

Kovatchev, Boris P; Wakeman, Christian A; Breton, Marc D; Kost, Gerald J; Louie, Richard F; Tran, Nam K; Klonoff, David C

2014-07-01

The surveillance error grid (SEG) analysis is a tool for analysis and visualization of blood glucose monitoring (BGM) errors, based on the opinions of 206 diabetes clinicians who rated 4 distinct treatment scenarios. Resulting from this large-scale inquiry is a matrix of 337 561 risk ratings, 1 for each pair of (reference, BGM) readings ranging from 20 to 580 mg/dl. The computation of the SEG is therefore complex and in need of automation. The SEG software introduced in this article automates the task of assigning a degree of risk to each data point for a set of measured and reference blood glucose values so that the data can be distributed into 8 risk zones. The software's 2 main purposes are to (1) distribute a set of BG Monitor data into 8 risk zones ranging from none to extreme and (2) present the data in a color coded display to promote visualization. Besides aggregating the data into 8 zones corresponding to levels of risk, the SEG computes the number and percentage of data pairs in each zone and the number/percentage of data pairs above/below the diagonal line in each zone, which are associated with BGM errors creating risks for hypo- or hyperglycemia, respectively. To illustrate the action of the SEG software we first present computer-simulated data stratified along error levels defined by ISO 15197:2013. This allows the SEG to be linked to this established standard. Further illustration of the SEG procedure is done with a series of previously published data, which reflect the performance of BGM devices and test strips under various environmental conditions. We conclude that the SEG software is a useful addition to the SEG analysis presented in this journal, developed to assess the magnitude of clinical risk from analytically inaccurate data in a variety of high-impact situations such as intensive care and disaster settings.

9. Error analysis for matrix elastic-net regularization algorithms.

PubMed

Li, Hong; Chen, Na; Li, Luoqing

2012-05-01

Elastic-net regularization is a successful approach in statistical modeling. It can avoid large variations which occur in estimating complex models. In this paper, elastic-net regularization is extended to a more general setting, the matrix recovery (matrix completion) setting. Based on a combination of the nuclear-norm minimization and the Frobenius-norm minimization, we consider the matrix elastic-net (MEN) regularization algorithm, which is an analog to the elastic-net regularization scheme from compressive sensing. Some properties of the estimator are characterized by the singular value shrinkage operator. We estimate the error bounds of the MEN regularization algorithm in the framework of statistical learning theory. We compute the learning rate by estimates of the Hilbert-Schmidt operators. In addition, an adaptive scheme for selecting the regularization parameter is presented. Numerical experiments demonstrate the superiority of the MEN regularization algorithm.
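
The singular value shrinkage operator mentioned above can be sketched directly: take the SVD and soft-threshold the singular values (the threshold value below is arbitrary):

```python
import numpy as np

def svt(M, tau):
    # singular value shrinkage: soft-threshold the singular values by tau
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 4))
M_shrunk = svt(M, tau=1.0)
print(np.linalg.svd(M_shrunk, compute_uv=False))  # each value reduced by <= tau
```

Because small singular values are set exactly to zero, the operator promotes low-rank solutions, which is the nuclear-norm half of the elastic-net combination.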

10. Dynamic analysis of high speed gears by using loaded static transmission error

Özgüven, H. Nevzat; Houser, D. R.

1988-08-01

A single degree of freedom non-linear model is used for the dynamic analysis of a gear pair. Two methods are suggested and a computer program is developed for calculating the dynamic mesh and tooth forces, dynamic factors based on stresses, and dynamic transmission error from measured or calculated loaded static transmission errors. The analysis includes the effects of variable mesh stiffness and mesh damping, gear errors (pitch, profile and runout errors), profile modifications and backlash. The accuracy of the first method, which includes the time variation of both mesh stiffness and damping, is demonstrated with numerical examples. In the second method, which is an approximate one, the time average of the mesh stiffness is used. However, the formulation used in the approximate analysis allows for the inclusion of the excitation effect of the variable mesh stiffness. It is concluded from the comparison of the results of the two methods that the displacement excitation resulting from a variable mesh stiffness is more important than the change in system natural frequency resulting from the mesh stiffness variation. Although the theory presented is general and applicable to spur, helical and spiral bevel gears, the computer program prepared is only for spur gears.

11. Pilot error and its relationship with higher organizational levels: HFACS analysis of 523 accidents.

PubMed

Li, Wen-Chin; Harris, Don

2006-10-01

Based on Reason's model of human error, the Human Factors Analysis and Classification System (HFACS) was developed as an analytical framework for the investigation of the role of human error in aviation accidents. However, there is little empirical work that formally describes numerically the relationship between the levels and components in the model (the organizational structures, psychological precursors of errors, and actual errors). This research analyzed 523 accidents in the Republic of China (ROC) Air Force between 1978 and 2002 through the application of the HFACS framework. The results revealed several key relationships between errors at the operational level and organizational inadequacies at both the immediately adjacent level (preconditions for unsafe acts) and higher levels in the organization (unsafe supervision and organizational influences). This research lends support to Reason's model that suggests that active failures are promoted by latent conditions in the organization. Fallible decisions in upper command levels were found to directly affect supervisory practices, thereby creating preconditions for unsafe acts, and hence indirectly impaired performance of pilots, leading to accidents. The HFACS framework was proven to be a useful tool for guiding accident investigations and developing accident prevention strategies.

12. Jason-2 systematic error analysis in the GPS derived orbits

Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

2011-12-01

Several results related to global or regional sea level change still too often rely on the assumption that orbit errors arising from the adoption of station coordinates can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding the geophysical high-frequency variations to the linear ITRF model. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface between the northern and southern hemispheres. The goal of this paper is to specifically study the main source of error, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits has been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially to less than 1 cm RMS versus our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests, including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits.

13. Numerical flow analysis of hydro power stations

Ostermann, Lars; Seidel, Christian

2017-07-01

For the hydraulic engineering and design of hydro power stations and their hydraulic optimisation, mainly experimental studies of the physical submodel or of the full model are carried out at the hydraulics laboratory. In part, the flow analysis is done by means of computational fluid dynamics based on 2D and 3D methods, which is a useful supplement to experimental studies. For the optimisation of hydro power stations, fast numerical methods would be appropriate to study the influence of a wide field of optimisation parameters and flow states. Among the 2D methods, those based on the shallow water equations are especially suitable for this field of application, since a great deal of experience, verified by in-situ measurements, exists owing to the wide use of this method for problems in hydraulic engineering. Where necessary, a 3D model may subsequently supplement the optimisation of the hydro power station. The quality of the results of the 2D method for the optimisation of hydro power plants is investigated by comparing the results of the optimisation of the hydraulic dividing pier with the results of the 3D flow analysis.

14. Convergence and error estimation in free energy calculations using the weighted histogram analysis method.

PubMed

Zhu, Fangqiang; Hummer, Gerhard

2012-02-05

The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this article, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimally allocating the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here, we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations.
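For reference, the traditional fixed-point iteration that the abstract says is slow to converge can be sketched for histogrammed (discrete) data as follows; this is a generic textbook form, not the authors' code:

```python
import numpy as np

def wham(hists, counts, bias):
    """Discrete WHAM by fixed-point iteration.
    hists:  (S, B) histogram counts for S windows over B bins
    counts: (S,)   total samples per window
    bias:   (S, B) reduced bias energies beta*U_i(x_j)
    Returns the unbiased bin probabilities p (B,) and the
    window free energies f (S,)."""
    c = np.exp(-bias)                              # bias Boltzmann factors
    f = np.zeros(hists.shape[0])
    for _ in range(5000):
        # p_j = sum_i H_i(j) / sum_i N_i exp(f_i) c_ij
        denom = (counts * np.exp(f))[:, None] * c
        p = hists.sum(axis=0) / denom.sum(axis=0)
        p /= p.sum()
        f_new = -np.log(c @ p)                     # exp(-f_i) = sum_j c_ij p_j
        if np.max(np.abs(f_new - f)) < 1e-10:
            f = f_new
            break
        f = f_new
    return p, f
```

With harmonic umbrella windows, the converged `p` gives the unbiased distribution and `-log(p)` the free energy profile; the poor convergence of exactly this scheme is what motivates the superlinear likelihood maximization described in the abstract.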

15. Convergence and error estimation in free energy calculations using the weighted histogram analysis method

PubMed Central

Zhu, Fangqiang; Hummer, Gerhard

2012-01-01

The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this paper, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimal allocation of the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354

16. Error analysis of satellite attitude determination using a vision-based approach

Carozza, Ludovico; Bevilacqua, Alessandro

2013-09-01

Improvements in communication and processing technologies have opened the doors to exploiting on-board cameras to compute an object's spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, with reference to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through purpose-built simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at the proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Moreover, the approach we present is of general interest for all related application domains that require an accurate estimation of three-dimensional orientation parameters (i.e., robotics, airborne stabilization).

17. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

SciTech Connect

Lon N. Haney; David I. Gertman

2003-04-01

Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, is offered as a means to help direct useful data collection strategies.

18. Error analysis and method of calibration for linear time grating displacement sensor

Gao, Zhonghua; Zheng, Fangyan; Chen, Xihou; Chen, Ziran; Peng, Donglin

2013-01-01

A combination method for calibrating the errors of a linear time grating displacement sensor is presented. Based on further analysis of the time grating, periodic errors, Abbe errors and thermal expansion errors are integrated to obtain an error curve for setting up an error model, which is adopted to compensate the errors using Fourier harmonic analysis and the principle of linear expansion, respectively. Results prove that this method solves the difficult issue of error separation in linear measurement, and significantly improves the accuracy of the linear time grating. Furthermore, this method also enables continuous automatic sampling with a computer, so that the calibration efficiency is greatly enhanced.
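The periodic component of such an error curve can be modeled and compensated with a truncated Fourier series fitted by least squares; a minimal sketch under assumed names and a known error period (not the authors' implementation):

```python
import numpy as np

def fourier_error_model(pos, err, period, n_harmonics=3):
    """Fit the periodic component of a displacement-sensor error curve
    with a truncated Fourier series (linear least squares), returning a
    callable correction model."""
    w = 2 * np.pi / period
    cols = [np.ones_like(pos)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * pos), np.sin(k * w * pos)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, err, rcond=None)
    return lambda x: np.column_stack(
        [np.ones_like(x)] + [f(k * w * x)
                             for k in range(1, n_harmonics + 1)
                             for f in (np.cos, np.sin)]).dot(coef)
```

Subtracting `model(x)` from the raw readings removes the fitted periodic error; Abbe and thermal expansion terms would need separate, non-periodic corrections.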

19. Medication errors in pediatric nursing: assessment of nurses' knowledge and analysis of the consequences of errors.

PubMed

Lan, Ya-Hui; Wang, Kai-Wei K; Yu, Shu; Chen, I-Ju; Wu, Hsiang-Feng; Tang, Fu-In

2014-05-01

The purposes of this study were (i) to evaluate pediatric nurses' knowledge of pharmacology, and (ii) to analyze known pediatric administration errors. Medication errors occur frequently and ubiquitously, but medication errors involving pediatric patients attract special attention for their high incidence and injury rates. A cross-sectional study was conducted. A questionnaire with 20 true-false questions regarding pharmacology was used to evaluate nurses' knowledge, and the known pediatric administration errors were reported by nurses. The overall correct answer rate on the knowledge of pharmacology was 72.9% (n=262). Insufficient knowledge (61.5%) was the leading obstacle nurses encountered when administering medications. Of 141 pediatric medication errors, more than 60% (61.0%) were wrong doses, and 9.2% of the children involved suffered serious consequences. Evidence-based results demonstrate that pediatric nurses have insufficient knowledge of pharmacology. Strategies such as providing continuing education and double-checking dosages are suggested.

20. Linearised and non-linearised isotherm models optimization analysis by error functions and statistical means.

PubMed

Subramanyam, Busetty; Das, Ashutosh

2014-01-01

In adsorption studies, describing the sorption process and identifying the best-fitting isotherm model are key analyses for investigating the theoretical hypothesis. Hence, numerous statistical analyses have been used extensively to compare experimental equilibrium adsorption values with predicted equilibrium values. In the present study, the following statistical analyses were carried out to evaluate the fitness of adsorption isotherm models: the Pearson correlation, the coefficient of determination, and the Chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for linearised and non-linearised models. The adsorption of phenol onto natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2°C. To obtain a holistic view of the analysis when estimating the isotherm parameters, linear and non-linear isotherm models were compared. The above-mentioned error and statistical functions were then used to determine the best-fitting isotherm.
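Error functions of the kind used here can be computed directly from observed and model-predicted equilibrium uptakes; a minimal sketch (function and variable names are assumptions):

```python
import numpy as np

def isotherm_error_functions(q_obs, q_pred):
    """Common error functions used to rank isotherm model fits:
    sum of squared errors, Chi-square, coefficient of determination,
    and Pearson correlation."""
    resid = q_obs - q_pred
    sse = np.sum(resid ** 2)                            # sum of squared errors
    chi2 = np.sum(resid ** 2 / q_pred)                  # Chi-square statistic
    r2 = 1 - sse / np.sum((q_obs - q_obs.mean()) ** 2)  # coeff. of determination
    r = np.corrcoef(q_obs, q_pred)[0, 1]                # Pearson correlation
    return {"SSE": sse, "chi2": chi2, "R2": r2, "pearson": r}
```

Lower SSE and Chi-square together with higher R² and Pearson correlation indicate a better-fitting isotherm, which is how linearised and non-linearised fits can be ranked against each other.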

1. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

NASA Technical Reports Server (NTRS)

Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

1998-01-01

We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including model skill assessment and objective analysis. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets, to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.

2. Error Analysis for the Airborne Direct Georeferencing Technique

Elsharkawy, Ahmed S.; Habib, Ayman F.

2016-10-01

Direct georeferencing was shown to be an important alternative to standard indirect image orientation using classical or GPS-supported aerial triangulation. Since direct georeferencing without ground control relies on an extrapolation process only, particular focus has to be laid on the overall system calibration procedure. The accuracy performance of integrated GPS/inertial systems for direct georeferencing in airborne photogrammetric environments has been tested extensively in recent years. In this approach, the limiting factor is a correct overall system calibration including the GPS/inertial component as well as the imaging sensor itself; remaining errors in the system calibration will significantly decrease the quality of object point determination. This research paper presents an error analysis for the airborne direct georeferencing technique, where integrated GPS/IMU positioning and navigation systems are used in conjunction with aerial cameras for airborne mapping, compared with GPS/INS-supported aerial triangulation (AT), through the implementation of a certain amount of error on the EOP and boresight parameters and a study of the effect of these errors on the final ground coordinates. The data set is a block of 32 images distributed over six flight lines; the interior orientation parameters (IOP) are known through a careful camera calibration procedure, and 37 ground control points are known through a terrestrial surveying procedure. The exact location of the camera station at the time of exposure, i.e. the exterior orientation parameters (EOP), is known through the GPS/INS integration process. The preliminary results show that, first, DG and GPS-supported AT have similar accuracy and, compared with the conventional aerial photography method, the two technologies reduce the dependence on ground control (used only for quality control purposes). Second, in DG, correcting the overall system calibration, including the GPS/inertial component as well as the imaging sensor itself

3. The Communication Link and Error ANalysis (CLEAN) simulator

NASA Technical Reports Server (NTRS)

Ebel, William J.; Ingels, Frank M.; Crowe, Shane

1993-01-01

During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) soft decision Viterbi decoding; (2) node synchronization for the soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. RFI with several duty cycles exists on the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
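Channels with memory of the kind the Markov chain programs simulate are often modeled with a two-state Gilbert-Elliott chain; below is a generic sketch with assumed transition probabilities and error rates, not the CLEAN implementation:

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2,
                    e_good=1e-4, e_bad=0.2, seed=1):
    """Two-state Markov (Gilbert-Elliott) burst-error channel:
    a 'good' and a 'bad' state with different bit-error rates.
    p_gb: P(good -> bad), p_bg: P(bad -> good).
    Returns a list of booleans marking bit errors."""
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        if state_bad:
            state_bad = rng.random() >= p_bg  # stay bad unless it recovers
        else:
            state_bad = rng.random() < p_gb   # fall into the bad state
        ber = e_bad if state_bad else e_good
        errors.append(rng.random() < ber)
    return errors
```

The bad state produces error bursts whose lengths are geometrically distributed with mean 1/p_bg, giving the channel the memory that a memoryless (binary symmetric) model cannot capture.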

4. Numerical Flow Analysis of Planing Boats

Brucker, Kyle; O'Shea, Thomas; Dommermuth, Douglas; Fu, Thomas

2012-11-01

The focus of this presentation is to describe the recent effort to validate the computer code Numerical Flow Analysis (NFA) for the prediction of hydrodynamic forces and moments associated with deep-V planing craft. This detailed validation effort was composed of two parts. The first part focuses on assessing NFA's ability to predict pressures on the surface of a 10 degree deadrise wedge during impact with an undisturbed free surface. Detailed comparisons to pressure gauges are presented for two different drop heights, 6 inches and 10 inches. Results show NFA accurately predicted pressures during the slamming event. The second part of the validation study focused on assessing how well NFA was able to accurately model the complex multiphase flow associated with high Froude number flows, specifically the formation of the spray sheet. NFA simulations of a planing hull fixed at various angles of roll (0 degrees, 10 degrees, 20 degrees, and 30 degrees) were compared to experiments from Judge (2012). Comparisons to underwater photographs illustrate NFA's ability to model the formation of the spray sheet and the free surface turbulence associated with planing boat hydrodynamics.

5. A Study of Constant Errors in Subtraction and in the Application of Selected Principles of the Decimal Numeration System Made by Third and Fourth Grade Students.

ERIC Educational Resources Information Center

Smith, Charles Winston, Jr.

Reported are the results of a study to determine if specific errors in subtraction occur when students demonstrate ability to apply selected decimal numeration system principles. A secondary purpose was to examine and compare errors made by various subsets of the sample population characterized by grade level, arithmetic achievement, mental…

6. Rasch Analysis of the Student Refractive Error and Eyeglass Questionnaire

PubMed Central

Crescioni, Mabel; Messer, Dawn H.; Warholak, Terri L.; Miller, Joseph M.; Twelker, J. Daniel; Harvey, Erin M.

2014-01-01

Purpose: To evaluate and refine a newly developed instrument, the Student Refractive Error and Eyeglasses Questionnaire (SREEQ), designed to measure the impact of uncorrected and corrected refractive error on vision-related quality of life (VRQoL) in school-aged children. Methods: A 38-statement instrument consisting of two parts was developed: Part A relates to perceptions regarding uncorrected vision and Part B relates to perceptions regarding corrected vision and includes other statements regarding VRQoL with spectacle correction. The SREEQ was administered to 200 Native American 6th through 12th grade students known to have previously worn and who currently require eyeglasses. Rasch analysis was conducted to evaluate the functioning of the SREEQ. Statements on Part A and Part B were analyzed to examine the dimensionality and constructs of the questionnaire, how well the items functioned, and the appropriateness of the response scale used. Results: Rasch analysis suggested two items be eliminated and the measurement scale for matching items be reduced from a 4-point response scale to a 3-point response scale. With these modifications, categorical data were converted to interval level data to conduct an item and person analysis. A shortened version of the SREEQ was constructed with these modifications, the SREEQ-R, which included the statements that were able to capture changes in VRQoL associated with spectacle wear for those with significant refractive error in our study population. Conclusions: While the SREEQ Part B appears to have less than optimal reliability for assessing the impact of spectacle correction on VRQoL in our student population, it is also able to detect statistically significant differences from pretest to posttest on both the group and individual levels, showing that the instrument can assess the impact that glasses have on VRQoL. Further modifications to the questionnaire, such as those included in the SREEQ-R, could enhance its functionality.

7. Nonclassicality thresholds for multiqubit states: Numerical analysis

SciTech Connect

Gruca, Jacek; Zukowski, Marek; Laskowski, Wieslaw; Kiesel, Nikolai; Wieczorek, Witlef; Weinfurter, Harald; Schmid, Christian

2010-07-15

States that strongly violate Bell's inequalities are required in many quantum-informational protocols as, for example, in cryptography, secret sharing, and the reduction of communication complexity. We investigate families of such states with a numerical method which allows us to reveal nonclassicality even without direct knowledge of Bell's inequalities for the given problem. An extensive set of numerical results is presented and discussed.

8. Finite element and wavefront error analysis of the primary mirror of an experimental telescope with reverse engineering

Huang, Bo-Kai; Huang, Po-Hsuan

2016-09-01

This paper presents the finite element and wavefront error analysis, with reverse engineering, of the primary mirror of a small space telescope experimental model. The experimental space telescope with a 280 mm diameter primary mirror was assembled and aligned in 2011, but the measured system optical performance and wavefront error did not achieve the goal. In order to find the root causes, static structural finite element analysis (FEA) was applied to the structural model of the primary mirror assembly. Several effects which may cause deformation of the primary mirror were proposed, such as the gravity effect, the flexure bonding effect, and the thermal expansion effect. For each assumed effect, we establish a corresponding model and boundary condition setup, and the numerical model is analyzed by finite element method (FEM) software and opto-mechanical analysis software to obtain the numerical wavefront error and Zernike polynomials. A new assumption concerning the flexure bonding effect is proposed, and we adopt reverse engineering to verify this effect. Finally, the numerically synthesized system wavefront error is compared with the measured system wavefront error of the telescope. By analyzing and understanding these deformation effects of the primary mirror, the opto-mechanical design and telescope assembly workmanship can be refined, improving the telescope's optical performance.

9. Analysis of Random Segment Errors on Coronagraph Performance

NASA Technical Reports Server (NTRS)

Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou

2016-01-01

At 2015 SPIE O&P we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order Sinc²(X) coronagraph is 10 times more sensitive to random segment piston than to random tip/tilt; apertures with fewer segments (i.e., 1 ring) or very many segments (more than 16 rings) have less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised finding: piston is only 2.5 times more sensitive than tip/tilt.

10. Pinpointing error analysis of metal detectors under field conditions

Takahashi, Kazunori; Preetz, Holger

2012-06-01

Metal detectors are used not only to detect but also to locate targets. The location performance has previously been evaluated only in the laboratory, and it probably differs under field conditions. In this paper, an evaluation of the location performance based on an analysis of pinpointing error is discussed. The data for the evaluation were collected in a blind test in the field; the analyzed performance can therefore be regarded as the performance under field conditions. Further, the performance is discussed in relation to the search head and footprint dimensions.

11. Analysis of ionospheric refraction error corrections for GRARR systems

NASA Technical Reports Server (NTRS)

Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

1971-01-01

A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.

12. Asymptotic analysis of Bayesian generalization error with Newton diagram.

PubMed

Yamazaki, Keisuke; Aoyagi, Miki; Watanabe, Sumio

2010-01-01

Statistical learning machines that have singularities in the parameter space, such as hidden Markov models, Bayesian networks, and neural networks, are widely used in the field of information engineering. Singularities in the parameter space determine the accuracy of estimation in the Bayesian scenario. The Newton diagram in algebraic geometry is recognized as an effective method by which to investigate a singularity. The present paper proposes a new technique to plug the diagram in the Bayesian analysis. The proposed technique allows the generalization error to be clarified and provides a foundation for an efficient model selection. We apply the proposed technique to mixtures of binomial distributions.

13. Refractive error and the reading process: a literature analysis.

PubMed

Grisham, J D; Simons, H D

1986-01-01

The literature analysis of refractive error and reading performance includes only those studies which adhere to the rudiments of scientific investigation. The relative strengths and weaknesses of each study are described and conclusions are drawn where possible. Hyperopia and anisometropia appear to be related to poor reading progress and their correction seems to result in improved performance. Reduced distance visual acuity and myopia are not generally associated with reading difficulties. There is little evidence relating astigmatism and reading, but studies have not been adequately designed to draw conclusions. Implications for school vision screening are discussed.

14. Bootstrap Standard Error Estimates in Dynamic Factor Analysis.

PubMed

Zhang, Guangjian; Browne, Michael W

2010-05-28

Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the interdependence of successive observations. Bootstrap methods can fill this need, however. The standard bootstrap of individual timepoints is not appropriate because it destroys their order in time and consequently gives incorrect standard error estimates. Two bootstrap procedures that are appropriate for dynamic factor analysis are described. The moving block bootstrap breaks down the original time series into blocks and draws samples of blocks instead of individual timepoints. A parametric bootstrap is essentially a Monte Carlo study in which the population parameters are taken to be estimates obtained from the available sample. These bootstrap procedures are demonstrated using 103 days of affective mood self-ratings from a pregnant woman, 90 days of personality self-ratings from a psychology freshman, and a simulation study.
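The moving block bootstrap described in this record can be sketched in a few lines. The illustration below uses a synthetic AR(1) series, an arbitrary block length, and the lag-1 autocorrelation as the statistic of interest; these are made-up choices for the example, not the paper's dynamic factor model or data:

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng=None):
    """Resample a time series by drawing overlapping blocks with replacement,
    preserving short-range temporal dependence within each block."""
    rng = np.random.default_rng(rng)
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

# Bootstrap standard error of the lag-1 autocorrelation of an AR(1) series
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.6 * x[t - 1] + rng.normal()

def lag1_autocorr(s):
    s = s - s.mean()
    return np.dot(s[:-1], s[1:]) / np.dot(s, s)

estimates = [lag1_autocorr(moving_block_bootstrap(x, block_len=20, rng=i))
             for i in range(500)]
print(np.std(estimates))   # bootstrap standard error of the statistic
```

A naive bootstrap of individual timepoints would destroy the serial dependence that the statistic measures; resampling whole blocks is what makes the standard error estimate meaningful here.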

15. Analysis of Random Errors in Horizontal Sextant Angles

DTIC Science & Technology

1980-09-01

Excerpts from the report's front matter: Appendix A, Three-Point Fix Positioning Accuracy; Appendix B, Theodolite Intersection Position Error; Appendix C, Data Set Statistics. Listed tables include theodolite positioning errors at three locations, maximum errors in angular best estimates at three locations due to theodolite positioning errors, and Cruise I and Cruise II data.

16. Analysis of Solar Two heliostat tracking error sources

SciTech Connect

Stone, K.W.; Jones, S.A.

1999-07-01

This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

17. Analysis of Solar Two Heliostat Tracking Error Sources

SciTech Connect

Jones, S.A.; Stone, K.W.

1999-01-28

This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

18. Survey of Available Systems for Identifying Systematic Errors in Numerical Model Weather Forecasts.

DTIC Science & Technology

1981-02-01

…derives a large group of environmental products. Due to the complexity of the techniques and methodology of these models and the atmosphere they attempt to simulate… Figure 2.1: a two-staged verification system combining spectral analysis and detailed diagnostic comparison.

19. Elimination of Gibbs' phenomena from error analysis of finite element results

NASA Technical Reports Server (NTRS)

Thurston, Gaylen A.; Sistla, Rajaram

1990-01-01

This paper is one of a series on error analysis and correction of finite element solutions for plates and shells. The error analysis in the earlier papers used half-range double Fourier sine series for numerical harmonic analysis. The half-range formulas are simple to apply, but they can be inaccurate near the ends of the ranges of the independent variables. The Gibbs' phenomenon exhibited by half-range sine series in one independent variable has a two-dimensional analog; a classic example is the Navier solution in a double half-range sine series for the simply supported plate under a uniform load. A simple change of variables is introduced in the paper to improve the accuracy of the double Fourier sine series without adding complexity to the numerical analysis. The change of variables is applied to the problem of approximating a transverse load that is tabulated on a rectangular grid. A solution based on the change of variables is compared with results from the Navier solution for the simply supported plate problem and finite element results for the same problem.

20. Numerical Predictions of Static-Pressure-Error Corrections for a Modified T-38C Aircraft

DTIC Science & Technology

2014-12-15

…Inc. [18], employed in related USAF aeroelastic-simulation research [19–22]. The outer mold line of the standard T-38C aircraft is modified with the… Cited: "Aeroelastic Dynamics Analysis of a Full F-16 Configuration for Various Flight Conditions," AIAA Journal, Vol. 41, No. 3, 2003, pp. 363–371.

1. An analysis of spacecraft data time tagging errors

NASA Technical Reports Server (NTRS)

Fang, A. C.

1975-01-01

An in-depth examination of the timing and telemetry in just one spacecraft points out the genesis of various types of timing errors and serves as a guide for the design of future timing/telemetry systems. The principal sources of timing errors are examined carefully and are described in detail. Estimates of these errors are also made and presented. It is found that the timing errors within the telemetry system are larger than the total timing errors resulting from all other sources.

2. Performance analysis of a concatenated coding scheme for error control

NASA Technical Reports Server (NTRS)

Costello, D. J., Jr.; Lin, S.; Kasami, T.

1983-01-01

A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
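A toy sketch of the concatenated error-control idea may help. Here the inner code is a hypothetical repetition-3 code (corrects single flips by majority vote) and the outer code is a single parity bit used only for detection, with retransmission on a failed check; these stand-in codes are illustrative, not the codes analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def send_block(bits, p):
    """Inner code: repetition-3 over a binary symmetric channel with
    crossover probability p, decoded by majority vote (corrects any
    single flip within a triple)."""
    coded = np.repeat(bits, 3)
    flips = rng.random(coded.size) < p
    received = coded ^ flips
    return (received.reshape(-1, 3).sum(axis=1) >= 2).astype(int)

def transmit_arq(bits, p, max_tries=10):
    """Outer code: a single parity bit used only for *detection*; a
    failed check requests retransmission of the whole block."""
    parity = int(bits.sum() % 2)
    frame = np.append(bits, parity)
    for _ in range(max_tries):
        decoded = send_block(frame, p)
        if decoded[:-1].sum() % 2 == decoded[-1]:   # outer check passes
            break
    return decoded[:-1]

data = rng.integers(0, 2, size=32)
out = transmit_arq(data, p=0.05)
print(np.mean(out != data))   # residual error rate after ARQ
```

An undetected error in this toy scheme requires the inner decoder to leave an even number of residual bit errors, which is exactly the event the paper's bound on the probability of undetected error addresses.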

3. Error analysis for a sinh transformation used in evaluating nearly singular boundary element integrals

Elliott, David; Johnston, Peter R.

2007-06-01

In the two-dimensional boundary element method, one often needs to evaluate numerically integrals in which j² is a quadratic, g is a polynomial and f is a rational, logarithmic or algebraic function with a singularity at zero. The constants a and b are such that −1 ≤ a ≤ 1 and 0 < b ≤ 1, so that the singularity of f lies close to the interval of integration and standard Gauss-Legendre quadrature incurs large truncation errors. By making the transformation x = a + b sinh(μu − η), where the constants μ and η are chosen so that the interval of integration is again [−1, 1], it is found that the truncation errors arising, when the same Gauss-Legendre quadrature is applied to the transformed integral, are much reduced. The asymptotic error analysis for Gauss-Legendre quadrature, as given by Donaldson and Elliott [A unified approach to quadrature rules with asymptotic estimates of their remainders, SIAM J. Numer. Anal. 9 (1972) 573-602], is then used to explain this phenomenon and justify the transformation.
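The sinh transformation itself is easy to demonstrate numerically. The sketch below uses an illustrative nearly singular integrand with a peak of width b at x = a (the test function and parameter values are chosen for the example, not taken from the paper) and compares plain and transformed Gauss-Legendre quadrature:

```python
import numpy as np

def sinh_transformed_quadrature(f, a, b, n=20):
    """Gauss-Legendre quadrature of f over [-1, 1] after the sinh
    transformation x = a + b*sinh(mu*u - eta), which clusters nodes
    near the nearly singular point x = a (at distance b)."""
    mu = 0.5 * (np.arcsinh((1 - a) / b) + np.arcsinh((1 + a) / b))
    eta = 0.5 * (np.arcsinh((1 + a) / b) - np.arcsinh((1 - a) / b))
    u, w = np.polynomial.legendre.leggauss(n)
    x = a + b * np.sinh(mu * u - eta)
    dxdu = b * mu * np.cosh(mu * u - eta)   # Jacobian of the transformation
    return np.sum(w * f(x) * dxdu)

# Nearly singular integrand: Lorentzian peak of width b at x = a
a_, b_ = 0.5, 0.01
f = lambda x: 1.0 / ((x - a_) ** 2 + b_ ** 2)
exact = (np.arctan((1 - a_) / b_) - np.arctan((-1 - a_) / b_)) / b_

u, w = np.polynomial.legendre.leggauss(20)
plain = np.sum(w * f(u))                     # untransformed 20-point rule
transformed = sinh_transformed_quadrature(f, a_, b_, n=20)
print(abs(plain - exact) / exact, abs(transformed - exact) / exact)
```

The constants μ and η follow from requiring x(±1) = ±1, i.e. μ ∓ η = asinh((1 ∓ a)/b); after the change of variables the integrand becomes smooth on [−1, 1], which is why the same rule is dramatically more accurate.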

4. Close-range radar rainfall estimation and error analysis

van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

2016-08-01

5. SIRTF Focal Plane Survey: A Pre-flight Error Analysis

NASA Technical Reports Server (NTRS)

Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.

2003-01-01

This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.

6. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

Baptista, A. J.; Alves, J. L.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.

2007-05-01

In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, are evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping (IVR) method, which was implemented in the in-house code DD3TRIM. The IVR method rests on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of the Gauss volume of each donor element that is located inside the target Gauss volume. The intersecting volumes between the donor and target Gauss volumes are computed incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is assessed with two tests. The first test, taken from the literature, consists of remapping a rotating symmetrical mesh successively, throughout N increments, over an angular span of 90°. The second test consists of remapping an irregular-element-shape target mesh from a regular-element-shape donor mesh and then performing the inverse operation; in this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low, with a stable evolution over the number of remapping procedures, when compared with the other

7. Field Error Analysis and a Correction Scheme for the KSTAR device

You, K.-I.; Lee, D. K.; Jhang, Hogun; Lee, G.-S.; Kwon, K. H.

2000-10-01

Non-axisymmetric error fields can lead to tokamak plasma performance degradation and ultimately premature plasma disruption, if some error field components are larger than threshold values. The major sources of the field error include the unavoidable winding irregularities of the poloidal field coils during manufacturing, poloidal field and toroidal field coils misalignments during installation, stray fields from bus and lead wires between coils and power supplies, and welded joints of the vacuum vessel. Numerical simulation results are presented for Fourier harmonics of the error field obtained on the (m,n) = (2,1) resonant flux surface with a coil current set for the reference equilibrium configuration. Field error contributions are considered separately for all major error sources. An error correction scheme designed to reduce key components of the total net error field is also discussed in relation to the field error correction coils inside the vacuum vessel.

8. Error analysis for earth orientation recovery from GPS data

NASA Technical Reports Server (NTRS)

Zelensky, N.; Ray, J.; Liebrecht, P.

1990-01-01

The use of GPS navigation satellites to study earth-orientation parameters in real-time is examined analytically with simulations of network geometries. The Orbit Analysis covariance-analysis program is employed to simulate the block-II constellation of 18 GPS satellites, and attention is given to the budget for tracking errors. Simultaneous solutions are derived for earth orientation given specific satellite orbits, ground clocks, and station positions with tropospheric scaling at each station. Media effects and measurement noise are found to be the main causes of uncertainty in earth-orientation determination. A program similar to the Polaris network using single-difference carrier-phase observations can provide earth-orientation parameters with accuracies similar to those for the VLBI program. The GPS concept offers faster data turnaround and lower costs in addition to more accurate determinations of UT1 and pole position.

9. Soft X Ray Telescope (SXT) focus error analysis

NASA Technical Reports Server (NTRS)

1991-01-01

The analysis performed on the soft x-ray telescope (SXT) to determine the correct thickness of the spacer to position the CCD camera at the best focus of the telescope, and to determine the maximum uncertainty in this focus position due to a number of metrology and experimental errors and thermal and humidity effects, is presented. This type of analysis has been performed by the SXT prime contractor, Lockheed Palo Alto Research Lab (LPARL). The SXT project office at MSFC formed an independent team of experts to review the LPARL work and verify the analysis performed by them. Based on the recommendation of this team, the project office will decide whether an end-to-end focus test is required for the SXT prior to launch. The metrology and experimental data, and the spreadsheets provided by LPARL, are used as the basis of the analysis presented. The data entries in these spreadsheets have been verified as far as feasible, and the format of the spreadsheets has been improved to make them easier to understand. The results obtained from this analysis are very close to the results obtained by LPARL. However, due to the lack of organized documentation, the analysis uncovered a few areas of possibly erroneous metrology data, which may affect the results obtained by this analytical approach.

10. Analysis of infusion pump error logs and their significance for health care.

PubMed

Lee, Paul T; Thompson, Frankle; Thimbleby, Harold

Infusion therapy is one of the largest practised therapies in any healthcare organisation, and infusion pumps are used to deliver millions of infusions every year in the NHS. The aircraft industry downloads information from 'black boxes' to help design better systems and reduce risk; however, the same cannot be said about error logs and data logs from infusion pumps. This study downloaded and analysed approximately 360 000 hours of infusion pump error logs from 131 infusion pumps used for up to 2 years in one large acute hospital. Staff had to manage 260 129 alarms; this accounted for approximately 5% of total infusion time, costing about £1000 per pump per year. This paper describes many such insights, including numerous technical errors, propensity for certain alarms in clinical conditions, logistical issues and how infrastructure problems can lead to an increase in alarm conditions. Routine use of error log analysis, combined with appropriate management of pumps to help identify improved device design, use and application is recommended.

11. Status of NINJA: the Numerical INJection Analysis project

Cadonati, Laura; Aylott, Benjamin; Baker, John G.; Boggs, William D.; Boyle, Michael; Brady, Patrick R.; Brown, Duncan A.; Brügmann, Bernd; Buchman, Luisa T.; Buonanno, Alessandra; Camp, Jordan; Campanelli, Manuela; Centrella, Joan; Chatterji, Shourov; Christensen, Nelson; Chu, Tony; Diener, Peter; Dorband, Nils; Etienne, Zachariah B.; Faber, Joshua; Fairhurst, Stephen; Farr, Benjamin; Fischetti, Sebastian; Guidi, Gianluca; Goggin, Lisa M.; Hannam, Mark; Herrmann, Frank; Hinder, Ian; Husa, Sascha; Kalogera, Vicky; Keppel, Drew; Kidder, Lawrence E.; Kelly, Bernard J.; Krishnan, Badri; Laguna, Pablo; Lousto, Carlos O.; Mandel, Ilya; Marronetti, Pedro; Matzner, Richard; McWilliams, Sean T.; Matthews, Keith D.; Mercer, R. Adam; Mohapatra, Satyanarayan R. P.; Mroué, Abdul H.; Nakano, Hiroyuki; Ochsner, Evan; Pan, Yi; Pekowsky, Larne; Pfeiffer, Harald P.; Pollney, Denis; Pretorius, Frans; Raymond, Vivien; Reisswig, Christian; Rezzolla, Luciano; Rinne, Oliver; Robinson, Craig; Röver, Christian; Santamaría, Lucía; Sathyaprakash, Bangalore; Scheel, Mark A.; Schnetter, Erik; Seiler, Jennifer; Shapiro, Stuart L.; Shoemaker, Deirdre; Sperhake, Ulrich; Stroeer, Alexander; Sturani, Riccardo; Tichy, Wolfgang; Liu, Yuk Tung; van der Sluys, Marc; van Meter, James R.; Vaulin, Ruslan; Vecchio, Alberto; Veitch, John; Viceré, Andrea; Whelan, John T.; Zlochower, Yosef

2009-06-01

The 2008 NRDA conference introduced the Numerical INJection Analysis project (NINJA), a new collaborative effort between the numerical relativity community and the data analysis community. NINJA focuses on modeling and searching for gravitational wave signatures from the coalescence of binary systems of compact objects. We review the scope of this collaboration and the components of the first NINJA project, in which numerical relativity groups shared waveforms and data analysis teams applied various techniques to detect them when embedded in colored Gaussian noise.

12. Effects of Correlated Errors on the Analysis of Space Geodetic Data

NASA Technical Reports Server (NTRS)

Romero-Wolf, Andres; Jacobs, C. S.

2011-01-01

As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects at higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

14. A Framework for Examining Mathematics Teacher Knowledge as Used in Error Analysis

ERIC Educational Resources Information Center

Peng, Aihui; Luo, Zengru

2009-01-01

Error analysis is a basic and important task for mathematics teachers. Unfortunately, in the present literature there is a lack of detailed understanding about teacher knowledge as used in it. Based on a synthesis of the literature in error analysis, a framework for prescribing and assessing mathematics teacher knowledge in error analysis was…

15. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

2017-08-01

While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis while accounting for the associated uncertainties in the analysis parameters. However, FORM cannot be used directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, in which FORM is applied to an explicit performance function developed from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge at the Sumela Monastery, Turkey. The accuracy with which the developed performance function represents the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of a 24% error in accuracy.
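FORM's central quantity, the reliability index β, is easiest to see for a linear performance function with independent normal variables, where FORM is exact. The sketch below uses illustrative capacity/demand statistics (not the Sumela wedge parameters) and checks the FORM failure probability against crude Monte Carlo simulation:

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Linear limit state g = R - S with independent normal capacity R and
# demand S: FORM gives beta = (muR - muS)/sqrt(sR^2 + sS^2), Pf = Phi(-beta)
muR, sR = 10.0, 1.5    # illustrative capacity statistics
muS, sS = 6.0, 1.0     # illustrative demand statistics

beta = (muR - muS) / sqrt(sR ** 2 + sS ** 2)
pf_form = phi(-beta)

# Crude Monte Carlo reference (what the abstract's MCS comparison does)
rng = np.random.default_rng(0)
n = 1_000_000
R = rng.normal(muR, sR, n)
S = rng.normal(muS, sS, n)
pf_mcs = np.mean(R - S < 0.0)
print(beta, pf_form, pf_mcs)
```

For a nonlinear or implicit limit state, as in numerical slope models, the response surface step replaces g with an explicit fit to simulation results so that β can still be computed this way.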

16. Statistical analysis of modeling error in structural dynamic systems

NASA Technical Reports Server (NTRS)

Hasselman, T. K.; Chrostowski, J. D.

1990-01-01

The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

17. Error analysis of exponential integrators for oscillatory second-order differential equations

Grimm, Volker; Hochbruck, Marlis

2006-05-01

In this paper, we analyse a family of exponential integrators for second-order differential equations in which high-frequency oscillations in the solution are generated by a linear part. Conditions are given which guarantee that the integrators allow second-order error bounds independent of the product of the step size with the frequencies. Our convergence analysis generalizes known results on the mollified impulse method by García-Archilla, Sanz-Serna and Skeel (1998, SIAM J. Sci. Comput. 30 930-63) and on Gautschi-type exponential integrators (Hairer E, Lubich Ch and Wanner G 2002 Geometric Numerical Integration (Berlin: Springer), Hochbruck M and Lubich Ch 1999 Numer. Math. 83 403-26).
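A minimal sketch of one Gautschi-type member of this family, for a scalar problem y'' = −ω²y + g(y) with the filter ψ(hω) = sinc²(hω/2) (one common choice; the paper analyzes a whole family), illustrates the treatment of fast oscillations at step sizes where hω is large:

```python
import numpy as np

def gautschi_step(y_prev, y_curr, h, omega, g):
    """One step of a Gautschi-type two-step exponential integrator for
    y'' = -omega**2 * y + g(y): the linear oscillation is propagated
    exactly via cos(h*omega), and the nonlinearity enters through the
    filter psi(h*omega) = sinc(h*omega/2)**2."""
    half = 0.5 * h * omega
    psi = np.sinc(half / np.pi) ** 2        # np.sinc(x) = sin(pi*x)/(pi*x)
    return 2.0 * np.cos(h * omega) * y_curr - y_prev + h * h * psi * g(y_curr)

# Oscillatory test problem with h*omega = 2.5, far beyond the step-size
# restriction of classical explicit schemes such as Stormer/Verlet
omega, h = 50.0, 0.05
g = lambda y: 0.01 * np.sin(y)              # weak nonlinearity
ys = [1.0, np.cos(h * omega)]               # exact start for y(0)=1, y'(0)=0
for _ in range(400):
    ys.append(gautschi_step(ys[-2], ys[-1], h, omega, g))
print(max(abs(y) for y in ys))              # remains bounded near 1
```

With g ≡ 0 the two-step recurrence reproduces cos(nhω) exactly (up to roundoff) for any hω, which is the discrete analogue of the error bounds independent of the product of step size and frequency.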

18. Analysis of errors occurring in large eddy simulation.

PubMed

Geurts, Bernard J

2009-07-28

We analyse the effect of second- and fourth-order accurate central finite-volume discretizations on the outcome of large eddy simulations of homogeneous, isotropic, decaying turbulence at an initial Taylor-Reynolds number Re(lambda)=100. We determine the implicit filter that is induced by the spatial discretization and show that a higher order discretization also induces a higher order filter, i.e. a low-pass filter that keeps a wider range of flow scales virtually unchanged. The effectiveness of the implicit filtering is correlated with the optimal refinement strategy as observed in an error-landscape analysis based on Smagorinsky's subfilter model. As a point of reference, a finite-volume method that is second-order accurate for both the convective and the viscous fluxes in the Navier-Stokes equations is used. We observe that changing to a fourth-order accurate convective discretization leads to a higher value of the Smagorinsky coefficient C(S) required to achieve minimal total error at given resolution. Conversely, changing only the viscous flux discretization to fourth-order accuracy implies that optimal simulation results are obtained at lower values of C(S). Finally, a fully fourth-order discretization yields an optimal C(S) that is slightly lower than the reference fully second-order method.

19. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows. II; Minimization of ∇·B Numerical Error

NASA Technical Reports Server (NTRS)

Sjoegreen, Bjoern; Yee, H. C.

2003-01-01

The generalization of a class of low-dissipative high order filter finite difference schemes, for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows, to the compressible MHD equations on structured curvilinear grids has been developed. The new scheme consists of a divergence-free-preserving high order spatial base scheme with a filter approach which can be divergence-free preserving, depending on the type of filter operator being used, the method of applying the filter step, and the type of flow problem to be considered. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (∇·B), in the sense that no standard divergence cleaning is required. Performance evaluation of these variants, and the key role that the proper treatment of their corresponding numerical boundary conditions can play, will be illustrated. Many levels of grid refinement and detailed comparisons with several commonly used compressible MHD shock-capturing schemes will be sought. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields of these filter schemes has been achieved.

20. Power vectors: an application of Fourier analysis to the description and statistical analysis of refractive error.

PubMed

Thibos, L N; Wheeler, W; Horner, D

1997-06-01

The description of sphero-cylinder lenses is approached from the viewpoint of Fourier analysis of the power profile. It is shown that the familiar sine-squared law leads naturally to a Fourier series representation with exactly three Fourier coefficients, representing the natural parameters of a thin lens. The constant term corresponds to the mean spherical equivalent (MSE) power, whereas the amplitude and phase of the harmonic correspond to the power and axis of a Jackson cross-cylinder (JCC) lens, respectively. Expressing the Fourier series in rectangular form leads to the representation of an arbitrary sphero-cylinder lens as the sum of a spherical lens and two cross-cylinders, one at axis 0 degree and the other at axis 45 degrees. The power of these three component lenses may be interpreted as (x,y,z) coordinates of a vector representation of the power profile. Advantages of this power vector representation of a sphero-cylinder lens for numerical and graphical analysis of optometric data are described for problems involving lens combinations, comparison of different lenses, and the statistical distribution of refractive errors.
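The power-vector decomposition has a direct computational form: M = S + C/2, J0 = −(C/2)cos 2θ, J45 = −(C/2)sin 2θ. The sketch below applies it to made-up example prescriptions to show why averaging and comparing lenses becomes ordinary vector arithmetic:

```python
import numpy as np

def power_vector(sphere, cyl, axis_deg):
    """Convert a sphero-cylinder prescription (S, C, axis) to the
    (M, J0, J45) power-vector coordinates described in the abstract."""
    theta = np.radians(axis_deg)
    M = sphere + cyl / 2.0                  # mean spherical equivalent
    J0 = -(cyl / 2.0) * np.cos(2 * theta)   # cross-cylinder, axes 0/90
    J45 = -(cyl / 2.0) * np.sin(2 * theta)  # cross-cylinder, axes 45/135
    return np.array([M, J0, J45])

# Averaging two prescriptions is a simple vector mean in (M, J0, J45) space
p1 = power_vector(-2.00, -1.00, 180)
p2 = power_vector(-2.50, -0.50, 90)
mean = (p1 + p2) / 2
blur = np.linalg.norm(mean)   # overall blur strength of the averaged lens
print(mean, blur)
```

Because (M, J0, J45) are rectangular coordinates, statistics such as means and standard deviations of refractive errors are well defined, which is the point of the representation; the same is not true of raw (sphere, cylinder, axis) triples.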

1. Error Analysis in Composition of Iranian Lower Intermediate Students

ERIC Educational Resources Information Center

Taghavi, Mehdi

2012-01-01

Learners make errors during the process of learning languages. This study examines errors in the writing task of twenty Iranian lower-intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

2. Method to control depth error when ablating human dentin with numerically controlled picosecond laser: a preliminary study.

PubMed

Sun, Yuchun; Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Wang, Yong

2015-07-01

A three-axis numerically controlled picosecond laser was used to ablate dentin to investigate the quantitative relationships among the number of additive pulse layers in two-dimensional scans starting from the focal plane, the step size along the normal of the focal plane (focal plane normal), and the ablation depth error. A method to control the ablation depth error, suitable for controlling stepping along the focal plane normal, was preliminarily established. Twenty-four freshly removed mandibular first molars were cut transversely along the long axis of the crown and prepared as 48 tooth sample slices with approximately flat surfaces. Forty-two slices were used in the first section. The picosecond laser was 1,064 nm in wavelength, 3 W in power, and 10 kHz in repetition frequency. For a varying number (n = 5-70) of focal plane additive pulse layers (14 groups, three repetitions each), two-dimensional scanning and ablation were performed on the dentin regions of the tooth sample slices, which were fixed on the focal plane. The ablation depth, d, was measured, and the quantitative function between n and d was established. Six slices were used in the second section. The function was used to calculate and set the timing of stepwise increments, such that the single-step size along the focal plane normal was d micrometers after ablation of n layers (n = 5-50; 10 groups, six repetitions each). Each sample underwent three-dimensional scanning and ablation to produce 2 × 2-mm square cavities. The difference, e, between the measured cavity depth and the theoretical value was calculated, along with the difference, e1, between the measured average ablation depth of a single step along the focal plane normal and the theoretical value. The values of n and d corresponding to the minimum values of e and e1, respectively, were obtained. In two-dimensional ablation, d was largest (720.61 μm) when n = 65 and smallest (45.00 μm) when n = 5. Linear regression yielded the quantitative

3. The Impact of Text Genre on Iranian Intermediate EFL Students' Writing Errors: An Error Analysis Perspective

ERIC Educational Resources Information Center

Moqimipour, Kourosh; Shahrokhi, Mohsen

2015-01-01

The present study aimed at analyzing writing errors caused by the interference of the Persian language, regarded as the first language (L1), in three writing genres, namely narration, description, and comparison/contrast by Iranian EFL students. 65 English paragraphs written by the participants, who were at the intermediate level based on their…

4. Numerical analysis of Swiss roll metamaterials.

PubMed

2009-08-12

A Swiss roll metamaterial is a resonant magnetic medium, with a negative magnetic permeability for a range of frequencies, due to its self-inductance and self-capacitance components. In this paper, we discuss the band structure, S-parameters and effective electromagnetic parameters of Swiss roll metamaterials, with both analytical and numerical results, which show an exceptional convergence.

5. Analysis of personnel error occurrence reports across Defense Program facilities

SciTech Connect

Stock, D.A.; Shurberg, D.A.; OBrien, J.N.

1994-05-01

More than 2,000 reports from the Occurrence Reporting and Processing System (ORPS) database were examined in order to identify weaknesses in the implementation of the guidance for the Conduct of Operations (DOE Order 5480.19) at Defense Program (DP) facilities. The analysis revealed recurrent problems involving procedures, training of employees, the occurrence of accidents, planning and scheduling of daily operations, and communications. Changes to DOE 5480.19 and modifications of the Occurrence Reporting and Processing System are recommended to reduce the frequency of these problems. The primary tool used in this analysis was a coding scheme based on the guidelines in 5480.19, which was used to classify the textual content of occurrence reports. The occurrence reports selected for analysis came from across all DP facilities, and listed personnel error as a cause of the event. A number of additional reports, specifically from the Plutonium Processing and Handling Facility (TA55), and the Chemistry and Metallurgy Research Facility (CMR), at Los Alamos National Laboratory, were analyzed separately as a case study. In total, 2070 occurrence reports were examined for this analysis. A number of core issues were consistently found in all analyses conducted, and all subsets of data examined. When individual DP sites were analyzed, including some sites which have since been transferred, only minor variations were found in the importance of these core issues. The same issues also appeared in different time periods, in different types of reports, and at the two Los Alamos facilities selected for the case study.

6. Kitchen Physics: Lessons in Fluid Pressure and Error Analysis

Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano

2017-02-01

Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four liquids using a waterproof case and a smartphone barometer in a container, sink, or tub. The second experiment determines the relationship between pressure and temperature of an ideal gas in a constant volume container placed momentarily in a refrigerator freezer. These experiences provide a ripe opportunity both for learning fundamental physics concepts and for investigating a variety of error analysis techniques that are frequently overlooked in introductory physics courses.
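The density measurement in the first experiment rests on the hydrostatic relation P = P0 + ρgh, so the density follows from the pressure difference between two depths. A minimal sketch (the readings below are illustrative, not data from the paper):

```python
G = 9.81  # gravitational acceleration, m/s^2

def liquid_density(p1_pa, depth1_m, p2_pa, depth2_m):
    """Density from two submerged pressure readings via the
    hydrostatic relation P = P0 + rho*g*h, i.e. rho = dP/(g*dh)."""
    return (p2_pa - p1_pa) / (G * (depth2_m - depth1_m))

# Illustrative readings (ours): water adds roughly 981 Pa of
# pressure per 0.1 m of depth.
rho = liquid_density(102306.0, 0.1, 104268.0, 0.3)  # ~1000 kg/m^3
```

With real smartphone-barometer data, the slope of a linear fit of pressure against depth would replace the two-point difference and average out sensor noise.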

7. Reduction of S-parameter errors using singular spectrum analysis

Ozturk, Turgut; Uluer, Ihsan; Ünal, Ilhami

2016-07-01

A free space measurement method, which consists of two horn antennas, a network analyzer, two frequency extenders, and a sample holder, is used to measure transmission (S21) coefficients in the 75-110 GHz (W-band) frequency range. A singular spectrum analysis method is presented to eliminate the error and noise of raw S21 data after the calibration and measurement processes. The proposed model can be applied easily, removing the need to repeat the calibration process for each sample measurement. Hence, smooth, reliable, and accurate data are obtained to determine the dielectric properties of materials. In addition, the dielectric constant of materials (paper, polyvinylchloride-PVC, Ultralam® 3850HT, and glass) is calculated by thin sheet approximation and Newton-Raphson extraction techniques using the filtered S21 transmission parameter.
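Singular spectrum analysis denoises a series by embedding it in a Hankel trajectory matrix, truncating its SVD, and reconstructing by anti-diagonal averaging. A generic sketch of this filtering step (window and rank choices are assumptions; this is not the authors' implementation):

```python
import numpy as np

def ssa_filter(x, window, rank):
    """Denoise a 1-D series by singular spectrum analysis: embed in a
    Hankel trajectory matrix, truncate the SVD to `rank` components,
    then reconstruct by anti-diagonal (Hankel) averaging."""
    n = len(x)
    k = n - window + 1
    # Trajectory matrix: column j holds the lagged segment x[j:j+window]
    X = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the leading components (the signal subspace)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal averaging back to a 1-D series
    y = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        y[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return y / counts

# Toy check: a smooth signal plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
filtered = ssa_filter(noisy, window=40, rank=2)
```

A single sinusoid occupies two singular components, so rank 2 retains the signal while discarding most of the noise energy.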

9. Error analysis of the residence time of bistable Poisson states obtained by periodic measurements.

PubMed

Lee, Jinwoo; Lyo, In-Whan

2010-06-01

We performed error analysis on periodic measurement schemes used to obtain the residence time of bistable Poisson states. Experimental data were obtained by periodic level-sensitive sampling of oxygen-induced states on Si(111)-7×7 that stochastically switch between two metastable states. Simulated data sequences were created by the Monte Carlo numerical method. The residence times were extracted from the experimental and simulated data sequences by averaging and exponential-fitting methods. The averaging method yields the residence time by summing the detected temporal width of each state, weighted by the normalized frequency of that state; the exponential-fitting method fits a single exponential function to the frequency histogram of the data. It is found that the averaging method produces consistently more accurate results, with no arbitrariness, compared to the exponential-fitting method. For further understanding, data modeling using the first-order approximation was performed; the enhanced accuracy of the averaging method is due to the mutual cancellation of errors associated with the detection of zero-width states and long-tail states. We investigated a multi-interval detection scheme as well. Similar analysis shows that the dual-interval scheme produces a larger error than the single-interval one and has a narrower optimum region.
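The averaging method described above can be sketched for a periodically sampled two-state sequence: each run of identical samples is a detected width, and the residence time is the frequency-weighted mean width. A minimal illustration (function name and toy sequence are ours):

```python
import numpy as np

def residence_time_avg(states):
    """Averaging method: split a periodically sampled two-state
    sequence into runs of constant state; the residence time (in
    sampling intervals) is the sum of detected widths weighted by
    their normalized frequency, i.e. the mean run width."""
    states = np.asarray(states)
    change = np.flatnonzero(np.diff(states)) + 1   # run boundaries
    runs = np.split(states, change)
    widths = np.array([len(r) for r in runs])
    vals, counts = np.unique(widths, return_counts=True)
    freq = counts / counts.sum()        # normalized frequency of each width
    return float((vals * freq).sum())   # frequency-weighted mean width

seq = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0])
rt = residence_time_avg(seq)  # run widths 3, 2, 2, 4, 1 -> mean 2.4
```

The zero-width and long-tail detection errors discussed in the record enter through the run-splitting step, where finite sampling merges or truncates runs.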

10. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

NASA Technical Reports Server (NTRS)

2012-01-01

Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
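Extrapolation from a base grid to an infinite-size grid is commonly done by Richardson extrapolation over a sequence of systematically refined grids; a generic sketch of that idea (not NASA's specific procedure):

```python
import math

def richardson_extrapolate(f_coarse, f_medium, f_fine, r):
    """Estimate the infinite-grid value from solutions on three grids
    with constant refinement ratio r: recover the observed order of
    convergence p from the solution differences, then extrapolate
    past the finest grid."""
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    f_inf = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return f_inf, p

# Toy check: values approaching 1.0 at second order with r = 2
f_inf, p = richardson_extrapolate(1.04, 1.01, 1.0025, r=2.0)
```

The difference between the base-grid value and f_inf then serves as the discretization-error estimate for the reported coefficient.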

11. Analysis of the impact of error detection on computer performance

NASA Technical Reports Server (NTRS)

Shin, K. C.; Lee, Y. H.

1983-01-01

Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and that the error is then detected in a random amount of time after its occurrence. As a remedy for this problem, a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between the occurrence of an error and the moment of its detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to faults and/or errors.
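The role of error latency can be illustrated with a small Monte Carlo sketch in which faults arrive as a Poisson process and each error is detected after an exponentially distributed latency; computation performed between occurrence and detection is counted as lost. All rates and the loss measure are our assumptions for illustration, not the model proposed in the paper:

```python
import random

def simulate_latency_loss(fault_rate, detect_rate, horizon, trials, seed=0):
    """Monte Carlo sketch: faults arrive as a Poisson process with
    rate fault_rate; each resulting error is detected after an
    exponential latency with rate detect_rate. Work done between an
    error's occurrence and its detection (capped at the horizon) is
    counted as lost. Returns the mean lost time per run."""
    rng = random.Random(seed)
    total_loss = 0.0
    for _ in range(trials):
        t = rng.expovariate(fault_rate)             # first fault time
        while t < horizon:
            latency = rng.expovariate(detect_rate)  # error latency
            total_loss += min(latency, horizon - t)
            t += latency + rng.expovariate(fault_rate)
    return total_loss / trials

# Faster detection (shorter latency) should mean less lost computation
loss_fast = simulate_latency_loss(0.1, 1.0, 100.0, trials=2000)
loss_slow = simulate_latency_loss(0.1, 0.1, 100.0, trials=2000)
```

Sweeping detect_rate in such a sketch reproduces the qualitative trade-off the record describes: the shorter the latency, the smaller the window in which latent errors can corrupt results.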

12. A Numerical Model for Atomtronic Circuit Analysis

SciTech Connect

Chow, Weng W.; Straatsma, Cameron J. E.; Anderson, Dana Z.

2015-07-16

A model for studying atomtronic devices and circuits based on finite-temperature Bose-condensed gases is presented. The approach involves numerically solving equations of motion for atomic populations and coherences, derived using the Bose-Hubbard Hamiltonian and the Heisenberg picture. The resulting cluster expansion is truncated at a level giving balance between physics rigor and numerical demand mitigation. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to nonequilibrium state evolution, as well as the much longer time durations typical for reaching steady-state device operation. This model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration phase locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied.

13. Numerical analysis of single and multiple jets

2017-05-01

The present study uses the concept of entropy generation to study numerically the flow and interaction of multiple jets. Several configurations of a single jet surrounded by 3, 5, 7 and 9 equidistant circumferential jets have been studied. The turbulent incompressible Navier-Stokes equations have been solved numerically using the commercial computational fluid dynamics code Fluent. The standard k-ɛ model has been selected to assess the eddy viscosity. The domain has been reduced to a quarter of the geometry due to symmetry. Results for axial and radial velocities have been compared with experimental measurements from the literature. Furthermore, additional results involving the entropy generation rate have been presented and discussed. Contribution to the topical issue "Materials for Energy harvesting, conversion and storage II (ICOME 2016)", edited by Jean-Michel Nunzi, Rachid Bennacer and Mohammed El Ganaoui.

14. Numerical Analysis of the SCHOLAR Supersonic Combustor

NASA Technical Reports Server (NTRS)

Rodriguez, Carlos G.; Cutler, Andrew D.

2003-01-01

The SCHOLAR scramjet experiment is the subject of an ongoing numerical investigation. The facility nozzle and combustor were solved separately and sequentially, with the exit conditions of the former used as inlet conditions for the latter. A baseline configuration for the numerical model was compared with the available experimental data. It was found that ignition-delay was underpredicted and fuel-plume penetration overpredicted, while the pressure rise was close to experimental values. In addition, grid-convergence by means of grid-sequencing could not be established. The effects of the different turbulence parameters were quantified. It was found that it was not possible to simultaneously predict the three main parameters of this flow: pressure-rise, ignition-delay, and fuel-plume penetration.

15. Analysis of the "naming game" with learning errors in communications.

PubMed

Lou, Yang; Chen, Guanrong

2015-07-16

The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs convergence. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
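A minimal naming game with learning errors can be sketched as follows; this toy version uses a complete graph and a fixed error rate, and is only loosely modeled on the NGLE setup described above:

```python
import random

def naming_game(n_agents, error_rate, max_rounds=50000, seed=1):
    """Minimal naming game on a complete graph. Each round a random
    speaker utters a word from its lexicon (inventing one if empty);
    with probability error_rate the listener mislearns it as a brand
    new word. On a match, both agents collapse to the matched word.
    Returns the number of rounds until global consensus."""
    rng = random.Random(seed)
    lexicons = [set() for _ in range(n_agents)]
    next_word = 0
    for rounds in range(1, max_rounds + 1):
        speaker, listener = rng.sample(range(n_agents), 2)
        if not lexicons[speaker]:
            lexicons[speaker].add(next_word)   # invent a new word
            next_word += 1
        word = rng.choice(sorted(lexicons[speaker]))
        if rng.random() < error_rate:          # learning error mutates the word
            word = next_word
            next_word += 1
        if word in lexicons[listener]:         # success: both collapse
            lexicons[speaker] = {word}
            lexicons[listener] = {word}
        else:
            lexicons[listener].add(word)       # failure: listener learns it
        if all(len(lex) == 1 and lex == lexicons[0] for lex in lexicons):
            return rounds
    return max_rounds

rounds_to_consensus = naming_game(n_agents=10, error_rate=0.0)
```

Raising error_rate in this sketch inflates the lexicons agents must hold before consensus, mirroring the memory-requirement finding reported above.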

16. SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS

NASA Technical Reports Server (NTRS)

Frisch, H. P.

1994-01-01

SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to ensure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check-out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.

18. An error taxonomy system for analysis of haemodialysis incidents.

PubMed

Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

2014-12-01

This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting an error taxonomy system which assumed no specific specialty to haemodialysis situations. Its application was conducted with 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to dialyser, circuit, medication and setting of dialysis condition. Approximately 70% of errors took place immediately before and after the four hours of haemodialysis therapy. Error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified under staff human factors, communication, task and organisational factors were found in most dialysis incidents. Device/equipment/materials, medicine and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicine and documents, whereas dialysis technologists made more errors with device/equipment/materials. This error taxonomy system is able not only to investigate incidents and adverse events occurring in the dialysis setting but also to estimate the safety-related status of an organisation, such as its reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

19. Human error identification: an analysis of myringotomy and ventilation tube insertion.

PubMed

Montague, Mary-Louise; Lee, Michael S W; Hussain, S S M

2004-10-01

To use a human reliability assessment tool to identify commonly occurring errors during myringotomy and ventilation tube (VT) insertion and to quantify the likelihood of error occurrence. Error-free task analysis for myringotomy and VT insertion was defined at the outset. Fifty-five consecutive myringotomy and VT insertion procedures were videotaped. The operator was either the senior author (S.S.M.H.) or a trainee in the specialist registrar or senior house officer grade. Three assessors (M.-L.M., M.S.W.L, and S.S.M.H.) blinded to operator identity independently evaluated each procedure. Interobserver agreement was calculated (kappa values). Twelve potential error types were identified. A total of 87 errors were observed in 55 procedures. In 53% of procedures (n = 29) multiple errors were identified. Seven percent of procedures (n = 4) were error free. The 4 most frequent errors identified were (1) failure to perform a unidirectional myringotomy incision (n = 37; 43%); (2) multiple attempts to place VT (n = 14; 16%); (3) multiple attempts to complete the myringotomy (n = 11; 13%); and (4) magnification setting too high (n = 11; 13%). The human error probability was 0.13. Interobserver agreement as expressed by kappa statistics was high. Human error identification in this most common of otologic procedures is crucial to future error avoidance. Eliminating the 2 most common errors in this model will halve the human error probability. Extending the role of error analysis to error-based teaching as an educational tool has potential.
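The reported human error probability is consistent with treating each of the 12 potential error types as one error opportunity per procedure (this opportunity count is our assumption; the record does not state its denominator):

```python
# Human error probability (HEP) as errors observed per error opportunity.
# Assumption (ours): each of the 12 potential error types is one
# opportunity per procedure, giving 55 * 12 = 660 opportunities.
errors_observed = 87
procedures = 55
error_types = 12
opportunities = procedures * error_types
hep = errors_observed / opportunities
print(round(hep, 2))  # 0.13, matching the reported value
```

Under the same accounting, eliminating the two most common errors (37 + 14 = 51 of the 87) would roughly halve the HEP, as the record notes.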

20. Numerical analysis of slender vortex motion

SciTech Connect

Zhou, H.

1996-02-01

Several numerical methods for slender vortex motion (the local induction equation, the Klein-Majda equation, and the Klein-Knio equation) are compared on the specific example of sideband instability of Kelvin waves on a vortex. Numerical experiments on this model problem indicate that all these methods yield qualitatively similar behavior, and this behavior is different from the behavior of a non-slender vortex with variable cross-section. It is found that the boundaries between stable, recurrent, and chaotic regimes in the parameter space of the model problem depend on the method used. The boundaries of these domains in the parameter space for the Klein-Majda equation and for the Klein-Knio equation are closely related to the core size. When the core size is large enough, the Klein-Majda equation always exhibits stable solutions for our model problem. Various conclusions are drawn; in particular, the behavior of turbulent vortices cannot be captured by these local approximations, and probably cannot be captured by any slender vortex model with constant vortex cross-section. Speculations about the differences between classical and superfluid hydrodynamics are also offered.

1. Flight instrumentation specification for parameter identification: Program user's guide. [instrument errors/error analysis]

NASA Technical Reports Server (NTRS)

Mohr, R. L.

1975-01-01

A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.
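For least-squares output-error identification, the Newton-Raphson optimization mentioned above is typically realized as a Gauss-Newton iteration on the residuals. A generic sketch with a toy exponential-decay model (not the programs' actual formulation):

```python
import numpy as np

def gauss_newton(model, jac, theta0, t, y_meas, iters=20):
    """Newton-Raphson-type (Gauss-Newton) minimization of the
    output-error cost J(theta) = sum((y_meas - model(theta, t))**2)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = y_meas - model(theta, t)     # output-error residuals
        J = jac(theta, t)                # sensitivities d(model)/d(theta)
        # Normal equations: (J^T J) step = J^T r
        theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
    return theta

# Toy problem: identify amplitude a and decay lam in y = a * exp(-lam * t)
model = lambda th, t: th[0] * np.exp(-th[1] * t)
jac = lambda th, t: np.column_stack([np.exp(-th[1] * t),
                                     -th[0] * t * np.exp(-th[1] * t)])
t = np.linspace(0.0, 5.0, 50)
true_theta = np.array([2.0, 0.7])
theta = gauss_newton(model, jac, [1.5, 0.9], t, model(true_theta, t))
```

In the instrumentation-error setting, the sensitivity matrix J is exactly where sensor bias and scale-factor errors enter the identified derivatives, which is what the four programs investigate.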

2. Error Analysis of Stereophotoclinometry in Support of the OSIRIS-REx Mission

Palmer, Eric; Gaskell, Robert W.; Weirich, John R.

2015-11-01

Stereophotoclinometry (SPC) has been used on numerous planetary bodies to derive shape models, most recently for 67P/Churyumov-Gerasimenko (Jorda, et al., 2014), the Earth (Palmer, et al., 2014) and Vesta (Gaskell, 2012). SPC is planned to create the ultra-high resolution topography for the upcoming mission OSIRIS-REx that will sample the asteroid Bennu, arriving in 2018. This shape model will be used both for scientific analysis and for operational navigation, to include providing the topography that will ensure a safe collection of the surface. We present the initial results of error analysis of SPC, with specific focus on how both systematic and non-systematic errors propagate through SPC into the shape model. For this testing, we have created a notional global truth model at 5 cm and a single region at 2.5 mm ground sample distance. These truth models were used to create images using GSFC's software Freespace. These images were then used by SPC to form a derived shape model with a ground sample distance of 5 cm. We will report on both the absolute and relative error that the derived shape model has compared to the original truth model, as well as other empirical and theoretical measurements of errors within SPC. Jorda, L. et al (2014) "The Shape of Comet 67P/Churyumov-Gerasimenko from Rosetta/Osiris Images", AGU Fall Meeting, #P41C-3943. Gaskell, R (2012) "SPC Shape and Topography of Vesta from DAWN Imaging Data", DPS Meeting #44, #209.03. Palmer, L., Sykes, M. V., Gaskell, R.W. (2014) "Mercator — Autonomous Navigation Using Panoramas", LPSC 45, #1777.

3. Diagnosing non-Gaussianity of forecast and analysis errors in a convective-scale model

Legrand, R.; Michel, Y.; Montmerle, T.

2016-01-01

In numerical weather prediction, the problem of estimating initial conditions with a variational approach is usually based on a Bayesian framework associated with a Gaussianity assumption of the probability density functions of both observations and background errors. In practice, Gaussianity of errors is tied to linearity, in the sense that a nonlinear model will yield non-Gaussian probability density functions. In this context, standard methods relying on Gaussian assumption may perform poorly. This study aims to describe some aspects of non-Gaussianity of forecast and analysis errors in a convective-scale model using a Monte Carlo approach based on an ensemble of data assimilations. For this purpose, an ensemble of 90 members of cycled perturbed assimilations has been run over a highly precipitating case of interest. Non-Gaussianity is measured using the K2 statistics from the D'Agostino test, which is related to the sum of the squares of univariate skewness and kurtosis. Results confirm that specific humidity is the least Gaussian variable according to that measure and also that non-Gaussianity is generally more pronounced in the boundary layer and in cloudy areas. The dynamical control variables used in our data assimilation, namely vorticity and divergence, also show distinct non-Gaussian behaviour. It is shown that while non-Gaussianity increases with forecast lead time, it is efficiently reduced by the data assimilation step especially in areas well covered by observations. Our findings may have implications for the choice of the control variables.
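The K2 statistic of the D'Agostino test is the sum of the squared z-statistics of the sample skewness and kurtosis tests; SciPy's stats.normaltest implements exactly this combination, which makes a quick check easy (synthetic data and sample sizes are our assumptions):

```python
import numpy as np
from scipy import stats

# D'Agostino-Pearson K^2: large values flag departure from Gaussianity.
rng = np.random.default_rng(42)
gaussian = rng.standard_normal(5000)
skewed = rng.exponential(size=5000)   # strongly non-Gaussian sample

k2_gauss, p_gauss = stats.normaltest(gaussian)
k2_skew, p_skew = stats.normaltest(skewed)

# Cross-check the definition: K^2 = z_skew^2 + z_kurt^2
z_s = stats.skewtest(skewed).statistic
z_k = stats.kurtosistest(skewed).statistic
```

Applied per grid point to ensemble perturbations, such a K2 field highlights the cloudy and boundary-layer regions the study describes.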

4. Numerical Sensitivity Analysis of a Composite Impact Absorber

Caputo, F.; Lamanna, G.; Scarano, D.; Soprano, A.

2008-08-01

This work deals with a numerical investigation on the energy absorbing capability of structural composite components. There are several difficulties associated with the numerical simulation of a composite impact-absorber, such as high geometrical non-linearities, boundary contact conditions, failure criteria, and material behaviour; all those aspects make the calibration of numerical models and the evaluation of their sensitivity to the governing geometrical, physical and numerical parameters one of the main objectives of any numerical investigation. The last aspect is a very important one for designers in order to make the application of the model to real cases robust from both a physical and a numerical point of view. At first, on the basis of experimental data from the literature, a preliminary calibration of the numerical model of a composite impact absorber and then a sensitivity analysis to the variation of the main geometrical and material parameters have been developed, by using explicit finite element algorithms implemented in the Ls-Dyna code.

5. Direct numerical simulations in solid mechanics for quantifying the macroscale effects of microstructure and material model-form error

DOE PAGES

Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; ...

2016-03-16

Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.

7. Direct Numerical Simulations in Solid Mechanics for Quantifying the Macroscale Effects of Microstructure and Material Model-Form Error

Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.

2016-05-01

Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Ultimately, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.

8. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

PubMed

Chiu, Ming-Chuan; Hsieh, Min-Chih

2016-05-01

9. English Majors' Errors in Translating Arabic Endophora: Analysis and Remedy

ERIC Educational Resources Information Center

Abdellah, Antar Solhy

2007-01-01

Egyptian English majors in the faculty of Education, South Valley University tend to mistranslate the plural inanimate Arabic pronoun with the singular inanimate English pronoun. A diagnostic test was designed to analyze this error. Results showed that a large number of students (first year and fourth year students) make this error, that the error…

10. An Analysis of Error-Correction Procedures during Discrimination Training.

ERIC Educational Resources Information Center

Rodgers, Teresa A.; Iwata, Brian A.

1991-01-01

Seven adults with severe to profound mental retardation participated in match-to-sample discrimination training under three conditions. Results indicated that error-correction procedures improve performance through negative reinforcement; that error correction may serve multiple functions; and that, for some subjects, trial repetition enhances…

11. Visual Retention Test: An Analysis of Children's Errors.

ERIC Educational Resources Information Center

Rice, James A.; Bobele, R. Monte

Grade level norms were developed, based on a sample of 678 elementary school students, for various error scores of the Benton Visual Retention Test. Norms were also developed for 201 normal children, 58 minimal brain dysfunction children, and 101 educable mentally retarded children. In both the copying mode and the memory mode, most errors were…

12. TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS

EPA Science Inventory

Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...

14. Factor Rotation and Standard Errors in Exploratory Factor Analysis

ERIC Educational Resources Information Center

Zhang, Guangjian; Preacher, Kristopher J.

2015-01-01

In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

15. Analysis of Attitudinal Data: Dealing with "Response Error"

ERIC Educational Resources Information Center

McCollum, Janet; Thompson, Bruce

1980-01-01

Response error refers to the tendency to respond to items based on the perceived social desirability or undesirability of given responses. Response error can be particularly problematic when all or most of the items on a measure are extremely attractive or unattractive. The present paper proposes a method of (a) distinguishing among preferences…

17. Manufacturing in space: Fluid dynamics numerical analysis

NASA Technical Reports Server (NTRS)

Robertson, S. J.; Nicholson, L. A.; Spradley, L. W.

1982-01-01

Numerical computations were performed for natural convection in circular enclosures under various conditions of acceleration. It was found that subcritical acceleration vectors applied in the direction of the temperature gradient will lead to an eventual state of rest regardless of the initial state of motion. Supercritical acceleration vectors will lead to the same steady state condition of motion regardless of the initial state of motion. Convection velocities were computed for acceleration vectors at various angles to the initial temperature gradient. The results for Rayleigh numbers of 1000 or less were found to closely follow Weinbaum's first order theory. Higher Rayleigh number results were shown to depart significantly from the first order theory. Supercritical behavior was confirmed for Rayleigh numbers greater than the known supercritical value of 9216. Response times were determined to provide an indication of the time required to change states of motion for the various cases considered.

18. Numerical Analysis of Magnetic Sail Spacecraft

SciTech Connect

Sasaki, Daisuke; Yamakawa, Hiroshi; Usui, Hideyuki; Funaki, Ikkoh; Kojima, Hirotsugu

2008-12-31

To capture the kinetic energy of the solar wind by creating a large magnetosphere around the spacecraft, a magneto-plasma sail injects a plasma jet into a strong magnetic field produced by an electromagnet onboard the spacecraft. The aim of this paper is to investigate the effect of the IMF (interplanetary magnetic field) on the magnetosphere of a magneto-plasma sail. First, using an axisymmetric two-dimensional MHD code, we numerically confirm the magnetic field inflation and the formation of a magnetosphere by the interaction between the solar wind and the magnetic field. The expansion of an artificial magnetosphere by the plasma injection is then simulated, and we show that the magnetosphere is formed by the interaction between the solar wind and the magnetic field expanded by the plasma jet from the spacecraft. This simulation indicates that the size of the artificial magnetosphere becomes smaller when the IMF is applied.

19. Procedures for numerical analysis of circadian rhythms

PubMed Central

REFINETTI, ROBERTO; CORNÉLISSEN, GERMAINE; HALBERG, FRANZ

2010-01-01

This article reviews various procedures used in the analysis of circadian rhythms at the populational, organismal, cellular and molecular levels. The procedures range from visual inspection of time plots and actograms to several mathematical methods of time series analysis. Computational steps are described in some detail, and additional bibliographic resources and computer programs are listed. PMID:23710111
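Among the procedures the article covers is least-squares fitting of a sinusoid to a time series (cosinor-type analysis). As a rough illustration only (not the authors' code; the period and sample values are made up), a single-component cosinor fit can be sketched as:

```python
import numpy as np

def cosinor_fit(t, y, period=24.0):
    """Least-squares single-component cosinor: y ~ M + A*cos(w*t + phi)."""
    w = 2 * np.pi / period
    # Linear regressors: mean level, cosine, and sine components
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(beta, gamma)
    acrophase = np.arctan2(-gamma, beta)  # phase angle of the peak, radians
    return mesor, amplitude, acrophase

# Synthetic circadian rhythm sampled every 2 h over 3 days (illustrative values)
t = np.arange(0, 72, 2.0)
y = 10 + 3 * np.cos(2 * np.pi * t / 24 - 1.0)
M, A, phi = cosinor_fit(t, y)  # recovers mesor 10, amplitude 3, phase -1.0
```

Because the cosinor model is linear in its cosine and sine coefficients, the fit reduces to ordinary least squares even though the amplitude and phase are nonlinear functions of those coefficients.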

20. Analysis on the alignment errors of segmented Fresnel lens

Zhou, Xudong; Wu, Shibin; Yang, Wei; Wang, Lihua

2014-09-01

Stitched Fresnel lenses are designed for application in micro-focus X-ray optics, but splicing errors between sub-apertures affect the optical performance of the entire mirror. The offset error tolerances for the different degrees of freedom between the sub-apertures are analyzed theoretically according to wave-front aberration theory, with the Rayleigh criterion as the evaluation criterion, and the theory is then validated using the simulation software ZEMAX. The results show that the Z-axis piston error tolerance and the XY-axis translation error tolerance increase with increasing F-number of the stitched Fresnel lens, and the XY-axis tilt error tolerance decreases with increasing diameter. The results provide a theoretical basis and guidance for the design, testing, and alignment of stitched Fresnel lenses.

1. The Analysis, Numerical Simulation, and Diagnosis of Extratropical Weather Systems

DTIC Science & Technology

1999-09-30

respectively, and iv) the numerical simulation and observational validation of high-spatial-resolution (~10 km) numerical predictions. APPROACH My approach...satellite and targeted dropwindsonde observations; in collaboration with Xiaolei Zou (Fla. State Univ.), Chris Velden (Univ. Wisc./CIMMS), and Arlin...(Univ. Wisc.), and Arlin Krueger (NASA/GSFC). Analysis and numerical simulation of the fine-scale structure of upper-level jet streams from high-spatial

2. Research in applied mathematics, numerical analysis, and computer science

NASA Technical Reports Server (NTRS)

1984-01-01

Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

3. Integration of numerical analysis tools for automated numerical optimization of a transportation package design

SciTech Connect

Witkowski, W.R.; Eldred, M.S.; Harding, D.C.

1994-09-01

The use of state-of-the-art numerical analysis tools to determine the optimal design of a radioactive material (RAM) transportation container is investigated. The design of a RAM package's components involves a complex coupling of structural, thermal, and radioactive shielding analyses. The final design must adhere to very strict design constraints. The current technique used by cask designers is uncoupled and involves designing each component separately with respect to its driving constraint. With the use of numerical optimization schemes, the complex couplings can be considered directly, and the performance of the integrated package can be maximized with respect to the analysis conditions. This can lead to more efficient package designs. Thermal and structural accident conditions are analyzed in the shape optimization of a simplified cask design. In this paper, details of the integration of numerical analysis tools, development of a process model, nonsmoothness difficulties with the optimization of the cask, and preliminary results are discussed.

4. Error analysis of finite element method for Poisson–Nernst–Planck equations

SciTech Connect

Sun, Yuzhou; Sun, Pengtao; Zheng, Bin; Lin, Guang

2016-08-01

A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
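Error estimates of this kind are typically validated numerically by computing the observed convergence order from errors on successively refined meshes, p ≈ log(e_h / e_{h/2}) / log 2. A minimal sketch with synthetic error values (not the paper's data):

```python
import math

def observed_order(e_coarse, e_fine, ratio=2.0):
    """Observed convergence order from errors on two meshes refined by `ratio`."""
    return math.log(e_coarse / e_fine) / math.log(ratio)

# Hypothetical errors behaving like e(h) = C*h^2, halving h at each refinement
h = [0.1, 0.05, 0.025]
e = [0.5 * hi**2 for hi in h]
orders = [observed_order(e[i], e[i + 1]) for i in range(2)]  # both ~2.0
```

An observed order matching the theoretical rate (here 2 for a second-order estimate) is the standard check that an a priori estimate is sharp in practice.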

5. Fabrication error analysis and experimental demonstration for computer-generated holograms.

PubMed

Zhou, Ping; Burge, James H

2007-02-10

Aspheric optical surfaces are often tested using computer-generated holograms (CGHs). For precise measurement, the wavefront errors caused by the CGH must be known and characterized. A parametric model relating the wavefront errors to the CGH fabrication errors is introduced. Methods are discussed for measuring the fabrication errors in the CGH substrate, duty cycle, etching depth, and effect of surface roughness. An example analysis of the wavefront errors from fabrication nonuniformities for a phase CGH is given. The calibration of these effects for a CGH null corrector is demonstrated to cause measurement error less than 1 nm.

6. NA-NET numerical analysis net

SciTech Connect

Dongarra, J. (Dept. of Computer Science; Oak Ridge National Lab., TN); Rosener, B. (Dept. of Computer Science)

1991-12-01

This report describes a facility called NA-NET created to allow numerical analysts (na) an easy method of communicating with one another. The main advantage of the NA-NET is uniformity of addressing. All mail is addressed to the Internet host "na-net.ornl.gov" at Oak Ridge National Laboratory. Hence, members of the NA-NET do not need to remember complicated addresses or even where a member is currently located. As long as moving members change their e-mail address in the NA-NET, everything works smoothly. The NA-NET system is currently located at Oak Ridge National Laboratory. It is running on the same machine that serves netlib. Netlib is a separate facility that distributes mathematical software via electronic mail. For more information on netlib, send the one-line message "send index" to netlib@ornl.gov. The following report describes the current NA-NET system from both a user's perspective and an implementation perspective. Currently, there are over 2100 members in the NA-NET. An average of 110 mail messages pass through this facility daily.

8. A categorical analysis of coreference resolution errors in biomedical texts.

PubMed

Choi, Miji; Zobel, Justin; Verspoor, Karin

2016-04-01

Coreference resolution is an essential task in information extraction from the published biomedical literature. It supports the discovery of complex information by linking referring expressions such as pronouns and appositives to their referents, which are typically entities that play a central role in biomedical events. Correctly establishing these links allows detailed understanding of all the participants in events, and connecting events together through their shared participants. As an initial step towards the development of a novel coreference resolution system for the biomedical domain, we have categorised the characteristics of coreference relations by type of anaphor as well as broader syntactic and semantic characteristics, and have compared the performance of a domain adaptation of a state-of-the-art general system to published results from domain-specific systems in terms of this categorisation. We also develop a rule-based system for anaphoric coreference resolution in the biomedical domain with simple modules derived from available systems. Our results show that the domain-specific systems outperform the general system overall. Whilst this result is unsurprising, our proposed categorisation enables a detailed quantitative analysis of the system performance. We identify limitations of each system and find that there remain important gaps in the state-of-the-art systems, which are clearly identifiable with respect to the categorisation. We have analysed in detail the performance of existing coreference resolution systems for the biomedical literature and have demonstrated that there are clear gaps in their coverage. The approach developed in the general domain needs to be tailored for portability to the biomedical domain. The specific framework for class-based error analysis of existing systems that we propose has benefits for identifying specific limitations of those systems. This in turn provides insights for further system development.

9. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

ERIC Educational Resources Information Center

Monagle, E. Brette

The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
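The noting-classifying-counting step described above is a straightforward frequency tally. A minimal sketch (the error categories and counts here are hypothetical, purely to illustrate the bookkeeping):

```python
from collections import Counter

# Hypothetical error log: one entry per error an editor circled and classified
error_log = [
    "subject-verb agreement", "comma splice", "jargon",
    "comma splice", "passive voice", "comma splice",
]

# Frequency count of error classes, most common first
pattern = Counter(error_log)
for category, count in pattern.most_common():
    print(f"{category}: {count}")
```

Sorting the tally by frequency is what surfaces a writer's dominant error pattern, which is the basis for the targeted feedback the guide recommends.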

10. Errors Analysis of Solving Linear Inequalities among the Preparatory Year Students at King Saud University

ERIC Educational Resources Information Center

El-khateeb, Mahmoud M. A.

2016-01-01

The purpose of this study is to investigate the classes of errors made by preparatory year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios at which they occurred in solving inequalities. In the collection of the data,…

11. Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas

ERIC Educational Resources Information Center

Herzberg, Tina

2010-01-01

In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…

12. Numerical Analysis Of Interlaminar-Fracture Toughness

NASA Technical Reports Server (NTRS)

Chamis, C. C.; Murthy, P. L. N.

1988-01-01

Finite-element analysis applied in conjunction with strain-energy and micromechanical concepts. Computational procedure involves local, local-crack-closure, and/or the "unique" local-crack-closure method developed at NASA Lewis Research Center, for mathematical modeling of ENF and MMF. Methods based on three-dimensional finite-element analysis in conjunction with concept of strain-energy-release rate and with micromechanics of composite materials. Assists in interpretation of ENF and MMF fracture tests performed to obtain fracture-toughness parameters, by enabling evaluation of states of stress likely to induce interlaminar fractures.

13. Instantaneous PIV/PTV-based pressure gradient estimation: a framework for error analysis and correction

McClure, Jeffrey; Yarusevych, Serhiy

2017-08-01

A framework for the exact determination of the pressure gradient estimation error in incompressible flows, given erroneous velocimetry data, is derived. The framework relies on calculating the curl and divergence of the pressure gradient error over the domain and then solving a div-curl system to reconstruct the pressure gradient error field. In practice, boundary conditions for the div-curl system are unknown, and the divergence of the pressure gradient error requires approximation. The effect of zero pressure gradient error boundary conditions and approximating the divergence are evaluated using three flow cases: (1) a stationary Taylor vortex; (2) an advecting Lamb-Oseen vortex near a boundary; and (3) direct numerical simulation of the turbulent wake of a circular cylinder. The results show that the exact form of the pressure gradient error field reconstruction converges onto the exact values, within truncation and round-off errors, except for a small flow field region near the domain boundaries. It is also shown that the approximation for the divergence of the pressure gradient error field retains the fidelity of the reconstruction, even when velocity field errors are generated with substantial spatial variation. In addition to the utility of the proposed technique to improve the accuracy of pressure estimates, the reconstructed error fields provide spatially resolved estimates for instantaneous PIV/PTV-based pressure error.

14. Numerical analysis of the orthogonal descent method

SciTech Connect

Shokov, V.A.; Shchepakin, M.B.

1994-11-01

The author of the orthogonal descent method has been testing it since 1977. The results of these tests have only strengthened the need for further analysis and development of orthogonal descent algorithms for various classes of convex programming problems. Systematic testing of orthogonal descent algorithms and comparison of test results with other nondifferentiable optimization methods was conducted at TsEMI RAN in 1991-1992.

15. Numerical analysis of the sea state bias for satellite altimetry

Glazman, R. E.; Fabrikant, A.; Srokosz, M. A.

1996-02-01

Theoretical understanding of the dependence of sea state bias (SSB) on wind wave conditions has been achieved only for the case of a unidirectional wind-driven sea [Jackson, 1979; Rodriguez et al., 1992; Glazman and Srokosz, 1991]. Recent analysis of Geosat and TOPEX altimeter data showed that additional factors, such as swell, ocean currents, and complex directional properties of realistic wave fields, may influence SSB behavior. Here we investigate effects of two-dimensional multimodal wave spectra using a numerical model of radar reflection from a random, non-Gaussian surface. A recently proposed ocean wave spectrum is employed to describe sea surface statistics. The following findings appear to be of particular interest: (1) Sea swell has an appreciable effect in reducing the SSB coefficient compared with the pure wind sea case but has less effect on the actual SSB, owing to the corresponding increase in significant wave height. (2) Hidden multimodal structure (the two-dimensional wavenumber spectrum contains separate peaks, for swell and wind seas, while the frequency spectrum looks unimodal) results in an appreciable change of SSB. (3) For unimodal, purely wind-driven seas, the influence of the angular spectral width is relatively unimportant; that is, a unidirectional sea provides a good qualitative model for SSB if the swell is absent. (4) The pseudo wave age is generally much better for parametrizing the SSB coefficient than the actual wave age (which is ill-defined for a multimodal sea) or wind speed. (5) SSB can be as high as 5% of the significant wave height, which is significantly greater than predicted by present empirical model functions tuned on global data sets. (6) Parameterization of SSB in terms of wind speed is likely to lead to errors due to the dependence on the (in practice, unknown) fetch.

17. Numerical bifurcation analysis of immunological models with time delays

Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady

2005-12-01

In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.

18. Numerical analysis on pump turbine runaway points

Guo, L.; Liu, J. T.; Wang, L. Q.; Jiao, L.; Li, Z. F.

2012-11-01

To investigate the behavior of pump turbine runaway points at different guide vane openings, a hydraulic model was established based on a pumped-storage power station. The RNG k-ε model and SIMPLEC algorithm were used to simulate the internal flow fields. The simulation results were compared with test data, and good agreement was obtained between the experimental data and the CFD results. Based on this model, an internal flow analysis was carried out. The results show that when the pump turbine runs at runaway speed, many vortices appear in the flow passage of the runner. These vortices can be observed even as the guide vane opening changes, and they are an important source of energy loss in the runaway condition. Pressures on the two sides of the runner blades are almost equal, so the runner power is very low. The high speed induces a large centrifugal force, and a small guide vane opening gives the water velocity a large tangential component; an obvious water ring can then be observed between the runner blades and guide vanes at small guide vane openings. The ring disappears when the opening is larger than 20°. These conclusions provide a theoretical basis for the analysis and simulation of pump turbine runaway points.

19. Numerical Uncertainty Quantification for Radiation Analysis Tools

NASA Technical Reports Server (NTRS)

Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

2007-01-01

Recently a new emphasis has been placed on engineering applications of space radiation analyses and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to obtain an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
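The second kind of convergence test, interpolation over a depth grid, can be illustrated generically: sample a curve at ever-finer grids and watch the interpolation error shrink. The dose curve and grid sizes below are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

def interp_error(f, n, x_ref):
    """Max abs error of linear interpolation of f sampled at n depth points."""
    x = np.linspace(0.0, 10.0, n)
    return np.max(np.abs(np.interp(x_ref, x, f(x)) - f(x_ref)))

# Hypothetical smooth dose-vs-depth curve (arbitrary units)
dose = lambda d: 50.0 * np.exp(-0.3 * d)
x_ref = np.linspace(0.0, 10.0, 2001)  # dense reference grid

# Error per grid size; for a smooth curve it shrinks ~4x per doubling of points
errors = {n: interp_error(dose, n, x_ref) for n in (11, 21, 41, 81)}
```

When successive refinements change the interpolated result by less than the tolerance of interest, the grid is fine enough, which is the stopping criterion convergence testing provides.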

20. Numerical analysis of soil-structure interaction

Vanlangen, Harry

1991-05-01

A study to improve some existing procedures for the finite element analysis of soil deformation and collapse is presented. Special attention is paid to problems of soil-structure interaction. Emphasis is put on the behavior of the soil rather than on that of structures. This seems justifiable if static interaction of stiff structures and soft soil is considered; in such a case, nonlinear response will stem exclusively from soil deformation. In addition, the quality of the results depends to a large extent on proper modeling of soil flow along structures rather than on modeling of the structure itself. An exception is made when geotextile reinforcement is considered; in that case the structural element, i.e., the geotextile, is highly flexible. The equation of continuum equilibrium, which serves as a starting point for the finite element formulation of large-deformation elastoplasticity, is discussed, with special attention paid to the interpretation of some objective stress rate tensors. The solution of nonlinear finite element equations is addressed. Soil deformation in the pre-failure range is discussed, and large-deformation effects in the analysis of soil deformation are touched on.

1. Stochastic modelling and analysis of IMU sensor errors

Zhao, Y.; Horemuz, M.; Sjöberg, L. E.

2011-12-01

The performance of a GPS/INS integration system is largely determined by the ability of the stand-alone INS to determine position and attitude during GPS outages. Positional and attitude precision degrades rapidly during a GPS outage due to INS sensor errors. With the advantages of low price and small volume, Micro-Electro-Mechanical Systems (MEMS) sensors have been widely used in GPS/INS integration. However, a stand-alone MEMS unit can maintain reasonable positional precision for only a few seconds due to systematic and random sensor errors. The general stochastic error sources in inertial sensors can be modelled (IEEE STD 647, 2006) as Quantization Noise, Random Walk, Bias Instability, Rate Random Walk and Rate Ramp. Here we apply different methods to analyze the stochastic sensor errors, i.e. autoregressive modelling, the Gauss-Markov process, Power Spectral Density and Allan Variance. Tests on a MEMS-based inertial measurement unit were then carried out with these methods. The results show that the different methods give similar estimates of the stochastic error model parameters. These values can be used further in the Kalman filter for better navigation accuracy and in the Doppler frequency estimate for faster acquisition after a GPS signal outage.
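Of the methods listed, the Allan variance is the most common way to separate these noise terms by averaging time. As a rough illustration (not the authors' code; the sample rate, noise level and cluster times are made-up values), a non-overlapped Allan deviation for a simulated white-noise gyro signal can be computed as:

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Non-overlapped Allan deviation of a rate signal sampled at fs Hz."""
    devs = []
    for tau in taus:
        m = int(tau * fs)                 # samples per cluster of length tau
        n = len(rate) // m                # number of whole clusters
        means = rate[: n * m].reshape(n, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)  # Allan variance at tau
        devs.append(np.sqrt(avar))
    return np.array(devs)

rng = np.random.default_rng(0)
gyro = 0.01 * rng.standard_normal(200_000)  # simulated white noise at 100 Hz
taus = [0.1, 1.0, 10.0]
adev = allan_deviation(gyro, fs=100.0, taus=taus)
```

On a log-log plot of Allan deviation versus tau, pure white (angle random walk) noise falls with slope -1/2, while bias instability flattens out and rate random walk rises, which is how the individual error terms in the IEEE model are identified from one curve.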

2. Hurricane Debby - Analysis and numerical forecasts using VAS soundings

NASA Technical Reports Server (NTRS)

Le Marshall, J. F.; Smith, W. L.; Callan, G. M.

1984-01-01

The utility of VISSR Atmospheric Sounder (VAS) temperature and moisture soundings in defining the storm and its surroundings at subsynoptic scales has been examined using a numerical analysis and prognosis system. In particular, VAS temperature and moisture soundings and cloud and water vapor motion winds have been used in numerical analysis for three time periods. It is shown that the VAS temperature and moisture data which specify temperature and moisture well in cloud free regions are complemented by cloud and water vapor wind data which provide horizontal gradient information for the cloudy areas. The loss of analysis integrity due to the reduction of VAS data density in the cloudy regions associated with synoptic activity is ameliorated by using cloud and water vapor motion winds. The improvement in numerical forecasts resulting from the addition of these data to the numerical data base is also recorded.

3. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

Pan, B.; Wang, B.; Lubineau, G.

2016-07-01

Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work.

4. A comprehensive analysis of translational missense errors in the yeast Saccharomyces cerevisiae.

PubMed

Kramer, Emily B; Vallabhaneni, Haritha; Mayer, Lauren M; Farabaugh, Philip J

2010-09-01

The process of protein synthesis must be sufficiently rapid and sufficiently accurate to support continued cellular growth. Failure in speed or accuracy can have dire consequences, including disease in humans. Most estimates of the accuracy come from studies of bacterial systems, principally Escherichia coli, and have involved incomplete analysis of possible errors. We recently used a highly quantitative system to measure the frequency of all types of misreading errors by a single tRNA in E. coli. That study found a wide variation in error frequencies among codons; a major factor causing that variation is competition between the correct (cognate) and incorrect (near-cognate) aminoacyl-tRNAs for the mutant codon. Here we extend that analysis to measure the frequency of missense errors by two tRNAs in a eukaryote, the yeast Saccharomyces cerevisiae. The data show that in yeast errors vary by codon from a low of 4 × 10^-5 to a high of 6.9 × 10^-4 per codon and that error frequency is in general about threefold lower than in E. coli, which may suggest that yeast has additional mechanisms that reduce missense errors. Error rate again is strongly influenced by tRNA competition. Surprisingly, missense errors involving wobble position mispairing were much less frequent in S. cerevisiae than in E. coli. Furthermore, the error-inducing aminoglycoside antibiotic, paromomycin, which stimulates errors on all error-prone codons in E. coli, has a more codon-specific effect in yeast.

5. Systematic error analysis and correction in quadriwave lateral shearing interferometer

Zhu, Wenhua; Li, Jinpeng; Chen, Lei; Zheng, Donghui; Yang, Ying; Han, Zhigang

2016-12-01

To obtain high-precision and high-resolution measurement of dynamic wavefront, the systematic error of the quadriwave lateral shearing interferometer (QWLSI) is analyzed and corrected. The interferometer combines a chessboard grating with an order selection mask to select four replicas of the wavefront under test. A collimating lens is introduced to collimate the replicas, which not only eliminates the coma induced by the shear between each two replicas, but also avoids the astigmatism and defocus caused by CCD tilt. Besides, this configuration permits the shear amount to vary from zero, which benefits calibrating the systematic errors. A practical transmitted wavefront was measured by the QWLSI with different shear amounts. The systematic errors of reconstructed wavefronts are well suppressed. The standard deviation of root mean square is 0.8 nm, which verifies the stability and reliability of QWLSI for dynamic wavefront measurement.

6. Analysis of adjustment error in aspheric null testing with CGH

He, Yiwei; Xi, Hou; Chen, Qiang; Wu, Fan; Li, Chaoqiang; Zhu, Xiaoqiang; Song, Weihong

2016-09-01

Generally, in order to attain high accuracy in aspheric testing, a high-quality CGH (computer-generated hologram) is inserted behind the transmission sphere to generate a specified wavefront that matches the aspheric part. By function, the CGH is divided into two parts: the central region, called the testing hologram, generates the specified aspheric wavefront; the outer ring, called the alignment hologram, is used to align the position of the CGH behind the transmission sphere. Even with the alignment hologram, some adjustment error remains from both the CGH and the aspheric part, such as tilt, eccentricity and defocus. Here we simulate the effect of these error sources on the accuracy, defined as the RMS after piston, tilt and power are removed, when testing a specified aspheric part. The simulations indicate that the total measurement error is about 2 nm and that the defocus of the CGH contributes most.
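The accuracy metric used in this record, RMS after piston, tilt and power are removed, can be sketched as a least-squares fit on a symmetric grid. The grid size, coefficients, and the astigmatism term standing in for a "real" figure error below are illustrative assumptions:

```python
import math

rng = range(-10, 11)                      # 21 x 21 symmetric grid
pts = [(float(x), float(y)) for x in rng for y in rng]

def rms(w):
    return math.sqrt(sum(v * v for v in w) / len(w))

def remove_piston_tilt_power(w):
    """Least-squares removal of piston, tilt, and power (defocus)."""
    n = len(pts)
    sx2 = sum(x * x for x, _ in pts)
    sy2 = sum(y * y for _, y in pts)
    sr2 = sum(x * x + y * y for x, y in pts)
    sr4 = sum((x * x + y * y) ** 2 for x, y in pts)
    # tilt terms are orthogonal to 1 and r^2 on this symmetric grid
    cx = sum(v * x for v, (x, y) in zip(w, pts)) / sx2
    cy = sum(v * y for v, (x, y) in zip(w, pts)) / sy2
    # solve the 2x2 normal equations for piston c0 and power c3
    b0 = sum(w)
    b3 = sum(v * (x * x + y * y) for v, (x, y) in zip(w, pts))
    det = n * sr4 - sr2 * sr2
    c0 = (sr4 * b0 - sr2 * b3) / det
    c3 = (n * b3 - sr2 * b0) / det
    return [v - c0 - cx * x - cy * y - c3 * (x * x + y * y)
            for v, (x, y) in zip(w, pts)]

# wavefront = alignment terms (piston, tilt, power) + a genuine figure error
astig = [0.001 * (x * x - y * y) for x, y in pts]     # stand-in "real" error
w = [0.5 + 0.02 * x - 0.01 * y + 0.003 * (x * x + y * y) + a
     for (x, y), a in zip(pts, astig)]

residual = remove_piston_tilt_power(w)
print(f"raw RMS = {rms(w):.4f}, residual RMS = {rms(residual):.4f}")
```

Because the astigmatism term is orthogonal to piston, tilt, and power on this grid, the residual RMS recovers exactly the "real" error, while the raw RMS is dominated by the removable alignment terms.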

7. Numerical analysis of human dental occlusal contact

Bastos, F. S.; Las Casas, E. B.; Godoy, G. C. D.; Meireles, A. B.

2010-06-01

The purpose of this study was to obtain real contact areas, forces, and pressures acting on human dental enamel as a function of the nominal pressure during dental occlusal contact. The described development consisted of three steps: characterization of the surface roughness by 3D contact profilometry test, finite element analysis of micro responses for each pair of main asperities in contact, and homogenization of macro responses using an assumed probability density function. The inelastic deformation of enamel was considered, adjusting the stress-strain relationship of sound enamel to that obtained from instrumented indentation tests conducted with spherical tip. A mechanical part of the static friction coefficient was estimated as the ratio between tangential and normal components of the overall resistive force, resulting in μd = 0.057. Less than 1% of contact pairs reached the yield stress of enamel, indicating that the occlusal contact is essentially elastic. The micro-models indicated an average hardness of 6.25 GPa, and the homogenized result for the macroscopic interface was around 9 GPa. Further refinements of the methodology and verification using experimental data can provide a better understanding of processes related to contact, friction and wear of human tooth enamel.

8. NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET

NASA Technical Reports Server (NTRS)

Kumar, A.

1994-01-01

The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.

10. Combustion irreversibilities: Numerical simulation and analysis

Silva, Valter; Rouboa, Abel

2012-08-01

An exergy analysis was performed considering the combustion of methane and agro-industrial residues produced in Portugal (forest residues and vine prunings). Since the irreversibilities of a thermodynamic process are path dependent, the combustion process was considered as resulting from different hypothetical paths, each characterized by four main sub-processes: reactant mixing, fuel oxidation, internal thermal energy exchange (heat transfer), and product mixing. The exergetic efficiency was computed using a zero-dimensional model developed in a Visual Basic home code. It was concluded that the exergy losses were mainly due to the internal thermal energy exchange sub-process. The exergy losses from this sub-process are higher when the reactants are preheated up to the ignition temperature without previous fuel oxidation. On the other hand, the global exergy destruction can be reduced by increasing the pressure, the reactant temperature, and the oxygen content of the oxidant stream. This methodology allows the identification of the phenomena and processes that have larger exergy losses, the understanding of why these losses occur, and how the exergy changes with the parameters of each system, which is crucial for implementing syngas combustion from biomass products as a competitive technology.

11. Error analysis on spinal motion measurement using skin mounted sensors.

PubMed

Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond

2008-01-01

Measurement errors of skin-mounted sensors in measuring forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the positions of the entire lumbar spine were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Light-weight miniature sensors of the electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebra was decomposed into sensor sliding and tilting, from which sliding error and tilting error were introduced. The gross motion range of forward bending of the lumbar spine measured from bony markers of the vertebrae was 67.8 degrees (SD 10.6 degrees) and that from the sensors was 62.8 degrees (SD 12.8 degrees). The error and absolute error for the gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.

12. Analysis and improvement of gas turbine blade temperature measurement error

Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

2015-10-01

Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.
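The iterative blade-temperature computation mentioned above can be sketched minimally. The paper's method accounts for several error sources in detail; the gray-body Stefan-Boltzmann signal model and the temperature-dependent emissivity law `eps(T)` below are illustrative assumptions, not the authors' formulation:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_ENV = 900.0            # effective environment temperature, K (assumed)

def eps(T):
    """Hypothetical temperature-dependent blade emissivity."""
    return 0.85 + 1e-5 * (T - 1000.0)

# Assumed detected-signal model: self-emission plus reflected
# environmental radiation,
#   S = eps(T)*SIGMA*T^4 + (1 - eps(T))*SIGMA*T_ENV^4

def blade_temperature(S, T0=1000.0, tol=1e-6, max_iter=100):
    """Fixed-point iteration: invert the signal model with eps frozen
    at the previous temperature estimate, then update."""
    T = T0
    for _ in range(max_iter):
        e = eps(T)
        T_new = ((S / SIGMA - (1.0 - e) * T_ENV ** 4) / e) ** 0.25
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

# forward-simulate a measurement at a known true temperature
T_true = 1200.0
S_meas = eps(T_true) * SIGMA * T_true ** 4 \
         + (1.0 - eps(T_true)) * SIGMA * T_ENV ** 4
print(f"recovered T = {blade_temperature(S_meas):.2f} K")
```

Because the emissivity varies weakly with temperature, the fixed-point iteration converges in a few steps and recovers the true blade temperature.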

13. Bayesian analysis of truncation errors in chiral effective field theory

Melendez, J.; Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.

2016-09-01

In the Bayesian approach to effective field theory (EFT) expansions, truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. By encoding expectations about the naturalness of EFT expansion coefficients for observables, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. We extend and test previous calculations of DOB intervals for chiral EFT observables, examine correlations between contributions at different orders and energies, and explore methods to validate the statistical consistency of the EFT expansion parameter. Supported in part by the NSF and the DOE.
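The first-omitted-term idea behind such DOB intervals can be sketched as follows. If an observable's expansion X = X_ref · Σ cₙQⁿ has "natural" coefficients cₙ ~ N(0, c̄²), the error from truncating after order k is dominated by c_{k+1}Q^{k+1}, giving a 68% DOB half-width of roughly c̄·Q^{k+1}. The coefficients and expansion parameter below are illustrative numbers, not results from the cited work:

```python
import math

def dob68_truncation(coeffs, Q):
    """68% DOB half-width after truncating at the last supplied order,
    using the RMS of the known coefficients as the naturalness scale."""
    k = len(coeffs) - 1
    cbar = math.sqrt(sum(c * c for c in coeffs) / len(coeffs))
    return cbar * Q ** (k + 1)

coeffs = [1.0, -0.6, 1.3, 0.8]     # c_0 .. c_3, assumed natural
Q = 0.33                            # expansion parameter (illustrative)
print(f"68% truncation-error band ~ +/- {dob68_truncation(coeffs, Q):.5f}")
```

Truncating at a lower order widens the band by a factor of roughly 1/Q per dropped order, which is the order-by-order convergence pattern the Bayesian framework formalizes.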

14. Frequency analysis of nonlinear oscillations via the global error minimization

Kalami Yazdi, M.; Hosseini Tehrani, P.

2016-06-01

The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), are illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and the Duffing-harmonic oscillator are treated. In order to validate and exhibit the merit of the method, the obtained result is compared with both the exact frequency and the outcomes of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can be applied promisingly to conservative nonlinear problems.
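A one-term sketch of global error minimization for the Duffing-harmonic oscillator x'' + x³/(1+x²) = 0 goes as follows (this is an illustrative first-order version, not the full procedure of the record). With the ansatz x = A·cos(ωt), the squared residual integrated over one period is minimized in closed form by ω² = (1/(πA))·∫₀^{2π} f(A cos s) cos s ds, which can be evaluated by simple quadrature:

```python
import math

def f(x):
    """Restoring force of the Duffing-harmonic oscillator."""
    return x ** 3 / (1.0 + x ** 2)

def gem_frequency(A, n=20000):
    """Frequency minimizing the mean-square residual of the one-term ansatz."""
    h = 2.0 * math.pi / n
    integral = sum(f(A * math.cos(k * h)) * math.cos(k * h)
                   for k in range(n)) * h
    return math.sqrt(integral / (math.pi * A))

for A in (0.5, 1.0, 10.0):
    print(f"A = {A:5.1f}  ->  omega ~ {gem_frequency(A):.4f}")
```

The approximate frequency increases with amplitude and approaches 1 from below as A grows, matching the known limiting behavior of this oscillator.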

15. The impact of response measurement error on the analysis of designed experiments

SciTech Connect

Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

2016-11-01

This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
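The role of repeat measurements discussed above can be shown with a small Monte Carlo sketch: each experimental run has process noise, each reading of the response adds measurement error on top, and averaging m repeat readings per run shrinks only the measurement-error contribution to the estimated effect. The two-level design and all variances below are illustrative assumptions:

```python
import math
import random

random.seed(1)

EFFECT = 1.0          # true difference between factor levels
SD_PROCESS = 1.0      # run-to-run process variation
SD_MEAS = 2.0         # response measurement error per reading
RUNS = 8              # runs per factor level

def estimated_effect(m):
    """One simulated experiment: difference of level means, with m
    repeat readings averaged per run."""
    def run_mean(level):
        true = level * EFFECT + random.gauss(0.0, SD_PROCESS)
        readings = [true + random.gauss(0.0, SD_MEAS) for _ in range(m)]
        return sum(readings) / m
    hi = sum(run_mean(1) for _ in range(RUNS)) / RUNS
    lo = sum(run_mean(0) for _ in range(RUNS)) / RUNS
    return hi - lo

def sd_of_effect(m, reps=4000):
    """Empirical SD of the estimated effect over many experiments."""
    vals = [estimated_effect(m) for _ in range(reps)]
    mu = sum(vals) / reps
    return math.sqrt(sum((v - mu) ** 2 for v in vals) / (reps - 1))

print(f"SD of estimated effect, 1 reading/run : {sd_of_effect(1):.3f}")
print(f"SD of estimated effect, 5 readings/run: {sd_of_effect(5):.3f}")
```

The variance of the effect estimate is 2/RUNS · (σ²_process + σ²_meas/m), so repeats help most when measurement error dominates process variation, as in this illustrative setup.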

17. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

ERIC Educational Resources Information Center

Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

2013-01-01

In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…

18. Uncertainty analysis of selected sources of errors in bioelectromagnetic investigations.

PubMed

Dlugosz, Tomasz

2014-01-01

The aim of this paper is to focus attention of experimenters on several sources of error that are not taken into account in the majority of bioelectromagnetics experiments, and which may lead to complete falsification of the results of the experiments.

19. Shape error analysis for reflective nano focusing optics

SciTech Connect

2010-06-23

Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields, so the converging beam at the focal spot will differ from the desired performance. The effect of these errors on focusing performance can be calculated by a wave-optical approach, considering coherent wave-field illumination of the optical elements. We have developed a wave-optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as an aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range in the high, mid and low frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak to valley) is tolerable to achieve diffraction-limited performance. It is desirable to remove shape error at very low frequencies, around 0.1 mm⁻¹, which otherwise will generate beam waist or satellite peaks. All other frequencies above this limit will not affect the focused beam profile but only cause a loss in intensity.
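A quick scalar check of the λ/4-type tolerance: for a sinusoidal wavefront ripple of peak-to-valley PV, the on-axis Strehl ratio is |⟨exp(iφ)⟩|² with φ = 2πW/λ. This is a simplified normal-incidence phase model with an assumed sinusoidal error, not the grazing-incidence Fresnel-Kirchhoff simulator of the record:

```python
import cmath
import math

def strehl(pv_in_waves, n=4096, periods=8.0):
    """On-axis Strehl ratio for a sinusoidal wavefront ripple."""
    amp = math.pi * pv_in_waves      # phase amplitude = (2*pi/lambda)*(PV/2)
    acc = 0.0 + 0.0j
    for k in range(n):
        x = k / n
        phi = amp * math.sin(2.0 * math.pi * periods * x)
        acc += cmath.exp(1j * phi)
    return abs(acc / n) ** 2

for pv in (0.0, 0.25, 0.5):
    print(f"PV = lambda*{pv:4.2f}  ->  Strehl ~ {strehl(pv):.3f}")
```

A λ/4 PV ripple keeps the Strehl ratio above ~0.7 in this model, while λ/2 degrades it sharply, consistent with the usual diffraction-limited tolerance picture.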

20. Analysis of Errors Made by Students Solving Genetics Problems.

ERIC Educational Resources Information Center

Costello, Sandra Judith

The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…

1. Oral Definitions of Newly Learned Words: An Error Analysis

ERIC Educational Resources Information Center

Steele, Sara C.

2012-01-01

This study examined and compared patterns of errors in the oral definitions of newly learned words. Fifteen 9- to 11-year-old children with language learning disability (LLD) and 15 typically developing age-matched peers inferred the meanings of 20 nonsense words from four novel reading passages. After reading, children provided oral definitions…

ERIC Educational Resources Information Center

Abu-rabia, Salim; Taha, Haitham

2004-01-01

This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated…

3. Pitch Error Analysis of Young Piano Students' Music Reading Performances

ERIC Educational Resources Information Center

Rut Gudmundsdottir, Helga

2010-01-01

This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

4. Analysis of uncompensated phase error on automatic target recognition performance

Montagnino, Lee J.; Cassabaum, Mary L.; Halversen, Shawn D.; Rupp, Chad T.; Wagner, Gregory M.; Young, Matthew T.

2009-05-01

Performance of Automatic Target Recognition (ATR) algorithms for Synthetic Aperture Radar (SAR) systems relies heavily on the system performance and specifications of the SAR sensor. A representative multi-stage SAR ATR algorithm [1, 2] is analyzed across imagery containing phase errors in the down-range direction induced during the transmission of the radar's waveform. The degradation induced on the SAR imagery by the phase errors is measured in terms of peak phase error, Root-Mean-Square (RMS) phase error, and multiplicative noise. The ATR algorithm consists of three stages: a two-parameter CFAR, a discrimination stage to reduce false alarms, and a classification stage to identify targets in the scene. The end-to-end performance of the ATR algorithm is quantified as a function of the multiplicative noise present in the SAR imagery through Receiver Operating Characteristic (ROC) curves. Results indicate that the performance of the ATR algorithm presented is robust over a 3dB change in multiplicative noise.

5. Geometric Error Analysis in Applied Calculus Problem Solving

ERIC Educational Resources Information Center

Usman, Ahmed Ibrahim

2017-01-01

The paper investigates geometric errors students made as they tried to use their basic geometric knowledge in the solution of the Applied Calculus Optimization Problem (ACOP). Inaccuracies related to the drawing of geometric diagrams (visualization skills) and those associated with the application of basic differentiation concepts into ACOP…

7. Young Children's Mental Arithmetic Errors: A Working-Memory Analysis.

ERIC Educational Resources Information Center

Brainerd, Charles J.

1983-01-01

Presents a stochastic model for distinguishing mental arithmetic errors according to causes of failure. A series of experiments (1) studied questions of goodness of fit and model validity among four and five year olds and (2) used the model to measure the relative contributions of developmental improvements in short-term memory and arithmetical…

10. Analysis of Students' Error in Learning of Quadratic Equations

ERIC Educational Resources Information Center

Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

2010-01-01

The purpose of the study was to determine the students' error in learning quadratic equation. The samples were 30 form three students from a secondary school in Jambi, Indonesia. Diagnostic test was used as the instrument of this study that included three components: factorization, completing the square and quadratic formula. Diagnostic interview…

11. Error analysis of subaperture processing in 1-D ultrasound arrays.

PubMed

Zhao, Kang-Qiao; Bjåstad, Tore Gruner; Kristoffersen, Kjell

2015-04-01

To simplify the medical ultrasound system and reduce the cost, several techniques have been proposed to reduce the interconnections between the ultrasound probe and the back-end console. Among them, subaperture processing (SAP) is the most straightforward approach and is widely used in commercial products. This paper reviews the most important error sources of SAP, such as static focusing, delay quantization, linear delay profile, and coarse apodization, and the impacts introduced by these errors are shown. We propose to use main lobe coherence loss as a simple classification of the quality of the beam profile for a given design. This figure-of-merit (FoM) is evaluated by simulations with a 1-D ultrasound subaperture array setup. The analytical expressions and the coherence loss can work as a quick guideline in subaperture design by equalizing the merit degradations from different error sources, as well as minimizing the average or maximum loss over ranges. For the evaluated 1-D array example, a good balance between errors and cost was achieved using a subaperture size of 5 elements, focus at 40 mm range, and a delay quantization step corresponding to a phase of π/4.
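One of the SAP error sources named above, delay quantization, can be sketched directly. After focusing, the residual per-element phase is roughly uniform in [-q/2, q/2] for a quantization step of q radians, and the mainlobe coherent-sum loss is approximately E[cos(phase)] = sin(q/2)/(q/2). The 64-element setup and random ideal delays below are illustrative assumptions, not the paper's array model:

```python
import math
import random

random.seed(7)

N_ELEM = 64

def coherence_loss(q, trials=200):
    """Average mainlobe amplitude loss for quantization step q (radians)."""
    total = 0.0
    for _ in range(trials):
        re = im = 0.0
        for _ in range(N_ELEM):
            ideal = random.uniform(0.0, 2.0 * math.pi)
            quantized = q * round(ideal / q)
            resid = ideal - quantized          # residual phase error
            re += math.cos(resid)
            im += math.sin(resid)
        total += math.hypot(re, im) / N_ELEM
    return total / trials

q = math.pi / 4                      # phase step of pi/4, as in the record
print(f"loss = {coherence_loss(q):.4f}  (theory ~ {math.sin(q/2)/(q/2):.4f})")
```

For a π/4 phase step, the loss is only about 2-3%, which is consistent with that step being an acceptable design point in the record's trade-off.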

12. Analysis of Children's Errors in Comprehension and Expression

ERIC Educational Resources Information Center

Hatcher, Ryan C.; Breaux, Kristina C.; Liu, Xiaochen; Bray, Melissa A.; Ottone-Cross, Karen L.; Courville, Troy; Luria, Sarah R.; Langley, Susan Dulong

2017-01-01

Children's oral language skills typically begin to develop sooner than their written language skills; however, the four language systems (listening, speaking, reading, and writing) then develop concurrently as integrated strands that influence one another. This research explored relationships between students' errors in language comprehension of…

14. Numerical Analysis vs. Mathematics: Modern mathematics often does not deal with the practical problems which face numerical analysis.

PubMed

Hamming, R W

1965-04-23

I hope I have shown not that mathematicians are incompetent or wrong, but why I believe that their interests, tastes, and objectives are frequently different from those of practicing numerical analysts, and why activity in numerical analysis should be evaluated by its own standards and not by those of pure mathematics. I hope I have also shown you that much of the "art form" of mathematics consists of delicate, "noise-free" results, while many areas of applied mathematics, especially numerical analysis, are dominated by noise. Again, in computing the process is fundamental, and rigorous mathematical proofs are often meaningless in computing situations. Finally, in numerical analysis, as in engineering, choosing the right model is more important than choosing the model with the elegant mathematics.

15. Reduced wavefront reconstruction mean square error using optimal priors: algebraic analysis and simulations

Béchet, C.; Tallon, M.; Thiébaut, E.

2008-07-01

The turbulent wavefront reconstruction step in an adaptive optics system is an inverse problem. The Mean-Square Error (MSE) assessing the reconstruction quality is made of two terms, often called bias and variance. The latter is also commonly referred as the noise propagation. The aim of this paper is to investigate the evolution of these two error contributions when the number of parameters to be estimated becomes of the order of 10 4. Such dimensions are expected for the adaptive optics systems on the Extremely Large Telescopes. We provide an algebraic formalism to compare the MSE of Maximum Likelihood and Maximum A Posteriori linear reconstructors. A Generalized Singular Value Decomposition applied on the reconstructors theoretically enhances the differences between zonal and modal approaches, and demonstrates the gain in using Maximum A Posteriori method. Thanks to numerical simulations, we quantitatively study the evolution of the MSE contributions with respect to the pupil shape, to the outer scale of the turbulence, to the number of actuators and to the signal-to-noise ratio. Simulations results are consistent with previous noise propagation studies and with our algebraic analysis. Finally, using the Fractal Iterative Method as a Maximum A Posteriori reconstruction algorithm in our simulations, we demonstrate a possible reduction of the MSE of a factor 2 in large adaptive optics systems, for low signal-to-noise ratio.
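The bias/variance trade-off between Maximum Likelihood and Maximum A Posteriori estimation can be shown in a scalar toy version (an illustrative sketch, not the paper's ~10⁴-parameter GSVD analysis): estimate x from y = x + n with prior x ~ N(0, sx²) and noise n ~ N(0, sn²). ML uses y directly (zero bias, full noise propagation); MAP shrinks by the Wiener factor sx²/(sx²+sn²) (some bias, less variance):

```python
import random

random.seed(3)

SX, SN = 1.0, 1.0          # prior and noise SDs (low signal-to-noise ratio)
SHRINK = SX**2 / (SX**2 + SN**2)   # MAP / Wiener shrinkage factor

def empirical_mse(n=50000):
    """Monte Carlo MSE of the ML and MAP estimators."""
    ml = map_ = 0.0
    for _ in range(n):
        x = random.gauss(0.0, SX)
        y = x + random.gauss(0.0, SN)
        ml += (y - x) ** 2          # ML estimate is y itself
        map_ += (SHRINK * y - x) ** 2
    return ml / n, map_ / n

mse_ml, mse_map = empirical_mse()
print(f"ML MSE ~ {mse_ml:.3f}   MAP MSE ~ {mse_map:.3f}")
```

At this SNR the MAP estimator halves the MSE, the same factor-2 gain the record reports for large adaptive optics systems at low signal-to-noise ratio.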

16. Study on error analysis and accuracy improvement for aspheric profile measurement

Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou

2017-06-01

Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach to measure axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point will be located, producing significantly incorrect surface errors. This paper studies the simulated results for an asphere with rotational angles around the X-axis and Y-axis, and stylus tip shifts in the X, Y and Z directions. Experimental results show that the same absolute value of rotational error around the X-axis causes the same profile errors, while different rotational errors around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the larger the peak-to-valley value of the profile errors. To identify the rotational angles about the X-axis and Y-axis, algorithms are performed to analyze each of them in turn. The actual profile errors are then calculated from multiple profile measurements around the X-axis according to the proposed analysis flow chart; the aim of the multiple-measurement strategy is to reach the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces, avoiding both X-axis and Y-axis rotational errors. Finally, a measurement strategy for aspheric surfaces is presented systematically.

17. Using Online Error Analysis Items to Support Preservice Teachers' Pedagogical Content Knowledge in Mathematics

ERIC Educational Resources Information Center

McGuire, Patrick

2013-01-01

This article describes how a free, web-based intelligent tutoring system, (ASSISTment), was used to create online error analysis items for preservice elementary and secondary mathematics teachers. The online error analysis items challenged preservice teachers to analyze, diagnose, and provide targeted instructional remediation intended to help…

18. Quantitative analysis of numerical solvers for oscillatory biomolecular system models

PubMed Central

Quo, Chang F; Wang, May D

2008-01-01

Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges from 10-15 to 1010, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluation, partial derivative, LU decomposition, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with quantitative performance assessment metric, we show that it is possible
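The stiffness issue driving the solver choices above can be illustrated with a minimal, standard-library sketch using a linear test problem instead of the Oregonator (which needs a real stiff integrator): y' = -1000(y - cos t) - sin t, whose exact solution with y(0) = 1 is y = cos t. At step h = 0.01 the explicit Euler method is far outside its stability region (|1 - 1000h| = 9), while implicit (backward) Euler, solvable in closed form for this linear ODE, stays stable:

```python
import math

H, STEPS = 0.01, 100      # integrate to t = 1

def explicit_euler():
    y = 1.0
    for n in range(STEPS):
        t = n * H
        y = y + H * (-1000.0 * (y - math.cos(t)) - math.sin(t))
        if abs(y) > 1e6:   # stop once the instability is obvious
            return y
    return y

def implicit_euler():
    y = 1.0
    for n in range(STEPS):
        t1 = (n + 1) * H
        # backward Euler step solved exactly for this linear ODE
        y = (y + H * (1000.0 * math.cos(t1) - math.sin(t1))) / (1.0 + 1000.0 * H)
    return y

print(f"explicit Euler: |y| grew to {abs(explicit_euler()):.2e}")
print(f"implicit Euler: y(1) = {implicit_euler():.6f}  (exact {math.cos(1.0):.6f})")
```

The explicit scheme's error is amplified roughly ninefold per step and blows up within a few dozen steps, while the implicit scheme tracks cos t closely at the same step size, which is why stiff problems call for implicit methods such as ode15s in the record's MATLAB comparison.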

19. Palaeomagnetic analysis of plunging fold structures: Errors and a simple fold test

Stewart, Simon A.

1995-02-01

The conventional corrections for bedding dip in palaeomagnetic studies involve untilting either about strike or about some inclined axis; the choice is usually governed by the perceived fold hinge orientation. While it has been recognised that untilting bedding about strike can be erroneous if the beds lie within plunging fold structures, there are several types of fold which have plunging hinges but whose limbs have rotated about horizontal axes. Examples are interference structures and forced folds; restoration about inclined axes may be incorrect in these cases. The angular errors imposed upon palaeomagnetic lineation data by the wrong choice of rotation axis during unfolding are calculated here and presented for lineations in any orientation which could be associated with an upright, symmetrical fold. This extends previous analyses, which were relevant only to bedding-parallel lineations, to palaeomagnetic data in general. This numerical analysis highlights the influence of the various parameters which describe fold geometry and relative lineation orientation upon the angular error imparted to lineation data by the wrong unfolding method. The effect of each parameter is described, and the interaction of the parameters in producing the final error is discussed. Structural and palaeomagnetic data are cited from two field examples of fold structures which illustrate the alternative kinematic histories. Both are from thin-skinned thrust belts, but the data show that one is a true plunging fold, formed by rotation about its inclined hinge, whereas the other is an interference structure produced by rotation of the limbs about non-parallel horizontal axes. Since the angle between the palaeomagnetic lineations and the inclined fold hinge is equal on both limbs in the former type of structure, but varies from limb to limb in the latter, a simple test can be defined which uses palaeomagnetic lineation data to identify rotation axes and hence fold type. This test can use pre- or syn

20. Diagnosing non-Gaussianity of forecast and analysis errors in a convective scale model

Legrand, R.; Michel, Y.; Montmerle, T.

2015-07-01

In numerical weather prediction, the problem of estimating initial conditions is usually based on a Bayesian framework. Two common derivations respectively lead to the Kalman filter and to variational approaches. They rely on either assumptions of linearity or assumptions of Gaussianity of the probability density functions of both observation and background errors. In practice, linearity and Gaussianity of errors are tied to one another, in the sense that a nonlinear model will yield non-Gaussian probability density functions, and that standard methods may perform poorly in the context of non-Gaussian probability density functions. This study aims to describe some aspects of non-Gaussianity of forecast and analysis errors in a convective scale model using a Monte-Carlo approach based on an ensemble of data assimilations. For this purpose, an ensemble of 90 members of cycled perturbed assimilations has been run over a highly precipitating case of interest. Non-Gaussianity is measured using the K2-statistics from the D'Agostino test, which is related to the sum of the squares of univariate skewness and kurtosis. Results confirm that specific humidity is the least Gaussian variable according to that measure, and also that non-Gaussianity is generally more pronounced in the boundary layer and in cloudy areas. The mass control variables used in our data assimilation, namely vorticity and divergence, also show distinct non-Gaussian behavior. It is shown that while non-Gaussianity increases with forecast lead time, it is efficiently reduced by the data assimilation step especially in areas well covered by observations. Our findings may have implication for the choice of the control variables.
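
The D'Agostino K2 statistic used in this record is available as `scipy.stats.normaltest`, which sums the squared z-scores of sample skewness and kurtosis; a minimal sketch on synthetic data (not the ensemble fields of the study):

```python
import numpy as np
from scipy.stats import normaltest  # D'Agostino-Pearson K^2 test

rng = np.random.default_rng(0)
gaussian_sample = rng.normal(size=3000)
skewed_sample = rng.exponential(size=3000)  # stand-in for a non-Gaussian variable

# normaltest returns (K^2 statistic, p-value); large K^2 -> non-Gaussian
k2_gauss, p_gauss = normaltest(gaussian_sample)
k2_skew, p_skew = normaltest(skewed_sample)
```

A strongly skewed sample (like the humidity errors described above) yields a much larger K2 than a Gaussian one.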

1. Simultaneous control of error rates in fMRI data analysis.

PubMed

Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

2015-12-01

The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of that rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations showing that the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive-control-related activation in the prefrontal cortex of the human brain.
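
The voxel-wise likelihood-ratio idea can be illustrated with simple two-point hypotheses per voxel; all numbers below (effect size, threshold k = 32, voxel counts) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Per-voxel two-point hypotheses: H0: mu = 0 vs H1: mu = mu1, iid Gaussian
# noise with known sigma. All numbers here are illustrative assumptions.
rng = np.random.default_rng(1)
n, sigma, mu1 = 50, 1.0, 0.5
k = 32.0  # likelihood-ratio benchmark for "strong evidence"

def likelihood_ratio(x):
    # LR = L(mu1) / L(0) for a sample x of length n
    return np.exp((mu1 * x.sum() - n * mu1 ** 2 / 2.0) / sigma ** 2)

null_voxels = rng.normal(0.0, sigma, size=(10000, n))   # inactive voxels
active_voxels = rng.normal(mu1, sigma, size=(50, n))    # truly active voxels

lr_null = np.array([likelihood_ratio(v) for v in null_voxels])
lr_act = np.array([likelihood_ratio(v) for v in active_voxels])

false_pos_rate = np.mean(lr_null >= k)  # per-comparison Type I error stays tiny
true_pos_rate = np.mean(lr_act >= k)    # while useful power is retained
```

As n grows, both error rates shrink together, which is the behavior the abstract describes.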

2. Error analysis in stereo vision for location measurement of 3D point

Li, Yunting; Zhang, Jun; Tian, Jinwen

2015-12-01

Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model, calculating the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method of estimating the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we return to the error propagation of the primitive input errors in the stereo system and trace the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
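
A minimal sketch of the midpoint method the abstract refers to, with Monte Carlo propagation of direction errors standing in for the paper's analytic treatment; the camera geometry and noise level are assumed:

```python
import numpy as np

def midpoint_triangulate(p1, d1, p2, d2):
    """Midpoint method: given two rays p_i + t*d_i (camera center, viewing
    direction), return the midpoint of their common perpendicular segment."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # vanishes only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

def unit(v):
    return v / np.linalg.norm(v)

# hypothetical two-camera setup: centers 1 m apart, target ~2 m away
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
X_true = np.array([0.3, 0.2, 2.0])

# Monte Carlo propagation of small angular errors in the viewing directions
rng = np.random.default_rng(0)
sigma = 1e-3  # ~1 mrad direction noise (assumed)
cloud = np.array([
    midpoint_triangulate(p1, unit(unit(X_true - p1) + rng.normal(0, sigma, 3)),
                         p2, unit(unit(X_true - p2) + rng.normal(0, sigma, 3)))
    for _ in range(2000)])
cov = np.cov(cloud.T)  # covariance of the 3D location = uncertainty region
```

The covariance is anisotropic: for this narrow-baseline geometry the depth (z) variance exceeds the lateral variance, which is the usual stereo behavior.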

3. Error Analysis of Reaction Wheel Speed Detection Methods

Oh, Shi-Hwan; Lee, Hye-Jin; Lee, Seon-Ho; Yong, Ki-Lyuk

2008-12-01

A reaction wheel is one of the actuators for spacecraft attitude control; it generates torque by changing the speed of an inertial rotor inside the wheel. In order to generate the required torque accurately and to estimate an accurate angular momentum, the wheel speed should be measured as close to the actual speed as possible. In this study, two conventional speed detection methods for a high-speed motor with digital tacho pulses (the elapsed-time method and the pulse-count method) and their resolutions are analyzed. For satellite attitude maneuvering and control, the reaction wheel must operate bi-directionally, and low-speed operation is sometimes needed in emergency cases. Thus the bias error at low speed under constant acceleration (or deceleration) is also analyzed. The analysis shows that the speed detection error of the elapsed-time method is dominated by the high-speed clock frequency at high speed and by the number of tacho pulses used in the elapsed-time calculation at low speed.
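
The two detection methods can be compared with a back-of-the-envelope quantization model; the clock frequency, pulse count, and gate time below are illustrative, not flight values:

```python
import math

F_CLK = 1.0e6   # high-speed clock frequency [Hz] (elapsed-time method, assumed)
P = 100         # tacho pulses per revolution (assumed)
T_GATE = 0.1    # counting gate time [s] (pulse-count method, assumed)

def elapsed_time_error(omega):
    """Worst-case speed error [rad/s] of the elapsed-time method.

    The time between pulses t_e = 2*pi/(P*omega) is quantized to 1/F_CLK,
    so the speed error ~ omega**2 * P / (2*pi*F_CLK): it grows with speed."""
    return omega ** 2 * P / (2 * math.pi * F_CLK)

def pulse_count_error(omega):
    """Worst-case speed error [rad/s] of the pulse-count method.

    A one-count ambiguity in the gate gives a constant absolute resolution,
    so the *relative* error blows up at low speed."""
    return 2 * math.pi / (P * T_GATE)
```

With these numbers the elapsed-time method wins at low speed and the pulse-count method at high speed, matching the trade-off described in the abstract.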

4. Digital floodplain mapping and an analysis of errors involved

USGS Publications Warehouse

Hamblen, C.S.; Soong, D.T.; Cai, X.

2007-01-01

Mapping floodplain boundaries using geographic information systems (GIS) and digital elevation models (DEMs) was completed in a recent study. However convenient this method may appear at first, the resulting maps can potentially have unaccounted-for errors. Mapping the floodplain using GIS is faster than mapping manually, and digital mapping is expected to be more common in the future. When mapping is done manually, the experience and judgment of the engineer or geographer completing the mapping and the contour resolution of the surface topography are critical in determining the floodplain and floodway boundaries between cross sections. When mapping is done digitally, discrepancies can result from the use of the computing algorithm and digital topographic datasets. Understanding the possible sources of error and how the error accumulates through these processes is necessary for the validation of automated digital mapping. This study evaluates the procedure of floodplain mapping using GIS and a 3 m by 3 m resolution DEM, with a focus on the accumulated errors involved in the process. Within the GIS environment of this mapping method, the procedural steps of most interest, initially, include: (1) the accurate spatial representation of the stream centerline and cross sections, (2) properly using a triangulated irregular network (TIN) model for the flood elevations of the studied cross sections, the interpolated elevations between them, and the extrapolated flood elevations beyond the cross sections, and (3) the comparison of the flood elevation TIN with the ground elevation DEM, from which the appropriate inundation boundaries are delineated. The study area involved is of relatively low topographic relief, thereby making it representative of common suburban development and a prime setting for the need for accurately mapped floodplains. This paper emphasizes the impacts of integrating supplemental digital terrain data between cross sections on floodplain delineation

5. Probability analysis of position errors using uncooled IR stereo camera

Oh, Jun Ho; Lee, Sang Hwa; Lee, Boo Hwan; Park, Jong-Il

2016-05-01

This paper analyzes the random behavior of 3D positions when tracking moving objects with an infrared (IR) stereo camera, and proposes a probability model of the 3D positions. The proposed probability model integrates two random error phenomena. One is the pixel quantization error, which is caused by the discrete sampling pixels used in estimating disparity values with a stereo camera. The other is the timing jitter, which results from the irregular acquisition timing of uncooled IR cameras. This paper derives a probability distribution function by combining the jitter model with the pixel quantization error. To verify the proposed probability function of 3D positions, experiments on tracking fast moving objects are performed using an IR stereo camera system. The 3D depths of a moving object are estimated by stereo matching and compared with the ground truth obtained by a laser scanner system. According to the experiments, the 3D depths of the moving object are estimated within a statistically reliable range that is well described by the proposed probability distribution. It is expected that the proposed probability model of 3D positions can be applied to various IR stereo camera systems that deal with fast moving objects.
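
The pixel quantization part of such a model can be illustrated with the standard depth-from-disparity relation Z = fB/d; the focal length and baseline below are assumed values:

```python
f_px = 1000.0   # focal length in pixels (assumed)
B = 0.1         # stereo baseline in meters (assumed)

def depth_interval(Z):
    """Depth interval consistent with a +-0.5 px disparity quantization
    around the true disparity d = f_px * B / Z."""
    d = f_px * B / Z
    return f_px * B / (d + 0.5), f_px * B / (d - 0.5)

lo5, hi5 = depth_interval(5.0)
lo10, hi10 = depth_interval(10.0)
w5, w10 = hi5 - lo5, hi10 - lo10   # interval widths grow roughly as Z^2
```

Doubling the range roughly quadruples the quantization-induced depth uncertainty, which is why the random spread of 3D positions matters most for distant, fast targets.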

6. Calculating Internal Avalanche Velocities From Correlation With Error Analysis.

McElwaine, J. N.; Tiefenbacher, F.

Velocities inside avalanches have been calculated for many years by calculating the cross-correlation between light-sensitive sensors using a method pioneered by Dent. His approach has been widely adopted but suffers from four shortcomings. (i) Correlations are studied between pairs of sensors rather than between all sensors simultaneously. This can result in inconsistent velocities and does not extract the maximum information from the data. (ii) The longer the time that the correlations are taken over, the better the noise rejection, but errors due to non-constant velocity increase. (iii) The errors are hard to quantify. (iv) The calculated velocities are usually widely scattered and discontinuous. A new approach is described that produces a continuous velocity field from any number of sensors at arbitrary locations. The method is based on a variational principle that reconstructs the underlying signal as it is advected past the sensors and enforces differentiability on the velocity. The errors in the method are quantified and applied to the problem of optimal sensor positioning and design. Results on SLF data from chute experiments are discussed.
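
The pairwise approach that the abstract criticizes amounts to locating the cross-correlation peak between two sensors; a minimal synthetic sketch (sensor spacing and speed assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 1000.0          # sampling rate [Hz] (assumed)
L = 0.1              # sensor separation [m] (assumed)
v_true = 2.0         # advection speed [m/s]
delay = int(round(L / v_true * fs))   # 50 samples of transit time

# smooth synthetic "brightness" signal advected past two sensors
raw = np.convolve(rng.normal(size=4096 + delay), np.ones(25) / 25.0, mode="same")
s_up = raw[delay:delay + 4096] + 0.01 * rng.normal(size=4096)   # upstream
s_down = raw[:4096] + 0.01 * rng.normal(size=4096)              # delayed copy

# lag of the cross-correlation peak gives the transit time, hence the speed
c = np.correlate(s_down - s_down.mean(), s_up - s_up.mean(), mode="full")
lag = int(np.argmax(c)) - (len(s_up) - 1)
v_est = L / (lag / fs)
```

The shortcomings listed above show up immediately in practice: the estimate is a single constant velocity over the correlation window, and its scatter is hard to quantify, which is what motivates the variational reconstruction.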

7. PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.

SciTech Connect

PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.

1999-03-29

All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials have been stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.

8. Numerical analysis of an H1-Galerkin mixed finite element method for time fractional telegraph equation.

PubMed

Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

2014-01-01

We discuss and analyze an H(1)-Galerkin mixed finite element (H(1)-GMFE) method to find the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation to a lower-order coupled system and then formulate an H(1)-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using finite difference methods and approximate the spatial direction by applying the H(1)-GMFE method. Based on the theoretical error analysis in the L(2)-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H(1)-norm. Moreover, we derive and analyze the stability of the H(1)-GMFE scheme and give a priori error estimates in the two- and three-dimensional cases. In order to verify our theoretical analysis, we give some numerical results computed with MATLAB.
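
A common finite-difference discretization of the Caputo derivative of order 0 < alpha < 1 is the L1 scheme; the sketch below is a generic version, not necessarily the scheme used in the paper:

```python
import math
import numpy as np

def caputo_l1(u, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1).

    u holds samples u(t_0), ..., u(t_N) on a uniform grid t_n = n*dt;
    returns the approximate derivative at t_1, ..., t_N."""
    N = len(u) - 1
    k = np.arange(N)
    b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)   # L1 weights b_k
    coef = dt ** (-alpha) / math.gamma(2.0 - alpha)
    du = np.diff(u)                                        # u_{j+1} - u_j
    out = np.empty(N)
    for n in range(1, N + 1):
        # D^alpha u(t_n) ~ coef * sum_{j=0}^{n-1} b_{n-1-j} * (u_{j+1} - u_j)
        out[n - 1] = coef * np.dot(b[:n][::-1], du[:n])
    return out
```

The L1 scheme is exact for linear functions, which makes u(t) = t a convenient correctness check: its Caputo derivative is t^(1-alpha)/Gamma(2-alpha).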

10. Analysis of instrumentation error effects on the identification accuracy of aircraft parameters

NASA Technical Reports Server (NTRS)

Sorensen, J. A.

1972-01-01

An analytical investigation is presented of the effect of unmodeled measurement system errors on the accuracy of aircraft stability and control derivatives identified from flight test data. Such error sources include biases, scale factor errors, instrument position errors, misalignments, and instrument dynamics. Two techniques (ensemble analysis and simulated data analysis) are formulated to determine the quantitative variations in the identified parameters resulting from the unmodeled instrumentation errors. The parameter accuracy that would result from flight tests of the F-4C aircraft with typical quality instrumentation is determined using these techniques. It is shown that unmodeled instrument errors can greatly increase the uncertainty in the value of the identified parameters. General recommendations are made of procedures to be followed to ensure that the measurement system associated with identifying stability and control derivatives from flight test provides sufficient accuracy.

11. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

NASA Technical Reports Server (NTRS)

Alexander, Tiffaney Miller

2017-01-01

Research results have shown that more than half of aviation, aerospace, and aeronautics mishap incidents are attributed to human error. As part of Quality within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and classified in order to manage human error. This presentation provides a framework and methodology, using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as analysis tools, to identify contributing factors and their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

12. Errors in logic and statistics plague a meta-analysis

USDA-ARS?s Scientific Manuscript database

The non-target effects of transgenic insecticidal crops have been a topic of debate for over a decade, and many laboratory and field studies have addressed the issue in numerous countries. In 2009 Lovei et al. (Transgenic Insecticidal Crops and Natural Enemies: A Detailed Review of Laboratory Studies)...

13. Quantification of uncertainties in OCO-2 measurements of XCO2: simulations and linear error analysis

Connor, Brian; Bösch, Hartmut; McDuffie, James; Taylor, Tommy; Fu, Dejian; Frankenberg, Christian; O'Dell, Chris; Payne, Vivienne H.; Gunson, Michael; Pollock, Randy; Hobbs, Jonathan; Oyafuso, Fabiano; Jiang, Yibo

2016-10-01

We present an analysis of uncertainties in global measurements of the column averaged dry-air mole fraction of CO2 (XCO2) by the NASA Orbiting Carbon Observatory-2 (OCO-2). The analysis is based on our best estimates for uncertainties in the OCO-2 operational algorithm and its inputs, and uses simulated spectra calculated for the actual flight and sounding geometry, with measured atmospheric analyses. The simulations are calculated for land nadir and ocean glint observations. We include errors in measurement, smoothing, interference, and forward model parameters. All types of error are combined to estimate the uncertainty in XCO2 from single soundings, before any attempt at bias correction has been made. From these results we also estimate the "variable error" which differs between soundings, to infer the error in the difference of XCO2 between any two soundings. The most important error sources are aerosol interference, spectroscopy, and instrument calibration. Aerosol is the largest source of variable error. Spectroscopy and calibration, although they are themselves fixed error sources, also produce important variable errors in XCO2. Net variable errors are usually < 1 ppm over ocean and ~0.5-2.0 ppm over land. The total error due to all sources is ~1.5-3.5 ppm over land and ~1.5-2.5 ppm over ocean.

14. A string of mistakes: the importance of cascade analysis in describing, counting, and preventing medical errors.

PubMed

Woolf, Steven H; Kuzel, Anton J; Dovey, Susan M; Phillips, Robert L

2004-01-01

Notions about the most common errors in medicine currently rest on conjecture and weak epidemiologic evidence. We sought to determine whether cascade analysis is of value in clarifying the epidemiology and causes of errors and whether physician reports are sensitive to the impact of errors on patients. Eighteen US family physicians participating in a 6-country international study filed 75 anonymous error reports. The narratives were examined to identify the chain of events and the predominant proximal errors. We tabulated the consequences to patients, both reported by physicians and inferred by investigators. A chain of errors was documented in 77% of incidents. Although 83% of the errors that ultimately occurred were mistakes in treatment or diagnosis, 2 of 3 were set in motion by errors in communication. Fully 80% of the errors that initiated cascades involved informational or personal miscommunication. Examples of informational miscommunication included communication breakdowns among colleagues and with patients (44%), misinformation in the medical record (21%), mishandling of patients' requests and messages (18%), inaccessible medical records (12%), and inadequate reminder systems (5%). When asked whether the patient was harmed, physicians answered affirmatively in 43% of cases in which their narratives described harms. Psychological and emotional effects accounted for 17% of physician-reported consequences but 69% of investigator-inferred consequences. Cascade analysis of physicians' error reports is helpful in understanding the precipitant chain of events, but physicians provide incomplete information about how patients are affected. Miscommunication appears to play an important role in propagating diagnostic and treatment mistakes.

15. Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.

PubMed

Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M

2012-09-13

Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following committed errors in reaction time tasks as low frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy controls completed a forced-choice task. Single trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. A comparison of the amplitude and latency of the time-frequency ERN and Pe between the PTSD and control groups was performed. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders.

16. Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors

DTIC Science & Technology

2017-05-01

Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors. Timothy M. Yarnall, David J. Geisler, Curt M. Schieler... Massachusetts Avenue, Cambridge, MA 02139, USA. Abstract: Free-space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than... these errors. I. INTRODUCTION. Photonic lanterns provide a means for transitioning from the free-space regime to the single-mode fiber (SMF) regime by

17. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

NASA Technical Reports Server (NTRS)

LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

2011-01-01

This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

18. Quantitative analysis of errors in fractionated stereotactic radiotherapy.

PubMed

Choi, D R; Kim, D Y; Ahn, Y C; Huh, S J; Yeo, I J; Nam, D H; Lee, J I; Park, K; Kim, J H

2001-01-01

Fractionated stereotactic radiotherapy (FSRT) offers a technique to minimize the absorbed dose to normal tissues; therefore, quality assurance is essential for these procedures. In this study, quality assurance for FSRT in 58 cases treated between August 1995 and August 1997 is described, and the errors for each step and the overall accuracy are estimated. Some of the important items for FSRT procedures are: accuracy of CT localization, transferred-image distortion, laser alignment, isocentric accuracy of the linear accelerator, head-frame movement, portal verification, and various human errors. A geometric phantom with known coordinates was used to estimate the accuracy of CT localization. A treatment planning computer was used to check the transferred-image distortion. The mechanical isocenter standard (MIS), rectilinear phantom pointer (RLPP), and laser target localizer frame (LTLF) were used for laser alignment and setting of target coordinates. A head-frame stability check was performed with a depth confirmation helmet (DCH). A film test was done to check isocentric accuracy and portal verification. All measured data for the 58 patients were recorded and analyzed for each item. 4-MV x-rays from a linear accelerator were used for FSRT, along with homemade circular cones with diameters from 20 to 70 mm (interval: 5 mm). The accuracy of CT localization was 1.2+/-0.5 mm. The isocentric accuracy of the linear accelerator, including laser alignment, was 0.5+/-0.2 mm. The reproducibility of the head frame was 1.1+/-0.6 mm. The overall accuracy was 1.7+/-0.7 mm, excluding human errors.

19. An Analysis of Estimating Errors on Government Contracts.

DTIC Science & Technology

2014-09-26

...of the cost of each project, using the same plans and specifications as the bidders. This estimate is opened at the same time as the rest of the bids... significantly reduces the variability of estimating error. Time was used by setting 1 January 1982 as zero time. Thus, a data value of 1.2 represents 14 March... the data set. These projects are 20-40 times the magnitude of the mean project size and

20. Numerical and semiclassical analysis of some generalized Casimir pistons

SciTech Connect

2009-05-15

The Casimir force due to a scalar field in a cylinder of radius r with a spherical cap of radius R>r is computed numerically in the world-line approach. A geometrical subtraction scheme gives the finite interaction energy that determines the Casimir force. The spectral function of convex domains is obtained from a probability measure on convex surfaces that is induced by the Wiener measure on the Brownian bridges the convex surfaces are the hulls of. Due to reflection positivity, the vacuum force on the piston by a scalar field satisfying Dirichlet boundary conditions is attractive in these geometries, but the strength and short-distance behavior of the force depend strongly on the shape of the piston casing. For a cylindrical casing with a hemispherical head, the force on the piston at small piston elevation a << r does not depend on the dimension of the casing and numerically approaches F_cas(a << r)... numerical results for the small-distance behavior of the force within statistical errors, whereas the proximity force approximation is off by one order of magnitude when R ≈ r.

1. Teaching Numerical Integration in a Revitalized Calculus.

ERIC Educational Resources Information Center

Fay, Temple H.

1990-01-01

Described is an approach to the derivation of numerical integration formulas. Students develop their own formulas using polynomial interpolation and determine error estimates. The Newton-Cotes formulas and error analysis are reviewed. (KR)
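
The flavor of the error analysis described here can be reproduced by comparing two Newton-Cotes rules against a known integral; for f(x) = e^x on [0, 1] the composite trapezoid error is bounded by (b-a)h^2 max|f''|/12, and halving h should cut it roughly fourfold:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

exact = np.e - 1.0                       # integral of e^x over [0, 1]
err_t10 = abs(trapezoid(np.exp, 0, 1, 10) - exact)
err_t20 = abs(trapezoid(np.exp, 0, 1, 20) - exact)
err_s10 = abs(simpson(np.exp, 0, 1, 10) - exact)
```

Observing the measured errors track the theoretical O(h^2) and O(h^4) estimates is exactly the exercise the abstract describes for students.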

2. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

San, Bingbing; Yang, Qingshan; Yin, Liwei

2017-03-01

Inflatable antennas are promising candidates for future satellite communications and space observations, since they are lightweight, low-cost, and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors that affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacturing simulations. Four main random error sources are involved: errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations, and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account both the random variation and the interaction between error sources. Analyses are carried out parametrically with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (root mean square) shape error is a random quantity with an exponential probability distribution and features great dispersion; as F/D and D increase, both the mean value and standard deviation of the shape error increase; within the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect, with a much higher weight than the others; pressure variation ranks second; errors in the thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for controlling the shape accuracy of reflectors, and allowable values of the error sources are proposed from the perspective of reliability.
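
The sampling step can be sketched with `scipy.stats.qmc.LatinHypercube` (SciPy >= 1.7 assumed); the four parameter ranges below are placeholders, not the reflector values used in the study:

```python
import numpy as np
from scipy.stats import qmc  # SciPy >= 1.7 assumed

n = 100
sampler = qmc.LatinHypercube(d=4, seed=0)
unit_sample = sampler.random(n)  # (100, 4) in [0, 1): one point per 1/n stratum per dim

# placeholder ranges for the four error sources: membrane thickness [m],
# elastic modulus [Pa], boundary deviation [m], pressure [Pa] -- assumed values
l_bounds = [0.9e-4, 2.0e9, -1.0e-3, 900.0]
u_bounds = [1.1e-4, 3.0e9, 1.0e-3, 1100.0]
samples = qmc.scale(unit_sample, l_bounds, u_bounds)
```

Latin hypercube stratification guarantees each parameter range is covered evenly with far fewer runs than plain Monte Carlo, which is why it pairs well with expensive manufacture simulations.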

3. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

Scherllin-Pirscher, B.; Steiner, A. K.; Kirchengast, G.; Kuo, Y.-H.; Foelsche, U.

2011-05-01

The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS radio occultation (RO) bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location) agree within 0.3 % in bending angle, 0.1 % in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters and account for vertical, latitudinal, and seasonal variations. In the model, which spans the altitude range from 4 km to 35 km, a constant error is adopted around the tropopause region amounting to 0.8 % for bending angle, 0.35 % for refractivity, 0.15 % for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases exponentially. The observational error model is the same for UCAR and WEGC data but due to somewhat different error characteristics below about 10 km and above about 20 km some parameters have to be adjusted. Overall, the observational error model is easily applicable and adjustable to
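
The three-regime error model summarized above can be written down directly. The functional form follows the abstract (constant around the tropopause, inverse-height power-law growth below, exponential growth above), and s0 = 0.7 mirrors the quoted 0.7 K dry-temperature error; the region boundaries, power, and scale height used here are placeholder values, not the paper's fitted parameters.

```python
# Sketch of a piecewise observational-error model for RO dry temperature (K).
import math

def obs_error(z_km, s0=0.7, z_lo=10.0, z_hi=20.0, power=1.5, scale_km=7.0):
    """Modelled observational error at altitude z_km (parameters illustrative)."""
    if z_lo <= z_km <= z_hi:          # tropopause region: constant error
        return s0
    if z_km < z_lo:                   # below: inverse height power-law increase
        return s0 * (z_lo / z_km) ** power
    return s0 * math.exp((z_km - z_hi) / scale_km)  # above: exponential increase

print(round(obs_error(15.0), 2))  # 0.7 in the constant region
```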

4. Accounting for both random errors and systematic errors in uncertainty propagation analysis of computer models involving experimental measurements with Monte Carlo methods.

PubMed

Vasquez, Victor R; Whiting, Wallace B

2005-12-01

A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that deal mainly with experimental data. In these types of models (linear and nonlinear regression, and nonregression computer models) involving experimental measurements, it is a common assumption that the error sources are mainly random and independent, with no constant background errors (systematic errors). However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for the output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions for the random and systematic errors. The main objectives are to detect the error source with stochastic dominance on the uncertainty propagation and the combined effect on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has the more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in the uncertainty analysis of models that depend on experimental measurements, such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
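
A minimal sketch of the kind of Monte Carlo scheme the abstract describes, with an invented one-line "computer model" and arbitrary error magnitudes: each trial draws a shared systematic offset plus independent random noise, and the spread of the output distribution is compared with and without the systematic component.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    return 2.0 * x + 1.0          # placeholder computer model

def simulate(n_trials=5000, n_meas=20, sigma_rand=0.05, sigma_sys=0.10):
    out = np.empty(n_trials)
    for k in range(n_trials):
        bias = rng.normal(0.0, sigma_sys)                      # systematic (shared) error
        x = 1.0 + bias + rng.normal(0.0, sigma_rand, n_meas)   # random errors
        out[k] = model(x.mean())
    return np.sort(out)            # sorted values = empirical CDF support

with_sys = simulate(sigma_sys=0.10)
no_sys = simulate(sigma_sys=0.0)
# Systematic errors widen the output distribution and cannot be averaged away:
print(with_sys.std() > no_sys.std())  # True
```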

5. Error analysis of combined stereo/optical-flow passive ranging

Barniv, Yair

1991-08-01

The motion of an imaging sensor causes each imaged point of the scene to correspondingly describe a time trajectory on the image plane. The trajectories of all imaged points are reminiscent of a flow (e.g., of liquid) which is the source of the term 'optical flow'. Optical-flow ranging is a method by which the stream of two-dimensional images obtained from a forward-looking forward-moving passive sensor is used to compute depth (or range) to points in the field of view. Another well-known ranging method consists of triangulation based on stereo images obtained from at least two stationary sensors. In this paper we analyze the potential accuracies of a combined optical flow and stereo passive-ranging system in the context of helicopter nap-of-the-earth obstacle avoidance. The Cramer-Rao lower bound is developed for the combined system under the assumption of an unknown angular bias error common to both cameras of a stereo pair. It is shown that the depth accuracy degradation caused by a bias error is negligible for a combined optical-flow and stereo system as compared to a monocular optical-flow system.

6. Error analysis of combined stereo/optical-flow passive ranging

NASA Technical Reports Server (NTRS)

Barniv, Yair

1991-01-01

The motion of an imaging sensor causes each imaged point of the scene to correspondingly describe a time trajectory on the image plane. The trajectories of all imaged points are reminiscent of a flow (e.g., of liquid) which is the source of the term 'optical flow'. Optical-flow ranging is a method by which the stream of two-dimensional images obtained from a forward-looking forward-moving passive sensor is used to compute depth (or range) to points in the field of view. Another well-known ranging method consists of triangulation based on stereo images obtained from at least two stationary sensors. In this paper we analyze the potential accuracies of a combined optical flow and stereo passive-ranging system in the context of helicopter nap-of-the-earth obstacle avoidance. The Cramer-Rao lower bound is developed for the combined system under the assumption of an unknown angular bias error common to both cameras of a stereo pair. It is shown that the depth accuracy degradation caused by a bias error is negligible for a combined optical-flow and stereo system as compared to a monocular optical-flow system.

7. ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.

USGS Publications Warehouse

Rosenfield, George H.; Fitzpatrick-Lins, Katherine

1984-01-01

Summary form only given. A classification error matrix typically contains the tabulated results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the correctly classified counts, and the usual designation of classification accuracy has been the total percent correct. The nondiagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a problem of remotely sensed data concerning two photointerpreters and four categories of classification indicated that there is no significant difference in interpretation between the two photointerpreters, and that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability level. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy that have already been presented in the remote sensing literature.
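
The contingency-table treatment of an error matrix can be illustrated concretely. The 4x4 counts below are invented; the statistics computed are the standard ones named in the abstract: total percent correct, Cohen's kappa as a coefficient of agreement for the map as a whole, and a conditional kappa for an individual category.

```python
import numpy as np

# Rows = interpreter A's classes, columns = interpreter B's classes (made-up counts).
m = np.array([[50,  4,  3,  1],
              [ 5, 40,  6,  2],
              [ 2,  7, 35,  8],
              [ 1,  2,  9, 45]], dtype=float)

n = m.sum()
p_o = np.trace(m) / n                       # observed agreement (percent correct)
p_e = (m.sum(0) * m.sum(1)).sum() / n**2    # chance agreement from the margins
kappa = (p_o - p_e) / (1 - p_e)             # coefficient of agreement

def conditional_kappa(m, i):
    """Conditional coefficient of agreement for category i."""
    n = m.sum()
    p_ii = m[i, i] / n
    p_row = m[i].sum() / n
    p_col = m[:, i].sum() / n
    return (p_ii - p_row * p_col) / (p_row - p_row * p_col)

print(round(p_o, 3), round(kappa, 3))  # 0.773 0.697
```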

9. Analysis of Student Errors on Division of Fractions

Maelasari, E.; Jupri, A.

2017-02-01

This study aims to describe the types of errors that typically occur when students carry out division operations on fractions, and to describe the causes of these errors. The research used a descriptive qualitative method and involved 22 fifth-grade students at one elementary school in Kuningan, Indonesia. The results showed that students' erroneous answers were caused by applying the same procedures to both multiplication and division operations, by confusion when converting mixed fractions to common fractions, and by carelessness in calculation. From the students' written work on the fraction problems, we found that the learning method used influences student responses, and that some student responses were beyond the researchers' predictions. We conclude that the teaching method is not the only important thing that must be prepared; the teacher should also prepare predictions of students' answers to the problems that will be given in the learning process. This could be a reflection for teachers to improve and to achieve the expected learning goals.

10. Numerical analysis of eccentric orifice plate using ANSYS Fluent software

Zahariea, D.

2016-11-01

In this paper the eccentric orifice plate is qualitatively analysed in comparison with the classical concentric orifice plate from the point of view of the sedimentation tendency of solid particles in the fluid whose flow rate is measured. For this purpose, the numerical streamline patterns are compared for both orifice plates. The numerical analysis has been performed using ANSYS Fluent software. The methodology of the CFD analysis is presented: creating the 3D solid model, fluid domain extraction, meshing, boundary conditions, turbulence model, solving algorithm, convergence criterion, results, and validation. Analysing the numerical streamlines, two circumferential regions of separated flow, upstream and downstream of the orifice plate, can be clearly observed for the concentric orifice plate. The bottom parts of these regions are where solid particles could settle. For the eccentric orifice plate, on the other hand, the streamline pattern suggests that no sedimentation will occur, because there is no flow separation at the bottom of the pipe.

11. A general numerical model for wave rotor analysis

NASA Technical Reports Server (NTRS)

Paxson, Daniel W.

1992-01-01

Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.

12. A general numerical model for wave rotor analysis

Paxson, Daniel W.

1992-07-01

Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.

13. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

The American Association of Physicists in Medicine Task Group Report 43 (AAPM TG-43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze the uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and one dose MOSFET detectors† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainties associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, the phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and the Tandem effect. Absolute dose calculations for clinical use are based on the Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated by the low signal-to-noise ratio, the cable effect, and the stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm
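
For reference, the two TG-43U1 uncertainty components quoted above combine in quadrature when treated as independent:

```python
import math

type_a = 5.0   # % statistical uncertainty (Type A)
type_b = 7.0   # % systematic uncertainty (Type B)
combined = math.hypot(type_a, type_b)   # root-sum-square combination
print(round(combined, 1))  # 8.6 % combined standard uncertainty
```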

14. Analysis of measured data of human body based on error correcting frequency

Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

2014-04-01

Anthropometry is the measurement of all parts of the human body surface; the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizing, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are gathered, and the data errors are analysed by examining the error frequency and by using the analysis-of-variance method of mathematical statistics. The paper also addresses determining the accuracy of the measured data and the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors as far as possible. By analysing the measured data on the basis of error frequency, the paper provides reference elements to promote the development of the garment industry.
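
The analysis-of-variance step mentioned above can be sketched with a toy data set. The measurement values, group biases, and variances are fabricated; the point is only that a one-way ANOVA across measurers flags systematic between-group differences.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
true_waist = 72.0  # cm; hypothetical body measurement
measurer_a = true_waist + rng.normal(0.0, 0.5, 10)  # careful measurer
measurer_b = true_waist + rng.normal(2.0, 0.5, 10)  # systematic +2 cm bias
measurer_c = true_waist + rng.normal(0.0, 1.5, 10)  # sloppy, high variance

stat, p = f_oneway(measurer_a, measurer_b, measurer_c)
print(p < 0.05)  # True: ANOVA flags the between-measurer difference
```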

15. Manifest variable path analysis: potentially serious and misleading consequences due to uncorrected measurement error.

PubMed

Cole, David A; Preacher, Kristopher J

2014-06-01

Despite clear evidence that manifest variable path analysis requires highly reliable measures, path analyses with fallible measures are commonplace even in premier journals. Using fallible measures in path analysis can cause several serious problems: (a) As measurement error pervades a given data set, many path coefficients may be either over- or underestimated. (b) Extensive measurement error diminishes power and can prevent invalid models from being rejected. (c) Even a little measurement error can cause valid models to appear invalid. (d) Differential measurement error in various parts of a model can change the substantive conclusions that derive from path analysis. (e) All of these problems become increasingly serious and intractable as models become more complex. Methods to prevent and correct these problems are reviewed. The conclusion is that researchers should use more reliable measures (or correct for measurement error in the measures they do use), obtain multiple measures for use in latent variable modeling, and test simpler models containing fewer variables.
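
Point (a) above, over- or underestimation of path coefficients under measurement error, follows the classical attenuation formula; a worked example with assumed reliabilities:

```python
true_beta = 0.50           # true standardized path coefficient (assumed)
rel_x, rel_y = 0.70, 0.80  # reliabilities of the two manifest measures (assumed)

# Observed coefficient is attenuated by sqrt of the product of reliabilities:
observed = true_beta * (rel_x * rel_y) ** 0.5
# Correcting for measurement error (disattenuation) recovers the true value:
corrected = observed / (rel_x * rel_y) ** 0.5

print(round(observed, 3))   # 0.374 -- substantially underestimated
print(round(corrected, 3))  # 0.5
```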

16. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

NASA Technical Reports Server (NTRS)

Duda, David P.; Minnis, Patrick

2009-01-01

Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
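
The two verification measures used above are simple functions of a 2x2 contingency table of predicted vs. observed contrail occurrence; the counts here are invented:

```python
# 2x2 dichotomous verification table (illustrative counts).
hits, misses, false_alarms, correct_negs = 320, 80, 120, 480

n = hits + misses + false_alarms + correct_negs
pc = (hits + correct_negs) / n                                  # percent correct
hkd = hits / (hits + misses) - false_alarms / (false_alarms + correct_negs)
# HKD (Hanssen-Kuipers discriminant) = hit rate minus false alarm rate.

print(round(pc, 2), round(hkd, 2))  # 0.8 0.6
```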

17. Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students

ERIC Educational Resources Information Center

Muzangwa, Jonatan; Chifamba, Peter

2012-01-01

This paper is going to analyse errors and misconceptions in an undergraduate course in Calculus. The study will be based on a group of 10 BEd. Mathematics students at Great Zimbabwe University. Data is gathered through use of two exercises on Calculus 1 & 2. The analysis of the results from the tests showed that a majority of the errors were due…

18. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

ERIC Educational Resources Information Center

Zhu, Honglin

2010-01-01

This article is based on a survey of college students' attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

19. A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students

ERIC Educational Resources Information Center

Tizazu, Yoseph

2014-01-01

This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected for two years from students ranging from freshmen to graduating level. The sampled compositions were then coded, described, and explained using error analysis method. Both…

20. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

ERIC Educational Resources Information Center

Jennrich, Robert I.

2008-01-01

The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

1. Error Analysis of Mathematical Word Problem Solving across Students with and without Learning Disabilities

ERIC Educational Resources Information Center

Kingsdorf, Sheri; Krawec, Jennifer

2014-01-01

Solving word problems is a common area of struggle for students with learning disabilities (LD). In order for instruction to be effective, we first need to have a clear understanding of the specific errors exhibited by students with LD during problem solving. Error analysis has proven to be an effective tool in other areas of math but has had…

2. Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.

ERIC Educational Resources Information Center

Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki

2000-01-01

Describes a new component called "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The Weam can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)

3. Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative

DTIC Science & Technology

2005-05-01

correct classification rate of 71.1 percent. In summary, the quantitative analysis sensitized us to look for deeper understanding of how communication ... issues influenced diagnostic testing errors, at what point in the procedure event chain the error occurred, how and why it occurred, and why harm

4. Error analysis of mixed finite element methods for wave propagation in double negative metamaterials

Li, Jichun

2007-12-01

In this paper, we develop both semi-discrete and fully discrete mixed finite element methods for modeling wave propagation in three-dimensional double negative metamaterials. Optimal error estimates are proved for Nedelec spaces under the assumption of smooth solutions. To the best of our knowledge, this is the first error analysis obtained for Maxwell's equations when metamaterials are involved.

6. Error analysis for relay type satellite-aided search and rescue systems

NASA Technical Reports Server (NTRS)

Marini, J. W.

1979-01-01

An analysis is made of the errors in the determination of the position of an emergency transmitter in a satellite-aided search and rescue system. The satellite is assumed to be at a height of 820 km in a near-circular near polar orbit. Short data spans of four minutes or less are used. The error sources considered are measurement noise, transmitter frequency drift, ionospheric effects, and error in the assumed height of the transmitter. The errors are calculated for several different transmitter positions, data rates, and data spans. The only transmitter frequency used was 406 MHz, but the result can be scaled to different frequencies.

7. Carriage Error Identification Based on Cross-Correlation Analysis and Wavelet Transformation

PubMed Central

Mu, Donghui; Chen, Dongju; Fan, Jinwei; Wang, Xiaofeng; Zhang, Feihu

2012-01-01

This paper proposes a novel method for identifying carriage errors. A general mathematical model of a guideway system is developed based on the multi-body system method. Based on the proposed model, most error sources in the guideway system can be measured. The flatness of a workpiece measured by the PGI1240 profilometer is represented by a wavelet. Cross-correlation analysis is performed to identify the carriage error source. The error model is developed based on experimental results on the low-frequency components of the signals. With the use of wavelets, the identification precision of the test signals is very high. PMID:23012558

8. Analysis of remote sensing errors of omission and commission under FTP conditions

SciTech Connect

Stephens, R.D.; Cadle, S.H.; Qian, T.Z.

1996-06-01

Second-by-second modal emissions data from a 73-vehicle fleet of 1990 and 1991 light duty cars and trucks driven on the Federal Test Procedure (FTP) driving cycle were examined to determine remote sensing errors of commission in identifying high emissions vehicles. Results are combined with a similar analysis of errors of omission based on modal FTP data from high emissions vehicles. Extremely low errors of commission combined with modest errors of omission indicate that remote sensing should be very effective in isolating high CO and HC emitting vehicles in a fleet of late model vehicles on the road. 13 refs., 5 figs., 6 tabs.
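
The two error rates analyzed above are straightforward ratios from a 2x2 table of remote-sensing verdicts against FTP truth; the counts below are illustrative only:

```python
# Illustrative counts: remote-sensing verdict vs. FTP-determined truth.
high_flagged = 45    # true high emitters correctly flagged
high_missed = 15     # errors of omission: high emitters passed
clean_flagged = 2    # errors of commission: clean vehicles flagged
clean_passed = 938

omission_rate = high_missed / (high_flagged + high_missed)
commission_rate = clean_flagged / (clean_flagged + clean_passed)
print(round(omission_rate, 3), round(commission_rate, 3))  # 0.25 0.002
```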

9. Analysis of Remote Sensing Errors of Omission and Commission Under FTP Conditions.

PubMed

Stephens, Robert D; Cadle, Steven H; Qian, Tim Z

1996-06-01

Second-by-second modal emissions data from a 73-vehicle fleet of 1990 and 1991 light duty cars and trucks driven on the Federal Test Procedure (FTP) driving cycle were examined to determine remote sensing errors of commission in identifying high emissions vehicles. Results are combined with a similar analysis of errors of omission based on modal FTP data from high emissions vehicles. Extremely low errors of commission combined with modest errors of omission indicate that remote sensing should be very effective in isolating high CO and HC emitting vehicles in a fleet of late model vehicles on the road.

10. Numerical analysis of strongly nonlinear extensional vibrations in elastic rods.

PubMed

Vanhille, Christian; Campos-Pozuelo, Cleofé

2007-01-01

In the framework of transduction, nondestructive testing, and nonlinear acoustic characterization, this article presents the analysis of strongly nonlinear vibrations by means of an original numerical algorithm. In acoustic and transducer applications in extreme working conditions, such as the ones induced by the generation of high-power ultrasound, the analysis of nonlinear ultrasonic vibrations is fundamental. Also, the excitation and analysis of nonlinear vibrations is an emergent technique in nonlinear characterization for damage detection. A third-order evolution equation is derived and numerically solved for extensional waves in isotropic dissipative media. A nine-constant theory of elasticity for isotropic solids is constructed, and the nonlinearity parameters corresponding to extensional waves are proposed. The nonlinear differential equation is solved by using a new numerical algorithm working in the time domain. The finite-difference numerical method proposed is implicit and only requires the solution of a linear set of equations at each time step. The model allows the analysis of strongly nonlinear, one-dimensional vibrations and can be used for prediction as well as characterization. Vibration waveforms are calculated at different points, and results are compared for different excitation levels and boundary conditions. Amplitude distributions along the rod axis for every harmonic component also are evaluated. Special attention is given to the study of high-amplitude damping of vibrations by means of several simulations. Simulations are performed for amplitudes ranging from linear to nonlinear and weak shock.

11. Scilab and Maxima Environment: Towards Free Software in Numerical Analysis

ERIC Educational Resources Information Center

Mora, Angel; Galan, Jose Luis; Aguilera, Gabriel; Fernandez, Alvaro; Merida, Enrique; Rodriguez, Pedro

2010-01-01

In this work we will present the ScilabUMA environment we have developed as an alternative to Matlab. This environment connects Scilab (for numerical analysis) and Maxima (for symbolic computations). Furthermore, the developed interface is, in our opinion at least, as powerful as the interface of Matlab. (Contains 3 figures.)

13. Analysis and Correction of Systematic Height Model Errors

Jacobsen, K.

2016-06-01

The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration, and the attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example those caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and an attitude recording rate of just 4 Hz, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with small base length; the small base length magnifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, preventing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached with the support of reference height models. As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are

14. Analysis and Evaluation of Error-Proof Systems for Configuration Data Management in Railway Signalling

Shimazoe, Toshiyuki; Ishikawa, Hideto; Takei, Tsuyoshi; Tanaka, Kenji

Recent types of train protection systems such as ATC require large amounts of low-level configuration data compared with conventional systems, so management of the configuration data is becoming more important than before. For this reason, the authors developed an error-proof system focusing on human operations in configuration data management. This error-proof system has already been introduced to the Tokaido Shinkansen ATC data management system. However, as the effectiveness of the system has not been presented objectively, its full perspective is not clear. To clarify the effectiveness, this paper analyses error-proofing cases introduced to the system, using the concept of QFD and the error-proofing principles. From this analysis, the following methods of evaluation for error-proof systems are proposed: metrics to review the rationality of required qualities are provided by arranging the required qualities according to hazard levels and work phases; metrics to evaluate error-proof systems are provided to improve their reliability effectively by mapping the error-proofing principles onto the error-proofing cases, which are applied according to the required qualities and the corresponding hazard levels. In addition, these objectively analysed error-proofing cases can be used as an error-proofing-case database or as guidelines for safer HMI design, especially for data management.
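
The review matrix the abstract proposes, arranging error-proofing cases by work phase and hazard level, can be sketched as a simple grouping. The phases, hazard levels and principles below are made-up placeholders, not the paper's actual cases:

```python
from collections import defaultdict

# Hypothetical error-proofing cases: (work_phase, hazard_level, principle)
cases = [
    ("data entry",   "high", "elimination"),
    ("data entry",   "low",  "detection"),
    ("verification", "high", "facilitation"),
    ("verification", "high", "detection"),
    ("distribution", "low",  "mitigation"),
]

# Arrange the cases into a (work phase, hazard level) matrix, as the
# QFD-style review described in the abstract does.
matrix = defaultdict(list)
for phase, hazard, principle in cases:
    matrix[(phase, hazard)].append(principle)

for (phase, hazard), principles in sorted(matrix.items()):
    print(f"{phase:12s} / {hazard:4s}: {', '.join(principles)}")
```

Cells left empty by the grouping are exactly the gaps such a matrix is meant to expose: required qualities with no error-proofing case covering them.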

15. Analysis of Naming Errors during Cortical Stimulation Mapping: Implications for Models of Language Representation

PubMed Central

Corina, David P.; Loudermilk, Brandon C.; Detwiler, Landon; Martin, Richard F.; Brinkley, James F.; Ojemann, George

2011-01-01

This study reports on the characteristics and distribution of naming errors of patients undergoing cortical stimulation mapping (CSM). During the procedure, electrical stimulation is used to induce temporary functional lesions and locate ‘essential’ language areas for preservation. Under stimulation, patients are shown slides of common objects and asked to name them. Cortical stimulation can lead to a variety of naming errors. In the present study, we aggregate errors across patients to examine the neuroanatomical correlates and linguistic characteristics of six common errors: semantic paraphasias, circumlocutions, phonological paraphasias, neologisms, performance errors, and no-response errors. Aiding analysis, we relied on a suite of web-based querying and imaging tools that enabled the summative mapping of normalized stimulation sites. Errors were visualized and analyzed by type and location. We provide descriptive statistics to characterize the commonality of errors across patients and location. The errors observed suggest a widely distributed and heterogeneous cortical network that gives rise to differential patterning of paraphasic errors. Data are discussed in relation to emerging models of language representation that honor distinctions between frontal, parietal, and posterior temporal dorsal implementation systems and ventral-temporal lexical semantic and phonological storage and assembly regions; the latter of which may participate both in language comprehension and production. PMID:20452661

16. Detection method of nonlinearity errors by statistical signal analysis in heterodyne Michelson interferometer.

PubMed

Hu, Juju; Hu, Haijiang; Ji, Yinghua

2010-03-15

Periodic nonlinearity, ranging from a few nanometers to tens of nanometers, limits the use of the heterodyne interferometer in high-accuracy measurement. A novel method is studied to detect the nonlinearity errors, based on electrical subdivision and statistical signal analysis in a heterodyne Michelson interferometer. With the micropositioning platform moving at uniform velocity, the method detects the nonlinearity errors by using regression analysis and jackknife estimation. Based on the analysis of the simulations, the method can estimate the influence of nonlinearity errors and other noises on dimensional measurement in the heterodyne Michelson interferometer.
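
The jackknife estimation step can be illustrated on the simplest case, a regression slope: leave out one observation at a time, refit, and combine the leave-one-out estimates into a bias-corrected value and a standard error. This is a generic sketch of the jackknife, not the paper's specific nonlinearity-detection procedure:

```python
import numpy as np

def jackknife_slope(x, y):
    """Leave-one-out jackknife estimate of a linear regression slope
    and its standard error."""
    n = len(x)
    full = np.polyfit(x, y, 1)[0]                 # slope from all n points
    loo = np.array([                              # slopes with point i left out
        np.polyfit(np.delete(x, i), np.delete(y, i), 1)[0]
        for i in range(n)
    ])
    bias = (n - 1) * (loo.mean() - full)          # jackknife bias estimate
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return full - bias, se
```

In the interferometer setting, the regression residuals under uniform-velocity motion carry the periodic nonlinearity, and the jackknife standard error indicates how reliably it is separated from other noise.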

17. Optical refractive synchronization: bit error rate analysis and measurement

Palmer, James R.

1999-11-01

The direction of this paper is to describe the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, the paper outlines how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber > 100 km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to show how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for transmission of multiple OC-12 and OC-48 SONET signals sent over a fiber optic cable longer than 100 km. The recovery and transformation modules are described for the modification and transportation of these SONET signals.
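
A measured "zero bit error rate" is always a statement about a finite number of tested bits, so it only bounds the true BER from above. A minimal sketch of that bound (the classical -ln(1-c)/n rule for zero observed errors, roughly 3/n at 95 % confidence; the margin used for the nonzero-error branch is a rough Poisson-based assumption, not from the paper):

```python
import math

def ber_upper_bound(bits_tested, errors_observed, confidence=0.95):
    """Upper confidence bound on the bit error rate from a finite test."""
    if errors_observed == 0:
        # Zero errors in n bits: true BER < -ln(1 - c) / n (~3/n at 95 %)
        return -math.log(1.0 - confidence) / bits_tested
    # Otherwise: point estimate plus a rough one-sided Poisson margin
    lam = errors_observed + 1.645 * math.sqrt(errors_observed)
    return lam / bits_tested

# A "zero bit error rate" measured over 1e12 bits still only bounds
# the true BER at about 3e-12:
bound = ber_upper_bound(10**12, 0)
```

At OC-48 rates (about 2.5 Gbit/s), accumulating 1e12 error-free bits takes under ten minutes, which is why long soak tests are needed to substantiate very low BER claims.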

18. Study on analysis from sources of error for Airborne LIDAR

Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.

2016-11-01

With the advancement of aerial photogrammetry, Airborne LIDAR provides a new technical means of obtaining geo-spatial information of high spatial and temporal resolution, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of earth observation technology: mounted on an aviation platform, it emits and receives laser pulses to obtain high-precision, high-density three-dimensional point cloud data and intensity information. This paper briefly describes airborne laser radar systems, analyzes in detail the main error sources of Airborne LIDAR data, and puts forward corresponding methods to avoid or eliminate them. Taking into account practical engineering applications, recommendations are developed for these designs, which have crucial theoretical and practical significance in the field of Airborne LIDAR data processing.
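
Two of the classic error sources in such an analysis, ranging error and small attitude/boresight angle error, propagate to the ground very differently: the first acts along the beam, the second grows with slant range. A first-order sketch of this propagation, under a flat-terrain, small-angle assumption (not the paper's specific error budget):

```python
import math

def ground_error(altitude_m, scan_angle_deg, range_error_m, angle_error_deg):
    """First-order ground-position error of an airborne LIDAR point
    caused by a range error and a small angular (attitude/boresight) error."""
    theta = math.radians(scan_angle_deg)
    dtheta = math.radians(angle_error_deg)
    slant = altitude_m / math.cos(theta)   # slant range at this scan angle
    d_range = range_error_m                # error component along the beam
    d_angle = slant * dtheta               # error component across the beam
    return math.hypot(d_range, d_angle)

# e.g. 1000 m flying height, 20 deg scan angle, 5 cm ranging error,
# 0.01 deg boresight error:
err = ground_error(1000.0, 20.0, 0.05, 0.01)
```

The example makes the usual point concrete: at typical flying heights, even a hundredth of a degree of uncorrected boresight error dominates a centimeter-level ranging error.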

19. Testing and error analysis of a real-time controller

NASA Technical Reports Server (NTRS)

Savolaine, C. G.

1983-01-01

Inexpensive ways to organize and conduct system testing, as used on a real-time satellite network control system, are outlined. The system contains roughly 50,000 lines of executable source code developed by a team of eight people. For a small investment of staff, the system was thoroughly tested, including automated regression testing, before field release. Detailed records were kept for fourteen months, during which several versions of the system were written. A separate testing group was not established, but testing itself was structured apart from the development process. The errors found during testing are examined by frequency per subsystem, by size and complexity, as well as by type. The code was released to the user in March 1983. To date, only a few minor problems have been found with the system since its pre-service testing, and user acceptance has been good.
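
The per-subsystem frequency analysis described above amounts to tallying a defect log and normalizing by subsystem size. The subsystem names, sizes and defect records below are made-up placeholders, not the report's data:

```python
from collections import Counter

# Hypothetical defect log: (subsystem, error_type)
defects = [
    ("telemetry", "logic"), ("telemetry", "interface"),
    ("scheduler", "logic"), ("scheduler", "logic"),
    ("display",   "data"),
]
# Subsystem sizes in thousands of lines of code (KLOC)
kloc = {"telemetry": 12.0, "scheduler": 20.0, "display": 18.0}

by_subsystem = Counter(s for s, _ in defects)   # frequency per subsystem
by_type = Counter(t for _, t in defects)        # frequency per error type
density = {s: by_subsystem[s] / kloc[s] for s in kloc}  # defects per KLOC
```

Comparing the density figures rather than the raw counts is what lets size and complexity be separated from true error-proneness, as the report's breakdown does.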
