#### Sample records for numerical error analysis

1. Minimizing Errors in Numerical Analysis of Chemical Data.

ERIC Educational Resources Information Center

Rusling, James F.

1988-01-01

Investigates minimizing errors in computational methods commonly used in chemistry. Provides a series of examples illustrating the propagation of errors, finite difference methods, and nonlinear regression analysis. Includes illustrations to explain these concepts. (MVL)
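The propagation-of-errors idea in this record can be sketched in a few lines of Python; the quotient and the numbers below are illustrative, not taken from Rusling's examples:

```python
import math

def propagate_quotient(a, da, b, db):
    """First-order (Gaussian) error propagation for q = a / b,
    assuming independent uncertainties da and db."""
    q = a / b
    dq = abs(q) * math.sqrt((da / a) ** 2 + (db / b) ** 2)
    return q, dq

# e.g. a quantity computed as the ratio of two measured values
q, dq = propagate_quotient(10.0, 0.1, 4.0, 0.2)
```

The relative uncertainties add in quadrature, so the 5% error in the denominator dominates the 1% error in the numerator.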

2. Error analysis of a ratio pyrometer by numerical simulation

SciTech Connect

Gathers, G.R.

1992-01-01

A numerical method has been devised to evaluate measurement errors for a three-channel ratio pyrometer as a function of temperature. The pyrometer is simulated by computer codes, which can be used to explore the behavior of various designs. The influence of the various components in the system can be evaluated. General conclusions can be drawn about what makes a good pyrometer, and an existing pyrometer was evaluated, to predict its behavior as a function of temperature. The results show which combination of two channels gives the best precision. 13 refs., 12 figs.

3. Error analysis of a ratio pyrometer by numerical simulation

SciTech Connect

Gathers, G.R.

1990-05-01

A numerical method has been devised to evaluate measurement errors for a three channel ratio pyrometer as a function of temperature. The pyrometer is simulated by computer codes, which can be used to explore the behavior of various designs. The influence of the various components in the system can be evaluated. General conclusions can be drawn about what makes a good pyrometer, and an existing pyrometer was evaluated, to predict its behavior as a function of temperature. The results show which combination of two channels gives the best precision. 12 refs., 12 figs.

4. Error Analysis

Scherer, Philipp O. J.

Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
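Scherer's point about machine numbers is easy to demonstrate; a minimal sketch, assuming IEEE 754 double precision:

```python
import sys

# Machine epsilon: the spacing between 1.0 and the next representable double.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0

print(eps == sys.float_info.epsilon)   # True: eps ends at 2**-52
print(0.1 + 0.2 == 0.3)                # False: neither 0.1 nor 0.2 is exact in binary
```

The loop terminates once `eps / 2` falls below half an ulp of 1.0, at which point the addition rounds back to 1.0 exactly.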

5. Error analysis of numerical gravitational waveforms from coalescing binary black holes

Fong, Heather; Chu, Tony; Kumar, Prayush; Pfeiffer, Harald; Boyle, Michael; Hemberger, Daniel; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela; SXS Collaboration

2016-03-01

The Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) has finished a successful first observation run and will commence its second run this summer. Detection of compact object binaries utilizes matched filtering, which requires a vast collection of highly accurate gravitational waveforms. This talk will present a set of about 100 new aligned-spin binary black hole simulations. I will discuss their properties, including a detailed error analysis, which demonstrates that the numerical waveforms are sufficiently accurate both for gravitational wave detection and for parameter estimation.

6. Revisiting Numerical Errors in Direct and Large Eddy Simulations of Turbulence: Physical and Spectral Spaces Analysis

Fedioun, Ivan; Lardjane, Nicolas; Gökalp, Iskender

2001-12-01

Some recent studies on the effects of truncation and aliasing errors on the large eddy simulation (LES) of turbulent flows via the concept of modified wave number are revisited. It is shown that all the results obtained for nonlinear partial differential equations projected and advanced in time in spectral space are not straightforwardly applicable to physical space calculations due to the nonequivalence by Fourier transform of spectral aliasing errors and numerical errors on a set of grid points in physical space. The consequences of spectral static aliasing errors on a set of grid points are analyzed in one dimension of space for quadratic products and their derivatives. The dynamical process that results through time stepping is illustrated on the Burgers equation. A method based on midpoint interpolation is proposed to remove in physical space the static grid point errors involved in divergence forms. It is compared to the sharp filtering technique on finer grids suggested by previous authors. Global performances resulting from combination of static aliasing errors and truncation errors are then discussed for all classical forms of the convective terms in Navier-Stokes equations. Some analytical results previously obtained on the relative magnitude of subgrid scale terms and numerical errors are confirmed with 3D realistic random fields. The physical space dynamical behavior and the stability of typical associations of numerical schemes and forms of nonlinear terms are finally evaluated on the LES of self-decaying homogeneous isotropic turbulence. It is shown that the convective form (if conservative properties are not strictly required) associated with highly resolving compact finite difference schemes provides the best compromise, which is nearly equivalent to dealiased pseudo-spectral calculations.
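The modified wave number concept the authors revisit can be sketched for the standard second-order central difference; this is the textbook result, not the paper's own code:

```python
import math

def modified_wavenumber(k, h):
    """Effective wavenumber of the 2nd-order central difference:
    applying (f(x+h) - f(x-h)) / (2h) to exp(i k x) yields
    i k' exp(i k x) with k' = sin(k h) / h, so the scheme
    underresolves high wavenumbers."""
    return math.sin(k * h) / h

h = 0.1
low = modified_wavenumber(1.0, h)            # close to k = 1: well resolved
nyq = modified_wavenumber(math.pi / h, h)    # vanishes at the grid Nyquist limit
```

Plotting k' against k for several schemes is the usual way to compare their spectral resolution, which is the starting point of the truncation-error analysis the abstract describes.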

7. Numerical errors in the presence of steep topography: analysis and alternatives

SciTech Connect

Lundquist, K A; Chow, F K; Lundquist, J K

2010-04-15

It is well known in computational fluid dynamics that grid quality affects the accuracy of numerical solutions. When assessing grid quality, properties such as aspect ratio, orthogonality of coordinate surfaces, and cell volume are considered. Mesoscale atmospheric models generally use terrain-following coordinates with large aspect ratios near the surface. As high resolution numerical simulations are increasingly used to study topographically forced flows, a high degree of non-orthogonality is introduced, especially in the vicinity of steep terrain slopes. Numerical errors associated with the use of terrain-following coordinates can adversely affect the accuracy of the solution in steep terrain. Inaccuracies from the coordinate transformation are present in each spatially discretized term of the Navier-Stokes equations, as well as in the conservation equations for scalars. In particular, errors in the computation of horizontal pressure gradients, diffusion, and horizontal advection terms have been noted in the presence of sloping coordinate surfaces and steep topography. In this work we study the effects of these spatial discretization errors on the flow solution for three canonical cases: scalar advection over a mountain, an atmosphere at rest over a hill, and forced advection over a hill. This study is completed using the Weather Research and Forecasting (WRF) model. Simulations with terrain-following coordinates are compared to those using a flat coordinate, where terrain is represented with the immersed boundary method. The immersed boundary method is used as a tool which allows us to eliminate the terrain-following coordinate transformation, and quantify numerical errors through a direct comparison of the two solutions. Additionally, the effects of related issues such as the steepness of terrain slope and grid aspect ratio are studied in an effort to gain an understanding of numerical domains where terrain-following coordinates can successfully be used and

8. Some Surprising Errors in Numerical Differentiation

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2012-01-01

Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
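The error patterns Gordon describes can be reproduced with the Newton difference quotient; a minimal sketch, where the test function and step sizes are my choices:

```python
import math

def diff_quotient(f, x, h):
    """Newton difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)   # derivative of sin at x = 1
errors = [abs(diff_quotient(math.sin, 1.0, 10.0 ** -n) - exact)
          for n in range(1, 9)]
# The truncation error shrinks like h at first; once h drops below
# roughly sqrt(machine epsilon), rounding error in f(x+h) - f(x)
# takes over and the approximation degrades again.
```

Tabulating `errors` against `h` exposes exactly the kind of surprising pattern the abstract refers to: the error does not decrease indefinitely as `h` shrinks.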

9. Automatic Error Analysis Using Intervals

ERIC Educational Resources Information Center

Rothwell, E. J.; Cloud, M. J.

2012-01-01

A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
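INTLAB is a MATLAB toolbox; the core idea can be conveyed with a toy Python interval class. This sketch evaluates bounds in ordinary floating point (a real interval package must round outward to stay rigorous):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# A value in [1.9, 2.1] times a value in [2.8, 3.2]:
# the result interval encloses every possible product.
z = Interval(1.9, 2.1) * Interval(2.8, 3.2)
```

Pushing intervals rather than point values through a complicated formula yields guaranteed error bounds without deriving any propagation formulas by hand, which is the comparison the abstract draws.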

10. Error Analysis of Quadrature Rules. Classroom Notes

ERIC Educational Resources Information Center

Glaister, P.

2004-01-01

Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
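The O(h^4) error behaviour of composite Simpson's rule, the principal example in this record, can be checked numerically; the integrand here is my choice, not Glaister's:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

exact = 2.0   # integral of sin(x) over [0, pi]
e_coarse = abs(simpson(math.sin, 0.0, math.pi, 4) - exact)
e_fine = abs(simpson(math.sin, 0.0, math.pi, 16) - exact)
ratio = e_coarse / e_fine   # roughly (16/4)**4 = 256 for an O(h^4) rule
```

Quadrupling the number of subintervals cuts the error by roughly a factor of 256, consistent with a truncation error proportional to h^4.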

11. Numerical modelling errors in electrical impedance tomography.

PubMed

Dehghani, Hamid; Soleimani, Manuchehr

2007-07-01

Electrical impedance tomography (EIT) is a non-invasive technique that aims to reconstruct images of internal impedance values of a volume of interest, based on measurements taken on the external boundary. Since most reconstruction algorithms rely on model-based approximations, it is important to ensure numerical accuracy for the model being used. This work demonstrates and highlights the importance of accurate modelling in terms of model discretization (meshing) and shows that although the predicted boundary data from a forward model may be within an accepted error, the calculated internal field, which is often used for image reconstruction, may contain errors, based on the mesh quality that will result in image artefacts.

12. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

SciTech Connect

Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

2004-07-26

We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

13. Errata: Papers in Error Analysis.

ERIC Educational Resources Information Center

Svartvik, Jan, Ed.

Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

14. Uncertainty quantification and error analysis

SciTech Connect

Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

2010-01-01

UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

15. The Insufficiency of Error Analysis

ERIC Educational Resources Information Center

Hammarberg, B.

1974-01-01

The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

16. A Numerical Approach for Computing Standard Errors of Linear Equating.

ERIC Educational Resources Information Center

Zeng, Lingjia

1993-01-01

A numerical approach for computing standard errors (SEs) of a linear equating is described in which first partial derivatives of equating functions needed to compute SEs are derived numerically. Numerical and analytical approaches are compared using the Tucker equating method. SEs derived numerically are found indistinguishable from SEs derived…
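Zeng's idea of replacing analytic partial derivatives with numerical ones in the delta method can be sketched generically; the "equating" function below is a hypothetical linear stand-in, not the Tucker method:

```python
import math

def numeric_gradient(f, theta, h=1e-5):
    """Central-difference partial derivatives of f at theta."""
    grad = []
    for i in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[i] += h
        dn[i] -= h
        grad.append((f(up) - f(dn)) / (2.0 * h))
    return grad

def delta_method_se(f, theta, cov):
    """SE of f(theta_hat) ~ sqrt(g' Sigma g), with g found numerically."""
    g = numeric_gradient(f, theta)
    n = len(theta)
    var = sum(g[i] * cov[i][j] * g[j] for i in range(n) for j in range(n))
    return math.sqrt(var)

# Hypothetical equating y = a*x + b at score x = 10, with estimated (a, b):
f = lambda t: t[0] * 10.0 + t[1]
se = delta_method_se(f, [1.02, -0.5], [[0.0004, 0.0], [0.0, 0.09]])
```

Because the gradient is obtained by central differences, the same routine works for any equating function without deriving its partial derivatives analytically, which is the paper's point.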

17. Beta systems error analysis

NASA Technical Reports Server (NTRS)

1984-01-01

The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

18. Skylab water balance error analysis

NASA Technical Reports Server (NTRS)

Leonard, J. I.

1977-01-01

Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

19. Managing numerical errors in random sequential adsorption

Cieśla, Michał; Nowak, Aleksandra

2016-09-01

The aim of this study is to examine the influence of a finite surface size and a finite simulation time on the packing fraction estimated using random sequential adsorption simulations. A goal of particular interest is providing hints on simulation setup to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous and flat surfaces of different sizes.
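A minimal random sequential adsorption simulation of the kind this record describes, unit-diameter disks on a periodic L x L square, might look like the following sketch; the parameters are illustrative and the published study's setup will differ:

```python
import math
import random

def rsa_packing_fraction(L, attempts, seed=0):
    """Packing fraction after a fixed number of RSA insertion attempts of
    unit-diameter disks on an L x L square with periodic boundaries."""
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        x, y = rng.uniform(0.0, L), rng.uniform(0.0, L)
        ok = True
        for px, py in placed:
            dx = min(abs(x - px), L - abs(x - px))   # periodic distance
            dy = min(abs(y - py), L - abs(y - py))
            if dx * dx + dy * dy < 1.0:              # centers closer than one diameter
                ok = False
                break
        if ok:
            placed.append((x, y))
    return len(placed) * math.pi * 0.25 / (L * L)    # each disk covers pi/4

phi = rsa_packing_fraction(10.0, 2000)
```

Both truncating the number of attempts and shrinking L bias the estimated packing fraction away from the infinite-size, infinite-time saturation value (about 0.547 for disks), which is precisely the finite-size and finite-time effect the study quantifies.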

20. Analysis of discretization errors in LES

NASA Technical Reports Server (NTRS)

Ghosal, Sandip

1995-01-01

All numerical simulations of turbulence (DNS or LES) involve some discretization errors. The integrity of such simulations therefore depend on our ability to quantify and control such errors. In the classical literature on analysis of errors in partial differential equations, one typically studies simple linear equations (such as the wave equation or Laplace's equation). The qualitative insight gained from studying such simple situations is then used to design numerical methods for more complex problems such as the Navier-Stokes equations. Though such an approach may seem reasonable as a first approximation, it should be recognized that strongly nonlinear problems, such as turbulence, have a feature that is absent in linear problems. This feature is the simultaneous presence of a continuum of space and time scales. Thus, in an analysis of errors in the one dimensional wave equation, one may, without loss of generality, rescale the equations so that the dependent variable is always of order unity. This is not possible in the turbulence problem since the amplitudes of the Fourier modes of the velocity field have a continuous distribution. The objective of the present research is to provide some quantitative measures of numerical errors in such situations. Though the focus of this work is LES, the methods introduced here can be just as easily applied to DNS. Errors due to discretization of the time-variable are neglected for the purpose of this analysis.

1. Orbital and Geodetic Error Analysis

NASA Technical Reports Server (NTRS)

Felsentreger, T.; Maresca, P.; Estes, R.

1985-01-01

Results that previously required several runs are determined in a more computer-efficient manner. Multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

2. Errors from Image Analysis

SciTech Connect

Wood, William Monford

2015-02-23

Presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

3. Human Error: A Concept Analysis

NASA Technical Reports Server (NTRS)

Hansen, Frederick D.

2007-01-01

Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

4. An Ensemble-type Approach to Numerical Error Estimation

Ackmann, J.; Marotzke, J.; Korn, P.

2015-12-01

The estimation of the numerical error in a specific physical quantity of interest (goal) is of key importance in geophysical modelling. Towards this aim, we have formulated an algorithm that combines elements of the classical dual-weighted error estimation with stochastic methods. Our algorithm is based on the Dual-weighted Residual method in which the residual of the model solution is weighed by the adjoint solution, i.e. by the sensitivities of the goal towards the residual. We extend this method by modelling the residual as a stochastic process. Parameterizing the residual by a stochastic process was motivated by the Mori-Zwanzig formalism from statistical mechanics. Here, we apply our approach to two-dimensional shallow-water flows with lateral boundaries and an eddy viscosity parameterization. We employ different parameters of the stochastic process for different dynamical regimes in different regions. We find that for each region the temporal fluctuations of local truncation errors (discrete residuals) can be interpreted stochastically by a Laplace-distributed random variable. Assuming that these random variables are fully correlated in time leads to a stochastic process that parameterizes a problem-dependent temporal evolution of local truncation errors. The parameters of this stochastic process are estimated from short, near-initial, high-resolution simulations. Under the assumption that the estimated parameters can be extrapolated to the full time window of the error estimation, the estimated stochastic process is proven to be a valid surrogate for the local truncation errors. Replacing the local truncation errors by a stochastic process puts our method within the class of ensemble methods and makes the resulting error estimator a random variable. The result of our error estimator is thus a confidence interval on the error in the respective goal. We will show error estimates for two 2D ocean-type experiments and provide an outlook for the 3D case.

5. A Classroom Note on: Building on Errors in Numerical Integration

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2011-01-01

In both baseball and mathematics education, the conventional wisdom is to avoid errors at all costs. That advice might be on target in baseball, but in mathematics, it is not always the best strategy. Sometimes an analysis of errors provides much deeper insights into mathematical ideas and, rather than something to eschew, certain types of errors…

6. Numerical errors of diffraction computing using plane wave spectrum decomposition

Kozacki, Tomasz

2008-09-01

In this paper the numerical determination of diffraction patterns using plane wave spectrum decomposition (PWS) is investigated. A simple formula for sampling selection for error-free numerical computation is proposed and its applicability is discussed. Using this formula is impractical for some diffraction problems because of the large memory load required. A new multi-Fourier-transform PWS (MPWS) method is elaborated which overcomes the memory requirements of the PWS method. The performance of the PWS and MPWS methods is verified through extensive numerical simulations.

7. Analysis of Medication Error Reports

SciTech Connect

Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

2004-11-15

In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

8. Orbit IMU alignment: Error analysis

NASA Technical Reports Server (NTRS)

Corson, R. W.

1980-01-01

A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

9. Having Fun with Error Analysis

ERIC Educational Resources Information Center

Siegel, Peter

2007-01-01

We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

10. Measurement Error and Equating Error in Power Analysis

ERIC Educational Resources Information Center

Phillips, Gary W.; Jiang, Tao

2016-01-01

Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

11. Condition and Error Estimates in Numerical Matrix Computations

SciTech Connect

Konstantinov, M. M.; Petkov, P. H.

2008-10-30

This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating--point machine arithmetics are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

12. Numerical study of an error model for a strap-down INS

Grigorie, T. L.; Sandu, D. G.; Corcau, C. L.

2016-10-01

The paper presents a numerical study related to a mathematical error model developed for a strap-down inertial navigation system. The study aims to validate the error model by using Matlab/Simulink software models implementing the inertial navigator and the error model mathematics. To generate the inputs to the evaluation Matlab/Simulink software, some inertial sensor software models are used. The sensor models were developed based on the IEEE equivalent models for the inertial sensors and on analysis of the data sheets of real inertial sensors. The paper successively presents the inertial navigation equations (attitude, position, and speed), the mathematics of the inertial navigator error model, the software implementations, and the numerical evaluation results.

13. Numerical study of error propagation in Monte Carlo depletion simulations

SciTech Connect

Wyant, T.; Petrovic, B.

2012-07-01

Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs was used to investigate the true and apparent variance in k_eff, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters. (authors)
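The replica-run idea, rerunning the same calculation with different seeds and taking the spread across replicas as the true uncertainty, can be sketched with a toy tally; `random.random` stands in for the transport code, and the sample sizes are arbitrary:

```python
import random
import statistics

def replica_tally(seed, n=10_000):
    """One 'run': the mean of n pseudorandom samples from an
    independently seeded generator (a stand-in for a tallied quantity)."""
    rng = random.Random(seed)
    return statistics.fmean(rng.random() for _ in range(n))

# 19 replicas with different seeds, mirroring the study's replica design.
means = [replica_tally(seed) for seed in range(19)]
true_spread = statistics.stdev(means)
# For i.i.d. U(0,1) samples the analytic standard error is
# sqrt(1/12 / n), about 0.0029 here; the replica spread estimates it.
```

Comparing this replica spread against the variance a single run reports to itself is how the "true vs. apparent variance" question in the abstract is posed.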

14. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

USGS Publications Warehouse

1987-01-01

Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

15. GP-B error modeling and analysis

NASA Technical Reports Server (NTRS)

Hung, J. C.

1982-01-01

Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

16. Error Analysis in the Introductory Physics Laboratory.

ERIC Educational Resources Information Center

Deacon, Christopher G.

1992-01-01

Describes two simple methods of error analysis: (1) combining errors in the measured quantities; and (2) calculating the error or uncertainty in the slope of a straight-line graph. Discusses significance of the error in the comparison of experimental results with some known value. (MDH)

17. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

NASA Technical Reports Server (NTRS)

Prive, Nikki C.; Errico, Ronald M.

2013-01-01

A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

18. Error Analysis and the EFL Classroom Teaching

ERIC Educational Resources Information Center

Xie, Fang; Jiang, Xue-mei

2007-01-01

This paper makes a study of error analysis and its implementation in EFL (English as a Foreign Language) classroom teaching. It starts with a systematic review of the concepts and theories concerning EA (Error Analysis), then comprehensively explores the various causes of errors. The author proposes that teachers should employ…

19. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

USGS Publications Warehouse

1985-01-01

Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

20. Misclassification Errors and Categorical Data Analysis.

ERIC Educational Resources Information Center

Katz, Barry M.; McSweeney, Maryellen

1979-01-01

Errors of misclassification and their effects on categorical data analysis are discussed. The chi-square test for equality of two proportions is examined in the context of errorful categorical data. The effects of such errors are illustrated. A correction procedure is developed and discussed. (Author/MH)

1. Relative error covariance analysis techniques and application

NASA Technical Reports Server (NTRS)

Wolff, Peter J.; Williams, Bobby G.

1988-01-01

A technique for computing the error covariance of the difference between two estimators derived from different (possibly overlapping) data arcs is presented. The relative error covariance is useful for predicting the achievable consistency between Kalman-Bucy filtered estimates generated from two (not necessarily disjoint) data sets. The relative error covariance analysis technique is then applied to a Venus Orbiter simulation.
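The relative error covariance described above has a simple closed form when the joint error statistics of the two estimators are known: Cov(e1 - e2) = P1 + P2 - C12 - C12^T. A minimal sketch (the covariance numbers are invented for illustration, not values from the study) checks this formula against a Monte Carlo draw:

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint covariance of the errors of two estimators (illustrative numbers):
# block structure [[P1, C12], [C12.T, P2]], with C12 the cross-covariance
# induced by overlapping data arcs.
P1 = np.array([[2.0, 0.3], [0.3, 1.0]])
P2 = np.array([[1.5, 0.1], [0.1, 0.8]])
C12 = np.array([[0.9, 0.2], [0.1, 0.4]])
joint = np.block([[P1, C12], [C12.T, P2]])

# Relative error covariance of the difference d = e1 - e2.
P_rel = P1 + P2 - C12 - C12.T

# Monte Carlo check: draw correlated error pairs and form their difference.
e = rng.multivariate_normal(np.zeros(4), joint, size=200_000)
d = e[:, :2] - e[:, 2:]
P_rel_mc = np.cov(d, rowvar=False)

print(np.round(P_rel, 3))
print(np.round(P_rel_mc, 3))
```

Note that when the cross-covariance C12 is large (heavily overlapping data), the relative covariance can be much smaller than either individual covariance, which is what makes two filtered estimates mutually consistent.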

2. Identifying state-dependent model error in numerical weather prediction

Moskaitis, J.; Hansen, J.; Toth, Z.; Zhu, Y.

2003-04-01

Model forecasts of complex systems such as the atmosphere lose predictive skill because of two different sources of error: initial conditions error and model error. While much study has been done to determine the nature and consequences of initial conditions error in operational forecast models, relatively little has been done to identify the source of model error and to quantify the effects of model error on forecasts. Here, we attempt to "disentangle" model error from initial conditions error by applying a diagnostic tool in a simple model framework to identify poor forecasts for which model error is likely responsible. The diagnostic is based on the premise that for a perfect ensemble forecast, verification should fall outside the range of ensemble forecast states only a small percentage of the time, according to the size of the ensemble. Identifying these outlier verifications and comparing the statistics of their occurrence to those of a perfect ensemble can tell us about the role of model error in a quantitative, state-dependent manner. The same diagnostic is applied to operational NWP models to quantify the role of model error in poor forecasts (see companion paper by Toth et al.). From these results, we can infer the atmospheric processes the model cannot adequately simulate.
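The outlier diagnostic described above rests on a simple rank argument: for a perfect N-member ensemble, the verifying value is equally likely to fall in any of the N+1 rank positions, so it lands outside the ensemble min-max envelope with probability 2/(N+1). A hedged sketch with synthetic Gaussian data (not the paper's model) shows how a biased ensemble, standing in for model error, produces excess outliers:

```python
import numpy as np

rng = np.random.default_rng(1)

def outlier_rate(ensemble, truth):
    """Fraction of cases where truth falls outside the ensemble min-max envelope."""
    lo = ensemble.min(axis=1)
    hi = ensemble.max(axis=1)
    return np.mean((truth < lo) | (truth > hi))

n_members, n_cases = 10, 100_000

# Perfect ensemble: members and truth drawn from the same distribution.
perfect = rng.normal(size=(n_cases, n_members))
truth = rng.normal(size=n_cases)
expected = 2.0 / (n_members + 1)   # theoretical outlier rate for a perfect ensemble
print(outlier_rate(perfect, truth), expected)

# "Model error" stand-in: ensemble biased relative to truth -> excess outliers.
biased = perfect + 1.0
print(outlier_rate(biased, truth))
```

Comparing the observed outlier frequency against 2/(N+1) is exactly the kind of quantitative, state-aggregated comparison the diagnostic uses to attribute poor forecasts to model error.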

3. Analysis and classification of human error

NASA Technical Reports Server (NTRS)

Rouse, W. B.; Rouse, S. H.

1983-01-01

The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.

4. Numerical Analysis Objects

Henderson, Michael

1997-08-01

The Numerical Analysis Objects project (NAO) is a project in the Mathematics Department of IBM's TJ Watson Research Center. While there are plenty of numerical tools available today, it is not an easy task to combine them into a custom application. NAO is directed at the dual problems of building applications from a set of tools, and creating those tools. There are several "reuse" projects, which focus on the problems of identifying and cataloging tools. NAO is directed at the specific context of scientific computing. Because the type of tools is restricted, problems such as incompatible input and output data structures and dissimilar interfaces among tools that solve similar problems can be addressed. The approach we've taken is to define interfaces to those objects used in numerical analysis, such as geometries, functions and operators, and to start collecting (and building) a set of tools which use these interfaces. We have written a class library (a set of abstract classes and implementations) in C++ which demonstrates the approach. Besides the classes, the class library includes "stub" routines which allow the library to be used from C or Fortran, and an interface to a Visual Programming Language. The library has been used to build a simulator for petroleum reservoirs, using a set of tools for discretizing nonlinear differential equations that we have written, and includes "wrapped" versions of packages from the Netlib repository. Documentation can be found on the Web at "http://www.research.ibm.com/nao". I will describe the objects and their interfaces, and give examples ranging from mesh generation to solving differential equations.

5. Synthetic aperture interferometry: error analysis

SciTech Connect

Biswas, Amiya; Coupland, Jeremy

2010-07-10

Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

6. A posteriori error control in numerical simulations of semiconductor nanodevices

Chen, Ren-Chuen; Li, Chun-Hsien; Liu, Jinn-Liang

2016-10-01

A posteriori error estimation and control methods are proposed for a quantum corrected energy balance (QCEB) model that describes electron and hole flows in semiconductor nanodevices under the influence of electrical, diffusive, thermal, and quantum effects. The error estimation is based on the maximum norm a posteriori error estimate developed by Kopteva (2008) for singularly perturbed semilinear reaction-diffusion problems. The error estimate results in three error estimators called the first-, second-, and third-order estimators to guide the refinement process. The second-order estimator is shown to be most effective for adaptive mesh refinement. The QCEB model is scaled to a dimensionless coupled system of seven singularly perturbed semilinear PDEs with various perturbation parameters so that the estimator can be applied to each PDE on equal footing. It is found that the estimator suitable for controlling the approximation error of one PDE (one physical variable) may not be suitable for another PDE, indicating that different parameters account for different boundary or interior layer regions as illustrated by two different semiconductor devices, namely, a diode and a MOSFET. A hybrid approach to automatically choosing different PDEs for calculating the estimator in the adaptive mesh refinement process is shown to be able to control the errors of all PDEs uniformly.

7. Numerical Errors in Coupling Micro- and Macrophysics in the Community Atmosphere Model

Gardner, D. J.; Caldwell, P.; Sexton, J. M.; Woodward, C. S.

2014-12-01

In this study, we investigate numerical errors in version 2 of the Morrison-Gettelman microphysics scheme (MG2) and its coupling to a development version of the macrophysics (condensation/evaporation) scheme used in version 5 of the Community Atmosphere Model (CAM5). Our analysis is performed using a modified version of the Kinematic Driver (KiD) framework, which combines the full macro- and microphysics schemes from CAM5 with idealizations of all other model components. The benefit of this framework is that its simplicity makes diagnosing problems easier and its efficiency allows us to test a variety of numerical schemes. Initial results suggest that numerical convergence requires time steps much shorter than those typically used in CAM5.
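A time-step convergence test of the kind described can be sketched on a toy tendency equation; this stand-in ODE and the forward Euler scheme are illustrative assumptions, not the CAM5/MG2 formulation. For a first-order coupling, halving the step should roughly halve the error:

```python
import numpy as np

# Toy convergence test: integrate dq/dt = -k*q (a stand-in for a
# condensation/evaporation tendency) with forward Euler and check that the
# error falls off at first order as the step is halved.
def integrate(q0, k, dt, t_end):
    q = q0
    for _ in range(int(round(t_end / dt))):
        q = q + dt * (-k * q)
    return q

q0, k, t_end = 1.0, 2.0, 1.0
exact = q0 * np.exp(-k * t_end)

errors = [abs(integrate(q0, k, dt, t_end) - exact)
          for dt in (0.1, 0.05, 0.025, 0.0125)]

# Successive error ratios approach 2 for a first-order scheme.
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(ratios)
```

Plateauing or erratic ratios in such a test are the usual symptom of the non-convergent coupling behavior that frameworks like KiD are designed to expose cheaply.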

8. Integrated analysis of error detection and recovery

NASA Technical Reports Server (NTRS)

Shin, K. G.; Lee, Y. H.

1985-01-01

An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.

9. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

2015-04-01

In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
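The double-precision rounding issue described above can be seen in miniature with plain summation (an illustrative toy, not the GRACE processing chain): naively accumulating many inexact terms drifts away from the correct result, while a compensated sum stays at the correctly rounded value.

```python
import math

# Double-precision accumulation: adding 0.1 one million times naively drifts,
# because 0.1 is not exactly representable and each addition rounds.
n = 1_000_000
terms = [0.1] * n

naive = 0.0
for t in terms:
    naive += t

compensated = math.fsum(terms)   # exactly rounded sum of the same terms

print(naive, compensated)
```

Higher-precision (double-extended or quadruple) arithmetic pushes this floor down rather than removing it, which is why the abstract argues the numerical limits of the orbit determination pipeline must be quantified explicitly.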

10. Measurement error analysis of taxi meter

He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

2011-12-01

The error test of the taximeter covers two aspects: (1) a test of the taximeter's time error, and (2) a distance test of the machine's usage error. The paper first describes the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taximeter, and the detection methods for time error and distance error are discussed as well. Under identical conditions, Type A standard uncertainty components are evaluated from repeated measurements, while under varying conditions, Type B standard uncertainty components are also evaluated. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, which improves accuracy and efficiency considerably. In practice, this not only compensates for the meter's limited accuracy but also ensures a fair transaction between drivers and passengers, reinforcing the value of the taxi as a mode of transportation.
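A repeated-measurement (Type A) uncertainty evaluation of the kind mentioned above can be sketched as follows; the readings are invented for illustration and are not data from JJG 517-2009.

```python
import statistics

# Hypothetical repeated readings of taximeter time error, in seconds.
readings = [0.12, 0.15, 0.11, 0.14, 0.13, 0.12, 0.16, 0.13, 0.14, 0.12]

n = len(readings)
mean = statistics.fmean(readings)
s = statistics.stdev(readings)    # sample standard deviation of the readings
u_a = s / n ** 0.5                # Type A standard uncertainty of the mean

print(round(mean, 4), round(u_a, 4))
```

Type B components would instead be assigned from non-statistical knowledge (instrument specifications, regulation limits) and combined with u_a in quadrature.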

11. TOA/FOA geolocation error analysis.

SciTech Connect

Mason, John Jeffrey

2008-08-01

This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
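Combining multiple position fixes in a "mathematically optimal way" is commonly done by inverse-covariance (information) weighting; whether this matches the paper's exact algorithm is an assumption. A minimal sketch with invented 2-D fixes:

```python
import numpy as np

# Two independent horizontal position fixes (illustrative numbers), fused by
# inverse-covariance weighting; the fused covariance is the inverse of the
# summed information matrices and is never worse than either input.
x1 = np.array([10.2, 4.9])
P1 = np.diag([4.0, 1.0])
x2 = np.array([9.8, 5.3])
P2 = np.diag([1.0, 4.0])

I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
P_fused = np.linalg.inv(I1 + I2)
x_fused = P_fused @ (I1 @ x1 + I2 @ x2)

print(x_fused, np.diag(P_fused))
```

Each coordinate of the fused fix leans toward whichever input measured it more precisely, which is how combining fixes reduces location error in both axes at once.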

12. A numerical algorithm to propagate navigation error covariance matrices associated with generalized strapdown inertial measurement units

NASA Technical Reports Server (NTRS)

Weir, Kent A.; Wells, Eugene M.

1990-01-01

The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10 to the 8th ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.

13. Numeracy, Literacy and Newman's Error Analysis

ERIC Educational Resources Information Center

White, Allan Leslie

2010-01-01

Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…

14. Study of geopotential error models used in orbit determination error analysis

NASA Technical Reports Server (NTRS)

Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

1991-01-01

The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic

15. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

NASA Technical Reports Server (NTRS)

Barth, Timothy J.

2010-01-01

The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty. for these problems.

16. Error Propagation Analysis for Quantitative Intracellular Metabolomics

PubMed Central

Tillack, Jana; Paczia, Nicole; Nöh, Katharina; Wiechert, Wolfgang; Noack, Stephan

2012-01-01

Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of intracellular metabolites, including error propagation during metabolome sample processing. Focusing on one specific protocol, we comprehensively investigate all currently known and accessible factors that ultimately impact the accuracy of intracellular metabolite concentration data. All intermediate steps are modeled, and their uncertainty with respect to the final concentration data is rigorously quantified. Finally, on the basis of a comprehensive metabolome dataset of Corynebacterium glutamicum, an integrated error propagation analysis for all parts of the model is conducted, and the most critical steps for intracellular metabolite quantification are detected. PMID:24957773
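First-order propagation through a chain of multiplicative processing steps, where the relative variances of the factors add, can be sketched as follows; the quantities and uncertainties are invented, not the protocol's actual factors, and a Monte Carlo draw cross-checks the analytic result:

```python
import numpy as np

rng = np.random.default_rng(2)

# Concentration from a chain of processing steps: c = a * f / v
# (peak area a, dilution factor f, extraction volume v; illustrative values).
a, sa = 100.0, 2.0
f, sf = 5.0, 0.1
v, sv = 2.0, 0.05

# First-order propagation: relative variances of multiplicative factors add.
rel_var = (sa / a) ** 2 + (sf / f) ** 2 + (sv / v) ** 2
c = a * f / v
u_c = c * rel_var ** 0.5

# Monte Carlo cross-check of the analytic propagation.
N = 500_000
samples = rng.normal(a, sa, N) * rng.normal(f, sf, N) / rng.normal(v, sv, N)
print(u_c, samples.std())
```

Writing the budget this way also identifies the most critical step: here the volume term contributes the largest relative variance, the same kind of ranking the paper performs across the whole sample-processing chain.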

17. Accumulation of errors in numerical simulations of chemically reacting gas dynamics

Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.

2015-12-01

The aim of the present study is to investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, and the accumulation of errors was examined for simulations employing different computation strategies.

18. Microlens assembly error analysis for light field camera based on Monte Carlo method

Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

2016-08-01

This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling distance error, movement error, and rotation error that can appear during microlens installation. By examining these images, the sub-aperture images, and the refocused images, we found that the images exhibit different degrees of blur and deformation for different microlens assembly errors, while the sub-aperture image exhibits aliasing, obscured regions, and other distortions that result in unclear refocused images.

19. Posterior covariance versus analysis error covariance in variational data assimilation

Shutyaev, Victor; Gejadze, Igor; Le Dimet, Francois-Xavier

2013-04-01

The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition function (analysis) [1]. The data contain errors (observation and background errors), hence there is an error in the analysis. For mildly nonlinear dynamics, the analysis error covariance can be approximated by the inverse Hessian of the cost functional in the auxiliary data assimilation problem [2], whereas for stronger nonlinearity by the 'effective' inverse Hessian [3, 4]. However, it has been noticed that the analysis error covariance is not the posterior covariance from the Bayesian perspective. While the two are equivalent in the linear case, the difference may become significant in practical terms as the level of nonlinearity rises. For the proper Bayesian posterior covariance, a new approximation via the Hessian of the original cost functional is derived and its 'effective' counterpart is introduced. An approach for computing the mentioned estimates in a matrix-free environment using the Lanczos method with preconditioning is suggested. Numerical examples which validate the developed theory are presented for the model governed by the Burgers equation with a nonlinear viscous term. The authors acknowledge the funding through the Natural Environment Research Council (NERC grant NE/J018201/1), the Russian Foundation for Basic Research (project 12-01-00322), the Ministry of Education and Science of Russia, the MOISE project (CNRS, INRIA, UJF, INPG) and Région Rhône-Alpes. References: 1. Le Dimet F.X., Talagrand O. Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus, 1986, v.38A, pp.97-110. 2. Gejadze I., Le Dimet F.-X., Shutyaev V. On analysis error covariances in variational data assimilation. SIAM J. Sci. Computing, 2008, v.30, no.4, pp.1847-1874. 3. Gejadze I.Yu., Copeland G.J.M., Le Dimet F.-X., Shutyaev V. Computation of the analysis error
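The linear-case equivalence noted above, where the analysis error covariance, the inverse Hessian of the cost functional, and the Bayesian posterior covariance all coincide, can be checked in a one-dimensional sketch (a scalar model with invented variances, not the Burgers setup):

```python
# Scalar linear-Gaussian data assimilation with background variance sb^2,
# observation variance so^2, and linear observation operator H.
sigma_b, sigma_o, H = 2.0, 1.0, 1.0

# Hessian of J(x) = (x - xb)^2 / (2 sb^2) + (y - H x)^2 / (2 so^2):
hessian = 1.0 / sigma_b**2 + H**2 / sigma_o**2
analysis_var = 1.0 / hessian      # analysis error variance = inverse Hessian

# Bayesian posterior variance for the same linear-Gaussian problem:
posterior_var = 1.0 / (1.0 / sigma_b**2 + H**2 / sigma_o**2)

print(analysis_var, posterior_var)
```

In the nonlinear case the Hessian becomes state-dependent, the two quantities separate, and the paper's matrix-free Lanczos machinery is what makes either estimate computable at realistic dimensions.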

20. Error Analysis of Modified Langevin Dynamics

Redon, Stephane; Stoltz, Gabriel; Trstanova, Zofia

2016-08-01

We consider Langevin dynamics associated with a modified kinetic energy vanishing for small momenta. This allows us to freeze slow particles, and hence avoid the re-computation of inter-particle forces, which leads to computational gains. On the other hand, the statistical error may increase since there are a priori more correlations in time. The aim of this work is first to prove the ergodicity of the modified Langevin dynamics (which fails to be hypoelliptic), and next to analyze how the asymptotic variance on ergodic averages depends on the parameters of the modified kinetic energy. Numerical results illustrate the approach, both for low-dimensional systems where we resort to a Galerkin approximation of the generator, and for more realistic systems using Monte Carlo simulations.

1. Error analysis of aspheric surface with reference datum.

PubMed

Peng, Yanglin; Dai, Yifan; Chen, Shanyong; Song, Ci; Shi, Feng

2015-07-20

Severe requirements of location tolerance provide new challenges for optical component measurement, evaluation, and manufacture. Form error, location error, and the relationship between form error and location error need to be analyzed together during error analysis of aspheric surface with reference datum. Based on the least-squares optimization method, we develop a least-squares local optimization method to evaluate form error of aspheric surface with reference datum, and then calculate the location error. According to the error analysis of a machined aspheric surface, the relationship between form error and location error is revealed, and the influence on the machining process is stated. In different radius and aperture of aspheric surface, the change laws are simulated by superimposing normally distributed random noise on an ideal surface. It establishes linkages between machining and error analysis, and provides an effective guideline for error correcting.

2. A constrained-gradient method to control divergence errors in numerical MHD

Hopkins, Philip F.

2016-10-01

In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence-free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.

3. Analysis of Atmospheric Delays and Asymmetric Positioning Errors in GPS

Materna, K.; Herring, T.

2014-12-01

Error in accounting for atmospheric delay is one of the most significant limiting factors in the accuracy of GPS position determination. Delay due to tropospheric water vapor is especially difficult to model, as it depends in part on local atmospheric dynamics. Currently, the delay models used in GPS data analysis produce millimeter-level position estimates for most of the stations in the Plate Boundary Observatory (PBO) GPS network. However, certain stations in the network often show large position errors of 10 millimeters or more, and the key characteristic of these errors is that they occur in a particular direction. By analyzing the PBO network for these asymmetric outliers, we found that all affected stations are located in mountainous regions of the United States, and that many are located in the Sierra Nevada Mountains. Furthermore, we found that the direction in which the asymmetric outliers occur is related to the direction of local topographic increase, suggesting that topography plays a role in creating asymmetric outliers. We compared the GPS time series data with several forms of weather data, including radiosonde balloon measurements, numerical weather models, and MODIS satellite imagery. The results suggest that GPS position errors in the Sierra Nevada occur when there is strong atmospheric turbulence, including variations in pressure and humidity, downwind of the mountain crest. Specifically, when GPS position errors occur in the Sierra Nevada, lee waves are likely to be observed over the ridge; however, not all lee wave events produce position errors. Our results suggest that GPS measurements in mountainous regions may be more prone to systematic errors than previously thought due to the formation of lee waves.

4. Meteor radar signal processing and error analysis

Kang, Chunmei

5. Error propagation in the numerical solutions of the differential equations of orbital mechanics

NASA Technical Reports Server (NTRS)

Bond, V. R.

1982-01-01

The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell, Encke, and Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast an element formulation has zero eigenvalues and is numerically stable.
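The error amplification caused by a real positive eigenvalue can be seen in a one-dimensional sketch (a generic linear ODE, not the Cowell or Encke formulations): two nearby initial conditions integrated with forward Euler separate by roughly exp(lambda*t), so any initial or rounding error is magnified by the integration itself.

```python
import math

# Integrate dx/dt = lam * x from two nearby initial conditions and measure
# how their separation grows over the integration interval.
lam, dt, steps = 1.0, 0.001, 2000        # integrate to t = 2

def euler(x0):
    x = x0
    for _ in range(steps):
        x += dt * lam * x
    return x

sep0 = 1e-6
sep_final = abs(euler(1.0 + sep0) - euler(1.0))
growth = sep_final / sep0
print(growth, math.exp(lam * dt * steps))
```

With lam = 0 (the analogue of an element formulation with zero eigenvalues) the separation would stay constant, which is the stability property the abstract attributes to element formulations.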

6. Error analysis for the Fourier domain offset estimation algorithm

Wei, Ling; He, Jieling; He, Yi; Yang, Jinsheng; Li, Xiqi; Shi, Guohua; Zhang, Yudong

2016-02-01

The offset estimation algorithm is crucial for the accuracy of the Shack-Hartmann wave-front sensor. Recently, the Fourier Domain Offset (FDO) algorithm has been proposed for offset estimation. Similar to other algorithms, the accuracy of FDO is affected by noise such as background noise, photon noise, and 'fake' spots. However, no adequate quantitative error analysis has been performed for FDO in previous studies, which is of great importance for practical applications of the FDO. In this study, we quantitatively analysed how the estimation error of FDO is affected by noise based on theoretical deduction, numerical simulation, and experiments. The results demonstrate that the standard deviation of the wobbling error is: (1) inversely proportional to the raw signal to noise ratio, and proportional to the square of the sub-aperture size in the presence of background noise; and (2) proportional to the square root of the intensity in the presence of photonic noise. Furthermore, the upper bound of the estimation error is proportional to the intensity of 'fake' spots and the sub-aperture size. The results of the simulation and experiments agreed with the theoretical analysis.

7. Numerical evaluation of the fidelity error threshold for the surface code

Jouzdani, Pejman; Mucciolo, Eduardo R.

2014-07-01

We study how the resilience of the surface code is affected by the coupling to a non-Markovian environment at zero temperature. The qubits in the surface code experience an effective dynamics due to the coupling to the environment that induces correlations among them. The range of the effective induced qubit-qubit interaction depends on parameters related to the environment and the duration of the quantum error correction cycle. We show numerically that different interaction ranges set different intrinsic bounds on the fidelity of the code. These bounds are unrelated to the error thresholds based on stochastic error models. We introduce a definition of stabilizers based on logical operators that allows us to efficiently implement a Metropolis algorithm to determine upper bounds to the fidelity error threshold.

8. Error margin analysis for feature gene extraction

PubMed Central

2010-01-01

Background Feature gene extraction is a fundamental issue in microarray-based biomarker discovery. It is normally treated as an optimization problem of finding the best predictive feature genes that can effectively and stably discriminate distinct types of disease conditions, e.g. tumors and normals. Since gene microarray data normally involves thousands of genes but only tens or hundreds of samples, the gene extraction process may fall into local optima if the gene set is optimized according to the maximization of classification accuracy of the classifier built from it. Results In this paper, we propose a novel gene extraction method of error margin analysis to optimize the feature genes. The proposed algorithm has been tested upon one synthetic dataset and two real microarray datasets. Meanwhile, it has been compared with five existing gene extraction algorithms on each dataset. On the synthetic dataset, the results show that the feature set extracted by our algorithm is the closest to the actual gene set. For the two real datasets, our algorithm is superior in terms of balancing the size and the validation accuracy of the resultant gene set when compared with the other algorithms. Conclusion Because of its distinct features, the error margin analysis method can stably extract the relevant feature genes from microarray data for high-performance classification. PMID:20459827

9. Introduction to Error Analysis, 2nd Ed. (cloth)

Taylor, John R.

This best-selling text by John Taylor, now released in its second edition, introduces the study of uncertainties to lower division science students. Assuming no prior knowledge, the author introduces error analysis through the use of familiar examples ranging from carpentry to well-known historic experiments. Pertinent worked examples, simple exercises throughout the text, and numerous chapter-ending problems combine to make the book ideal for use in physics, chemistry, and engineering lab courses. The first edition of this book has been translated into six languages.

10. Towards a More Rigorous Analysis of Foreign Language Errors.

ERIC Educational Resources Information Center

Abbott, Gerry

1980-01-01

Presents a precise and detailed process to be used in error analysis. The process is proposed as a means of making research in error analysis more accessible and useful to others, as well as assuring more objectivity. (Author/AMH)

11. Experimental and numerical study of error fields in the CNT stellarator

Hammond, K. C.; Anichowski, A.; Brenner, P. W.; Pedersen, T. S.; Raftopoulos, S.; Traverso, P.; Volpe, F. A.

2016-07-01

Sources of error fields were indirectly inferred in a stellarator by reconciling measured and computed flux surfaces. Sources considered so far include the displacements and tilts of the four circular coils featured in the simple CNT stellarator. The flux surfaces were measured by means of an electron beam and fluorescent rod, and were computed by means of a Biot–Savart field-line tracing code. If the ideal coil locations and orientations are used in the computation, agreement with measurements is poor. Discrepancies are ascribed to errors in the positioning and orientation of the in-vessel interlocked coils. To that end, an iterative numerical method was developed. A Newton–Raphson algorithm searches for the coils’ displacements and tilts that minimize the discrepancy between the measured and computed flux surfaces. This method was verified by misplacing and tilting the coils in a numerical model of CNT, calculating the flux surfaces that they generated, and testing the algorithm’s ability to deduce the coils’ displacements and tilts. Subsequently, the numerical method was applied to the experimental data, arriving at a set of coil displacements whose resulting field errors exhibited significantly improved agreement with the experimental results.
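The Newton–Raphson fitting step can be sketched in miniature. The sketch below is not the authors' code: a hypothetical scalar model sin(x + d) stands in for the Biot–Savart flux-surface computation, and a single parameter d stands in for a coil displacement. Newton–Raphson is applied to the gradient of the squared discrepancy, with derivatives taken numerically.

```python
import math

# Synthetic "measurements" at a few probe points for an assumed true offset d* = 0.3
xs = [0.0, 0.5, 1.0, 1.5]
d_true = 0.3
measured = [math.sin(x + d_true) for x in xs]

def cost(d):
    """Squared discrepancy between model and measured values."""
    return sum((math.sin(x + d) - m) ** 2 for x, m in zip(xs, measured))

def newton_min(d, steps=20, h=1e-4):
    """Newton-Raphson on the gradient of the cost, using finite differences."""
    for _ in range(steps):
        g = (cost(d + h) - cost(d - h)) / (2 * h)          # gradient
        H = (cost(d + h) - 2 * cost(d) + cost(d - h)) / h**2  # curvature
        d -= g / H
    return d

d_fit = newton_min(0.0)
print(round(d_fit, 6))  # ≈ 0.3 (the assumed displacement is recovered)
```

In the actual study the unknowns are the displacements and tilts of all four coils, so the scalar update becomes a multivariate Newton step on a vector of parameters.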

13. Trends in MODIS Geolocation Error Analysis

NASA Technical Reports Server (NTRS)

Wolfe, R. E.; Nishihama, Masahiro

2009-01-01

Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.

14. Second Language Learning: Contrastive Analysis, Error Analysis, and Related Aspects.

ERIC Educational Resources Information Center

Robinett, Betty Wallace, Ed.; Schachter, Jacquelyn, Ed.

This graduate level text on second language learning is divided into three sections. The first two sections provide a survey of the historical underpinnings of second language research in contrastive analysis and error analysis. The third section includes discussions of recent developments in the field. The first section contains articles on the…

15. The Vertical Error Characteristics of GOES-derived Winds: Description and Impact on Numerical Weather Prediction

NASA Technical Reports Server (NTRS)

Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)

2001-01-01

Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the error of the GOES winds range from approx. 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A

16. Pathway Analysis Software: Annotation Errors and Solutions

PubMed Central

Henderson-MacLennan, Nicole K.; Papp, Jeanette C.; Talbot, C. Conover; McCabe, Edward R.B.; Presson, Angela P.

2010-01-01

Genetic databases contain a variety of annotation errors that often go unnoticed due to the large size of modern genetic data sets. Interpretation of these data sets requires bioinformatics tools that may contribute to this problem. While providing gene symbol annotations for identifiers (IDs) such as microarray probeset, RefSeq, GenBank and Entrez Gene is seemingly trivial, the accuracy is fundamental to any subsequent conclusions. We examine gene symbol annotations and results from three commercial pathway analysis software (PAS) packages: Ingenuity Pathways Analysis, GeneGO and Pathway Studio. We compare gene symbol annotations and canonical pathway results over time and among different input ID types. We find that PAS results can be affected by variation in gene symbol annotations across software releases and the input ID type analyzed. As a result, we offer suggestions for using commercial PAS and reporting microarray results to improve research quality. We propose a wiki type website to facilitate communication of bioinformatics software problems within the scientific community. PMID:20663702

17. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2013-01-01

The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
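The kind of data analysis the article describes can be sketched directly: estimate the order of convergence p from successive iteration errors, using the model e_{n+1} ≈ C·e_n^p. Newton's method for √2 is used here as a hypothetical example of a quadratically convergent iteration.

```python
import math

def newton_sqrt2(x0, n):
    """Newton iterates for f(x) = x^2 - 2; the root is sqrt(2)."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - (x * x - 2) / (2 * x))
    return xs

errs = [abs(x - math.sqrt(2)) for x in newton_sqrt2(3.0, 5)]

# Fit the convergence order from three successive errors:
# e_{n+1} ≈ C e_n^p  =>  p ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1})
p = math.log(errs[4] / errs[3]) / math.log(errs[3] / errs[2])
print(round(p, 2))  # close to 2, i.e. quadratic convergence
```

A linearly convergent iteration (e.g. fixed-point iteration) would give p ≈ 1 by the same estimate, and a cubically convergent one p ≈ 3.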

18. Sieve Estimation of Constant and Time-Varying Coefficients in Nonlinear Ordinary Differential Equation Models by Considering Both Numerical Error and Measurement Error

PubMed Central

Xue, Hongqi; Miao, Hongyu; Wu, Hulin

2010-01-01

This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge–Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n−1/(p∧4), the numerical error is negligible compared to the measurement error. This result provides a theoretical guidance in selection of the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic co-variance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics. PMID:21132064
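The role of the step size of a p-order numerical algorithm can be illustrated with a small sketch (hypothetical test problem dy/dt = −y): for classical Runge–Kutta (p = 4), halving the step size should shrink the global numerical error by about 2⁴ = 16, which is why the step size governs whether numerical error is negligible relative to measurement error.

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classical 4th-order Runge-Kutta with n fixed steps."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: -y                # exact solution y(t) = e^{-t}
exact = math.exp(-1.0)
e1 = abs(rk4(f, 1.0, 0.0, 1.0, 10) - exact)   # step size h
e2 = abs(rk4(f, 1.0, 0.0, 1.0, 20) - exact)   # step size h/2
print(e1 / e2)  # ≈ 16, confirming 4th-order global error
```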

19. On the continuum-scale simulation of gravity-driven fingers with hysteretic Richards equation: Truncation error induced numerical artifacts

SciTech Connect

ELIASSI,MEHDI; GLASS JR.,ROBERT J.

2000-03-08

The authors consider the ability of the numerical solution of Richards equation to model gravity-driven fingers. Although gravity-driven fingers can be easily simulated using a partial downwind averaging method, they find the fingers are purely artificial, generated by the combined effects of truncation error induced oscillations and capillary hysteresis. Since Richards equation can only yield a monotonic solution for standard constitutive relations and constant flux boundary conditions, it is not the valid governing equation to model gravity-driven fingers, and therefore is also suspect for unsaturated flow in initially dry, highly nonlinear, and hysteretic media where these fingers occur. However, analysis of truncation error at the wetting front for the partial downwind method suggests the required mathematical behavior of a more comprehensive and physically based modeling approach for this region of parameter space.

20. An Error Analysis of Elementary School Children's Number Production Abilities

ERIC Educational Resources Information Center

Skwarchuk, Sheri-Lynn; Betts, Paul

2006-01-01

Translating numerals into number words is a tacit task requiring linguistic and mathematical knowledge. This project expanded on previous number production models by examining developmental differences in children's number naming errors. Ninety-six children from grades one, three, five, and seven translated a random set of numerals into number…

1. Solar Tracking Error Analysis of Fresnel Reflector

PubMed Central

Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie

2014-01-01

Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was derived under the eccentric condition. By analyzing how the main factors influence the sun-tracking rotation angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by the difference between the rotation axis and the true north meridian is, under certain conditions, maximal at noon and decreases gradually toward morning and afternoon. The tracking error caused by other deviations, such as rotating eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon. PMID:24895664

2. Analysis of thematic map classification error matrices.

USGS Publications Warehouse

Rosenfield, G.H.

1986-01-01

The classification error matrix expresses the counts of agreement and disagreement between the classified categories and their verification. Thematic mapping experiments compare variables such as multiple photointerpretation or scales of mapping, and produce one or more classification error matrices. This paper presents a tutorial to implement a typical problem of a remotely sensed data experiment for solution by the linear model method.-from Author
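The counts of agreement and disagreement described above can be summarized with a few standard statistics. The sketch below uses a hypothetical 3-category error matrix (rows: classified category, columns: verification category) and computes overall, user's, and producer's accuracy.

```python
# Hypothetical classification error matrix:
# rows = classified categories, columns = verification (reference) categories.
matrix = [
    [50,  3,  2],
    [ 4, 40,  6],
    [ 1,  2, 42],
]

total = sum(sum(row) for row in matrix)
correct = sum(matrix[i][i] for i in range(len(matrix)))
overall_accuracy = correct / total

# User's accuracy: fraction of each classified row verified as correct.
users = [matrix[i][i] / sum(matrix[i]) for i in range(len(matrix))]
# Producer's accuracy: fraction of each reference column correctly classified.
producers = [matrix[i][i] / sum(row[i] for row in matrix)
             for i in range(len(matrix))]

print(round(overall_accuracy, 3))  # 0.88
```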

3. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

NASA Technical Reports Server (NTRS)

Prive, N. C.; Errico, Ronald M.

2015-01-01

The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASAGMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

4. Numerical likelihood analysis of cosmic ray anisotropies

SciTech Connect

Carlos Hojvat et al.

2003-07-02

A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.

5. Asteroid orbital error analysis: Theory and application

NASA Technical Reports Server (NTRS)

Muinonen, K.; Bowell, Edward

1992-01-01

We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
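The law of error propagation used here has a simple linearized form: if the element errors at one epoch have covariance C and the (linearized) propagation map has Jacobian A, the propagated covariance is C' = A C Aᵀ. A minimal sketch with hypothetical 2×2 numbers:

```python
# Linearized covariance propagation C' = A C A^T (hypothetical numbers).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

C = [[0.04, 0.01],
     [0.01, 0.09]]   # covariance of two orbital elements at the epoch
A = [[1.0, 2.0],
     [0.0, 1.0]]     # Jacobian of the linearized propagation map

C_new = matmul(matmul(A, C), transpose(A))
print(C_new)         # remains symmetric; diagonal gives propagated variances
```

The uncertainty ellipsoid at the new epoch is then read off from the eigenstructure of C'.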

6. Numerical Package in Computer Supported Numeric Analysis Teaching

ERIC Educational Resources Information Center

Tezer, Murat

2007-01-01

At universities in the faculties of Engineering, Sciences, Business and Economics together with higher education in Computing, it is stated that because of the difficulty, calculators and computers can be used in Numerical Analysis (NA). In this study, the learning computer supported NA will be discussed together with important usage of the…

7. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

NASA Technical Reports Server (NTRS)

Fiske, David R.

2004-01-01

In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

8. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

SciTech Connect

Beckerman, M.; Jones, J.P.

1999-02-01

We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval decline the along-track prediction errors, and amplitudes of the radial and cross-track errors, increase.

9. Size and Shape Analysis of Error-Prone Shape Data

PubMed Central

Du, Jiejun; Dryden, Ian L.; Huang, Xianzheng

2015-01-01

We consider the problem of comparing sizes and shapes of objects when landmark data are prone to measurement error. We show that naive implementation of ordinary Procrustes analysis that ignores measurement error can compromise inference. To account for measurement error, we propose the conditional score method for matching configurations, which guarantees consistent inference under mild model assumptions. The effects of measurement error on inference from naive Procrustes analysis and the performance of the proposed method are illustrated via simulation and application in three real data examples. Supplementary materials for this article are available online. PMID:26109745

10. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

NASA Technical Reports Server (NTRS)

Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

1997-01-01

We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a

11. A Numerical Study of Some Potential Sources of Error in Side-by-Side Seismometer Evaluations

USGS Publications Warehouse

Holcomb, L. Gary

1990-01-01

INTRODUCTION This report presents the results of a series of computer simulations of potential errors in test data which might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating the potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNR's). This report extends this analysis to high SNR's to determine the limitations of the direct method for calculating noise levels at signal-to-noise levels much higher than presented previously (see Holcomb 1989). Next, the method is used to analyze a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of bandshaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real-world data because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and bandshaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real-world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that

12. Trial application of a technique for human error analysis (ATHEANA)

SciTech Connect

Bley, D.C.; Cooper, S.E.; Parry, G.W.

1996-10-01

The new method for HRA, ATHEANA, has been developed based on a study of the operating history of serious accidents and an understanding of the reasons why people make errors. Previous publications associated with the project have dealt with the theoretical framework under which errors occur and the retrospective analysis of operational events. This is the first attempt to use ATHEANA in a prospective way, to select and evaluate human errors within the PSA context.

13. Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents

PubMed Central

Porter, Evelyn; Barcega, Besh; Kim, Tommy Y.

2014-01-01

Introduction The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of a prescribing medication error. We compared univariate analysis of each variable to medication error rate and performed a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. Results We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7% – 39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping greater than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio of 0.09 (95% CI 0.01 – 0.64). Conclusion Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees. PMID:25035756

14. An Introduction to Error Analysis for Quantitative Chemistry

ERIC Educational Resources Information Center

Neman, R. L.

1972-01-01

Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric volumetric analysis and indicates their possible applications to other fields of science. (CC)
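Formulas of the kind described typically propagate independent instrument errors in quadrature: for a derived quantity such as a concentration c = m / V, the relative errors of the mass and volume measurements add in quadrature. A minimal sketch (all numbers are hypothetical instrument specifications, not from the article):

```python
import math

# Hypothetical instrument limitations for a gravimetric/volumetric determination:
m, dm = 0.5000, 0.0002   # mass in g, balance uncertainty
V, dV = 25.00, 0.03      # volume in mL, buret uncertainty

c = m / V                # derived concentration, g/mL
# For c = m / V, independent relative errors add in quadrature:
rel = math.sqrt((dm / m) ** 2 + (dV / V) ** 2)
dc = c * rel             # absolute uncertainty in c

print(round(rel * 100, 2))  # relative error of c, in percent
```

Note that the larger relative error (here the volume reading) dominates the result, which is the usual practical lesson of such formulas.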

15. Error analysis of large aperture static interference imaging spectrometer

Li, Fan; Zhang, Guo

2015-12-01

Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a lightweight structure, high spectral linearity, high luminous flux, and a wide spectral range. It overcomes the contradiction between high flux and high stability, which gives it important value in scientific studies and applications. However, because LASIS images in a different style from traditional imaging spectrometers, its errors follow different laws in the imaging process, and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographic surface features, the error laws of LASIS imaging need to be studied. In this paper, LASIS errors are classified as interferogram error, radiometric correction error, and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined time and space modulation is examined and analyzed experimentally, together with the errors from the processes of radiometric correction and spectral inversion.

16. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

NASA Technical Reports Server (NTRS)

Nicholson, Mark; Markley, F.; Seidewitz, E.

1988-01-01

The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

17. Data Analysis & Statistical Methods for Command File Errors

NASA Technical Reports Server (NTRS)

Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

2014-01-01

This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates they can explain. We also used goodness-of-fit testing strategies and principal component analysis to further assess the data. Finally, we constructed a model of expected error rates based on what these statistics identified as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

18. Retrieval, Analysis, and Display of Numeric Data.

ERIC Educational Resources Information Center

Berger, Mary C.; Wanger, Judith

1982-01-01

This introduction to online numeric database systems describes the types of databases associated with such systems, shows the major functions which they can perform (retrieval, analysis, display), and identifies the major characteristics of user interfaces. Examples of numeric database use are appended. (EJS)

19. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

NASA Technical Reports Server (NTRS)

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

2013-01-01

20. Predictive error analysis for a water resource management model

Gallagher, Mark; Doherty, John

2007-02-01

In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real-world water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of the models commonly employed as the basis for important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

1. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

NASA Technical Reports Server (NTRS)

Kaneko, Hideaki; Bey, Kim S.

2004-01-01

The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

2. Numerical solutions and error estimations for the space fractional diffusion equation with variable coefficients via Fibonacci collocation method.

PubMed

Bahşı, Ayşe Kurt; Yalçınbaş, Salih

2016-01-01

In this study, the Fibonacci collocation method, based on Fibonacci polynomials, is presented to solve fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; using this representation of the fractional derivative, the equation can be reduced to a set of linear algebraic equations. An error estimation algorithm based on residual functions is also presented, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function can be computed approximately from the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions that are more efficient than the direct numerical solutions. Numerical examples, figures, tables, and comparisons are presented to show the efficiency and usability of the proposed method. PMID:27610294
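The residual-based error estimation idea in this abstract can be illustrated with a minimal sketch. For simplicity it uses a truncated Taylor basis for the model problem u' = u, u(0) = 1, as a hypothetical stand-in for the paper's Fibonacci basis and fractional operator: plugging the polynomial approximation back into the equation yields a computable residual that tracks the error without knowledge of the exact solution.

```python
import math

def taylor_exp(coeff_count, x):
    """Truncated Taylor polynomial for exp(x): a stand-in for the
    polynomial (e.g. Fibonacci-basis) approximate solution of u' = u."""
    return sum(x ** k / math.factorial(k) for k in range(coeff_count))

def residual(coeff_count, x):
    """Residual of the approximate solution in the ODE u' - u = 0.
    Plugging the truncated series back in leaves only the dropped term,
    so the residual is computable without knowing the exact solution."""
    deriv = sum(k * x ** (k - 1) / math.factorial(k)
                for k in range(1, coeff_count))
    return deriv - taylor_exp(coeff_count, x)
```

For a degree-(N-1) truncation the residual is exactly -x^(N-1)/(N-1)!, so it shrinks as terms are added, mirroring how the paper's residual function drives its solution-improvement step.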

3. Error Analysis and Trajectory Correction Maneuvers of Lunar Transfer Orbit

Zhao, Yu-hui; Hou, Xi-yun; Liu, Lin

2013-10-01

For a returnable lunar probe, this paper studies the characteristics of both the Earth-Moon transfer orbit and the return orbit. On the basis of the error propagation matrix, the linear equation to estimate the first midcourse trajectory correction maneuver (TCM) is derived. Numerical simulations are performed, and the features of error propagation in lunar transfer orbit are given. The advantages, disadvantages, and applications of two TCM strategies are discussed, and the computation of the second TCM of the return orbit is also simulated under the conditions at the reentry time.

4. Error Analysis and Trajectory Correction Maneuvers of Lunar Transfer Orbit

Zhao, Y. H.; Hou, X. Y.; Liu, L.

2013-05-01

For the sample return lunar missions and human lunar exploration, this paper studies the characteristics of both the Earth-Moon transfer orbit and the return orbit. On the basis of the error propagation matrix, the linear equation to estimate the first midcourse trajectory correction maneuver (TCM) is derived. Numerical simulations are performed, and the features of error propagation in lunar transfer orbit are given. The advantages, disadvantages, and applications of two TCM strategies are discussed, and the computation of the second TCM of the return orbit is also simulated under the conditions at the reentry time.

5. Abundance recovery error analysis using simulated AVIRIS data

NASA Technical Reports Server (NTRS)

Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.

1992-01-01

Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed-form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario modeled as being derived from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data.
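The contrast the abstract draws between Monte Carlo simulation and closed-form covariance analysis can be sketched with a toy one-endmember linear mixing model; the model, noise level, and function names below are illustrative assumptions, not the authors' setup.

```python
import random

def abundance_var_closed_form(s, sigma):
    """Covariance-analysis result: variance of the least-squares abundance
    estimate for a single endmember spectrum s under white noise of
    standard deviation sigma. One closed-form evaluation, no trials."""
    ss = sum(x * x for x in s)
    return sigma ** 2 / ss

def abundance_var_monte_carlo(s, sigma, a_true=0.3, trials=20000, seed=1):
    """Monte Carlo estimate of the same variance: simulate noisy spectra,
    recover the abundance each time, and take the sample variance."""
    rng = random.Random(seed)
    ss = sum(x * x for x in s)
    estimates = []
    for _ in range(trials):
        m = [a_true * x + rng.gauss(0.0, sigma) for x in s]  # noisy spectrum
        estimates.append(sum(x * y for x, y in zip(s, m)) / ss)
    mean = sum(estimates) / trials
    return sum((a - mean) ** 2 for a in estimates) / (trials - 1)
```

The two numbers agree, but the Monte Carlo route needs thousands of trials to converge, which is exactly the computational burden the covariance approach removes.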

6. Computerized analysis of error patterns in digit span recall.

PubMed

Woods, David L; Herron, T J; Yund, E W; Hink, R F; Kishiyama, M M; Reed, Bruce

2011-08-01

We analyzed error patterns during digit span (DS) testing in four experiments. In Experiment 1, error patterns analyzed from a community sample of 427 subjects revealed strong primacy and recency effects. Subjects with shorter DSs showed an increased incidence of transposition errors in comparison with other error types and a greater incidence of multiple errors on incorrect trials. Experiment 2 investigated 46 young subjects in three test sessions. The results replicated those of Experiment 1 and demonstrated that error patterns of individual subjects were consistent across repeated test administrations. Experiment 3 investigated 40 subjects from Experiment 2 who feigned symptoms of traumatic brain injury (TBI) with 80% of malingering subjects producing digit spans in the abnormal range. A digit span malingering index (DSMI) was developed to detect atypical error patterns in malingering subjects. Overall, 59% of malingering subjects with abnormal digit spans showed DSMIs in the abnormal range and DSMI values correlated significantly with the magnitude of malingering. Experiment 4 compared 29 patients with TBI with a new group of 38 control subjects. The TBI group showed significant reductions in digit span. Overall, 32% of the TBI patients showed DS abnormalities and 11% showed abnormal DSMIs. Computerized error-pattern analysis improves the sensitivity of DS assessment and can assist in the detection of malingering.
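A minimal sketch of the kind of per-trial error classification described above, distinguishing transposition errors (a correct digit recalled in the wrong position) from other error types. The scoring rule here is an illustrative simplification, not the paper's exact algorithm.

```python
def classify_recall_errors(target, response):
    """Classify per-position recall errors in a digit-span trial.

    A position counts as a 'transposition' if the recalled digit appears
    elsewhere in the target list (item retained, order lost), and as an
    'item' error otherwise. Matching positions are 'correct'.
    """
    labels = []
    for i, digit in enumerate(response):
        if i < len(target) and digit == target[i]:
            labels.append("correct")
        elif digit in target:
            labels.append("transposition")
        else:
            labels.append("item")
    return labels
```

Aggregating such labels over many trials gives the error-type incidences (e.g. the elevated transposition rates in short-span subjects) that the paper's computerized analysis reports.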

7. Sensitivity analysis of geometric errors in additive manufacturing medical models.

PubMed

Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

2015-03-01

Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

8. Error Analysis of Variations on Larsen's Benchmark Problem

SciTech Connect

Azmy, YY

2001-06-27

Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem differ in their incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; and unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal (AHOT-N) and Characteristic (AHOT-C) types. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value; then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L∞ norm does not, owing to solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods in spite of the fact that AHOT-C is locally exact, suggesting that numerical diffusion across the singular characteristic is the major source of error on the global scale. However, AHOT-C achieves a given accuracy in a larger fraction of computational cells than DD.
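The three error norms used in this study can be computed from the cell-wise errors in a few lines. This sketch assumes a uniform mesh with a common cell area; the function name is illustrative.

```python
import math

def error_norms(computed, exact, cell_area=1.0):
    """Cell-wise error norms for comparing transport solutions.

    computed, exact: lists of cell-averaged fluxes on the same mesh.
    Returns (L1, L2, Linf) norms of the pointwise error.
    """
    errs = [abs(c - e) for c, e in zip(computed, exact)]
    l1 = sum(e * cell_area for e in errs)                 # integral of |error|
    l2 = math.sqrt(sum(e * e * cell_area for e in errs))  # RMS-type norm
    linf = max(errs)                                      # worst single cell
    return l1, l2, linf
```

Tracking all three under mesh refinement exposes exactly the behavior reported here: the integral norms L1 and L2 shrink, while L∞ can stall on the cells straddling a solution discontinuity.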

9. Error control in the GCF: An information-theoretic model for error analysis and coding

NASA Technical Reports Server (NTRS)

1974-01-01

The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.
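A standard information-theoretic model of the bursty error patterns such a study characterizes is the two-state Gilbert-Elliott channel, in which a hidden Markov state switches between a low-error "good" state and a high-error "bad" state. This sketch is a generic illustration with made-up parameters, not the GCF model from the report.

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.3,
                    err_good=0.0005, err_bad=0.3, seed=3):
    """Simulate a two-state Markov (Gilbert-Elliott) burst-error channel.

    p_gb: probability of switching good -> bad per bit.
    p_bg: probability of switching bad -> good per bit.
    Returns a list of booleans marking which bits were corrupted.
    """
    rng = random.Random(seed)
    bad = False
    errors = []
    for _ in range(n_bits):
        # Markov state update: errors cluster while the channel stays bad.
        bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
        errors.append(rng.random() < (err_bad if bad else err_good))
    return errors
```

Fitting such a model's parameters to measured error logs yields the error pattern distributions (burst lengths, gap lengths) that drive the choice between forward error correction and feedback retransmission.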

10. Application of human error analysis to aviation and space operations

SciTech Connect

Nelson, W.R.

1998-03-01

For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

11. Enhanced orbit determination filter sensitivity analysis: Error budget development

NASA Technical Reports Server (NTRS)

Estefan, J. A.; Burkhart, P. D.

1994-01-01

An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

12. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

ERIC Educational Resources Information Center

Sarcevic, Aleksandra

2009-01-01

An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

13. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

ERIC Educational Resources Information Center

Zhang, Guangjian; Browne, Michael W.

2010-01-01

Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

14. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

PubMed

Zollanvari, Amin; Genton, Marc G

2013-08-01

We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, neither of which has been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to recover several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the resulting finite sample approximations in situations where the number of dimensions is comparable to or even larger than the sample size.
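The gap between an error estimator and the actual error rate of a fitted rule can be illustrated with a one-dimensional toy version of this setting (known unit variance, equal priors). The plug-in estimator below is a simplification of those studied in the paper, and the means, sample size, and seed are arbitrary choices.

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lda_errors(mu0=0.0, mu1=1.5, n=30, seed=5):
    """1-D LDA with known unit variance: compare the plug-in error
    estimate with the actual error rate of the fitted rule.
    Returns (plug_in_estimate, actual_error_rate)."""
    rng = random.Random(seed)
    m0 = sum(rng.gauss(mu0, 1.0) for _ in range(n)) / n  # class-0 sample mean
    m1 = sum(rng.gauss(mu1, 1.0) for _ in range(n)) / n  # class-1 sample mean
    plug_in = phi(-abs(m1 - m0) / 2.0)      # estimated error from sample means
    t = (m0 + m1) / 2.0                     # fitted decision boundary
    # Actual error of the fitted boundary against the true distributions,
    # averaged over the two equally likely classes.
    actual = 0.5 * (phi(t - mu1) + 1.0 - phi(t - mu0))
    return plug_in, actual
```

The plug-in estimate is computed from the same samples that built the rule, so its bias relative to the actual error is precisely the kind of first-moment quantity the paper derives asymptotically.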

15. Simple Numerical Analysis of Longboard Speedometer Data

ERIC Educational Resources Information Center

Hare, Jonathan

2013-01-01

Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…

16. Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries

SciTech Connect

Santamaria, L.; Ohme, F.; Dorband, N.; Moesta, P.; Robinson, E. L.; Krishnan, B.; Ajith, P.; Bruegmann, B.; Hannam, M.; Husa, S.; Pollney, D.; Reisswig, C.; Seiler, J.

2010-09-15

We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical relativity based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criterion based on minimizing these errors; we find that the dominant source of error is the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational-wave searches.

17. SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE

NASA Technical Reports Server (NTRS)

Costello, F. A.

1994-01-01

The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third. The computations follow for the rest of the system, back to the first component.

18. Linear error analysis of slope-area discharge determinations

USGS Publications Warehouse

Kirby, W.H.

1987-01-01

The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis, using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors, with weights that depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding, and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill.
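The "weighted sum of covariances" described above is first-order (Taylor-series) variance propagation, Var(Q) ≈ gᵀΣg, where g holds the sensitivities of the computed quantity to each input and Σ is the input error covariance matrix. The sketch below is a generic illustration of that formula, not the slope-area discharge formula itself.

```python
def propagated_variance(grad, cov):
    """First-order (Taylor-series) error variance of a computed quantity Q.

    grad: sensitivities dQ/dx_i evaluated at the nominal inputs.
    cov:  covariance matrix of the input measurement errors.
    Returns grad^T . cov . grad, i.e. the variance of Q as a weighted
    sum of covariances, with weights set by the sensitivities.
    """
    n = len(grad)
    return sum(grad[i] * cov[i][j] * grad[j]
               for i in range(n) for j in range(n))
```

The off-diagonal covariance terms matter: positively correlated input errors (e.g. two readings from the same gage) inflate the propagated variance beyond the independent-error result.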

19. The influence of observation errors on analysis error and forecast skill investigated with an observing system simulation experiment

Privé, N. C.; Errico, R. M.; Tai, K.-S.

2013-06-01

The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.

20. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

NASA Technical Reports Server (NTRS)

Prive, N. C.; Errico, R. M.; Tai, K.-S.

2013-01-01

The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.

1. Geometric error analysis for shuttle imaging spectrometer experiment

NASA Technical Reports Server (NTRS)

Wang, S. J.; Ih, C. H.

1984-01-01

The demand of more powerful tools for remote sensing and management of earth resources steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

2. Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks

Johnson, Joseph

2016-03-01

We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called "metanumbers") support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on converting scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a "supernet" of all numerical information, supporting new initiatives in AI.

3. Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods

2013-03-01

In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy of two kinds of finite element methods. In this case, this allowed us to sharpen the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.

4. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

PubMed

McLaughlin, Douglas B

2012-01-01

The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
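The decision-error-rate idea can be sketched with a Monte Carlo simulation over a hypothetical log-linear TP/chlorophyll-a relationship. All coefficients, the TP distribution, and the error-rate labels below are illustrative assumptions, not USEPA's fitted values or the paper's exact definitions.

```python
import math
import random

def decision_error_rates(tp_criterion, chla_threshold=20.0,
                         b0=0.2, b1=0.9, resid_sd=0.3,
                         trials=50000, seed=7):
    """Monte Carlo sketch of nutrient-criterion decision errors.

    Assumes an illustrative log-linear stressor-response model
        ln(chl-a) = b0 + b1 * ln(TP) + N(0, resid_sd).
    'false_pass': TP meets the criterion but chl-a exceeds the threshold.
    'false_fail': TP violates the criterion but chl-a is below it.
    Returns the two conditional error rates.
    """
    rng = random.Random(seed)
    false_pass = false_fail = 0
    n_pass = n_fail = 0
    for _ in range(trials):
        tp = rng.uniform(5.0, 100.0)  # hypothetical lake TP, ug/L
        chla = math.exp(b0 + b1 * math.log(tp) + rng.gauss(0.0, resid_sd))
        if tp <= tp_criterion:
            n_pass += 1
            false_pass += chla > chla_threshold
        else:
            n_fail += 1
            false_fail += chla <= chla_threshold
    return false_pass / n_pass, false_fail / n_fail
```

Sweeping `tp_criterion` over a candidate range traces out the trade-off between the two error rates, which is the balancing act the abstract describes for choosing a criterion within a specified range.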

5. A case of error disclosure: a communication privacy management analysis.

PubMed

Petronio, Sandra; Helft, Paul R; Child, Jeffrey T

2013-12-01

To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way choices are made by the clinicians in telling patients about the mistake have the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices of revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information

7. A Case of Error Disclosure: A Communication Privacy Management Analysis

PubMed Central

Petronio, Sandra; Helft, Paul R.; Child, Jeffrey T.

2013-01-01

To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio’s theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way choices are made by the clinicians in telling patients about the mistake have the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices of revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient’s family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information

8. Error analysis for momentum conservation in Atomic-Continuum Coupled Model

Yang, Yantao; Cui, Junzhi; Han, Tiansi

2016-08-01

Atomic-Continuum Coupled Model (ACCM) is a multiscale computation model proposed by Xiang et al. (in IOP Conference Series: Materials Science and Engineering, 2010), which is used to study and simulate dynamics and thermal-mechanical coupling behavior of crystal materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, to implement the computation of the ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error of the momentum conservation equation introduced by the ACCM, and derive a sequence of inequalities that bound the error. A numerical experiment is carried out to verify our results.

9. Identification of human errors of commission using Sneak Analysis

SciTech Connect

Hahn, H.A.; deVries, J.A. II.

1991-01-01

Sneak Analysis was adapted for use in identifying human errors of commission. Flow diagrams were developed to guide the analyst through a series of questions aimed at locating sneak paths, sneak indications, sneak labels, and sneak timing. An illustration of the application of this methodology in a nuclear environment is given and a computerized tool to support Sneak Analysis is described. A nuclear power plant loss of coolant accident is used as the example of sneak analysis of reactor safety. 8 figs.

10. Numerical Analysis of Robust Phase Estimation

Rudinger, Kenneth; Kimmel, Shelby

Robust phase estimation (RPE) is a new technique for estimating rotation angles and axes of single-qubit operations, steps necessary for developing useful quantum gates [arXiv:1502.02677]. As RPE only diagnoses a few parameters of a set of gate operations while at the same time achieving Heisenberg scaling, it requires relatively few resources compared to traditional tomographic procedures. In this talk, we present numerical simulations of RPE that show both Heisenberg scaling and robustness against state preparation and measurement errors, while also demonstrating numerical bounds on the procedure's efficacy. We additionally compare RPE to gate set tomography (GST), another Heisenberg-limited tomographic procedure. While GST provides a full gate set description, it is more resource-intensive than RPE, leading to potential tradeoffs between the procedures. We explore these tradeoffs and numerically establish criteria to guide experimentalists in deciding when to use RPE or GST to characterize their gate sets. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

11. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

Rieder, M. J.; Kirchengast, G.

2001-01-01

An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating and straight-line propagation of occulted signals as we focus on heights of 50-100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed, respectively, diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated intensity) measurement noise at 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50-100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with unprecedented accuracy and
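The covariance-based propagation described above can be sketched for a single retrieval step via the linearized rule Sigma_out = J Sigma_in J^T. The toy Beer-Lambert inversion, cross section, and per-sample noise figure below are assumptions for illustration, not the SMAS instrument model.

```python
import numpy as np

# Linearized error propagation through one retrieval step: Sigma_out = J Sigma_in J^T.
# Toy Beer-Lambert inversion, N = -ln(I / I0) / sigma_abs, with assumed numbers.
sigma_abs = 1e-21                        # absorption cross section, cm^2 (assumed)
I0 = 1.0                                 # unattenuated intensity
I = np.array([0.8, 0.5, 0.2])            # transmitted intensities at three heights
Sigma_I = np.diag((0.0003 * I0) ** 2 * np.ones(3))  # 0.03% diamond-diode noise, uncorrelated

N = -np.log(I / I0) / sigma_abs          # retrieved column densities
J = np.diag(-1.0 / (sigma_abs * I))      # Jacobian dN/dI (diagonal for this step)
Sigma_N = J @ Sigma_I @ J.T              # propagated covariance of the densities

rel_err = np.sqrt(np.diag(Sigma_N)) / N  # relative 1-sigma density errors
print(rel_err)
```

Chaining such Jacobian products through each retrieval step (density to pressure to temperature) carries the full covariance matrix, not just the variances, which is what gives the improved insight the abstract mentions.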

12. Analysis of possible systematic errors in the Oslo method

SciTech Connect

Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

2011-03-15

In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and {gamma}-ray transmission coefficient from a set of particle-{gamma} coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

ERIC Educational Resources Information Center

Sass, Daniel A.

2010-01-01

Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

14. A numerical study of geometry dependent errors in velocity, temperature, and density measurements from single grid planar retarding potential analyzers

SciTech Connect

Davidson, R. L.; Earle, G. D.; Heelis, R. A.; Klenzing, J. H.

2010-08-15

Planar retarding potential analyzers (RPAs) have been utilized numerous times on high profile missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellite Program to measure plasma composition, temperature, density, and the velocity component perpendicular to the plane of the instrument aperture. These instruments use biased grids to approximate ideal biased planes. These grids introduce perturbations in the electric potential distribution inside the instrument and when unaccounted for cause errors in the measured plasma parameters. Traditionally, the grids utilized in RPAs have been made of fine wires woven into a mesh. Previous studies on the errors caused by grids in RPAs have approximated woven grids with a truly flat grid. Using a commercial ion optics software package, errors in inferred parameters caused by both woven and flat grids are examined. A flat grid geometry shows the smallest temperature and density errors, while the double thick flat grid displays minimal errors for velocities over the temperature and velocity range used. Wire thickness along the dominant flow direction is found to be a critical design parameter in regard to errors in all three inferred plasma parameters. The results shown for each case provide valuable design guidelines for future RPA development.

15. An analysis of pilot error-related aircraft accidents

NASA Technical Reports Server (NTRS)

Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.

1974-01-01

A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable; cluster analysis, an exploratory research tool that will lead to increased understanding, improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses; and pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.

16. How psychotherapists handle treatment errors – an ethical analysis

PubMed Central

2013-01-01

Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

17. Position determination and measurement error analysis for the spherical proof mass with optical shadow sensing

Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin

2016-09-01

To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS) as the key payload needs to offer the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for the GRS with a spherical proof mass is addressed. Firstly, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution to the three-dimensional position can be attained. Thirdly, with the assumption of Gaussian beams, the error propagation models for the variation of spot size and optical power, the effect of beam divergence, the chattering of beam center, and the deviation of beam direction are given respectively. Finally, the numerical simulations, taking into account the model uncertainty of beam divergence, spherical edge, and beam diffraction, are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy which is better than 20%. Moreover, the simulation for the three-dimensional position determination with one of the proposed measurement systems shows that the position error is just comparable to the error of the output of each sensor.

18. Doctors' duty to disclose error: a deontological or Kantian ethical analysis.

PubMed

Bernstein, Mark; Brown, Barry

2004-05-01

Medical (surgical) error is being talked about more openly and besides being the subject of retrospective reviews, is now the subject of prospective research. Disclosure of error has been a difficult issue because of fear of embarrassment for doctors in the eyes of their peers, and fear of punitive action by patients, consisting of medicolegal action and/or complaints to doctors' governing bodies. This paper examines physicians' and surgeons' duty to disclose error, from an ethical standpoint; specifically by applying the moral philosophical theory espoused by Immanuel Kant (ie. deontology). The purpose of this discourse is to apply moral philosophical analysis to a delicate but important issue which will be a matter all physicians and surgeons will have to confront, probably numerous times, in their professional careers. PMID:15198440

19. Ferrofluids: Modeling, numerical analysis, and scientific computation

Tomas, Ignacio

This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

20. A propagation of error analysis of the enzyme activity expression. A model for determining the total system random error of a kinetic enzyme analyzer.

PubMed

Tiffany, T O; Thayer, P C; Coelho, C M; Manning, G B

1976-09-01

We present a total system error evaluation of random error, based on a propagation of error analysis of the expression for the calculation of enzyme activity. A simple expression is derived that contains terms for photometric error, timing uncertainty, temperature-control error, sample and reagent volume errors, and pathlength error. This error expression was developed in general to provide a simple means of evaluating the magnitude of random error in an analytical system and in particular to provide an error evaluation protocol for the assessment of the error components in a prototype Miniature Centrifugal Analyzer system. Individual system components of error are measured. These measured error components are combined in the error expression to predict performance. Enzyme activity measurements are made to correlate with the projected error data. In conclusion, it is demonstrated that this is one method for permitting the clinical chemist and the instrument manufacturer to establish reasonable error limits. PMID:954193
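The form of such a propagation-of-error expression can be sketched as follows: because the enzyme-activity expression is a product/quotient of factors, independent relative errors add in quadrature. The component CVs below are illustrative placeholders, not the paper's measured values.

```python
import math

# Random-error budget for a multiplicative enzyme-activity expression.
# For U = (dA/dt) * V_total / (eps * l * V_sample) with independent errors,
# (sigma_U / U)^2 = sum of squared relative errors of each factor.
# Component CVs below are illustrative, not measured values.
components = {
    "photometric (dA/dt)": 0.010,   # 1.0% CV
    "timing":              0.002,
    "temperature control": 0.005,
    "sample volume":       0.005,
    "reagent volume":      0.003,
    "pathlength":          0.002,
}
total_cv = math.sqrt(sum(cv ** 2 for cv in components.values()))
print(f"total random error: {100 * total_cv:.2f}% CV")
```

Because the terms add in quadrature, the largest single component (here the photometric term) dominates the total, which is why such a budget usefully directs instrument-design effort.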

1. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

NASA Technical Reports Server (NTRS)

Putney, B.

1994-01-01

The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and
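The two error components the abstract distinguishes can be sketched with a tiny linear "consider" covariance analysis: the adjusted-parameter covariance is the noise-only term plus a term from the assumed uncertainty of unadjusted parameters. The toy model, partials, and sigmas below are illustrative, not ORAN's orbital dynamics.

```python
import numpy as np

# Split the adjusted-parameter covariance into a measurement-noise part and a
# "consider" part from errors in unadjusted parameters (toy linear model).
m = 50
t = np.linspace(0.0, 1.0, m)
A = np.column_stack([np.ones(m), t])   # partials w.r.t. adjusted parameters
C = (t ** 2)[:, None]                  # partials w.r.t. one unadjusted parameter
sigma_meas, sigma_c = 0.01, 0.05       # measurement noise SD, consider-parameter SD

P_noise = np.linalg.inv(A.T @ A / sigma_meas ** 2)   # noise-only covariance
S = P_noise @ A.T @ C / sigma_meas ** 2              # sensitivity to the consider parameter
P_total = P_noise + sigma_c ** 2 * (S @ S.T)         # total adjusted-parameter covariance

print(np.sqrt(np.diag(P_noise)), np.sqrt(np.diag(P_total)))
```

The noise-only term is what a standard orbit determination program reports; the added consider term is the second component the abstract says ORAN computes.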

2. Dispersion analysis and linear error analysis capabilities of the space vehicle dynamics simulation program

NASA Technical Reports Server (NTRS)

Snow, L. S.; Kuhn, A. E.

1975-01-01

Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.

3. Multiple boundary condition testing error analysis. [for large flexible space structures

NASA Technical Reports Server (NTRS)

Glaser, R. J.; Kuo, C. P.; Wada, B. K.

1989-01-01

Techniques for interpreting data from multiple-boundary-condition (MBC) ground tests of large space structures are developed analytically and demonstrated. The use of MBC testing to validate structures too large to stand alone on the ground is explained; the generalized least-squares mass and stiffness curve-fitting methods typically applied to MBC test data are reviewed; and a detailed error analysis is performed. Consideration is given to sensitivity coefficients, covariance-matrix theory, the correspondence between test and analysis modes, constraints and step sizes, convergence criteria, and factor-analysis theory. Numerical results for a simple beam problem are presented in tables and briefly characterized. The improved error-updating capabilities of MBC testing are confirmed, and it is concluded that reasonably accurate results can be obtained using a diagonal covariance matrix.

4. Effect of rawinsonde errors on rocketsonde density and pressure profiles: An error analysis of the Rawinsonde System

NASA Technical Reports Server (NTRS)

Luers, J. K.

1980-01-01

An initial value of pressure is required to derive the density and pressure profiles of the rocketborne rocketsonde sensor. This tie-on pressure value is obtained from the nearest rawinsonde launch at an altitude where overlapping rawinsonde and rocketsonde measurements occur. An error analysis was performed of the error sources in these sensors that contribute to the error in the tie-on pressure. It was determined that significant tie-on pressure errors result from radiation errors in the rawinsonde rod thermistor, and temperature calibration bias errors. To minimize the effect of these errors, radiation corrections should be made to the rawinsonde temperature and the tie-on altitude should be chosen at the lowest altitude of overlapping data. Under these conditions the tie-on error, and consequently the resulting error in the Datasonde pressure and density profiles, is less than 1%. The effect of rawinsonde pressure and temperature errors on the wind and temperature versus height profiles of the rawinsonde was also determined.

5. Eigenvector method for umbrella sampling enables error analysis

Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

2016-08-01

Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence.
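The "combining step as an eigenproblem" can be illustrated with a minimal EMUS-style toy: the window weights are the stationary left eigenvector of an overlap matrix built from the bias functions. The 1D Gaussian biases and synthetic samples below are assumptions for illustration, not the paper's alanine dipeptide simulations.

```python
import numpy as np

# Window weights z solve z F = z, where F[i, j] is the average, over samples
# from window i, of bias_j / sum_k bias_k (an eigenproblem for a stochastic matrix).
rng = np.random.default_rng(1)
centers = np.linspace(-1.0, 1.0, 5)      # umbrella window centers (assumed)
kappa = 20.0                             # bias spring constant (assumed)

def bias(x, c):                          # unnormalized Gaussian bias weights
    return np.exp(-0.5 * kappa * (x - c) ** 2)

# Synthetic "simulation" data: samples drawn directly from each biased density
samples = [c + rng.normal(0.0, 1.0 / np.sqrt(kappa), 2000) for c in centers]

F = np.zeros((5, 5))
for i, xs in enumerate(samples):
    psi = np.array([bias(xs, c) for c in centers])   # shape (5, n_samples)
    F[i] = (psi / psi.sum(axis=0)).mean(axis=1)      # row i sums to 1

vals, vecs = np.linalg.eig(F.T)          # left eigenvectors of F
idx = np.argmin(np.abs(vals - 1.0))      # eigenvalue 1 gives the stationary vector
z = np.real(vecs[:, idx])
z = z / z.sum()                          # normalized window weights
print(z)
```

Casting the combination step this way is what makes the windowed error analysis tractable: perturbations of F propagate linearly to the eigenvector, so per-window error contributions can be estimated.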

7. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

NASA Technical Reports Server (NTRS)

Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

2009-01-01

The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

8. Structure function analysis of mirror fabrication and support errors

Hvisc, Anastacia M.; Burge, James H.

2007-09-01

Telescopes are ultimately limited by atmospheric turbulence, which is commonly characterized by a structure function. The telescope optics will not further degrade the performance if their errors are small compared to the atmospheric effects. Any further improvement to the mirrors is not economical since there is no increased benefit to performance. Typically the telescope specification is written in terms of an image size or encircled energy and is derived from the best seeing that is expected at the site. Ideally, the fabrication and support errors should never exceed atmospheric turbulence at any spatial scale, so it is instructive to look at how these errors affect the structure function of the telescope. The fabrication and support errors are most naturally described by Zernike polynomials or by bending modes for the active mirrors. This paper illustrates an efficient technique for relating this modal analysis to wavefront structure functions. Data is provided for efficient calculation of structure function given coefficients for Zernike annular polynomials. An example of this procedure for the Giant Magellan Telescope primary mirror is described.
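The mapping from modal wavefront errors to a structure function can be illustrated numerically. The sketch below (Python; a 1-D wavefront with arbitrary made-up tilt and defocus coefficients stands in for the paper's Zernike annular polynomials) estimates D(r) = ⟨[φ(x+r) − φ(x)]²⟩:

```python
import numpy as np

# Hypothetical 1-D wavefront over a unit pupil: tilt + defocus terms
# (stand-ins for low-order Zernike modes; coefficients are arbitrary).
x = np.linspace(-1.0, 1.0, 512)
a_tilt, a_defocus = 0.3, 0.1          # modal coefficients (waves)
phi = a_tilt * x + a_defocus * (2 * x**2 - 1)

def structure_function(phi, n_sep):
    """D(r) = mean over x of [phi(x + r) - phi(x)]^2, r given as index n_sep."""
    d = phi[n_sep:] - phi[:-n_sep]
    return np.mean(d**2)

seps = np.arange(1, 256)
D = np.array([structure_function(phi, s) for s in seps])
# D grows with separation: large-scale modal errors dominate at large r,
# which is why they must stay below the atmospheric structure function there.
```

In practice the comparison would be done against an atmospheric structure function such as the Kolmogorov 6.67·(r/r0)^(5/3) form, evaluated at each separation.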

9. Sequential analysis of the numerical Stroop effect reveals response suppression.

PubMed

Cohen Kadosh, Roi; Gevers, Wim; Notebaert, Wim

2011-09-01

Automatic processing of irrelevant stimulus dimensions has been demonstrated in a variety of tasks. Previous studies have shown that conflict between relevant and irrelevant dimensions can be reduced when a feature of the irrelevant dimension is repeated. The specific level at which the automatic process is suppressed (e.g., perceptual repetition, response repetition), however, is less understood. In the current experiment we used the numerical Stroop paradigm, in which the processing of irrelevant numerical values of 2 digits interferes with the processing of their physical size, to pinpoint the precise level of the suppression. Using a sequential analysis, we dissociated perceptual repetition from response repetition of the relevant and irrelevant dimension. Our analyses of reaction times, error rates, and diffusion modeling revealed that the congruity effect is significantly reduced or even absent when the response sequence of the irrelevant dimension, rather than the numerical value or the physical size, is repeated. These results suggest that automatic activation of the irrelevant dimension is suppressed at the response level. The current results shed light on the level of interaction between numerical magnitude and physical size as well as the effect of variability of responses and stimuli on automatic processing.

10. Computing the surveillance error grid analysis: procedure and examples.

PubMed

Kovatchev, Boris P; Wakeman, Christian A; Breton, Marc D; Kost, Gerald J; Louie, Richard F; Tran, Nam K; Klonoff, David C

2014-07-01

The surveillance error grid (SEG) analysis is a tool for analysis and visualization of blood glucose monitoring (BGM) errors, based on the opinions of 206 diabetes clinicians who rated 4 distinct treatment scenarios. Resulting from this large-scale inquiry is a matrix of 337,561 risk ratings, 1 for each pair of (reference, BGM) readings ranging from 20 to 580 mg/dl. The computation of the SEG is therefore complex and in need of automation. The SEG software introduced in this article automates the task of assigning a degree of risk to each data point for a set of measured and reference blood glucose values so that the data can be distributed into 8 risk zones. The software's 2 main purposes are to (1) distribute a set of BGM data into 8 risk zones ranging from none to extreme and (2) present the data in a color-coded display to promote visualization. Besides aggregating the data into 8 zones corresponding to levels of risk, the SEG computes the number and percentage of data pairs in each zone and the number/percentage of data pairs above/below the diagonal line in each zone, which are associated with BGM errors creating risks for hypo- or hyperglycemia, respectively. To illustrate the action of the SEG software, we first present computer-simulated data stratified along error levels defined by ISO 15197:2013. This allows the SEG to be linked to this established standard. Further illustration of the SEG procedure is done with a series of previously published data, which reflect the performance of BGM devices and test strips under various environmental conditions. We conclude that the SEG software is a useful addition to the SEG analysis presented in this journal, developed to assess the magnitude of clinical risk from analytically inaccurate data in a variety of high-impact situations such as intensive care and disaster settings. PMID:25562887

11. Statistical error analysis of surface-structure parameters determined by low-energy electron and positron diffraction: Data errors

Duke, C. B.; Lazarides, A.; Paton, A.; Wang, Y. R.

1995-11-01

An error-analysis procedure that gives statistically significant error estimates for surface-structure parameters extracted from analyses of measured low-energy electron and positron diffraction (LEED and LEPD) intensities is proposed. This procedure is applied to a surface-structure analysis of Cu(100) in which experimental data are simulated by adding Gaussian-distributed random errors to the calculated intensities for relaxed surface structures. Quantitative expressions for the variances in the surface-structural parameters are given and shown to obey the expected scaling laws for Gaussian errors in the experimental data. The procedure is shown to describe rigorously parameter errors in the limit that the errors in the measured intensities are described by uncorrelated Gaussian statistics. The analysis is valid for structure determinations that are of sufficient quality to admit errors that have magnitudes within the region of convergence of a linear theory that relates perturbations of diffracted intensities to perturbations in structural parameters. It is compared with previously proposed error-estimation techniques used in LEED, LEPD, and x-ray intensity analyses.
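The quoted scaling laws for Gaussian data errors can be checked with a small Monte Carlo experiment. The sketch below (Python) uses a one-parameter linear fit as a stand-in for the LEED/LEPD intensity analysis; all numbers are illustrative. It adds Gaussian noise to simulated data and verifies that the parameter variance scales as the square of the noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
a_true = 2.0   # "structural parameter" of the toy model y = a * x

def fitted_slope_variance(sigma, n_trials=2000):
    """Monte Carlo variance of the least-squares slope under Gaussian data noise."""
    estimates = []
    for _ in range(n_trials):
        y = a_true * x + rng.normal(0.0, sigma, x.size)
        a_hat = np.dot(x, y) / np.dot(x, x)   # least-squares slope through origin
        estimates.append(a_hat)
    return np.var(estimates)

v1 = fitted_slope_variance(0.05)
v2 = fitted_slope_variance(0.10)   # doubling sigma should roughly quadruple the variance
ratio = v2 / v1
```

For uncorrelated Gaussian errors the exact result is Var(â) = σ²/Σx², so the ratio should cluster near 4, mirroring the scaling laws the paper verifies for surface-structure parameters.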

12. Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR.

PubMed

Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao

2016-01-01

The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and then the focusing is analyzed theoretically and numerically by using the Systems Tool Kit (STK) software. The accurate GEO SAR slant range histories can be calculated according to the perturbed orbit positions in STK. The perturbed slant range errors are mainly the first and second derivatives, leading to image drifts and defocusing. Simulations of point target imaging are performed to validate the aforementioned analysis. In a GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time differ with the geometry configuration. Thus, the influences vary with orbit position: at the equator, first-order phase errors should mainly be considered; at the perigee and apogee, second-order phase errors should mainly be considered; at other positions, both first-order and second-order phase errors exist simultaneously. PMID:27598168

13. Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR

PubMed Central

Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao

2016-01-01

The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and then the focusing is analyzed theoretically and numerically by using the Systems Tool Kit (STK) software. The accurate GEO SAR slant range histories can be calculated according to the perturbed orbit positions in STK. The perturbed slant range errors are mainly the first and second derivatives, leading to image drifts and defocusing. Simulations of point target imaging are performed to validate the aforementioned analysis. In a GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time differ with the geometry configuration. Thus, the influences vary with orbit position: at the equator, first-order phase errors should mainly be considered; at the perigee and apogee, second-order phase errors should mainly be considered; at other positions, both first-order and second-order phase errors exist simultaneously. PMID:27598168

14. Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR.

PubMed

Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao

2016-09-02

The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and then the focusing is analyzed theoretically and numerically by using the Systems Tool Kit (STK) software. The accurate GEO SAR slant range histories can be calculated according to the perturbed orbit positions in STK. The perturbed slant range errors are mainly the first and second derivatives, leading to image drifts and defocusing. Simulations of point target imaging are performed to validate the aforementioned analysis. In a GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time differ with the geometry configuration. Thus, the influences vary with orbit position: at the equator, first-order phase errors should mainly be considered; at the perigee and apogee, second-order phase errors should mainly be considered; at other positions, both first-order and second-order phase errors exist simultaneously.
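The link between slant-range error derivatives and image drift or defocus follows from the standard two-way phase relation φ(t) = (4π/λ)·Δr(t): the first derivative of Δr produces a linear phase ramp (image shift), the second a quadratic phase (defocus). In the sketch below (Python) the wavelength, integration time, and error derivatives are assumed illustrative values, not parameters from the paper:

```python
import numpy as np

wavelength = 0.24       # radar wavelength in metres (assumed, L-band-like)
T = 100.0               # synthetic aperture integration time in seconds (assumed)
t = np.linspace(-T / 2, T / 2, 1001)

# Hypothetical slant-range error expansion: constant term plus first and
# second derivatives (m, m/s, m/s^2) taken from thin air for illustration.
dr0, dr1, dr2 = 0.5, 1e-3, 1e-6

phase_error = 4 * np.pi / wavelength * (dr0 + dr1 * t + 0.5 * dr2 * t**2)

# Peak linear (drift) and quadratic (defocus) phase contributions at the
# aperture edge; focusing rules of thumb compare these against ~pi/4.
linear_peak = 4 * np.pi / wavelength * dr1 * (T / 2)
quadratic_peak = 4 * np.pi / wavelength * 0.5 * dr2 * (T / 2)**2
```

With these numbers the linear term dominates, matching the equator case in the abstract where first-order phase errors matter most; longer integration times weight the quadratic term more heavily, as at perigee and apogee.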

15. Numerical Analysis of Rocket Exhaust Cratering

NASA Technical Reports Server (NTRS)

2008-01-01

Supersonic jet exhaust impinging onto a flat surface is a fundamental flow encountered with space or missile launch vehicle systems. The flow is important because it can endanger launch operations. The purpose of this study is to evaluate the effect of a landing rocket's exhaust on soils. From numerical simulations and analysis, we developed characteristic expressions and curves, which we can use, along with rocket nozzle performance, to predict cratering effects during a soft-soil landing. We conducted a series of multiphase flow simulations with two phases: exhaust gas and sand particles. The main objective of the simulation was to obtain numerical results as close to the experimental results as possible. After several simulation test runs, the results showed that the packing limit and the angle of internal friction are the two critical and dominant factors in the simulations.

16. Error analysis for matrix elastic-net regularization algorithms.

PubMed

Li, Hong; Chen, Na; Li, Luoqing

2012-05-01

Elastic-net regularization is a successful approach in statistical modeling. It can avoid the large variations which occur in estimating complex models. In this paper, elastic-net regularization is extended to a more general setting, the matrix recovery (matrix completion) setting. Based on a combination of the nuclear-norm minimization and the Frobenius-norm minimization, we consider the matrix elastic-net (MEN) regularization algorithm, which is analogous to the elastic-net regularization scheme from compressive sensing. Some properties of the estimator are characterized by the singular value shrinkage operator. We estimate the error bounds of the MEN regularization algorithm in the framework of statistical learning theory. We compute the learning rate using estimates of the Hilbert-Schmidt operators. In addition, an adaptive scheme for selecting the regularization parameter is presented. Numerical experiments demonstrate the superiority of the MEN regularization algorithm.
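The singular value shrinkage operator mentioned above is the proximal operator of the nuclear norm: it soft-thresholds the singular values while keeping the singular vectors. A minimal sketch (Python; the test matrix and threshold are arbitrary, not from the paper):

```python
import numpy as np

def singular_value_shrinkage(M, tau):
    """Soft-threshold the singular values of M by tau (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink toward zero, clip at zero
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 4))
M_shrunk = singular_value_shrinkage(M, 0.5)
```

Shrinkage both lowers the nuclear norm and can zero out small singular values, which is how nuclear-norm penalties promote low-rank solutions in matrix completion.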

17. Numerical analysis for finite Fresnel transform

Aoyagi, Tomohiro; Ohtsubo, Kouichi; Aoyagi, Nobuo

2016-10-01

The Fresnel transform is a bounded, linear, additive, and unitary operator in Hilbert space and is used in many applications. In this study, a sampling theorem for a Fresnel transform pair in polar coordinate systems is derived. According to the sampling theorem, any function in the complex plane can be expressed by taking the products of the values of a function and sampling function systems. Sampling function systems are constituted by Bessel functions and their zeros. By computer simulations, we consider the application of the sampling theorem to the problem of approximating a function to demonstrate its validity. Our approximating function is a circularly symmetric function defined in the complex plane. Counting the number of sampling points requires the calculation of the zeros of Bessel functions, which are calculated by an approximation formula and numerical tables. Therefore, our sampling points are nonuniform. The number of sampling points, the normalized mean square error between the original function and its approximating function, and the phases are calculated, and the relationship between them is revealed.

18. Numerical analysis for finite Fresnel transform

Aoyagi, Tomohiro; Ohtsubo, Kouichi; Aoyagi, Nobuo

2016-08-01

The Fresnel transform is a bounded, linear, additive, and unitary operator in Hilbert space and is used in many applications. In this study, a sampling theorem for a Fresnel transform pair in polar coordinate systems is derived. According to the sampling theorem, any function in the complex plane can be expressed by taking the products of the values of a function and sampling function systems. Sampling function systems are constituted by Bessel functions and their zeros. By computer simulations, we consider the application of the sampling theorem to the problem of approximating a function to demonstrate its validity. Our approximating function is a circularly symmetric function defined in the complex plane. Counting the number of sampling points requires the calculation of the zeros of Bessel functions, which are calculated by an approximation formula and numerical tables. Therefore, our sampling points are nonuniform. The number of sampling points, the normalized mean square error between the original function and its approximating function, and the phases are calculated, and the relationship between them is revealed.
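The nonuniform sampling radii in such schemes come from the zeros of Bessel functions. As a sketch of how those zeros can be computed without tables (Python; it uses the integral representation of J0 plus bisection, an approach assumed here for illustration rather than the authors' approximation formula):

```python
import numpy as np

# Midpoint-rule evaluation of J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt
theta = (np.arange(2000) + 0.5) * np.pi / 2000.0

def j0(x):
    return np.mean(np.cos(x * np.sin(theta)))

# Locate zeros of J0 by sign change on a coarse grid, then refine by bisection
xs = np.linspace(0.1, 20.0, 4000)
vals = np.array([j0(x) for x in xs])
zeros = []
for i in range(len(xs) - 1):
    if vals[i] * vals[i + 1] < 0:
        lo, hi = xs[i], xs[i + 1]
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if j0(lo) * j0(mid) <= 0:
                hi = mid
            else:
                lo = mid
        zeros.append(0.5 * (lo + hi))
# The spacing between successive zeros tends toward pi, so the resulting
# sampling points are nonuniform mainly near the origin.
```

The first zero should land near the tabulated value 2.40483; higher-order Bessel functions J_n can be handled the same way from their integral representations.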

19. CMOS RAM cosmic-ray-induced-error-rate analysis

NASA Technical Reports Server (NTRS)

Pickel, J. C.; Blandford, J. T., Jr.

1981-01-01

A significant number of spacecraft operational anomalies are believed to be associated with cosmic-ray-induced soft errors in LSI memories. Test programs using a cyclotron to simulate cosmic rays have established conclusively that many common commercial memory types are vulnerable to heavy-ion upset. A description is given of the methodology and the results of a detailed analysis for predicting the bit-error rate in an assumed space environment for CMOS memory devices. Results are presented for three types of commercially available CMOS 1,024-bit RAMs. It was found that the HM6508 is susceptible to single-ion-induced latchup from argon and krypton ions. The HS6508, HS6508RH, and CDP1821 apparently are not susceptible to single-ion-induced latchup.

20. Beam line error analysis, position correction, and graphic processing

Wang, Fuhua; Mao, Naifeng

1993-12-01

A beam transport line error analysis and beam position correction code called EAC has been developed, along with a graphics and data post-processing package for TRANSPORT. Based on the linear optics design from TRANSPORT or other general optics codes, EAC independently analyzes the effects of magnet misalignments, systematic and statistical errors of magnetic fields, and initial beam positions on the central trajectory and on transverse beam emittance dilution. EAC also provides an efficient way to develop beam line trajectory correction schemes. The post-processing package generates various types of graphics, such as the beam line geometrical layout, plots of the Twiss parameters, beam envelopes, etc. It also generates an EAC input file, thus connecting EAC with general optics codes. EAC and the post-processing package are small codes that are easy to access and use. They have become useful tools for the design of transport lines at the SSCL.

1. Jason-2 systematic error analysis in the GPS derived orbits

Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

2011-12-01

Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinate adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding the geophysical high-frequency variations to the linear ITRF model. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface between the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS versus our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests, including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced

2. Error analysis and optimization design of wide spectrum AOTF's optical performance parameter measuring system

Qin, Xiage; He, Zhiping; Xu, Rui; Wu, Yu; Shu, Rong

2015-10-01

As a new type of light-dispersion device, the Acousto-Optic Tunable Filter (AOTF), which achieves diffractive spectral selection through the acousto-optic interaction principle, has developed rapidly and been widely used in spectral analysis and remote sensing detection since its introduction. Precise measurement of an AOTF's optical performance parameters is a precondition for spectral radiometric calibration and data inversion in the quantitation process for an AOTF-based spectrometer. In this paper, an AOTF performance analysis system covering the 450-3200 nm wide spectrum is introduced, including the fundamental principle of the system and the test methods for the key optical parameters of the AOTF. The error sources and the influence of the magnitude of each error on the whole test system are analyzed and verified. A numerical simulation of the noise in the detecting circuit and the instability of the light source was carried out, and based on the simulation results, methods for improving the measuring accuracy of the system are proposed, such as improving the light source parameters and correcting and changing the test method by using dual-light-path detection. Experimental results indicate that the relative error can be reduced by 20% and the stability of the test signal is better than 98%. Finally, this error analysis model and its potential applicability to other optoelectronic measuring systems are also discussed.

3. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

SciTech Connect

Lon N. Haney; David I. Gertman

2003-04-01

Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance-shaping-factor taxonomies and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center, and INEEL partnered in a NASA-sponsored Advanced Concepts grant to assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability, as well as comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE), with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved valuable for qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error-reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

4. Numerical analysis method for linear induction machines.

NASA Technical Reports Server (NTRS)

Elliott, D. G.

1972-01-01

A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
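The final step described, combining the induced-voltage coupling coefficients with the mesh resistances and solving the simultaneous equations for the unknown currents, amounts to a dense linear solve. A minimal sketch (Python; the mesh size, coupling matrix, resistances, and driving voltages below are all made up for illustration):

```python
import numpy as np

# Toy 4-node mesh (hypothetical numbers): L[i, j] couples unit current at
# node j to the induced voltage at node i; R holds the mesh resistances.
rng = np.random.default_rng(2)
n = 4
L = 0.1 * rng.standard_normal((n, n))
R = np.diag([1.0, 1.2, 0.9, 1.1])
v_applied = np.array([1.0, 0.0, 0.0, 0.0])   # driving voltages per node

# (R + L) I = V  ->  solve the simultaneous equations for the unknown currents
I = np.linalg.solve(R + L, v_applied)
residual = np.linalg.norm((R + L) @ I - v_applied)
```

A real machine model would fill L from the geometry-dependent mutual-induction integrals at each mesh point; the solve step itself is unchanged.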

5. Error analysis for satellite gravity field determination based on two-dimensional Fourier methods

Cai, L.; Zhou, Z.; Hsu, H.; Gao, F.; Zhu, Z.; Luo, J.

2012-12-01

The time-wise and space-wise approaches are generally applied to data processing and error analysis for satellite gravimetry missions. But both approaches, which are based on the least-squares method, address the whole effect of measurement errors and estimate the resolution of gravity field models mainly from a numerical point of view. Moreover, the requirement for higher-accuracy, higher-resolution gravity field models can make the computation more difficult, and serious numerical instabilities can arise. In order to overcome these problems, this study focuses on constructing a direct relationship between the power spectral density of the satellite gravimetry measurements and the spherical harmonic coefficients of the Earth's gravity model. Based on the two-dimensional Fourier transform, the relationship is derived analytically. This method provides deep physical insight into the relation between mission parameters, instrumental parameters, and gravity field parameters, whereas the least-squares method rests mainly on a mathematical viewpoint. By taking advantage of the analytical expression, parameter estimation and error analysis for such missions become efficient and transparent. From the relationship and the simulations, it is analytically confirmed that low-frequency noise affects the gravity field recovery at all degrees in the case of a satellite gradiometry recovery mission. Furthermore, some other results and suggestions are also described.

6. Verifying the error bound of numerical computation implemented in computer systems

DOEpatents

2013-03-12

A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
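The segment-splitting idea can be illustrated with a crude but rigorous bound on a polynomial over non-overlapping segments. The sketch below (Python) is a simplified stand-in for the patented procedure; the polynomial, the segmentation, and the triangle-inequality bounding rule are all assumptions made for illustration:

```python
import numpy as np

coeffs = [1.0, -0.5, 0.25]   # hypothetical p(x) = 1 - 0.5 x + 0.25 x^2

def poly(x):
    return sum(c * x**k for k, c in enumerate(coeffs))

def segment_upper_bound(lo, hi):
    """Rigorous bound: |p(x)| <= sum_k |c_k| * max(|lo|, |hi|)^k on [lo, hi]."""
    m = max(abs(lo), abs(hi))
    return sum(abs(c) * m**k for k, c in enumerate(coeffs))

# Split the domain into non-overlapping segments and bound each one;
# a violation of the bounding condition would be reported per segment.
edges = np.linspace(-1.0, 1.0, 9)
bounds = [segment_upper_bound(edges[i], edges[i + 1]) for i in range(len(edges) - 1)]

xs = np.linspace(-1.0, 1.0, 2001)   # dense samples used only as a sanity check
```

Each segment's bound must dominate every value of |p| inside it; finer splits generally tighten the bound, which is the point of splitting the verification domain.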

7. Evaluating Random Forests for Survival Analysis using Prediction Error Curves.

PubMed

Mogensen, Ulla B; Ishwaran, Hemant; Gerds, Thomas A

2012-09-01

Prediction error curves are increasingly used to assess and compare predictions in survival analysis. This article surveys the R package pec which provides a set of functions for efficient computation of prediction error curves. The software implements inverse probability of censoring weights to deal with right censored data and several variants of cross-validation to deal with the apparent error problem. In principle, all kinds of prediction models can be assessed, and the package readily supports most traditional regression modeling strategies, like Cox regression or additive hazard regression, as well as state of the art machine learning methods such as random forests, a nonparametric method which provides promising alternatives to traditional strategies in low and high-dimensional settings. We show how the functionality of pec can be extended to yet unsupported prediction models. As an example, we implement support for random forest prediction models based on the R-packages randomSurvivalForest and party. Using data of the Copenhagen Stroke Study we use pec to compare random forests to a Cox regression model derived from stepwise variable selection. Reproducible results on the user level are given for publicly available data from the German breast cancer study group.

8. The Communication Link and Error ANalysis (CLEAN) simulator

NASA Technical Reports Server (NTRS)

Ebel, William J.; Ingels, Frank M.; Crowe, Shane

1993-01-01

During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed, including: (1) soft-decision Viterbi decoding; (2) node synchronization for the soft-decision Viterbi decoder; (3) insertion/deletion error programs; (4) a convolutional encoder; (5) programs to investigate new convolutional codes; (6) a pseudo-noise sequence generator; (7) a soft-decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov chain channel modeling; (10) a percent-complete indicator shown during program execution; (11) header documentation; and (12) a help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links, including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE-decompressed data. The Markov chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders, and many other satellite system processes. Besides the development of the simulator, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. The TDRSS downlink experiences RFI with several duty cycles. We conclude that the PCI does not improve performance for any of these interferers except possibly one, which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
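One of the listed components, the pseudo-noise sequence generator, is conventionally built from a linear-feedback shift register. A minimal sketch (Python; the register length and taps are chosen here to give a maximal-length sequence for illustration, and are not taken from CLEAN):

```python
def pn_sequence(taps, n_bits, seed=1):
    """Fibonacci LFSR pseudo-noise generator; taps are 1-based register positions."""
    length = max(taps)
    state = [(seed >> i) & 1 for i in range(length)]   # nonzero initial fill
    out = []
    for _ in range(n_bits):
        out.append(state[-1])              # output the last register stage
        fb = 0
        for t in taps:                     # XOR the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]          # shift, feeding back into stage 1
    return out

# Taps (5, 3) on a 5-stage register give a maximal-length sequence
# of period 2^5 - 1 = 31 (a primitive feedback polynomial).
seq = pn_sequence([5, 3], 62)
```

A maximal-length sequence repeats exactly every 2^n - 1 bits and is balanced (2^(n-1) ones per period), properties a channel simulator relies on when using PN data as a noise-like test pattern.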

9. Linearised and non-linearised isotherm models optimization analysis by error functions and statistical means.

PubMed

Subramanyam, Busetty; Das, Ashutosh

2014-01-01

In adsorption studies, describing the sorption process and evaluating the best-fitting isotherm model are key analyses for investigating the theoretical hypothesis. Hence, numerous statistical analyses have been extensively used to compare experimental equilibrium adsorption values with predicted equilibrium values. In the present study, the following statistical analyses were carried out to evaluate adsorption isotherm model fitness: the Pearson correlation, the coefficient of determination, and the chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for linearised and non-linearised models. The adsorption of phenol onto natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2 °C. For estimating the isotherm parameters, and to obtain a holistic view of the analysis, linear and non-linear isotherm models were compared. The results revealed which of the above-mentioned error functions and statistical measures best determined the best-fitting isotherm. PMID:25018878
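The kinds of error functions compared in such studies are easy to state explicitly. The sketch below (Python) computes a chi-square statistic and the coefficient of determination for a Langmuir isotherm; the equilibrium data points and Langmuir constants are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) and Langmuir constants
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe_exp = np.array([8.2, 13.1, 18.4, 22.0, 24.1])
qm, KL = 27.0, 0.06   # assumed fitted Langmuir parameters

# Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)
qe_model = qm * KL * Ce / (1.0 + KL * Ce)

# Chi-square error function: sum of (observed - predicted)^2 / predicted
chi_square = np.sum((qe_exp - qe_model)**2 / qe_model)

# Coefficient of determination R^2
ss_res = np.sum((qe_exp - qe_model)**2)
ss_tot = np.sum((qe_exp - qe_exp.mean())**2)
r_squared = 1.0 - ss_res / ss_tot
```

Fitting the non-linear form directly and minimising such error functions avoids the bias that linearising transformations (e.g. the reciprocal Langmuir plot) introduce into the error structure, which is the core of the linear-versus-non-linear comparison.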

10. Rasch Analysis of the Student Refractive Error and Eyeglass Questionnaire

PubMed Central

Crescioni, Mabel; Messer, Dawn H.; Warholak, Terri L.; Miller, Joseph M.; Twelker, J. Daniel; Harvey, Erin M.

2014-01-01

Purpose: To evaluate and refine a newly developed instrument, the Student Refractive Error and Eyeglasses Questionnaire (SREEQ), designed to measure the impact of uncorrected and corrected refractive error on vision-related quality of life (VRQoL) in school-aged children. Methods: A 38-statement instrument consisting of two parts was developed: Part A relates to perceptions regarding uncorrected vision, and Part B relates to perceptions regarding corrected vision and includes other statements regarding VRQoL with spectacle correction. The SREEQ was administered to 200 Native American 6th through 12th grade students known to have previously worn, and to currently require, eyeglasses. Rasch analysis was conducted to evaluate the functioning of the SREEQ. Statements in Part A and Part B were analyzed to examine the dimensionality and constructs of the questionnaire, how well the items functioned, and the appropriateness of the response scale used. Results: Rasch analysis suggested two items be eliminated and the measurement scale for matching items be reduced from a 4-point to a 3-point response scale. With these modifications, categorical data were converted to interval-level data to conduct an item and person analysis. A shortened version of the SREEQ, the SREEQ-R, was constructed with these modifications and included the statements able to capture changes in VRQoL associated with spectacle wear for those with significant refractive error in our study population. Conclusions: While SREEQ Part B appears to have less than optimal reliability for assessing the impact of spectacle correction on VRQoL in our student population, it is able to detect statistically significant differences from pretest to posttest at both the group and individual levels, showing that the instrument can assess the impact that glasses have on VRQoL. Further modifications to the questionnaire, such as those included in the SREEQ-R, could enhance its functionality.

11. Efron-type measures of prediction error for survival analysis.

PubMed

Gerds, Thomas A; Schumacher, Martin

2007-12-01

Estimates of the prediction error play an important role in the development of statistical methods and models, and in their applications. We adapt the resampling tools of Efron and Tibshirani (1997, Journal of the American Statistical Association 92, 548-560) to survival analysis with right-censored event times. We find that flexible rules, like artificial neural nets, classification and regression trees, or regression splines can be assessed, and compared to less flexible rules in the same data where they are developed. The methods are illustrated with data from a breast cancer trial.

12. Analysis of Random Segment Errors on Coronagraph Performance

NASA Technical Reports Server (NTRS)

Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou

2016-01-01

At the 2015 SPIE O&P conference we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order Sinc²(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt; apertures with fewer segments (i.e., 1 ring) or very many segments (>16 rings) have less contrast leakage as a function of piston or tip/tilt than apertures with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.

13. Analysis of ionospheric refraction error corrections for GRARR systems

NASA Technical Reports Server (NTRS)

Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

1971-01-01

A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.

14. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

NASA Technical Reports Server (NTRS)

Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

1998-01-01

We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including model skill assessment and objective analysis. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.

15. Numerical Analysis of Convection/Transpiration Cooling

NASA Technical Reports Server (NTRS)

Glass, David E.; Dilley, Arthur D.; Kelly, H. Neale

1999-01-01

An innovative concept utilizing the natural porosity of refractory-composite materials and hydrogen coolant to provide CONvective and TRANspiration (CONTRAN) cooling and oxidation protection has been numerically studied for surfaces exposed to a high heat flux, high temperature environment such as hypersonic vehicle engine combustor walls. A boundary layer code and a porous media finite difference code were utilized to analyze the effect of convection and transpiration cooling on surface heat flux and temperature. The boundary layer code determined that transpiration flow is able to provide blocking of the surface heat flux only if it is above a minimum level due to heat addition from combustion of the hydrogen transpirant. The porous media analysis indicated that cooling of the surface is attained with coolant flow rates that are in the same range as those required for blocking, indicating that a coupled analysis would be beneficial.

17. Bootstrap Standard Error Estimates in Dynamic Factor Analysis.

PubMed

Zhang, Guangjian; Browne, Michael W

2010-05-28

Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the interdependence of successive observations. Bootstrap methods can fill this need, however. The standard bootstrap of individual timepoints is not appropriate because it destroys their order in time and consequently gives incorrect standard error estimates. Two bootstrap procedures that are appropriate for dynamic factor analysis are described. The moving block bootstrap breaks down the original time series into blocks and draws samples of blocks instead of individual timepoints. A parametric bootstrap is essentially a Monte Carlo study in which the population parameters are taken to be estimates obtained from the available sample. These bootstrap procedures are demonstrated using 103 days of affective mood self-ratings from a pregnant woman, 90 days of personality self-ratings from a psychology freshman, and a simulation study.
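
A minimal sketch of the moving block bootstrap described above, assuming a synthetic AR(1) series standing in for the daily self-ratings and the mean as the statistic of interest (all names and parameter choices here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series as a stand-in for the repeated measurements.
n, phi = 200, 0.6
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

def moving_block_bootstrap(series, stat, block_len=10, n_boot=500, rng=rng):
    """Resample overlapping blocks to preserve short-range time dependence."""
    m = len(series)
    starts = np.arange(m - block_len + 1)           # all overlapping block starts
    n_blocks = int(np.ceil(m / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks)  # draw blocks, not timepoints
        resample = np.concatenate([series[s:s + block_len] for s in chosen])[:m]
        stats[b] = stat(resample)
    return stats.std(ddof=1)                        # bootstrap standard error

se_block = moving_block_bootstrap(x, np.mean)
se_naive = x.std(ddof=1) / np.sqrt(n)               # ignores autocorrelation
print(se_block, se_naive)
```

With positive autocorrelation, the block estimate exceeds the naive iid standard error, which is exactly the failure of the standard timepoint bootstrap that the abstract describes.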

18. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

USGS Publications Warehouse

Hill, M.C.

1989-01-01

Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author
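
The Monte Carlo evaluation of interval accuracy can be sketched in miniature; the setup below (a normal mean with a nominal 95% interval) is a hypothetical stand-in for the groundwater test cases, used only to show the coverage-counting idea:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, sigma, n = 10.0, 2.0, 30
z = 1.96                      # nominal 95% normal-theory interval
n_runs = 2000

covered = 0
for _ in range(n_runs):
    sample = rng.normal(true_mean, sigma, n)
    half = z * sample.std(ddof=1) / np.sqrt(n)   # approximate half-width
    if abs(sample.mean() - true_mean) <= half:
        covered += 1

coverage = covered / n_runs   # empirical coverage vs. the nominal 0.95
print(coverage)
```

If the approximate intervals were badly calibrated, the empirical coverage would drift far from the nominal level; this is the check the Monte Carlo runs in the paper perform for simulated hydraulic heads.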

19. Analysis of Solar Two Heliostat Tracking Error Sources

SciTech Connect

Jones, S.A.; Stone, K.W.

1999-01-28

This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

20. Laser measurement and analysis of reposition error in polishing systems

Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying

2015-10-01

In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented. The geometric error of a robot-based polishing system is analyzed, and a mathematical model of the tilt error is developed. The study shows that errors of less than 1 mm are mainly caused by tilt error at small incident angles. Marking the spot position with interference fringes greatly enhances the error measurement precision; the tilt error can be measured to within 5 μm. Measurement results show that the reposition error of the polishing system stems mainly from the tilt error introduced by motor A, and repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in practical error measurement, with low cost and simple operation.

1. An analysis of spacecraft data time tagging errors

NASA Technical Reports Server (NTRS)

Fang, A. C.

1975-01-01

An in-depth examination of the timing and telemetry in just one spacecraft points out the genesis of various types of timing errors and serves as a guide in the design of future timing/telemetry systems. The principal sources of timing errors are examined carefully and are described in detail. Estimates of these errors are also made and presented. It is found that the timing errors within the telemetry system are larger than the total timing errors resulting from all other sources.

2. Numerical Analysis of a Finite Element/Volume Penalty Method

Maury, Bertrand

The penalty method makes it possible to incorporate a large class of constraints in general purpose Finite Element solvers like freeFEM++. We present here some contributions to the numerical analysis of this method. We propose an abstract framework for this approach, together with some general error estimates based on the penalty parameter ɛ and the space discretization parameter h. As this work is motivated by the possibility of handling constraints like rigid motion for fluid-particle flows, we pay special attention to a model problem of this kind, where the constraint is prescribed over a subdomain. We show how the abstract estimate can be applied to this situation, in the case where a non-body-fitted mesh is used. In addition, we describe how this method provides an approximation of the Lagrange multiplier associated with the constraint.
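
As a concrete illustration of the penalty idea (a minimal finite difference sketch, not the paper's finite element framework): a constraint u = 0 on a subdomain is enforced by adding a 1/ɛ reaction term there, and the constraint violation shrinks as ɛ decreases.

```python
import numpy as np

# Finite differences for -u'' = 1 on (0,1), u(0) = u(1) = 0, with the extra
# constraint u = 0 on the subdomain [0.4, 0.6] enforced by a penalty term
# (1/eps) * u added on that subdomain.  All parameters are illustrative.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
inside = (x >= 0.4) & (x <= 0.6)

def solve(eps):
    A = np.zeros((n, n))
    b = np.ones(n)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
        A[i, i] = 2.0 / h**2
        if inside[i]:
            A[i, i] += 1.0 / eps      # penalty: drives u toward 0 here
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

u_coarse = solve(1e-2)
u_fine = solve(1e-6)
err_coarse = np.abs(u_coarse[inside]).max()   # constraint violation, large eps
err_fine = np.abs(u_fine[inside]).max()       # constraint violation, small eps
print(err_coarse, err_fine)
```

The error estimates in the paper quantify precisely this trade-off between the penalty parameter ɛ and the mesh size h.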

3. Close-range radar rainfall estimation and error analysis

van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

2016-08-01

4. Starlight emergence angle error analysis of star simulator

Zhang, Jian; Zhang, Guo-yu

2015-10-01

With the continuous development of key star sensor technologies, the precision of star simulators must be further improved, since it directly affects the accuracy of star sensor laboratory calibration. To improve the accuracy of the star simulator, a theoretical accuracy analysis model is needed; such a model can be established from the ideal imaging model of the star simulator. Analysis of this model shows that the starlight emergence angle deviation is primarily affected by star position deviation, principal point position deviation, focal length deviation, distortion, and object plane tilt. Based on these factors, a comprehensive deviation model is established, and formulas for each individual deviation model as well as for the comprehensive deviation model are derived. Analyzing the properties of the individual and comprehensive deviation models yields the characteristics of each factor and the weighting relationships among them. From the analysis of the comprehensive deviation model, reasonable design indexes can be given that account for the star simulator's optical system requirements and the achievable precision of machining and adjustment. Error analysis of the starlight emergence angle is therefore significant for determining and demonstrating star simulator design indexes and for analyzing and compensating star simulator errors, thereby improving star simulator accuracy and establishing a theoretical basis for further improving the starlight angle precision of the star simulator.

5. SIRTF Focal Plane Survey: A Pre-flight Error Analysis

NASA Technical Reports Server (NTRS)

Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.

2003-01-01

This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.

6. A posteriori error analysis for a cut cell finite volume method

SciTech Connect

Haiying Wang; Michael Pernice; Simon Tavener; Don Estep

2011-09-01

We study the solution of a diffusive process in a domain where the diffusion coefficient changes discontinuously across a curved interface. We consider discretizations that use regularly-shaped meshes, so that the interface “cuts” through the cells (elements or volumes) without respecting the regular geometry of the mesh. Consequently, the discontinuity in the diffusion coefficients has a strong impact on the accuracy and convergence of the numerical method. This motivates the derivation of computational error estimates that yield accurate estimates for specified quantities of interest. For this purpose, we adapt the well-known adjoint based a posteriori error analysis technique used for finite element methods. In order to employ this method, we describe a systematic approach to discretizing a cut-cell problem that handles complex geometry in the interface in a natural fashion yet reduces to the well-known Ghost Fluid Method in simple cases. We test the accuracy of the estimates in a series of examples.

7. Error analysis for earth orientation recovery from GPS data

NASA Technical Reports Server (NTRS)

Zelensky, N.; Ray, J.; Liebrecht, P.

1990-01-01

The use of GPS navigation satellites to study earth-orientation parameters in real-time is examined analytically with simulations of network geometries. The Orbit Analysis covariance-analysis program is employed to simulate the block-II constellation of 18 GPS satellites, and attention is given to the budget for tracking errors. Simultaneous solutions are derived for earth orientation given specific satellite orbits, ground clocks, and station positions with tropospheric scaling at each station. Media effects and measurement noise are found to be the main causes of uncertainty in earth-orientation determination. A program similar to the Polaris network using single-difference carrier-phase observations can provide earth-orientation parameters with accuracies similar to those for the VLBI program. The GPS concept offers faster data turnaround and lower costs in addition to more accurate determinations of UT1 and pole position.

8. Soft X Ray Telescope (SXT) focus error analysis

NASA Technical Reports Server (NTRS)

1991-01-01

The analysis performed on the Soft X Ray Telescope (SXT) to determine the correct thickness of the spacer to position the CCD camera at the best focus of the telescope, and to determine the maximum uncertainty in this focus position due to a number of metrology and experimental errors and thermal and humidity effects, is presented. This type of analysis has been performed by the SXT prime contractor, Lockheed Palo Alto Research Lab (LPARL). The SXT project office at MSFC formed an independent team of experts to review the LPARL work and verify the analysis performed by them. Based on the recommendation of this team, the project office will decide whether an end-to-end focus test is required for the SXT prior to launch. The metrology and experimental data and the spreadsheets provided by LPARL are used as the basis of the analysis presented. The data entries in these spreadsheets have been verified as far as feasible, and the format of the spreadsheets has been improved to make them easier to understand. The results obtained from this analysis are very close to the results obtained by LPARL. However, due to the lack of organized documentation, the analysis uncovered a few areas of possibly erroneous metrology data, which may affect the results obtained by this analytical approach.

9. Fourier analysis of numerical algorithms for the Maxwell equations

NASA Technical Reports Server (NTRS)

Liu, Yen

1993-01-01

The Fourier method is used to analyze the dispersive, dissipative, and isotropy errors of various spatial and time discretizations applied to the Maxwell equations on multi-dimensional grids. Both Cartesian grids and non-Cartesian grids based on hexagons and tetradecahedra are studied and compared. The numerical errors are quantitatively determined in terms of phase speed, wave number, propagation direction, grid spacings, and CFL number. The study shows that centered schemes are more efficient than upwind schemes. The non-Cartesian grids yield superior isotropy and higher accuracy than the Cartesian ones. For the centered schemes, the staggered grids produce less errors than the unstaggered ones. A new unstaggered scheme which has all the best properties is introduced. The study also demonstrates that a proper choice of time discretization can reduce the overall numerical errors due to the spatial discretization.
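
The Fourier (von Neumann) technique can be illustrated in one dimension, a minimal sketch rather than the paper's multi-dimensional Maxwell grids: substituting a plane wave into the second-order centered difference for a spatial derivative yields a "modified wavenumber" k* = sin(kh)/h, so the numerical-to-exact phase speed ratio is sin(kh)/(kh), exact for long waves and degraded near the grid cutoff.

```python
import numpy as np

# Semi-discrete Fourier analysis of the centered difference
# (u[j+1] - u[j-1]) / (2h) for the advective term in u_t + a u_x = 0.
# The plane wave u_j = exp(i k x_j) gives i * sin(k h)/h in place of i k,
# so the phase-speed error depends only on the product k h.
kh = np.linspace(1e-6, np.pi, 200)       # resolved wavenumber range
phase_ratio = np.sin(kh) / kh            # numerical / exact phase speed

print(phase_ratio[0])     # long waves: ratio -> 1 (negligible dispersion error)
print(phase_ratio[-1])    # grid cutoff kh = pi: ratio -> 0 (the wave stalls)
```

The same substitution, applied stencil by stencil, is how the phase speed, wavenumber, and propagation-direction dependence quoted in the abstract are computed for each scheme.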

10. Statistical analysis of modeling error in structural dynamic systems

NASA Technical Reports Server (NTRS)

Hasselman, T. K.; Chrostowski, J. D.

1990-01-01

The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

11. Error Analysis in Composition of Iranian Lower Intermediate Students

ERIC Educational Resources Information Center

Taghavi, Mehdi

2012-01-01

Learners make errors during the process of learning languages. This study examines errors in a writing task of twenty Iranian lower intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

12. The Impact of Text Genre on Iranian Intermediate EFL Students' Writing Errors: An Error Analysis Perspective

ERIC Educational Resources Information Center

Moqimipour, Kourosh; Shahrokhi, Mohsen

2015-01-01

The present study aimed at analyzing writing errors caused by the interference of the Persian language, regarded as the first language (L1), in three writing genres, namely narration, description, and comparison/contrast by Iranian EFL students. 65 English paragraphs written by the participants, who were at the intermediate level based on their…

13. Reduction of S-parameter errors using singular spectrum analysis.

PubMed

Ozturk, Turgut; Uluer, İhsan; Ünal, İlhami

2016-07-01

A free space measurement method, which consists of two horn antennas, a network analyzer, two frequency extenders, and a sample holder, is used to measure transmission (S21) coefficients in the 75-110 GHz (W-band) frequency range. A singular spectrum analysis method is presented to eliminate the error and noise of raw S21 data after the calibration and measurement processes. The proposed model can be applied easily to remove the repeated calibration process for each sample measurement. Hence, smooth, reliable, and accurate data are obtained to determine the dielectric properties of materials. In addition, the dielectric constant of materials (paper, polyvinyl chloride (PVC), Ultralam® 3850HT, and glass) is calculated by thin sheet approximation and Newton-Raphson extraction techniques using the filtered S21 transmission parameter. PMID:27475579
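
A minimal sketch of the singular spectrum analysis filtering step, assuming a synthetic noisy trace in place of a measured S21 parameter; the window length and rank below are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy "measurement": a smooth signal plus noise (illustrative data only).
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 3 * t)
noisy = signal + 0.3 * rng.normal(size=t.size)

def ssa_filter(x, window=40, rank=4):
    """Basic SSA: embed into a Hankel matrix, truncate the SVD, average back."""
    n = x.size
    k = n - window + 1
    # Trajectory (Hankel) matrix whose columns are lagged windows of x
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank reconstruction
    # Anti-diagonal averaging maps the matrix back to a series
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

filtered = ssa_filter(noisy)
rmse_raw = np.sqrt(np.mean((noisy - signal) ** 2))
rmse_ssa = np.sqrt(np.mean((filtered - signal) ** 2))
print(rmse_raw, rmse_ssa)    # the SSA output tracks the signal more closely
```

Keeping only the leading singular components separates the slowly varying structure from the noise floor, which is the error-elimination step the abstract describes for the raw S21 data.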

16. The error performance analysis over cyclic redundancy check codes

Yoon, Hee B.

1991-06-01

Burst errors are generated in digital communication networks by various unpredictable conditions; they occur at high error rates, for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern, which is impossible in practice on working systems. Therefore, under memoryless binary symmetric channel (MBSC) assumptions, developing performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic is an interesting and important problem. This study presents some analytical methods leading to efficient detection algorithms for burst errors using cyclic redundancy check (CRC) codes. The definition of burst error is introduced using three different models; among them, the mathematical model is used in this study. A probability density function f(b) for burst errors of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model of burst errors within a CRC block. The simulation results show that the mean block burst error tends to approach the pattern generated by random bit errors.
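
The CRC-based detection discussed above can be sketched with a standard CRC-16-CCITT check (a representative CRC-n, not necessarily the study's code): any burst no longer than the CRC degree changes the check value and is therefore detected.

```python
def crc16_ccitt(data: bytes, poly=0x1021, crc=0xFFFF):
    """Bitwise CRC-16-CCITT over a byte string."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

msg = bytearray(b"performance monitoring block")   # illustrative payload
good = crc16_ccitt(bytes(msg))

# Inject a burst error: invert 12 consecutive bits starting mid-message.
burst_len, start_bit = 12, 37
for b in range(burst_len):
    i = start_bit + b
    msg[i // 8] ^= 1 << (7 - i % 8)

bad = crc16_ccitt(bytes(msg))
print(good != bad)   # a CRC-16 detects every burst of length <= 16
```

Estimating how often live-traffic blocks fail such a check, combined with a burst-length density like f(b), is the route to the performance estimates the abstract describes.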

17. Analysis of the impact of error detection on computer performance

NASA Technical Reports Server (NTRS)

Shin, K. C.; Lee, Y. H.

1983-01-01

Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.

18. Fixed-point error analysis of Winograd Fourier transform algorithms

NASA Technical Reports Server (NTRS)

Patterson, R. W.; Mcclellan, J. H.

1978-01-01

The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

19. Two numerical models for landslide dynamic analysis

Hungr, Oldrich; McDougall, Scott

2009-05-01

Two microcomputer-based numerical models (Dynamic ANalysis (DAN) and the three-dimensional model DAN3D) have been developed and extensively used for analysis of landslide runout, specifically for the purposes of practical landslide hazard and risk assessment. The theoretical basis of both models is a system of depth-averaged governing equations derived from the principles of continuum mechanics. Original features developed specifically during this work include: an open rheological kernel; explicit use of tangential strain to determine the tangential stress state within the flowing sheet, which is both more realistic and beneficial to the stability of the model; orientation of principal tangential stresses parallel with the direction of motion; inclusion of the centripetal forces corresponding to the true curvature of the path in the direction of motion; and the use of very simple and highly efficient free surface interpolation methods. Both models yield similar results when applied to the same sets of input data. Both algorithms are designed to work within the semi-empirical framework of the "equivalent fluid" approach. This approach requires selection of material rheology and calibration of input parameters through back-analysis of real events. Although approximate, it facilitates simple and efficient operation while accounting for the most important characteristics of extremely rapid landslides. The two models have been verified against several controlled laboratory experiments with known physical basis. A large number of back-analyses of real landslides of various types have also been carried out. One example is presented. Calibration patterns are emerging, which give a promise of predictive capability.

20. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

NASA Technical Reports Server (NTRS)

2012-01-01

Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
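
The base-grid-to-infinite-grid error estimation described above can be sketched on a toy problem: compute a quantity on two grids, Richardson-extrapolate to zero spacing, and use the gap as the error estimate. The integral below is a hypothetical stand-in for the aerodynamic coefficients, assuming the method's formal order p = 2.

```python
import numpy as np

# Toy quantity of interest: trapezoidal integral of sin(x) on [0, pi]
# (exact value 2), computed on a base grid and a once-refined grid.
def trapz_pi(n):
    x = np.linspace(0.0, np.pi, n + 1)
    y = np.sin(x)
    h = x[1] - x[0]
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

coarse, fine = trapz_pi(64), trapz_pi(128)
p = 2                                          # formal order of the scheme
f_inf = fine + (fine - coarse) / (2**p - 1)    # extrapolated infinite-grid value
err_estimate = abs(fine - f_inf)               # error estimate for the fine grid

print(abs(fine - 2.0), err_estimate)   # the estimate tracks the true error
```

Because the exact value is known here, one can verify that the extrapolated estimate matches the true fine-grid error to leading order, which is the property that justifies using the gap to the infinite-size grid as an error bar when no experimental data exist.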

1. Analysis of the "naming game" with learning errors in communications.

PubMed

Lou, Yang; Chen, Guanrong

2015-07-16

The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. Three typical topologies of communication networks, namely random-graph, small-world, and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that (1) learning errors slightly affect the convergence speed but distinctively increase the memory required of each agent during lexicon propagation; (2) the maximum number of different words held by the population increases linearly as the error rate increases; and (3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.
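
A drastically simplified sketch of a naming game with learning errors, assuming a complete graph and a fixed mutation rate in place of the NGLE model's uniform error distribution; every name and parameter here is illustrative:

```python
import random

random.seed(3)

n_agents, p_error = 10, 0.05
vocab = [[] for _ in range(n_agents)]   # each agent's memorized word list
next_word = 0

def fresh_word():
    """Invent a new word; a mis-learned word is treated as a new word."""
    global next_word
    next_word += 1
    return f"w{next_word}"

max_words = 0
for step in range(20000):
    s, l = random.sample(range(n_agents), 2)   # speaker, listener
    if not vocab[s]:
        vocab[s].append(fresh_word())
    word = random.choice(vocab[s])
    # Learning error: with probability p_error the listener stores a mutation.
    heard = fresh_word() if random.random() < p_error else word
    if heard in vocab[l]:
        vocab[s] = [heard]             # success: both collapse to the word
        vocab[l] = [heard]
    else:
        vocab[l].append(heard)         # failure: listener memorizes it
    max_words = max(max_words, sum(len(v) for v in vocab))
    if all(len(v) == 1 for v in vocab) and len({v[0] for v in vocab}) == 1:
        break                          # global consensus reached

print(step, max_words)
```

Tracking the peak total memory (max_words) while raising p_error is how one would observe, in miniature, the memory growth and the convergence threshold reported in the abstract.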

2. Analysis of the "naming game" with learning errors in communications.

PubMed

Lou, Yang; Chen, Guanrong

2015-01-01

The naming game simulates the process by which a population of agents, organized in a communication network, names an object. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. Three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of learning error above which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network-science perspective. PMID:26178457

3. Analysis of error-correction constraints in an optical disk.

PubMed

Roberts, J D; Ryley, A; Jones, D M; Burke, D

1996-07-10

The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed-Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limits of existing strategies to assess future requirements. We describe a simulation of all stages of CD-ROM coding, modulation, and decoding. The results of decoding a burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 count measures the burst length, whereas the C2 errors reflect the burst position. The performance of the Reed-Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of a miscorrection that is identified by the CRC check. PMID:21102793
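The role of the final CRC stage, catching residual miscorrections that slip past the Reed-Solomon layers, can be illustrated with a generic CRC-32 check. This uses zlib's polynomial purely for illustration; the CD-ROM error-detection code uses its own 32-bit polynomial:

```python
import zlib

def crc_ok(data: bytes, stored_crc: int) -> bool:
    """Final integrity check: recompute the CRC over the sector data
    and compare it with the value stored at encode time."""
    return (zlib.crc32(data) & 0xFFFFFFFF) == stored_crc

sector = bytes(range(64))                 # stand-in for a data sector
crc = zlib.crc32(sector) & 0xFFFFFFFF     # CRC written at encode time

corrupted = bytearray(sector)
corrupted[10] ^= 0xFF                     # simulate a miscorrected byte

ok_clean = crc_ok(sector, crc)
ok_bad = crc_ok(bytes(corrupted), crc)
```

Even when the inner codes report success, a failed CRC reveals that the "corrected" sector is in fact wrong, which is exactly the miscorrection scenario the abstract describes.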

4. Phytoremediation of metals: a numerical analysis.

PubMed

Lugli, Francesco; Mahler, Claudio Fernando

2015-01-01

A finite element code was used for investigating the effect of some relevant characteristics of a phytoremediation project (crop type and density, presence of an irrigation system, soil capping and root depth). The evolution of the contamination plume of Cd2+, Pb2+, and Zn2+ was simulated taking into account reactive transport and root processes. The plant contaminant uptake model was previously calibrated using data from greenhouse experiments. The simulations adopted pedological and climatological data representative of a sub-tropical environment. Although the results obtained were specific to the proposed scenario, it was observed that, for more mobile contaminants, poor water conditions favor stabilization but inhibit plant extraction, whereas an irrigation system that decreases crop water stress has the opposite effect. For less mobile contaminants, the remediation process offered no appreciable advantage. Despite its simplifying assumptions, particularly regarding contaminant sorption in the soil and plant system, the numerical analysis provided useful insight into the phytoextraction process, which is important in view of field experiments. PMID:25397982

5. Nonclassicality thresholds for multiqubit states: Numerical analysis

SciTech Connect

Gruca, Jacek; Zukowski, Marek; Laskowski, Wieslaw; Kiesel, Nikolai; Wieczorek, Witlef; Weinfurter, Harald; Schmid, Christian

2010-07-15

States that strongly violate Bell's inequalities are required in many quantum-informational protocols as, for example, in cryptography, secret sharing, and the reduction of communication complexity. We investigate families of such states with a numerical method which allows us to reveal nonclassicality even without direct knowledge of Bell's inequalities for the given problem. An extensive set of numerical results is presented and discussed.

6. Modeling error analysis of stationary linear discrete-time filters

NASA Technical Reports Server (NTRS)

Patel, R.; Toda, M.

1977-01-01

The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates, for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.
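The stationary mean-squared error being bounded here is, for the optimal filter, the fixed point of the discrete Riccati recursion. A scalar sketch with illustrative parameters (not from the paper, which treats the general matrix case with model-error bounds):

```python
def steady_state_prediction_variance(a, c, q, r, iters=500):
    """Iterate the scalar discrete Riccati recursion of a stationary
    Kalman filter until the one-step prediction error variance
    converges to its fixed point.

    a: state transition, c: observation, q: process noise variance,
    r: measurement noise variance.
    """
    p = q                                  # initial prediction variance
    for _ in range(iters):
        k = p * c / (c * c * p + r)        # Kalman gain
        p_post = (1.0 - k * c) * p         # filtered (posterior) variance
        p = a * a * p_post + q             # next prediction variance
    return p

p_inf = steady_state_prediction_variance(a=0.9, c=1.0, q=1.0, r=1.0)
```

A suboptimal filter designed with erroneous a, c, q, or r yields a larger stationary error than this optimal fixed point; the paper's contribution is bounding that degradation knowing only the range of the matrix errors.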

7. Flight instrumentation specification for parameter identification: Program user's guide. [instrument errors/error analysis

NASA Technical Reports Server (NTRS)

Mohr, R. L.

1975-01-01

A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.

8. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

PubMed

Chiu, Ming-Chuan; Hsieh, Min-Chih

2016-05-01

The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. PMID:26851473

9. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

PubMed

Chiu, Ming-Chuan; Hsieh, Min-Chih

2016-05-01

The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology.

10. Analysis of Children's Computational Errors: A Qualitative Approach

ERIC Educational Resources Information Center

Engelhardt, J. M.

1977-01-01

This study was designed to replicate and extend Roberts' (1968) efforts at classifying computational errors. 198 elementary school students were administered an 84-item arithmetic computation test. Eight types of errors were described which led to several tentative generalizations. (Editor/RK)

11. Systematic error analysis of rotating coil using computer simulation

SciTech Connect

Li, Wei-chuan; Coles, M.

1993-04-01

This report describes a study of the systematic and random measurement uncertainties of magnetic multipoles which are due to construction errors, rotational speed variation, and electronic noise in a digitally bucked tangential coil assembly with dipole bucking windings. The sensitivities of the systematic multipole uncertainty to construction errors are estimated analytically and using a computer simulation program.

12. Factor Rotation and Standard Errors in Exploratory Factor Analysis

ERIC Educational Resources Information Center

Zhang, Guangjian; Preacher, Kristopher J.

2015-01-01

In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

13. TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS

EPA Science Inventory

Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...

14. Error Analysis of Stereophotoclinometry in Support of the OSIRIS-REx Mission

Palmer, Eric; Gaskell, Robert W.; Weirich, John R.

2015-11-01

Stereophotoclinometry (SPC) has been used on numerous planetary bodies to derive shape models, most recently for 67P/Churyumov-Gerasimenko (Jorda et al., 2014), the Earth (Palmer et al., 2014) and Vesta (Gaskell, 2012). SPC is planned to create the ultra-high-resolution topography for the upcoming OSIRIS-REx mission, which will sample the asteroid Bennu, arriving in 2018. This shape model will be used both for scientific analysis and for operational navigation, including providing the topography needed to ensure safe collection from the surface. We present the initial results of an error analysis of SPC, with specific focus on how both systematic and non-systematic errors propagate through SPC into the shape model. For this testing, we have created a notional global truth model at 5 cm and a single region at 2.5 mm ground sample distance. These truth models were used to create images using GSFC's software Freespace. These images were then used by SPC to form a derived shape model with a ground sample distance of 5 cm. We will report on both the absolute and relative error of the derived shape model compared with the original truth model, as well as other empirical and theoretical measurements of error within SPC. Jorda, L. et al. (2014) "The Shape of Comet 67P/Churyumov-Gerasimenko from Rosetta/Osiris Images", AGU Fall Meeting, #P41C-3943. Gaskell, R. (2012) "SPC Shape and Topography of Vesta from DAWN Imaging Data", DPS Meeting #44, #209.03. Palmer, L., Sykes, M. V., Gaskell, R. W. (2014) "Mercator — Autonomous Navigation Using Panoramas", LPSC 45, #1777.

15. Error-reduction techniques and error analysis for fully phase- and amplitude-based encryption.

PubMed

Javidi, B; Towghi, N; Maghzi, N; Verrall, S C

2000-08-10

The performance of fully phase- and amplitude-based encryption processors is analyzed. The effects of noise perturbations on the encrypted information are considered. A thresholding method of decryption that further reduces the mean-squared error (MSE) for the fully phase- and amplitude-based encryption processes is provided. The proposed thresholding scheme significantly improves the performance of fully phase- and amplitude-based encryption, as measured by the MSE metric. We obtain analytical MSE bounds when thresholding is used for both decryption methods, and we also present computer-simulation results. These results show that the fully phase-based method is more robust. We also give a formal proof of a conjecture about the decrypted distribution of distorted encrypted information. This allows the analytical bounds of the MSE to be extended to more general non-Gaussian, nonadditive, nonstationary distortions. Computer simulations support this extension.

16. Towards a Bayesian total error analysis of conceptual rainfall-runoff models: Characterising model error using storm-dependent parameters

Kuczera, George; Kavetski, Dmitri; Franks, Stewart; Thyer, Mark

2006-11-01

Calibration and prediction in conceptual rainfall-runoff (CRR) modelling is affected by the uncertainty in the observed forcing/response data and the structural error in the model. This study works towards the goal of developing a robust framework for dealing with these sources of error and focuses on model error. The characterisation of model error in CRR modelling has been thwarted by the convenient but indefensible treatment of CRR models as deterministic descriptions of catchment dynamics. This paper argues that the fluxes in CRR models should be treated as stochastic quantities because their estimation involves spatial and temporal averaging. Acceptance that CRR models are intrinsically stochastic paves the way for a more rational characterisation of model error. The hypothesis advanced in this paper is that CRR model error can be characterised by storm-dependent random variation of one or more CRR model parameters. A simple sensitivity analysis is used to identify the parameters most likely to behave stochastically, with variation in these parameters yielding the largest changes in model predictions as measured by the Nash-Sutcliffe criterion. A Bayesian hierarchical model is then formulated to explicitly differentiate between forcing, response and model error. It provides a very general framework for calibration and prediction, as well as for testing hypotheses regarding model structure and data uncertainty. A case study calibrating a six-parameter CRR model to daily data from the Abercrombie catchment (Australia) demonstrates the considerable potential of this approach. Allowing storm-dependent variation in just two model parameters (with one of the parameters characterising model error and the other reflecting input uncertainty) yields a substantially improved model fit, raising the Nash-Sutcliffe statistic from 0.74 to 0.94. Of particular significance is the use of posterior diagnostics to test the key assumptions about the data and model errors.
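The Nash-Sutcliffe criterion used above to quantify the improved fit (0.74 to 0.94) is simple to compute. A small sketch with made-up flow values, not data from the Abercrombie case study:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the
    observations about their mean. 1.0 is a perfect fit; 0.0 means
    the model predicts no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

# made-up daily flows: observed vs. simulated
obs = [2.0, 3.5, 5.0, 4.0, 3.0]
sim = [2.2, 3.4, 4.6, 4.1, 3.2]
nse = nash_sutcliffe(obs, sim)
```

Because the denominator is fixed by the observations, a rise from 0.74 to 0.94 corresponds to cutting the sum of squared prediction errors by a factor of about 4.3.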

17. Numerical analysis of granular soil fabrics

Torbahn, L.; Huhn, K.

2012-04-01

Soil stability strongly depends on the material strength, which is in general influenced by deformation processes and vice versa. Hence, investigation of material strength is of great interest in many geoscientific studies where soil deformations occur, e.g. the destabilization of slopes or the evolution of fault gouges. Particularly in the former case, slope failure occurs if the applied forces exceed the shear strength of the slope material; the soil resistance, or respectively the material strength, acts contrary to deformation processes. Moreover, geotechnical experiments, e.g. direct shear or ring shear tests, suggest that shear resistance mainly depends on properties of soil structure, texture and fabric. Although laboratory tests enable investigations of soil structure and texture during shear, detailed observations inside the sheared specimen during the failure process, as well as of fabric effects, are very limited. Thus, high-resolution information in space and time regarding texture evolution and/or grain behavior during shear is unavailable. Yet such data are essential to gain a deeper insight into the key role of soil structure, texture, etc. on material strength and the physical processes occurring during material deformation at the micro scale. Additionally, laboratory tests are not completely reproducible, which precludes a detailed statistical investigation of fabric during shear; almost identical setups for methodical tests investigating the impact of fabric on soil resistance are hard to achieve under laboratory conditions. Hence, we used numerical shear test experiments utilizing the Discrete Element Method to quantify the impact of different material fabrics on the shear resistance of soil, as this granular model approach enables investigation of failure processes at the grain scale. Our numerical setup adapts general settings from laboratory tests while the model characteristics are fixed except for the soil structure particularly the used

18. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

Peng, Tao; Lan, Tian; Ni, Guoqiang

2013-07-01

A motion error model of the 3D coordinates is established and the impact of the non-ideal movement of the airborne platform on coordinate errors is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence of laser point cloud spacing on the positioning errors is small. In the model, the positioning errors follow a simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the planar error, and within the plane the error in the scanning direction is less than the error in the flight direction. These conclusions are verified through analysis of flight test data.

19. Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas

ERIC Educational Resources Information Center

Herzberg, Tina

2010-01-01

In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…

20. The slider motion error analysis by positive solution method in parallel mechanism

Ma, Xiaoqing; Zhang, Lisong; Zhu, Liang; Yang, Wenguo; Hu, Penghao

2016-01-01

The motion error of the slider plays a key role in the performance of a 3-PUU parallel coordinate measuring machine (CMM) and influences its accuracy, which has attracted considerable attention from experts worldwide. Generally, the analysis is based on a spatial 6-DOF view. Here, a new analysis method is provided. First, the structural relation of the slider and guideway is abstracted as a four-bar parallel mechanism, so the slider can be considered the moving platform of a parallel kinematic mechanism (PKM). Its motion error analysis is thereby transformed into a moving-platform position analysis in the PKM. Then, after establishing the positive and negative solutions, existing theory and techniques for PKMs can be applied to analyze the slider's straightness and angular motion errors simultaneously. Third, experiments with an autocollimator are carried out to capture the original error data of the guideway; these data are described as a straightness error function by fitting a curvilinear equation. Finally, the straightness errors of the two guideways are treated as variations of rod length in the parallel mechanism, and the slider's straightness and angular errors are obtained by substituting the data into the established model. The calculated results are generally consistent with the experimental results. This approach will benefit the accuracy calibration and error correction of the 3-PUU CMM and also provides a new way to analyze the kinematic errors of guideways in precision machine tools and precision instruments.

1. Errors Analysis of Solving Linear Inequalities among the Preparatory Year Students at King Saud University

ERIC Educational Resources Information Center

El-khateeb, Mahmoud M. A.

2016-01-01

The purpose of this study is to investigate the classes of errors made by preparatory-year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios at which they occurred in solving inequalities. In the collection of the data,…

2. Numerical analysis of a vortex controlled diffuser

NASA Technical Reports Server (NTRS)

Spall, Robert E.

1993-01-01

A numerical study of a prototypical vortex controlled diffuser is performed. The basic diffuser geometry consists of a step expansion in a pipe of area ratio 2.25:1. The incompressible Reynolds-averaged Navier-Stokes equations, employing the K-epsilon turbulence model, are solved. Results are presented for bleed rates ranging from 1 to 7 percent. Diffuser efficiencies in excess of 80 percent are obtained. Reattachment lengths are reduced by a factor of up to 3. These results are in qualitative agreement with previous experimental work. However, differences exist in some basic details between the experimentally observed flowfields and those generated numerically here. The effect of swirl is also investigated.

3. US-LHC IR MAGNET ERROR ANALYSIS AND COMPENSATION.

SciTech Connect

WEI, J.

1998-06-26

This paper studies the impact of the insertion-region (IR) magnet field errors on LHC collision performance. Compensation schemes including magnet orientation optimization, body-end compensation, tuning shims, and local nonlinear correction are shown to be highly effective.

4. Direct Numerical Simulations in Solid Mechanics for Quantifying the Macroscale Effects of Microstructure and Material Model-Form Error

Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.

2016-05-01

Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Ultimately, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.

5. Direct numerical simulations in solid mechanics for quantifying the macroscale effects of microstructure and material model-form error

DOE PAGES Beta

Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.

2016-03-16

Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.

6. SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS

NASA Technical Reports Server (NTRS)

Frisch, H. P.

1994-01-01

SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to insure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.

7. A Numerical Model for Atomtronic Circuit Analysis

SciTech Connect

Chow, Weng W.; Straatsma, Cameron J. E.; Anderson, Dana Z.

2015-07-16

A model for studying atomtronic devices and circuits based on finite-temperature Bose-condensed gases is presented. The approach involves numerically solving equations of motion for atomic populations and coherences, derived using the Bose-Hubbard Hamiltonian and the Heisenberg picture. The resulting cluster expansion is truncated at a level giving balance between physics rigor and numerical demand mitigation. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to nonequilibrium state evolution, as well as the much longer time durations typical for reaching steady-state device operation. This model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration phase locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied.

8. Error analysis in predictive modelling demonstrated on mould data.

PubMed

Baranyi, József; Csernus, Olívia; Beczner, Judit

2014-01-17

The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of the error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rate. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability, inherent to the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth models, too.
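The "within group" and "between groups" decomposition referred to above can be sketched in a few lines (toy replicate data, not the mould growth rates of the study):

```python
def variance_components(groups):
    """Split variability into a within-group part (replicate /
    experimental error) and a between-group part (environmental
    error), analogous to the ANOVA decomposition.

    groups: list of replicate lists, one per environmental condition.
    Returns (mean within-group variance, variance of group means).
    """
    means = [sum(g) / len(g) for g in groups]
    grand = sum(means) / len(means)
    within = [sum((x - m) ** 2 for x in g) / (len(g) - 1)
              for g, m in zip(groups, means)]
    mean_within = sum(within) / len(within)
    between = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
    return mean_within, between

# toy groups of replicate ln(growth-rate) values per condition
w, b = variance_components([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

Ranking these components against the residual variance of the fitted secondary model is how the paper attributes the total prediction error to experimental, environmental and model sources.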

9. Stochastic modelling and analysis of IMU sensor errors

Zhao, Y.; Horemuz, M.; Sjöberg, L. E.

2011-12-01

The performance of a GPS/INS integration system is largely determined by the ability of the stand-alone INS to determine position and attitude during GPS outages. Positional and attitude precision degrades rapidly during a GPS outage due to INS sensor errors. With the advantages of low price and small volume, Micro Electro-Mechanical Sensors (MEMS) have been widely used in GPS/INS integration. However, a standalone MEMS unit can maintain reasonable positional precision for only a few seconds due to systematic and random sensor errors. The general stochastic error sources in inertial sensors can be modelled (IEEE STD 647, 2006) as quantization noise, random walk, bias instability, rate random walk and rate ramp. Here we apply different methods to analyze the stochastic sensor errors, i.e. autoregressive modelling, the Gauss-Markov process, power spectral density and Allan variance. Tests on a MEMS-based inertial measurement unit were then carried out with these methods. The results show that the different methods give similar estimates of the stochastic error model parameters. These values can be used further in the Kalman filter for better navigation accuracy and in the Doppler frequency estimate for faster acquisition after a GPS signal outage.
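The Allan variance listed among the analysis methods can be computed, in its simplest non-overlapping form, as below. This is a sketch; production IMU characterization typically uses the overlapping estimator and sweeps many averaging times to read off the noise coefficients:

```python
def allan_variance(rate, m):
    """Non-overlapping Allan variance of a sampled rate signal at
    averaging factor m (clusters of m consecutive samples)."""
    n = len(rate) // m                      # number of clusters
    means = [sum(rate[i * m:(i + 1) * m]) / m for i in range(n)]
    return sum((means[i + 1] - means[i]) ** 2
               for i in range(n - 1)) / (2 * (n - 1))

avar_const = allan_variance([1.0] * 100, 10)        # noiseless gyro: 0
avar_step = allan_variance([0.0, 0.0, 1.0, 1.0], 2)  # single level step
```

Plotting the Allan deviation against averaging time on log-log axes exposes the characteristic slopes of quantization noise (-1), angle random walk (-1/2), bias instability (0) and rate random walk (+1/2).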

Burks, D. G.; Graf, E. R.; Fahey, M. D.

1982-09-01

An analysis is presented of the effect of a tangent ogive radome on the pointing accuracy of a monopulse radar employing an aperture antenna. The radar is assumed to be operating in the receive mode, and the incident fields at the antenna are found by a ray tracing procedure. Rays entering the antenna aperture by direct transmission through the radome and by single reflection from the radome interior are considered. The radome wall is treated as being locally planar. The antenna can be scanned in two angular directions, and two orthogonal polarization states which produce an arbitrarily polarized incident field are considered. Numerical results are presented for both in-plane and cross-plane errors as a function of scan angle and polarization.

11. The impact of response measurement error on the analysis of designed experiments

SciTech Connect

Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

2015-12-21

This study considers the analysis of designed experiments when there is measurement error in the true response, so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, of using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model and allows additional information about variability to be included in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
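
A minimal sketch of this kind of simulation (the effect size, group size and error levels are hypothetical, and a plain pooled-variance t-test stands in for the paper's standard analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

def t_stat(a, b):
    """Two-sample t statistic with pooled variance (equal group sizes)."""
    sp2 = (a.var(ddof=1) + b.var(ddof=1)) / 2.0
    return (b.mean() - a.mean()) / np.sqrt(sp2 * 2.0 / len(a))

def power(sigma_me, n=10, effect=1.0, reps=2000):
    """Fraction of simulated experiments in which a standard t-test detects
    a true effect when additive response measurement error is ignored."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n) + rng.normal(0.0, sigma_me, n)
        b = rng.normal(effect, 1.0, n) + rng.normal(0.0, sigma_me, n)
        if abs(t_stat(a, b)) > 2.101:   # two-sided 5% critical value, df = 18
            hits += 1
    return hits / reps

p_clean = power(0.0)   # no measurement error
p_noisy = power(2.0)   # measurement error twice the process noise
print(p_clean, p_noisy)  # power drops as measurement error grows
```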

12. The impact of response measurement error on the analysis of designed experiments

DOE PAGESBeta

Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

2015-12-21

This study considers the analysis of designed experiments when there is measurement error in the true response, so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, of using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model and allows additional information about variability to be included in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

13. Frequency analysis of nonlinear oscillations via the global error minimization

Kalami Yazdi, M.; Hosseini Tehrani, P.

2016-06-01

The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), is illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and the Duffing-harmonic oscillator are treated. In order to validate the method and exhibit its merit, the obtained result is compared with both the exact frequency and the outcomes of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can be applied promisingly to conservative nonlinear problems.
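
The first-order GEM idea can be sketched numerically (an illustrative reimplementation, not the authors' derivation: a one-term trial solution x = A cos(wt) whose integrated squared residual is minimized over a frequency grid, applied to the Duffing-harmonic oscillator):

```python
import numpy as np

def gem_frequency(A, f, w_grid, n=4000):
    """Pick the frequency minimizing the integrated squared residual of
    x'' + f(x) = 0 over one period, for the trial solution x = A cos(w t)."""
    best_w, best_err = None, np.inf
    for w in w_grid:
        t = np.linspace(0.0, 2.0*np.pi/w, n)
        resid = -A*w**2*np.cos(w*t) + f(A*np.cos(w*t))
        err = np.sum(resid**2) * (t[1] - t[0])   # crude quadrature of the error functional
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Duffing-harmonic oscillator: x'' + x^3/(1 + x^2) = 0
f = lambda x: x**3 / (1.0 + x**2)
w = gem_frequency(10.0, f, np.linspace(0.5, 1.2, 701))
print(w)  # approaches 1 for large amplitude, where f(x) ~ x
```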

14. BNL-BUILT LHC MAGNET ERROR IMPACT ANALYSIS AND COMPENSATION.

SciTech Connect

PTITSIN,V.; TEPIKIAN,S.; WEI,J.

1999-03-29

Superconducting magnets built at the Brookhaven National Laboratory will be installed in both the Insertion Regions IP2 and IP8 and the RF region of the Large Hadron Collider (LHC). In particular, the field quality of these IR dipoles will become important during LHC heavy-ion operation, when the β* at IP2 is reduced to 0.5 meters. This paper studies the impact of the magnetic errors in BNL-built magnets on LHC performance at injection and collision, for both proton and heavy-ion operation. Methods and schemes for error compensation are considered, including optimization of magnet orientation and compensation using local IR correctors.

15. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

Pan, B.; Wang, B.; Lubineau, G.

2016-07-01

Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Recently, the performances of these DIC approaches have been investigated using both numerical and real-world experiments. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computational efficiency. Here we investigate the theoretical origin of this difference and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors, or standard deviation (SD) errors, in the displacements measured by two local DIC approaches (a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than those associated with the subset-based local DIC, which is consistent with our conclusions from previous work.
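
How random image noise maps into a displacement SD can be illustrated with a much-simplified 1-D, gradient-based matching sketch (hypothetical signal and noise levels; this is not any of the DIC algorithms analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.arange(256.0)
signal = np.sin(0.2*x) + 0.5*np.sin(0.05*x)   # synthetic 1-D "speckle" pattern
grad = np.gradient(signal)

def shift_sd(noise_sd, true_shift=0.3, reps=500):
    """SD of a one-step gradient-based (optical-flow style) shift estimate
    under additive image noise."""
    shifted = np.interp(x - true_shift, x, signal)
    est = []
    for _ in range(reps):
        ref = signal + rng.normal(0.0, noise_sd, x.size)
        cur = shifted + rng.normal(0.0, noise_sd, x.size)
        # least-squares solution of grad * d ~ ref - cur
        est.append(np.sum(grad*(ref - cur)) / np.sum(grad**2))
    return float(np.std(est))

s1, s2 = shift_sd(0.01), shift_sd(0.02)
print(s2 / s1)  # SD of the measured displacement grows linearly with noise
```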

16. Normal-reciprocal error models for quantitative ERT in permafrost environments: bin analysis versus histogram analysis

Verleysdonk, Sarah; Flores-Orozco, Adrian; Krautblatter, Michael; Kemna, Andreas

2010-05-01

Electrical resistivity tomography (ERT) has been used for the monitoring of permafrost-affected rock walls for some years now. To further enhance the interpretation of ERT measurements a deeper insight into error sources and the influence of error model parameters on the imaging results is necessary. Here, we present the effect of different statistical schemes for the determination of error parameters from the discrepancies between normal and reciprocal measurements - bin analysis and histogram analysis - using a smoothness-constrained inversion code (CRTomo) with an incorporated appropriate error model. The study site is located in galleries adjacent to the Zugspitze North Face (2800 m a.s.l.) at the border between Austria and Germany. A 20 m * 40 m rock permafrost body and its surroundings have been monitored along permanently installed transects - with electrode spacings of 1.5 m and 4.6 m - from 2007 to 2009. For data acquisition, a conventional Wenner survey was conducted as this array has proven to be the most robust array in frozen rock walls. Normal and reciprocal data were collected directly one after another to ensure identical conditions. The ERT inversion results depend strongly on the chosen parameters of the employed error model, i.e., the absolute resistance error and the relative resistance error. These parameters were derived (1) for large normal/reciprocal data sets by means of bin analyses and (2) for small normal/reciprocal data sets by means of histogram analyses. Error parameters were calculated independently for each data set of a monthly monitoring sequence to avoid the creation of artefacts (over-fitting of the data) or unnecessary loss of contrast (under-fitting of the data) in the images. The inversion results are assessed with respect to (1) raw data quality as described by the error model parameters, (2) validation via available (rock) temperature data and (3) the interpretation of the images from a geophysical as well as a
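
The bin-analysis idea, estimating error-model parameters from normal-reciprocal misfits, can be sketched on synthetic data (the linear error model s(R) = a + b·|R| and all numbers are illustrative, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic normal/reciprocal resistance pairs with error s(R) = a + b*|R|
a_true, b_true = 0.02, 0.05              # absolute and relative error parts
R = 10**rng.uniform(-1, 2, 5000)          # true resistances over three decades
s = a_true + b_true*R
Rn = R + rng.normal(0.0, s)               # normal measurement
Rr = R + rng.normal(0.0, s)               # reciprocal measurement

# bin analysis: std of the normal-reciprocal misfit in log-spaced |R| bins
Rmean = 0.5*(Rn + Rr)
dR = Rn - Rr                              # misfit; Var(dR) = 2*s(R)^2
edges = np.logspace(-1, 2, 13)
idx = np.digitize(Rmean, edges)
xs, ys = [], []
for k in range(1, len(edges)):
    sel = idx == k
    if sel.sum() > 50:
        xs.append(Rmean[sel].mean())
        ys.append(dR[sel].std() / np.sqrt(2.0))   # per-measurement error in bin
b_fit, a_fit = np.polyfit(xs, ys, 1, w=1.0/np.asarray(ys))
print(a_fit, b_fit)  # recovers roughly a_true and b_true
```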

17. Analysis of Errors Made by Students Solving Genetics Problems.

ERIC Educational Resources Information Center

Costello, Sandra Judith

The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…

18. Analysis of Students' Error in Learning of Quadratic Equations

ERIC Educational Resources Information Center

Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

2010-01-01

The purpose of the study was to determine the students' error in learning quadratic equation. The samples were 30 form three students from a secondary school in Jambi, Indonesia. Diagnostic test was used as the instrument of this study that included three components: factorization, completing the square and quadratic formula. Diagnostic interview…

19. Pitch Error Analysis of Young Piano Students' Music Reading Performances

ERIC Educational Resources Information Center

Rut Gudmundsdottir, Helga

2010-01-01

This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

ERIC Educational Resources Information Center

Abu-rabia, Salim; Taha, Haitham

2004-01-01

This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated words and pseudowords. Two…

1. Oral Definitions of Newly Learned Words: An Error Analysis

ERIC Educational Resources Information Center

Steele, Sara C.

2012-01-01

This study examined and compared patterns of errors in the oral definitions of newly learned words. Fifteen 9- to 11-year-old children with language learning disability (LLD) and 15 typically developing age-matched peers inferred the meanings of 20 nonsense words from four novel reading passages. After reading, children provided oral definitions…

2. Numerical analysis of slender vortex motion

SciTech Connect

Zhou, H.

1996-02-01

Several numerical methods for slender vortex motion (the local induction equation, the Klein-Majda equation, and the Klein-Knio equation) are compared on the specific example of sideband instability of Kelvin waves on a vortex. Numerical experiments on this model problem indicate that all these methods yield qualitatively similar behavior, and this behavior is different from the behavior of a non-slender vortex with variable cross-section. It is found that the boundaries between stable, recurrent, and chaotic regimes in the parameter space of the model problem depend on the method used. The boundaries of these domains in the parameter space for the Klein-Majda equation and for the Klein-Knio equation are closely related to the core size. When the core size is large enough, the Klein-Majda equation always exhibits stable solutions for our model problem. Various conclusions are drawn; in particular, the behavior of turbulent vortices cannot be captured by these local approximations, and probably cannot be captured by any slender vortex model with constant vortex cross-section. Speculations about the differences between classical and superfluid hydrodynamics are also offered.

3. Template Construction as a Basis for Error-Analysis Packages in Language Learning Programs.

ERIC Educational Resources Information Center

Helmreich, Stephen C.

1987-01-01

An "intelligent" system for constructing computer-assisted pattern drills to be used in second language instruction is proposed. First, some of the difficulties in designing intelligent error analysis are discussed briefly. Two major approaches to error analysis in computer-assisted instruction, pattern matching and parsing, are described, and…

4. Using Online Error Analysis Items to Support Preservice Teachers' Pedagogical Content Knowledge in Mathematics

ERIC Educational Resources Information Center

McGuire, Patrick

2013-01-01

This article describes how a free, web-based intelligent tutoring system, (ASSISTment), was used to create online error analysis items for preservice elementary and secondary mathematics teachers. The online error analysis items challenged preservice teachers to analyze, diagnose, and provide targeted instructional remediation intended to help…

5. Procedures for numerical analysis of circadian rhythms

PubMed Central

REFINETTI, ROBERTO; CORNÉLISSEN, GERMAINE; HALBERG, FRANZ

2010-01-01

This article reviews various procedures used in the analysis of circadian rhythms at the populational, organismal, cellular and molecular levels. The procedures range from visual inspection of time plots and actograms to several mathematical methods of time series analysis. Computational steps are described in some detail, and additional bibliographic resources and computer programs are listed. PMID:23710111

6. Manufacturing in space: Fluid dynamics numerical analysis

NASA Technical Reports Server (NTRS)

Robertson, S. J.; Nicholson, L. A.; Spradley, L. W.

1982-01-01

Numerical computations were performed for natural convection in circular enclosures under various conditions of acceleration. It was found that subcritical acceleration vectors applied in the direction of the temperature gradient will lead to an eventual state of rest regardless of the initial state of motion. Supercritical acceleration vectors will lead to the same steady state condition of motion regardless of the initial state of motion. Convection velocities were computed for acceleration vectors at various angles to the initial temperature gradient. The results for Rayleigh numbers of 1000 or less were found to closely follow Weinbaum's first order theory. Higher Rayleigh number results were shown to depart significantly from the first order theory. Supercritical behavior was confirmed for Rayleigh numbers greater than the known supercritical value of 9216. Response times were determined to provide an indication of the time required to change states of motion for the various cases considered.

7. Numerical Analysis of Magnetic Sail Spacecraft

SciTech Connect

Sasaki, Daisuke; Yamakawa, Hiroshi; Usui, Hideyuki; Funaki, Ikkoh; Kojima, Hirotsugu

2008-12-31

To capture the kinetic energy of the solar wind by creating a large magnetosphere around the spacecraft, a magneto-plasma sail injects a plasma jet into a strong magnetic field produced by an electromagnet on board the spacecraft. The aim of this paper is to investigate the effect of the IMF (interplanetary magnetic field) on the magnetosphere of a magneto-plasma sail. First, using an axisymmetric two-dimensional MHD code, we numerically confirm the magnetic field inflation and the formation of a magnetosphere through the interaction between the solar wind and the magnetic field. The expansion of an artificial magnetosphere by plasma injection is then simulated, and we show that the magnetosphere is formed by the interaction between the solar wind and the magnetic field expanded by the plasma jet from the spacecraft. The simulation indicates that the artificial magnetosphere becomes smaller when the IMF is applied.

8. Numerical analysis of a thermal deicer

NASA Technical Reports Server (NTRS)

Wright, W. B.; Keith, T. G., Jr.; Dewitt, K. J.

1992-01-01

An algorithm has been developed to numerically model the concurrent phenomena of two-dimensional transient heat transfer, ice accretion and ice shedding which arise from the use of an electrothermal pad. The Alternating Direction Implicit method is used to simultaneously solve the heat transfer and accretion equations occurring in a multilayered body covered with ice. In order to model the phase change between ice and water, a technique was used which assumes a phase for each node. This allows the equations to be linearized such that a direct solution is possible. This technique requires an iterative procedure to find the correct phase at each node. The computer program developed to find this solution has been integrated with the NASA/Lewis flow/trajectory code LEWICE.

9. Research in applied mathematics, numerical analysis, and computer science

NASA Technical Reports Server (NTRS)

1984-01-01

Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

10. Magnetic error analysis of recycler pbar injection transfer line

SciTech Connect

Yang, M.J.; /Fermilab

2007-06-01

Detailed study of the Fermilab Recycler Ring anti-proton injection line became feasible with its BPM system upgrade, though the beamline has been in existence and operational since the year 2000. Previous attempts were not fruitful due to limitations in the BPM system. Among the objectives are the assessment of the beamline optics and of the presence of error fields. In particular, the field region of the permanent Lambertson magnets at both ends of the R22 transfer line will be scrutinized.

11. Digital floodplain mapping and an analysis of errors involved

USGS Publications Warehouse

Hamblen, C.S.; Soong, D.T.; Cai, X.

2007-01-01

Mapping floodplain boundaries using geographical information systems (GIS) and digital elevation models (DEMs) was completed in a recent study. However convenient this method may appear at first, the resulting maps can have unaccounted-for errors. Mapping the floodplain using GIS is faster than mapping manually, and digital mapping is expected to be more common in the future. When mapping is done manually, the experience and judgment of the engineer or geographer completing the mapping and the contour resolution of the surface topography are critical in determining the floodplain and floodway boundaries between cross sections. When mapping is done digitally, discrepancies can result from the use of the computing algorithm and digital topographic datasets. Understanding the possible sources of error and how the error accumulates through these processes is necessary for the validation of automated digital mapping. This study will evaluate the procedure of floodplain mapping using GIS and a 3 m by 3 m resolution DEM with a focus on the accumulated errors involved in the process. Within the GIS environment of this mapping method, the procedural steps of most interest, initially, include: (1) the accurate spatial representation of the stream centerline and cross sections, (2) properly using a triangulated irregular network (TIN) model for the flood elevations of the studied cross sections, the interpolated elevations between them and the extrapolated flood elevations beyond the cross sections, and (3) the comparison of the flood elevation TIN with the ground elevation DEM, from which the appropriate inundation boundaries are delineated. The study area involved is of relatively low topographic relief, thereby making it representative of common suburban development and a prime setting for the need of accurately mapped floodplains. This paper emphasizes the impacts of integrating supplemental digital terrain data between cross sections on floodplain delineation
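
Step (3) of the workflow, comparing the flood-elevation surface with the ground DEM, reduces to a raster comparison; a toy sketch with hypothetical elevations:

```python
import numpy as np

# ground elevation [m] and interpolated water-surface elevation [m]
# on the same 3x3 grid (all values invented for illustration)
dem = np.array([[10.0, 10.5, 11.2],
                [ 9.8, 10.1, 11.0],
                [ 9.5,  9.9, 10.8]])
flood = np.array([[10.2, 10.2, 10.2],
                  [10.1, 10.1, 10.1],
                  [10.0, 10.0, 10.0]])

# cells where the flood surface exceeds the ground surface are inundated
inundated = flood > dem
print(inundated.sum())  # number of flooded cells
```

Errors in either surface (TIN interpolation of flood elevations, DEM accuracy) propagate directly through this comparison into the delineated boundary, which is why the paper tracks how they accumulate.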

12. Alignment error analysis of the snapshot imaging polarimeter.

PubMed

Liu, Zhen; Yang, Wei-Feng; Ye, Qing-Hao; Hong, Jin; Gong, Guan-Yuan; Zheng, Xiao-Bing

2016-03-10

A snapshot imaging polarimeter (SIP) system is able to reconstruct two-dimensional spatial polarization information from a single interferogram. In this system, the alignment errors of the half-wave plate (HWP) and the analyzer have a predominant impact on the accuracies of the reconstructed complete Stokes parameters. A theoretical model for analyzing the alignment errors in the SIP system is presented in this paper. Based on this model, the accuracy of the reconstructed Stokes parameters has been evaluated using different incident states of polarization. An optimum thickness of the Savart plate for alleviating the perturbation introduced by the alignment error of the HWP is found by using the condition number of the system measurement matrix as the objective function in a minimization procedure. The result shows that when the thickness of the Savart plate is 23 mm, corresponding to a condition number of 2.06, the precision of the SIP system can reach 0.21% at a 1° alignment tolerance of the HWP. PMID:26974785
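
Using the condition number of a measurement matrix as a design objective can be illustrated with a generic rotating-analyzer polarimeter (an analogy only, not the SIP's actual Savart-plate system matrix; the angle sets are arbitrary):

```python
import numpy as np

def meas_matrix(angles_deg):
    """Rows of a rotating-analyzer measurement matrix for the linear Stokes
    components: I(theta) = 0.5*(S0 + S1*cos 2theta + S2*sin 2theta)."""
    th = np.deg2rad(np.asarray(angles_deg, dtype=float))
    return 0.5*np.column_stack([np.ones_like(th), np.cos(2*th), np.sin(2*th)])

def cond(angles_deg):
    """Condition number of the measurement matrix for a set of analyzer angles."""
    return np.linalg.cond(meas_matrix(angles_deg))

c_good = cond([0, 60, 120])   # evenly spread analyzer angles
c_bad = cond([0, 10, 20])     # nearly collinear rows: errors amplified
print(c_good, c_bad)          # sqrt(2) versus a much larger value
```

A small condition number means measurement perturbations (such as alignment errors) are only weakly amplified when the matrix is inverted to recover the Stokes parameters, which is why the paper minimizes it over the Savart plate thickness.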

13. PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.

SciTech Connect

PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.

1999-03-29

All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials have been stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of the ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.

14. Probability analysis of position errors using uncooled IR stereo camera

Oh, Jun Ho; Lee, Sang Hwa; Lee, Boo Hwan; Park, Jong-Il

2016-05-01

This paper analyzes the random behavior of 3D positions when tracking moving objects using an infrared (IR) stereo camera, and proposes a probability model of the 3D positions. The proposed probability model integrates two random error phenomena. One is the pixel quantization error, which is caused by the discrete sampling pixels used in estimating disparity values with a stereo camera. The other is the timing jitter, which results from the irregular acquisition timing of uncooled IR cameras. This paper derives a probability distribution function by combining the jitter model with the pixel quantization error. To verify the proposed probability function of 3D positions, experiments on tracking fast moving objects were performed using an IR stereo camera system. The 3D depths of the moving object are estimated by stereo matching and compared with the ground truth obtained by a laser scanner system. According to the experiments, the 3D depths of the moving object fall within the statistically reliable range derived from the proposed probability distribution. It is expected that the proposed probability model of 3D positions can be applied to various IR stereo camera systems that deal with fast moving objects.
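
The two error sources can be explored with a Monte Carlo sketch (the camera parameters, target speed and jitter level are hypothetical, and the uniform-quantization model is a standard simplification rather than the paper's derived distribution):

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical IR stereo rig and target
f_px, B = 800.0, 0.5        # focal length [px], baseline [m]
v, jitter_sd = 20.0, 2e-3   # target speed [m/s], acquisition-timing jitter [s]

def error_components(Z, n=100_000):
    """Monte Carlo std of the two error sources: integer-pixel disparity
    quantization and position error from irregular acquisition timing."""
    d = f_px*B/Z                                  # true disparity [px]
    dq = rng.uniform(-0.5, 0.5, n)                # quantization error [px]
    depth_err = f_px*B/(d + dq) - Z               # resulting depth error [m]
    jitter_err = v*rng.normal(0.0, jitter_sd, n)  # object moved during jitter [m]
    return depth_err.std(), jitter_err.std()

q5, j5 = error_components(5.0)
q20, j20 = error_components(20.0)
print(q5, q20, j5)  # quantization error grows roughly as Z^2; jitter term does not
```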

15. Analysis of errors in medical rapid prototyping models.

PubMed

Choi, J Y; Choi, J H; Kim, N K; Kim, Y; Lee, J K; Kim, M K; Lee, J H; Kim, M J

2002-02-01

Rapid prototyping (RP) is a relatively new technology that produces physical models by selectively solidifying UV-sensitive liquid resin using a laser beam. The technology has gained a great amount of attention, particularly in oral and maxillofacial surgery. An important issue in RP applications in this field is how to obtain RP models of the required accuracy. We investigated errors generated during the production of medical RP models, and identified the factors that caused dimensional errors in each production phase. The errors were mainly due to the volume-averaging effect, threshold value, and difficulty in the exact replication of landmark locations. We made 16 linear measurements on a dry skull, a replicated three-dimensional (3-D) visual (STL) model, and an RP model. The results showed that the absolute mean deviation between the original dry skull and the RP model over the 16 linear measurements was 0.62 +/- 0.35 mm (0.56 +/- 0.39%), which is smaller than values reported in previous studies. A major emphasis is placed on the dumb-bell effect. Classifying measurements as internal and external measurements, we observed that the effect of an inadequate threshold value differs with the type of measurement.

16. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

ERIC Educational Resources Information Center

Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

2013-01-01

In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…

17. NA-NET numerical analysis net

SciTech Connect

Dongarra, J. |; Rosener, B.

1991-12-01

This report describes a facility called NA-NET, created to allow numerical analysts (na) an easy method of communicating with one another. The main advantage of the NA-NET is uniformity of addressing. All mail is addressed to the Internet host na-net.ornl.gov at Oak Ridge National Laboratory. Hence, members of the NA-NET do not need to remember complicated addresses or even where a member is currently located. As long as moving members change their e-mail address in the NA-NET, everything works smoothly. The NA-NET system is currently located at Oak Ridge National Laboratory. It is running on the same machine that serves netlib, a separate facility that distributes mathematical software via electronic mail. For more information on netlib, send the one-line message "send index" to netlib{at}ornl.gov. The following report describes the current NA-NET system from both a user's perspective and an implementation perspective. Currently, there are over 2100 members in the NA-NET. An average of 110 mail messages pass through this facility daily.

18. NA-NET numerical analysis net

SciTech Connect

Dongarra, J. (Dept. of Computer Science; Oak Ridge National Lab., TN); Rosener, B. (Dept. of Computer Science)

1991-12-01

This report describes a facility called NA-NET, created to allow numerical analysts (na) an easy method of communicating with one another. The main advantage of the NA-NET is uniformity of addressing. All mail is addressed to the Internet host na-net.ornl.gov at Oak Ridge National Laboratory. Hence, members of the NA-NET do not need to remember complicated addresses or even where a member is currently located. As long as moving members change their e-mail address in the NA-NET, everything works smoothly. The NA-NET system is currently located at Oak Ridge National Laboratory. It is running on the same machine that serves netlib, a separate facility that distributes mathematical software via electronic mail. For more information on netlib, send the one-line message "send index" to netlib{at}ornl.gov. The following report describes the current NA-NET system from both a user's perspective and an implementation perspective. Currently, there are over 2100 members in the NA-NET. An average of 110 mail messages pass through this facility daily.

19. Close-range radar rainfall estimation and error analysis

van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

2012-04-01

It is well-known that quantitative precipitation estimation (QPE) is affected by many sources of error. The most important of these are 1) radar calibration, 2) wet radome attenuation, 3) rain attenuation, 4) vertical profile of reflectivity, 5) variations in drop size distribution, and 6) sampling effects. The study presented here is an attempt to separate and quantify these sources of error. For this purpose, QPE is performed very close to the radar (~1-2 km) so that 3), 4), and 6) will only play a minor role. Error source 5) can be corrected for because of the availability of two disdrometers (instruments that measure the drop size distribution). A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm in De Bilt, The Netherlands is analyzed. Radar, rain gauge, and disdrometer data from De Bilt are used for this. It is clear from the analyses that without any corrections, the radar severely underestimates the total rain amount (only 25 mm). To investigate the effect of wet radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation up to ~4 dB. The calibration of the radar is checked by looking at received power from the sun. This turns out to cause another 1 dB of underestimation. The effect of variability of drop size distributions is shown to cause further underestimation. Correcting for all of these effects yields a good match between radar QPE and gauge measurements.
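
The magnitude of such dB-scale corrections can be illustrated with a Marshall-Palmer Z-R conversion (the Z = 200 R^1.6 relation and the sample reflectivities are assumptions for illustration; the study itself used disdrometer-derived drop size distributions):

```python
import numpy as np

def rain_rate(dbz):
    """Convert reflectivity [dBZ] to rain rate [mm/h] via Z = 200 R^1.6."""
    z = 10.0**(dbz/10.0)              # reflectivity factor [mm^6 m^-3]
    return (z/200.0)**(1.0/1.6)

dbz = np.array([28.0, 35.0, 40.0])     # raw sample reflectivities [dBZ]
corrected = dbz + 4.0 + 1.0            # +4 dB wet radome, +1 dB calibration
total_raw = rain_rate(dbz).sum()
total_corr = rain_rate(corrected).sum()
print(total_raw, total_corr)  # a uniform 5 dB shift scales R by 10**(5/16), about 2.05x
```

This factor-of-two effect on rain rate is consistent with the abstract's report that the uncorrected radar retrieved only about half the gauge-measured total.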

20. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

NASA Technical Reports Server (NTRS)

LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

2011-01-01

This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

1. Numerical analysis of the orthogonal descent method

SciTech Connect

Shokov, V.A.; Shchepakin, M.B.

1994-11-01

The author of the orthogonal descent method has been testing it since 1977. The results of these tests have only strengthened the need for further analysis and development of orthogonal descent algorithms for various classes of convex programming problems. Systematic testing of orthogonal descent algorithms and comparison of test results with other nondifferentiable optimization methods was conducted at TsEMI RAN in 1991-1992 using the results.

2. Numerical analysis and design of upwind sails

Shankaran, Sriram

The use of computational techniques that solve the Euler or the Navier-Stokes equations is increasingly common among competing syndicates in races like the America's Cup. For sail configurations, this desire stems from a need to understand the influence of the mast on the boundary layer and pressure distribution on the main sail, the effect of camber and planform variations of the sails on the driving and heeling forces produced by them, and the interaction of the boundary layer profile of the air over the surface of the water and the gap between the boom and the deck on the performance of the sail. Traditionally, experimental methods along with potential flow solvers have been widely used to quantify these effects. While these approaches are invaluable either for validation purposes or during the early stages of design, the potential advantages of high-fidelity computational methods make them attractive candidates during the later stages of the design process. The aim of this study is to develop and validate numerical methods that solve the inviscid field equations (Euler) to simulate and design upwind sails. The three-dimensional compressible Euler equations are modified using the idea of artificial compressibility and discretized on unstructured tetrahedral grids to provide estimates of lift and drag for upwind sail configurations. Convergence acceleration techniques like multigrid and residual averaging are used along with parallel computing platforms to enable these simulations to be performed in a few minutes. To account for the elastic nature of the sail cloth, this flow solver was coupled to NASTRAN to provide estimates of the deflections caused by the pressure loading. The results of this aeroelastic simulation showed that the major effect of the sail elasticity was in altering the pressure distribution around the leading edge of the head and the main sail. Adjoint based design methods were developed next and were used to induce changes to the camber

3. Numerical bifurcation analysis of immunological models with time delays

Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady

2005-12-01

In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.
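Numerical work on DDEs begins with time integration that respects the delayed state. Below is a minimal, hedged sketch: a fixed-step Euler scheme with a history buffer for the delay logistic (Hutchinson) equation, which stands in for the hepatitis B model of the paper; all parameter values are illustrative, not taken from the study.

```python
# Fixed-step Euler integration of the delay logistic equation
# x'(t) = r*x(t)*(1 - x(t - tau)/K), a simple stand-in for the DDE
# models discussed above (parameters are illustrative only).
def integrate_delay_logistic(r=1.8, K=1.0, tau=1.0, x0=0.1, dt=0.001, t_end=50.0):
    n_delay = int(round(tau / dt))      # steps spanning one delay interval
    n_steps = int(round(t_end / dt))
    x = [x0] * (n_delay + 1)            # constant history on [-tau, 0]
    for _ in range(n_steps):
        x_now, x_lag = x[-1], x[-1 - n_delay]   # state at t and at t - tau
        x.append(x_now + dt * r * x_now * (1.0 - x_lag / K))
    return x

traj = integrate_delay_logistic()
tail = traj[-20000:]                    # late-time behaviour, transient discarded
# For r*tau > pi/2 the equilibrium x = K loses stability in a Hopf
# bifurcation, so the tail should oscillate around K = 1.
print(min(tail), max(tail))
```

This is exactly the kind of behaviour a bifurcation analysis tracks: as `r*tau` crosses pi/2, the steady state gives way to a periodic orbit.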

4. Analysis of star camera errors in GRACE data and their impact on monthly gravity field models

Inácio, Pedro; Ditmar, Pavel; Klees, Roland; Farahani, Hassan Hashemi

2015-06-01

Star cameras (SCs) on board the GRACE satellites provide information about the attitudes of the spacecraft. This information is needed to reduce the K-band ranging data to the centres of mass of the satellites. In this paper, we analyse GRACE SC errors using two months of real data from the primary and secondary SCs. We show that the errors consist of a harmonic component, which is highly correlated with the satellite's true anomaly, and a stochastic component. We build models of both error components and use these models for error propagation studies. Firstly, we analyse the propagation of SC errors into inter-satellite accelerations. A spectral analysis reveals that the stochastic component exceeds the harmonic component, except in the 3-10 mHz frequency band. In this band, which contains most of the geophysically relevant signal, the harmonic error component is larger than the random component. Secondly, we propagate SC errors into optimally filtered monthly mass anomaly maps and compare them with the total error. We find that SC errors account for about 18% of the total error. Moreover, gaps in the SC data series amplify the effect of SC errors by a factor of . Finally, an analysis of inter-satellite pointing angles for GRACE data between 2003 and 2010 reveals that inter-satellite ranging errors were exceptionally large from February 2003 to May 2003. During these months, SC noise is amplified by a factor of 3 and is a considerable source of error in monthly GRACE mass anomaly maps. In the context of future satellite gravity missions, the noise models developed in this paper may be valuable for mission performance studies.

5. ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.

USGS Publications Warehouse

Rosenfield, George H.; Fitzpatrick-Lins, Katherine

1984-01-01

Summary form only given. A classification error matrix typically contains the tabulated results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the correctly classified counts, and the usual designation of classification accuracy has been the total percent correct. The off-diagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a remotely sensed data problem involving two photointerpreters and four categories of classification indicated that there is no significant difference in interpretation between the two photointerpreters, but that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy that have already been presented in the remote sensing literature.
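The coefficient of agreement mentioned above is commonly computed as Cohen's kappa on the contingency table. A minimal sketch, with an invented 4x4 error matrix standing in for the photointerpretation data:

```python
# Cohen's kappa (coefficient of agreement) for a classification error
# matrix, treating it as a contingency table as described above.
# The 4x4 counts below are invented for illustration only.
def cohens_kappa(matrix):
    n = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(len(matrix))) / n    # observed agreement
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(col) for col in zip(*matrix)]
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / n**2  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

confusion = [
    [50,  3,  2,  0],
    [ 4, 40,  5,  1],
    [ 2,  6, 35,  2],
    [ 0,  1,  3, 46],
]
print(round(cohens_kappa(confusion), 3))
```

Kappa discounts the agreement expected by chance alone, which is why it is preferred over raw percent correct when the off-diagonal structure matters.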

6. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

The American Association of Physicists in Medicine Task Group 43 report (AAPM TG-43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and OneDose MOSFET detectors† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainties associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and Tandem effect. Absolute dose calculations for clinical use are based on the Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated by the low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm

7. Numerical Uncertainty Quantification for Radiation Analysis Tools

NASA Technical Reports Server (NTRS)

Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

2007-01-01

Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used in radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to obtain an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield-thickness spatial grids.
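The grid convergence test described above can be sketched as follows. The exponential dose-depth curve, the linear interpolation, and the grid sizes are assumptions for illustration only, not the paper's transport results; the point is that refining the thickness grid makes the interpolation error shrink at a predictable rate.

```python
import math

def dose(x):
    # illustrative attenuation curve standing in for a tabulated
    # dose-vs-depth result from a transport code
    return math.exp(-0.5 * x)

def max_interp_error(n_thicknesses, x_max=10.0, n_probe=1000):
    # tabulate the "expensive" dose at n_thicknesses shield depths,
    # then measure the worst linear-interpolation error in between
    h = x_max / (n_thicknesses - 1)
    xs = [i * h for i in range(n_thicknesses)]
    worst = 0.0
    for j in range(n_probe + 1):
        x = x_max * j / n_probe
        k = min(int(x / h), n_thicknesses - 2)   # bracketing grid interval
        t = (x - xs[k]) / h
        approx = (1 - t) * dose(xs[k]) + t * dose(xs[k + 1])
        worst = max(worst, abs(approx - dose(x)))
    return worst

errors = [max_interp_error(n) for n in (5, 9, 17, 33)]
for n, e in zip((5, 9, 17, 33), errors):
    print(n, e)
```

For linear interpolation of a smooth curve the worst error scales like the square of the grid spacing, so each doubling of the thickness count should cut the error by roughly a factor of four; the grid is "converged" once the error falls below the accuracy actually required.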

8. A numerical comparison of sensitivity analysis techniques

SciTech Connect

Hamby, D.M.

1993-12-31

Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequences of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare the parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.
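One of the simpler techniques compared in studies like this is a one-at-a-time normalized sensitivity ranking. A hedged sketch follows; the toy concentration model and its parameter values are invented for illustration and are not Hamby's tritium dosimetry model.

```python
import math

def model(p):
    # toy air-concentration estimate: release rate / wind speed, with decay
    return p["Q"] / p["u"] * math.exp(-p["lam"] * p["t"])

base = {"Q": 100.0, "u": 2.0, "lam": 0.1, "t": 3.0}

def normalized_sensitivity(name, rel_step=1e-6):
    # finite-difference estimate of (dy/y) / (dx/x), the dimensionless
    # relative sensitivity of the output to parameter `name`
    y0 = model(base)
    bumped = dict(base)
    bumped[name] *= 1.0 + rel_step
    return (model(bumped) - y0) / y0 / rel_step

# rank parameters by the magnitude of their relative sensitivity
ranking = sorted(base, key=lambda n: -abs(normalized_sensitivity(n)))
for n in ranking:
    print(n, round(normalized_sensitivity(n), 3))
```

For this multiplicative model the ranking is exact: output scales one-for-one with Q and inversely with u (|S| = 1), while lam and t enter only through the decay exponent (|S| = lam*t = 0.3), so Q and u dominate.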

9. Analysis of measured data of human body based on error correcting frequency

Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

2014-04-01

Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed using error frequency and the analysis-of-variance method of mathematical statistics. The paper also determines the accuracy of the measured data and the difficulty of measuring particular parts of the human body, studies the causes of data errors, and summarizes the key points for minimizing errors. By analysing the measured data on the basis of error frequency, it provides reference material for the development of the garment industry.
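The analysis-of-variance step mentioned above can be sketched with a one-way ANOVA F statistic computed from scratch; the three groups of waist measurements below are made-up sample data, not the paper's measurements.

```python
# One-way analysis of variance on repeated body measurements, in the
# spirit of the error analysis described above. Sample data invented.
def one_way_anova_F(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# e.g. waist circumference (cm) measured by three different operators
groups = [[68.1, 68.4, 67.9, 68.2],
          [68.9, 69.2, 69.0, 68.8],
          [68.0, 68.3, 68.1, 67.8]]
print(round(one_way_anova_F(groups), 2))
```

A large F relative to the critical value of the F(k-1, n-k) distribution indicates that differences between operators exceed the scatter within each operator's repeats, i.e. a systematic measurement error.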

10. An error analysis of higher-order finite-element methods: effect of degenerate coupling on simulation of elastic wave propagation

Hasegawa, Kei; Geller, Robert J.; Hirabayashi, Nobuyasu

2016-06-01

We present a theoretical analysis of the error of synthetic seismograms computed by higher-order finite-element methods (ho-FEMs). We show the existence of a previously unrecognized type of error due to degenerate coupling between waves with the same frequency but different wavenumbers. These results are confirmed by simple numerical experiments using the spectral element method as an example of ho-FEMs. Errors of the type found by this study may occur generally in applications of ho-FEMs.

11. Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.

ERIC Educational Resources Information Center

Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki

2000-01-01

Describes a new component called "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The WEAM can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)

12. Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students

ERIC Educational Resources Information Center

Muzangwa, Jonatan; Chifamba, Peter

2012-01-01

This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 B.Ed. Mathematics students at Great Zimbabwe University. Data were gathered through two exercises on Calculus 1 and 2. The analysis of the test results showed that a majority of the errors were due…

13. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

ERIC Educational Resources Information Center

Jennrich, Robert I.

2008-01-01

The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
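A finite-sample cousin of the infinitesimal jackknife is the ordinary delete-one jackknife, which is easy to sketch. The data below are invented; the sanity check exploits the fact that for the sample mean the jackknife standard error coincides with the classical s/sqrt(n) formula.

```python
def jackknife_se(data, stat):
    # delete-one jackknife: recompute the statistic n times, each time
    # leaving out one observation, then scale the spread of the replicates
    n = len(data)
    reps = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    mean_rep = sum(reps) / n
    return ((n - 1) / n * sum((r - mean_rep) ** 2 for r in reps)) ** 0.5

data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9]
mean = lambda xs: sum(xs) / len(xs)
se_jack = jackknife_se(data, mean)

# sanity check: for the mean, the jackknife SE equals s / sqrt(n)
m = mean(data)
s2 = sum((x - m) ** 2 for x in data) / (len(data) - 1)
se_classic = (s2 / len(data)) ** 0.5
print(se_jack, se_classic)
```

The appeal, as in the abstract above, is generality: `stat` can be any smooth function of the sample, including fitted covariance-structure parameters, with no distributional assumptions beyond those needed for consistency.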

14. Analysis and Correction of Systematic Height Model Errors

Jacobsen, K.

2016-06-01

The geometry of digital height models (DHMs) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. Today the image orientation is typically available in the form of rational polynomial coefficients (RPCs). Usually a bias correction of the RPCs based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. In some cases, e.g. for small base lengths, such an image orientation does not reach the accuracy that height models could otherwise achieve. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and an attitude recording rate of only 4 Hz, which may not be sufficient. Zhang et al. 2015 tried to improve the attitude based on the colour sensor bands of ZY-3, but the colour images, like detailed satellite orientation information, are not always available. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with a small base length; the small base length amplifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCPs), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached with the support of reference height models. As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are

15. Carriage Error Identification Based on Cross-Correlation Analysis and Wavelet Transformation

PubMed Central

Mu, Donghui; Chen, Dongju; Fan, Jinwei; Wang, Xiaofeng; Zhang, Feihu

2012-01-01

This paper proposes a novel method for identifying carriage errors. A general mathematical model of a guideway system is developed based on the multi-body system method. With the proposed model, most error sources in the guideway system can be measured. The flatness of a workpiece measured by the PGI1240 profilometer is represented by a wavelet. Cross-correlation analysis was performed to identify the error sources of the carriage. The error model is developed from experimental results on the low-frequency components of the signals. With the use of wavelets, the identification precision of the test signals is very high. PMID:23012558

16. Analysis and Evaluation of Error-Proof Systems for Configuration Data Management in Railway Signalling

Shimazoe, Toshiyuki; Ishikawa, Hideto; Takei, Tsuyoshi; Tanaka, Kenji

Recent train protection systems such as ATC require large amounts of low-level configuration data compared with conventional systems, so management of the configuration data is becoming more important than before. Because of this, the authors developed an error-proof system focusing on human operations in configuration data management. This error-proof system has already been introduced to the Tokaido Shinkansen ATC data management system. However, as the effectiveness of the system has not been presented objectively, its full perspective is not clear. To clarify its effectiveness, this paper analyses the error-proofing cases introduced in the system, using the concept of QFD and the error-proofing principles. From this analysis, the following methods of evaluation for error-proof systems are proposed: metrics to review the rationality of required qualities, provided by arranging the required qualities according to hazard levels and work phases; and metrics to evaluate error-proof systems, provided to improve their reliability effectively by mapping the error-proofing principles onto the error-proofing cases applied according to the required qualities and the corresponding hazard levels. In addition, these objectively analysed error-proofing cases can be used as an error-proofing-case database or as guidelines for safer HMI design, especially for data management.

17. Analysis of Naming Errors during Cortical Stimulation Mapping: Implications for Models of Language Representation

PubMed Central

Corina, David P.; Loudermilk, Brandon C.; Detwiler, Landon; Martin, Richard F.; Brinkley, James F.; Ojemann, George

2011-01-01

This study reports on the characteristics and distribution of naming errors of patients undergoing cortical stimulation mapping (CSM). During the procedure, electrical stimulation is used to induce temporary functional lesions and locate ‘essential’ language areas for preservation. Under stimulation, patients are shown slides of common objects and asked to name them. Cortical stimulation can lead to a variety of naming errors. In the present study, we aggregate errors across patients to examine the neuroanatomical correlates and linguistic characteristics of six common errors: semantic paraphasias, circumlocutions, phonological paraphasias, neologisms, performance errors, and no-response errors. Aiding analysis, we relied on a suite of web-based querying and imaging tools that enabled the summative mapping of normalized stimulation sites. Errors were visualized and analyzed by type and location. We provide descriptive statistics to characterize the commonality of errors across patients and location. The errors observed suggest a widely distributed and heterogeneous cortical network that gives rise to differential patterning of paraphasic errors. Data are discussed in relation to emerging models of language representation that honor distinctions between frontal, parietal, and posterior temporal dorsal implementation systems and ventral-temporal lexical semantic and phonological storage and assembly regions; the latter of which may participate both in language comprehension and production. PMID:20452661

18. Error analysis of empirical ocean tide models estimated from TOPEX/POSEIDON altimetry

Desai, Shailen D.; Wahr, John M.; Chao, Yi

1997-11-01

An error budget is proposed for the TOPEX/POSEIDON (T/P) empirical ocean tide models estimated during the primary mission. The error budget evaluates the individual contribution of errors in each of the altimetric range corrections, orbit errors caused by errors in the background ocean tide potential, and errors caused by the general circulation of the oceans, to errors in the ocean tide models of the eight principal diurnal and semidiurnal tidal components, and the two principal long-period tidal components. The effect of continually updating the T/P empirical ocean tide models during the primary T/P mission is illustrated through tide gauge comparisons and then used to predict the impact of further updates during the extended mission. Both the tide gauge comparisons and the error analysis predict errors in the tide models for the eight principal diurnal and semidiurnal constituents to be of the order of 2-3 cm root-sum-square. The dominant source of errors in the T/P ocean tide models appears to be caused by the general circulation of the oceans observed by the T/P altimeter. Further updates of the T/P empirical ocean tide models during the extended mission should not provide significant improvements in the diurnal and semidiurnal ocean tide models but should provide significant improvements in the long-period ocean tide models, particularly in the monthly (Mm) tidal component.

19. Numerical analysis of Weyl's method for integrating boundary layer equations

NASA Technical Reports Server (NTRS)

Najfeld, I.

1982-01-01

A fast method for accurate numerical integration of Blasius equation is proposed. It is based on the limit interchange in Weyl's fixed point method formulated as an iterated limit process. Each inner limit represents convergence to a discrete solution. It is shown that the error in a discrete solution admits asymptotic expansion in even powers of step size. An extrapolation process is set up to operate on a sequence of discrete solutions to reach the outer limit. Finally, this method is extended to related boundary layer equations.
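An error expansion in even powers of the step size is exactly what Richardson extrapolation exploits: combining two discrete solutions at steps h and h/2 cancels the leading h^2 term. A minimal sketch, using the central difference for d/dx sin(x) at x = 1 as a stand-in for the discrete boundary-layer solutions in the abstract:

```python
import math

# Richardson extrapolation on a sequence of discrete solutions whose
# error expands in even powers of the step size, as described above.
def central_diff(f, x, h):
    # second-order approximation with an even-power error expansion
    return (f(x + h) - f(x - h)) / (2 * h)

x, exact = 1.0, math.cos(1.0)
h = 0.4
d_h = central_diff(math.sin, x, h)
d_h2 = central_diff(math.sin, x, h / 2)
extrapolated = (4 * d_h2 - d_h) / 3      # cancels the O(h^2) error term

print(abs(d_h - exact), abs(d_h2 - exact), abs(extrapolated - exact))
```

Halving h cuts the raw error by about a factor of four, while the extrapolated value jumps to fourth-order accuracy, which is why an extrapolation process over a sequence of discrete solutions converges so quickly to the outer limit.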

20. Error Analysis of Remotely-Acquired Mossbauer Spectra

NASA Technical Reports Server (NTRS)

Schaefer, Martha W.; Dyar, M. Darby; Agresti, David G.; Schaefer, Bradley E.

2005-01-01

On the Mars Exploration Rovers, Mossbauer spectroscopy has recently been called upon to assist in the task of mineral identification, a job for which it is rarely used in terrestrial studies. For example, Mossbauer data were used to support the presence of olivine in Martian soil at Gusev and jarosite in the outcrop at Meridiani. The strength (and uniqueness) of these interpretations lies in the assumption that peak positions can be determined with high degrees of both accuracy and precision. We summarize here what we believe to be the major sources of error associated with peak positions in remotely-acquired spectra, and speculate on their magnitudes. Our discussion here is largely qualitative because necessary background information on MER calibration sources, geometries, etc., has not yet been released to the PDS; we anticipate that a more quantitative discussion can be presented by March 2005.

1. Optical refractive synchronization: bit error rate analysis and measurement

Palmer, James R.

1999-11-01

This paper describes the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, it outlines how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber > 100 km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to show how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for the transmission of multiple OC-12 and OC-48 SONET signals sent over a fiber-optic cable of > 100 km. The recovery and transformation modules for the modification and transport of these SONET signals are described.
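A measured "zero bit error rate" is really a statistical upper bound: observing zero errors in n bits only bounds the true BER. A small, hedged sketch of the standard confidence-bound calculation (the line rate and observation time below are assumptions for illustration, not SilkRoad's test conditions):

```python
import math

# Confidence bound on bit error rate when zero errors are observed.
# With 0 errors in n transmitted bits, BER < -ln(1 - CL) / n at
# confidence level CL (from the binomial/Poisson zero-count bound).
def ber_upper_bound(n_bits, confidence=0.95):
    return -math.log(1.0 - confidence) / n_bits

# e.g. an OC-48 stream (~2.488 Gb/s) observed error-free for 60 seconds
n = int(2.488e9 * 60)
print(ber_upper_bound(n))          # ~2e-11 at 95% confidence
```

The familiar "rule of three" (BER < 3/n at 95% confidence) is the same bound with -ln(0.05) = 3.0 rounded; longer error-free observation windows tighten the claim proportionally.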

2. Error analysis and implementation issues for energy density probe

Locey, Lance L.; Woolford, Brady L.; Sommerfeldt, Scott D.; Blotter, Jonathan D.

2001-05-01

Previous research has demonstrated the utility of acoustic energy density measurements as a means to gain a greater understanding of acoustic fields. Three spherical energy density probe designs are under development. The first probe design has three orthogonal pairs of surface mounted microphones. The second probe design utilizes a similarly sized sphere with four surface mounted microphones. The four microphones are located at the origin and unit vectors of a Cartesian coordinate system, where the origin and the tips of the three unit vectors all lie on the surface of the sphere. The third probe design consists of a similarly sized sphere, again with four surface microphones, each placed at the vertices of a regular tetrahedron. The sensing elements of all three probes are Panasonic electret microphones. The work presented here will expand on previously reported work, and address bias errors, spherical scattering effects, and practical implementation issues. [Work supported by NASA.]

3. Measuring the impact of character recognition errors on downstream text analysis

Lopresti, Daniel

2008-01-01

Noise presents a serious challenge in optical character recognition, as well as in the downstream applications that make use of its outputs as inputs. In this paper, we describe a paradigm for measuring the impact of recognition errors on the stages of a standard text analysis pipeline: sentence boundary detection, tokenization, and part-of-speech tagging. Employing a hierarchical methodology based on approximate string matching for classifying errors, their cascading effects as they travel through the pipeline are isolated and analyzed. We present experimental results based on injecting single errors into a large corpus of test documents to study their varying impacts depending on the nature of the error and the character(s) involved. While most such errors are found to be localized, in the worst case some can have an amplifying effect that extends well beyond the site of the original error, thereby degrading the performance of the end-to-end system.
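The approximate-string-matching step of such a methodology can be sketched with a Levenshtein alignment that classifies each OCR discrepancy as a substitution, insertion, or deletion; the sample strings below are invented.

```python
# Levenshtein alignment for classifying OCR errors, in the spirit of
# the error-classification methodology described above.
def edit_ops(truth, ocr):
    m, n = len(truth), len(ocr)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1): d[i][0] = i
    for j in range(n + 1): d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if truth[i - 1] == ocr[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match / substitution
    # backtrace to count each operation type
    subs = ins = dels = 0
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (truth[i - 1] != ocr[j - 1]):
            subs += truth[i - 1] != ocr[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            dels += 1; i -= 1
        else:
            ins += 1; j -= 1
    return {"distance": d[m][n], "sub": subs, "ins": ins, "del": dels}

print(edit_ops("boundary detection", "bovndary detectio n"))
```

Classifying errors this way is what makes it possible to correlate, say, an inserted space (which splits a token) with downstream failures in tokenization and part-of-speech tagging.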

4. NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET

NASA Technical Reports Server (NTRS)

Kumar, A.

1994-01-01

The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.

5. Errors associated with metabolic control analysis. Application of Monte-Carlo simulation of experimental data.

PubMed

Ainscow, E K; Brand, M D

1998-09-21

The errors associated with experimental application of metabolic control analysis are difficult to assess. In this paper, we give examples where Monte-Carlo simulations of published experimental data are used in error analysis. Data were simulated according to the mean and error obtained from experimental measurements, and the simulated data were used to calculate control coefficients. Repeating the simulation 500 times allowed an estimate to be made of the error implicit in the calculated control coefficients. In the first example, state 4 respiration of isolated mitochondria, Monte-Carlo simulations based on the system elasticities were performed. The simulations gave error estimates similar to the values reported within the original paper and those derived from a sensitivity analysis of the elasticities. This demonstrated the validity of the method. In the second example, state 3 respiration of isolated mitochondria, Monte-Carlo simulations were based on measurements of intermediates and fluxes. A key feature of this simulation was that the distribution of the simulated control coefficients did not follow a normal distribution, despite simulation of the original data being based on normal distributions. Consequently, the error calculated using simulation was greater and more realistic than the error calculated directly by averaging the original results. The Monte-Carlo simulations are also demonstrated to be useful in experimental design. The individual data points that should be repeated in order to reduce the error in the control coefficients can be highlighted.
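The simulation loop described above can be sketched as follows. The fluxes, their errors, and the derived ratio are invented stand-ins for the paper's measured intermediates and control coefficients; the pattern of resampling inputs and collecting the derived quantity is the point.

```python
import random, statistics

# Monte-Carlo error propagation in the spirit of the paper above:
# simulate each measured quantity from its reported mean +/- SD,
# recompute a derived coefficient each time, and take the spread of
# the replicates as its error. All numbers are illustrative.
random.seed(42)

def flux_ratio_error(n_sim=500):
    reps = []
    for _ in range(n_sim):
        flux_a = random.gauss(12.0, 0.8)     # measured flux, mean +/- SD
        flux_b = random.gauss(30.0, 1.5)
        reps.append(flux_a / flux_b)         # derived "control" quantity
    return statistics.mean(reps), statistics.stdev(reps)

mean_c, sd_c = flux_ratio_error()
print(round(mean_c, 3), round(sd_c, 3))
```

Because the derived quantity is a nonlinear function of the inputs, its replicate distribution need not be normal even when the inputs are, which is exactly the effect the paper reports; inspecting the replicates directly avoids that pitfall of linear error propagation.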

6. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

PubMed Central

Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

2015-01-01

Background: A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods: This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and the detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate human error probability. Results: The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion: The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks, are provided. PMID:27014485
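A hedged sketch of the core SPAR-H arithmetic: a nominal human error probability is scaled by the product of performance shaping factor (PSF) multipliers, with the standard adjustment that keeps the result below 1 (SPAR-H prescribes the adjustment when several negative PSFs are present; here it is applied unconditionally for simplicity). The nominal HEP and PSF values below are illustrative, not the study's figures.

```python
# Sketch of a SPAR-H style human error probability calculation:
# HEP = NHEP * PSF_composite / (NHEP * (PSF_composite - 1) + 1),
# which asymptotes to 1 instead of exceeding it for large PSFs.
def spar_h_hep(nominal_hep, psf_multipliers):
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    return nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)

# e.g. an action task (illustrative nominal HEP 0.001) under high
# stress (x2), poor ergonomics (x10) and marginal procedures (x5)
hep = spar_h_hep(0.001, [2, 10, 5])
print(round(hep, 4))
```

With all PSFs nominal (multiplier 1) the formula returns the nominal HEP unchanged, and no combination of multipliers can push the probability past 1.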

7. New dimension analyses with error analysis for quaking aspen and black spruce

NASA Technical Reports Server (NTRS)

Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

1987-01-01

Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.

8. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

NASA Technical Reports Server (NTRS)

Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

1996-01-01

We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.

9. Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD

Yao, Yuan; Niu, Qunjie; Liang, Kun

2016-09-01

Brillouin lidar systems using a Fabry-Pérot (F-P) etalon and an Intensified Charge Coupled Device (ICCD) are capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major error sources are discussed: laser frequency instability, the calibration error of the F-P etalon, and random shot noise. Theoretical analysis combined with simulation results showed that the laser and F-P etalon introduce about 4 MHz of error into both the Brillouin shift and linewidth, and that random noise contributes more error to the linewidth than to the frequency shift. A comprehensive, comparative analysis of the overall errors under various conditions showed that colder ocean water (10 °C) is more accurately measured with the Brillouin linewidth, while warmer water (30 °C) is better measured with the Brillouin shift.
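The conversion from the ~4 MHz shift error quoted in the abstract to a temperature error follows from first-order error propagation through the Brillouin shift formula. A minimal sketch, with illustrative numbers that are assumptions (roughly water at 20 °C, 532 nm backscatter, and an order-of-magnitude shift-temperature sensitivity of ~10 MHz/°C), not values from the paper:

```python
import math

def brillouin_shift(n, v_sound, wavelength, theta=math.pi):
    """Brillouin frequency shift: nu_B = 2 n v_s sin(theta/2) / lambda."""
    return 2.0 * n * v_sound * math.sin(theta / 2.0) / wavelength

def temperature_error(freq_error_hz, sensitivity_hz_per_degc):
    """First-order propagation of a frequency error into a temperature
    error: dT = d(nu) / |d(nu)/dT|."""
    return freq_error_hz / abs(sensitivity_hz_per_degc)

# Illustrative parameters (assumed): refractive index ~1.33, sound
# speed ~1480 m/s, 532 nm laser, backscatter geometry (theta = pi).
nu_b = brillouin_shift(1.33, 1480.0, 532e-9)   # ~7.4 GHz

# The abstract's ~4 MHz shift error, with an assumed ~10 MHz/degC
# sensitivity, maps to a ~0.4 degC temperature error.
dT = temperature_error(4e6, 10e6)
```

The same propagation applies to the linewidth channel with its own sensitivity, which is why the two observables trade off differently in cold versus warm water.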

10. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

SciTech Connect

Estep, Donald

2015-11-30

This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

11. Confirmation of standard error analysis techniques applied to EXAFS using simulations

SciTech Connect

Booth, Corwin H; Hu, Yung-Jin

2009-12-14

Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance matrix representation of real parameter errors in typical fitting routines. Despite major advances in EXAFS analysis and in understanding all potential uncertainties, these methods are not routinely applied by all EXAFS users. Consequently, reported parameter errors are not reliable in many EXAFS studies in the literature. This situation has made many EXAFS practitioners leery of conventional error analysis applied to EXAFS data. However, conventional error analysis, if properly applied, can teach us more about our data, and even about the power and limitations of the EXAFS technique. Here, we describe the proper application of conventional error analysis to r-space fitting to EXAFS data. Using simulations, we demonstrate the veracity of this analysis by, for instance, showing that the number of independent data points from Stern's rule is balanced by the degrees of freedom obtained from a χ² statistical analysis. By applying such analysis to real data, we determine the quantitative effect of systematic errors. In short, this study is intended to remind the EXAFS community about the role of fundamental noise distributions in interpreting our final results.

12. Computerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems

PubMed Central

Schiff, G D; Amato, M G; Eguale, T; Boehne, J J; Wright, A; Koppel, R; Rashidee, A H; Elson, R B; Whitney, D L; Thach, T-T; Bates, D W; Seger, A C

2015-01-01

Importance Medication computerised provider order entry (CPOE) has been shown to decrease errors and is being widely adopted. However, CPOE also has potential for introducing or contributing to errors. Objectives The objectives of this study are to (a) analyse medication error reports where CPOE was reported as a ‘contributing cause’ and (b) develop ‘use cases’ based on these reports to test vulnerability of current CPOE systems to these errors. Methods A review of medication errors reported to the United States Pharmacopeia MEDMARX reporting system was made, and a taxonomy was developed for CPOE-related errors. For each error we evaluated what went wrong and why and identified potential prevention strategies and recurring error scenarios. These scenarios were then used to test vulnerability of leading CPOE systems, asking typical users to enter these erroneous orders to assess the degree to which these problematic orders could be entered. Results Between 2003 and 2010, 1.04 million medication errors were reported to MEDMARX, of which 63 040 were reported as CPOE related. A review of 10 060 CPOE-related cases was used to derive 101 codes describing what went wrong, 67 codes describing reasons why errors occurred, 73 codes describing potential prevention strategies and 21 codes describing recurring error scenarios. Ability to enter these erroneous order scenarios was tested on 13 CPOE systems at 16 sites. Overall, 298 (79.5%) of the erroneous orders could be entered, including 100 (28.0%) placed ‘easily’ and another 101 (28.3%) with only minor workarounds and no warnings. Conclusions and relevance Medication error reports provide valuable information for understanding CPOE-related errors. Reports were useful for developing taxonomy and identifying recurring errors to which current CPOE systems are vulnerable. Enhanced monitoring, reporting and testing of CPOE systems are important to improve CPOE safety. PMID:25595599

13. Numerical analysis of an H1-Galerkin mixed finite element method for time fractional telegraph equation.

PubMed

Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

2014-01-01

We discuss and analyze an H1-Galerkin mixed finite element (H1-GMFE) method for the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation to a lower-order coupled system and then formulate an H1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using finite difference methods and approximate the spatial direction by applying the H1-GMFE method. Based on the theoretical error analysis in the L2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we derive the optimal error results for the scalar unknown in the H1-norm. Moreover, we derive and analyze the stability of the H1-GMFE scheme and give a priori error estimates in the two- and three-dimensional cases. To verify our theoretical analysis, we present numerical results computed with a Matlab procedure.
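The abstract does not name the specific finite-difference formula for the Caputo derivative; a common choice in this setting is the L1 scheme, sketched below under that assumption. For a linear function the L1 scheme is exact, which gives a convenient check.

```python
import math

def caputo_l1(u_vals, tau, alpha):
    """L1 finite-difference approximation of the Caputo derivative of
    order alpha (0 < alpha < 1) at t_n = n*tau, given equally spaced
    samples u_vals = [u(0), u(tau), ..., u(n*tau)]."""
    n = len(u_vals) - 1
    c = tau ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for j in range(n):
        # L1 weights: b_j = (j+1)^(1-alpha) - j^(1-alpha)
        b_j = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)
        total += b_j * (u_vals[n - j] - u_vals[n - j - 1])
    return c * total

# Check against u(t) = t, whose exact Caputo derivative is
# t^(1-alpha) / Gamma(2-alpha); the L1 scheme is exact for linear u.
alpha, tau, n = 0.5, 0.01, 100
u = [k * tau for k in range(n + 1)]
t = n * tau
exact = t ** (1.0 - alpha) / math.gamma(2.0 - alpha)
approx = caputo_l1(u, tau, alpha)
```

For smoother nonlinear u the scheme carries the well-known O(tau^(2-alpha)) truncation error, which is what the space-time convergence analysis in the abstract has to combine with the spatial H1-GMFE error.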

15. Error analysis in the measurement of average power with application to switching controllers

NASA Technical Reports Server (NTRS)

Maisel, J. E.

1980-01-01

Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.
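The sampling part of this error mechanism can be illustrated with a toy calculation (assumed sinusoidal signal parameters, not the paper's chopper model): average power estimated as the mean of instantaneous v·i samples is exact when the observation window spans whole periods, and picks up a residual error otherwise.

```python
import math

def measured_power(V, I, theta, f, fs, n_samples):
    """Average power estimated from simultaneous samples of a sinusoidal
    voltage V*cos(wt) and current I*cos(wt - theta)."""
    acc = 0.0
    for n in range(n_samples):
        t = n / fs
        v = V * math.cos(2.0 * math.pi * f * t)
        i = I * math.cos(2.0 * math.pi * f * t - theta)
        acc += v * i
    return acc / n_samples

V, I, theta = 10.0, 2.0, math.pi / 3.0
true_p = 0.5 * V * I * math.cos(theta)   # (VI/2) cos(theta) = 5.0

# 1000 samples at 5 kHz of a 50 Hz signal = exactly 10 periods:
# the double-frequency term averages to zero and the estimate is exact.
p_good = measured_power(V, I, theta, f=50.0, fs=5000.0, n_samples=1000)

# 1025 samples = 10.25 periods: the partial period leaves a residual error.
p_bad = measured_power(V, I, theta, f=50.0, fs=5000.0, n_samples=1025)
```

The residual shrinks as the number of whole observation periods grows, which is consistent with the paper's finding that errors become excessive only when memory limits force too few (or badly truncated) observation periods.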

16. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

SciTech Connect

Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

2006-10-01

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

17. ACTION AND PHASE ANALYSIS TO DETERMINE SEXTUPOLE ERRORS IN RHIC AND THE SPS.

SciTech Connect

Cardona, J.; Peggs, S.; Satogata, T.; Tomas, R.

2003-05-12

Success in applying action and phase analysis to find linear errors in the RHIC interaction regions [1] has encouraged the development of a technique, based on action and phase analysis, to find nonlinear errors. In this paper we show the first attempt to measure the sextupole components at the RHIC interaction regions using the action and phase method. Experiments done by intentionally activating sextupoles in RHIC and in the SPS [2] are also analyzed with this method. First results have given values for the sextupole errors that are at least of the same order of magnitude as the values found by an alternate technique during the RHIC 2001 run [3].

18. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

NASA Technical Reports Server (NTRS)

Duda, David P.; Minnis, Patrick

2009-01-01

Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
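The two accuracy measures named in the abstract are standard scores computed from the 2x2 contingency table of a dichotomous forecast. A minimal sketch of their definitions (standard formulas, not code from the study):

```python
def contingency_scores(hits, false_alarms, misses, correct_negs):
    """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD)
    for a dichotomous yes/no forecast."""
    n = hits + false_alarms + misses + correct_negs
    pc = (hits + correct_negs) / n
    pod = hits / (hits + misses)                          # probability of detection
    pofd = false_alarms / (false_alarms + correct_negs)   # prob. of false detection
    hkd = pod - pofd
    return pc, hkd

def dichotomize(probs, obs, threshold):
    """Convert probabilistic forecasts to yes/no at a critical probability
    threshold (e.g. 0.5 or the climatological frequency) and score them."""
    a = b = c = d = 0
    for p, o in zip(probs, obs):
        yes = p >= threshold
        if yes and o:
            a += 1          # hit
        elif yes:
            b += 1          # false alarm
        elif o:
            c += 1          # miss
        else:
            d += 1          # correct negative
    return contingency_scores(a, b, c, d)

pc, hkd = contingency_scores(40, 10, 20, 30)
pc2, hkd2 = dichotomize([0.9, 0.2, 0.8, 0.1],
                        [True, False, False, False], 0.5)
```

Because HKD rewards detection and penalizes false detection symmetrically, a low threshold (such as the climatological frequency) tends to raise HKD, while PC is maximized near 0.5, matching the threshold behavior described in the abstract.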

19. Error Patterns Analysis of Hearing Aid and Cochlear Implant Users as a Function of Noise

PubMed Central

Chun, Hyungi; Ma, Sunmi; Chun, Youngmyoung

2015-01-01

Background and Objectives Not all impaired listeners may have the same speech perception ability although they will have similar pure-tone threshold and configuration. For this reason, the present study analyzes error patterns in the hearing-impaired compared to normal hearing (NH) listeners as a function of signal-to-noise ratio (SNR). Subjects and Methods Forty-four adults participated: 10 listeners with NH, 20 hearing aids (HA) users and 14 cochlear implants (CI) users. The Korean standardized monosyllables were presented as the stimuli in quiet and three different SNRs. Total error patterns were classified into types of substitution, omission, addition, fail, and no response, using stacked bar plots. Results Total error percent for the three groups significantly increased as the SNRs decreased. For error pattern analysis, the NH group showed substitution errors dominantly regardless of the SNRs compared to the other groups. Both the HA and CI groups had substitution errors that declined, while no response errors appeared as the SNRs increased. The CI group was characterized by lower substitution and higher fail errors than did the HA group. Substitutions of initial and final phonemes in the HA and CI groups were limited by place of articulation errors. However, the HA group had missed consonant place cues, such as formant transitions and stop consonant bursts, whereas the CI group usually had limited confusions of nasal consonants with low frequency characteristics. Interestingly, all three groups showed /k/ addition in the final phoneme, a trend that magnified as noise increased. Conclusions The HA and CI groups had their unique error patterns even though the aided thresholds of the two groups were similar. We expect that the results of this study will focus on high error patterns in auditory training of hearing-impaired listeners, resulting in reducing those errors and improving their speech perception ability. PMID:26771013

20. Quantitative analysis of numerical solvers for oscillatory biomolecular system models

PubMed Central

Quo, Chang F; Wang, May D

2008-01-01

Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges from 10^-15 to 10^10, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluation, partial derivative, LU decomposition, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with quantitative performance assessment metric, we show that it is possible
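The study compares MATLAB solvers (ode45 vs. ode15s); the stiffness phenomenon behind that comparison can be shown in a few lines of language-neutral code. This pure-Python toy (a stiff linear relaxation problem, not the Oregonator) contrasts an explicit and an implicit method at the same step size:

```python
import math

def explicit_euler(lam, h, steps, y0=1.0):
    """Forward Euler on y' = -lam*(y - cos t): unstable when lam*h > 2."""
    y, t = y0, 0.0
    for _ in range(steps):
        y = y + h * (-lam * (y - math.cos(t)))
        t += h
    return y

def implicit_euler(lam, h, steps, y0=1.0):
    """Backward Euler: y_{n+1} = (y_n + h*lam*cos(t_{n+1})) / (1 + h*lam),
    unconditionally stable for this problem."""
    y, t = y0, 0.0
    for _ in range(steps):
        t += h
        y = (y + h * lam * math.cos(t)) / (1.0 + h * lam)
    return y

lam, h, steps = 1000.0, 0.005, 200   # lam*h = 5: explicit Euler diverges
y_exp = explicit_euler(lam, h, steps)
y_imp = implicit_euler(lam, h, steps)
# y_imp tracks the slow solution y ~ cos(t) out to t = 1; y_exp blows up.
```

Nonstiff solvers like ode45 face the same stability restriction as the explicit method here, which is why the paper finds stiff solvers such as ode15s preferable for known stiff problems under tight tolerances.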

1. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

2012-12-01

This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression for the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.

2. Quantitative error analysis for computer assisted navigation: a feasibility study

PubMed Central

Güler, Ö.; Perwög, M.; Kral, F.; Schwarm, F.; Bárdosi, Z. R.; Göbel, G.; Freysinger, W.

2013-01-01

Purpose The benefit of computer-assisted navigation depends on the registration process, at which patient features are correlated to some preoperative imagery. The operator-induced uncertainty in localizing patient features – the User Localization Error (ULE) – is unknown and most likely dominates the application accuracy. This initial feasibility study aims at providing first data for ULE with a research navigation system. Methods Active optical navigation was done in CT-images of a plastic skull, an anatomic specimen (both with implanted fiducials) and a volunteer with anatomical landmarks exclusively. Each object was registered ten times with 3, 5, 7, and 9 registration points. Measurements were taken at 10 (anatomic specimen and volunteer) and 11 targets (plastic skull). The active NDI Polaris system was used under ideal working conditions (tracking accuracy 0.23 mm root mean square, RMS; probe tip calibration was 0.18 mm RMS). Variances of tracking along the principal directions were measured as 0.18 mm², 0.32 mm², and 0.42 mm². ULE was calculated from predicted application accuracy with isotropic and anisotropic models and from experimental variances, respectively. Results The ULE was determined from the variances as 0.45 mm (plastic skull), 0.60 mm (anatomic specimen), and 4.96 mm (volunteer). The predicted application accuracy did not yield consistent values for the ULE. Conclusions Quantitative data of application accuracy could be tested against prediction models with iso- and anisotropic noise models and revealed some discrepancies. This could potentially be due to the facts that navigation and one prediction model wrongly assume isotropic noise (tracking is anisotropic), while the anisotropic noise prediction model assumes an anisotropic registration strategy (registration is isotropic in typical navigation systems). The ULE data are presumably the first quantitative values for the precision of localizing anatomical landmarks and implanted fiducials

3. Generalized multiplicative error models: Asymptotic inference and empirical analysis

Li, Qian

This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with a linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimators for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
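The baseline MEM(1,1) recursion that the dissertation extends is easy to state concretely. A minimal simulation sketch (the exponential innovation is an illustrative unit-mean choice, not the dissertation's Zero-Augmented general F distribution):

```python
import random

def simulate_mem11(omega, alpha, beta, n, seed=0):
    """Simulate x_t = mu_t * eps_t with mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}
    and i.i.d. unit-mean positive innovations eps_t (exponential here)."""
    rng = random.Random(seed)
    mu = omega / (1.0 - alpha - beta)   # start at the unconditional mean
    xs = []
    for _ in range(n):
        eps = rng.expovariate(1.0)      # positive, E[eps] = 1
        x = mu * eps
        xs.append(x)
        mu = omega + alpha * x + beta * mu
    return xs

# With persistence alpha + beta < 1 the series is stationary with
# unconditional mean omega / (1 - alpha - beta) = 1.0 here.
xs = simulate_mem11(omega=0.1, alpha=0.1, beta=0.8, n=20000)
mean_x = sum(xs) / len(xs)
```

The Location MEM variant in the dissertation shifts this recursion by a positive lower bound, and the zero-augmented variant places a point mass at zero in the innovation distribution; both keep the multiplicative mean structure above.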

4. Error Analysis for Discontinuous Galerkin Method for Parabolic Problems

NASA Technical Reports Server (NTRS)

Kaneko, Hideaki

2004-01-01

In the proposal, the following three objectives are stated: (1) A p-version of the discontinuous Galerkin method for a one dimensional parabolic problem will be established. It should be recalled that the h-version in space was used for the discontinuous Galerkin method. An a priori error estimate as well as a posteriori estimate of this p-finite element discontinuous Galerkin method will be given. (2) The parameter alpha that describes the behavior of ||u_t(t)||_2 was computed exactly. This was made feasible because of the explicitly specified initial condition. For practical heat transfer problems, the initial condition may have to be approximated. Also, if the parabolic problem is proposed on a multi-dimensional region, the parameter alpha, for most cases, would be difficult to compute exactly even in the case that the initial condition is known exactly. The second objective of this proposed research is to establish a method to estimate this parameter. This will be done by computing two discontinuous Galerkin approximate solutions at two different time steps starting from the initial time and use them to derive alpha. (3) The third objective is to consider the heat transfer problem over a two dimensional thin plate. The technique developed by Vogelius and Babuska will be used to establish a discontinuous Galerkin method in which the p-element will be used for through thickness approximation. This h-p finite element approach, that results in a dimensional reduction method, was used for elliptic problems, but the application appears new for the parabolic problem. The dimension reduction method will be discussed together with the time discretization method.

5. Error analysis for reducing noisy wide-gap concentric cylinder rheometric data for nonlinear fluids - Theory and applications

NASA Technical Reports Server (NTRS)

Borgia, Andrea; Spera, Frank J.

1990-01-01

This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
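The power-law approximation mentioned above has a closed form: for a fluid with local flow index n = d ln(torque)/d ln(angular velocity), the shear rate at the inner cylinder of a wide-gap Couette cell is 2*Omega / (n*(1 - kappa^(2/n))), with kappa the radius ratio. A minimal sketch of this standard formula (not code from the paper):

```python
def powerlaw_shear_rate(omega, r_inner, r_outer, n):
    """Shear rate at the inner cylinder of a wide-gap concentric-cylinder
    viscometer under the Krieger-Elrod-Pawlowski 'power-law' approximation.
    omega: angular velocity (rad/s); n: local slope d ln(tau)/d ln(omega)."""
    kappa = r_inner / r_outer
    return 2.0 * omega / (n * (1.0 - kappa ** (2.0 / n)))

# Newtonian check (n = 1): reduces to the classical 2*Omega/(1 - kappa^2),
# i.e. 2*Omega*R2^2 / (R2^2 - R1^2).
omega, r1, r2 = 10.0, 0.01, 0.02
newtonian = powerlaw_shear_rate(omega, r1, r2, 1.0)
classic = 2.0 * omega * r2 ** 2 / (r2 ** 2 - r1 ** 2)

# Shear-thinning fluids (n < 1) concentrate shear at the bob, so the
# inferred shear rate exceeds the Newtonian value.
thinning = powerlaw_shear_rate(omega, r1, r2, 0.5)
```

In practice n is obtained by regressing log stress on log angular velocity, which is exactly where the measurement noise analyzed in the paper propagates into the recovered shear rate.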

6. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

NASA Technical Reports Server (NTRS)

Boussalis, Dhemetrios; Bayard, David S.

2013-01-01

G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/ proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to

7. Decimal Fraction Arithmetic: Logical Error Analysis and Its Validation.

ERIC Educational Resources Information Center

Standiford, Sally N.; And Others

This report illustrates procedures of item construction for addition and subtraction examples involving decimal fractions. Using a procedural network of skills required to solve such examples, an item characteristic matrix of skills analysis was developed to describe the characteristics of the content domain by projected student difficulties. Then…

8. An initial state perturbation experiment with the GISS model. [random error effects on numerical weather prediction models

NASA Technical Reports Server (NTRS)

Spar, J.; Notario, J. J.; Quirk, W. J.

1978-01-01

Monthly mean global forecasts for January 1975 have been computed with the Goddard Institute for Space Studies model from four slightly different sets of initial conditions - a 'control' state and three random perturbations thereof - to simulate the effects of initial state uncertainty on forecast quality. Differences among the forecasts are examined in terms of energetics, synoptic patterns and forecast statistics. The 'noise level' of the model predictions is depicted on global maps of standard deviations of sea level pressures, 500 mb heights and 850 mb temperatures for the set of four forecasts. Initial small-scale random errors do not appear to result in any major degradation of the large-scale monthly mean forecast beyond that generated by the model itself, nor do they appear to represent the major source of large-scale forecast error.

9. Simulated forecast error and climate drift resulting from the omission of the upper stratosphere in numerical models

NASA Technical Reports Server (NTRS)

Boville, Byron A.; Baumhefner, David P.

1990-01-01

Using an NCAR community climate model, Version I, the forecast error growth and the climate drift resulting from the omission of the upper stratosphere are investigated. In the experiment, the control simulation is a seasonal integration of a medium horizontal general circulation model with 30 levels extending from the surface to the upper mesosphere, while the main experiment uses an identical model, except that only the bottom 15 levels (below 10 mb) are retained. It is shown that both random and systematic errors develop rapidly in the lower stratosphere with some local propagation into the troposphere in the 10-30-day time range. The random growth rate in the troposphere in the case of the altered upper boundary was found to be slightly faster than that for the initial-condition uncertainty alone. However, this is not likely to make a significant impact in operational forecast models, because the initial-condition uncertainty is very large.

10. Numerical and semiclassical analysis of some generalized Casimir pistons

SciTech Connect

2009-05-15

The Casimir force due to a scalar field in a cylinder of radius r with a spherical cap of radius R > r is computed numerically in the world-line approach. A geometrical subtraction scheme gives the finite interaction energy that determines the Casimir force. The spectral function of convex domains is obtained from a probability measure on convex surfaces that is induced by the Wiener measure on the Brownian bridges of which the convex surfaces are the hulls. Due to reflection positivity, the vacuum force on the piston by a scalar field satisfying Dirichlet boundary conditions is attractive in these geometries, but the strength and short-distance behavior of the force depend strongly on the shape of the piston casing. For a cylindrical casing with a hemispherical head, the force on the piston at small piston elevation a does not depend on the dimensions of the casing, and the numerical results for the small-distance behavior of the force F_cas(a) are reproduced within statistical errors, whereas the proximity force approximation is off by one order of magnitude when R ≈ r.

11. Phase error analysis and compensation considering ambient light for phase measuring profilometry

Zhou, Ping; Liu, Xinran; He, Yi; Zhu, Tongjing

2014-04-01

The accuracy of a phase measuring profilometry (PMP) system based on the phase-shifting method is inevitably susceptible to the gamma non-linearity of the projector-camera pair and to uncertain ambient light. Although many gamma models and phase error compensation methods have been published, the effect of ambient light has remained unclear. In this paper, we perform theoretical analysis and experiments on phase error compensation that account for both gamma non-linearity and uncertain ambient light. First of all, a mathematical phase error model is proposed to explain in detail how the phase error arises. We show that the phase error is related not only to the gamma non-linearity of the projector-camera pair, but also to the ratio of intensity modulation to average intensity in the fringe patterns captured by the camera, a ratio that is affected by the ambient light. Subsequently, an accurate phase error compensation algorithm is derived from the mathematical model, which makes the relationship between phase error and ambient light explicit. Experimental results with a four-step phase-shifting PMP system show that the proposed algorithm alleviates the phase error effectively even in the presence of ambient light.
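The gamma-induced phase error described in this record can be reproduced in a few lines. The sketch below is a generic illustration of the mechanism (not the authors' compensation algorithm; all parameter values are made up): it recovers the phase from four fringe images shifted by π/2 and shows that a nonlinear gamma applied to the fringes biases the result.

```python
import math

def four_step_phase(I):
    # I = [I0, I1, I2, I3] captured at phase shifts 0, pi/2, pi, 3*pi/2.
    # For ideal fringes, I3 - I1 = 2b*sin(phi) and I0 - I2 = 2b*cos(phi).
    return math.atan2(I[3] - I[1], I[0] - I[2])

def captured(phi, gamma=2.2, ambient=0.0, a=0.5, b=0.4):
    # Ideal fringe a + b*cos(.), plus ambient light, distorted by the
    # projector-camera gamma (all values hypothetical).
    shifts = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
    return [(a + b * math.cos(phi + d) + ambient) ** gamma for d in shifts]

phi_true = 0.7
phi_ideal = four_step_phase(captured(phi_true, gamma=1.0))  # linear system: exact
phi_gamma = four_step_phase(captured(phi_true, gamma=2.2))  # gamma biases the phase
```

With gamma = 1 the four-step formula is exact; with a nonunity gamma the recovered phase is biased, which is the error the record's compensation algorithm targets.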

12. Error analysis of a direct current electromagnetic tracking system in digitizing 3-dimensional surface geometries.

PubMed

Milne, A D; Lee, J M

1999-01-01

The direct current electromagnetic tracking device has seen increasing use in biomechanics studies of joint kinematics and anatomical surface geometry. In these applications, a stylus is attached to a sensor to measure the spatial location of three-dimensional landmarks. Stylus calibration is performed by rotating the stylus about a fixed point in space and using regression analysis to determine the tip offset vector. Measurement errors can be induced via several pathways, including intrinsic system errors in sensor position or angle, and tip offset calibration errors. A detailed study was performed to determine the errors introduced in digitizing small surfaces with different stylus lengths (35, 55, and 65 mm) and approach angles (30 and 45 degrees) using a plastic calibration board and hemispherical models. Two-point discrimination errors increased to an average of 1.93 mm for a 254 mm step size. Rotation about a single point produced mean errors of 0.44 to 1.18 mm. Statistically significant differences in error were observed with increasing approach angles (p < 0.001). Errors of less than 6% were observed in determining the curvature of a 19 mm hemisphere. This study demonstrates that the "Flock of Birds" can be used as a digitizing tool with accuracy better than 0.76% over 254 mm step sizes. PMID:11143353

13. A stochastic dynamic model for human error analysis in nuclear power plants

Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavioral specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it incorporates concepts derived from fractal and chaos theory and suggests a re-evaluation of base theory regarding human error. The results of this research were based on comprehensive analysis of patterns of error, which exhibit the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for any other formula used to study the consequences of human error. The literature search on error showed the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from a steam generator tube rupture (SGTR) event simulation provided a direct application to aspects of control room operations in nuclear power plants. From this, a conceptual foundation based on understanding the patterns of human error can be gleaned, helping to reduce and prevent undesirable events.

14. A general numerical model for wave rotor analysis

NASA Technical Reports Server (NTRS)

Paxson, Daniel W.

1992-01-01

Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.

15. Error analysis and feasibility study of dynamic stiffness matrix-based damping matrix identification

Ozgen, Gokhan O.; Kim, Jay H.

2009-02-01

Developing a method to formulate a damping matrix that represents the actual spatial distribution and mechanism of damping of the dynamic system has been an elusive goal. The dynamic stiffness matrix (DSM)-based damping identification method proposed by Lee and Kim is attractive and promising because it identifies the damping matrix from the measured DSM without relying on any unfounded assumptions. However, in ensuing works it was found that damping matrices identified from the method had unexpected forms and showed traces of large variance errors. The causes and possible remedies of the problem are sought for in this work. The variance and leakage errors are identified as the major sources of the problem, which are then related to system parameters through numerical and experimental simulations. An improved experimental procedure is developed to reduce the effect of these errors in order to make the DSM-based damping identification method a practical option.

16. Systematic errors analysis for a large dynamic range aberrometer based on aberration theory.

PubMed

Wu, Peng; Liu, Sheng; DeHoog, Edward; Schwiegerling, Jim

2009-11-10

In Ref. 1, it was demonstrated that the significant systematic errors of a type of large dynamic range aberrometer are strongly related to the power error (defocus) in the input wavefront. In this paper, a generalized theoretical analysis based on vector aberration theory is presented, and local shift errors of the SH spot pattern as a function of the lenslet position and the local wavefront tilt over the corresponding lenslet are derived. Three special cases, a spherical wavefront, a crossed cylindrical wavefront, and a cylindrical wavefront, are analyzed and the possibly affected Zernike terms in the wavefront reconstruction are investigated. The simulation and experimental results are illustrated to verify the theoretical predictions.

17. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

NASA Technical Reports Server (NTRS)

Hou, Arthur Y.; Zhang, Sara Q.

2004-01-01

Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

18. Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data

NASA Technical Reports Server (NTRS)

Wilson, R. G.

1975-01-01

The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with ones determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.

19. Probability of error analysis for FHSS/CDMA communications in the presence of fading

Wickert, Mark A.; Turcotte, Randy L.

1992-04-01

Expressions are found for the error probability of a slow frequency-hopped spread-spectrum (FHSS) M-ary FSK multiple-access system in the presence of slow-nonselective Rayleigh or single-term Rician fading. The approach is general enough to allow for the consideration of independent power levels; that is to say, the power levels of the interfering signals can be varied with respect to the power level of the desired signal. The exact analysis is carried out for one and two multiple-access interferers using BFSK modulation. The analysis is general enough for the consideration of the near/far problem under the specified channel conditions. Comparisons between the error expressions developed here and previously published upper bounds (Geraniotis and Pursley, 1982) show that, under certain conditions, the previous upper bounds on the error probability may exceed the true error probability by an order of magnitude.

20. Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis

NASA Technical Reports Server (NTRS)

Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.

2014-01-01

The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.

1. Scilab and Maxima Environment: Towards Free Software in Numerical Analysis

ERIC Educational Resources Information Center

Mora, Angel; Galan, Jose Luis; Aguilera, Gabriel; Fernandez, Alvaro; Merida, Enrique; Rodriguez, Pedro

2010-01-01

In this work we will present the ScilabUMA environment we have developed as an alternative to Matlab. This environment connects Scilab (for numerical analysis) and Maxima (for symbolic computations). Furthermore, the developed interface is, in our opinion at least, as powerful as the interface of Matlab. (Contains 3 figures.)

2. Evaluating clinical accuracy of continuous glucose monitoring systems: Continuous Glucose-Error Grid Analysis (CG-EGA).

PubMed

Clarke, William L; Anderson, Stacey; Kovatchev, Boris

2008-08-01

Continuous Glucose Sensors (CGS) generate rich and informative continuous data streams which have the potential to improve the glycemic condition of the patient with diabetes. Such data are critical to the development of closed loop systems for automated glycemic control. Thus the numerical and clinical accuracy of such data must be assured. Although numerical point accuracy of these systems has been described using traditional statistics, there are as yet no requirements for determining and reporting the rate (trend) accuracy of the data generated. In addition, little attention has been paid to the clinical accuracy of these systems. Continuous Glucose-Error Grid Analysis (CG-EGA) is the only method currently available for assessing the clinical accuracy of such data and reporting this accuracy for each of the relevant glycemic ranges: hypoglycemia, euglycemia, and hyperglycemia. This manuscript reviews the development of the original Error Grid Analysis (EGA) and describes its inadequacies when used to determine point accuracy of CGS systems. The development of CG-EGA as a logical extension of EGA for use with CGS is described in detail, and examples of how it can be used to describe the clinical accuracy of several CGS are shown. Information is presented on how to obtain assistance with the use of CG-EGA.

3. A numerical algorithm to determine straightness error, surface roughness, and waviness measured using a fiber optic interferometer

Yildirim, Murat; Okutucu-Özyurt, Tuba; Dursunkaya, Zafer

2016-11-01

Fiber optic interferometry has been used to detect small displacements in diverse applications. Counting the number of fringes in fiber-optic interferometry is challenging due to the external effects induced in dynamic systems. In this paper, a novel interference fringe counting technique is developed to convert the intensity of interference data into displacements in the range of micrometers to millimeters while simultaneously resolving external dynamic effects. This technique consists of filtering the rough experimental data, converting the filtered optical interference data into displacements, and resolving the dynamic effects of the experimental system. Filtering the rough data is performed in time by using the moving average method with a window size of 400 data points. The filtered optical data is further converted into displacement by calculating relative phase differences of each data point compared to local maximum and local minimum points. Next, a linear curve-fit is subtracted from the calculated displacement curve to reveal dynamic effects. The straightness error of the lead screw driven stage, the dynamics of the stepper motor, and the profile of the reflective surfaces are investigated as the external dynamic effects. The straightness error is characterized by a 9th order polynomial function, and the effect of the dynamics of the stepper motor is fitted using a sinusoidal function. The remaining part of the measurement is the effect of roughness and waviness of the reflective surfaces. As explained in the experimental setup part, two fiber-optic probes detect the vertical relative displacements in the range of 1-50 μm, and the encoder probe detects 13.5 mm horizontal displacement. Thus, this technique can detect dynamic displacements spanning three orders of magnitude with sub-micrometer resolution. The current methodology can be utilized in different applications which require measuring straightness error of lead-screw driven stages, large area surface profile of specimens
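The core of the processing chain this record describes, moving-average smoothing followed by subtraction of a linear fit to expose the remaining dynamic effects, can be sketched generically. This is an illustration, not the authors' code; the window size and the test signal are made up.

```python
import math

def moving_average(x, window):
    # Centered moving average; edges use a shrinking window.
    n, h, out = len(x), window // 2, []
    for i in range(n):
        lo, hi = max(0, i - h), min(n, i + h + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def detrend_linear(y):
    # Least-squares line fit y ~ a + b*i; return the residuals.
    n = len(y)
    xs = range(n)
    sx, sy = sum(xs), sum(y)
    sxx = sum(i * i for i in xs)
    sxy = sum(i * v for i, v in zip(xs, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return [v - (a + b * i) for i, v in zip(xs, y)]

# Hypothetical signal: a steady ramp (stage travel) plus a periodic term
# standing in for motor dynamics; detrending exposes the periodic part.
sig = [0.005 * i + 0.2 * math.sin(0.3 * i) for i in range(300)]
residual = detrend_linear(moving_average(sig, 9))
```

In the record's pipeline the residual would then be decomposed further (polynomial straightness error, sinusoidal motor term, surface roughness).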

4. Estimating error cross-correlations in soil moisture data sets using extended collocation analysis

Gruber, A.; Su, C.-H.; Crow, W. T.; Zwieback, S.; Dorigo, W. A.; Wagner, W.

2016-02-01

Global soil moisture records are essential for studying the role of hydrologic processes within the larger earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multisource soil moisture retrievals into a unified data set. However, this requires an appropriate parameterization of the error structures of the underlying data sets. While triple collocation (TC) analysis has been widely recognized as a powerful tool for estimating random error variances of coarse-resolution soil moisture data sets, the estimation of error cross covariances remains an unresolved challenge. Here we propose a method—referred to as extended collocation (EC) analysis—for estimating error cross-correlations by generalizing the TC method to an arbitrary number of data sets and relaxing the therein made assumption of zero error cross-correlation for certain data set combinations. A synthetic experiment shows that EC analysis is able to reliably recover true error cross-correlation levels. Applied to real soil moisture retrievals from Advanced Microwave Scanning Radiometer-EOS (AMSR-E) C-band and X-band observations together with advanced scatterometer (ASCAT) retrievals, modeled data from Global Land Data Assimilation System (GLDAS)-Noah and in situ measurements drawn from the International Soil Moisture Network, EC yields reasonable and strong nonzero error cross-correlations between the two AMSR-E products. Against expectation, nonzero error cross-correlations are also found between ASCAT and AMSR-E. We conclude that the proposed EC method represents an important step toward a fully parameterized error covariance matrix for coarse-resolution soil moisture data sets, which is vital for any rigorous data assimilation framework or data merging scheme.
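For background, classical triple collocation estimates the random error variance of each of three collocated data sets from their pairwise covariances, under exactly the zero error cross-correlation assumption that the EC method in this record relaxes. A minimal sketch on synthetic data (additive zero-mean errors and all noise levels assumed for illustration):

```python
import random

def cov(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

def tc_error_variances(x, y, z):
    # Classical TC: with mutually independent errors, Cov(x,y) etc. all
    # equal the signal variance, so each error variance falls out as
    # sigma_x^2 = Cov(x,x) - Cov(x,y)*Cov(x,z)/Cov(y,z).
    return (cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z),
            cov(y, y) - cov(y, x) * cov(y, z) / cov(x, z),
            cov(z, z) - cov(z, x) * cov(z, y) / cov(x, y))

random.seed(0)
truth = [random.gauss(0.3, 0.1) for _ in range(100000)]  # synthetic soil moisture
x = [t + random.gauss(0, 0.02) for t in truth]           # three "products" with
y = [t + random.gauss(0, 0.03) for t in truth]           # independent errors
z = [t + random.gauss(0, 0.04) for t in truth]
ex, ey, ez = tc_error_variances(x, y, z)                 # ~ 4e-4, 9e-4, 1.6e-3
```

When two of the products share correlated errors (as EC finds for the two AMSR-E retrievals), these estimates become biased, which motivates the extended formulation.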

5. On vertical advection truncation errors in terrain-following numerical models: Comparison to a laboratory model for upwelling over submarine canyons

Allen, S. E.; Dinniman, M. S.; Klinck, J. M.; Gorby, D. D.; Hewett, A. J.; Hickey, B. M.

2003-01-01

Submarine canyons which indent the continental shelf are frequently regions of steep (up to 45°), three-dimensional topography. Recent observations have delineated the flow over several submarine canyons during 2-4 day long upwelling episodes. Thus upwelling episodes over submarine canyons provide an excellent flow regime for evaluating numerical and physical models. Here we compare a physical and numerical model simulation of an upwelling event over a simplified submarine canyon. The numerical model being evaluated is a version of the S-Coordinate Rutgers University Model (SCRUM). Careful matching between the models is necessary for a stringent comparison. Results show a poor comparison for the homogeneous case due to nonhydrostatic effects in the laboratory model. Results for the stratified case are better but show a systematic difference between the numerical results and laboratory results. This difference is shown not to be due to nonhydrostatic effects. Rather, the difference is due to truncation errors in the calculation of the vertical advection of density in the numerical model. The calculation is inaccurate due to the terrain-following coordinates combined with a strong vertical gradient in density, vertical shear in the horizontal velocity and topography with strong curvature.

6. One active debris removal control system design and error analysis

Wang, Weilin; Chen, Lei; Li, Kebo; Lei, Yongjun

2016-11-01

The increasing expansion of debris presents a significant challenge to space safety and sustainability. To address it, a feasible solution is active debris removal, which usually involves a chaser performing autonomous rendezvous with the debris to be removed. In this paper, we explore a mid-range autonomous rendezvous control system based on augmented proportional navigation (APN), establishing a three-dimensional kinematic equation set constructed in a rotating coordinate system. In APN, feedback control is applied in the direction of the line of sight (LOS), so that analytical solutions of the LOS rate and the relative motion can be obtained. To evaluate the effectiveness of the control system, we adopt Zero-Effort-Miss (ZEM) as the performance index, whose uncertainty is directly determined by that of the LOS rate. Accordingly, we apply the covariance analysis (CA) method to analyze the propagation of the LOS rate uncertainty. We find that the accuracy of the control system can be verified even in the presence of uncertainty, and that the CA method is drastically more computationally efficient than the nonlinear Monte-Carlo method. Additionally, further simulation cases are discussed to show the robustness and feasibility of the proposed APN scheme.
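The computational advantage of covariance analysis over Monte-Carlo sampling is easy to see on a toy linear system. The sketch below is a generic scalar example (not the paper's APN dynamics; all constants are made up): one deterministic variance recursion replaces thousands of sampled trajectories.

```python
import random

f, q = 0.95, 0.01   # hypothetical scalar dynamics x' = f*x + w, Var(w) = q
P0, steps = 1.0, 50

# Covariance analysis: propagate the variance with one recursion,
# P <- f^2 * P + q, instead of sampling trajectories.
P = P0
for _ in range(steps):
    P = f * f * P + q

# Monte Carlo: many sampled trajectories of the same system.
random.seed(1)
runs, finals = 20000, []
for _ in range(runs):
    x = random.gauss(0.0, P0 ** 0.5)
    for _ in range(steps):
        x = f * x + random.gauss(0.0, q ** 0.5)
    finals.append(x)
mc_var = sum(v * v for v in finals) / runs   # agrees with P, at ~1000x the cost
```

The CA recursion costs `steps` multiplications; the Monte-Carlo estimate costs `runs * steps` random draws for the same variance, which is the efficiency gap the record reports.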

7. Sedimentation analysis of noninteracting and self-associating solutes using numerical solutions to the Lamm equation.

PubMed Central

Schuck, P

1998-01-01

The potential of using the Lamm equation in the analysis of hydrodynamic shape and gross conformation of proteins and reversibly formed protein complexes from analytical ultracentrifugation data was investigated. An efficient numerical solution of the Lamm equation for noninteracting and rapidly self-associating proteins by using combined finite-element and moving grid techniques is described. It has been implemented for noninteracting solutes and monomer-dimer and monomer-trimer equilibria. To predict its utility, the error surface of a nonlinear regression of simulated sedimentation profiles was explored. Error contour maps were calculated for conventional independent and global analyses of experiments with noninteracting solutes and with monomer-dimer systems at different solution column heights, loading concentrations, and centrifugal fields. It was found that the rotor speed is the major determinant for the shape of the error surface, and that global analysis of different experiments can allow substantially improved characterization of the solutes. We suggest that the global analysis of the approach to equilibrium in a short-column sedimentation equilibrium experiment followed by a high-speed short-column sedimentation velocity experiment can result in sedimentation and diffusion coefficients of very high statistical accuracy. In addition, in the case of a protein in rapid monomer-dimer equilibrium, this configuration was found to reveal the most precise estimate of the association constant. PMID:9726952

8. Kramers-Krönig analysis of modulated reflectance data: investigation of errors.

PubMed

Balzarotti, A; Colavita, E; Gentile, S; Rosei, R

1975-10-01

The errors introduced in Δε2 spectra by Kramers-Krönig analysis of modulated reflectivity data are investigated using an analytical model. It is found that the energy position of singularities is always reproduced with good accuracy even if the experimental spectrum of ΔR/R is cut barely above the last structure of interest. This procedure is instead completely insufficient when a quantitative line shape analysis is required. In such cases data up to very high energy are required for a meaningful analysis. Errors due to other sources, like baseline shifts or inaccurate static optical constants, are also investigated.

9. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

PubMed Central

Sun, Ting; Xing, Fei; You, Zheng

2013-01-01

The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

10. Learner Errors and Misconceptions in Elementary Analysis: A Case Study of a Grade 12 Class in South Africa

ERIC Educational Resources Information Center

Luneta, Kakoma; Makonye, Paul J.

2010-01-01

The paper focuses on analysing grade 12 learner errors and the misconceptions in calculus at a secondary school in Limpopo Province, South Africa. As part of the analysis the paper outlines the nature of mathematics errors and misconceptions. Coding of learners' errors was done through the lens of a typological framework. The analysis showed that…

11. Dynamic error analysis based on flexible shaft of wind turbine gearbox

Liu, H.; Zhao, R. Z.

2013-12-01

In view of the asynchrony between excitation and response in the transmission system, the system dynamic error caused by the sun axis suspended in the gearbox of a 1.5 MW wind turbine was studied, taking the flexibility of components into account. First, a numerical recursive model was established using D'Alembert's principle; MATLAB was then used to simulate and analyze the model, which was verified against the equivalent system. The results show that the dynamic error is related not only to the inherent parameters of the system but also to the external load imposed on it. The magnitude of the dynamic error can be represented as a linear superposition of a synchronization error component and a harmonic vibration component, and the latter can cause random fluctuations of the gears. However, the dynamic error can be partly compensated if the stiffness coefficient of the sun axis is increased, which improves the stability and accuracy of the transmission system.

12. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

PubMed

Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

2014-01-10

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
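As a toy illustration of why multiplicative errors matter for DEM-based volume estimates (this is not the paper's derivation; the grid and error level are made up): for heights observed as h_i(1+e_i) with Var(e_i) = σ², the variance of the summed volume is cell² · σ² · Σ h_i², so tall cells dominate the uncertainty, unlike the additive-error case where every cell contributes equally.

```python
import random

random.seed(2)
cell = 1.0                          # hypothetical cell area (m^2)
heights = [5.0, 10.0, 20.0, 40.0]   # hypothetical true DEM heights (m)
sigma = 0.05                        # 5% multiplicative error

# Predicted volume variance under the multiplicative model:
# Var(V) = cell^2 * sigma^2 * sum(h_i^2).
pred_var = cell ** 2 * sigma ** 2 * sum(h * h for h in heights)

# Monte-Carlo check: simulate LiDAR-type observations h_i * (1 + e_i).
runs, vols = 50000, []
for _ in range(runs):
    vols.append(cell * sum(h * (1 + random.gauss(0, sigma)) for h in heights))
mean_v = sum(vols) / runs
emp_var = sum((v - mean_v) ** 2 for v in vols) / runs
```

The sample mean stays near the true volume (75 m³ here), but the spread is set by the squared heights, which is why treating LiDAR errors as additive misstates the DEM volume uncertainty.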

13. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

PubMed Central

Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

2014-01-01

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

15. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

Sun, Hong; Wu, Qian-zhong

2013-09-01

In order to improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are filtered repeatedly with a Kalman filter. An ARM microcontroller drives the servo motor under a fully closed-loop fuzzy PID control algorithm, with lead-compensation and feed-forward links added to reduce the response lag to angle inputs: feed-forward allows the output to follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) are used to monitor the servo motor state in real time; the module gathers video signals and transmits them to the host computer, which displays the motor's running state in the Visual Basic 6.0 window. A detailed analysis of the main error sources is also given: quantitative analysis of the errors from the bandwidth and the gyro sensor makes the proportion of each error in the total error explicit and thereby helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
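The AR-model-plus-Kalman-filter idea for gyro random drift can be sketched generically. The sketch below is not the authors' implementation: it assumes an AR(1) drift model with made-up noise levels, filters simulated noisy gyro readings with a matched scalar Kalman filter, and compares mean-squared errors against the true drift.

```python
import random

random.seed(3)
phi, q, r = 0.99, 1e-4, 1e-2   # assumed AR(1) coefficient, process and measurement
n = 2000                       # noise variances, and sample count (all hypothetical)

# Simulate drift d_k = phi*d_{k-1} + w_k and noisy readings z_k = d_k + v_k.
d, drift, meas = 0.0, [], []
for _ in range(n):
    d = phi * d + random.gauss(0, q ** 0.5)
    drift.append(d)
    meas.append(d + random.gauss(0, r ** 0.5))

# Scalar Kalman filter matched to the AR(1) model.
x, P, est = 0.0, 1.0, []
for z in meas:
    x, P = phi * x, phi * phi * P + q        # predict
    K = P / (P + r)                          # Kalman gain
    x, P = x + K * (z - x), (1 - K) * P      # update
    est.append(x)

mse_raw = sum((z - t) ** 2 for z, t in zip(meas, drift)) / n  # ~ r
mse_kf = sum((e - t) ** 2 for e, t in zip(est, drift)) / n    # much smaller
```

The filtered estimate tracks the slowly varying drift far more closely than the raw readings do, which is the effect the record relies on before closing the servo loop.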

16. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

2016-06-01

The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to achieve datum unification and high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of a geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model can well describe the law of low frequency error variation, and that the uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is markedly improved after the step-wise calibration.

17. The design and analysis of single flank transmission error tester for loaded gears

NASA Technical Reports Server (NTRS)

Bassett, Duane E.; Houser, Donald R.

1987-01-01

To strengthen the understanding of gear transmission error and to verify mathematical models that predict it, a test stand that will measure the transmission error of gear pairs under design loads has been investigated. While most transmission error testers have been used to test gear pairs under unloaded conditions, the goal of this report was to design and perform dynamic analysis of a unique tester capable of measuring the transmission error of gears under load. This test stand will be able to continuously load a gear pair at torques up to 16,000 in-lb at shaft speeds from 0 to 5 rpm. Error measurement will be accomplished with high-resolution optical encoders and the accompanying signal processing unit from an existing unloaded transmission error tester. Input power to the test gear box will be supplied by a dc torque motor, while the load will be applied with a similar torque motor. A dual-input, dual-output control system will regulate the speed and torque of the system. Analysis of this control system's accuracy and dynamic response determined that proportional-plus-derivative speed control is needed to provide the precisely constant torque necessary for error-free measurement.

18. Theoretical Implications of an Error Analysis of Second Language Phonology Production.

ERIC Educational Resources Information Center

Altenberg, Evelyn P.; Vago, Robert M.

1983-01-01

Investigates second language phonology (English) of two native Hungarian speakers. Finds evidence for phonetic and phonological transfer but argues that there are limitations on what can be transferred. Contrasts error analysis approach with autonomous system analysis and concludes that each provides unique information and should be used together…

19. Utilizing spectral analysis of coastal discharge computed by a numerical model to determine boundary influence

USGS Publications Warehouse

Swain, E.D.; Langevin, C.D.; Wang, J.D.

2008-01-01

In the present study, a spectral analysis was applied to field data and to a numerical model of the southeastern Everglades and northeastern Florida Bay, computing and comparing the power spectra of simulated and measured flows at the primary coastal outflow creek. Four dominant power frequencies, corresponding to the S1, S2, M2, and O1 tidal periods, were apparent in the measured outflows. The model reproduced the magnitudes of the S1 and S2 components better than those of the M2 and O1 components. To determine the cause of the relatively poor representation of the M2 and O1 components, we created a steady-base version of the model by setting the time-varying forcing functions (rainfall, evapotranspiration, wind, and inland and tidal boundary conditions) to averaged values. The steady-base model was then modified to produce multiple simulations with only one time-varying forcing function per model run. These experimental simulations approximated the individual effects of each forcing function on the system. The spectral analysis of the experimental simulations indicated that temporal fluctuations in rainfall, evapotranspiration, and inland water-level and discharge boundaries have negligible effects on coastal creek flow fluctuations with periods of less than 48 hours. The tidal boundary appears to be the only forcing function inducing the M2 and O1 flow fluctuations in the creek. An analytical formulation was developed relating the errors induced by the tidal water-level gauge resolution to the errors in the simulated discharge fluctuations at the coastal creek. This formulation yielded a discharge-fluctuation error similar in magnitude to the errors observed when comparing the spectra of the simulated and measured discharge. The dominant source of error in the simulation of discharge fluctuation magnitude is most likely the resolution of the water-level gauges used to create the model boundary.
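
The kind of spectral check described above can be illustrated on a toy time series (hypothetical code, not the study's; the sampling interval and amplitudes are invented, while 12.42 h and 25.82 h are the standard M2 and O1 tidal periods):

```python
import math

def power_at(signal, dt, period_hours):
    """Discrete Fourier power of `signal` at one frequency (period in hours)."""
    n = len(signal)
    f = 1.0 / (period_hours * 3600.0)
    re = sum(s * math.cos(2 * math.pi * f * i * dt) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * i * dt) for i, s in enumerate(signal))
    return (re * re + im * im) / n

dt = 900.0            # 15-minute samples, in seconds
n = 4 * 24 * 30       # one month of data
t = [i * dt for i in range(n)]
# Synthetic "discharge": a strong M2 (12.42 h) and a weaker O1 (25.82 h) signal.
flow = [2.0 * math.sin(2 * math.pi * ti / (12.42 * 3600))
        + 0.5 * math.sin(2 * math.pi * ti / (25.82 * 3600)) for ti in t]

p_m2 = power_at(flow, dt, 12.42)
p_o1 = power_at(flow, dt, 25.82)
p_off = power_at(flow, dt, 7.0)  # off-peak control frequency
# The power spectrum ranks the constituents by their contribution to the flow.
```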

20. Numerical analysis of the big bounce in loop quantum cosmology

SciTech Connect

Laguna, Pablo

2007-01-15

Loop quantum cosmology (LQC) homogeneous models with a massless scalar field show that the big-bang singularity can be replaced by a big quantum bounce. To gain further insight on the nature of this bounce, we study the semidiscrete loop quantum gravity Hamiltonian constraint equation from the point of view of numerical analysis. For illustration purposes, we establish a numerical analogy between the quantum bounces and reflections in finite difference discretizations of wave equations triggered by the use of nonuniform grids or, equivalently, reflections found when solving numerically wave equations with varying coefficients. We show that the bounce is closely related to the method for the temporal update of the system and demonstrate that explicit time-updates in general yield bounces. Finally, we present an example of an implicit time-update devoid of bounces and show back-in-time, deterministic evolutions that reach and partially jump over the big-bang singularity.

1. Numerical model and analysis of transistors with polysilicon emitters

Yu, Z.

With the advent of Very Large Scale Integration (VLSI) technology, innovative bipolar devices with shallow junctions and high performance are being developed both for silicon and for compound semiconductor materials. In composite structures such as the HBJT (heterojunction bipolar junction transistor), the device characteristics are controlled not only by the doping profile but also by the composition of the structure. A complete physical and numerical model was developed to handle carrier transport in such composite structures. An analytical approach (the introduction of an effective recombination velocity) to analyzing carrier transport in the emitter of the bipolar transistor is discussed. Both analytical and numerical methods are then applied to the analysis of the device characteristics of transistors with polysilicon emitters. Good agreement between simulations and experimental results is achieved, and a regime of carrier distribution in the base space-charge region is revealed. The numerical implementation of the model, a general-purpose, one-dimensional device simulation program (SEDAN), is briefly discussed.

2. Numerical Analysis of Deflections of Multi-Layered Beams

2015-03-01

The paper concerns the rheological bending problem of wooden beams reinforced with embedded composite bars. A theoretical model of the behaviour of a multi-layered beam is presented. The component materials of this beam are described with equations for the linear viscoelastic five-parameter rheological model. Two numerical analysis methods for the long-term response of wood structures are presented. The first method has been developed with SCILAB software. The second one has been developed with the finite element calculation software ABAQUS and user subroutine UMAT. Laboratory investigations were conducted on sample beams of natural dimensions in order to validate the proposed theoretical model and verify numerical simulations. Good agreement between experimental measurements and numerical results is observed.

3. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

PubMed Central

Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

2016-01-01

This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

4. Error analysis in post linac to driver linac transport beam line of RAON

Kim, Chanmi; Kim, Eun-San

2016-07-01

We investigated the effects of magnet errors in the beam transport line connecting the post linac to the driver linac (P2DT) in the Rare Isotope Accelerator in Korea (RAON). The P2DT beam line is bent by 180 degrees to send the radioactive Isotope Separation On-Line (ISOL) beams accelerated in Linac-3 to Linac-2. This beam line transports beams with the multiple charge states 132Sn(45+), 132Sn(46+) and 132Sn(47+). The P2DT beam line includes 42 quadrupole, 4 dipole and 10 sextupole magnets. We evaluate the effects of errors on the beam trajectory by using the TRACK code, including the translational and rotational errors of the quadrupole, dipole and sextupole magnets in the beam line. The purpose of this error analysis is to reduce the rate of beam loss in the P2DT beam line. The distorted beam trajectories can be corrected by using six correctors and seven monitors.

5. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip

Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

2016-09-01

Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. To evaluate the reliability and soft errors of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as the failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Through the qualitative and quantitative fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated.

6. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

2014-07-01

Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used for high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain into a single link. Because such an error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including those of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Atlases are then depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and sensitive areas, where the pose errors of the moving platform are extremely sensitive to the geometric errors, are also located. By taking into account error factors that are generally neglected in existing modeling methods, the proposed method thoroughly discloses the process of error transmission and enhances the efficacy of accuracy design and calibration.

7. Integrated numerical methods for hypersonic aircraft cooling systems analysis

NASA Technical Reports Server (NTRS)

Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.

1992-01-01

Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general-purpose finite difference thermal analysis code is used to determine the areas that must be cooled. Complex cooling networks of series and parallel flow can be analyzed with a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady-state solution is obtained with a successive-point iterative method, and the transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.

8. Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables

NASA Technical Reports Server (NTRS)

Fenyes, Peter A.; Lust, Robert V.

1989-01-01

Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations for the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods to complex finite element formulations and to facilitate their implementation in structural optimization programs built on general-purpose finite element analysis codes, the semi-analytic method was developed. In this method the matrix dK/db_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method depends on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. Here, the accuracy of the semi-analytic method is investigated. A general framework is developed for the error analysis, and it is shown analytically that the errors in the method are entirely accounted for by errors in dK/db_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
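
The semi-analytic idea, and its sensitivity to the finite difference step, can be shown on a scalar toy problem (a hypothetical example, not the paper's test case):

```python
# For a scalar "stiffness" k(b) with k(b) * u = f, the displacement derivative is
#     du/db = -k(b)**-1 * dk/db * u,
# and the semi-analytic method replaces dk/db with a forward difference.

def semi_analytic_derivative(k, b, f, h):
    u = f / k(b)
    dk_db = (k(b + h) - k(b)) / h  # finite-difference stiffness derivative
    return -dk_db * u / k(b)

def k(b):  # invented stiffness as a function of the design variable
    return b ** 2

b, f = 2.0, 10.0
u = f / k(b)
exact = -2 * b * u / k(b)  # analytic du/db, since dk/db = 2b

err_coarse = abs(semi_analytic_derivative(k, b, f, 1e-1) - exact)
err_fine = abs(semi_analytic_derivative(k, b, f, 1e-5) - exact)
# The accuracy is governed by the finite-difference step: the smaller step
# (before round-off dominates) gives a much more accurate derivative.
```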

9. [Systemic error analysis as a key element of clinical risk management].

PubMed

Bartz, Hans-Jürgen

2015-01-01

Systemic error analysis plays a key role in clinical risk management. This includes all clinical and administrative activities which identify, assess and reduce the risks of damage to patients and to the organization. The clinical risk management is an integral part of quality management. This is also the policy of the Federal Joint Committee (Gemeinsamer Bundesausschuss, G-BA) on the fundamental requirements of an internal quality management. The goal of all activities is to improve the quality of medical treatment and patient safety. Primarily this is done by a systemic analysis of incidents and errors. A results-oriented systemic error analysis needs an open and unprejudiced corporate culture. Errors have to be transparent and measures to improve processes have to be taken. Disciplinary action on staff must not be part of the process. If these targets are met, errors and incidents can be analyzed and the process can create added value to the organization. There are some proven instruments to achieve that. This paper discusses in detail the error and risk analysis (ERA), which is frequently used in German healthcare organizations. The ERA goes far beyond the detection of problems due to faulty procedures. It focuses on the analysis of the following contributory factors: patient factors, task and process factors, individual factors, team factors, occupational and environmental factors, psychological factors, organizational and management factors and institutional context. Organizations can only learn from mistakes by analyzing these factors systemically and developing appropriate corrective actions. This article describes the fundamentals and implementation of the method at the University Medical Center Hamburg-Eppendorf.

11. Recent advances in numerical analysis of structural eigenvalue problems

NASA Technical Reports Server (NTRS)

Gupta, K. K.

1973-01-01

A wide range of eigenvalue problems encountered in practical structural engineering analyses is defined, in which the structures are assumed to be discretized by any suitable technique such as the finite-element method. A review of the usual numerical procedures for the solution of such eigenvalue problems is presented and is followed by an extensive account of recently developed eigenproblem solution procedures. Particular emphasis is placed on the new numerical algorithms and associated computer programs based on the Sturm sequence method. Eigenvalue algorithms developed for efficient solution of natural frequency and buckling problems of structures are presented, as well as some eigenvalue procedures formulated in connection with the solution of quadratic matrix equations associated with free vibration analysis of structures. A new algorithm is described for natural frequency analysis of damped structural systems.

12. An analysis of error patterns in children's backward digit recall in noise.

PubMed

Osman, Homira; Sullivan, Jessica R

2015-01-01

The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for the decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and in multitalker babble at -5 dB signal-to-noise ratio (SNR) were analyzed. The errors were classified into two categories: item errors (digits that were not presented in a list were repeated) and order errors (correct digits were repeated but in an incorrect order). Fifty-five children with normal hearing, aged between 7 and 10 years, were included. Repeated-measures analysis of variance (RM-ANOVA) revealed main effects for error type and digit span length. A listening-condition interaction showed that order errors occurred more frequently than item errors in the degraded listening condition compared with quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. The decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors were related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise.
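
The item/order scoring described above can be sketched as follows (hypothetical scoring code, not the study's):

```python
def classify_errors(target, response):
    """Count item errors (digits not in the target list) and order errors
    (correct digits recalled in the wrong position)."""
    item = sum(1 for d in response if d not in target)
    order = sum(1 for i, d in enumerate(response)
                if d in target and i < len(target) and target[i] != d)
    return {"item": item, "order": order}

# Target is the reversed presentation list; the child recalls "3 9 5",
# so the right digits appear but two of them are in the wrong positions.
errors = classify_errors([3, 5, 9], [3, 9, 5])  # {"item": 0, "order": 2}
```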

13. NGS-eval: NGS Error analysis and novel sequence VAriant detection tooL.

PubMed

May, Ali; Abeln, Sanne; Buijs, Mark J; Heringa, Jaap; Crielaard, Wim; Brandt, Bernd W

2015-07-01

Massively parallel sequencing of microbial genetic markers (MGMs) is used to uncover the species composition in a multitude of ecological niches. These sequencing runs often contain a sample with known composition that can be used to evaluate the sequencing quality or to detect novel sequence variants. With NGS-eval, the reads from such (mock) samples can be used to (i) explore the differences between the reads and their references and to (ii) estimate the sequencing error rate. This tool maps these reads to references and calculates as well as visualizes the different types of sequencing errors. Clearly, sequencing errors can only be accurately calculated if the reference sequences are correct. However, even with known strains, it is not straightforward to select the correct references from databases. We previously analysed a pyrosequencing dataset from a mock sample to estimate sequencing error rates and detected sequence variants in our mock community, allowing us to obtain an accurate error estimation. Here, we demonstrate the variant detection and error analysis capability of NGS-eval with Illumina MiSeq reads from the same mock community. While tailored towards the field of metagenomics, this server can be used for any type of MGM-based reads. NGS-eval is available at http://www.ibi.vu.nl/programs/ngsevalwww/.

14. Analysis of spelling error patterns of individuals with complex communication needs and physical impairments.

PubMed

Hart, Pamela; Scherz, Julie; Apel, Kenn; Hodson, Barbara

2007-03-01

The purpose of this study was to examine the relationships between patterns of spelling errors and related linguistic abilities of four persons with complex communication needs and physical impairments, compared with younger individuals without disabilities matched by spelling age. All participants completed a variety of spelling and linguistic tasks to determine overall spelling age, patterns of spelling errors, and abilities across phonemic, orthographic, and morphological awareness. Performance of the spelling-age-matched pairs was similar across most of the phonemic, orthographic, and morphological awareness tasks. Analysis of the participants' spelling errors, however, revealed different error patterns for three of the spelling-age-matched pairs. Within these three pairs, the participants with complex communication needs and physical impairments made most of their spelling errors because of phonemic awareness difficulties, while most of the errors made by the participants without disabilities were due to orthographic difficulties. The results of this study lend support to the findings of previous investigations that reported the difficulties that individuals with complex communication needs and physical impairments evidence when applying phonemic knowledge to literacy tasks. PMID:17364485

15. Analysis and modeling of radiometric error caused by imaging blur in optical remote sensing systems

Xie, Xufen; Zhang, Yuncui; Wang, Hongyuan; Zhang, Wei

2016-07-01

Imaging blur changes the digital output values of imaging systems, leading to radiometric errors when the system is used for measurement. In this paper, we focus on the radiometric error due to imaging blur in remote sensing imaging systems. First, in accordance with the radiometric response calibration of imaging systems, we provide a theoretical analysis of the evaluation standard for radiometric errors caused by imaging blur. Then, we build a radiometric error model for imaging blur based on the natural stochastic fractal characteristics of remote sensing images. Finally, we verify the model by simulations and physical defocus experiments. The simulation results show that the modeling estimate closely approaches the simulation computation: the maximum difference in relative MSE (mean squared error) between the two is 1.6%. The physical experiments show that the maximum difference in relative MSE between the experimental results and the modeling estimate is only 1.29% under the experimental conditions. Simulations and experiments demonstrate that the proposed model is correct and can be used to estimate the radiometric error caused by imaging blur in remote sensing images. This research is of great importance for the evaluation and application of radiometric measurement systems.

17. EAC: A program for the error analysis of STAGS results for plates

NASA Technical Reports Server (NTRS)

Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.

1989-01-01

A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example application of the code is presented, and instructions on its usage on the Cyber and VAX machines are provided.

18. Errors in reduction methods. [in dynamic analysis of multi-degree of freedom systems

NASA Technical Reports Server (NTRS)

Utku, S.; Salama, M.; Clemente, J. L. M.

1985-01-01

A mathematical basis is given for comparing the relative merits of various techniques used to reduce the order of large linear and nonlinear dynamics problems during their numerical integration. In techniques such as Guyan-Irons, path derivatives, selected eigenvectors, Ritz vectors, etc., the nth-order initial value problem (ẏ = f(y) for t > 0, y(0) given) is typically reduced to the mth-order problem (ż = g(z) for t > 0, z(0) given), with m much less than n, by the transformation y = Pz, where P changes from technique to technique. This paper gives an explicit approximate expression for the reduction error e_i in terms of P and the Jacobian of f. It is shown that: (a) reduction techniques are more accurate when the time rate of change of the response y is relatively small; (b) the change in response between two successive stations contributes to the errors at future stations after being transformed by a filtering matrix H, defined in terms of P; (c) the error committed at a station propagates to future stations through a mixing and scaling matrix G, defined in terms of P, the Jacobian of f, and the time increment h. The paper discusses the conditions under which the reduction errors may be minimized and gives guidelines for selecting the reduction basis vectors, i.e., the columns of P.
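
The reduction y = Pz can be illustrated on a small linear system (a toy sketch, not the paper's formulation; the modal rates and the basis are invented):

```python
# Integrate a full 3-state linear system and a 2-state reduction that keeps
# only the slow modes, then measure the reduction error.

def euler(deriv, y0, h, steps):
    y = list(y0)
    for _ in range(steps):
        d = deriv(y)
        y = [yi + h * di for yi, di in zip(y, d)]
    return y

rates = [-1.0, -2.0, -50.0]  # the third mode is fast and weakly excited

def full(y):  # full right-hand side f(y): three decoupled modes
    return [r * yi for r, yi in zip(rates, y)]

def reduced(z):  # reduced right-hand side g(z) = P^T f(P z) for P = [e1, e2]
    return [rates[0] * z[0], rates[1] * z[1]]

y0 = [1.0, 1.0, 0.01]
z0 = [1.0, 1.0]

h, steps = 1e-3, 1000  # integrate to t = 1
y_full = euler(full, y0, h, steps)
z_red = euler(reduced, z0, h, steps)
y_rec = [z_red[0], z_red[1], 0.0]  # reconstructed response y ≈ P z

err = max(abs(a - b) for a, b in zip(y_full, y_rec))
# The reduction error stays below the neglected mode's contribution (0.01),
# consistent with the claim that accuracy hinges on what the basis P captures.
```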

19. Numerical analysis on thermal drilling of aluminum metal matrix composite

Hynes, N. Rajesh Jesudoss; Maheshwaran, M. V.

2016-05-01

The work-material deformation is very large and both the tool and workpiece temperatures are high in thermal drilling. Modeling is a necessary tool to understand the material flow, temperatures, stresses, and strains, which are difficult to measure experimentally during thermal drilling. A numerical analysis of the thermal drilling of an aluminum metal matrix composite has been carried out in the present work. In this analysis the heat flux at different stages is calculated; the calculated heat flux is applied on the surface of the workpiece and the thermal distribution is predicted at different stages of the thermal drilling process.

20. Proper handling of random errors and distortions in astronomical data analysis

Cardiel, Nicolas; Gorgas, Javier; Gallego, Jesús; Serrano, Angel; Zamorano, Jaime; Garcia-Vargas, Maria-Luisa; Gomez-Cambronero, Pedro; Filgueira, Jose M.

2002-12-01

The aim of a data reduction process is to minimize the influence of data acquisition imperfections on the estimation of the desired astronomical quantity. For this purpose, one must perform appropriate manipulations with data and calibration frames. In addition, random-error frames (computed from first principles: expected statistical distribution of photo-electrons, detector gain, readout-noise, etc.), corresponding to the raw-data frames, can also be properly reduced. This parallel treatment of data and errors guarantees the correct propagation of random errors due to the arithmetic manipulations throughout the reduction procedure. However, due to the unavoidable fact that the information collected by detectors is physically sampled, this approach collides with a major problem: errors are correlated when applying image manipulations involving non-integer pixel shifts of data. Since this is actually the case for many common reduction steps (wavelength calibration into a linear scale, image rectification when correcting for geometric distortions,...), we discuss the benefits of considering the data reduction as the full characterization of the raw-data frames, but avoiding, as far as possible, the arithmetic manipulation of that data until the final measure of the image properties with a scientific meaning for the astronomer. For this reason, it is essential that the software tools employed for the analysis of the data perform their work using that characterization. In that sense, the real reduction of the data should be performed during the analysis, and not before, in order to guarantee the proper treatment of errors.
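The parallel treatment of data and error frames can be sketched as follows; the flat-field division and the frame values are our simplification of the idea, not the authors' pipeline.

```python
import numpy as np

# Carry a variance frame alongside each data frame and update it by
# first-order error propagation at every arithmetic step of the reduction.
rng = np.random.default_rng(1)
data = rng.normal(100.0, 1.0, size=(4, 4))   # raw-data frame
var = np.full(data.shape, 1.0)               # its random-error (variance) frame
flat = rng.normal(2.0, 0.1, size=(4, 4))     # flat-field frame
flat_var = np.full(flat.shape, 0.01)

# r = data / flat  =>  var(r) ~ var(data)/flat^2 + data^2 * var(flat)/flat^4
r = data / flat
r_var = var / flat**2 + (data**2 / flat**4) * flat_var

# Monte Carlo check of one pixel against the propagated variance
d = rng.normal(data[0, 0], 1.0, 20000)
f = rng.normal(flat[0, 0], 0.1, 20000)
mc_var = float(np.var(d / f))
print(mc_var, float(r_var[0, 0]))
```

Note that this first-order bookkeeping is exactly what breaks down once non-integer pixel shifts correlate neighbouring pixels, which is the paper's motivation for deferring such manipulations to the analysis stage.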

1. A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers

NASA Technical Reports Server (NTRS)

Green, R. N.; Hoffman, L. H.; Young, G. R.

1972-01-01

A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.
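The Monte Carlo approach described above can be illustrated with a deliberately simple nonlinear map standing in for the finite-burn dynamics; the covariance values and the toy "maneuver" are ours, not the program's.

```python
import numpy as np

# Sample off-nominal initial states from a covariance matrix, push each
# through a nonlinear propagation, and estimate the final-error covariance.
rng = np.random.default_rng(2)
P0 = np.diag([1e-2, 1e-4])             # initial covariance of [r, v] errors
x_nom = np.array([1.0, 1.0])           # nominal initial state

def propagate(x):
    r, v = x
    return np.array([r * np.cos(v), r * np.sin(v)])   # toy nonlinear final state

samples = x_nom + rng.multivariate_normal(np.zeros(2), P0, size=5000)
finals = np.array([propagate(x) for x in samples])
Pf = np.cov(finals.T)                  # Monte Carlo estimate of final covariance
print(Pf)
```

The same loop with targeting inserted between sampling and propagation is, in outline, what relates the two input covariance matrices to the final-orbit errors.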

2. The CarbonSat Earth Explorer 8 candidate mission: Error analysis for carbon dioxide and methane

Buchwitz, Michael; Bovensmann, Heinrich; Reuter, Maximilian; Gerilowski, Konstantin; Meijer, Yasjka; Sierk, Bernd; Caron, Jerome; Loescher, Armin; Ingmann, Paul; Burrows, John P.

2015-04-01

CarbonSat is one of two candidate missions for ESA's Earth Explorer 8 (EE8) satellite to be launched around 2022. The main goal of CarbonSat is to advance our knowledge on the natural and man-made sources and sinks of the two most important anthropogenic greenhouse gases (GHGs) carbon dioxide (CO2) and methane (CH4) on various temporal and spatial scales (e.g., regional, city and point source scale), as well as related climate feedbacks. CarbonSat will be the first satellite mission optimised to detect emission hot spots of CO2 (e.g., cities, industrialised areas, power plants) and CH4 (e.g., oil and gas fields) and to quantify their emissions. Furthermore, CarbonSat will deliver a number of important by-products such as Vegetation Chlorophyll Fluorescence (VCF, also called Solar Induced Fluorescence (SIF)) at 755 nm. These applications require appropriate retrieval algorithms which are currently being optimized and used for error analysis. The status of this error analysis will be presented based on the latest version of the CO2 and CH4 retrieval algorithm and taking the current instrument specification into account. An overview will be presented focusing on nadir observations over land. Focus will be on specific issues such as errors of the CO2 and CH4 products due to residual polarization related errors and errors related to inhomogeneous ground scenes.

3. On the relationship between anxiety and error monitoring: a meta-analysis and conceptual framework.

PubMed

Moser, Jason S; Moran, Tim P; Schroder, Hans S; Donnellan, M Brent; Yeung, Nick

2013-01-01

Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, "small-to-medium" relationship with enhanced ERN (r = -0.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = -0.35) than those utilizing other measures of anxiety (r = -0.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur.

4. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

NASA Technical Reports Server (NTRS)

Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

2006-01-01

We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because the baseline length and baseline orientation evolve spatially and temporally due to orbital dynamics, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations of height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and an X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.
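The height sensitivity at the core of such a flow-down analysis can be sketched with the standard single-pass InSAR relation; the slant range, look angle, and phase noise below are assumed for illustration, not taken from the paper.

```python
import math

# Textbook InSAR height sensitivity: dh = lambda * R * sin(theta) / (2 pi B) * dphi
lam = 0.031                  # X-band wavelength [m]
R = 600e3                    # slant range [m], assumed for a ~514 km orbit
theta = math.radians(35.0)   # look angle, assumed
B = 300.0                    # baseline separation [m] (from the abstract)
sigma_phi = 0.5              # interferometric phase noise [rad], assumed

sigma_h = lam * R * math.sin(theta) / (2.0 * math.pi * B) * sigma_phi
print(sigma_h)               # height error [m] from phase noise alone
```

A full flow-down model sums many such terms (baseline knowledge, media delays, radar noise) through their individual sensitivity factors.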

5. On the relationship between anxiety and error monitoring: a meta-analysis and conceptual framework

PubMed Central

Moser, Jason S.; Moran, Tim P.; Schroder, Hans S.; Donnellan, M. Brent; Yeung, Nick

2013-01-01

Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, “small-to-medium” relationship with enhanced ERN (r = −0.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = −0.35) than those utilizing other measures of anxiety (r = −0.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur. PMID:23966928

6. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

ERIC Educational Resources Information Center

Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

2014-01-01

This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

7. Advanced GIS Exercise: Performing Error Analysis in ArcGIS ModelBuilder

ERIC Educational Resources Information Center

Hall, Steven T.; Post, Christopher J.

2009-01-01

Knowledge of Geographic Information Systems is quickly becoming an integral part of the natural resource professionals' skill set. With the growing need of professionals with these skills, we created an advanced geographic information systems (GIS) exercise for students at Clemson University to introduce them to the concept of error analysis,…

8. Formulation and error analysis for a generalized image point correspondence algorithm

NASA Technical Reports Server (NTRS)

Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar

1992-01-01

A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.

9. An Error Analysis in Division Problems in Fractions Posed by Pre-Service Elementary Mathematics Teachers

ERIC Educational Resources Information Center

Isik, Cemalettin; Kar, Tugrul

2012-01-01

The present study aimed to make an error analysis in the problems posed by pre-service elementary mathematics teachers about fractional division operation. It was carried out with 64 pre-service teachers studying in their final year in the Department of Mathematics Teaching in an eastern university during the spring semester of academic year…

10. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

SciTech Connect

Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

2008-11-20

Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L^2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
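A minimal 1-D collocation sketch (not the ME-PCM code): the "PDE" is u' = -k u, u(0) = 1, with random decay rate k ~ Uniform(0, 1); solve at Gauss-Legendre nodes and form the mean as a quadrature sum.

```python
import numpy as np

# Probabilistic collocation in one random dimension with a 5-point Gauss rule.
nodes, weights = np.polynomial.legendre.leggauss(5)  # rule on [-1, 1]
k = 0.5 * (nodes + 1.0)                              # map nodes to [0, 1]
w = 0.5 * weights                                    # weights for the uniform density
u = np.exp(-k * 1.0)                                 # per-node solution at t = 1
mean_pc = float(np.sum(w * u))
mean_exact = 1.0 - np.exp(-1.0)                      # integral of e^{-k} over [0, 1]
print(abs(mean_pc - mean_exact))
```

With a smooth dependence on the random input, five nodes already reach near machine precision; this is the "degree of exactness" effect the paper proves governs the convergence rate as elements are refined.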

11. Error analysis of marker-based object localization using a single-plane XRII

SciTech Connect

Habets, Damiaan F.; Pollmann, Steven I.; Yuan, Xunhua; Peters, Terry M.; Holdsworth, David W.

2009-01-15

The role of imaging and image guidance is increasing in surgery and therapy, including treatment planning and follow-up. Fluoroscopy is used for two-dimensional (2D) guidance or localization; however, many procedures would benefit from three-dimensional (3D) guidance or localization. Three-dimensional computed tomography (CT) using a C-arm mounted x-ray image intensifier (XRII) can provide high-quality 3D images; however, patient dose and the required acquisition time restrict the number of 3D images that can be obtained. C-arm based 3D CT is therefore limited in applications for x-ray based image guidance or dynamic evaluations. 2D-3D model-based registration, using a single-plane 2D digital radiographic system, does allow for rapid 3D localization. It is our goal to investigate - over a clinically practical range - the impact of x-ray exposure on the resulting range of 3D localization precision. In this paper it is assumed that the tracked instrument incorporates a rigidly attached 3D object with a known configuration of markers. A 2D image is obtained by a digital fluoroscopic x-ray system and corrected for XRII distortions (±0.035 mm) and mechanical C-arm shift (±0.080 mm). A least-square projection-Procrustes analysis is then used to calculate the 3D position using the measured 2D marker locations. The effect of x-ray exposure on the precision of 2D marker localization and on 3D object localization was investigated using numerical simulations and x-ray experiments. The results show a nearly linear relationship between 2D marker localization precision and the 3D localization precision. However, a significant amplification of error, nonuniformly distributed among the three major axes, occurs, and that is demonstrated. To obtain a 3D localization error of less than ±1.0 mm for an object with 20 mm marker spacing, the 2D localization precision must be better than ±0.07 mm. This requirement was met for all investigated nominal x-ray exposures at 28 cm

12. 1-D Numerical Analysis of ABCC Engine Performance

NASA Technical Reports Server (NTRS)

Holden, Richard

1999-01-01

The ABCC engine combines an air-breathing engine and a rocket engine into a single engine to increase the specific impulse over an entire flight trajectory. Except for the heat source, the basic operation of the ABCC is similar to that of the RBCC engine. The ABCC is intended to have a higher specific impulse than the RBCC for a single-stage Earth-to-orbit vehicle. Computational fluid dynamics (CFD) is a useful tool for the analysis of complex transport processes in the various components of an ABCC propulsion system. The objective of the present research was to develop a transient 1-D numerical model, using the conservation of mass, linear momentum, and energy equations, that could be used to predict flow behavior throughout a generic ABCC engine following a flight path. At specific points during the development of the 1-D numerical model, a myriad of tests were performed to verify that the program produced consistent, realistic numbers that follow compressible flow theory for various inlet conditions.

13. Numerical Ergonomics Analysis in Operation Environment of CNC Machine

Wong, S. F.; Yang, Z. X.

2010-05-01

The performance of an operator is affected by the operation environment [1], and a poor operation environment may cause health problems for the operator [2]. Physical and psychological considerations are the two main factors that affect operator performance under different operation-environment conditions. In this paper, scientific and systematic methods are applied to identify the pivotal elements among the physical and psychological factors; five main factors, including light, temperature, noise, air flow, and space, are analyzed. A numerical ergonomics model has been built from the analysis results, which can support improving the design of the operation environment. Moreover, the output of the numerical ergonomics model can provide safe, comfortable, and more productive conditions for the operator.

14. Throughput Analysis of IEEE 802.11 DCF in the Presence of Transmission Errors

Alshanyour, Ahed; Agarwal, Anjali

This paper introduces an accurate analysis using three-dimensional Markov chain modeling to compute the IEEE 802.11 DCF throughput under heavy traffic conditions and in the absence of hidden terminals for both access modes, basic and RTS/CTS. The proposed model considers the joint impact of the retry counts of control and data frames on the saturated throughput. Moreover, it considers the impact of transmission errors by taking into account the strength of the received signal and using the BER model to convert the SNR to a bit error probability.
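The SNR-to-error-probability step can be sketched as follows; a BPSK/AWGN channel is assumed here, which need not match the paper's exact BER model.

```python
import math

# Convert SNR to a bit error probability, then to the probability that an
# L-bit frame contains at least one bit error.
def ber_bpsk(snr_db: float) -> float:
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(snr))      # BPSK: Q(sqrt(2 * SNR))

def frame_error(snr_db: float, bits: int) -> float:
    p = ber_bpsk(snr_db)
    return 1.0 - (1.0 - p) ** bits              # >= 1 bit error in the frame

fe = frame_error(8.0, 8184)                     # e.g. a 1023-byte data frame
print(fe)
```

Feeding such frame-error probabilities for data and control frames into the Markov chain's transition probabilities is what couples channel quality to saturated throughput.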

15. Treatment of the background error in the statistical analysis of Poisson processes

Giunti, C.

1999-06-01

The formalism that allows one to take into account the error σb of the expected mean background b̄ in the statistical analysis of a Poisson process with the frequentist method is presented. It is shown that the error σb cannot be neglected if it is not much smaller than b̄. The resulting confidence belt is larger than the one for σb=0, leading to larger confidence intervals for the mean μ of signal events.
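A toy numerical illustration of how σb enters: below we simply average the Poisson probability over a Gaussian-distributed background and scan μ for a 90% CL upper limit, which is a simplification rather than Giunti's frequentist belt construction.

```python
import math
import random

random.seed(3)

def p_le_n(n, mu, b, sigma_b, draws=2000):
    """P(observe <= n) for signal mu and background b with uncertainty sigma_b."""
    total = 0.0
    for _ in range(draws):
        lam = mu + max(0.0, random.gauss(b, sigma_b))
        total += sum(math.exp(-lam) * lam**k / math.factorial(k)
                     for k in range(n + 1))
    return total / draws

def upper_limit(n, b, sigma_b):
    mu = 0.0
    while p_le_n(n, mu, b, sigma_b) > 0.10:   # mu excluded once P(<= n) < 10%
        mu += 0.05
    return mu

ul_exact_b = upper_limit(3, 1.0, 0.0)   # background known exactly
ul_fuzzy_b = upper_limit(3, 1.0, 0.8)   # background with sigma_b = 0.8
print(ul_exact_b, ul_fuzzy_b)
```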

16. Numeral-Incorporating Roots in Numeral Systems: A Comparative Analysis of Two Sign Languages

ERIC Educational Resources Information Center

Fuentes, Mariana; Massone, Maria Ignacia; Fernandez-Viader, Maria del Pilar; Makotrinsky, Alejandro; Pulgarin, Francisca

2010-01-01

Numeral-incorporating roots in the numeral systems of Argentine Sign Language (LSA) and Catalan Sign Language (LSC), as well as the main features of the number systems of both languages, are described and compared. Informants discussed the use of numerals and roots in both languages (in most cases in natural contexts). Ten informants took part in…

17. Numerical Analysis of a Radiant Heat Flux Calibration System

NASA Technical Reports Server (NTRS)

Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.

1998-01-01

A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
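A much-reduced analogue of such a model is a finite-difference solution of 1-D steady conduction with a uniform source; the second-order scheme and material values below are arbitrary stand-ins, not the paper's fourth-order 2-D gage-plate model.

```python
import numpy as np

# -k u'' = q on [0, L] with fixed-temperature ends, central differences.
n, L, k, q = 51, 0.1, 15.0, 2.0e5        # nodes, length [m], W/(m K), W/m^3
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0                # Dirichlet ends: u = 300 K
b[0] = b[-1] = 300.0
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0      # u'' ~ (u[i-1] - 2 u[i] + u[i+1]) / h^2
    A[i, i] = -2.0
    b[i] = -q * h**2 / k
u = np.linalg.solve(A, b)

# Exact solution u = 300 + q x (L - x) / (2 k); compare at the midpoint.
exact_mid = 300.0 + q * (L / 2) * (L / 2) / (2 * k)
print(abs(u[n // 2] - exact_mid))
```

The full model adds radiation, convection, and mass-loss terms to the same discrete balance, which is where the convection and erosion errors mentioned above enter.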

18. Application of symbolic representation method to the analysis of machine errors

Chen, Cha'o.-Kuang; Wu, Tzong-Mou

1993-09-01

Symbolic representation of machine errors in position and orientation for the open-loop chain and closed-loop chain is presented. This representation does away with cumbersome matrix multiplications and is able to omit the zero-valued terms of the matrix products. A program has also been developed by the symbolic representation method which is applicable to the analysis of machine errors. An example is given to illustrate the use of this program for the analysis of machine errors. It is hoped that the method presented in this study will provide an easy and powerful tool for the analysis of machine errors. Mechanisms are commonly used to attain a specified position and orientation in two- or three-dimensional space; inaccuracies introduced by clearances in the mechanism connections and errors in manufacturing are among the prin... SPIE Vol. 2101, Measurement Technology and Intelligent Instruments (1993), 155

19. Error analysis of look-up-table implementations in device-independent color imaging systems

Jennings, Eddie; Holland, R. D.; Lee, C. C.

1994-04-01

In device-independent color imaging systems, it is necessary to relate device color coordinates to and from standard colorimetric or appearance based color spaces. Such relationships are determined by mathematical modeling techniques with error estimates commonly quoted with the CIELAB (Delta) E metric. Due to performance considerations, a lookup table (LUT) is commonly used to approximate the model. LUT approximation accuracy is affected by the number of LUT entries, the distribution of the LUT data, and the interpolation technique used (full linear interpolation using cubes or hypercubes versus partial linear interpolation using tetrahedrons or hypertetrahedrons). Error estimates of such LUT approximations are not widely known. An overview of the modeling process and lookup table approximation technique is given with a study of relevant error analysis techniques. The application of such error analyses is shown for two common problems (converting scanner RGB and prepress proofing CMYK color definitions to CIELAB). In each application, (Delta) E statistics are shown for LUTs based on the above contributing factors. An industry recommendation is made for a standard way of communicating error information about interpolation solutions that will be meaningful to both vendors and end users.
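A 1-D analogue of the LUT approximation error (the color LUTs in question are 3-D or 4-D): tabulate a gamma-like device curve at N entries, interpolate linearly, and watch the worst-case error shrink as N grows. The transfer curve is an assumption for illustration.

```python
import numpy as np

def lut_max_error(n_entries):
    grid = np.linspace(0.0, 1.0, n_entries)
    lut = grid ** (1.0 / 2.2)                    # assumed device transfer curve
    probe = np.linspace(0.0, 1.0, 10001)
    approx = np.interp(probe, grid, lut)         # piecewise-linear LUT lookup
    return float(np.max(np.abs(approx - probe ** (1.0 / 2.2))))

e17, e33 = lut_max_error(17), lut_max_error(33)
print(e17, e33)
```

The error concentrates where the curve bends hardest (near zero here), which is why the distribution of LUT entries matters as much as their number, one of the contributing factors the abstract lists.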

20. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

PubMed Central

Tian, Zengshan; Xu, Kunjie; Yu, Xiang

2014-01-01

This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

1. Errors analysis on temperature and emissivity determination from hyperspectral thermal infrared data.

PubMed

OuYang, Xiaoying; Wang, Ning; Wu, Hua; Li, Zhao-Liang

2010-01-18

Sensitivity analysis of the temperature-emissivity separation method commonly applied to hyperspectral data to various sources of errors is performed in this paper. In terms of the resulting errors in the process of retrieving surface temperature, results show that: (1) Satisfactory results can be obtained for heterogeneous land surfaces, and the retrieval error of surface temperature is small enough to be neglected for all atmospheric conditions. (2) Separation of atmospheric downwelling radiance from at-ground radiance is not very sensitive to the uncertainty of column water vapor (WV) in the atmosphere. The errors in land surface temperature retrievals from at-ground radiance with the DRRI method due to the uncertainty in atmospheric downwelling radiance vary from -0.2 to 0.6 K if the uncertainty of WV is within 50% of the actual WV. (3) The impact of the errors generated by poor atmospheric corrections is significant, implying that a well-done atmospheric correction is indeed required to obtain accurate at-ground radiance from at-satellite radiance for successful separation of land-surface temperature and emissivity.
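The sensitivity of a retrieved temperature to radiance errors can be sketched by inverting the Planck function at a single wavelength; this is a generic single-band exercise, not the DRRI method.

```python
import math

# Spectral radiation constants for wavelengths in micrometres.
C1 = 1.191042e8    # first radiation constant  [W um^4 m^-2 sr^-1]
C2 = 1.4387752e4   # second radiation constant [um K]

def planck(T, wl=10.0):
    return C1 / (wl**5 * (math.exp(C2 / (wl * T)) - 1.0))

def invert(L, wl=10.0):
    return C2 / (wl * math.log(1.0 + C1 / (wl**5 * L)))

T = 300.0
dT = invert(1.01 * planck(T)) - T    # temperature error from a +1% radiance error
print(dT)
```

At 10 µm and 300 K, a 1% radiance error maps to a sub-kelvin temperature error; errors in emissivity and downwelling radiance enter the retrieval through the same nonlinearity.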

2. Error analysis for RADAR neighbor matching localization in linear logarithmic strength varying Wi-Fi environment.

PubMed

Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo

2014-01-01

This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
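A hedged sketch of RADAR-style nearest-neighbour fingerprinting under a linear-logarithmic RSS model; the geometry, path-loss constants, and noise level below are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
ap = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])    # access points [m]

def rss(p):
    d = np.linalg.norm(ap - p, axis=1)
    return -40.0 - 30.0 * np.log10(np.maximum(d, 0.1))   # log-distance RSS [dB]

# Reference points (RPs) on a 1 m grid; each fingerprint is the model RSS there.
gx, gy = np.meshgrid(np.linspace(0, 10, 11), np.linspace(0, 10, 11))
rps = np.column_stack([gx.ravel(), gy.ravel()])
fingerprints = np.array([rss(p) for p in rps])

errors = []
for _ in range(500):
    true = rng.uniform(0.0, 10.0, size=2)
    measured = rss(true) + rng.normal(0.0, 2.0, size=3)  # 2 dB shadowing noise
    nearest = rps[np.argmin(np.linalg.norm(fingerprints - measured, axis=1))]
    errors.append(float(np.linalg.norm(nearest - true)))

mean_err = float(np.mean(errors))
print(mean_err)
```

Re-running with a coarser RP grid raises the error floor set by RP deployment, the dependence the closed-form analysis in the paper makes explicit.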

3. Covariate measurement error correction methods in mediation analysis with failure time data.

PubMed

Zhao, Shanshan; Prentice, Ross L

2014-12-01

Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
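The regression-calibration idea can be shown in a linear toy model; the paper works with Cox models and failure times, so this only illustrates the calibration step itself.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
x = rng.normal(0.0, 1.0, n)              # true mediator (unobserved in practice)
w = x + rng.normal(0.0, 1.0, n)          # error-prone measurement of x
y = 2.0 * x + rng.normal(0.0, 0.5, n)    # outcome driven by the true mediator

naive = np.polyfit(w, y, 1)[0]           # attenuated toward 0 (about half here)
# Calibration: regress on E[x | w] = lambda * w with lambda = var(x) / var(w).
# In practice lambda is estimated from replicate measurements; here we use the
# simulation truth for brevity.
lam = np.var(x) / np.var(w)
calibrated = np.polyfit(lam * w, y, 1)[0]
print(float(naive), float(calibrated))
```

The naive slope is attenuated by the reliability ratio, and substituting the calibrated mediator restores the true coefficient, which is the effect the mean-variance and follow-up time calibration approaches generalize to the partial likelihood.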

4. A neighbourhood analysis based technique for real-time error concealment in H.264 intra pictures

Beesley, Steven T. C.; Grecos, Christos; Edirisinghe, Eran

2007-02-01

H.264's extensive use of context-based adaptive binary arithmetic or variable length coding makes streams highly susceptible to channel errors, a common occurrence over networks such as those used by mobile devices. Even a single bit error will cause a decoder to discard all stream data up to the next fixed-length resynchronisation point; in the worst case, an entire slice is lost. In cases where retransmission and forward error correction are not possible, a decoder should conceal any erroneous data in order to minimise the impact on the viewer. Stream errors can often be spotted early in the decode cycle of a macroblock, which, if aborted, can provide unused processor cycles; these can instead be used to conceal errors at minimal cost, even as part of a real-time system. This paper demonstrates a technique that utilises Sobel convolution kernels to quickly analyse the neighbourhood surrounding erroneous macroblocks before performing a weighted multi-directional interpolation. This generates significantly improved statistical (PSNR) and visual (IEEE structural similarity) results when compared to the commonly used weighted pixel value averaging. Furthermore, it is also computationally scalable, both during analysis and concealment, achieving maximum performance from the spare processing power available.
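The Sobel neighbourhood analysis can be sketched by estimating the dominant edge direction next to a lost region; the 3x3 window and test image are ours, while the paper applies this around erroneous macroblocks before weighting the interpolation.

```python
import numpy as np

SOBEL_X = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def sobel_at(img, r, c):
    """Horizontal and vertical gradient at pixel (r, c) via 3x3 Sobel kernels."""
    win = img[r - 1:r + 2, c - 1:c + 2]
    return float(np.sum(SOBEL_X * win)), float(np.sum(SOBEL_Y * win))

img = np.zeros((5, 5))
img[:, 3:] = 255.0                 # vertical edge between columns 2 and 3
gx, gy = sobel_at(img, 2, 2)       # strong horizontal gradient, no vertical one
print(gx, gy)
```

Interpolating along the direction perpendicular to the gradient (here, along the vertical edge) is what makes the concealment multi-directional rather than a plain pixel average.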

5. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

NASA Technical Reports Server (NTRS)

VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

1997-01-01

The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

6. Covariate measurement error correction methods in mediation analysis with failure time data.

PubMed

Zhao, Shanshan; Prentice, Ross L

2014-12-01

Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.

7. Error Analysis in a Device to Test Optical Systems by Using Ronchi Test and Phase Shifting

SciTech Connect

Cabrera-Perez, Brasilia; Castro-Ramos, Jorge; Gordiano-Alvarado, Gabriel; Vazquez y Montiel, Sergio

2008-04-15

In optical workshops, the Ronchi test is used to determine the optical quality of a concave surface while it is still in the polishing process. It is one of the simplest and most effective methods for evaluating and measuring aberrations. In this work, we describe a device to test converging mirrors and lenses with either small or large F-numbers, using an LED (light-emitting diode) adapted as the illumination source for the Ronchi test. The LED used has a larger radiation angle than a common LED, and external power supplies give it a well-stabilized intensity to avoid errors during the phase shift. The setup also accepts automatic data input and output, made possible by the use of phase-shifting interferometry and a square Ronchi ruling with a variable-intensity LED. An error analysis of the different parameters involved in the Ronchi test was made: the error in the phase shift, the error introduced by the movement of the motor, misalignments of the surface under test along the x-, y-, and z-axes, and the error in the period of the grid used.

8. Type I error and statistical power of the Mantel-Haenszel procedure for detecting DIF: a meta-analysis.

PubMed

Guilera, Georgina; Gómez-Benito, Juana; Hidalgo, Maria Dolores; Sánchez-Meca, Julio

2013-12-01

This article presents a meta-analysis of studies investigating the effectiveness of the Mantel-Haenszel (MH) procedure when used to detect differential item functioning (DIF). Studies were located electronically in the main databases, representing the codification of 3,774 different simulation conditions, 1,865 related to Type I error and 1,909 to statistical power. The homogeneity of effect-size distributions was assessed by the Q statistic. The extremely high heterogeneity in both error rates (I² = 94.70) and power (I² = 99.29), due to the fact that numerous studies test the procedure in extreme conditions, means that the main interest of the results lies in explaining the variability in detection rates. One-way analysis of variance was used to determine the effects of each variable on detection rates, showing that the MH test was more effective when purification procedures were used, when the data fitted the Rasch model, when test contamination was below 20%, and with sample sizes above 500. The results imply a series of recommendations for practitioners who wish to study DIF with the MH test. A limitation, one inherent to all meta-analyses, is that not all the possible moderator variables, or the levels of variables, have been explored. This serves to remind us of certain gaps in the scientific literature (i.e., regarding the direction of DIF or variances in ability distribution) and is an aspect that methodologists should consider in future simulation studies. PMID:24127986
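For readers unfamiliar with the statistic under study, a minimal sketch of the Mantel-Haenszel common odds ratio across score strata follows; the counts are made up for illustration and do not come from the meta-analysis.

```python
# Each stratum (a score level) is a 2x2 table:
# (A, B, C, D) = (reference correct, reference wrong,
#                 focal correct, focal wrong)
strata = [
    (40, 10, 35, 15),
    (30, 20, 22, 28),
    (15, 35, 10, 40),
]

# Mantel-Haenszel common odds ratio: sum of A*D/N over strata divided
# by sum of B*C/N. A value near 1 suggests no DIF; values far from 1
# flag the item for differential functioning.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
alpha_mh = num / den
```

In operational DIF analysis this estimate is paired with the MH chi-square test and often transformed to the delta scale; the meta-analysis above concerns the Type I error and power of that test across simulation conditions.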

9. Error analysis of deep sequencing of phage libraries: peptides censored in sequencing.

PubMed

2013-01-01

Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe the library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||ni||, where ni is the copy number of the ith sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N × N matrix and a stochastic sampling operator (Sa). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of Sa and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = Sa IN, where IN is an N × N identity matrix. Any bias in sequencing changes IN to a nonidentity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
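The vector-and-operator view can be sketched directly in code. The copy numbers and sampling depth below are hypothetical, and independent binomial thinning stands in for the stochastic sampling operator Sa acting on the frequency vector n.

```python
import random
random.seed(7)

# A library is a frequency vector n: copy numbers of each possible
# sequence (here only 6 hypothetical sequences, so N = 6).
n = [500, 100, 20, 5, 0, 0]
depth_fraction = 0.1   # fraction of the library that actually gets read

def sample_operator(n, p):
    # Sa acts diagonally: each sequence is thinned independently
    # (binomial thinning realized as one Bernoulli draw per copy).
    return [sum(1 for _ in range(ni) if random.random() < p) for ni in n]

reads = sample_operator(n, depth_fraction)
# A censorship matrix CEN would additionally zero out (or sharply
# down-weight) specific entries that the sequencer systematically
# under-reports, turning the identity factor into a nonidentity one.
```

The sketch makes the paper's point concrete: low-copy-number clones (here the entries 5, 0, 0) can vanish entirely under sampling, so apparent absence in the reads does not imply absence in the library.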

10. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

Kavetski, Dmitri; Clark, Martyn P.

2010-10-01

Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable
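The kind of numerical artifact the paper warns about can be reproduced with a one-line model. The linear reservoir and step sizes below are illustrative stand-ins, not drawn from the paper's six hydrological models.

```python
import math

# Linear reservoir dS/dt = -k*S, the simplest conceptual storage model.
k, S0, T = 5.0, 100.0, 1.0
exact = S0 * math.exp(-k * T)

def explicit_euler(dt):
    # Fixed-step explicit Euler: S_{n+1} = S_n + dt * (-k * S_n).
    S, t = S0, 0.0
    while t < T - 1e-12:
        S += dt * (-k * S)
        t += dt
    return S

coarse = explicit_euler(0.35)   # k*dt > 1: update factor is negative,
                                # giving a nonphysical negative storage
fine = explicit_euler(0.001)    # small step: close to the exact decay
```

With the coarse step the scheme does not merely lose accuracy; it produces negative storage, exactly the sort of artifact that deforms objective functions and contaminates sensitivity analysis when the step size interacts with the parameters being calibrated.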

11. Enhancement of the robustness on dynamic speckle laser numerical analysis

Cardoso, R. R.; Braga, R. A.

2014-12-01

When a dynamic process occurs in a material under laser illumination, the resulting phenomenon is known as dynamic laser speckle, or biospeckle laser (BSL) when the material is biological. Working with biological material and its scattering of light brings considerable complexity, and these complex outputs are handled through sophisticated analysis of the images combined with statistical approaches. One of the best-known numerical analyses of the BSL, the Inertia Moment (IM), has been applied in many contexts; however, its outputs have large coefficients of variation, most often attributed to the variability of the biological material. A modification of the inertia moment method, the Absolute Value of the Differences (AVD), was presented as an alternative to reduce this variation and to follow a broader range of frequencies than before. However, it did not sufficiently address the variability of the outputs. This study aimed to improve the BSL technique by enhancing the robustness of the IM method and further improving the AVD, reducing their coefficients of variation by means of changes to the normalization used in both methods. The new normalization was tested on simulated as well as real data. The results showed that both methods, IM and AVD, improved, with reduced coefficients of variation of the activity outputs and thus increased robustness of the analysis.
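A minimal sketch of the two descriptors, computed from a tiny hypothetical co-occurrence matrix of intensity transitions, is shown below. The normalization here is the usual row normalization, not the modified normalization this study proposes.

```python
# Sparse co-occurrence matrix (COM) of a time history of speckle
# pattern: key (i, j) counts transitions from intensity i to j.
# Values are invented for illustration.
com = {
    (10, 10): 50, (10, 12): 8, (12, 10): 7,
    (12, 12): 40, (12, 15): 5, (15, 12): 4,
}

# Row-normalize so each row of the COM sums to 1.
row_totals = {}
for (i, j), c in com.items():
    row_totals[i] = row_totals.get(i, 0) + c

# Inertia Moment: second moment of the normalized COM about its
# diagonal. AVD: first absolute moment, less sensitive to outliers.
im = sum(c / row_totals[i] * (i - j) ** 2 for (i, j), c in com.items())
avd = sum(c / row_totals[i] * abs(i - j) for (i, j), c in com.items())
```

Off-diagonal mass (intensity changing between frames) drives both descriptors up; the squared distance in IM weights large jumps much more heavily than AVD does, which is one reason their coefficients of variation differ.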

12. Unsaturated Shear Strength and Numerical Analysis Methods for Unsaturated Soils

Kim, D.; Kim, G.; Kim, D.; Baek, H.; Kang, S.

2011-12-01

The angles of shearing resistance (φb) and internal friction (φ') appear to be identical in the low suction range, but the angle of shearing resistance shows nonlinearity as suction increases. In most numerical analyses, however, a fixed value for the angle of shearing resistance is applied even in the low suction range for practical reasons, often leading to a false conclusion. In this study, a numerical analysis has been undertaken employing the shear strength curve of unsaturated soils estimated from the residual water content of the SWCC, as proposed by Vanapalli et al. (1996). The result was also compared with that from a fixed value of φb. It is suggested that, when it is difficult to measure the unsaturated shear strength curve through triaxial soil tests, the shear strength curve estimated from the residual water content can be a useful alternative. This result was applied to analyzing the slope stability of unsaturated soils. The effects of a continuous rainfall on slope stability were analyzed using the commercial program SLOPE/W, coupled with the infiltration analysis program SEEP/W, both from GEO-SLOPE International Ltd. The results show that, prior to infiltration by the intensive rainfall, the safety factors using the estimated shear strength curve were substantially higher than those from the fixed value of φb at all time points. After the intensive infiltration, both methods showed similar behavior.

13. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

Li, Zexian; Latva-aho, Matti

2004-12-01

Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral whose integrand is composed of tabulated functions and can easily be computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
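The computational strategy — reducing the average BER to a single finite-range integral and evaluating it numerically — can be illustrated on a far simpler case with a known closed form: BPSK over flat Rayleigh fading via the alternative (finite-range) form of the Q-function. This is only a stand-in for, not a reproduction of, the paper's multiuser Nakagami-m analysis.

```python
import math

snr = 10.0  # average SNR (linear scale), illustrative value

def integrand(theta):
    # MGF-based integrand from the alternative Q-function expression:
    # for Rayleigh fading, E[exp(-gamma / sin^2(theta))] reduces to
    # 1 / (1 + snr / sin^2(theta)).
    s2 = math.sin(theta) ** 2
    if s2 == 0.0:
        return 0.0  # limit of the integrand at theta = 0
    return 1.0 / (1.0 + snr / s2)

def simpson(f, a, b, n=1000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + h * i) for i in range(1, n, 2))
    s += 2 * sum(f(a + h * i) for i in range(2, n, 2))
    return s * h / 3

ber = simpson(integrand, 0.0, math.pi / 2) / math.pi
closed = 0.5 * (1 - math.sqrt(snr / (1 + snr)))  # known closed form
```

Because the integrand is smooth and the interval finite, even a basic quadrature rule matches the closed form to many digits, which is exactly what makes the single-finite-range-integral formulation practical.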

14. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited

SciTech Connect

Birge, Jonathan R.; Kaertner, Franz X.

2008-06-15

We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

15. Hybridizing experimental, numerical, and analytical stress analysis techniques

Rowlands, Robert E.

2001-06-01

Good measurements enjoy the advantage of conveying what actually occurs. However, recognizing that vast amounts of displacement-, strain-, and/or stress-related information can now be recorded at high resolution, effective and reliable means of processing the data become important. It can therefore be advantageous to combine measured results with analytical and computational methods. This presentation describes such synergism and its applications to engineering problems, including static and transient analysis, notched and perforated composites, and fracture of composites and fiber-filled cement. The experimental methods of moiré, thermoelasticity, and strain gages are emphasized. Numerical techniques utilized include pseudo finite-element and boundary-element concepts.

16. Numerical analysis of decoy state quantum key distribution protocols

SciTech Connect

Harrington, Jim W; Rice, Patrick R

2008-01-01

Decoy state protocols are a useful tool for many quantum key distribution systems implemented with weak coherent pulses, allowing significantly better secret bit rates and longer maximum distances. In this paper we present a method to numerically find optimal three-level protocols, and we examine how the secret bit rate and the optimized parameters are dependent on various system properties, such as session length, transmission loss, and visibility. Additionally, we show how to modify the decoy state analysis to handle partially distinguishable decoy states as well as uncertainty in the prepared intensities.

17. Diffraction patterns from multiple tilted laser apertures: numerical analysis

Kovalev, Anton V.; Polyakov, Vadim M.

2016-03-01

We propose a Rayleigh-Sommerfeld-based method for numerical calculation of the near- and far-field diffraction patterns of multiple tilted apertures. The method is based on an iterative procedure of FFT-based circular convolution of the initial field's complex amplitude distribution with an impulse response function modified to account for the mutual tilt of the aperture and observation planes. The method is computationally efficient, agrees well with experimental diffraction patterns, and can be applied to the analysis of spatial noise arising in master oscillator power amplifier laser systems. A diffraction simulation for a Phobos-Ground laser rangefinder amplifier is demonstrated as an example.

18. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response

PubMed Central

Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

2016-01-01

In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system’s response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor’s optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562

19. A Meta-Analysis for Association of Maternal Smoking with Childhood Refractive Error and Amblyopia

PubMed Central

Li, Li; Qi, Ya; Shi, Wei; Wang, Yuan; Liu, Wen; Hu, Man

2016-01-01

Background. We aimed to evaluate the association between maternal smoking and the occurrence of childhood refractive error and amblyopia. Methods. Relevant articles were identified from PubMed and EMBASE up to May 2015. Combined odds ratio (OR) corresponding with its 95% confidence interval (CI) was calculated to evaluate the influence of maternal smoking on childhood refractive error and amblyopia. The heterogeneity was evaluated with the Chi-square-based Q statistic and the I2 test. Potential publication bias was finally examined by Egger's test. Results. A total of 9 articles were included in this meta-analysis. The pooled OR showed that there was no significant association between maternal smoking and childhood refractive error. However, children whose mother smoked during pregnancy were 1.47 (95% CI: 1.12–1.93) times and 1.43 (95% CI: 1.23-1.66) times more likely to suffer from amblyopia and hyperopia, respectively, compared with children whose mother did not smoke, and the difference was significant. Significant heterogeneity was only found among studies involving the influence of maternal smoking on children's refractive error (P < 0.05; I2 = 69.9%). No potential publication bias was detected by Egger's test. Conclusion. The meta-analysis suggests that maternal smoking is a risk factor for childhood hyperopia and amblyopia. PMID:27247800
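The standard machinery behind such a pooled OR — fixed-effect inverse-variance weighting on the log scale — can be sketched with illustrative numbers; these are not the review's data.

```python
import math

# Each study reports (OR, CI lower, CI upper); values are invented.
studies = [(1.5, 1.1, 2.05), (1.3, 0.9, 1.88), (1.6, 1.05, 2.44)]

log_ors, weights = [], []
for or_, lo, hi in studies:
    # Recover the standard error of log(OR) from the 95% CI width.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    log_ors.append(math.log(or_))
    weights.append(1.0 / se ** 2)   # inverse-variance weight

pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * se_pooled),
      math.exp(pooled_log + 1.96 * se_pooled))
```

When the Q statistic or I² indicates heterogeneity, as the review found for refractive error, a random-effects model (e.g., DerSimonian-Laird) would add a between-study variance component to each weight instead.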

2. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

2015-07-01

Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
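The variance-based decomposition underlying a Sobol' analysis can be sketched with the pick-freeze estimator on an additive toy function. This is a stand-in for, not a reproduction of, the Utah Energy Balance experiments; the toy's true first-order index for x1 is 9/10.

```python
import random
random.seed(42)

def model(x1, x2):
    # additive toy model with unit-variance inputs:
    # Var(Y) = 9 + 1, so the first-order index S1 = 9/10
    return 3.0 * x1 + 1.0 * x2

N = 100000
A = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
B = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

yA = [model(*a) for a in A]
# "Pick-freeze" for factor 1: keep x1 from sample A, redraw x2 from B.
yC = [model(a[0], b[1]) for a, b in zip(A, B)]

mA = sum(yA) / N
varA = sum((y - mA) ** 2 for y in yA) / N
# Cov(yA, yC) estimates Var(E[Y | X1]); divide by Var(Y) for S1.
S1 = (sum(ya * yc for ya, yc in zip(yA, yC)) / N - mA ** 2) / varA
```

The study's adaptation treats forcing errors (biases, random errors with chosen distributions and magnitudes) as the uncertain inputs in place of model parameters, but the variance decomposition itself is the same.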

3. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

NASA Technical Reports Server (NTRS)

Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

1992-01-01

The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution but are slightly more sensitive to measurement error than the constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have as an explicit constraint the sensitivity of the solution to the a priori profile. Tradeoffs among these retrieval characteristics are presented.

4. Error modeling based on geostatistics for uncertainty analysis in crop mapping using Gaofen-1 multispectral imagery

You, Jiong; Pei, Zhiyuan

2015-01-01

With the development of remote sensing technology, its applications in agricultural monitoring systems, crop mapping accuracy, and spatial distribution are being explored more and more by administrators and users. Uncertainty in crop mapping is profoundly affected by the spatial pattern of spectral reflectance values obtained from the applied remote sensing data. Errors in remotely sensed crop cover information, and their propagation into derivative products, need to be quantified and handled correctly. Therefore, this study discusses methods of error modeling for uncertainty characterization in crop mapping using GF-1 multispectral imagery. An error modeling framework based on geostatistics is proposed, which introduces the sequential Gaussian simulation algorithm to explore the relationship between classification errors and the spectral signature of the remote sensing data source. On this basis, a misclassification probability model is developed to produce a spatially explicit classification error probability surface for the map of a crop, which realizes the uncertainty characterization for crop mapping. In this process, trend surface analysis was carried out to generate a spatially varying mean response and the corresponding residual response with spatial variation for the spectral bands of the GF-1 multispectral imagery. Variogram models were employed to measure the spatial dependence in the spectral bands and the derived misclassification probability surfaces. Simulated spectral data and classification results were quantitatively analyzed. Through experiments using data sets from a region of low rolling country in the Yangtze River valley, it was found that GF-1 multispectral imagery can be used for crop mapping with good overall performance, that the proposed error modeling framework can quantify the uncertainty in crop mapping, and that the misclassification probability model can summarize the spatial variation in map accuracy and is helpful for

5. Regularization methods used in error analysis of solar particle spectra measured on SOHO/EPHIN

Kharytonov, A.; Böhm, E.; Wimmer-Schweingruber, R. F.

2009-02-01

Context: The telescope EPHIN (Electron, Proton, Helium INstrument) on the SOHO (SOlar and Heliospheric Observatory) spacecraft measures the energy deposit of solar particles passing through the detector system. The original energy spectrum of solar particles is obtained by regularization methods from EPHIN measurements. It is important not only to obtain the solution of this inverse problem but also to estimate errors or uncertainties of the solution. Aims: The focus of this paper is to evaluate the influence of errors or noise in the instrument response function (IRF) and in the measurements when calculating energy spectra in space-based observations by regularization methods. Methods: The basis of solar particle spectra calculation is the Fredholm integral equation with the instrument response function as the kernel that is obtained by the Monte Carlo technique in matrix form. The original integral equation reduces to a singular system of linear algebraic equations. The nonnegative solution is obtained by optimization with constraints. For the starting value we use the solution of the algebraic problem that is calculated by regularization methods such as the singular value decomposition (SVD) or the Tikhonov methods. We estimate the local errors from special algebraic and statistical equations that are considered as direct or inverse problems. Inverse problems for the evaluation of errors are solved by regularization methods. Results: This inverse approach with error analysis is applied to data from the solar particle event observed by SOHO/EPHIN on day 1996/191. We find that the various methods have different strengths and weaknesses in the treatment of statistical and systematic errors.
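A hedged toy sketch of Tikhonov regularization, the class of method the paper applies to the inverse problem, is given below; the matrices are illustrative, not the EPHIN instrument response function.

```python
# Ill-conditioned linear system A x = b: a tiny perturbation in b
# wrecks the naive solution, while the Tikhonov-damped solution
# (A^T A + lam^2 I) x = A^T b stays near the true answer.
A = [[1.0, 1.0],
     [1.0, 1.0001]]        # nearly singular system
b = [2.0, 2.0002]          # b = A @ (1, 1) plus a tiny perturbation

def solve2(M, v):
    # direct 2x2 solve via Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def tikhonov(A, b, lam):
    # regularized normal equations: (A^T A + lam^2 I) x = A^T b
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2))
            + (lam ** 2 if i == j else 0.0) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve2(AtA, Atb)

x_naive = solve2(A, b)            # noise amplified: far from (1, 1)
x_reg = tikhonov(A, b, lam=1e-3)  # damped solution close to (1, 1)
```

In SVD terms, the regularization multiplies each singular component by the filter factor σ²/(σ² + λ²), suppressing the noise-dominated small-σ directions; choosing λ (and propagating its effect into the error estimates) is precisely the kind of question the paper addresses.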

6. Packet error rate analysis of digital pulse interval modulation in intersatellite optical communication systems with diversified wavefront deformation.

PubMed

Zhu, Jin; Wang, Dayan; Xie, Wanqing

2015-02-20

Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems, which will decrease system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, the diversified wavefront deformation can be described by wavelet parameters: coefficient, dilation, and shift factors, where the coefficient factor represents the depth, dilation factor represents the area, and shift factor is for location. Based on this, the relationship between PER and wavelet parameters is analyzed from a theoretical viewpoint. Numerical results illustrate the validity of theoretical analysis: PER increases with the depth and area and decreases if location gets farther from the center of the optical antenna. In addition to describing diversified deformation, the advantage of the wavelet method over Zernike polynomials in computational complexity is shown via numerical example. This work provides a feasible method for the description along with influence analysis of diversified wavefront deformation from a practical viewpoint and will be helpful for designing optical systems.

7. Orbit Determination Error Analysis Results for the Triana Sun-Earth L2 Libration Point Mission

NASA Technical Reports Server (NTRS)

Marr, G.

2003-01-01

Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination error analysis results are presented for all phases of the Triana Sun-Earth L1 libration point mission and for the science data collection phase of a future Sun-Earth L2 libration point mission. The Triana spacecraft was nominally to be released by the Space Shuttle in a low Earth orbit, and this analysis focuses on that scenario. From the release orbit, a transfer trajectory insertion (TTI) maneuver performed using a solid stage would increase the velocity by approximately 3.1 km/sec, sending Triana on a direct trajectory to its mission orbit. The Triana mission orbit is a Sun-Earth L1 Lissajous orbit with a Sun-Earth-vehicle (SEV) angle between 4.0 and 15.0 degrees, which would be achieved after a Lissajous orbit insertion (LOI) maneuver at approximately launch plus 6 months. Because Triana was to be launched by the Space Shuttle, TTI could potentially occur over a 16 orbit range from low Earth orbit. This analysis was performed assuming TTI was performed from a low Earth orbit with an inclination of 28.5 degrees and assuming support from a combination of three Deep Space Network (DSN) stations, Goldstone, Canberra, and Madrid, and four commercial Universal Space Network (USN) stations, Alaska, Hawaii, Perth, and Santiago. These ground stations would provide coherent two-way range and range rate tracking data usable for orbit determination. Larger range and range rate errors were assumed for the USN stations. Nominally, DSN support would end at TTI+144 hours assuming there were no USN problems. Post-TTI coverage for a range of TTI longitudes for a given nominal trajectory case was analyzed. The orbit determination error analysis after the first correction maneuver would be generally applicable to any libration point mission utilizing a direct trajectory.

8. Numerical analysis and experimental verification of vehicle trajectories

Wekezer, J. W.; Cichocki, K.

2003-09-01

The paper presents the results of a study in which computational mechanics was utilized to predict vehicle trajectories upon traversing standard Florida DOT street curbs. Computational analysis was performed using the LS-DYNA nonlinear finite element code with two public-domain finite element models of motor vehicles: a Ford Festiva and a Ford Taurus. Shock absorbers were modeled using discrete spring and damper elements. Connections for the modified suspension systems were carefully designed to assure a proper range of motion for the suspension models. Inertia properties of the actual vehicles were collected using tilt-table tests and were used for the LS-DYNA vehicle models. Full-scale trajectory tests were performed at the Texas Transportation Institute to validate the numerical models and the predictions from computational mechanics. Experiments were conducted for the Ford Festiva and Ford Taurus, each for two values of the approach angle, 15 and 90 degrees, with an impact velocity of 45 mph. Experimental data including accelerations, displacements, and overall vehicle behavior were collected by high-speed video cameras and compared with the numerical results. The verification indicated a good correlation between the computational analysis and the full-scale test data. The study also underlined the strong influence of properly modeled suspensions and tires on the resulting vehicle trajectories.

9. Asymptotic and numerical analysis of electrohydrodynamic flows of dielectric liquid

Suh, Y. K.; Baek, K. H.; Cho, D. S.

2013-08-01

We perform an asymptotic analysis of electrohydrodynamic (EHD) flow of nonpolar liquid subjected to an external, nonuniform electric field. The domain of interest covers the bulk as well as the thin dissociation layers (DSLs) near the electrodes. Outer (i.e., bulk) equations for the ion transport in hierarchical order of perturbation parameters can be expressed in linear form, whereas the inner (i.e., DSL) equations take a nonlinear form. We derive a simple formula in terms of various parameters which can be used to estimate the relative importance of the DSL-driven flow compared with the bulk-driven flow. EHD flow over a pair of cylindrical electrodes is then solved asymptotically and numerically. It is found that at large geometric scales and high ion concentrations the EHD flow is dominated by the bulk-charge-induced flow. As the scale and concentration are decreased, the DSL-driven slip velocity increases and the resultant flow tends to dominate the domain, finally leading to flow reversal. We also conduct a flow-visualization experiment to verify the analysis and attain good agreement between the two results with parameter tuning. We finally show, based on the comparison of experimental and numerical solutions, that the rate of free-ion generation (dissociation) should be less than the one predicted from the existing formula.

10. Numerical analysis of cocurrent conical and cylindrical axial cyclone separators

Nor, M. A. M.; Al-Kayiem, H. H.; Lemma, T. A.

2015-12-01

The axial cocurrent liquid-liquid separator is seen as an alternative to the traditional tangential countercurrent cyclone due to lower droplet breakup, turbulence, and pressure drop. This paper presents a numerical analysis of a new conical axial cocurrent design along with a comparison to the cylindrical axial cocurrent type. The simulation was carried out using CFD techniques in the ANSYS-FLUENT software. The simulation results were validated by comparison with experimental data from the literature, and mesh independence and quality checks were performed. The analysis indicates that the conical version achieves better separation performance than the cylindrical type: tangential velocity is about 8% higher and axial recirculation about 80% lower. The flow-visualization contours also show a smaller recirculation region relative to the cylindrical unit. The proposed conical design appears more efficient and well suited to crude/water separation in the oil and gas industry.

11. Numerical analysis of modified Central Solenoid insert design

DOE PAGESBeta

Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; Titus, Peter

2015-06-21

The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide verification of the conductor performance under relevant conditions of temperature, field, current, and mechanical strain. The USIPO designed the CSI, which will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal, and electromagnetic analysis. Thermal and electromagnetic simulations supported the structural calculations by providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current; (2) temperature 4 K, no current; (3) temperature 4 K, current 60 kA, direct charge; and (4) temperature 4 K, current 60 kA, reverse charge. A fatigue life assessment is performed for the alternating conditions of temperature 4 K, no current, and temperature 4 K, current 45 kA, direct charge. The results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the current sharing temperature (TCS) in the superconductor were obtained from the numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.

12. Numerical analysis of modified Central Solenoid insert design

SciTech Connect

Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; Titus, Peter

2015-06-21

The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide verification of the conductor performance under relevant conditions of temperature, field, current, and mechanical strain. The USIPO designed the CSI, which will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal, and electromagnetic analysis. Thermal and electromagnetic simulations supported the structural calculations by providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current; (2) temperature 4 K, no current; (3) temperature 4 K, current 60 kA, direct charge; and (4) temperature 4 K, current 60 kA, reverse charge. A fatigue life assessment is performed for the alternating conditions of temperature 4 K, no current, and temperature 4 K, current 45 kA, direct charge. The results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the current sharing temperature (TCS) in the superconductor were obtained from the numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.

13. Numerical analysis of electrically tunable aspherical optofluidic lenses.

PubMed

Mishra, Kartikeya; Mugele, Frieder

2016-06-27

In this work, we use the numerical simulation platform Zemax to investigate the optical properties of electrically tunable aspherical liquid lenses, as we recently reported in an experimental study [K. Mishra, C. Murade, B. Carreel, I. Roghair, J. M. Oh, G. Manukyan, D. van den Ende, and F. Mugele, "Optofluidic lens with tunable focal length and asphericity," Sci. Rep. 4, 6378 (2014)]. Based on the measured lens profiles in the presence of an inhomogeneous electric field and the geometry of the optical device, we calculate the optical aberrations, focusing in particular on the Z11 Zernike coefficient of spherical aberration obtained at zero defocus (Z4). Focal length and spherical aberrations are calculated for a wide range of control parameters (fluid pressure and electric field), in parallel with the experimental results. Similarly, the modulation transfer function (MTF), image spot diagrams, Strehl ratio, and peak-to-valley (P-V) and root mean square (RMS) wavefront errors are calculated to quantify the performance of our aspherical liquid lenses. We demonstrate that the device concept allows compensation for a wide range of spherical aberrations encountered in optical systems. PMID:27410619
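
The P-V and RMS wavefront metrics mentioned above follow directly from the Zernike description of the wavefront. As a minimal sketch with made-up coefficient values (not the lens data from the paper): for an orthonormal (Noll-normalized) Zernike expansion, the RMS wavefront error is the root-sum-square of the coefficients, and the extended Maréchal approximation relates it to the Strehl ratio.

```python
import numpy as np

# Hypothetical Noll-normalized Zernike coefficients in waves (piston excluded).
# Z4 = defocus, Z11 = primary spherical aberration, as in the record above.
coeffs = {4: 0.00, 11: 0.05, 8: 0.02}   # illustrative values only

# For an orthonormal Zernike basis the RMS wavefront error is the
# root-sum-square of the coefficients.
rms = np.sqrt(sum(c**2 for c in coeffs.values()))

# Extended Marechal approximation for the Strehl ratio (valid for small errors).
strehl = np.exp(-(2.0 * np.pi * rms) ** 2)
print(rms, strehl)
```

With spherical aberration of 0.05 waves RMS and small astigmatism, the approximation gives a Strehl ratio close to 0.9, i.e., a nearly diffraction-limited lens.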

14. Space Trajectory Error Analysis Program (STEAP) for halo orbit missions. Volume 2: Programmer's manual

NASA Technical Reports Server (NTRS)

Byrnes, D. V.; Carney, P. C.; Underwood, J. W.; Vogt, E. D.

1974-01-01

The six-month effort was responsible for the development, testing, conversion, and documentation of computer software for the analysis of missions to halo orbits about libration points in the Earth-Sun system. The software, consisting of two programs called NOMNAL and ERRAN, is part of the Space Trajectory Error Analysis Programs (STEAP). The program NOMNAL targets a transfer trajectory from Earth on a given launch date to a specified halo orbit on a required arrival date. Either impulsive or finite-thrust insertion maneuvers into the halo orbit are permitted by the program. The transfer trajectory is consistent with a realistic launch profile input by the user. The second program, ERRAN, conducts error analyses of the targeted transfer trajectory. Measurements including range, Doppler, star-planet angles, and apparent planet diameter are processed in a Kalman-Schmidt filter to determine the trajectory knowledge uncertainty.

15. Error analysis for the ground-based microwave ozone measurements during STOIC

Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick

1995-05-01

We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ("baseline"). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the "blind" microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
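
The averaging-kernel comparison described in this record follows the standard smoothing relation: to compare a low-resolution retrieval to a high-resolution profile, the latter is convolved with the retrieval's averaging kernel. A minimal sketch, with a toy kernel matrix and made-up profiles rather than the STOIC data:

```python
import numpy as np

# Toy retrieval grid and a hypothetical a priori ozone profile (ppmv).
x_a = np.full(5, 5.0)                           # a priori, illustrative
x_true = np.array([4.0, 5.5, 7.0, 6.0, 3.5])    # high-resolution "SAGE-like" profile

# A crude averaging-kernel matrix: rows peak on the diagonal, mimicking the
# finite vertical resolution of the microwave retrieval.
A = np.array([[0.6, 0.2, 0.0, 0.0, 0.0],
              [0.2, 0.5, 0.2, 0.0, 0.0],
              [0.0, 0.2, 0.5, 0.2, 0.0],
              [0.0, 0.0, 0.2, 0.5, 0.2],
              [0.0, 0.0, 0.0, 0.2, 0.6]])

# Smoothing relation: what the low-resolution instrument would retrieve if
# the high-resolution profile were the truth, removing resolution and
# a priori effects from the comparison.
x_conv = x_a + A @ (x_true - x_a)
print(x_conv)
```

Comparing `x_conv` (rather than `x_true`) to the microwave retrieval removes the part of the discrepancy that is due only to vertical resolution and the a priori.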

16. Development of an improved HRA method: A technique for human error analysis (ATHEANA)

SciTech Connect

Taylor, J.H.; Luckas, W.J.; Wreathall, J.

1996-03-01

Probabilistic risk assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and for the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA) is a critical element of PRA, but its limitations in the analysis of human actions have long been recognized as a constraint on the use of PRA. In fact, better integration of HRA into the PRA process has long been an NRC issue. Of particular concern has been the omission of errors of commission - those errors associated with inappropriate interventions by operators in operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification.

17. An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR

Yu, Guanying; Liu, Xufeng; Liu, Songlin

2016-10-01

The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. An analysis of the ripple and error fields induced by the RAFM steel in the WCCB blanket is performed using static magnetic analysis in the ANSYS code. The blanket produces a significant additional magnetic field, which leads to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which is higher than the acceptable design value of 0.5%. In addition, when one blanket module is taken out for heating purposes, the resulting error field is calculated to be seriously in violation of the requirement. Supported by the National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004).

18. Error analysis for the ground-based microwave ozone measurements during STOIC

NASA Technical Reports Server (NTRS)

Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick

1995-01-01

We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.

19. Laser Doppler velocimeter system for turbine stator cascade studies and analysis of statistical biasing errors

NASA Technical Reports Server (NTRS)

Seasholtz, R. G.

1977-01-01

A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self-contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.

20. Displacement sensor with controlled measuring force and its error analysis and precision verification

Yang, Liangen; Wang, Xuanze; Lv, Wei

2011-05-01

A displacement sensor with controlled measuring force, together with its error analysis and precision verification, is discussed in this paper. The displacement sensor consists of a high-resolution electric induction transducer and a voice coil motor (VCM). The measuring principles, structure, method of enlarging the measuring range, and signal processing of the sensor are discussed. The main error sources are analyzed, including parallelism error and framework tilt caused by unequal leaf-spring lengths, the rigidity of the measuring rods, stylus shape error, friction between the iron core and other parts, leaf-spring damping, supply-voltage variation, and the linearity, resolution, and stability of the induction transducer. A measuring system for surface topography with a large measuring range is constructed based on the displacement sensor and a 2D moving platform, and its measuring precision and stability are verified. The measuring force during surface-topography measurement can be controlled at the μN level and hardly changes. The sensor has been used in the measurement of bearing balls, bullet marks, etc. It has a measuring range of up to 2 mm and nm-level precision.
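
For the kind of error budget this record enumerates, independent error sources are commonly combined in quadrature (root-sum-square) to estimate the overall uncertainty. A minimal sketch with placeholder magnitudes, since the paper's actual per-source values are not given here:

```python
import math

# Illustrative (not measured) 1-sigma contributions in nanometres for the
# error sources listed above; all values are placeholders.
sources_nm = {
    "parallelism_of_leaf_springs": 3.0,
    "measuring_rod_rigidity": 2.0,
    "stylus_shape_error": 1.5,
    "core_friction": 1.0,
    "supply_voltage_variation": 0.5,
    "transducer_nonlinearity": 2.5,
}

# Independent error sources combine in quadrature (root-sum-square).
total_nm = math.sqrt(sum(v**2 for v in sources_nm.values()))
print(round(total_nm, 2))
```

The quadrature sum is dominated by the largest contributors, which is why an error analysis like the one above focuses effort on the few worst sources rather than on all of them equally.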

1. Displacement sensor with controlled measuring force and its error analysis and precision verification

Yang, Liangen; Wang, Xuanze; Lv, Wei

2010-12-01

A displacement sensor with controlled measuring force, together with its error analysis and precision verification, is discussed in this paper. The displacement sensor consists of a high-resolution electric induction transducer and a voice coil motor (VCM). The measuring principles, structure, method of enlarging the measuring range, and signal processing of the sensor are discussed. The main error sources are analyzed, including parallelism error and framework tilt caused by unequal leaf-spring lengths, the rigidity of the measuring rods, stylus shape error, friction between the iron core and other parts, leaf-spring damping, supply-voltage variation, and the linearity, resolution, and stability of the induction transducer. A measuring system for surface topography with a large measuring range is constructed based on the displacement sensor and a 2D moving platform, and its measuring precision and stability are verified. The measuring force during surface-topography measurement can be controlled at the μN level and hardly changes. The sensor has been used in the measurement of bearing balls, bullet marks, etc. It has a measuring range of up to 2 mm and nm-level precision.

2. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

NASA Technical Reports Server (NTRS)

Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

1994-01-01

Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using an underwater video vault system. Recorded images were played back on a VCR, and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
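
The error measure described here - Euclidean distance between known and digitized grid points, expressed as a percentage of a field dimension - can be sketched in a few lines. The coordinates below are invented for illustration, not the WETF calibration data:

```python
import numpy as np

# Hypothetical known grid coordinates (cm) and their digitized counterparts,
# with distortion growing toward the edge of the field of view.
known = np.array([[0, 0], [10, 0], [20, 0], [0, 10], [20, 10]], dtype=float)
digitized = np.array([[0.1, 0.0], [10.0, 0.1], [20.9, 0.2],
                      [0.2, 10.1], [21.2, 10.8]])

# Error for each point: Euclidean distance from known to digitized location.
errors = np.linalg.norm(digitized - known, axis=1)

# Express as a percentage of a characteristic field dimension (20 cm here).
percent = 100.0 * errors / 20.0
print(percent.max())
```

As in the study, the largest percentage errors appear at the corner points, which is the quantitative basis for avoiding the outermost regions of a wide-angle lens.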

3. Error analysis of overlay compensation methodologies and proposed functional tolerances for EUV photomask flatness

Ballman, Katherine; Lee, Christopher; Dunn, Thomas; Bean, Alexander

2016-05-01

Due to the impact on image placement and overlay errors inherent in all reflective lithography systems, EUV reticles will need to adhere to flatness specifications below 10 nm for 2018 production. These single-value metrics are nearly impossible to meet using the current tooling infrastructure (current state-of-the-art reticles report P-V flatness ~60 nm). In order to focus innovation on areas which lack capability for flatness compensation or correction, this paper redefines flatness metrics as "correctable" vs. "non-correctable" based on the surface topography's contributions to the final IP budget at wafer, as well as whether data-driven corrections (write compensation or at scanner) are available for the reticle's specific shape. To better understand and define the limitations of write compensation and scanner corrections, an error budget for the processes contributing to these two methods is presented. Photomask flatness measurement tools are now targeting 6σ reproducibility <1 nm (previous 3σ reproducibility ~3 nm) in order to drive down error contributions and provide more accurate data for correction techniques. Taking advantage of the high-order measurement capabilities of improved metrology tooling, as well as computational capabilities which enable fast measurements and analysis of sophisticated shapes, we propose a methodology for the industry to create functional tolerances focused on the flatness errors that are not correctable with compensation.

4. The linear Fresnel lens - Solar optical analysis of tracking error effects

NASA Technical Reports Server (NTRS)

Cosby, R. M.

1977-01-01

Real sun-tracking solar concentrators imperfectly follow the solar disk, operationally sustaining both transverse and axial misalignments. This paper describes an analysis of the solar concentration performance of a line-focusing flat-base Fresnel lens in the presence of small transverse tracking errors. Simple optics and ray-tracing techniques are used to evaluate the lens solar transmittance and focal-plane imaging characteristics. Computer-generated example data for an f/1.0 lens indicate that less than a 1% transmittance degradation occurs for transverse errors up to 2.5 deg. In this range, solar-image profiles shift laterally in the focal plane, the peak concentration ratio drops, and profile asymmetry increases with tracking error. With profile shift as the primary factor, the ninety-percent target-intercept width increases rapidly for small misalignments, e.g., almost threefold for a 1-deg error. The analytical model and computational results provide a design base for tracking and absorber systems for the linear-Fresnel-lens solar concentrator.
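
The dominant effect reported in this record, the lateral shift of the solar-image profile with transverse tracking error, can be estimated to first order as delta_x ~ f*tan(theta). This is a back-of-the-envelope sketch, not the paper's ray-trace model, and the 30 cm focal length below is an assumed value:

```python
import math

# First-order estimate of the lateral focal-plane image shift caused by a
# transverse tracking error theta: delta_x ~ f * tan(theta).
# Assumed numbers: a hypothetical f/1.0 linear Fresnel lens, 30 cm focal length.
f_cm = 30.0
for theta_deg in (0.5, 1.0, 2.5):
    shift_cm = f_cm * math.tan(math.radians(theta_deg))
    print(theta_deg, round(shift_cm, 3))
```

Even a 1-degree error shifts the image by roughly half a centimetre in this toy case, which is consistent with the paper's finding that the target-intercept width grows rapidly for small misalignments even while transmittance barely degrades.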

5. Principal components analysis of reward prediction errors in a reinforcement learning task.

PubMed

Sambrook, Thomas D; Goslin, Jeremy

2016-01-01

Models of reinforcement learning represent reward and punishment in terms of reward prediction errors (RPEs), quantitative signed terms describing the degree to which outcomes are better than expected (positive RPEs) or worse (negative RPEs). An electrophysiological component known as feedback related negativity (FRN) occurs at frontocentral sites 240-340 ms after feedback on whether a reward or punishment is obtained, and has been claimed to neurally encode an RPE. An outstanding question, however, is whether the FRN is sensitive to the size of both positive RPEs and negative RPEs. Previous attempts to answer this question have examined the simple effects of RPE size for positive RPEs and negative RPEs separately. However, this methodology can be compromised by overlap from components coding for unsigned prediction error size, or "salience", which are sensitive to the absolute size of a prediction error but not its valence. In our study, positive and negative RPEs were parametrically modulated using both reward likelihood and magnitude, with principal components analysis used to separate out overlying components. This revealed a single RPE encoding component responsive to the size of positive RPEs, peaking at ~330 ms, and occupying the delta frequency band. Other components responsive to unsigned prediction error size were shown, but no component sensitive to negative RPE size was found.
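
The core idea, using PCA to separate temporally overlapping components whose trial-to-trial amplitudes track different quantities (signed RPE vs. unsigned salience), can be illustrated on simulated data. This is a toy simulation with invented waveform shapes and latencies, not the study's EEG data or its specific PCA pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.6, 121)                      # 0-600 ms epoch, 5 ms steps

# Two hypothetical temporal components: an "RPE" deflection near 330 ms and a
# "salience" deflection near 250 ms (illustrative shapes, not real ERP data).
rpe_shape = np.exp(-((t - 0.33) / 0.03) ** 2)
sal_shape = np.exp(-((t - 0.25) / 0.03) ** 2)

# Simulated trials: the signed RPE drives one component, |RPE| the other,
# so the two overlap in time but vary independently across trials.
rpe = rng.uniform(-1, 1, size=200)
trials = (np.outer(rpe, rpe_shape)
          + np.outer(np.abs(rpe), sal_shape)
          + 0.05 * rng.standard_normal((200, t.size)))

# PCA via SVD of the mean-centred trial matrix separates the overlapping
# components by their distinct trial-to-trial covariance.
X = trials - trials.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
peak = t[np.argmax(np.abs(Vt[0]))]   # peak latency of the leading component
print(peak)
```

Because the signed-RPE amplitude has the larger trial-to-trial variance here, the leading principal component recovers the 330 ms waveform even though the two deflections overlap, which is the separation the study relies on.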

6. Statistical model and error analysis of a proposed audio fingerprinting algorithm

McCarthy, E. P.; Balado, F.; Silvestre, G. C. M.; Hurley, N. J.

2006-01-01

In this paper we present a statistical analysis of a particular audio fingerprinting method proposed by Haitsma et al. [1]. Due to the excellent robustness and synchronisation properties of this particular fingerprinting method, we would like to examine its performance for varying values of the parameters involved in the computation and ascertain its capabilities. For this reason, we pursue a statistical model of the fingerprint (also known as a hash, message digest or label). Initially we follow the work of a previous attempt made by Doets and Lagendijk [2-4] to obtain such a statistical model. By reformulating the representation of the fingerprint as a quadratic form, we present a model in which the parameters derived by Doets and Lagendijk may be obtained more easily. Furthermore, our model allows further insight into certain aspects of the behaviour of the fingerprinting algorithm not previously examined. Using our model, we then analyse the probability of error (Pe) of the hash. We identify two particular error scenarios and obtain an expression for the probability of error in each case. We present three methods of varying accuracy to approximate Pe following Gaussian noise addition to the signal of interest. We then analyse the probability of error following desynchronisation of the signal at the input of the hashing system and provide an approximation to Pe for different parameters of the algorithm under varying degrees of desynchronisation.
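
The fingerprint being modelled derives hash bits from the sign of band-energy differences along the time and frequency axes, and the Gaussian-noise error probability can be estimated empirically by Monte Carlo. The sketch below is a deliberately crude toy version of such a scheme (single FFT bins as "bands", white-noise input, arbitrary frame sizes), not the actual algorithm of Haitsma et al. or the paper's analytical approximations:

```python
import numpy as np

rng = np.random.default_rng(1)

def fingerprint(x, n_frames=32, n_bands=33):
    # Toy sub-band hash: split the signal into frames, compute band energies
    # from an FFT, and take the sign of the energy difference along both the
    # frequency and time axes.
    frames = x.reshape(n_frames, -1)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    bands = spec[:, 1:n_bands + 2]          # crude "bands": one bin each
    d = np.diff(bands, axis=1)              # frequency-axis difference
    bits = (d[1:] - d[:-1]) > 0             # time-axis difference, then sign
    return bits

# Monte Carlo estimate of the hash bit error rate after Gaussian noise addition.
n_frames, frame_len = 32, 256
clean = rng.standard_normal(n_frames * frame_len)
ref = fingerprint(clean)
trials = 50
ber = np.mean([
    np.mean(fingerprint(clean + 0.1 * rng.standard_normal(clean.size)) != ref)
    for _ in range(trials)])
print(ber)
```

An empirical bit error rate like `ber` is the quantity that the paper's statistical model aims to predict analytically, as a function of the algorithm's parameters and the noise level.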

7. Analysis of Oblique Wedges Using Analog and Numerical Models

Haq, S. S.; Koster, K.; Martin, R. S.; Flesch, L. M.

2010-12-01

Oblique plate motion is understood to be a primary factor in determining the style and location of deformation at many convergent margins. These margins are frequently characterized by a dominant strike-slip fault parallel to the margin, which accommodates margin-parallel motion and shear and is adjacent to partitioned and near margin-normal thrusting. We have performed a series of analog experiments in which we have simulated oblique wedges with frictional and layered (friction over viscous) rheologies. Using detailed analysis of topography and strain from these analog models, we have compared them to geometrically similar 2D and 3D numerical models. While our purely frictional analog wedges are characterized by numerous discrete thrust faults in the pro-wedge and a zone of shear between the pro-wedge and the retro-wedge, our layered wedges have a dominant shear zone that is long-lived. In all models the highest rate of contractional deformation is at the thrust front, while the highest rate of shear is isolated in a relatively narrow zone at the back of the pro-wedge. Because the layered analog wedge is better able to isolate shear behind the pro-wedge, it can better partition strain into dip-slip thrusting normal to the margin. Our numerical simulations support the assertion that a relatively small amount of extensional stress is needed to play a significant role in the structural evolution of convergent systems. However, the manner in which this stress is localized on discrete structures, and in particular how the style of strain (extension or contraction) will evolve, is a strong function of rheology and its strength at depth for a given initial geometry.

8. Stochastic algorithms for the analysis of numerical flame simulations

SciTech Connect

Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

2001-12-14

Recent progress in simulation methodologies and new, high-performance parallel architectures have made it possible to perform detailed simulations of multidimensional combustion phenomena using comprehensive kinetics mechanisms. However, as simulation complexity increases, it becomes increasingly difficult to extract detailed quantitative information about the flame from the numerical solution, particularly regarding the details of chemical processes. In this paper we present a new diagnostic tool for the analysis of numerical simulations of combustion phenomena. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint in which we follow the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an ''atom'' is part of some molecule that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one species to another, with the subsequent transport given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion as a suitable random-walk process. Within this probabilistic framework, reactions can be viewed as a Markov process transforming molecule to molecule with given probabilities. In this paper, we discuss the numerical issues in more detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. We also illustrate how the method can be applied to studying the role of cyanochemistry on NOx production in a diffusion flame.
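
The stochastic particle formulation described here, deterministic advection, random-walk diffusion, and reactions as Markov jumps between host species, can be sketched in one dimension. All rates and diffusivities below are invented for illustration; the real diagnostic operates on a computed flame solution:

```python
import numpy as np

rng = np.random.default_rng(2)

# One "atom" trajectory: deterministic advection plus a random-walk model of
# diffusion, with reactions as Markov jumps between two host species that
# have different diffusivities. All numbers are illustrative.
u = 1.0                     # advection velocity (m/s)
D = {0: 1e-3, 1: 4e-3}      # diffusivity of each host species (m^2/s)
k_switch = 5.0              # reaction rate for hopping between species (1/s)
dt, n_steps = 1e-3, 1000

x, species = 0.0, 0
path = []
for _ in range(n_steps):
    # advection (deterministic) + diffusion (random walk for current species)
    x += u * dt + np.sqrt(2.0 * D[species] * dt) * rng.standard_normal()
    # Markov reaction: switch host species with probability k*dt per step
    if rng.random() < k_switch * dt:
        species = 1 - species
    path.append(x)

print(path[-1])
```

Over the 1 s simulated here the atom drifts about u*T = 1 unit downstream, with a spread set by the diffusivity of whichever species currently hosts it; an ensemble of such trajectories approximates the continuum advection-diffusion-reaction solution.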

9. Stochastic algorithms for the analysis of numerical flame simulations

SciTech Connect

Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

2004-04-26

Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for the analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an ''atom'' is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NOx production for a steady diffusion flame.

10. Numerical analysis of impact-damaged sandwich composites

Hwang, Youngkeun

Sandwich structures are used in a wide variety of structural applications due to their relative advantages over other conventional structural materials in terms of improved stability, weight savings, and ease of manufacture and repair. Foreign object impact damage in sandwich composites can result in localized damage to the facings, core, and core-facing interface. Such damage may result in drastic reductions in composite strength, elastic moduli, and durability and damage tolerance characteristics. In this study, physically-motivated numerical models have been developed for predicting the residual strength of impact-damaged sandwich composites comprised of woven-fabric graphite-epoxy facesheets and Nomex honeycomb cores subjected to compression-after-impact loading. Results from non-destructive inspection and destructive sectioning of damaged sandwich panels were used to establish initial conditions for damage (residual facesheet indentation, core crush dimension, etc.) in the numerical analysis. Honeycomb core crush test results were used to establish the nonlinear constitutive behavior for the Nomex core. The influence of initial facesheet property degradation and progressive loss of facesheet structural integrity on the residual strength of impact-damaged sandwich panels was examined. The influence of damage of various types and sizes, specimen geometry, support boundary conditions, and variable material properties on the estimated residual strength is discussed. Facesheet strains from material and geometric nonlinear finite element analyses correlated relatively well with experimentally determined values. Moreover, numerical predictions of residual strength are consistent with experimental observations. Using a methodology similar to that presented in this work, it may be possible to develop robust residual strength estimates for complex sandwich composite structural components with varying levels of in-service damage. Such studies may facilitate sandwich

11. Numerical analysis of the V-Y shaped advancement flap.

PubMed

Remache, D; Chambert, J; Pauchot, J; Jacquet, E

2015-10-01

The V-Y advancement flap is a common technique for the closure of skin defects. A triangular flap is incised adjacent to a skin defect of rectangular shape. As the flap is advanced to close the initial defect, two smaller parallelogram-shaped defects are formed in mirror symmetry. The height of these defects depends on the apex angle of the flap, and the closure forces are related to that height. Andrades et al. (2005) performed a geometrical analysis of the V-Y flap technique in order to reach a compromise between flap size and defect width. However, the geometrical approach does not consider the mechanical properties of the skin. The present analysis, based on the finite element method, is proposed as a complement to the geometrical one and aims to highlight the major role of skin elasticity in a full analysis of the V-Y advancement flap. Furthermore, the study shows that closure at the flap apex appears to be the mechanically most critical step. Thus, different strategies of defect closure at the flap apex, stemming from surgeons' know-how, were tested by numerical simulations. PMID:26342442

12. Secondary Data Analysis of Large Data Sets in Urology: Successes and Errors to Avoid

PubMed Central

Schlomer, Bruce J.; Copp, Hillary L.

2014-01-01

Purpose Secondary data analysis is the use of data collected for research by someone other than the investigator. In the last several years there has been a dramatic increase in the number of these studies being published in urological journals and presented at urological meetings, especially involving secondary data analysis of large administrative data sets. Along with this expansion, skepticism for secondary data analysis studies has increased for many urologists. Materials and Methods In this narrative review we discuss the types of large data sets that are commonly used for secondary data analysis in urology, and discuss the advantages and disadvantages of secondary data analysis. A literature search was performed to identify urological secondary data analysis studies published since 2008 using commonly used large data sets, and examples of high quality studies published in high impact journals are given. We outline an approach for performing a successful hypothesis or goal driven secondary data analysis study and highlight common errors to avoid. Results More than 350 secondary data analysis studies using large data sets have been published on urological topics since 2008 with likely many more studies presented at meetings but never published. Nonhypothesis or goal driven studies have likely constituted some of these studies and have probably contributed to the increased skepticism of this type of research. However, many high quality, hypothesis driven studies addressing research questions that would have been difficult to conduct with other methods have been performed in the last few years. Conclusions Secondary data analysis is a powerful tool that can address questions which could not be adequately studied by another method. Knowledge of the limitations of secondary data analysis and of the data sets used is critical for a successful study. There are also important errors to avoid when planning and performing a secondary data analysis study. Investigators

13. Numerical model of solar dynamic radiator for parametric analysis

NASA Technical Reports Server (NTRS)

Rhatigan, Jennifer L.

1989-01-01

Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and results of the parametric studies performed are presented.

SciTech Connect

Sterndorff, M.J.; O'Brien, P.

1995-12-31

ROLF (Retrievable Offshore Loading Facility) has been proposed as an alternative offshore oil export tanker loading system for the North Sea. The system consists of a flexible riser ascending from the seabed in a lazy wave configuration to the bow of a dynamically positioned tanker. In order to supplement and support the numerical analyses performed to design the system, an extensive model test program was carried out in a 3D offshore basin at scale 1:50. A model riser with properties equivalent to those of the oil-filled prototype riser installed in seawater was tested in several combinations of waves and current. During the tests, the forces at the bow of the tanker and at the pipeline end manifold were measured together with the motions of the tanker and the riser. The riser motions were measured by means of a video-based 3D motion monitoring system. Of special importance was accurate determination of the minimum bending radius of the riser, which was derived from the measured riser motions. The results of the model tests were compared to numerical analyses by an MCS proprietary riser analysis program.

15. Numerical Analysis of Film Cooling at High Blowing Ratio

NASA Technical Reports Server (NTRS)

El-Gabry, Lamyaa; Heidmann, James; Ameri, Ali

2009-01-01

Computational Fluid Dynamics is used in the analysis of a film cooling jet in crossflow. Predictions of film effectiveness are compared with experimental results for a circular jet at blowing ratios ranging from 0.5 to 2.0. Film effectiveness is a surface quantity which alone is insufficient in understanding the source and finding a remedy for shortcomings of the numerical model. Therefore, in addition, comparisons are made to flow field measurements of temperature along the jet centerline. These comparisons show that the CFD model is accurately predicting the extent and trajectory of the film cooling jet; however, there is a lack of agreement in the near-wall region downstream of the film hole. The effects of main stream turbulence conditions, boundary layer thickness, turbulence modeling, and numerical artificial dissipation are evaluated and found to have an insufficient impact in the wake region of separated films (i.e. cannot account for the discrepancy between measured and predicted centerline fluid temperatures). Analyses of low and moderate blowing ratio cases are carried out and results are in good agreement with data.

16. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

2012-07-01

The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, including the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence for distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to clearly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven effective in determining the difference between original and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of the S-boxes.
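As a minimal sketch of the root-mean-square-error comparison described above, the snippet below computes the RMSE between a toy image and its byte-substituted version. The random permutation standing in for an S-box and the image itself are illustrative placeholders, not the S-boxes or test images of the record:

```python
import numpy as np

def rmse(original, processed):
    """Root mean square error between two equally sized images."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(processed, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Toy 8-bit "image" and a random byte permutation as a stand-in S-box
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
sbox = rng.permutation(256).astype(np.uint8)   # hypothetical S-box
enc = sbox[img]                                 # substitute every pixel value

print(rmse(img, enc))   # larger RMSE suggests stronger diffusion of pixel values
```

In this spirit, each candidate S-box is applied to the same image and the resulting RMSE values are compared side by side.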

17. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

NASA Technical Reports Server (NTRS)

Elander, Valjean; Koshak, William; Phanord, Dieudonne

2004-01-01

The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future) ZEUS sites to simulate arrival time data between each source and each ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
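The timing-error ingredient of this Monte Carlo setup is easy to sketch: draw arrival-time perturbations from the stated normal distribution (mean 0 s, standard deviation 20 µs, 100 trials per location) and inspect their spread. The true arrival-time difference and the seed below are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

true_arrival_diff = 0.0    # seconds, hypothetical source/receiver-pair value
sigma = 20e-6              # 20 microsecond timing error, as in the record
n_trials = 100             # sources simulated per location in the record

# Draw noisy arrival-time differences for one source location
noisy = true_arrival_diff + rng.normal(0.0, sigma, size=n_trials)
print(noisy.std())         # sample spread, close to 20 microseconds
```

Feeding such noisy arrival-time differences into the retrieval algorithm and comparing retrieved with exact locations yields the error maps described above.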

18. Numerical analysis of ultrasound propagation and reflection intensity for biological acoustic impedance microscope.

PubMed

Gunawan, Agus Indra; Hozumi, Naohiro; Yoshida, Sachiko; Saijo, Yoshifumi; Kobayashi, Kazuto; Yamamoto, Seiji

2015-08-01

This paper proposes a new method for microscopic acoustic imaging that utilizes the cross sectional acoustic impedance of biological soft tissues. In the system, a focused acoustic beam with a wide band frequency of 30-100 MHz is transmitted across a plastic substrate on the rear side of which a soft tissue object is placed. By scanning the focal point along the surface, a 2-D reflection intensity profile is obtained. In the paper, interpretation of the signal intensity into a characteristic acoustic impedance is discussed. Because the acoustic beam is strongly focused, interpretation assuming vertical incidence may lead to significant error. To determine an accurate calibration curve, a numerical sound field analysis was performed. In these calculations, the reflection intensity from a target with an assumed acoustic impedance was compared with that from water, which was used as a reference material. The calibration curve was determined by changing the assumed acoustic impedance of the target material. The calibration curve was verified experimentally using saline solution, of which the acoustic impedance was known, as the target material. Finally, the cerebellar tissue of a rat was observed to create an acoustic impedance micro profile. In the paper, details of the numerical analysis and verification of the observation results will be described.

19. A Cartesian parametrization for the numerical analysis of material instability

DOE PAGESBeta

Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; Ostien, Jakob T.; Lai, Zhengshou

2016-02-25

We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie in a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations; these comparisons show that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
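A sketch of the ellipticity test, assuming an isotropic stiffness tensor (an illustrative choice, not the materials of the paper): directions are drawn from the faces of the cube [-1, 1]^3 in lieu of unit normals, and stability is checked via the determinant of the acoustic tensor A_jk = n_i C_ijkl n_l:

```python
import numpy as np

def acoustic_tensor(C, n):
    """A_jk = n_i C_ijkl n_l for a 3x3x3x3 stiffness tensor C."""
    return np.einsum('i,ijkl,l->jk', n, C, n)

def cube_directions(m=11):
    """Vectors on the faces of the cube [-1, 1]^3 (Cartesian
    parametrization, used here in lieu of unit normals)."""
    t = np.linspace(-1.0, 1.0, m)
    u, v = np.meshgrid(t, t)
    u, v = u.ravel(), v.ravel()
    ones = np.ones_like(u)
    faces = []
    for sign in (+1.0, -1.0):
        faces += [np.column_stack([sign * ones, u, v]),
                  np.column_stack([u, sign * ones, v]),
                  np.column_stack([u, v, sign * ones])]
    return np.vstack(faces)

# Isotropic stiffness: C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)
lam, mu = 1.0, 0.5
d = np.eye(3)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# Ellipticity holds if det A(n) > 0 for every direction tested
dets = [np.linalg.det(acoustic_tensor(C, n)) for n in cube_directions()]
print(min(dets) > 0.0)   # True for a stable isotropic material
```

For an unstable material, some direction would give a vanishing or negative determinant, which is the loss-of-ellipticity signal the paper scans for.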

20. Preliminary Numerical and Experimental Analysis of the Spallation Phenomenon

NASA Technical Reports Server (NTRS)

Martin, Alexandre; Bailey, Sean C. C.; Panerai, Francesco; Davuluri, Raghava S. C.; Vazsonyi, Alexander R.; Zhang, Huaibao; Lippay, Zachary S.; Mansour, Nagi N.; Inman, Jennifer A.; Bathel, Brett F.; Splinter, Scott C.; Danehy, Paul M.

2015-01-01

The spallation phenomenon was studied through numerical analysis using a coupled Lagrangian particle tracking code and a hypersonic aerothermodynamics computational fluid dynamics solver. The results show that carbon emission from spalled particles results in a significant modification of the gas composition of the post-shock layer. Preliminary results from a test campaign at the NASA Langley HYMETS facility are presented. Using automated image processing of high-speed images, two-dimensional velocity vectors of the spalled particles were calculated. In a 30 second test at 100 W/cm2 of cold-wall heat flux, more than 1300 particles were detected, with an average velocity of 102 m/s and a most frequently observed velocity of 60 m/s.

1. Stability analysis and numerical simulation of simplified solid rocket motors

Boyer, G.; Casalis, G.; Estivalèzes, J.-L.

2013-08-01

This paper investigates the Parietal Vortex Shedding (PVS) instability that significantly influences the pressure oscillations of long, segmented solid rocket motors. The eigenmodes resulting from the stability analysis of a simplified configuration, namely a cylindrical duct with sidewall injection, are presented. They are computed taking into account the presence of a wall injection defect, which is shown to induce hydrodynamic instabilities at discrete frequencies. These instabilities exhibit eigenfunctions in good agreement with the measured PVS vortical structures. They are successfully compared, in terms of temporal evolution and frequencies, to the unsteady hydrodynamic fluctuations computed by numerical simulations. In addition, this study has shown that the hydrodynamic instabilities associated with the PVS are the driving force of the flow dynamics, since they are responsible for the emergence of pressure waves propagating at the same frequency.

2. Numerical analysis of the dynamics of distributed vortex configurations

Govorukhin, V. N.

2016-08-01

A numerical algorithm is proposed for analyzing the dynamics of distributed plane vortex configurations in an inviscid incompressible fluid. At every time step, the algorithm involves the computation of unsteady vortex flows, an analysis of the configuration structure with the help of heuristic criteria, the visualization of the distribution of marked particles and vorticity, the construction of streamlines of fluid particles, and the computation of the field of local Lyapunov exponents. The inviscid incompressible fluid dynamic equations are solved by applying a meshless vortex method. The algorithm is used to investigate the interaction of two and three identical distributed vortices with various initial positions in the flow region with and without the Coriolis force.
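The underlying vortex dynamics can be illustrated with the classical point-vortex idealization (a simple stand-in for the paper's distributed, meshless method): two identical vortices co-rotate about their centroid, and a Runge-Kutta integration should conserve their separation:

```python
import numpy as np

def rhs(z, gamma):
    """Complex velocity dz_k/dt induced on each point vortex by the others."""
    v = np.zeros_like(z)
    for k in range(len(z)):
        s = sum(gamma[j] / (z[k] - z[j]) for j in range(len(z)) if j != k)
        v[k] = np.conj(s / (2j * np.pi))
    return v

def rk4_step(z, gamma, dt):
    """One classical fourth-order Runge-Kutta step for the vortex ODEs."""
    k1 = rhs(z, gamma)
    k2 = rhs(z + 0.5 * dt * k1, gamma)
    k3 = rhs(z + 0.5 * dt * k2, gamma)
    k4 = rhs(z + dt * k3, gamma)
    return z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Two identical vortices: they co-rotate at constant separation
z = np.array([1.0 + 0j, -1.0 + 0j])
gamma = np.array([2 * np.pi, 2 * np.pi])
for _ in range(1000):                  # integrate to t = 10
    z = rk4_step(z, gamma, 0.01)
print(abs(z[0] - z[1]))                # stays close to the initial value 2
```

Conservation of the separation and of the centroid is a convenient sanity check for any such integrator before moving to distributed vorticity.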

3. Asymptotic analysis of numerical wave propagation in finite difference equations

NASA Technical Reports Server (NTRS)

Giles, M.; Thompkins, W. T., Jr.

1983-01-01

An asymptotic technique is developed for analyzing the propagation and dissipation of wave-like solutions to finite difference equations. It is shown that for each fixed complex frequency there are usually several wave solutions with different wavenumbers and the slowly varying amplitude of each satisfies an asymptotic amplitude equation which includes the effects of smoothly varying coefficients in the finite difference equations. The local group velocity appears in this equation as the velocity of convection of the amplitude. Asymptotic boundary conditions coupling the amplitudes of the different wave solutions are also derived. A wavepacket theory is developed which predicts the motion, and interaction at boundaries, of wavepackets, wave-like disturbances of finite length. Comparison with numerical experiments demonstrates the success and limitations of the theory. Finally an asymptotic global stability analysis is developed.
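The kind of result such an analysis yields can be illustrated with the textbook semi-discrete central-difference advection scheme (an assumed example, not the equations of the report): the numerical dispersion relation gives a wavenumber-dependent group velocity that convects wavepacket amplitudes:

```python
import numpy as np

# Semi-discrete central-difference advection u_t + c u_x = 0:
# substituting u_j = exp(i(k x_j - w t)) gives w(k) = c*sin(k h)/h,
# so the numerical group velocity is dw/dk = c*cos(k h).
c, h = 1.0, 0.1
k = np.linspace(0.0, np.pi / h, 200)   # resolvable wavenumbers
omega = c * np.sin(k * h) / h
group_velocity = c * np.cos(k * h)

# Well-resolved waves (kh -> 0) travel at speed ~c; the shortest waves
# (kh -> pi) travel backwards at speed -c, a pure dispersion artifact.
print(group_velocity[0], group_velocity[-1])
```

This spurious backward-travelling mode is exactly the kind of extra wave solution, with its own group velocity, that the asymptotic amplitude equations track.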

4. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

Mehrubeoglu, Mehrube; McLauchlan, Lifford

2010-08-01

In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false-positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
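Pixel duplication itself is nearly a one-liner; the hypothetical helper below enlarges an image by an integer factor with numpy (nearest-neighbour duplication, no interpolation):

```python
import numpy as np

def duplicate_pixels(img, factor=2):
    """Enlarge an image by integer pixel duplication (nearest-neighbour),
    a cheap alternative to interpolation."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)
big = duplicate_pixels(img, 2)
print(big.shape)   # (4, 4): each pixel becomes a 2x2 block
```

A smoothing filter applied after duplication blurs the sharp block edges this creates, which is the interaction the error analysis above examines.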

5. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

PubMed

Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

2014-06-01

Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of the time delay bias error and the random error, based on finding, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
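The Shannon information rate underlying these spectral estimates can be sketched as the integral of log2(1 + SNR(f)) over frequency; the SNR spectrum below is an assumed low-pass shape, not measured photoreceptor data:

```python
import numpy as np

def shannon_rate(freqs, snr):
    """Shannon information rate R = integral of log2(1 + SNR(f)) df, in bits/s."""
    return float(np.trapz(np.log2(1.0 + snr), freqs))

# Hypothetical photoreceptor-like SNR spectrum: band-limited, low-pass
freqs = np.linspace(0.0, 100.0, 1001)        # Hz
snr = 10.0 / (1.0 + (freqs / 25.0) ** 2)     # assumed shape, not measured data
print(shannon_rate(freqs, snr))              # bits per second
```

Both error sources enter through the estimated SNR(f): random error inflates it, while delay bias deflates it, which is why the two push the rate estimate in opposite directions.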

Aidi, Bilel; Case, Scott W.

2015-12-01

Experimental quasi-static tests were performed on center notched carbon fiber reinforced polymer (CFRP) composites having different stacking sequences made of G40-600/5245C prepreg. The three-dimensional Digital Image Correlation (DIC) technique was used during quasi-static tests conducted on quasi-isotropic notched samples to obtain the distribution of strains as a function of applied stress. A finite element model was built within Abaqus to predict the notched strength and the strain profiles for comparison with measured results. A user-material subroutine using the multi-continuum theory (MCT) as a failure initiation criterion and an energy-based damage evolution law as implemented by Autodesk Simulation Composite Analysis (ASCA) was used to conduct a quantitative comparison of strain components predicted by the analysis and obtained in the experiments. Good agreement between experimental data and numerical analyses results are observed. Modal analysis was carried out to investigate the effect of static damage on the dominant frequencies of the notched structure using the resulted degraded material elements. The first in-plane mode was found to be a good candidate for tracking the level of damage.

7. Geomechanical Analysis with Rigorous Error Estimates for a Double-Porosity Reservoir Model

SciTech Connect

Berryman, J G

2005-04-11

A model of random polycrystals of porous laminates is introduced to provide a means for studying geomechanical properties of double-porosity reservoirs. Calculations on the resulting earth reservoir model can proceed semi-analytically for studies of either the poroelastic or transport coefficients. Rigorous bounds of the Hashin-Shtrikman type provide estimates of overall bulk and shear moduli, and thereby also provide rigorous error estimates for geomechanical constants obtained from up-scaling based on a self-consistent effective medium method. The influence of hidden (or presumed unknown) microstructure on the final results can then be evaluated quantitatively. Detailed descriptions of the use of the model and some numerical examples showing typical results for the double-porosity poroelastic coefficients of a heterogeneous reservoir are presented.
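The Hashin-Shtrikman-type bounds mentioned above can be sketched for the simplest two-phase case of the bulk modulus (Walpole form); the moduli and porosity below are assumed illustrative values, not the paper's reservoir model:

```python
def hs_bulk_bound(K1, G1, K2, f1):
    """Hashin-Shtrikman bound on the effective bulk modulus of a two-phase
    composite; with (K1, G1) the stiffer phase this is the upper bound,
    with the softer phase it is the lower bound."""
    f2 = 1.0 - f1
    return K1 + f2 / (1.0 / (K2 - K1) + f1 / (K1 + 4.0 * G1 / 3.0))

# Hypothetical quartz-like solid (phase 1) with 20% fluid-filled pores (phase 2)
K_solid, G_solid = 37.0, 44.0    # GPa, assumed values
K_fluid, G_fluid = 2.2, 0.0      # GPa
upper = hs_bulk_bound(K_solid, G_solid, K_fluid, 0.8)
lower = hs_bulk_bound(K_fluid, G_fluid, K_solid, 0.2)
print(lower, upper)   # any self-consistent estimate must fall in between
```

The gap between the bounds quantifies the influence of the hidden microstructure: a self-consistent estimate outside this interval would signal an inconsistency.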

8. Three-parameter error analysis method based on rotating coordinates in rotating birefringent polarizer system

SciTech Connect

Cao, Junjie; Jia, Hongzhi

2015-11-15

We propose error analysis using a rotating coordinate system with three parameters of linearly polarized light—incidence angle, azimuth angle on the front surface, and angle between the incidence and vibration planes—and demonstrate the method on a rotating birefringent prism system. The transmittance and angles are calculated plane-by-plane using a birefringence ellipsoid model and the final transmitted intensity equation is deduced. The effects of oblique incidence, light interference, beam convergence, and misalignment of the rotation and prism axes are discussed. We simulate the entire error model using MATLAB and conduct experiments based on a built polarimeter. The simulation and experimental results are consistent and demonstrate the rationality and validity of this method.

9. Modelling, calibration, and error analysis of seven-hole pressure probes

NASA Technical Reports Server (NTRS)

Zillac, G. G.

1993-01-01

This report describes the calibration of a nonnulling, conical, seven-hole pressure probe over a large range of flow onset angles. The calibration procedure is based on the use of differential pressures to determine the three components of velocity. The method allows determination of the flow angle and velocity magnitude to within an average error of 1.0 deg and 1.0 percent, respectively. Greater accuracy can be achieved by using high-quality pressure transducers. Also included is an examination of the factors which limit the use of the probe, a description of the measurement chain, an error analysis, and a typical experimental result. In addition, a new general analytical model of pressure probe behavior is described, and the validity of the model is demonstrated by comparing it with experimentally measured calibration data for a three-hole yaw meter and a seven-hole probe.

10. Error analysis of the quadratic nodal expansion method in slab geometry

SciTech Connect

Penland, R.C.; Turinsky, P.J.; Azmy, Y.Y.

1994-10-01

As part of an effort to develop an adaptive mesh refinement strategy for use in state-of-the-art nodal diffusion codes, the authors derive error bounds on the solution variables of the quadratic Nodal Expansion Method (NEM) in slab geometry. Closure of the system is obtained through flux discontinuity relationships and boundary conditions. In order to verify the analysis presented, the authors compare the quadratic NEM to the analytic solution of a test problem. The test problem for this investigation is a one-dimensional slab [0, 20 cm] with L² = 6.495 cm² and D = 0.1429 cm. The slab has a unit neutron source distributed uniformly throughout and zero flux boundary conditions. The analytic solution to this problem is used to compute the node-average fluxes over a variety of meshes, and these are used to compute the NEM maximum error on each mesh.
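The analytic reference solution for this test problem is straightforward to reproduce: with a unit uniform source and zero-flux boundaries, the flux is phi(x) = (L²/D)[1 − cosh((x − a/2)/L)/cosh(a/(2L))], and node-average fluxes follow by exact integration. The four-node mesh below is an arbitrary example, not necessarily one of the meshes used in the record:

```python
import numpy as np

# 1-D slab [0, 20 cm], unit uniform source, zero-flux boundary conditions
L2, D, a = 6.495, 0.1429, 20.0        # cm^2, cm, cm (values from the record)
L = np.sqrt(L2)
phi_p = L2 / D                        # particular solution S / Sigma_a, S = 1

def flux(x):
    """Analytic scalar flux phi(x)."""
    return phi_p * (1.0 - np.cosh((x - a / 2.0) / L) / np.cosh(a / (2.0 * L)))

def node_average(x0, x1):
    """Node-average flux from exact integration of the analytic solution."""
    mid = a / 2.0
    integral = phi_p * ((x1 - x0)
                        - L * (np.sinh((x1 - mid) / L) - np.sinh((x0 - mid) / L))
                        / np.cosh(mid / L))
    return integral / (x1 - x0)

# Reference node-average fluxes on an example four-node mesh
edges = np.linspace(0.0, a, 5)
avgs = [node_average(edges[i], edges[i + 1]) for i in range(4)]
print(avgs)
```

Comparing NEM node-average fluxes against these exact values on successively refined meshes gives the maximum-error figures the analysis bounds.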

11. Three-parameter error analysis method based on rotating coordinates in rotating birefringent polarizer system.

PubMed

Cao, Junjie; Jia, Hongzhi

2015-11-01

We propose error analysis using a rotating coordinate system with three parameters of linearly polarized light--incidence angle, azimuth angle on the front surface, and angle between the incidence and vibration planes--and demonstrate the method on a rotating birefringent prism system. The transmittance and angles are calculated plane-by-plane using a birefringence ellipsoid model and the final transmitted intensity equation is deduced. The effects of oblique incidence, light interference, beam convergence, and misalignment of the rotation and prism axes are discussed. We simulate the entire error model using MATLAB and conduct experiments based on a built polarimeter. The simulation and experimental results are consistent and demonstrate the rationality and validity of this method. PMID:26628116

12. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

SciTech Connect

Ginting, Victor

2014-03-15

It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.

13. Error Analysis of the IGS repro2 Station Position Time Series

Rebischung, P.; Ray, J.; Benoist, C.; Metivier, L.; Altamimi, Z.

2015-12-01

Eight Analysis Centers (ACs) of the International GNSS Service (IGS) have completed a second reanalysis campaign (repro2) of the GNSS data collected by the IGS global tracking network back to 1994, using the latest available models and methodology. The AC repro2 contributions include in particular daily terrestrial frame solutions, for the first time with sub-weekly resolution for the full IGS history. The AC solutions, comprising positions for 1848 stations with daily polar motion coordinates, were combined to form the IGS contribution to the next release of the International Terrestrial Reference Frame (ITRF2014). Inter-AC position consistency is excellent, about 1.5 mm horizontal and 4 mm vertical. The resulting daily combined frames were then stacked into a long-term cumulative frame assuming generally linear motions, which constitutes the GNSS input to the ITRF2014 inter-technique combination. A special challenge involved identifying the many position discontinuities, averaging about 1.8 per station. A stacked periodogram of the station position residual time series from this long-term solution reveals a number of unexpected spectral lines (harmonics of the GPS draconitic year, fortnightly tidal lines) on top of a white+flicker background noise and strong seasonal variations. In this study, we will present results from station- and AC-specific analyses of the noise and periodic errors present in the IGS repro2 station position time series. So as to better understand their sources, and in view of developing a spatio-temporal error model, we will focus in particular on the spatial distribution of the noise characteristics and of the periodic errors. By computing AC-specific long-term frames and analyzing the respective residual time series, we will additionally study how the characteristics of the noise and of the periodic errors depend on the adopted analysis strategy and reduction software.
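A stacked periodogram of residual series like these can be sketched with a plain FFT periodogram; the synthetic series below mixes an annual term, a weak draconitic-like line (~351.4 d), and white noise, with entirely assumed amplitudes:

```python
import numpy as np

def periodogram(x, dt=1.0):
    """One-sided periodogram of a mean-removed series (power vs frequency)."""
    x = np.asarray(x, float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs, spec

# Synthetic daily position residuals: seasonal + draconitic-like line + noise
rng = np.random.default_rng(1)
n, dt = 3650, 1.0                                # ten years of daily solutions
t = np.arange(n) * dt
x = (2.0 * np.sin(2 * np.pi * t / 365.25)        # seasonal term (assumed 2 mm)
     + 0.5 * np.sin(2 * np.pi * t / 351.4)       # GPS draconitic year, ~351.4 d
     + rng.normal(0.0, 1.0, n))                  # white noise floor

# Stacking such periodograms over many stations averages down the noise
# floor and makes weak common-mode lines stand out.
freqs, spec = periodogram(x, dt)
peak_freq = freqs[np.argmax(spec[1:]) + 1]
print(1.0 / peak_freq)   # dominant period in days, near 365.25
```

Note that separating the annual and draconitic lines requires a record long enough that their frequency spacing exceeds the periodogram resolution 1/(n·dt).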

14. Temporal scaling analysis of irradiance estimated from daily satellite data and numerical modelling

Vindel, Jose M.; Navarro, Ana A.; Valenzuela, Rita X.; Ramírez, Lourdes

2016-11-01

The temporal variability of global irradiance estimated from daily satellite data and numerical models has been compared for different spans of time. According to the time scale considered, a different behaviour can be expected for each climate. Indeed, for all climates, persistence at small scales decreases as the scale increases, but the Mediterranean climate, and its continental variety, shows higher persistence than the oceanic climate. The probabilities of maintaining the values of irradiance after a certain period of time have been used as a first approximation to analyse the quality of each source, according to the climate. In addition, probability distributions corresponding to variations of clearness indices measured at several stations located in different climate zones have been compared with those obtained from satellite and modelling estimations. For this work, daily radiation data from the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts and from the Satellite Application Facilities on climate monitoring have been used for mainland Spain. According to the results, the time series of irradiance is estimated more accurately from satellite data, independent of the climate considered. In fact, the coefficients of determination corresponding to the locations studied are always above 0.92 in the case of satellite data, while this coefficient decreases to 0.69 for some cases of the numerical model. This conclusion is more evident in oceanic climates, where the largest errors are observed. Indeed, in this case, the RRMSE derived from the CM-SAF estimations is 20.93%, while for the numerical model it is 48.33%. Analysis of the probabilities corresponding to variations in the clearness indices also shows better behaviour of the satellite-derived estimates for the oceanic climate. For the standard Mediterranean climate, the satellite also provides better results, though the numerical model improves
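The RRMSE figure of merit quoted above (20.93% vs 48.33%) is simply the RMSE normalised by the mean observation; a minimal sketch with made-up irradiance values, not the Spanish station data:

```python
import numpy as np

def rrmse(observed, estimated):
    """Relative RMSE in percent: RMSE normalised by the mean observation."""
    observed = np.asarray(observed, float)
    estimated = np.asarray(estimated, float)
    rmse = np.sqrt(np.mean((estimated - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

# Toy daily irradiance series (kWh/m^2): ground truth vs two estimators
obs = np.array([5.0, 6.0, 4.5, 5.5, 6.2])
sat = obs + np.array([0.1, -0.2, 0.15, -0.1, 0.05])   # small satellite errors
mod = obs + np.array([0.9, -1.1, 0.8, -1.0, 0.7])     # larger model errors
print(rrmse(obs, sat), rrmse(obs, mod))               # satellite scores lower
```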

15. Some Techniques for the Objective Analysis of Humidity for Regional Scale Numerical Weather Prediction.

Rasmussen, Robert Gary

Several topics relating to the objective analysis of humidity for regional scale numerical weather prediction are investigated. These include: (1) sampling the humidity field; (2) choosing an analysis scheme; (3) choosing an analysis variable; (4) using surface data to diagnose upper-air humidity (SFC-DIAG); (5) using cloud analysis data to diagnose surface and upper-air humidities (3DNEPH-DIAG); and (6) modeling the humidity lateral autocorrelation function. Regression equations for the diagnosed humidities and several correlation models are developed and validated. Four types of data are used in a preliminary demonstration: observations (radiosonde and surface), SFC-DIAG data, 3DNEPH-DIAG data, and forecast data from the Drexel/NCAR Limited-Area and Mesoscale Prediction System (LAMPS). The major conclusions are: (1) independent samples of relative humidity can be obtained by sampling at intervals of two days and 1750 km, on the average; (2) Gandin's optimum interpolation (OI) is preferable to Cressman's successive correction and Panofsky's surface fitting schemes; (3) relative humidity (RH) is a better analysis variable than dew-point depression; (4) RH*, the square root of (1-RH), is better than RH; (5) both surface and cloud analysis data can be used to diagnose the upper-air humidity; (6) pooling dense data prior to OI analysis can improve the quality of the analysis and reduce its computational burden; (7) iteratively pooling data is economical; (8) for the types of data considered, use of more than about eight data points in an OI point analysis cannot be justified by expectations of further reducing the analysis error variance; and (9) the statistical model in OI is faulty in that an analyzed humidity can be biased too much toward the first guess.
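
As an illustrative sketch of conclusions (2) and (6)-(8), an OI point analysis computes weights from observation-observation and observation-gridpoint correlations. Everything concrete below is assumed for illustration: the Gaussian autocorrelation model, the 1750 km length scale (borrowed loosely from conclusion (1)), and the normalized observation-error variance. Rasmussen's actual correlation models are developed in the thesis itself.

```python
import math

def corr(d, length=1750.0):
    # assumed Gaussian lateral autocorrelation model; the 1750 km scale
    # echoes the independence distance in conclusion (1)
    return math.exp(-(d / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def oi_point_analysis(grid_xy, obs, first_guess, obs_err_var=0.1):
    # obs: (x, y, increment) triples, increments relative to the first guess
    n = len(obs)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    B = [[corr(dist(obs[i], obs[j])) + (obs_err_var if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [corr(dist(grid_xy, obs[i])) for i in range(n)]
    w = solve(B, rhs)
    analysis = first_guess + sum(w[i] * obs[i][2] for i in range(n))
    # normalized expected analysis-error variance: 1 - w . rhs
    return analysis, 1.0 - sum(w[i] * rhs[i] for i in range(n))
```

The expected analysis-error variance falls as observations move closer to the grid point; it is this quantity whose diminishing returns underlie conclusion (8) about using no more than about eight observations per point.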

16. Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update

NASA Technical Reports Server (NTRS)

1971-01-01

Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.

17. Theoretic and numerical analysis of diamagnetic levitation and its experimental verification

Ye, Zhitong; Duan, Zhiyong; Su, Yufeng

2015-02-01

A diamagnetic levitation system is studied in detail in this paper. From top to bottom, the system is composed of a lifting magnet, a top pyrolytic graphite sheet, a floating magnet and a bottom pyrolytic graphite sheet. The gravity of the floating magnet is balanced by the attractive force between the lifting magnet and the floating magnet, and the floating magnet is stably levitated between the top and bottom graphite sheets due to their diamagnetism. The force exerted on the floating magnet is analyzed through theoretical and numerical methods, and the equilibrium position is obtained at the same time. In total, 11 groups of magnets are studied with COMSOL, in which the accumulative error is eliminated to improve the accuracy of the finite element analysis (FEA). Corresponding experiments are carried out to verify the numerical results, and the error of the equilibrium position is less than 10%, which shows that the FEA is precise enough to simulate the diamagnetic system. The motion characteristics are studied for group 6, in which the lifting magnet is a φ3/16" × 1/8" cylinder. For the floating magnet, the horizontal force versus the eccentric displacement and the vertical force versus the vertical displacement are calculated by COMSOL. In the magnetic potential well of the lifting magnet, the floating magnet returns to the vertical central axis automatically, and the frequencies of the vertical and horizontal movements are between 4 and 5 Hz. The frequencies of the two directional movements can be tuned by the magnetic parameters of the lifting and floating magnets and the structural dimensions of the system. The method used to analyze the diamagnetic system proves effective for designing diamagnetic levitation structures. Because of the contactless levitation of the floating magnet based on diamagnetism, the system is sensitive to very small inputs. This diamagnetic levitation structure has potential in micro-actuators and sensors.
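
The force balance described above can be illustrated with a point-dipole approximation in place of the paper's COMSOL field computation; the magnet moments and mass below are hypothetical, and a real φ3/16" × 1/8" magnet would need the full field model:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (T m / A)

def dipole_attraction(z, m1, m2):
    # on-axis attractive force between coaxial point dipoles at gap z;
    # a crude stand-in for the paper's finite element analysis
    return 3.0 * MU0 * m1 * m2 / (2.0 * math.pi * z ** 4)

def equilibrium_height(m1, m2, mass, g=9.81, lo=1e-3, hi=1.0):
    # bisect for F(z) = mass * g; F decreases monotonically with z,
    # so the root is bracketed once F(lo) > weight > F(hi)
    weight = mass * g
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if dipole_attraction(mid, m1, m2) > weight:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The vertical equilibrium found this way is unstable on its own (Earnshaw's theorem); it is the diamagnetic repulsion of the two graphite sheets, absent from this sketch, that stabilizes the floating magnet.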

18. Numerical analysis of sandstone composition, provenance, and paleogeography

SciTech Connect

Smosma, R.; Bruner, K.R.; Burns, A.

1999-09-01

Cretaceous deltaic sandstones of the National Petroleum Reserve in Alaska exhibit an extreme variability in their mineral makeup. A series of numerical techniques, however, provides some order to the petrographic characteristics of these complex rocks. Ten mineral constituents occur in the sandstones, including quartz, chert, feldspar, mica, and organic matter, plus rock fragments of volcanics, carbonates, shale, phyllite, and schist. A mixing coefficient quantifies the degree of heterogeneity in each sample. Hierarchical cluster analysis then groups sandstones on the basis of similarities among all ten mineral components--in the Alaskan example, six groupings characterized mainly by the different rock fragments. Multidimensional scaling shows how the clusters relate to one another and arranges them along compositional gradients--two trends in Alaska based on varying proportions of metamorphic/volcanic and shale/carbonate rock fragments. The resulting sandstone clusters and petrographic gradients can be mapped across the study area and compared with the stratigraphic section. This study confirms the presence of three different source areas that provided diverse sediment to the Cretaceous deltas as well as the general transport directions and distances. In addition, the sand composition is shown to have changed over time, probably related to erosional unroofing in the source areas. This combination of multivariate-analysis techniques proves to be a powerful tool, revealing subtle spatial and temporal relationships among the sandstones and allowing one to enhance provenance and paleogeographic conclusions made from compositional data.

19. A hybrid neurocomputing/numerical strategy for nonlinear structural analysis

NASA Technical Reports Server (NTRS)

Szewczyk, Z. Peter; Noor, Ahmed K.

1995-01-01

A hybrid neurocomputing/numerical strategy is presented for geometrically nonlinear analysis of structures. The strategy combines model-free data processing capabilities of computational neural networks with a Pade approximants-based perturbation technique to predict partial information about the nonlinear response of structures. In the hybrid strategy, multilayer feedforward neural networks are used to extend the validity of solutions by using training samples produced by Pade approximations to the Taylor series expansion of the response function. The range of validity of the training samples is taken to be the radius of convergence of Pade approximants and is estimated by setting a tolerance on the diverging approximants. The norm of residual vector of unbalanced forces in a given element is used as a measure to assess the quality of network predictions. To further increase the accuracy and the range of network predictions, additional training data are generated by either applying linear regression to weight matrices or expanding the training data by using predicted coefficients in a Taylor series. The effectiveness of the hybrid strategy is assessed by performing large-deflection analysis of a doubly-curved composite panel with a circular cutout, and postbuckling analyses of stiffened composite panels subjected to an in-plane edge shear load. In all the problems considered, the hybrid strategy is used to predict selective information about the structural response, namely the total strain energy and the maximum displacement components only.

20. A stable and efficient numerical algorithm for unconfined aquifer analysis

SciTech Connect

Keating, Elizabeth; Zyvoloski, George

2008-01-01

The non-linearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problems as well.

1. Numerical analysis of field-scale transport of bromacil

Russo, David; Tauber-Yasur, Inbar; Laufer, Asher; Yaron, Bruno

Field-scale transport of bromacil (5-bromo-3-sec-butyl-6-methyluracil) was analyzed using two different model processes for local description of the transport. The first was the classical, one-region convection dispersion equation (CDE) model while the second was the two-region, mobile-immobile (MIM) model. The analyses were performed by means of detailed three-dimensional, numerical simulations of the flow and the transport [Russo, D., Zaidel, J. and Laufer, A., Numerical analysis of flow and transport in a three-dimensional partially saturated heterogeneous soil. Water Resour. Res., 1998, in press], employing local soil hydraulic properties parameters from field measurements and local adsorption/desorption coefficients and the first-order degradation rate coefficient from laboratory measurements. Results of the analyses suggest that for a given flow regime, mass exchange between the mobile and the immobile regions retards the bromacil degradation, considerably affects the distribution of the bromacil resident concentration, c, at relatively large travel times, slightly affects the spatial moments of the distribution of c, and increases the skewing of the bromacil breakthrough and the uncertainty in its prediction, compared with the case in which the soil contained only a single (mobile) region. Mean and standard deviation of the simulated concentration profiles at various elapsed times were compared with measurements from a field-scale transport experiment [Tauber-Yasur, I., Hadas, A., Russo, D. and Yaron, B., Leaching of terbuthylazine and bromacil through field soils. Water, Air, and Soil Pollution, 1998, in press] conducted at the Bet Dagan site. Given the limitations of the present study (e.g. the lack of detailed field data on the spatial variability of the soil chemical properties) the main conclusion of the present study is that the field-scale transport of bromacil at the Bet Dagan site is better quantified with the MIM model than the CDE model.
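
As a minimal sketch of the one-region CDE local model (not the authors' three-dimensional simulator), an explicit upwind finite-difference step for dc/dt = D d²c/dx² − v dc/dx − μc, with μ the first-order degradation rate, could look like:

```python
def cde_step(c, dx, dt, v, D, mu):
    # one explicit step of dc/dt = D d2c/dx2 - v dc/dx - mu*c
    # (upwind advection for v > 0; Dirichlet c = 0 at both ends)
    out = c[:]
    for i in range(1, len(c) - 1):
        adv = -v * (c[i] - c[i - 1]) / dx
        dif = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        out[i] = c[i] + dt * (adv + dif - mu * c[i])
    return out
```

The explicit scheme requires v·dt/dx ≤ 1 and D·dt/dx² ≤ 1/2 for stability; the MIM variant would carry a second, immobile concentration coupled through a first-order mass-exchange term, which is exactly what retards degradation and skews the breakthrough in the study above.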

2. Numerous Numerals.

ERIC Educational Resources Information Center

Henle, James M.

This pamphlet consists of 17 brief chapters, each containing a discussion of a numeration system and a set of problems on the use of that system. The numeration systems used include Egyptian fractions, ordinary continued fractions and variants of that method, and systems using positive and negative bases. The book is informal and addressed to…

3. Error Analysis System for Spacecraft Navigation Using the Global Positioning System (GPS)

NASA Technical Reports Server (NTRS)

Truong, S. H.; Hart, R. C.; Hartman, K. R.; Tomcsik, T. L.; Searl, J. E.; Bernstein, A.

1997-01-01

4. Structurally induced errors in paleomagnetic analysis of fold and thrust belts: Types, causes and detection techniques.

Pueyo, E. L.

2008-12-01

Paleomagnetic vectors are unique kinematic indicators that allow a real understanding of the lateral transfer of deformation, and they are essential for a true 3D understanding of fold and thrust belts. Their association with the bedding surface provides the only 3D reference system able to unambiguously relate the deformed and undeformed stages, and its implications have remained, until now, relatively unexplored in structural geology. However, paleomagnetic data are sometimes misinterpreted or ignored due to the lack of reliability of some databases, where a geometric control of errors seems evident from the structural point of view. An analysis of the implicit assumptions in paleomagnetic studies of fold and thrust belts reveals three possible sources of error with an intrinsic structural (geometric) control. Assumption 1) The laboratory procedures are able to completely isolate the original paleomagnetic vectors. When this fails, the resulting overlapped paleomagnetic directions (e.g. the primary record and a recent overprint) will display both declination and inclination errors, controlled by the fold axis orientation, the degree of flank rotation (dip), the primary magnetic polarity, and the degree of vector overlapping. Assumption 2) Rigid-body behavior during deformation and the absence of rock volume changes. When the rock volume undergoes active internal deformation during folding or shearing, the deformed paleomagnetic vectors will again display declination and inclination errors, but both polarities will behave similarly. In this case the errors depend on the relation between the primary field orientation and the deformation tensor, which in most cases can be reduced to the orientation and magnitude of the shear. Assumption 3) The bedding correction is able to restore the bedding-vector couple to the ancient (paleo)geographical reference system. This restoration may fail in complex deformation zones affected by

5. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System.

PubMed

Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

2016-05-19

The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.
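
A single-point error ellipsoid of the kind the study builds on can be sketched by propagating the spherical measurement noise (range, azimuth, elevation) through the Jacobian of the coordinate transform; the noise levels used below are placeholders, not the LRMS specification:

```python
import math

def cartesian_cov(r, az, el, sr, saz, sel):
    # propagate uncorrelated 1-sigma noise on range r, azimuth az and
    # elevation el (angles in radians) into a Cartesian covariance,
    # Sigma = J S J^T, for x = r ce ca, y = r ce sa, z = r se
    ca, sa = math.cos(az), math.sin(az)
    ce, se = math.cos(el), math.sin(el)
    J = [[ce * ca, -r * ce * sa, -r * se * ca],
         [ce * sa,  r * ce * ca, -r * se * sa],
         [se,       0.0,          r * ce]]
    S = [sr ** 2, saz ** 2, sel ** 2]
    return [[sum(J[i][k] * S[k] * J[j][k] for k in range(3))
             for j in range(3)] for i in range(3)]
```

The semi-axes of the error ellipsoid are the square roots of the eigenvalues of this covariance (scaled by a chi-square factor for the chosen confidence level); the paper's diameter model then combines such single-point ellipsoids over the fitted cylinder.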

6. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System

PubMed Central

Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

2016-01-01

The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385

7. An evaluation of the underlying mechanisms of bloodstain pattern analysis error.

PubMed

Behrooz, Nima; Hulse-Smith, Lee; Chandra, Sanjeev

2011-09-01

An experiment was designed to explore the underlying mechanisms of blood disintegration and its subsequent effect on area of origin (AO) calculations. Blood spatter patterns were created through the controlled application of pressurized air (20-80 kPa) for 0.1 msec onto suspended blood droplets (2.7-3.2 mm diameter). The resulting disintegration process was captured using high-speed photography. Straight-line triangulation resulted in a 50% height overestimation, whereas using the lowest calculated height for each spatter pattern reduced this error to 8%. Incorporation of projectile motion resulted in a 28% height underestimation. The AO xy-coordinate was found to be very accurate with a maximum offset of only 4 mm, while AO size calculations were found to be two- to fivefold greater than expected. Subsequently, reverse triangulation analysis revealed the rotational offset for 26% of stains could not be attributed to measurement error, suggesting that some portion of error is inherent in the disintegration process.
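
The straight-line triangulation examined above rests on the classic ellipse approximation sin α = width/length; a sketch of it, including the study's lowest-height heuristic, might read:

```python
import math

def impact_angle(width, length):
    # classic ellipse approximation: sin(alpha) = width / length
    return math.asin(width / length)

def stain_height(width, length, distance):
    # straight-line (tangent) height of origin above the AO xy-point,
    # for a stain at the given horizontal distance from that point
    return distance * math.tan(impact_angle(width, length))

def ao_height_estimates(stains):
    # stains: (width, length, horizontal distance to AO) triples
    hs = [stain_height(w, l, d) for (w, l, d) in stains]
    return sum(hs) / len(hs), min(hs)
```

In the study, the mean straight-line estimate ran about 50% high because droplets actually follow curved (projectile) paths, while taking the minimum over stains cut the error to 8%.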

8. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System.

PubMed

Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

2016-01-01

The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385

9. An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis

NASA Technical Reports Server (NTRS)

Wenger, David Paul

1991-01-01

The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
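
The temperature-induced error discussed in Chapter 2 has a simple core: the speed of sound in air varies with temperature, so a time-of-flight ranger that assumes the wrong temperature mis-scales every slant range. A sketch under the dry-air approximation (the thesis treats the error budget in far more detail):

```python
import math

def sound_speed(t_celsius):
    # ideal-gas approximation for the speed of sound in dry air (m/s)
    return 331.3 * math.sqrt(1.0 + t_celsius / 273.15)

def range_error(true_temp, assumed_temp, true_range):
    # the ranger measures the correct echo time but converts it with the
    # speed of sound for the wrong (assumed) air temperature
    echo_time = 2.0 * true_range / sound_speed(true_temp)
    return sound_speed(assumed_temp) * echo_time / 2.0 - true_range
```

Assuming air 10 °C colder than it really is shortens every measured slant range by roughly 1.7%, which is one motivation for the temperature-compensation methods in Chapter 3.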

10. Error analysis and measurement uncertainty for a fiber grating strain-temperature sensor.

PubMed

Tang, Jaw-Luen; Wang, Jian-Neng

2010-01-01

A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10⁻⁶ ε and 3.59 × 10⁻⁵ ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor.
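
In the usual dual-wavelength FBG treatment, simultaneous strain-temperature demodulation reduces to inverting a 2×2 sensitivity matrix; the coefficients below are hypothetical placeholders, not the calibrated values of this sensor:

```python
import math

# hypothetical sensitivity matrix (nm per microstrain, nm per deg C);
# NOT the calibrated coefficients of the paper's sensor
K = ((1.2e-3, 10.0e-3),
     (1.0e-3, 6.5e-3))

def demodulate(dl1, dl2, K):
    # invert [dl1; dl2] = K [strain; temperature] for the 2x2 case
    (a, b), (c, d) = K
    det = a * d - b * c
    strain = (d * dl1 - b * dl2) / det
    temp = (-c * dl1 + a * dl2) / det
    return strain, temp

def uncertainty(s1, s2, K):
    # 1-sigma wavelength resolutions s1, s2 propagated through the
    # inverse matrix (uncorrelated-error assumption)
    (a, b), (c, d) = K
    det = abs(a * d - b * c)
    return (math.hypot(d * s1, b * s2) / det,   # strain uncertainty
            math.hypot(c * s1, a * s2) / det)   # temperature uncertainty
```

The propagated uncertainties blow up as the two gratings' sensitivities become similar (det → 0), which is why well-separated wavelength responses matter for this class of sensor.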

11. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

Jones, Reese E.; Mandadapu, Kranthi K.

2012-04-01

We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semiconductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969); doi:10.1103/PhysRev.182.280] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.

12. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis.

PubMed

Jones, Reese E; Mandadapu, Kranthi K

2012-04-21

We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semiconductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.

13. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

USGS Publications Warehouse

Phillips, D.L.; Marks, D.G.

1996-01-01

In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated
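
The propagation step can be sketched as follows; `pet_like` is a stand-in function, not the paper's PET formulation, and for brevity the sampled errors are independent, whereas the study also included the correlations of interpolation errors among the three variables:

```python
import random
import statistics

def pet_like(T, rh, wind):
    # stand-in model, NOT the paper's PET formulation: rises with
    # temperature and wind, falls with relative humidity
    return max(0.0, 0.2 * T * (1.0 - rh / 100.0) * (1.0 + 0.5 * wind))

def propagate(T, rh, wind, sd_T, sd_rh, sd_wind, n=10000, seed=1):
    # Monte Carlo propagation of interpolation (kriging) uncertainty:
    # perturb each kriged input by its kriging SD, rerun the model, and
    # report the mean and coefficient of variation of the output
    rng = random.Random(seed)
    out = [pet_like(rng.gauss(T, sd_T), rng.gauss(rh, sd_rh),
                    rng.gauss(wind, sd_wind)) for _ in range(n)]
    m = statistics.fmean(out)
    return m, statistics.pstdev(out) / m
```

Called at each grid point with that point's kriged values and SDs (e.g. the spring-season averages of 2.6 °C, 8.7%, and 0.38 m s-1 quoted above), the CVs can then be mapped alongside the means, as the study does.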

14. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

SciTech Connect

Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

2010-09-15

A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
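
The DRGEP half of the method ranks species by an overall interaction coefficient: the maximum over graph paths of the product of direct interaction coefficients along the path. A sketch using a max-product Dijkstra-style search (the direct coefficients here are toy values, not computed from a reaction mechanism):

```python
import heapq

def drgep_coefficients(direct, target):
    # direct[a][b] is the direct interaction coefficient r_ab in [0, 1];
    # the overall coefficient R is the maximum over paths from the
    # target of the product of edge coefficients along the path
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        negv, a = heapq.heappop(heap)
        v = -negv
        if v < R.get(a, 0.0):
            continue  # stale heap entry
        for b, r in direct.get(a, {}).items():
            nv = v * r
            if nv > R.get(b, 0.0):
                R[b] = nv
                heapq.heappush(heap, (-nv, b))
    return R

def skeletal_species(direct, targets, eps):
    # keep every species whose coefficient reaches eps for some target
    keep = set()
    for t in targets:
        keep |= {s for s, v in drgep_coefficients(direct, t).items() if v >= eps}
    return keep
```

Species falling below the threshold are removed for a given error limit; DRGEPSA then runs sensitivity analysis only on the borderline species that DRGEP alone cannot classify.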

15. Summary of research in applied mathematics, numerical analysis, and computer sciences

NASA Technical Reports Server (NTRS)

1986-01-01

The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

16. Numerical Analysis of a Multi-Physics Model for Trace Gas Sensors

Brennan, Brian

Trace gas sensors are currently used in many applications from leak detection to national security, and may some day help with disease diagnosis. These sensors are modelled by a coupled system of complex elliptic partial differential equations for pressure and temperature. Solutions are approximated using the finite element method, which we show admits a continuous and coercive variational problem with optimal H1 and L2 error estimates. Numerically, the finite element discretization yields a skew-Hermitian dominant matrix for which classical algebraic preconditioners quickly degrade. To handle this, we explore three preconditioners for the resulting linear system. We first analyze the classical block Jacobi and block Gauss-Seidel preconditioners before presenting a custom, physics-based preconditioner, which requires scalar Helmholtz solves to apply but gives a very low outer iteration count. We also present analysis showing that the eigenvalues of the custom preconditioned system are mesh-dependent, but with a small coefficient. Numerical experiments confirm our theoretical discussion.
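
The block Jacobi preconditioner analyzed first can be illustrated on a toy 2×2-block system: the preconditioner inverts only the diagonal blocks, and a preconditioned Richardson iteration converges when the off-diagonal coupling is weak. The matrices here are made up for illustration, not the sensor discretization:

```python
def solve2(M, r):
    # direct solve of a 2x2 system (stands in for a diagonal-block solve)
    (a, b), (c, d) = M
    det = a * d - b * c
    return [(d * r[0] - b * r[1]) / det, (-c * r[0] + a * r[1]) / det]

def mv(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def block_jacobi_solve(A, B, C, D, b1, b2, iters=60):
    # solve [[A, B], [C, D]] [x1; x2] = [b1; b2] by preconditioned
    # Richardson, x <- x + M^{-1}(b - K x), with M = blockdiag(A, D)
    x1, x2 = [0.0, 0.0], [0.0, 0.0]
    for _ in range(iters):
        Ax1, Bx2 = mv(A, x1), mv(B, x2)
        Cx1, Dx2 = mv(C, x1), mv(D, x2)
        r1 = [b1[i] - Ax1[i] - Bx2[i] for i in range(2)]
        r2 = [b2[i] - Cx1[i] - Dx2[i] for i in range(2)]
        d1, d2 = solve2(A, r1), solve2(D, r2)
        x1 = [x1[i] + d1[i] for i in range(2)]
        x2 = [x2[i] + d2[i] for i in range(2)]
    return x1, x2
```

Block Gauss-Seidel would use the freshly updated x1 when forming r2; the paper's physics-based preconditioner replaces these diagonal-block solves with scalar Helmholtz solves, which is what drives its low outer iteration count.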

17. Bit error rate analysis of free-space optical system with spatial diversity over strong atmospheric turbulence channel with pointing errors

Krishnan, Prabu; Sriram Kumar, D.

2014-12-01

Free-space optical communication (FSO) is emerging as a captivating alternative for working around connectivity hindrances. It can be used for transmitting signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing errors is considered for the BER analysis. Here, system models are developed for single-input, single-output FSO (SISO-FSO) and single-input, multiple-output FSO (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of Meijer's G-function.
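
The average-BER calculation described above can be illustrated numerically. The sketch below is not the paper's strong-turbulence model with pointing errors and Meijer's G-function; it substitutes a simple log-normal scintillation model and Monte Carlo averaging of the DPSK conditional bit error rate, 0.5·exp(−γ), purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_ber_dpsk(mean_snr, sigma_x=0.3, n=200_000):
    """Monte Carlo average BER of DPSK over log-normal scintillation.

    sigma_x is the (assumed) log-amplitude standard deviation; the mean of
    X is set to -sigma_x**2 so the irradiance exp(2X) has unit mean.
    """
    X = rng.normal(-sigma_x**2, sigma_x, n)
    irradiance = np.exp(2 * X)
    # Conditional DPSK bit error rate in AWGN: 0.5 * exp(-instantaneous SNR)
    return float(np.mean(0.5 * np.exp(-mean_snr * irradiance)))

for snr_db in (0, 5, 10, 15):
    snr = 10 ** (snr_db / 10)
    print(snr_db, "dB ->", avg_ber_dpsk(snr))
```

Fading spreads the instantaneous SNR around its mean, so the averaged BER floors well above the no-turbulence value at high SNR, which is why diversity (SIMO) combining helps.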

18. Landslide risk assessment with multi pass DInSAR analysis and error suppressing approach

yun, H.; Kim, J.; Lin, S.; Choi, Y.

2013-12-01

Landslides are among the most dreadful natural hazards and a prime source of lethal damage in many countries. In spite of various attempts to measure landslide susceptibility by remote sensing, including Differential Interferometric SAR (DInSAR) analysis, the construction of reliable forecasting systems remains unsolved. We therefore tackled the problem of DInSAR analysis for monitoring landslide risk over mountainous areas, where InSAR observations are usually contaminated by orographic effects and other error sources. In order to measure the true surface deformation that might be a prelude to a landslide, time-series analysis and atmospheric correction of DInSAR interferograms were conducted and cross-validated. The target area of this experiment is the eastern part of the Korean peninsula, centered on Uljin, where landslides driven by geomorphic factors such as steep topography and localized torrential downpours are a critical issue. Landslides frequently occurred on cut slopes in mountainous areas created by anthropogenic construction activities. Although high-precision DInSAR measurements for monitoring landslide risk are essential in such circumstances, it is difficult to attain sufficient accuracy because of external factors that induce errors in electromagnetic wave propagation. For instance, local climate characteristics such as the orographic effect and proximity to the seashore can produce significant anomalies in the water vapor distribution and consequently introduce errors into InSAR phase measurements. Moreover, the high-altitude parts of the target area cause stratified tropospheric delay errors in the DInSAR measurements. Improved DInSAR approaches that cope with all of the above obstacles are therefore highly necessary. Thus we employed two approaches, i.e. StaMPS/MTI (Stanford Method for Persistent Scatterers/Multi-Temporal InSAR, Hooper et al., 2007

19. Error analysis for station position from tracking of the Lageos satellite

NASA Technical Reports Server (NTRS)

Parmenter, M. E.; Kaula, W. M.

1974-01-01

The earth physics satellite systems error analysis program was applied to the problem of predicting the relative accuracy of station position determinations under varying orbital and observing geometries. The reference case consists of nine ground stations extending over 1500 km, from which lasers ranged to a LAGEOS satellite, with simultaneous Doppler tracking from a geosynchronous satellite for 16 days. Eleven variations from the reference case were tested. The results showed little sensitivity to whether the LAGEOS altitude is 3700 or 5690 km. More significant were the high inclination and the fact that LAGEOS was tracked by a geosynchronous satellite.

20. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

NASA Technical Reports Server (NTRS)

Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

1999-01-01

Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic error, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

1. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data

PubMed Central

2013-01-01

Background Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Methods Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003–2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban logₑ(daily 1-hour maximum NO2). Results When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background logₑ(NO2) and 38% for rural logₑ(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural logₑ(NO2) but more marked for urban logₑ(NO2
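
The attenuation mechanism studied above can be reproduced in a few lines. The sketch below substitutes ordinary least squares for the paper's Poisson regression and uses made-up exposure parameters; it only illustrates how classical additive measurement error biases a regression coefficient toward zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
x = rng.normal(50.0, 10.0, n)            # hypothetical "true" daily pollutant level
beta = 0.02                              # true exposure-outcome slope
y = beta * x + rng.normal(0.0, 1.0, n)   # linear health-outcome stand-in

slopes = {}
for sigma_u in (0.0, 5.0, 10.0):
    z = x + rng.normal(0.0, sigma_u, n)  # error-prone measured exposure
    slopes[sigma_u] = np.polyfit(z, y, 1)[0]
    # Classical-error theory predicts attenuation by var(x) / (var(x) + sigma_u^2)
    print(sigma_u, slopes[sigma_u], beta * 100.0 / (100.0 + sigma_u**2))
```

With error variance equal to the exposure variance (sigma_u = 10 here), the fitted slope drops to roughly half the true value, mirroring the direction of the attenuations reported in the Results.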

2. Task and error analysis balancing benefits over business of electronic medical records.

PubMed

Carstens, Deborah Sater; Rodriguez, Walter; Wood, Michael B

2014-01-01

Task and error analysis research was performed to identify: a) the process for healthcare organisations in managing healthcare for patients with mental illness or substance abuse; b) how the process can be enhanced; and c) whether electronic medical records (EMRs) have a role in this process from a business and safety perspective. The research question is whether EMRs have a role in enhancing healthcare for patients with mental illness or substance abuse. A discussion of the business of EMRs is included to understand the balancing act between the safety and business aspects of an EMR.

3. Optimum design for optical proximity correction in submicron bipolar technology using critical shape error analysis

Arthur, Graham G.; Martin, Brian; Wallace, Christine

2000-06-01

A production application of optical proximity correction (OPC) aimed at reducing corner-rounding and line-end shortening is described. The methodology, using critical shape error analysis, to calculate the correct serif size is given and is extended to show the effect of OPC on the process window (i.e. depth-of-focus and exposure latitude). The initial calculations are made using the lithography simulation tools PROLITH/2 and SOLID-C, the results of which are transferred to the photo-cell for practical results.

4. [Analysis of variance of bacterial counts in milk. 1. Characterization of total variance and the components of variance: random sampling error, methodologic error and variation between parallel samples during storage].

PubMed

Böhmer, L; Hildebrandt, G

1998-01-01

In contrast to the prevailing automatized chemical analytical methods, classical microbiological techniques are linked with considerable material- and human-dependent sources of errors. These effects must be objectively considered for assessing the reliability and representativeness of a test result. As an example for error analysis, the deviation of bacterial counts and the influence of the time of testing, bacterial species involved (total bacterial count, coliform count) and the detection method used (pour-/spread-plate) were determined in a repeated testing of parallel samples of pasteurized (stored for 8 days at 10 degrees C) and raw (stored for 3 days at 6 degrees C) milk. Separate characterization of deviation components, namely, unavoidable random sampling error as well as methodical error and variation between parallel samples, was made possible by means of a test design where variance analysis was applied. Based on the results of the study, the following conclusions can be drawn: 1. Immediately after filling, the total count deviation in milk mainly followed the POISSON-distribution model and allowed a reliable hygiene evaluation of lots even with few samples. Subsequently, regardless of the examination procedure used, the setting up of parallel dilution series can be disregarded. 2. With increasing storage period, bacterial multiplication especially of psychrotrophs leads to unpredictable changes in the bacterial profile and density. With the increase in errors between samples, it is common to find packages which have acceptable microbiological quality but are already spoiled by the time of the expiry date labeled. As a consequence, a uniform acceptance or rejection of the batch is seldom possible. 3. Because the contamination level of coliforms in certified raw milk mostly lies near the detection limit, coliform counts with high relative deviation are expected to be found in milk directly after filling. Since no bacterial multiplication takes place
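
The Poisson-versus-overdispersed behaviour described in conclusions 1 and 2 can be illustrated with a small simulation. The numbers below (plate counts, gamma mixing parameters) are hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
plates = 200
true_density = 150.0                         # expected CFU per plated volume

# Freshly filled milk: counts scatter only by random (Poisson) sampling error
fresh = rng.poisson(true_density, plates)
d_fresh = fresh.var(ddof=1) / fresh.mean()   # index of dispersion, ~1 for Poisson

# After storage: between-sample variation in bacterial density (gamma-mixed
# Poisson) inflates the dispersion index well above 1
stored = rng.poisson(rng.gamma(shape=10.0, scale=15.0, size=plates))
d_stored = stored.var(ddof=1) / stored.mean()
print(d_fresh, d_stored)
```

A dispersion index near 1 supports evaluating a lot from few samples; a large index signals the unpredictable between-package variation that makes uniform acceptance or rejection of a stored batch difficult.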

5. Refractive Errors and Concomitant Strabismus: A Systematic Review and Meta-analysis

PubMed Central

Tang, Shu Min; Chan, Rachel Y. T.; Bin Lin, Shi; Rong, Shi Song; Lau, Henry H. W.; Lau, Winnie W. Y.; Yip, Wilson W. K.; Chen, Li Jia; Ko, Simon T. C.; Yam, Jason C. S.

2016-01-01

This systematic review and meta-analysis evaluates the risk of development of concomitant strabismus due to refractive errors. Eligible studies published from 1946 to April 1, 2016 that evaluated any kind of refractive error (myopia, hyperopia, astigmatism and anisometropia) as an independent factor for concomitant exotropia and concomitant esotropia were identified from MEDLINE and EMBASE. In total, 5065 published records were retrieved for screening, 157 of them eligible for detailed evaluation. Finally, 7 population-based studies involving 23,541 study subjects met our criteria for meta-analysis. The combined OR showed that myopia was a risk factor for exotropia (OR: 5.23, P = 0.0001). We found hyperopia had a dose-related effect for esotropia (OR for a spherical equivalent [SE] of 2-3 diopters [D]: 10.16, P = 0.01; OR for an SE of 3-4D: 17.83, P < 0.0001; OR for an SE of 4-5D: 41.01, P < 0.0001; OR for an SE of ≥5D: 162.68, P < 0.0001). Sensitivity analysis indicated our results were robust. The results of this study confirm myopia as a risk factor for concomitant exotropia and identify a dose-related effect of hyperopia as a risk factor for concomitant esotropia. PMID:27731389

6. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

NASA Technical Reports Server (NTRS)

Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

2005-01-01

This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.

7. A comparison of some observations of the Galilean satellites with Sampson's tables. [position error analysis

NASA Technical Reports Server (NTRS)

Arlot, J.-E.

1975-01-01

Two series of photographic observations of the Galilean satellites are analyzed to determine systematic errors in the observations and errors in Sampson's (1921) theory. Satellite-satellite as well as planet-satellite positions are used in comparing theory with observation. Ten unknown errors are identified, and results are presented for three determinations of the unknown longitude error.

8. Low-dimensional Representation of Error Covariance

NASA Technical Reports Server (NTRS)

Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

2000-01-01

9. Pediatric medication errors in the postanesthesia care unit: analysis of MEDMARX data.

PubMed

Payne, Christopher H; Smith, Christopher R; Newkirk, Laura E; Hicks, Rodney W

2007-04-01

Medication errors involving pediatric patients in the postanesthesia care unit may occur as frequently as one in every 20 medication orders and are more likely to cause harm when compared to medication errors in the overall population. Researchers examined six years of records from the MEDMARX database and used consecutive nonprobability sampling and descriptive statistics to compare medication errors in the pediatric data set to those occurring in the total population data set. Nineteen different causes of error involving 28 different products were identified. The results of the study indicate that an organization can focus on causes of errors and products involved in errors to mitigate future error occurrence.

10. 1-D Numerical Analysis of RBCC Engine Performance

NASA Technical Reports Server (NTRS)

Han, Samuel S.

1998-01-01

An RBCC engine combines air-breathing and rocket engines into a single engine to increase the specific impulse over an entire flight trajectory. Considerable research pertaining to RBCC propulsion was performed during the 1960's, and these engines were revisited recently as a candidate propulsion system for either a single-stage-to-orbit (SSTO) or two-stage-to-orbit (TSTO) launch vehicle. A variety of RBCC configurations have been evaluated and new designs are currently under development. However, the basic configuration of all RBCC systems is built around the ejector scramjet engine originally developed for the hypersonic airplane. In this configuration, a rocket engine acts as an ejector in the air-augmented initial acceleration mode, as a fuel injector in scramjet mode, and as the rocket in all-rocket mode for orbital insertion. Computational fluid dynamics (CFD) is a useful tool for the analysis of complex transport processes in the various components of RBCC propulsion systems. The objective of the present research was to develop a transient 1-D numerical model that could be used to predict flow behavior throughout a generic RBCC engine following a flight path.

11. a Numerical Method for Stability Analysis of Pinned Flexible Mechanisms

Beale, D. G.; Lee, S. W.

1996-05-01

A technique is presented to investigate the stability of mechanisms with pin-jointed flexible members. The method relies on a special floating frame from which elastic link co-ordinates are defined. Energies are easily developed for use in a Lagrange equation formulation, leading to a set of non-linear and mixed ordinary differential-algebraic equations of motion with constraints. Stability and bifurcation analysis is handled using a numerical procedure (generalized co-ordinate partitioning) that avoids the tedious and difficult task of analytically reducing the system of equations to a number equalling the system degrees of freedom. The proposed method was then applied to (1) a slider-crank mechanism with a flexible connecting rod and crank of constant rotational speed, and (2) a four-bar linkage with a flexible coupler with a constant speed crank. In both cases, a single pinned-pinned beam bending mode is employed to develop resonance curves and stability boundaries in the crank length-crank speed parameter plane. Flip and fold bifurcations are common occurrences in both mechanisms. The accuracy of the proposed method was also verified by comparison with previous experimental results [1].

12. Numerical Analysis of Heat Transfer During Quenching Process

2016-06-01

A numerical model is developed to simulate the immersion quenching process of metals. The time of quench plays an important role if the process involves a defined step-quenching schedule to obtain the desired characteristics. The lumped heat capacity analysis used for this purpose requires the value of the heat transfer coefficient, whose evaluation requires large amounts of experimental data. Experimentation on a sample work piece may not represent the actual component, which may vary in dimension. A fluid-structure interaction technique with a coupled interface between the solid (metal) and liquid (quenchant) is used for the simulations. The initial period of quenching shows boiling heat transfer with high heat transfer coefficients (5000 to 2.5 × 10⁵ W/m²K). For work pieces of equal characteristic dimension, shape shows little influence on the cooling rate. Non-uniformity in hardness at the sharp corners can be reduced by rounding off the edges; for a square piece of 20 mm thickness with a 3 mm fillet radius, this difference is reduced by 73 %. The model can be used for any metal-quenchant combination to obtain time-temperature data without the necessity of experimentation.
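
The lumped heat capacity analysis mentioned above can be sketched for a hypothetical work piece. All material properties and the film coefficient below are assumed values for illustration; the paper's point is precisely that such coefficients normally require extensive experiments or a coupled simulation.

```python
import math

# Hypothetical 20 mm steel cube quenched in oil at 60 degrees C.
h = 1000.0                          # W/m^2 K, assumed average film coefficient
rho, c, k = 7800.0, 490.0, 50.0     # steel density, specific heat, conductivity
side = 0.02
V, A = side**3, 6.0 * side**2
Bi = h * (V / A) / k                # Biot number ~0.07 < 0.1: lumped model valid
T_inf, T0 = 60.0, 850.0
tau = rho * c * V / (h * A)         # lumped time constant, s

def temperature(t):
    """Work-piece temperature t seconds after immersion (lumped model)."""
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

# Time to reach a step-quench hold temperature of 200 degrees C:
t_200 = -tau * math.log((200.0 - T_inf) / (T0 - T_inf))
print(Bi, tau, t_200)
```

The exponential solution follows from the lumped energy balance rho·c·V·dT/dt = −h·A·(T − T∞); when the Biot number exceeds about 0.1, internal gradients matter and a full conjugate simulation like the one in the abstract is needed instead.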

13. Numeric calculation of celestial bodies with spreadsheet analysis

Koch, Alexander

2016-04-01

The motion of the planets and moons in our solar system can easily be calculated for any time by Kepler's laws of planetary motion. The Kepler laws are a special case of Newton's law of gravitation, especially if you consider more than two celestial bodies. It is therefore more fundamental to calculate the motion using the gravitational law. The problem is that with the gravitational law it is not possible to calculate the state of motion in a single step; the motion has to be calculated numerically over many time intervals. For this reason, spreadsheet analysis is helpful for students. Skills in programmes like Excel, Calc or Gnumeric are important in professional life and can easily be learnt by students. These programmes can help to calculate the complex motions over many intervals; the more intervals are used, the more exact are the calculated orbits. The students first get a quick course in Excel. After that they calculate, with instructions, the 2-D coordinates of the orbits of the Moon and Mars. Step by step the students code the formulae for calculating physical parameters like coordinates, force, acceleration and velocity. The project is limited to 4 weeks or 8 lessons, so the calculation only includes the motion of one body around a central mass like the Earth or Sun. The three-body problem can only be discussed briefly at the end of the project.
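
The interval-by-interval calculation the students build in a spreadsheet, one row per time step, can equally be written as a short script. The sketch below uses a semi-implicit Euler step for a body in a circular orbit around a central mass; the step size and orbit parameters are illustrative.

```python
import math

# Semi-implicit (symplectic) Euler integration of a body orbiting a central
# mass -- the same stepwise scheme as in the spreadsheet, one row per interval.
GM = 3.986e14                     # Earth's gravitational parameter, m^3/s^2
x, y = 3.844e8, 0.0               # Moon-like circular orbit radius, m
vx, vy = 0.0, math.sqrt(GM / x)   # circular-orbit speed
dt = 60.0                         # time interval, s
for _ in range(24 * 60):          # one day in 1-minute intervals
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # Newtonian acceleration
    vx += ax * dt; vy += ay * dt              # update velocity first...
    x += vx * dt; y += vy * dt                # ...then position
print(math.hypot(x, y))           # radius stays near 3.844e8 m
```

Updating velocity before position keeps the computed orbit from spiralling, which is exactly the kind of step-size and scheme effect students observe when they vary the number of intervals.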

14. Numerical Simulation and Scaling Analysis of Cell Printing

Qiao, Rui; He, Ping

2011-11-01

Cell printing, i.e., printing three-dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use an inkjet printer or similar device to print cells into 3D patterns with a resolution comparable to the size of mammalian cells. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation. Although the feasibility of cell printing has been demonstrated recently, the printing resolution and cell viability remain to be improved. Here we investigate a unit operation in cell printing, namely, the impact of a cell-laden droplet into a pool of highly viscous liquid. The droplet and cell dynamics are quantified using both direct numerical simulation and scaling analysis. These studies indicate that although cells experience significant stress during droplet impact, the duration of that stress is very short, which helps explain why many cells survive the cell printing process. These studies also reveal that cell membranes can be temporarily ruptured during cell printing, which is supported by indirect experimental evidence.

15. Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments

Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang

2016-06-01

Accelerator grid structural failure and electron backstreaming failure are the most important factors affecting an ion thruster's lifetime. During the thruster's operation, Charge Exchange Xenon (CEX) ions are generated by collisions between the plasma and neutral atoms. These CEX ions frequently strike the accelerator grid's barrel and wall, causing failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies the North-South Station Keeping (NSSK) application requirements of China's communication satellite platform, this study analyzed the measured depth of the pit/groove on the accelerator grid's wall, together with the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Unlike previous methods, this paper first presents the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster. Based on these results, theoretical analysis and numerical calculations were then performed to predict the on-orbit lifetime of LIPS-200, making the results more accurate for calculating the reliability and analyzing the failure modes of the ion thruster. The results indicated that the predicted lifetime of LIPS-200 was about 13218.1 h, which satisfies the required lifetime of 11000 h very well.

16. Optimal error analysis of spectral methods with emphasis on non-constant coefficients and deformed geometries

NASA Technical Reports Server (NTRS)

1989-01-01

The numerical analysis of spectral methods when non-constant coefficients appear in the equation, either due to the original statement of the equations or to take into account the deformed geometry, is presented. Particular attention is devoted to the optimality of the discretization even for low values of the discretization parameter. The effect of some overintegration is also addressed, in order to possibly improve the accuracy of the discretization.

17. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of Low-frequency drug resistance mutations in HIV-1 DNA

PubMed Central

2013-01-01

Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion

18. Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)

SciTech Connect

Whitehead, D.W.; Forester, J.A.; Bley, D.C.

1998-03-01

A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs, are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.

19. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

McKenna, S. A.; Wahi, A. K.

2003-12-01

Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimates of both the magnitude and the orientation increases linearly with the increase in measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid
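
The basic three-point estimator discussed above can be sketched as follows: fit the plane h = ax + by + c through three (x, y, head) measurements and read off the gradient magnitude and direction of decreasing head. The well coordinates and heads below are hypothetical.

```python
import math
import numpy as np

def three_point_gradient(pts):
    """Estimate hydraulic gradient from three (x, y, head) measurements
    by solving for the plane h = a*x + b*y + c through them."""
    A = np.array([[x, y, 1.0] for x, y, _ in pts])
    h = np.array([p[2] for p in pts])
    a, b, _ = np.linalg.solve(A, h)
    magnitude = math.hypot(a, b)
    # Flow direction is down-gradient, i.e. along -(a, b); angle in degrees
    # counterclockwise from the +x axis.
    orientation = math.degrees(math.atan2(-b, -a))
    return magnitude, orientation

# Hypothetical wells on a ~100 m triangle with a uniform 0.01 gradient,
# head decreasing toward +x:
wells = [(0.0, 0.0, 10.0), (100.0, 0.0, 9.0), (50.0, 86.6, 9.5)]
mag, az = three_point_gradient(wells)
print(mag, az)
```

Perturbing the three heads with random measurement error and repeating the estimate for triangles of different shapes yields exactly the kind of Monte Carlo variance study described in the abstract.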

20. A functional approach to movement analysis and error identification in sports and physical education.

PubMed

Hossner, Ernst-Joachim; Schiebl, Frank; Göhner, Ulrich

2015-01-01

In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is no longer the "ideal" movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that, in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognizing constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioral shaping, to cognitive models of modular effect-related motor control, as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognizing functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors.