Science.gov

Sample records for numerical error analysis

  1. Accounting for Errors in Model Analysis Theory: A Numerical Approach

    NASA Astrophysics Data System (ADS)

    Sommer, Steven R.; Lindell, Rebecca S.

    2004-09-01

    By studying the patterns of a group of individuals' responses to a series of multiple-choice questions, researchers can utilize Model Analysis Theory to create a probability distribution of mental models for a student population. The eigenanalysis of this distribution yields information about what mental models the students possess, as well as how consistently they utilize said mental models. Although the theory considers the probabilistic distribution to be fundamental, there exist opportunities for random errors to occur. In this paper we will discuss a numerical approach for mathematically accounting for these random errors. As an example of this methodology, analysis of data obtained from the Lunar Phases Concept Inventory will be presented. Limitations and applicability of this numerical approach will be discussed.
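
    For readers unfamiliar with the eigenanalysis step, the snippet below is a minimal sketch (not the authors' code) of how a class density matrix might be built and decomposed with NumPy; the three-model structure and the response counts are invented purely for illustration.

      import numpy as np

      # Toy data (invented): counts[s, m] = number of questions on which student s
      # applied mental model m.
      counts = np.array([[8, 2, 0],
                         [5, 4, 1],
                         [2, 7, 1],
                         [9, 1, 0]], dtype=float)

      k = counts.sum(axis=1, keepdims=True)             # questions answered per student
      u = np.sqrt(counts / k)                           # per-student model state vectors
      D = (u[:, :, None] * u[:, None, :]).mean(axis=0)  # class density matrix

      vals, vecs = np.linalg.eigh(D)                    # eigenanalysis of the distribution
      order = np.argsort(vals)[::-1]
      print("eigenvalues:", vals[order])                # how consistently models are used
      print("primary model state:", vecs[:, order[0]])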

  2. Minimizing Errors in Numerical Analysis of Chemical Data.

    ERIC Educational Resources Information Center

    Rusling, James F.

    1988-01-01

    Investigates minimizing errors in computational methods commonly used in chemistry. Provides a series of examples illustrating the propagation of errors, finite difference methods, and nonlinear regression analysis. Includes illustrations to explain these concepts. (MVL)
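
    A minimal example, with invented numbers, of the propagation-of-error pattern the article illustrates: first-order propagation of independent random errors through f(a, b) = a / b.

      import math

      # Invented measurements with their standard deviations
      a, sa = 2.50, 0.02
      b, sb = 1.25, 0.01

      f = a / b
      # First-order (relative) error propagation for a quotient of independent variables
      sf = abs(f) * math.sqrt((sa / a) ** 2 + (sb / b) ** 2)
      print(f"f = {f:.4f} +/- {sf:.4f}")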

  3. Error-analysis and comparison to analytical models of numerical waveforms produced by the NRAR Collaboration

    NASA Astrophysics Data System (ADS)

    Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef

    2013-01-01

    The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ˜100-200M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing on the binary parameters.
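
    As a rough illustration of the overlap statistic referred to above, the sketch below computes a normalized correlation between two sampled waveforms, maximized over a relative time shift. The stand-in chirp, the flat (white) noise weighting, and all parameter values are invented and far simpler than the advanced-LIGO-weighted, parameter-maximized overlaps used in the paper.

      import numpy as np

      def match(h1, h2):
          """Normalized overlap maximized over a circular time shift (white noise)."""
          H1, H2 = np.fft.rfft(h1), np.fft.rfft(h2)
          corr = np.fft.irfft(H1 * np.conj(H2), n=len(h1))   # correlation vs. time shift
          norm = np.sqrt(np.sum(h1 ** 2) * np.sum(h2 ** 2))
          return np.max(np.abs(corr)) / norm

      t = np.linspace(0.0, 1.0, 4096, endpoint=False)
      envelope = np.exp(-(t - 0.9) ** 2 / 0.05)
      h_num = np.sin(2 * np.pi * 60 * t ** 2) * envelope       # stand-in "numerical" chirp
      h_model = 1.02 * np.roll(h_num, 37)                      # shifted, rescaled "model"
      h_model += 0.1 * np.sin(2 * np.pi * 300 * t) * envelope  # small spurious component
      print(f"match = {match(h_num, h_model):.4f}")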

  4. Numerical errors in the presence of steep topography: analysis and alternatives

    SciTech Connect

    Lundquist, K A; Chow, F K; Lundquist, J K

    2010-04-15

    It is well known in computational fluid dynamics that grid quality affects the accuracy of numerical solutions. When assessing grid quality, properties such as aspect ratio, orthogonality of coordinate surfaces, and cell volume are considered. Mesoscale atmospheric models generally use terrain-following coordinates with large aspect ratios near the surface. As high resolution numerical simulations are increasingly used to study topographically forced flows, a high degree of non-orthogonality is introduced, especially in the vicinity of steep terrain slopes. Numerical errors associated with the use of terrain-following coordinates can adversely affect the accuracy of the solution in steep terrain. Inaccuracies from the coordinate transformation are present in each spatially discretized term of the Navier-Stokes equations, as well as in the conservation equations for scalars. In particular, errors in the computation of horizontal pressure gradients, diffusion, and horizontal advection terms have been noted in the presence of sloping coordinate surfaces and steep topography. In this work we study the effects of these spatial discretization errors on the flow solution for three canonical cases: scalar advection over a mountain, an atmosphere at rest over a hill, and forced advection over a hill. This study is completed using the Weather Research and Forecasting (WRF) model. Simulations with terrain-following coordinates are compared to those using a flat coordinate, where terrain is represented with the immersed boundary method. The immersed boundary method is used as a tool which allows us to eliminate the terrain-following coordinate transformation, and quantify numerical errors through a direct comparison of the two solutions. Additionally, the effects of related issues such as the steepness of terrain slope and grid aspect ratio are studied in an effort to gain an understanding of numerical domains where terrain-following coordinates can successfully be used…

  5. Some Surprising Errors in Numerical Differentiation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2012-01-01

    Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
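
    A short sketch of the kind of experiment described, tabulating the error in the Newton difference-quotient approximation to the derivative of sin x and comparing it with the leading Taylor-series prediction; the evaluation point and step sizes are arbitrary.

      import numpy as np

      x = 1.0
      for h in [10.0 ** (-k) for k in range(1, 9)]:
          approx = (np.sin(x + h) - np.sin(x)) / h   # forward difference quotient
          err = approx - np.cos(x)                   # error against the exact derivative
          pred = -0.5 * h * np.sin(x)                # leading truncation term from Taylor's theorem
          print(f"h = {h:.0e}   error = {err: .3e}   predicted = {pred: .3e}")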

  6. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
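
    The snippet below is a toy interval type, not INTLAB, meant only to show how interval arithmetic propagates worst-case bounds through a formula automatically; it ignores directed rounding, which a real interval library must handle.

      from dataclasses import dataclass

      @dataclass
      class Interval:
          lo: float
          hi: float
          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)
          def __mul__(self, other):
              p = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
              return Interval(min(p), max(p))

      # Invented measurements: x = 2.0 +/- 0.1, y = 3.0 +/- 0.2
      x = Interval(1.9, 2.1)
      y = Interval(2.8, 3.2)
      print(x * y + x)    # guaranteed enclosure of x*y + x, no manual propagation formula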

  7. Numerical error in groundwater flow and solute transport simulation

    NASA Astrophysics Data System (ADS)

    Woods, Juliette A.; Teubner, Michael D.; Simmons, Craig T.; Narayan, Kumar A.

    2003-06-01

    Models of groundwater flow and solute transport may be affected by numerical error, leading to quantitative and qualitative changes in behavior. In this paper we compare and combine three methods of assessing the extent of numerical error: grid refinement, mathematical analysis, and benchmark test problems. In particular, we assess the popular solute transport code SUTRA [Voss, 1984] as being a typical finite element code. Our numerical analysis suggests that SUTRA incorporates a numerical dispersion error and that its mass-lumped numerical scheme increases the numerical error. This is confirmed using a Gaussian test problem. A modified SUTRA code, in which the numerical dispersion is calculated and subtracted, produces better results. The much more challenging Elder problem [Elder, 1967; Voss and Souza, 1987] is then considered. Calculation of its numerical dispersion coefficients and numerical stability show that the Elder problem is prone to error. We confirm that Elder problem results are extremely sensitive to the simulation method used.
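
    For context, the sketch below gives the generic modified-equation estimate of the artificial dispersion introduced by a first-order upwind advection discretization; this is a textbook formula, not SUTRA's finite element scheme, and the parameter values are invented.

      # Invented parameters for a 1-D advection problem
      v = 1.0e-5    # pore velocity [m/s]
      dx = 0.5      # grid spacing [m]
      dt = 1.0e4    # time step [s]

      Cr = v * dt / dx                    # Courant number
      D_num = 0.5 * v * dx * (1.0 - Cr)   # numerical dispersion of explicit upwind advection
      print(f"Courant number = {Cr:.2f}, numerical dispersion = {D_num:.2e} m^2/s")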

  8. Error Analysis of Quadrature Rules. Classroom Notes

    ERIC Educational Resources Information Center

    Glaister, P.

    2004-01-01

    Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
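
    A brief worked sketch, with assumed values, of the truncation-error analysis mentioned: the composite Simpson's rule is applied to the integral of sin x on [0, pi] and the actual error is compared with the standard bound (b - a) h^4 max|f''''| / 180.

      import numpy as np

      a, b, n = 0.0, np.pi, 20                 # n subintervals (must be even)
      x = np.linspace(a, b, n + 1)
      h = (b - a) / n

      w = np.ones(n + 1)                       # Simpson weights 1, 4, 2, ..., 2, 4, 1
      w[1:-1:2] = 4.0
      w[2:-1:2] = 2.0
      approx = h / 3.0 * np.sum(w * np.sin(x))

      exact = 2.0
      bound = (b - a) * h ** 4 / 180.0         # max |d^4/dx^4 sin x| = 1 on [0, pi]
      print(f"error = {abs(approx - exact):.2e}, truncation-error bound = {bound:.2e}")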

  9. Numerical Simulation of Coherent Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, Mark

    A major goal in quantum computation is the implementation of error correction to produce a logical qubit with an error rate lower than that of the underlying physical qubits. Recent experimental progress demonstrates physical qubits can achieve error rates sufficiently low for error correction, particularly for codes with relatively high thresholds such as the surface code and color code. Motivated by experimental capabilities of neutral atom systems, we use numerical simulation to investigate whether coherent error correction can be effectively used with the 7-qubit color code. The results indicate that coherent error correction does not work at the 10-qubit level in neutral atom array quantum computers. By adding more qubits there is a possibility of making the encoding circuits fault-tolerant which could improve performance.

  10. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  11. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  12. Error analysis of analytic solutions for self-excited near-symmetric rigid bodies - A numerical study

    NASA Technical Reports Server (NTRS)

    Kia, T.; Longuski, J. M.

    1984-01-01

    Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.

  13. Analysis of positron annihilation lifetime data by numerical Laplace inversion: Corrections for source terms and zero-time shift errors

    NASA Astrophysics Data System (ADS)

    Gregory, Roger B.

    1991-05-01

    We have recently described modifications to the program CONTIN [S.W. Provencher, Comput. Phys. Commun. 27 (1982) 229] for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data [R.B. Gregory and Yongkang Zhu, Nucl. Instr. and Meth. A290 (1990) 172]. In this article, modifications to the program to correct for source terms in the sample and reference decay curves and for shifts in the position of the zero-time channel of the sample and reference data are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation rate probability density functions is improved by employing a reference material with a short lifetime and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the short-lived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminum (166 ps), do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene.

  14. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  15. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  16. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  17. Error Estimates for Numerical Integration Rules

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2005-01-01

    The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.

  18. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  19. Analysis of discretization errors in LES

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip

    1995-01-01

    All numerical simulations of turbulence (DNS or LES) involve some discretization errors. The integrity of such simulations therefore depends on our ability to quantify and control such errors. In the classical literature on analysis of errors in partial differential equations, one typically studies simple linear equations (such as the wave equation or Laplace's equation). The qualitative insight gained from studying such simple situations is then used to design numerical methods for more complex problems such as the Navier-Stokes equations. Though such an approach may seem reasonable as a first approximation, it should be recognized that strongly nonlinear problems, such as turbulence, have a feature that is absent in linear problems. This feature is the simultaneous presence of a continuum of space and time scales. Thus, in an analysis of errors in the one dimensional wave equation, one may, without loss of generality, rescale the equations so that the dependent variable is always of order unity. This is not possible in the turbulence problem since the amplitudes of the Fourier modes of the velocity field have a continuous distribution. The objective of the present research is to provide some quantitative measures of numerical errors in such situations. Though the focus of this work is LES, the methods introduced here can be just as easily applied to DNS. Errors due to discretization of the time-variable are neglected for the purpose of this analysis.
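
    One standard quantitative measure in this setting is the modified wavenumber of a difference scheme, which shows how the discretization error depends on the scale being represented. The sketch below is illustrative only, not the paper's analysis; it evaluates the modified wavenumber of a second-order central difference.

      import numpy as np

      kdx = np.linspace(0.01, np.pi, 200)      # nondimensional wavenumber k*dx
      kdx_mod = np.sin(kdx)                    # modified wavenumber of (u[j+1] - u[j-1]) / (2 dx)
      rel_err = np.abs(kdx_mod - kdx) / kdx    # error grows with wavenumber (smaller scales)

      for target in (np.pi / 8, np.pi / 4, np.pi / 2):
          print(f"k*dx = {target:.3f}: relative error = {np.interp(target, kdx, rel_err):.3f}")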

  20. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which oftentimes are brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in lap surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  1. Managing numerical errors in random sequential adsorption

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał; Nowak, Aleksandra

    2016-09-01

    The aim of this study is to examine the influence of a finite surface size and a finite simulation time on the packing fraction estimated using random sequential adsorption simulations. A goal of particular interest is providing hints on simulation setup to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous and flat surfaces of different sizes.
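
    A toy random sequential adsorption run is sketched below to make the finite-size and finite-time effects concrete; the surface size, disk radius, and attempt budget are arbitrary and far smaller than anything used in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      L, r, attempts = 10.0, 0.5, 50_000       # periodic L x L surface, disk radius, trial budget
      centers = np.empty((0, 2))

      for _ in range(attempts):
          p = rng.uniform(0.0, L, size=2)
          d = (centers - p + L / 2.0) % L - L / 2.0         # minimum-image displacements
          if centers.size == 0 or np.all(np.hypot(d[:, 0], d[:, 1]) >= 2.0 * r):
              centers = np.vstack([centers, p])             # accept non-overlapping disk

      phi = len(centers) * np.pi * r ** 2 / L ** 2
      print(f"packing fraction estimate: {phi:.3f}")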

  2. Orbital and Geodetic Error Analysis

    NASA Technical Reports Server (NTRS)

    Felsentreger, T.; Maresca, P.; Estes, R.

    1985-01-01

    Results that previously required several runs are determined in a more computer-efficient manner. Multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

  3. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  4. The characteristics of key analysis errors

    NASA Astrophysics Data System (ADS)

    Caron, Jean-Francois

    This thesis investigates the characteristics of the corrections to the initial state of the atmosphere. The technique employed is the key analysis error algorithm, recently developed to estimate the initial state errors responsible for poor short-range to medium-range numerical weather prediction (NWP) forecasts. The main goal of this work is to determine to what extent the initial corrections obtained with this method can be associated with analysis errors. A secondary goal is to understand their dynamics in improving the forecast. In the first part of the thesis, we examine the realism of the initial corrections obtained from the key analysis error algorithm in terms of dynamical balance and closeness to the observations. The results showed that the initial corrections are strongly out of balance and systematically increase the departure between the control analysis and the observations, suggesting that the key analysis error algorithm produced initial corrections that represent more than analysis errors. Significant artificial correction to the initial state seems to be present. The second part of this work examines a few approaches to isolate the balanced component of the initial corrections from the key analysis error method. The best results were obtained with the nonlinear balance potential vorticity (PV) inversion technique. The removal of the imbalanced part of the initial corrections makes the corrected analysis slightly closer to the observations, but it remains systematically further away as compared to the control analysis. Thus the balanced part of the key analysis errors cannot justifiably be associated with analysis errors. In light of the results presented, some recommendations to improve the key analysis error algorithm were proposed. In the third and last part of the thesis, a diagnosis of the evolution of the initial corrections from the key analysis error method is presented using a PV approach. The initial corrections tend to grow rapidly in time…

  5. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  6. A Classroom Note on: Building on Errors in Numerical Integration

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2011-01-01

    In both baseball and mathematics education, the conventional wisdom is to avoid errors at all costs. That advice might be on target in baseball, but in mathematics, it is not always the best strategy. Sometimes an analysis of errors provides much deeper insights into mathematical ideas and, rather than something to eschew, certain types of errors…

  7. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  8. Condition and Error Estimates in Numerical Matrix Computations

    SciTech Connect

    Konstantinov, M. M.; Petkov, P. H.

    2008-10-30

    This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.
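
    As a small illustration of the sensitivity estimates such tutorials cover, the sketch below compares the actual change in the solution of an ill-conditioned 2 x 2 system under a data perturbation with the first-order bound involving the condition number; the matrix and the perturbation are invented.

      import numpy as np

      A = np.array([[1.0, 1.0],
                    [1.0, 1.0001]])            # nearly singular, hence ill-conditioned
      b = np.array([2.0, 2.0001])
      x = np.linalg.solve(A, b)

      db = 1.0e-6 * np.array([1.0, -1.0])      # small perturbation of the right-hand side
      x_pert = np.linalg.solve(A, b + db)

      kappa = np.linalg.cond(A)
      actual = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
      bound = kappa * np.linalg.norm(db) / np.linalg.norm(b)
      print(f"cond(A) = {kappa:.1e}, actual relative change = {actual:.1e}, bound = {bound:.1e}")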

  9. Numerical analysis of bifurcations

    SciTech Connect

    Guckenheimer, J.

    1996-06-01

    This paper is a brief survey of numerical methods for computing bifurcations of generic families of dynamical systems. Emphasis is placed upon algorithms that reflect the structure of the underlying mathematical theory while retaining numerical efficiency. Significant improvements in the computational analysis of dynamical systems are to be expected from more reliance on geometric insight coming from dynamical systems theory. © 1996 American Institute of Physics.

  10. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545

  11. Numerical study of error propagation in Monte Carlo depletion simulations

    SciTech Connect

    Wyant, T.; Petrovic, B.

    2012-07-01

    Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs was used to investigate the true and apparent variance in k_eff, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters. (authors)
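
    A sketch of the replica-run statistic described, with made-up numbers: the "apparent" per-run uncertainty reported by the code is compared with the "true" variability estimated from the spread of independent replicas started from different random number seeds.

      import numpy as np

      reported_sigma = 20.0e-5                 # invented per-run reported sigma on k_eff
      rng = np.random.default_rng(42)
      k_eff = 1.0 + 35.0e-5 * rng.standard_normal(19)   # invented results of 19 replica runs

      true_sigma = k_eff.std(ddof=1)           # spread across replicas
      print(f"apparent sigma = {reported_sigma:.1e}, replica-based sigma = {true_sigma:.1e}")
      print(f"apparent variance underestimates by a factor of ~{true_sigma / reported_sigma:.1f}")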

  12. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V., II

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  13. Error Analysis and the EFL Classroom Teaching

    ERIC Educational Resources Information Center

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in EFL (English as a Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis); the various reasons causing errors are comprehensively explored. The author proposes that teachers should employ…

  14. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  15. Numerical thermal analysis

    SciTech Connect

    Ketkar, S.P.

    1999-07-01

    This new volume is written both for practicing engineers who want to refresh their knowledge of the fundamentals of numerical thermal analysis and for students of numerical heat transfer. It is a handy desktop reference that covers all the basics of finite difference, finite element, and control volume methods. In this volume, the author presents a unique hybrid method that combines the best features of finite element modeling and the computational efficiency of finite difference network solution techniques. It is a robust technique that is used in commercially available software. The contents include: heat conduction: fundamentals and governing equations; finite difference method; control volume method; finite element method; the hybrid method; and software selection.

  16. Error analysis of friction drive elements

    NASA Astrophysics Data System (ADS)

    Wang, Guomin; Yang, Shihai; Wang, Daxing

    2008-07-01

    Friction drives have been used in some large astronomical telescopes in recent years. Compared to direct drive, a friction drive train consists of more built-up parts. Usually, the friction drive train consists of a motor-tachometer unit, coupling, reducer, driving roller, big wheel, encoder and encoder coupling. Normally, these built-up parts will introduce some errors into the drive system. Some of them are random errors and some are systematic errors. For the random errors, the effective way is to estimate their contributions and try to find a proper way to decrease their influence. For the systematic errors, the useful way is to analyse and test them quantitatively, and then feed the errors back to the control system to correct them. The main task of this paper is to analyse these error sources and find out their characteristics, such as random error, systematic error and contributions. The methods or equations used in the analysis will also be presented in detail in this paper.

  17. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V., II

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  18. Analysis and classification of human error

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Rouse, S. H.

    1983-01-01

    The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.

  19. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  20. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003), doi:10.1364/AO.42.000701]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008), doi:10.1364/AO.47.001705]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  1. Error analysis of finite element solutions for postbuckled cylinders

    NASA Technical Reports Server (NTRS)

    Sistla, Rajaram; Thurston, Gaylen A.

    1989-01-01

    A general method of error analysis and correction is investigated for the discrete finite-element results for cylindrical shell structures. The method for error analysis is an adaptation of the method of successive approximation. When applied to the equilibrium equations of shell theory, successive approximations derive an approximate continuous solution from the discrete finite-element results. The advantage of this continuous solution is that it contains continuous partial derivatives of an order higher than the basis functions of the finite-element solution. Preliminary numerical results are presented in this paper for the error analysis of finite-element results for a postbuckled stiffened cylindrical panel modeled by a general purpose shell code. Numerical results from the method have previously been reported for postbuckled stiffened plates. A procedure for correcting the continuous approximate solution by Newton's method is outlined.

  2. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

    2010-01-01

    This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

  3. Integrated analysis of error detection and recovery

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1985-01-01

    An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.

  4. Numerical Errors in Coupling Micro- and Macrophysics in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Caldwell, P.; Sexton, J. M.; Woodward, C. S.

    2014-12-01

    In this study, we investigate numerical errors in version 2 of the Morrison-Gettelman microphysics scheme (MG2) and its coupling to a development version of the macrophysics (condensation/evaporation) scheme used in version 5 of the Community Atmosphere Model (CAM5). Our analysis is performed using a modified version of the Kinematic Driver (KiD) framework, which combines the full macro- and microphysics schemes from CAM5 with idealizations of all other model components. The benefit of this framework is that its simplicity makes diagnosing problems easier and its efficiency allows us to test a variety of numerical schemes. Initial results suggest that numerical convergence requires time steps much shorter than those typically used in CAM5.
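
    The sketch below shows the generic form of such a time-step convergence test, using a stand-in scalar ODE rather than the MG2/CAM5 schemes: the same problem is run with successively halved steps, the finest run serves as the reference, and an observed order of accuracy is estimated.

      import numpy as np

      def run(dt, t_end=1.0, y0=1.0):
          """Forward Euler on y' = -5 y, a placeholder for the coupled physics step."""
          y = y0
          for _ in range(int(round(t_end / dt))):
              y += dt * (-5.0 * y)
          return y

      dts = [0.02, 0.01, 0.005, 0.0025]
      ref = run(0.0025 / 8.0)                  # fine-step reference solution
      errs = [abs(run(dt) - ref) for dt in dts]
      orders = [np.log(errs[i] / errs[i + 1]) / np.log(2.0) for i in range(len(errs) - 1)]
      print("errors:", ["%.2e" % e for e in errs])
      print("observed orders of accuracy:", ["%.2f" % p for p in orders])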

  5. Numerical Relativity meets Data Analysis

    NASA Astrophysics Data System (ADS)

    Schmidt, Patricia

    2016-03-01

    Gravitational waveforms (GW) from coalescing black hole binaries obtained by Numerical Relativity (NR) play a crucial role in the construction and validation of waveform models used as templates in GW matched filter searches and parameter estimation. In previous efforts, notably the NINJA and NINJA-2 collaborations, NR groups and data analysts worked closely together to use NR waveforms as mock GW signals to test the search and parameter estimation pipelines employed by LIGO. Recently, however, NR groups have been able to simulate hundreds of different binary black holes systems. It is desirable to directly use these waveforms in GW data analysis, for example to assess systematic errors in waveform models, to test general relativity or to appraise the limitations of aligned-spin searches among many other applications. In this talk, I will introduce recent developments that aim to fully integrate NR waveforms into the data analysis pipelines through a standardized interface. I will highlight a number of select applications for this new infrastructure.

  6. Errors of DWPF Frit analysis

    SciTech Connect

    Schumacher, R.F.

    1992-01-24

    Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis by a commercial laboratory. The following effort provides additional quantitative information on the variability of frit analyses at two commercial laboratories.

  7. Measurement error analysis of taxi meter

    NASA Astrophysics Data System (ADS)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

    The error test of the taximeter is divided into two aspects: (1) the time-error test of the taximeter and (2) the distance (usage) error test of the machine. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taximeter. The detection methods for time error and distance error are discussed as well. Under the same conditions, standard uncertainty components (Class A) are evaluated, while under different conditions standard uncertainty components (Class B) are also evaluated and measured repeatedly. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, "Taximeter Verification Regulation", which improves accuracy and efficiency considerably. In practice, the meter not only compensates for the lack of accuracy but also ensures that transactions between drivers and passengers are fair, enhancing the value of the taxi as a mode of transportation.
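
    A minimal sketch of the Class A (repeatability) evaluation mentioned above, with invented readings: repeated measurements under the same conditions yield a standard uncertainty of the mean.

      import numpy as np

      readings = np.array([10.02, 10.05, 9.98, 10.01, 10.03, 10.00])   # invented indicated values
      u_A = readings.std(ddof=1) / np.sqrt(len(readings))              # Class (Type) A standard uncertainty
      print(f"mean = {readings.mean():.3f}, Class A standard uncertainty = {u_A:.4f}")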

  8. Error analysis of finite element solutions for postbuckled plates

    NASA Technical Reports Server (NTRS)

    Sistla, Rajaram; Thurston, Gaylen A.

    1988-01-01

    An error analysis of results from finite-element solutions of problems in shell structures is further developed, incorporating the results of an additional numerical analysis by which oscillatory behavior is eliminated. The theory is extended to plates with initial geometric imperfections, and this novel analysis is programmed as a postprocessor for a general-purpose finite-element code. Numerical results are given for the case of a stiffened panel in compression and a plate loaded in shear by a 'picture-frame' test fixture.

  9. Analysis of field errors in existing undulators

    SciTech Connect

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs.

  10. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.

  11. Error analysis of quartz crystal resonator applications

    SciTech Connect

    Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.

    1996-12-31

    Quartz crystal resonators in chemical sensing applications are usually configured as the frequency determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating needs a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure to determine shear parameters.

  12. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.

  13. Battling hydrological monsters: Distinguishing between data uncertainty, structural errors and numerical artifacts in rainfall-runoff modelling

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Renard, Benjamin; Clark, Martyn P.; Fenicia, Fabrizio; Thyer, Mark; Kuczera, George; Franks, Stewart W.

    2010-05-01

    Confronted with frequently poor model performance, rainfall-runoff modellers have in the past blamed a plethora of sources of uncertainty, including rainfall and runoff errors, non-Gaussianities, model nonlinearities, parameter uncertainty, and just about everything else from Pandora's box. Moreover, recent work has suggested that astonishing numerical artifacts may arise from poor model numerics and confound the Hydrologist. There is a growing recognition that maintaining the lumped nebulous conspiracy of these errors is impeding progress in terms of understanding and, when possible, reducing predictive errors and gaining insights into catchment dynamics. In this study, we take the hydrological bull by its horns and begin disentangling individual sources of error. First, we outline robust and efficient error-control methods that ensure adequate numerical accuracy. We then demonstrate that the formidable interaction between data and structural errors, irresolvable in the absence of independent knowledge of data accuracy, can be tackled using geostatistical analysis of rainfall gauge networks and rating curve data. Structural model deficiencies can then begin being identified using flexible model configurations, paving the way for meaningful model comparison and improvement. Importantly, informative diagnostic measures are available for each component of the analysis. This paper surveys several recent developments along these research directions, summarized in a series of real-data case studies, and indicates areas of future interest.

  14. Numeracy, Literacy and Newman's Error Analysis

    ERIC Educational Resources Information Center

    White, Allan Leslie

    2010-01-01

    Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…

  15. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic

  16. Error Propagation Analysis for Quantitative Intracellular Metabolomics

    PubMed Central

    Tillack, Jana; Paczia, Nicole; Nöh, Katharina; Wiechert, Wolfgang; Noack, Stephan

    2012-01-01

    Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of intracellular metabolites, including error propagation during metabolome sample processing. Focusing on one specific protocol, we comprehensively investigate all currently known and accessible factors that ultimately impact the accuracy of intracellular metabolite concentration data. All intermediate steps are modeled, and their uncertainty with respect to the final concentration data is rigorously quantified. Finally, on the basis of a comprehensive metabolome dataset of Corynebacterium glutamicum, an integrated error propagation analysis for all parts of the model is conducted, and the most critical steps for intracellular metabolite quantification are detected. PMID:24957773
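
    The following is a minimal sketch of first-order Gaussian error propagation through a multiplicative quantification chain (concentration proportional to peak area and dilution, inversely proportional to extraction volume and biomass); it is a generic illustration, not the protocol-specific model of the paper, and the per-step relative errors are invented.

        import numpy as np

        def propagate_relative_error(rel_errors):
            """For a product/quotient of independent factors, relative variances add
            (first-order propagation), so the total relative error is the RSS."""
            rel_errors = np.asarray(list(rel_errors))
            return np.sqrt(np.sum(rel_errors ** 2))

        # Hypothetical per-step relative standard deviations (fractions).
        steps = {"peak area": 0.03, "dilution": 0.01,
                 "extraction volume": 0.02, "biomass": 0.05}
        total = propagate_relative_error(steps.values())
        print(f"total relative error: {100 * total:.1f}%")
        for name, r in steps.items():
            print(f"  {name:17s} contributes {100 * r**2 / total**2:.0f}% of the variance")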

  17. Control of coupling mass balance error in a process-based numerical model of surface-subsurface flow interaction

    NASA Astrophysics Data System (ADS)

    Fiorentini, Marcello; Orlandini, Stefano; Paniconi, Claudio

    2015-07-01

    A process-based numerical model of integrated surface-subsurface flow is analyzed in order to identify, track, and reduce the mass balance errors associated with the model's coupling scheme. The sources of coupling error include a surface-subsurface grid interface that requires node-to-cell and cell-to-node interpolation of exchange fluxes and ponding heads, and a sequential iterative time matching procedure that includes a time lag in these same exchange terms. Based on numerical experiments carried out for two synthetic test cases and for a complex drainage basin in northern Italy, it is shown that the coupling mass balance error increases during the flood recession limb when the rate of change in the fluxes exchanged between the surface and subsurface is highest. A dimensionless index that quantifies the degree of coupling and a saturated area index are introduced to monitor the sensitivity of the model to coupling error. Error reduction is achieved through improvements to the heuristic procedure used to control and adapt the time step interval and to the interpolation algorithm used to pass exchange variables from nodes to cells. The analysis presented illustrates the trade-offs between a flexible description of surface and subsurface flow processes and the numerical errors inherent in sequential iterative coupling with staggered nodal points at the land surface interface, and it reveals mitigation strategies that are applicable to all integrated models sharing this coupling and discretization approach.

  18. Error analysis of stochastic gradient descent ranking.

    PubMed

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error. PMID:24083315
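
    As a rough, self-contained sketch of the general technique named in the abstract (kernel-based stochastic gradient descent with a pairwise least squares loss), not the paper's exact algorithm or step-size schedule, the code below learns a ranking function f(x) = sum_k alpha_k K(x_k, x) one random pair at a time; the data and hyperparameters are illustrative.

        import numpy as np

        def kernel_sgd_rank(X, y, T=3000, eta0=0.5, lam=1e-3, gamma=0.2, seed=0):
            """Pairwise least-squares ranking by kernel stochastic gradient descent."""
            n = len(X)
            sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
            K = np.exp(-gamma * sq)                  # Gaussian kernel matrix
            alpha = np.zeros(n)
            rng = np.random.default_rng(seed)
            for t in range(1, T + 1):
                i, j = rng.integers(n, size=2)       # sample a random pair
                eta = eta0 / np.sqrt(t)              # decaying step size
                resid = (K[i] @ alpha - K[j] @ alpha) - (y[i] - y[j])
                alpha *= 1.0 - eta * lam             # ridge-style shrinkage
                alpha[i] -= eta * resid              # functional gradient step
                alpha[j] += eta * resid
            return lambda x: np.exp(-gamma * np.sum((X - x) ** 2, axis=1)) @ alpha

        rng = np.random.default_rng(1)
        X = rng.normal(size=(80, 3))
        y = X @ np.array([1.0, -2.0, 0.5])           # latent relevance scores
        f = kernel_sgd_rank(X, y)
        scores = np.array([f(x) for x in X])
        agree = np.mean(np.sign(scores[:, None] - scores[None, :])
                        == np.sign(y[:, None] - y[None, :]))
        print(f"fraction of correctly ordered pairs: {agree:.2f}")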

  19. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  20. Numerical analysis of engine instability

    NASA Astrophysics Data System (ADS)

    Habiballah, M.; Dubois, I.

    Following a literature review on numerical analyses of combustion instability, to give the state of the art in the area, the paper describes the ONERA methodology used to analyze the combustion instability in liquid propellant engines. Attention is also given to a model (named Phedre) which describes the unsteady turbulent two-phase reacting flow in a liquid rocket engine combustion chamber. The model formulation includes axial or radial propellant injection, baffles, and acoustic resonators modeling, and makes it possible to treat different engine types. A numerical analysis of a cryogenic engine stability is presented, and the results of the analysis are compared with results of tests of the Viking engine and the gas generator of the Vulcain engine, showing good qualitative agreement and some general trends between experiments and numerical analysis.

  1. Statistical Error Analysis for Digital Recursive Filters

    NASA Astrophysics Data System (ADS)

    Wu, Kevin Chi-Rung

    The study of arithmetic roundoff error has attracted many researchers to investigate how the signal-to-noise ratio (SNR) is affected by algorithmic parameters, especially since VLSI (Very Large Scale Integrated circuit) technologies have become more promising for digital signal processing. Typically, digital signal processing, either with or without matrix inversion, will have tradeoffs in speed and processor cost. Hence, the problems of area-time efficient matrix computation and roundoff error behavior analysis play an important role in this dissertation. A newly developed non-Cholesky square-root matrix will be discussed which precludes arithmetic roundoff error over some interesting operations, such as complex-valued matrix inversion, with its SNR analysis and error propagation effects. A non-CORDIC parallelism approach for complex-valued matrices will be presented to upgrade speed at the cost of a moderate increase in processor cost. The lattice filter will also be looked into, in such a way that one can understand the SNR behavior under the conditions of different inputs in the joint process system. A pipelining technique will be demonstrated to show the possibility of a high-speed non-matrix-inversion lattice filter. The floating point arithmetic models used in this study have been focused on effective methodologies that have been proved to be reliable and feasible. With the models in hand, we study the roundoff error behavior based on some statistical assumptions. Results are demonstrated by carrying out simulations to show the feasibility of the SNR analysis. We will observe that the non-Cholesky square-root matrix has the advantage of saving time of O(n^3) as well as a reduced realization cost. It will be apparent that for a Kalman filter the register size increases significantly if a pole of the system matrix moves closer to the edge of the unit circle. By comparing roundoff error effect due to floating-point and fixed-point arithmetics, we

  2. Error estimates of numerical methods for the nonlinear Dirac equation in the nonrelativistic limit regime

    NASA Astrophysics Data System (ADS)

    Bao, WeiZhu; Cai, YongYong; Jia, XiaoWei; Yin, Jia

    2016-08-01

    We present several numerical methods and establish their error estimates for the discretization of the nonlinear Dirac equation in the nonrelativistic limit regime, involving a small dimensionless parameter $0<\\varepsilon\\ll 1$ which is inversely proportional to the speed of light. In this limit regime, the solution is highly oscillatory in time, i.e. there are propagating waves with wavelength $O(\\varepsilon^2)$ and $O(1)$ in time and space, respectively. We begin with the conservative Crank-Nicolson finite difference (CNFD) method and establish rigorously its error estimate which depends explicitly on the mesh size $h$ and time step $\\tau$ as well as the small parameter $0<\\varepsilon\\le 1$. Based on the error bound, in order to obtain `correct' numerical solutions in the nonrelativistic limit regime, i.e. $0<\\varepsilon\\ll 1$, the CNFD method requests the $\\varepsilon$-scalability: $\\tau=O(\\varepsilon^3)$ and $h=O(\\sqrt{\\varepsilon})$. Then we propose and analyze two numerical methods for the discretization of the nonlinear Dirac equation by using the Fourier spectral discretization for spatial derivatives combined with the exponential wave integrator and time-splitting technique for temporal derivatives, respectively. Rigorous error bounds for the two numerical methods show that their $\\varepsilon$-scalability is improved to $\\tau=O(\\varepsilon^2)$ and $h=O(1)$ when $0<\\varepsilon\\ll 1$ compared with the CNFD method. Extensive numerical results are reported to confirm our error estimates.

  3. Microlens assembly error analysis for light field camera based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling distance error, movement error, and rotation error that can appear during microlens installation. By examining these images, the sub-aperture images, and the refocused images, we found that the images present different degrees of fuzziness and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, obscured images, and other distortions that result in unclear refocused images.
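
    A minimal Monte Carlo sketch in the spirit of the analysis above: assembly errors are drawn from assumed tolerances and pushed through simple geometric image-degradation metrics (defocus blur from the coupling-distance error, spot shift from decenter and array rotation). All parameters are hypothetical, and the metrics are toy stand-ins for the home-built light field simulator used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 10_000                        # Monte Carlo trials

        # Hypothetical microlens parameters and 1-sigma assembly tolerances.
        f_ml = 0.5                        # microlens focal length, mm
        pitch = 0.1                       # microlens aperture (pitch), mm
        sigma_dz = 0.01                   # coupling-distance error, mm
        sigma_dxy = 0.002                 # lateral placement error, mm
        sigma_rot = np.deg2rad(0.05)      # rotation error of the array, rad
        half_array = 5.0                  # centre-to-edge distance of the array, mm

        dz = rng.normal(0.0, sigma_dz, N)
        dxy = rng.normal(0.0, sigma_dxy, (N, 2))
        rot = rng.normal(0.0, sigma_rot, N)

        # Toy metrics: geometric defocus blur radius and lateral spot shift
        # behind an edge microlens.
        blur_radius = 0.5 * pitch * np.abs(dz) / f_ml
        spot_shift = np.linalg.norm(dxy, axis=1) + half_array * np.abs(rot)

        print(f"defocus blur radius: mean {blur_radius.mean() * 1e3:.2f} um, "
              f"95th percentile {np.percentile(blur_radius, 95) * 1e3:.2f} um")
        print(f"spot shift:          mean {spot_shift.mean() * 1e3:.2f} um, "
              f"95th percentile {np.percentile(spot_shift, 95) * 1e3:.2f} um")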

  4. Error Analysis of Modified Langevin Dynamics

    NASA Astrophysics Data System (ADS)

    Redon, Stephane; Stoltz, Gabriel; Trstanova, Zofia

    2016-06-01

    We consider Langevin dynamics associated with a modified kinetic energy vanishing for small momenta. This allows us to freeze slow particles, and hence avoid the re-computation of inter-particle forces, which leads to computational gains. On the other hand, the statistical error may increase since there are a priori more correlations in time. The aim of this work is first to prove the ergodicity of the modified Langevin dynamics (which fails to be hypoelliptic), and next to analyze how the asymptotic variance on ergodic averages depends on the parameters of the modified kinetic energy. Numerical results illustrate the approach, both for low-dimensional systems where we resort to a Galerkin approximation of the generator, and for more realistic systems using Monte Carlo simulations.
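
    The sketch below illustrates the basic mechanism discussed in the abstract: a Langevin integrator in which the velocity is the gradient of a modified kinetic energy that vanishes for small momenta, so slow particles do not move. The piecewise-linear velocity profile, the double-well potential, and all parameters are simple stand-ins chosen for illustration, not the smooth modification analysed in the paper.

        import numpy as np

        def modified_velocity(p, a=0.5, b=1.0):
            """dU_kin/dp: zero for |p| <= a, equal to p for |p| >= b, linear ramp between."""
            return np.where(np.abs(p) <= a, 0.0,
                            np.where(np.abs(p) >= b, p,
                                     np.sign(p) * (np.abs(p) - a) * b / (b - a)))

        def force(q):
            """-dV/dq for the double-well potential V(q) = (q^2 - 1)^2."""
            return -4.0 * q * (q ** 2 - 1.0)

        def langevin_modified(n_steps=100_000, dt=1e-3, gamma=1.0, beta=1.0, seed=0):
            """Euler-Maruyama discretization of dq = U'(p) dt,
            dp = (-V'(q) - gamma U'(p)) dt + sqrt(2 gamma / beta) dW."""
            rng = np.random.default_rng(seed)
            q, p = 0.0, 0.0
            traj = np.empty(n_steps)
            for k in range(n_steps):
                v = modified_velocity(p)
                p = p + dt * (force(q) - gamma * v) + np.sqrt(2.0 * gamma * dt / beta) * rng.normal()
                q = q + dt * modified_velocity(p)
                traj[k] = q
            return traj

        traj = langevin_modified()
        print(f"ergodic averages: <q> = {traj.mean():+.3f}, <q^2> = {np.mean(traj ** 2):.3f}")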

  6. Accumulation of errors in numerical simulations of chemically reacting gas dynamics

    NASA Astrophysics Data System (ADS)

    Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.

    2015-12-01

    The aim of the present study is to investigate problems of numerical simulation precision and stochastic error accumulation in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. An investigation of the influence of computational grid size on simulation precision and computational speed was performed, as was an investigation of the accumulation of errors for simulations employing different strategies of computation.

  7. "Error Analysis." A Hard Look at Method in Madness.

    ERIC Educational Resources Information Center

    Brown, Cheryl

    1976-01-01

    The origins of error analysis as a pedagogical tool can be traced to the beginnings of the notion of interference and the use of contrastive analysis (CA) to predict learners' errors. With the focus narrowing to actual errors committed by students, it was found that all learners of English as a second language seemed to make errors in the same…

  8. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Besides, pointing error analysis of the Risley-prism system has provided results for the case when the component errors, prism orientation errors, and assembly errors are certain. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration. PMID:25607958

  9. Error analysis for the Fourier domain offset estimation algorithm

    NASA Astrophysics Data System (ADS)

    Wei, Ling; He, Jieling; He, Yi; Yang, Jinsheng; Li, Xiqi; Shi, Guohua; Zhang, Yudong

    2016-02-01

    The offset estimation algorithm is crucial for the accuracy of the Shack-Hartmann wave-front sensor. Recently, the Fourier Domain Offset (FDO) algorithm has been proposed for offset estimation. Similar to other algorithms, the accuracy of FDO is affected by noise such as background noise, photon noise, and 'fake' spots. However, no adequate quantitative error analysis has been performed for FDO in previous studies, which is of great importance for practical applications of the FDO. In this study, we quantitatively analysed how the estimation error of FDO is affected by noise based on theoretical deduction, numerical simulation, and experiments. The results demonstrate that the standard deviation of the wobbling error is: (1) inversely proportional to the raw signal to noise ratio, and proportional to the square of the sub-aperture size in the presence of background noise; and (2) proportional to the square root of the intensity in the presence of photonic noise. Furthermore, the upper bound of the estimation error is proportional to the intensity of 'fake' spots and the sub-aperture size. The results of the simulation and experiments agreed with the theoretical analysis.

  10. Towards a More Rigorous Analysis of Foreign Language Errors.

    ERIC Educational Resources Information Center

    Abbott, Gerry

    1980-01-01

    Presents a precise and detailed process to be used in error analysis. The process is proposed as a means of making research in error analysis more accessible and useful to others, as well as assuring more objectivity. (Author/AMH)

  11. Error propagation in the numerical solutions of the differential equations of orbital mechanics

    NASA Technical Reports Server (NTRS)

    Bond, V. R.

    1982-01-01

    The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell, Encke, and Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast an element formulation has zero eigenvalues and is numerically stable.
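
    As a hedged numerical check of the eigenvalue statement above (a generic illustration, not taken from the report), the sketch below linearizes the Cowell equations r'' = -mu r / |r|^3 about a point on a circular orbit and prints the eigenvalues of the resulting Jacobian; one real positive eigenvalue, sqrt(2 mu / r^3), appears.

        import numpy as np

        mu = 398600.4418                        # km^3/s^2 (Earth)
        r0 = np.array([7000.0, 0.0, 0.0])       # km, illustrative position

        r = np.linalg.norm(r0)
        rhat = r0 / r
        G = mu / r ** 3 * (3.0 * np.outer(rhat, rhat) - np.eye(3))   # d(accel)/dr
        A = np.block([[np.zeros((3, 3)), np.eye(3)],                 # state (r, v)
                      [G, np.zeros((3, 3))]])

        print(np.round(np.sort_complex(np.linalg.eigvals(A)), 6))
        # The positive real eigenvalue amplifies integration error, consistent
        # with the abstract; element formulations avoid it (zero eigenvalues).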

  12. Numerical analysis of Stirling engine

    NASA Astrophysics Data System (ADS)

    Sekiya, Hiroshi

    1992-11-01

    A simulation model of the Stirling engine based on the third-order method of analysis is presented. The fundamental equations are derived by applying the conservation laws of physics to the machine model, the characteristic equations for heat transfer and gas flow are represented, and a numerical calculation technique using these equations is discussed. A numerical model of the system for balancing pressure in four cylinders is included in the simulation model. Calculation results from the model are compared with experimental results. A comparative study of engine performance using helium and hydrogen as the working gas is conducted, clarifying the heat transfer and gas flow characteristics, and the effects of temperature conditions in the hot and cold engine sections on driving conditions. The design optimization of the heat exchanger is addressed.

  13. Trends in MODIS Geolocation Error Analysis

    NASA Technical Reports Server (NTRS)

    Wolfe, R. E.; Nishihama, Masahiro

    2009-01-01

    Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.

  14. Experimental and numerical study of error fields in the CNT stellarator

    NASA Astrophysics Data System (ADS)

    Hammond, K. C.; Anichowski, A.; Brenner, P. W.; Pedersen, T. S.; Raftopoulos, S.; Traverso, P.; Volpe, F. A.

    2016-07-01

    Sources of error fields were indirectly inferred in a stellarator by reconciling measured and computed flux surfaces. Sources considered so far include the displacements and tilts of the four circular coils featured in the simple CNT stellarator. The flux surfaces were measured by means of an electron beam and fluorescent rod, and were computed by means of a Biot–Savart field-line tracing code. If the ideal coil locations and orientations are used in the computation, agreement with measurements is poor. Discrepancies are ascribed to errors in the positioning and orientation of the in-vessel interlocked coils. To that end, an iterative numerical method was developed. A Newton–Raphson algorithm searches for the coils’ displacements and tilts that minimize the discrepancy between the measured and computed flux surfaces. This method was verified by misplacing and tilting the coils in a numerical model of CNT, calculating the flux surfaces that they generated, and testing the algorithm’s ability to deduce the coils’ displacements and tilts. Subsequently, the numerical method was applied to the experimental data, arriving at a set of coil displacements whose resulting field errors exhibited significantly improved agreement with the experimental results.

  15. Pathway Analysis Software: Annotation Errors and Solutions

    PubMed Central

    Henderson-MacLennan, Nicole K.; Papp, Jeanette C.; Talbot, C. Conover; McCabe, Edward R.B.; Presson, Angela P.

    2010-01-01

    Genetic databases contain a variety of annotation errors that often go unnoticed due to the large size of modern genetic data sets. Interpretation of these data sets requires bioinformatics tools that may contribute to this problem. While providing gene symbol annotations for identifiers (IDs) such as microarray probeset, RefSeq, GenBank and Entrez Gene is seemingly trivial, the accuracy is fundamental to any subsequent conclusions. We examine gene symbol annotations and results from three commercial pathway analysis software (PAS) packages: Ingenuity Pathways Analysis, GeneGO and Pathway Studio. We compare gene symbol annotations and canonical pathway results over time and among different input ID types. We find that PAS results can be affected by variation in gene symbol annotations across software releases and the input ID type analyzed. As a result, we offer suggestions for using commercial PAS and reporting microarray results to improve research quality. We propose a wiki type website to facilitate communication of bioinformatics software problems within the scientific community. PMID:20663702

  16. A technique for human error analysis (ATHEANA)

    SciTech Connect

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  17. Reducing the error growth in the numerical propagation of satellite orbits

    NASA Astrophysics Data System (ADS)

    Ferrandiz, Jose M.; Vigo, Jesus; Martin, P.

    1991-12-01

    An algorithm especially designed for the long term numerical integration of perturbed oscillators, in one or several frequencies, is presented. The method is applied to the numerical propagation of satellite orbits, using focal variables, and the results concerning highly eccentric or nearly circular cases are reported. The method performs particularly well for high eccentricity. For e = 0.99 and J2 + J3 perturbations it allows the last perigee after 1000 revolutions to be computed with an error of less than 1 cm, with only 80 derivative evaluations per revolution. In general the approach provides about a hundred times more accuracy than Bettis methods over one thousand revolutions.

  18. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2013-01-01

    The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
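
    A brief sketch of the data-analysis idea (not the article's worked examples): the order of convergence p can be estimated from three successive errors via p ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}); Newton's method for sqrt(2) is used here as the illustrative iteration and exhibits p ≈ 2.

        import numpy as np

        root = np.sqrt(2.0)
        x = 3.0
        errors = []
        for _ in range(6):
            x = 0.5 * (x + 2.0 / x)              # Newton step for f(x) = x^2 - 2
            errors.append(abs(x - root))

        for e0, e1, e2 in zip(errors, errors[1:], errors[2:]):
            if e2 == 0.0:
                break                            # converged to machine precision
            p = np.log(e2 / e1) / np.log(e1 / e0)
            print(f"estimated order of convergence p = {p:.2f}")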

  19. The Vertical Error Characteristics of GOES-derived Winds: Description and Impact on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the error of the GOES winds range from approx. 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A

  20. Using PASCAL for numerical analysis

    NASA Technical Reports Server (NTRS)

    Volper, D.; Miller, T. C.

    1978-01-01

    The data structures and control structures of PASCAL enhance the coding ability of the programmer. Proposed extensions to the language further increase its usefulness in writing numeric programs and support packages for numeric programs.

  1. Solar Tracking Error Analysis of Fresnel Reflector

    PubMed Central

    Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie

    2014-01-01

    Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analyzing the influence of the main factors on the sun-tracking rotation angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by the difference between the rotation axis and the true north meridian is, under certain conditions, maximum at noon and reduces gradually in the morning and afternoon. The tracking error caused by other deviations, such as rotating eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon. PMID:24895664

  2. Analysis of thematic map classification error matrices.

    USGS Publications Warehouse

    Rosenfield, G.H.

    1986-01-01

    The classification error matrix expresses the counts of agreement and disagreement between the classified categories and their verification. Thematic mapping experiments compare variables such as multiple photointerpretation or scales of mapping, and produce one or more classification error matrices. This paper presents a tutorial to implement a typical problem of a remotely sensed data experiment for solution by the linear model method.
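
    For orientation, the sketch below computes the standard summary statistics of a classification error matrix (overall, user's, and producer's accuracies, and Cohen's kappa) for a small hypothetical matrix; it illustrates the object being analyzed, not the linear model method of the paper.

        import numpy as np

        # Rows = classified category, columns = reference (verification) category.
        cm = np.array([[50,  3,  2],
                       [ 5, 40,  4],
                       [ 1,  6, 39]], dtype=float)   # hypothetical counts

        n = cm.sum()
        overall = np.trace(cm) / n                   # overall accuracy
        users = np.diag(cm) / cm.sum(axis=1)         # 1 - commission error
        producers = np.diag(cm) / cm.sum(axis=0)     # 1 - omission error
        p_e = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2   # chance agreement
        kappa = (overall - p_e) / (1.0 - p_e)        # Cohen's kappa

        print(f"overall accuracy {overall:.3f}, kappa {kappa:.3f}")
        print("user's accuracy:    ", np.round(users, 3))
        print("producer's accuracy:", np.round(producers, 3))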

  3. Analysis of modeling errors in system identification

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    This paper is concerned with the identification of a system in the presence of several error sources. Following some basic definitions, the notion of 'near-equivalence in probability' is introduced using the concept of near-equivalence between a model and process. Necessary and sufficient conditions for the identifiability of system parameters are given. The effect of structural error on the parameter estimates for both deterministic and stochastic cases are considered.

  4. Error Analysis of Terrestrial Laser Scanning Data by Means of Spherical Statistics and 3D Graphs

    PubMed Central

    Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G.; Arias, Pedro

    2010-01-01

    This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to produce the graphics automatically. The results indicate that the proposed method is advantageous as it offers a more complete analysis of the positional accuracy, including the angular error component, the uniformity of the vector distribution, and error isotropy, in addition to the modular error component given by linear statistics. PMID:22163461
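
    A compact sketch of the two complementary views described above, using synthetic error vectors rather than the paper's TLS data: linear (modular) statistics of the error-vector lengths, and simple spherical statistics of the error-vector directions, where a mean resultant length R near zero suggests an isotropic, uniformly distributed error direction.

        import numpy as np

        rng = np.random.default_rng(0)
        # 53 synthetic 3D error vectors (mm) with a small systematic offset.
        errors = rng.normal(0.0, 1.0, (53, 3)) + np.array([0.3, 0.0, 0.1])

        modules = np.linalg.norm(errors, axis=1)
        print(f"modular error: mean {modules.mean():.2f} mm, "
              f"RMS {np.sqrt(np.mean(modules ** 2)):.2f} mm")

        units = errors / modules[:, None]            # unit direction vectors
        mean_dir = units.mean(axis=0)
        R = np.linalg.norm(mean_dir)                 # mean resultant length
        colat = np.degrees(np.arccos(mean_dir[2] / R))
        azim = np.degrees(np.arctan2(mean_dir[1], mean_dir[0]))
        print(f"mean resultant length R = {R:.2f} (near 0: isotropic directions)")
        print(f"mean error direction: colatitude {colat:.1f} deg, azimuth {azim:.1f} deg")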

  5. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Goddard Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  6. On the continuum-scale simulation of gravity-driven fingers with hysteretic Richards equation: Truncation error induced numerical artifacts

    SciTech Connect

    ELIASSI,MEHDI; GLASS JR.,ROBERT J.

    2000-03-08

    The authors consider the ability of the numerical solution of Richards equation to model gravity-driven fingers. Although gravity-driven fingers can be easily simulated using a partial downwind averaging method, they find the fingers are purely artificial, generated by the combined effects of truncation error induced oscillations and capillary hysteresis. Since Richards equation can only yield a monotonic solution for standard constitutive relations and constant flux boundary conditions, it is not the valid governing equation to model gravity-driven fingers, and therefore is also suspect for unsaturated flow in initially dry, highly nonlinear, and hysteretic media where these fingers occur. However, analysis of truncation error at the wetting front for the partial downwind method suggests the required mathematical behavior of a more comprehensive and physically based modeling approach for this region of parameter space.

  7. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
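
    As a generic illustration of the linearized (Gaussian) propagation step mentioned above, and not the orbital estimation machinery of the paper, the sketch below propagates an assumed covariance of two orbital elements to the perihelion distance q = a(1 - e) via a finite-difference Jacobian, cov(q) = J Sigma J^T.

        import numpy as np

        def jacobian(f, x, h=1e-6):
            """Forward-difference Jacobian of f at x."""
            f0 = np.atleast_1d(f(x))
            J = np.zeros((f0.size, x.size))
            for k in range(x.size):
                xk = x.copy()
                xk[k] += h
                J[:, k] = (np.atleast_1d(f(xk)) - f0) / h
            return J

        perihelion = lambda el: el[0] * (1.0 - el[1])     # q = a(1 - e)
        elements = np.array([2.7, 0.15])                  # a [AU], e (illustrative)
        cov_el = np.array([[4.0e-6, 1.0e-7],              # assumed element covariance
                           [1.0e-7, 2.5e-8]])

        J = jacobian(perihelion, elements)
        cov_q = J @ cov_el @ J.T
        print(f"1-sigma uncertainty in perihelion distance: {np.sqrt(cov_q[0, 0]):.2e} AU")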

  8. Numerical likelihood analysis of cosmic ray anisotropies

    SciTech Connect

    Carlos Hojvat et al.

    2003-07-02

    A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.

  9. Two Ways of Looking at Error-Analysis.

    ERIC Educational Resources Information Center

    Strevens, Peter

    In this paper the author discusses "error-analysis"; its emergence as a recognized technique in applied linguistics, with a function in the preparation of new or improved teaching materials; and its new place in relation to theories of language learning and language teaching. He believes that error-analysis has suddenly found a new importance, and…

  10. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
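
    A one-dimensional toy version of the Monte Carlo procedure described above (invented beam and error parameters, not the Loma Linda measurements): a line of Gaussian pencil-beam spots is delivered many times with random spot-position and intensity errors, and the rms dose variation in each voxel is reported as a percentage of a nominal prescription.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.arange(0.0, 80.0, 2.5)            # voxel centres, mm
        spots = np.arange(5.0, 76.0, 5.0)        # nominal spot positions, mm
        sigma_beam = 5.0                         # pencil-beam width, mm
        sigma_pos, sigma_int = 0.5, 0.01         # 1-sigma position (mm) and intensity errors

        def deliver(positions, weights):
            """Dose profile from Gaussian spots at the given positions and weights."""
            return np.sum(weights[:, None] * np.exp(
                -0.5 * ((x[None, :] - positions[:, None]) / sigma_beam) ** 2), axis=0)

        nominal = deliver(spots, np.ones(len(spots)))
        prescribed = nominal.mean()              # stand-in for the prescribed dose level

        doses = np.array([deliver(spots + rng.normal(0.0, sigma_pos, len(spots)),
                                  1.0 + rng.normal(0.0, sigma_int, len(spots)))
                          for _ in range(500)])
        rms_error = doses.std(axis=0)            # per-voxel rms over repeated deliveries
        print(f"maximum per-voxel rms dose error: "
              f"{100.0 * rms_error.max() / prescribed:.2f}% of the nominal dose")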

  11. Numerical Package in Computer Supported Numeric Analysis Teaching

    ERIC Educational Resources Information Center

    Tezer, Murat

    2007-01-01

    At universities in the faculties of Engineering, Sciences, Business and Economics together with higher education in Computing, it is stated that because of the difficulty, calculators and computers can be used in Numerical Analysis (NA). In this study, the learning computer supported NA will be discussed together with important usage of the…

  12. Size and Shape Analysis of Error-Prone Shape Data

    PubMed Central

    Du, Jiejun; Dryden, Ian L.; Huang, Xianzheng

    2015-01-01

    We consider the problem of comparing sizes and shapes of objects when landmark data are prone to measurement error. We show that naive implementation of ordinary Procrustes analysis that ignores measurement error can compromise inference. To account for measurement error, we propose the conditional score method for matching configurations, which guarantees consistent inference under mild model assumptions. The effects of measurement error on inference from naive Procrustes analysis and the performance of the proposed method are illustrated via simulation and application in three real data examples. Supplementary materials for this article are available online. PMID:26109745
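
    For context, the sketch below implements the ordinary ("naive") Procrustes matching step referred to above: translate, scale, and rotate one landmark configuration onto another via an SVD; the conditional score correction for landmark measurement error proposed in the paper is not reproduced here, and the landmark data are synthetic.

        import numpy as np

        def procrustes(X, Y):
            """Align Y to X (both n_landmarks x dim); return aligned Y and residual."""
            Xc = X - X.mean(axis=0)
            Yc = Y - Y.mean(axis=0)
            U, s, Vt = np.linalg.svd(Xc.T @ Yc)
            R = U @ Vt                               # optimal rotation (may reflect)
            scale = s.sum() / np.sum(Yc ** 2)        # optimal isotropic scale
            Y_aligned = scale * Yc @ R.T + X.mean(axis=0)
            return Y_aligned, np.linalg.norm(X - Y_aligned)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(6, 2))                  # reference landmark configuration
        angle = 0.4
        Rot = np.array([[np.cos(angle), -np.sin(angle)],
                        [np.sin(angle),  np.cos(angle)]])
        # Rotated, scaled, shifted copy with small landmark measurement noise.
        Y = 1.7 * X @ Rot.T + np.array([3.0, -1.0]) + rng.normal(0.0, 0.01, X.shape)
        _, resid = procrustes(X, Y)
        print(f"residual after Procrustes alignment: {resid:.3f}")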

  13. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    SciTech Connect

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.

  14. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  15. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1997-01-01

    We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a

  16. The error analysis and online measurement of linear slide motion error in machine tools

    NASA Astrophysics Data System (ADS)

    Su, H.; Hong, M. S.; Li, Z. J.; Wei, Y. L.; Xiong, S. B.

    2002-06-01

    A new accurate two-probe time-domain method is put forward to measure the straight-going component of motion error in machine tools. The non-periodic and non-closing characteristics of the straightness profile error are liable to bring about higher-order harmonic component distortion in the measurement results. However, this distortion can be avoided by the new accurate two-probe time-domain method through the symmetry continuation algorithm, uniformity, and the least squares method. The harmonic suppression is analysed in detail through modern control theory. Both the straight-going component of motion error in the machine tool and the profile error of a workpiece manufactured on this machine can be measured at the same time. All of this information is available to diagnose the origin of faults in machine tools. The analysis result is proved to be correct through experiment.

  17. Mode error analysis of impedance measurement using twin wires

    NASA Astrophysics Data System (ADS)

    Huang, Liang-Sheng; Yoshiro, Irie; Liu, Yu-Dong; Wang, Sheng

    2015-03-01

    Both longitudinal and transverse coupling impedance for some critical components need to be measured for accelerator design. The twin wires method is widely used to measure longitudinal and transverse impedance on the bench. A mode error is induced when the twin wires method is used with a two-port network analyzer. Here, the mode error is analyzed theoretically and an example analysis is given. Moreover, the mode error in the measurement is a few percent when a hybrid with no less than 25 dB isolation and a splitter with no less than 20 dB magnitude error are used. Supported by Natural Science Foundation of China (11175193, 11275221)

  18. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

    Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302

  19. A Numerical Study of Some Potential Sources of Error in Side-by-Side Seismometer Evaluations

    USGS Publications Warehouse

    Holcomb, L. Gary

    1990-01-01

    INTRODUCTION This report presents the results of a series of computer simulations of potential errors in test data, which might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNR's). This report extends this analysis to high SNR's to determine the limitations of the direct method for calculating the noise levels at signal-to-noise levels which are much higher than presented previously (see Holcomb 1989). Next, the method is used to analyze a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of bandshaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real world data because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and bandshaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that

  20. Classification error analysis in stereo vision

    NASA Astrophysics Data System (ADS)

    Gross, Eitan

    2015-07-01

    Depth perception in humans is obtained by comparing images generated by the two eyes to each other. Given the highly stochastic nature of neurons in the brain, this comparison requires maximizing the mutual information (MI) between the neuronal responses in the two eyes by distributing the coding information across a large number of neurons. Unfortunately, MI is not an extensive quantity, making it very difficult to predict how the accuracy of depth perception will vary with the number of neurons (N) in each eye. To address this question we present a two-arm, distributed decentralized sensors detection model. We demonstrate how the system can extract depth information from a pair of discrete valued stimuli represented here by a pair of random dot-matrix stereograms. Using the theory of large deviations we calculated the rate at which the global average error probability of our detector and the MI between the two arms' outputs vary with N. We found that MI saturates exponentially with N at a rate which decays as 1 / N. The rate function approached the Chernoff distance between the two probability distributions asymptotically. Our results may have implications in computer stereo vision that uses Hebbian-based algorithms for terrestrial navigation.

  1. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    NASA Technical Reports Server (NTRS)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  2. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux, and a wide spectral range, which overcomes the contradiction between high flux and high stability and therefore has important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, different error laws arise in the LASIS imaging process, and its data processing is correspondingly complicated. In order to improve the accuracy of spectral detection and to support quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging need to be understood. In this paper, the LASIS errors are classified as interferogram error, radiometric correction error, and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS with combined temporal and spatial modulation is experimentally analyzed, together with the errors from the radiometric correction and spectral inversion processes.

  3. Recent results on parametric analysis of differential Omega error

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.; Piserchia, P. V.

    1974-01-01

    Previous tests of the differential Omega concept and an analysis of the characteristics of VLF propagation make it possible to delineate various factors which might contribute to the variation of errors in phase measurements at an Omega receiver site. An experimental investigation is conducted to determine the effect of each of a number of parameters on differential Omega accuracy and to develop prediction equations. The differential Omega error form is considered and preliminary results are presented of the regression analysis used to study differential error.

  4. Errors of DWPF frit analysis: Final report

    SciTech Connect

    Schumacher, R.F.

    1993-01-20

    Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis from a commercial analytical laboratory. The following effort provides additional quantitative information on the variability of frit chemical analyses at two commercial laboratories. Identical samples of IDMS Frit 202 were chemically analyzed at two commercial laboratories and at three different times over a period of four months. The SRL-ADS analyses, after correction with the reference standard and normalization, provided confirmatory information, but did not detect the low silica level in one of the frit samples. A methodology utilizing elliptical limits for confirming the certificate of conformance or confirmatory analysis was introduced and recommended for use when the analysis values are close but not within the specification limits. It was also suggested that the lithia specification limits might be reduced as long as CELS is used to confirm the analysis.

  5. Dose error analysis for a scanned proton beam delivery system.

    PubMed

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm(3) target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy. PMID:21076200

  6. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  7. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  8. Abundance recovery error analysis using simulated AVIRIS data

    NASA Technical Reports Server (NTRS)

    Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.

    1992-01-01

    Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed-form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario which is modeled as being derived from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data.
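
    For a linear mixing model the closed-form covariance result can be written down directly: with abundances estimated by least squares from r = Ma + n, white noise of variance σ² gives cov(â) = σ²(MᵀM)⁻¹. The sketch below compares that expression against a brute-force Monte Carlo run; the endmember matrix, noise level and abundances are invented for illustration and are not the AVIRIS scenario.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy endmember matrix: 50 bands x 3 materials (values are illustrative only).
      n_bands, n_end = 50, 3
      M = np.abs(rng.normal(0.5, 0.2, (n_bands, n_end)))
      a_true = np.array([0.5, 0.3, 0.2])              # true abundances
      sigma = 0.01                                    # assumed per-band measurement noise

      # Closed-form covariance analysis: cov(a_hat) = sigma^2 (M^T M)^-1 for white noise.
      cov_closed = sigma**2 * np.linalg.inv(M.T @ M)

      # Monte Carlo: repeat noisy unmixing many times and compare the empirical covariance.
      estimates = []
      for _ in range(5000):
          r = M @ a_true + rng.normal(0.0, sigma, n_bands)
          estimates.append(np.linalg.lstsq(M, r, rcond=None)[0])
      cov_mc = np.cov(np.array(estimates), rowvar=False)

      print("closed-form abundance std:", np.sqrt(np.diag(cov_closed)))
      print("Monte Carlo abundance std:", np.sqrt(np.diag(cov_mc)))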

  9. Errors of DWPF Frit analysis. Final report

    SciTech Connect

    Schumacher, R.F.

    1992-01-24

    Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis by a commercial laboratory. The following effort provides additional quantitative information on the variability of frit analyses at two commercial laboratories.

  10. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.

    2004-01-01

    The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

  11. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961

  12. Parameter estimation and error analysis in environmental modeling and computation

    NASA Technical Reports Server (NTRS)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.

  13. Numerical analysis of wave scattering

    NASA Astrophysics Data System (ADS)

    Beran, Mark J.

    1994-12-01

    The following topics were studied in detail during the report period: (1) Combined volume and surface scattering in a channel, using a modal formulation. (2) Two-way formulation to account for backscattering in a channel. (3) Data analysis to determine vertical and horizontal correlation lengths of the random index-of-refraction fluctuations in a channel. (4) The effect of random fluctuations on the two-frequency coherence function in a shallow channel. (5) Approximate eigenfunctions and eigenvalues for linear sound-speed profiles. (6) The effect of sea-water absorption on scattering in a shallow channel.

  14. Optimization design and error analysis of photoelectric autocollimator

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Bixi; Hu, Mingjun; Dong, Mingli

    2012-11-01

    A photoelectric autocollimator employing an area charge-coupled device (CCD) as its target receiver, designed specifically for numerical stage calibration, is optimized, and its various error factors are analyzed. Using the ZEMAX software, the image quality is optimized to ensure that the spherical and coma aberrations of the collimating system are less than 0.27 mm and 0.035 mm respectively; the root mean square (RMS) spot radius is close to 6.45 microns, which matches the resolution of the CCD, and the modulation transfer function (MTF) is greater than 0.3 over the full field of view and 0.5 in the centre of the field at the corresponding frequency. The errors originate mainly from fabrication and alignment, each of which is about 0.4". The error synthesis shows that the instrument can meet the design accuracy requirements, which is also consistent with the experiment.

  15. Numerical solutions and error estimations for the space fractional diffusion equation with variable coefficients via Fibonacci collocation method.

    PubMed

    Bahşı, Ayşe Kurt; Yalçınbaş, Salih

    2016-01-01

    In this study, the Fibonacci collocation method based on the Fibonacci polynomials is presented to solve the fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; with this expansion the fractional equation is reduced to a set of linear algebraic equations. Also, an error estimation algorithm based on the residual functions is presented for this method, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function of the problem can be computed approximately by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more accurate than the direct numerical solutions. Numerical examples, figures, tables and comparisons are presented to show the efficiency and usability of the proposed method. PMID:27610294

  16. Sensitivity analysis of DOA estimation algorithms to sensor errors

    NASA Astrophysics Data System (ADS)

    Li, Fu; Vaccaro, Richard J.

    1992-07-01

    A unified statistical performance analysis using subspace perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in the presence of sensor errors. In particular, the multiple signal classification (MUSIC), min-norm, state-space realization (TAM and DDA) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms are analyzed. This analysis assumes that only a finite amount of data is available. An analytical expression for the mean-squared error of the DOA estimates is developed for theoretical comparison in a simple and self-contained fashion. The tractable formulas provide insight into the algorithms. Simulation results verify the analysis.
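
    To make the setting concrete, the sketch below runs a single-source MUSIC estimator over a Monte Carlo ensemble of arrays whose true sensor positions are randomly perturbed while the estimator still assumes the nominal geometry, and reports the resulting rms DOA error. The array size, SNR, snapshot count and 2% position-error level are illustrative assumptions rather than values from the paper, and only MUSIC (not min-norm, TAM/DDA or ESPRIT) is shown.

      import numpy as np

      rng = np.random.default_rng(5)

      def steering(theta_deg, positions):
          """Narrowband steering vector for sensors at 'positions' (in half-wavelengths)."""
          return np.exp(1j * np.pi * positions * np.sin(np.deg2rad(theta_deg)))

      def music_doa(R, positions, grid):
          """Single-source MUSIC: peak of the pseudospectrum over a DOA grid."""
          _, vecs = np.linalg.eigh(R)
          En = vecs[:, :-1]                           # noise subspace (all but largest eigenvector)
          spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t, positions)) ** 2 for t in grid]
          return grid[int(np.argmax(spectrum))]

      n_sensors, true_doa, snapshots, snr = 8, 20.0, 200, 20.0
      nominal = np.arange(n_sensors, dtype=float)     # ideal half-wavelength ULA
      grid = np.arange(-90.0, 90.0, 0.1)

      # Monte Carlo over random sensor-position errors (fractions of a half-wavelength).
      doa_errors = []
      for _ in range(200):
          positions = nominal + rng.normal(0.0, 0.02, n_sensors)   # assumed 2% position error
          a = steering(true_doa, positions)
          s = (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)) / np.sqrt(2)
          noise_amp = 10 ** (-snr / 20)
          noise = (rng.normal(size=(n_sensors, snapshots))
                   + 1j * rng.normal(size=(n_sensors, snapshots))) / np.sqrt(2)
          X = np.outer(a, s) + noise_amp * noise
          R = X @ X.conj().T / snapshots
          est = music_doa(R, nominal, grid)           # MUSIC still assumes the nominal array
          doa_errors.append(est - true_doa)
      print("rms DOA error (deg): %.3f" % np.sqrt(np.mean(np.square(doa_errors))))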

  17. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  18. Application of human error analysis to aviation and space operations

    SciTech Connect

    Nelson, W.R.

    1998-03-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

  19. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  20. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

    ERIC Educational Resources Information Center

    Sarcevic, Aleksandra

    2009-01-01

    An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

  1. Cloud retrieval using infrared sounder data - Error analysis

    NASA Technical Reports Server (NTRS)

    Wielicki, B. A.; Coakley, J. A., Jr.

    1981-01-01

    An error analysis is presented for cloud-top pressure and cloud-amount retrieval using infrared sounder data. Rms and bias errors are determined for instrument noise (typical of the HIRS-2 instrument on Tiros-N) and for uncertainties in the temperature profiles and water vapor profiles used to estimate clear-sky radiances. Errors are determined for a range of test cloud amounts (0.1-1.0) and cloud-top pressures (920-100 mb). Rms errors vary by an order of magnitude depending on the cloud height and cloud amount within the satellite's field of view. Large bias errors are found for low-altitude clouds. These bias errors are shown to result from physical constraints placed on retrieved cloud properties, i.e., cloud amounts between 0.0 and 1.0 and cloud-top pressures between the ground and tropopause levels. Middle-level and high-level clouds (above 3-4 km) are retrieved with low bias and rms errors.

  2. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
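
    The first-order propagation described above amounts to var(Q) ≈ gᵀCg, where g is the gradient of the discharge formula with respect to the measured quantities and C is their error covariance matrix. The following sketch applies this to a simplified Manning-type slope-area formula with invented nominal values, standard errors and one correlation term; it illustrates the Taylor-series technique, not the paper's worked example.

      import numpy as np

      def discharge(params):
          """Simplified slope-area (Manning) discharge; a stand-in for the full formula."""
          n, area, radius, fall, reach = params
          slope = fall / reach
          return (1.486 / n) * area * radius ** (2.0 / 3.0) * np.sqrt(slope)

      # Nominal values: Manning n, area (ft^2), hydraulic radius (ft), fall (ft), reach (ft).
      p0 = np.array([0.035, 500.0, 6.0, 0.8, 400.0])

      # Assumed standard errors and a correlation between area and hydraulic radius.
      sd = np.array([0.005, 25.0, 0.3, 0.1, 5.0])
      C = np.diag(sd**2)
      C[1, 2] = C[2, 1] = 0.7 * sd[1] * sd[2]

      # First-order (Taylor-series) propagation: var(Q) ~ g^T C g with g the gradient.
      eps = 1e-6 * p0
      g = np.array([(discharge(p0 + e) - discharge(p0 - e)) / (2 * ei)
                    for e, ei in zip(np.diag(eps), eps)])
      Q = discharge(p0)
      print("Q = %.0f cfs, relative standard error = %.1f%%"
            % (Q, 100 * np.sqrt(g @ C @ g) / Q))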

  3. Geometric error analysis for shuttle imaging spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  4. The influence of observation errors on analysis error and forecast skill investigated with an observing system simulation experiment

    NASA Astrophysics Data System (ADS)

    Privé, N. C.; Errico, R. M.; Tai, K.-S.

    2013-06-01

    The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.

  5. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.

  6. Simple Numerical Analysis of Longboard Speedometer Data

    ERIC Educational Resources Information Center

    Hare, Jonathan

    2013-01-01

    Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…
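
    A spreadsheet would do this with columns of differences and running sums; the same processing in Python/NumPy is shown below. The voltage samples and the volts-per-(m/s) calibration factor are made up for illustration and are not the article's data: speed comes from scaling the voltage, distance from trapezoidal summation, and acceleration from central finite differences.

      import numpy as np

      # Hypothetical speedometer samples: time (s) and wheel-sensor voltage; the
      # voltage-to-speed factor is an assumed calibration constant, not from the article.
      t = np.arange(0.0, 10.0, 0.5)
      voltage = 0.3 * t * np.exp(-t / 6.0)            # stand-in data for a push and coast
      volts_per_mps = 0.1                             # assumed calibration: V per (m/s)

      speed = voltage / volts_per_mps                 # m/s
      distance = np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))))
      acceleration = np.gradient(speed, t)            # central finite differences

      for ti, d, v, a in zip(t[::4], distance[::4], speed[::4], acceleration[::4]):
          print(f"t={ti:4.1f} s  d={d:6.1f} m  v={v:5.2f} m/s  a={a:5.2f} m/s^2")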

  7. Numerical analysis of randomly forced glycolytic oscillations

    SciTech Connect

    Ryashko, Lev

    2015-03-10

    Randomly forced glycolytic oscillations in the Higgins model are studied both numerically and analytically. Numerical analysis is based on the direct simulation of the solutions of the stochastic system. Non-uniformity of the stochastic bundle along the deterministic cycle is shown. For the analytical investigation of the randomly forced Higgins model, the stochastic sensitivity function technique and confidence domains method are applied. Results of the influence of additive noise on the cycle of this model are given.
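
    Direct simulation of a randomly forced oscillator of this kind is usually done with an Euler-Maruyama scheme. The record does not spell out the Higgins equations, so the sketch below substitutes a Sel'kov-type glycolysis oscillator with additive noise purely as a stand-in; the parameters and noise intensity are illustrative only.

      import numpy as np

      rng = np.random.default_rng(4)

      # Sel'kov-type glycolysis oscillator used as a stand-in (not the Higgins equations),
      # integrated with the Euler-Maruyama scheme under additive noise.
      a, b, eps = 0.08, 0.6, 0.02                     # model parameters and noise intensity
      dt, n_steps = 1e-3, 200_000

      x, y = 1.0, 1.0
      trajectory = np.empty((n_steps, 2))
      for i in range(n_steps):
          fx = -x + a * y + x**2 * y                  # deterministic drift
          fy = b - a * y - x**2 * y
          x += fx * dt + eps * np.sqrt(dt) * rng.normal()   # Euler-Maruyama step
          y += fy * dt + eps * np.sqrt(dt) * rng.normal()
          trajectory[i] = x, y

      # Crude measure of the dispersion of the stochastic bundle around the cycle.
      print("std of x over the last half of the run:", trajectory[n_steps // 2:, 0].std())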

  8. SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Costello, F. A.

    1994-01-01

    The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions down stream, namely the inlet conditions of the third. The computations follow for the rest of the system, back to the first component

  9. A Case of Error Disclosure: A Communication Privacy Management Analysis

    PubMed Central

    Petronio, Sandra; Helft, Paul R.; Child, Jeffrey T.

    2013-01-01

    To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio’s theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way choices are made by the clinicians in telling patients about the mistake have the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices of revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient’s family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information

  10. Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks

    NASA Astrophysics Data System (ADS)

    Johnson, Joseph

    2016-03-01

    We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called "metanumbers") support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a "supernet" of all numerical information supporting new initiatives in AI.
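
    The clustering step can be illustrated without the metanumber machinery: build a proximity matrix between the rows of an entity-versus-property table, row-normalize it into a Markov (stochastic) matrix, and read a clustering off the leading eigenvectors. The table values, kernel and two-group structure in the sketch below are invented for illustration and are not the authors' data or metric.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy entity-vs-property table (rows: entities, columns: numeric properties) with
      # two planted groups; all values are illustrative.
      group_a = rng.normal([1.0, 0.0, 5.0], 0.1, (5, 3))
      group_b = rng.normal([0.0, 2.0, 1.0], 0.1, (5, 3))
      table = np.vstack([group_a, group_b])

      # Proximity between entities (Gaussian kernel on Euclidean distance), then row-normalize
      # to obtain a Markov (stochastic) matrix on the entity network.
      d = np.linalg.norm(table[:, None, :] - table[None, :, :], axis=-1)
      W = np.exp(-d**2)
      P = W / W.sum(axis=1, keepdims=True)

      # Eigen-decomposition of the Markov matrix; for two well-separated groups the sign
      # of the second eigenvector recovers the planted clustering.
      vals, vecs = np.linalg.eig(P)
      order = np.argsort(-vals.real)
      second = vecs[:, order[1]].real
      print("cluster labels from sign of 2nd eigenvector:", (second > 0).astype(int))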

  11. Quantitative determination of the discretization and truncation errors in numerical renormalization-group calculations of spectral functions

    NASA Astrophysics Data System (ADS)

    Žitko, Rok

    2011-08-01

    In the numerical renormalization-group (NRG) calculations of spectral functions of quantum impurity models, the results are always affected by discretization and truncation errors. The discretization errors can be alleviated by averaging over different discretization meshes (“z-averaging”), but since each partial calculation is performed for a finite discrete system, there are always some residual discretization and finite-size errors. The truncation errors affect the energies of the states and result in the displacement of the delta-peak spectral contributions from their correct positions. The two types of errors are interrelated: for coarser discretization, the discretization errors increase, but the truncation errors decrease since the separation of energy scales is enhanced. In this work, it is shown that by calculating a series of spectral functions for a range of the total number of states kept in the NRG truncation, it is possible to estimate the errors and determine the error bars for spectral functions, which is important when making accurate comparison to the results obtained by other methods and for determining the errors in the extracted quantities (such as peak positions, heights, and widths). The closely related problem of spectral broadening is also discussed: it is shown that the overbroadening contorts the results without, surprisingly, reducing the variance of the curves. It is thus important to determine the results in the limit of zero broadening. The method is applied to determine the error bounds for the Kondo peak splitting in an external magnetic field. For moderately strong fields, the results are consistent with the Bethe ansatz study by Moore and Wen [Phys. Rev. Lett. 85, 1722 (2000)]. We also discuss the regime of large U/Γ ratio. It is shown that in the strong-field limit, a spectral step is observed in the spectrum precisely at the Zeeman frequency until the field becomes so strong that

  12. Monte Carlo analysis of localization errors in magnetoencephalography

    SciTech Connect

    Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.

    1989-01-01

    In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
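
    The Monte Carlo recipe described above is model-agnostic: perturb the already-fitted measurements with the known noise statistics, refit, and summarize the scatter of the refitted parameters. The sketch below does this for a generic Gaussian-peak model rather than a current-dipole field model, with an invented sensor layout and noise level, since the point is only the resampling-and-refit loop.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(2)

      # Stand-in forward model; the real MEG case would use a current-dipole field model.
      def model(x, amp, center, width):
          return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

      x = np.linspace(-1, 1, 37)                      # e.g. 37 sensor positions (illustrative)
      p_fit = np.array([1.0, 0.1, 0.3])               # parameters fitted to the real data
      data_fit = model(x, *p_fit)
      noise_sd = 0.05                                 # assumed measurement-error level

      # Monte Carlo: perturb the fitted measurement with the known noise statistics,
      # refit each realization, and summarize the spread of the fitted parameters.
      refits = []
      for _ in range(1000):
          noisy = data_fit + rng.normal(0.0, noise_sd, x.size)
          popt, _ = curve_fit(model, x, noisy, p0=p_fit)
          refits.append(popt)
      refits = np.array(refits)
      print("parameter std (amp, center, width):", refits.std(axis=0))
      print("parameter covariance:\n", np.cov(refits, rowvar=False))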

  13. Error analysis for momentum conservation in Atomic-Continuum Coupled Model

    NASA Astrophysics Data System (ADS)

    Yang, Yantao; Cui, Junzhi; Han, Tiansi

    2016-04-01

    Atomic-Continuum Coupled Model (ACCM) is a multiscale computation model proposed by Xiang et al. (in IOP conference series materials science and engineering, 2010), which is used to study and simulate dynamics and thermal-mechanical coupling behavior of crystal materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, implementing the computation of ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error of momentum conservation equation introduced by ACCM, and derive a sequence of inequalities that bound the error. Numerical experiment is carried out to verify our result.

  14. Error analysis for momentum conservation in Atomic-Continuum Coupled Model

    NASA Astrophysics Data System (ADS)

    Yang, Yantao; Cui, Junzhi; Han, Tiansi

    2016-08-01

    Atomic-Continuum Coupled Model (ACCM) is a multiscale computation model proposed by Xiang et al. (in IOP conference series materials science and engineering, 2010), which is used to study and simulate dynamics and thermal-mechanical coupling behavior of crystal materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, implementing the computation of ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error of momentum conservation equation introduced by ACCM, and derive a sequence of inequalities that bound the error. Numerical experiment is carried out to verify our result.

  15. The ∇·B = 0 Constraint Versus Minimization of Numerical Errors in MHD Simulations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The MHD equations are a system of non-strictly hyperbolic conservation laws. The non-convexity of the inviscid flux vector results in corresponding Jacobian matrices with undesirable properties. It has previously been shown by Powell et al. (1995) that an 'almost' equivalent MHD system in non-conservative form can be derived. This non-conservative system has a better conditioned eigensystem. Aside from Powell et al., the MHD equations can be derived from basic principles in either conservative or non-conservative form. The ∇·B = 0 constraint of the MHD equations is only an initial-condition constraint; it is very different from the incompressible Navier-Stokes equations in which the divergence condition is needed to close the system (i.e., to have the same number of equations and the same number of unknowns). In the MHD formulations, if ∇·B = 0 initially, all one needs is to construct appropriate numerical schemes that preserve this constraint at later time evolutions. In other words, one does not need the ∇·B = 0 condition to close the MHD system. We formulate our new scheme together with the Cargo & Gallice (1997) form of the MHD approximate Riemann solver in curvilinear grids for both versions of the MHD equations. A novel feature of our new method is that the well-conditioned eigen-decomposition of the non-conservative MHD equations is used to solve the conservative equations. This new feature of the method provides well-conditioned eigenvectors for the conservative formulation, so that correct wave speeds for discontinuities are assured. The justification for using the non-conservative eigen-decomposition to solve the conservative equations is that our scheme has a better control of the numerical error associated with the divergence condition on the magnetic field. Consequently, computing both forms of the equations with the same eigen-decomposition is almost equivalent. It will be shown that this approach, using the non-conservative eigensystem when

  16. Error analysis of sub-aperture stitching interferometry

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Xu, Fuchao; Xie, Weimin; Xing, Tingwen

    2012-10-01

    Large-aperture optical elements are widely employed in high-power laser systems, astronomy, and outer-space technology. Sub-aperture stitching is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. With the aim of assessing the accuracy of the equipment, this paper simulates the stitching algorithm to analyze the errors. The selection of the stitching mode and the setting of the number of sub-apertures are given. Stitching is performed in simulation according to the programmed algorithms in order to test them; the sub-aperture stitching algorithm is simulated in Matlab. The sub-aperture stitching method can also be used to test free-form surfaces, where the free-form surface is created from Zernike polynomials. The accuracy depends on the tilting and positioning errors. Through stitching, the medium spatial frequencies of the surface can be tested. The results of the error analysis in Matlab show how the tilting and positioning errors influence the testing accuracy. The analysis of errors can also be applied to other interferometer systems.

  17. Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods

    NASA Astrophysics Data System (ADS)

    Assous, Franck; Chaskalovic, Joël

    2013-03-01

    In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally compare the accuracy of two kinds of finite element methods. In this case, this allowed us to refine the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.

  18. Analysis of systematic errors in lateral shearing interferometry for EUV optical testing

    SciTech Connect

    Miyakawa, Ryan; Naulleau, Patrick; Goldberg, Kenneth A.

    2009-02-24

    Lateral shearing interferometry (LSI) provides a simple means for characterizing the aberrations in optical systems at EUV wavelengths. In LSI, the test wavefront is incident on a low-frequency grating which causes the resulting diffracted orders to interfere on the CCD. Due to its simple experimental setup and high photon efficiency, LSI is an attractive alternative to point diffraction interferometry and other methods that require spatially filtering the wavefront through small pinholes which notoriously suffer from low contrast fringes and improper alignment. In order to demonstrate that LSI can be accurate and robust enough to meet industry standards, analytic models are presented to study the effects of unwanted grating and detector tilt on the system aberrations, and a method for identifying and correcting for these errors in alignment is proposed. The models are subsequently verified by numerical simulation. Finally, an analysis is performed of how errors in the identification and correction of grating and detector misalignment propagate to errors in fringe analysis.

  19. Canonical Correlation Analysis that Incorporates Measurement and Sampling Error Considerations.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Daniel, Larry

    Multivariate methods are being used with increasing frequency in educational research because these methods control "experimentwise" error rate inflation, and because the methods best honor the nature of the reality to which the researcher wishes to generalize. This paper: explains the basic logic of canonical analysis; illustrates that canonical…

  20. [The analysis of the medication error, in practice].

    PubMed

    Didelot, Nicolas; Cistio, Céline

    2016-01-01

    By performing a systemic analysis of medication errors which occur in practice, the multidisciplinary teams can avoid a reoccurrence with the aid of an improvement action plan. The methods must take into account all the factors which might have contributed to or favoured the occurrence of a medication incident or accident. PMID:27177485

  1. Analysis of possible systematic errors in the Oslo method

    SciTech Connect

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-03-15

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and {gamma}-ray transmission coefficient from a set of particle-{gamma} coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  2. Analysis of possible systematic errors in the Oslo method

    NASA Astrophysics Data System (ADS)

    Larsen, A. C.; Guttormsen, M.; Krtička, M.; Běták, E.; Bürger, A.; Görgen, A.; Nyhus, H. T.; Rekstad, J.; Schiller, A.; Siem, S.; Toft, H. K.; Tveten, G. M.; Voinov, A. V.; Wikan, K.

    2011-03-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  3. Numerical Analysis of Robust Phase Estimation

    NASA Astrophysics Data System (ADS)

    Rudinger, Kenneth; Kimmel, Shelby

    Robust phase estimation (RPE) is a new technique for estimating rotation angles and axes of single-qubit operations, steps necessary for developing useful quantum gates [arXiv:1502.02677]. As RPE only diagnoses a few parameters of a set of gate operations while at the same time achieving Heisenberg scaling, it requires relatively few resources compared to traditional tomographic procedures. In this talk, we present numerical simulations of RPE that show both Heisenberg scaling and robustness against state preparation and measurement errors, while also demonstrating numerical bounds on the procedure's efficacy. We additionally compare RPE to gate set tomography (GST), another Heisenberg-limited tomographic procedure. While GST provides a full gate set description, it is more resource-intensive than RPE, leading to potential tradeoffs between the procedures. We explore these tradeoffs and numerically establish criteria to guide experimentalists in deciding when to use RPE or GST to characterize their gate sets. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  4. An analysis of pilot error-related aircraft accidents

    NASA Technical Reports Server (NTRS)

    Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.

    1974-01-01

    A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: Critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable. Cluster analysis, which was an exploratory research tool that will lead to increased understanding and improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses. Pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.

  5. Star tracker error analysis: Roll-to-pitch nonorthogonality

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1979-01-01

    An error analysis is described on an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases which implied that either one or both of the star tracker measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of error.

  6. Numerical error in electron orbits with large ω_ce Δt

    SciTech Connect

    Parker, S.E.; Birdsall, C.K.

    1989-12-20

    We have found that running electrostatic particle codes with relatively large ω_ce Δt in some circumstances does not significantly affect the physical results. We first present results from a single-particle mover finding the correct first-order drifts for large ω_ce Δt. We then characterize the numerical orbit of the Boris algorithm for rotation when ω_ce Δt ≫ 1. Next, an analysis of the guiding-center motion is given showing why the first-order drift is retained at large ω_ce Δt. Lastly, we present a plasma simulation of a one-dimensional cross-field sheath, with large and small ω_ce Δt, with very little difference in the results. 15 refs., 7 figs., 1 tab.
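
    For reference, the Boris rotation the abstract refers to can be written in a few lines. The sketch below is a standard textbook form of the algorithm (half electric kick, magnetic rotation, half kick) in normalized units with arbitrary illustrative parameters; it simply demonstrates that the speed stays bounded even when ω_ce Δt is of order unity or larger.

      import numpy as np

      def boris_push(v, E, B, q_m, dt):
          """One step of the Boris algorithm: half electric kick, magnetic rotation, half kick."""
          v_minus = v + 0.5 * q_m * E * dt
          t = 0.5 * q_m * B * dt              # half-angle vector: |t| = omega_c*dt/2; the Boris
          s = 2.0 * t / (1.0 + np.dot(t, t))  # rotation angle is 2*arctan|t|, slightly less than omega_c*dt
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)
          return v_plus + 0.5 * q_m * E * dt

      # Gyration in a uniform B with |omega_ce|*dt deliberately large (here 3):
      q_m, B, E = -1.0, np.array([0.0, 0.0, 3.0]), np.zeros(3)
      v = np.array([1.0, 0.0, 0.0])
      dt = 1.0                                # so |omega_ce|*dt = 3, far above the usual accuracy limit
      for _ in range(1000):
          v = boris_push(v, E, B, q_m, dt)
      print("|v| after 1000 steps:", np.linalg.norm(v))   # speed is conserved despite the large step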

  7. Analysis of the Error Associated With the Domenico Solution

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Clement, T.; Lee, K.

    2006-12-01

    The Domenico solution is one of the widely used analytical solutions used in screening-level ground water contaminant transport models; e.g., BIOCHLOR and BIOSCREEN. This approximate solution describes the transport of a decaying contaminant subjected to advection in one dimension and dispersion in all three dimensions. However, the development of this solution as presented by the original authors involves approximations that are more heuristic than rigorous. This makes it difficult to predict the nature of the error associated with these approximations. Hence, several ground water modelers have expressed skepticism regarding the validity of this solution. To address the issues stated above, it is necessary to perform a rigorous mathematical analysis on the Domenico solution. In this work a rigorous mathematical approach to derive the Domenico solution is presented. Furthermore, the limits of this approximation are explored to provide a qualitative assessment of the error associated with the Domenico solution. The analysis indicates that the Domenico solution is an exact analytical solution when the value of the longitudinal dispersivity is zero. For all non-zero longitudinal dispersivity values, the Domenico solution will have a finite error. The results of our analysis also indicate that this error is highly sensitive to the value of the longitudinal dispersivity and the position of the advective front. Based on these inferences some general guidelines for the appropriate use of this solution are suggested.

  8. A Method for Treating Discretization Error in Nondeterministic Analysis

    SciTech Connect

    Alvin, K.F.

    1999-01-27

    A response surface methodology-based technique is presented for treating discretization error in non-deterministic analysis. The response surface, or metamodel, is estimated from computer experiments which vary both uncertain physical parameters and the fidelity of the computational mesh. The resultant metamodel is then used to propagate the variabilities in the continuous input parameters, while the mesh size is taken to zero, its asymptotic limit. With respect to mesh size, the metamodel is equivalent to Richardson extrapolation, in which solutions on coarser and finer meshes are used to estimate discretization error. The method is demonstrated on a one dimensional prismatic bar, in which uncertainty in the third vibration frequency is estimated by propagating variations in material modulus, density, and bar length. The results demonstrate the efficiency of the method for combining non-deterministic analysis with error estimation to obtain estimates of total simulation uncertainty. The results also show the relative sensitivity of failure estimates to solution bias errors in a reliability analysis, particularly when the physical variability of the system is low.
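
    With respect to mesh size, the metamodel reduces to Richardson extrapolation, which by itself looks like the sketch below: combine a coarse- and a fine-mesh result, assuming a known convergence order p, to estimate the zero-mesh-size limit and hence the discretization error. The frequency function, mesh sizes and order used here are stand-ins, not the paper's prismatic-bar problem.

      import numpy as np

      def frequency(h):
          """Stand-in for a mesh-dependent simulation output with O(h^2) discretization error."""
          exact = 3.0
          return exact + 0.8 * h**2 - 0.1 * h**3      # illustrative error model only

      h_fine, h_coarse, p = 0.05, 0.10, 2             # mesh sizes and assumed convergence order
      f_fine, f_coarse = frequency(h_fine), frequency(h_coarse)

      # Richardson extrapolation to the zero-mesh-size limit.
      f_extrap = f_fine + (f_fine - f_coarse) / ((h_coarse / h_fine) ** p - 1)
      disc_error = f_fine - f_extrap                  # estimated discretization error of the fine mesh
      print(f"coarse {f_coarse:.5f}, fine {f_fine:.5f}, extrapolated {f_extrap:.5f}")
      print(f"estimated discretization error on the fine mesh: {disc_error:.5f}")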

  9. Key Curriculum Reform Research on Numerical Analysis

    NASA Astrophysics Data System (ADS)

    Li, Zhong; Peng, Chensong

    Based on the current undergraduate teaching characteristics and the actual teaching situation of numerical analysis curriculum, this paper gives a useful discussion and appropriate adjustments for this course's teaching content and style, and it also proposes some new curriculum reform plans to improve the teaching effectiveness which can develop student's abilities of mathematical thinking and computational practice.

  10. Systems Improved Numerical Fluids Analysis Code

    NASA Technical Reports Server (NTRS)

    Costello, F. A.

    1990-01-01

    Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to April, 1983, version of SINDA. Additional routines provide for mathematical modeling of active heat-transfer loops. Simulates steady-state and pseudo-transient operations of 16 different components of heat-transfer loops, including radiators, evaporators, condensers, mechanical pumps, reservoirs, and many types of valves and fittings. Program contains property-analysis routine used to compute thermodynamic properties of 20 different refrigerants. Source code written in FORTRAN 77.

  11. How psychotherapists handle treatment errors – an ethical analysis

    PubMed Central

    2013-01-01

    Background: Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods: We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results: Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions: Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

  12. A numerical study of geometry dependent errors in velocity, temperature, and density measurements from single grid planar retarding potential analyzers

    SciTech Connect

    Davidson, R. L.; Earle, G. D.; Heelis, R. A.; Klenzing, J. H.

    2010-08-15

    Planar retarding potential analyzers (RPAs) have been utilized numerous times on high profile missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellite Program to measure plasma composition, temperature, density, and the velocity component perpendicular to the plane of the instrument aperture. These instruments use biased grids to approximate ideal biased planes. These grids introduce perturbations in the electric potential distribution inside the instrument and when unaccounted for cause errors in the measured plasma parameters. Traditionally, the grids utilized in RPAs have been made of fine wires woven into a mesh. Previous studies on the errors caused by grids in RPAs have approximated woven grids with a truly flat grid. Using a commercial ion optics software package, errors in inferred parameters caused by both woven and flat grids are examined. A flat grid geometry shows the smallest temperature and density errors, while the double thick flat grid displays minimal errors for velocities over the temperature and velocity range used. Wire thickness along the dominant flow direction is found to be a critical design parameter in regard to errors in all three inferred plasma parameters. The results shown for each case provide valuable design guidelines for future RPA development.

  13. Manufacturing in space: Fluid dynamics numerical analysis

    NASA Technical Reports Server (NTRS)

    Robertson, S. J.; Nicholson, L. A.; Spradley, L. W.

    1981-01-01

    Natural convection in a spherical container with cooling at the center was numerically simulated using the Lockheed-developed General Interpolants Method (GIM) numerical fluid dynamic computer program. The numerical analysis was simplified by assuming axisymmetric flow in the spherical container, with the symmetry axis being a sphere diagonal parallel to the gravity vector. This axisymmetric spherical geometry was intended as an idealization of the proposed Lal/Kroes growing experiments to be performed on board Spacelab. Results were obtained for a range of Rayleigh numbers from 25 to 10,000. For a temperature difference of 10 °C from the cooling sting at the center to the container surface, and a gravitational loading of 10⁻⁶ g, a computed maximum fluid velocity of about 2.4 × 10⁻⁵ cm/sec was reached after about 250 sec. The computed velocities were found to be approximately proportional to the Rayleigh number over the range of Rayleigh numbers investigated.

  14. Position determination and measurement error analysis for the spherical proof mass with optical shadow sensing

    NASA Astrophysics Data System (ADS)

    Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin

    2016-09-01

    To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS) as the key payload needs to offer the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for the GRS with a spherical proof mass is addressed. Firstly the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution to the three-dimensional position can be attained. Thirdly, with the assumption of Gaussian beams, the error propagation models for the variation of spot size and optical power, the effect of beam divergence, the chattering of beam center, and the deviation of beam direction are given respectively. Finally, the numerical simulations taken into account of the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of error source with an acceptable accuracy which is better than 20%. Moreover, the simulation for the three-dimensional position determination with one of the proposed measurement system shows that the position error is just comparable to the error of the output of each sensor.

  15. Error analysis of a 3D imaging system based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Dai, Jie

    2013-12-01

    In the past few years, optical metrology has found numerous applications in scientific and commercial fields owing to its non-contact nature. One of the most popular methods is the measurement of 3D surface based on fringe projection techniques because of the advantages of non-contact operation, full-field and fast acquisition and automatic data processing. In surface profilometry by using digital light processing (DLP) projector, many factors affect the accuracy of 3D measurement. However, there is no research to give the complete error analysis of a 3D imaging system. This paper will analyze some possible error sources of a 3D imaging system, for example, nonlinear response of CCD camera and DLP projector, sampling error of sinusoidal fringe pattern, variation of ambient light and marker extraction during calibration. These error sources are simulated in a software environment to demonstrate their effects on measurement. The possible compensation methods are proposed to give high accurate shape data. Some experiments were conducted to evaluate the effects of these error sources on 3D shape measurement. Experimental results and performance evaluation show that these errors have great effect on measuring 3D shape and it is necessary to compensate for them for accurate measurement.

  16. Dispersion analysis and linear error analysis capabilities of the space vehicle dynamics simulation program

    NASA Technical Reports Server (NTRS)

    Snow, L. S.; Kuhn, A. E.

    1975-01-01

    Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.

  17. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and

  18. Ferrofluids: Modeling, numerical analysis, and scientific computation

    NASA Astrophysics Data System (ADS)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much larger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

  19. Structure function analysis of mirror fabrication and support errors

    NASA Astrophysics Data System (ADS)

    Hvisc, Anastacia M.; Burge, James H.

    2007-09-01

    Telescopes are ultimately limited by atmospheric turbulence, which is commonly characterized by a structure function. The telescope optics will not further degrade the performance if their errors are small compared to the atmospheric effects. Any further improvement to the mirrors is not economical since there is no increased benefit to performance. Typically the telescope specification is written in terms of an image size or encircled energy and is derived from the best seeing that is expected at the site. Ideally, the fabrication and support errors should never exceed atmospheric turbulence at any spatial scale, so it is instructive to look at how these errors affect the structure function of the telescope. The fabrication and support errors are most naturally described by Zernike polynomials or by bending modes for the active mirrors. This paper illustrates an efficient technique for relating this modal analysis to wavefront structure functions. Data is provided for efficient calculation of structure function given coefficients for Zernike annular polynomials. An example of this procedure for the Giant Magellan Telescope primary mirror is described.

  20. Eigenvector method for umbrella sampling enables error analysis.

    PubMed

    Thiede, Erik H; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R

    2016-08-28

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912

  1. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  2. Error analysis of 3D laser scanning system for gangue monitoring

    NASA Astrophysics Data System (ADS)

    Hu, Shaoxing; Xia, Yuyang; Zhang, Aiwu

    2012-01-01

    The paper puts forward a system error evaluation method for a 3D scanning system used in gangue monitoring. The system errors are analyzed, including the integrated error, which can be avoided, and the measurement error, which requires a full analysis. The system equation is first established from the relationships among the structural components. The law of error propagation for independent error sources is then used to set up the complete error analysis system, and the trend of the error along the X, Y and Z directions is simulated. The analysis shows that the laser rangefinder contributes a significant share of the system error, and that the horizontal and vertical scanning angles also influence the system error for given vertical and horizontal scanning parameters.

  3. Multiple boundary condition testing error analysis. [for large flexible space structures

    NASA Technical Reports Server (NTRS)

    Glaser, R. J.; Kuo, C. P.; Wada, B. K.

    1989-01-01

    Techniques for interpreting data from multiple-boundary-condition (MBC) ground tests of large space structures are developed analytically and demonstrated. The use of MBC testing to validate structures too large to stand alone on the ground is explained; the generalized least-squares mass and stiffness curve-fitting methods typically applied to MBC test data are reviewed; and a detailed error analysis is performed. Consideration is given to sensitivity coefficients, covariance-matrix theory, the correspondence between test and analysis modes, constraints and step sizes, convergence criteria, and factor-analysis theory. Numerical results for a simple beam problem are presented in tables and briefly characterized. The improved error-updating capabilities of MBC testing are confirmed, and it is concluded that reasonably accurate results can be obtained using a diagonal covariance matrix.

  4. Effect of rawinsonde errors on rocketsonde density and pressure profiles: An error analysis of the Rawinsonde System

    NASA Technical Reports Server (NTRS)

    Luers, J. K.

    1980-01-01

    An initial value of pressure is required to derive the density and pressure profiles from the rocketborne rocketsonde sensor. This tie-on pressure value is obtained from the nearest rawinsonde launch at an altitude where overlapping rawinsonde and rocketsonde measurements occur. An error analysis was performed of the error sources in these sensors that contribute to the error in the tie-on pressure. It was determined that significant tie-on pressure errors result from radiation errors in the rawinsonde rod thermistor and from temperature calibration bias errors. To minimize the effect of these errors, radiation corrections should be made to the rawinsonde temperature and the tie-on altitude should be chosen at the lowest altitude of overlapping data. Under these conditions the tie-on error, and consequently the resulting error in the Datasonde pressure and density profiles, is less than 1%. The effect of rawinsonde pressure and temperature errors on the wind and temperature versus height profiles of the rawinsonde was also determined.

  5. Computing the surveillance error grid analysis: procedure and examples.

    PubMed

    Kovatchev, Boris P; Wakeman, Christian A; Breton, Marc D; Kost, Gerald J; Louie, Richard F; Tran, Nam K; Klonoff, David C

    2014-07-01

    The surveillance error grid (SEG) analysis is a tool for analysis and visualization of blood glucose monitoring (BGM) errors, based on the opinions of 206 diabetes clinicians who rated 4 distinct treatment scenarios. Resulting from this large-scale inquiry is a matrix of 337 561 risk ratings, 1 for each pair of (reference, BGM) readings ranging from 20 to 580 mg/dl. The computation of the SEG is therefore complex and in need of automation. The SEG software introduced in this article automates the task of assigning a degree of risk to each data point for a set of measured and reference blood glucose values so that the data can be distributed into 8 risk zones. The software's 2 main purposes are to (1) distribute a set of BG Monitor data into 8 risk zones ranging from none to extreme and (2) present the data in a color coded display to promote visualization. Besides aggregating the data into 8 zones corresponding to levels of risk, the SEG computes the number and percentage of data pairs in each zone and the number/percentage of data pairs above/below the diagonal line in each zone, which are associated with BGM errors creating risks for hypo- or hyperglycemia, respectively. To illustrate the action of the SEG software we first present computer-simulated data stratified along error levels defined by ISO 15197:2013. This allows the SEG to be linked to this established standard. Further illustration of the SEG procedure is done with a series of previously published data, which reflect the performance of BGM devices and test strips under various environmental conditions. We conclude that the SEG software is a useful addition to the SEG analysis presented in this journal, developed to assess the magnitude of clinical risk from analytically inaccurate data in a variety of high-impact situations such as intensive care and disaster settings. PMID:25562887
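
    The zone-assignment step described above can be sketched in a few lines of code. In the fragment below, the rating matrix `seg_risk`, its 20-580 mg/dl index range, and the eight zone thresholds are illustrative placeholders; the actual SEG software uses the clinician-derived rating matrix and zone definitions from the published analysis.

    ```python
    import numpy as np

    def seg_zones(reference, measured, seg_risk, zone_edges):
        """Assign each (reference, measured) blood-glucose pair to a SEG risk zone.

        seg_risk   : 2-D array of risk ratings indexed by mg/dl in [20, 580]
                     (hypothetical stand-in for the published rating matrix).
        zone_edges : 7 increasing risk thresholds separating the 8 zones.
        """
        ref = np.clip(np.asarray(reference, int), 20, 580)
        meas = np.clip(np.asarray(measured, int), 20, 580)
        risk = seg_risk[ref - 20, meas - 20]          # risk rating for each data pair
        zones = np.digitize(risk, zone_edges)         # 0 = "none" ... 7 = "extreme"
        counts = np.bincount(zones, minlength=8)
        percent = 100.0 * counts / zones.size
        above_diag = meas > ref                       # meter reads high: hypoglycemia risk
        return zones, counts, percent, above_diag
    ```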

  6. Sequential analysis of the numerical Stroop effect reveals response suppression.

    PubMed

    Cohen Kadosh, Roi; Gevers, Wim; Notebaert, Wim

    2011-09-01

    Automatic processing of irrelevant stimulus dimensions has been demonstrated in a variety of tasks. Previous studies have shown that conflict between relevant and irrelevant dimensions can be reduced when a feature of the irrelevant dimension is repeated. The specific level at which the automatic process is suppressed (e.g., perceptual repetition, response repetition), however, is less understood. In the current experiment we used the numerical Stroop paradigm, in which the processing of irrelevant numerical values of 2 digits interferes with the processing of their physical size, to pinpoint the precise level of the suppression. Using a sequential analysis, we dissociated perceptual repetition from response repetition of the relevant and irrelevant dimension. Our analyses of reaction times, error rates, and diffusion modeling revealed that the congruity effect is significantly reduced or even absent when the response sequence of the irrelevant dimension, rather than the numerical value or the physical size, is repeated. These results suggest that automatic activation of the irrelevant dimension is suppressed at the response level. The current results shed light on the level of interaction between numerical magnitude and physical size as well as the effect of variability of responses and stimuli on automatic processing. PMID:21500951

  7. Fast computation of Lagrangian coherent structures: algorithms and error analysis

    NASA Astrophysics Data System (ADS)

    Brunton, Steven; Rowley, Clarence

    2009-11-01

    This work investigates a number of efficient methods for computing finite time Lyapunov exponent (FTLE) fields in unsteady flows by approximating the particle flow map and eliminating redundant particle integrations in neighboring flow maps. Ridges of the FTLE fields are Lagrangian coherent structures (LCS) and provide an unsteady analogue of invariant manifolds from dynamical systems theory. The fast methods fall into two categories, unidirectional and bidirectional, depending on whether flow maps in one or both time directions are composed to form an approximate flow map. An error analysis is presented which shows that the unidirectional methods are accurate while the bidirectional methods have significant error which is aligned with the opposite time coherent structures. This relies on the fact that material from the positive time LCS attracts onto the negative time LCS near time-dependent saddle points.
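
    For context, the quantity being accelerated is the FTLE field itself, obtained from the gradient of the flow map. The sketch below computes it on a regular grid, assuming the (expensive) particle integrations have already produced the advected positions `flow_map_x` and `flow_map_y`; these names and the uniform-grid assumption are illustrative rather than taken from the paper.

    ```python
    import numpy as np

    def ftle_field(flow_map_x, flow_map_y, dx, dy, T):
        """FTLE from the final particle positions of a 2-D flow map.

        flow_map_x, flow_map_y : advected x- and y-positions of a regular grid of
                                 particles after integration time T
                                 (axis 0 indexes y, axis 1 indexes x).
        dx, dy                 : grid spacings of the initial particle grid.
        """
        # Gradient of the flow map by central differences on the initial grid.
        dphix_dy, dphix_dx = np.gradient(flow_map_x, dy, dx)
        dphiy_dy, dphiy_dx = np.gradient(flow_map_y, dy, dx)

        ftle = np.zeros(flow_map_x.shape)
        for i in range(flow_map_x.shape[0]):
            for j in range(flow_map_x.shape[1]):
                F = np.array([[dphix_dx[i, j], dphix_dy[i, j]],
                              [dphiy_dx[i, j], dphiy_dy[i, j]]])
                C = F.T @ F                      # Cauchy-Green deformation tensor
                lam_max = np.linalg.eigvalsh(C)[-1]
                ftle[i, j] = np.log(lam_max) / (2.0 * abs(T))
        return ftle
    ```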

  8. Beam line error analysis, position correction, and graphic processing

    NASA Astrophysics Data System (ADS)

    Wang, Fuhua; Mao, Naifeng

    1993-12-01

    A beam transport line error analysis and beam position correction code called EAC has been developed, together with a graphics and data post-processing package for TRANSPORT. Based on a linear optics design produced with TRANSPORT or other general optics codes, EAC independently analyzes the effects of magnet misalignments, systematic and statistical magnetic field errors, and initial beam position errors on the central trajectory and on transverse beam emittance dilution. EAC also provides an efficient way to develop beam line trajectory correction schemes. The post-processing package generates various types of graphics, such as the beam line geometrical layout, plots of the Twiss parameters, beam envelopes, etc. It also generates an EAC input file, thus connecting EAC with general optics codes. EAC and the post-processing package are small codes that are easy to access and use. They have become useful tools for the design of transport lines at SSCL.

  9. Jason-2 systematic error analysis in the GPS derived orbits

    NASA Astrophysics Data System (ADS)

    Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

    2011-12-01

    Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinate adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding the geophysical high-frequency variations to the linear ITRF model. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface between the northern and southern hemispheres. The goal of this paper is to specifically study the main source of error, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS with our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests, including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced

  10. Numerical Analysis of Rocket Exhaust Cratering

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Supersonic jet exhaust impinging onto a flat surface is a fundamental flow encountered in space or with a missile launch vehicle system. The flow is important because it can endanger launch operations. The purpose of this study is to evaluate the effect of a landing rocket's exhaust on soils. From numerical simulations and analysis, we developed characteristic expressions and curves, which we can use, along with rocket nozzle performance, to predict cratering effects during a soft-soil landing. We conducted a series of multiphase flow simulations with two phases: exhaust gas and sand particles. The main objective of the simulations was to obtain numerical results as close to the experimental results as possible. After several simulation test runs, the results showed that the packing limit and the angle of internal friction are the two critical and dominant factors in the simulations.

  11. Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR.

    PubMed

    Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao

    2016-01-01

    The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and then, the focusing is analyzed theoretically and numerically by using the Systems Tool Kit (STK) software. The accurate GEO SAR slant range histories can be calculated according to the perturbed orbit positions in STK. The perturbed slant range errors are mainly the first and second derivatives, leading to image drifts and defocusing. Simulations of the point target imaging are performed to validate the aforementioned analysis. In the GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time are different and dependent on the geometry configurations. Thus, the influences are varying at different orbit positions: at the equator, the first-order phase errors should be mainly considered; at the perigee and apogee, the second-order phase errors should be mainly considered; at other positions, first-order and second-order exist simultaneously. PMID:27598168

  12. Theoretical and Numerical Assessment of Strain Pattern Analysis

    NASA Astrophysics Data System (ADS)

    Milne, R. D.; Simpson, A.

    1996-04-01

    The Strain Pattern Analysis (SPA) method was conceived at the RAE in the 1970s as a means of estimating the displacement shape of a helicopter rotor blade by using only strain gauge data, but no attempt was made to provide theoretical justification for the procedure. In this paper, the SPA method is placed on a firm mathematical basis by the use of vector space theory. It is shown that the natural norm which underlies the SPA projection is the strain energy functional of the structure under consideration. The natural norm is a weighted version of the original SPA norm. Numerical experiments on simple flexure and coupled flexure-torsion systems indicate that the use of the natural norm yields structural deflection estimates of significantly greater accuracy than those obtained from the original SPA procedure and that measurement error tolerance is also enhanced. Extensive numerical results are presented for an emulation of the SPA method as applied to existing mathematical models of the main rotor of the DRA Lynx ZD559 helicopter. The efficacy of SPA is demonstrated by using a quasi-linear rotor model in the frequency domain and a fully non-linear, kinematically exact model in the time domain: the procedure based on the natural (or weighted) norm is again found to be superior to that based on the original SPA method, both in respect of displacement estimates and measurement error tolerance.

  13. Prior-predictive value from fast-growth simulations: Error analysis and bias estimation

    NASA Astrophysics Data System (ADS)

    Favaro, Alberto; Nickelsen, Daniel; Barykina, Elena; Engel, Andreas

    2015-01-01

    Variants of fluctuation theorems recently discovered in the statistical mechanics of nonequilibrium processes may be used for the efficient determination of high-dimensional integrals as typically occurring in Bayesian data analysis. In particular for multimodal distributions, Monte Carlo procedures not relying on perfect equilibration are advantageous. We provide a comprehensive statistical error analysis for the determination of the prior-predictive value (the evidence) in a Bayes problem, building on a variant of the Jarzynski equation. Special care is devoted to the characterization of the bias intrinsic to the method and statistical errors arising from exponential averages. We also discuss the determination of averages over multimodal posterior distributions with the help of a consequence of the Crooks relation. All our findings are verified by extensive numerical simulations of two model systems with bimodal likelihoods.
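
    For orientation, the fast-growth estimator at the heart of such methods is the exponential work average of the Jarzynski equality, whose sample version is biased when evaluated on a logarithmic scale. A minimal hedged statement, with symbols chosen for illustration rather than taken from the paper, is

    ```latex
    % Illustrative sketch: Jarzynski-type estimator of the evidence Z = p(D),
    % with W_i the "work" accumulated along trajectories annealed from the prior
    % toward the posterior; the bias formula is the standard second-order expansion.
    \begin{align}
      Z &= \bigl\langle e^{-W} \bigr\rangle
         \;\approx\; \widehat{Z}_N = \frac{1}{N}\sum_{i=1}^{N} e^{-W_i}, \\[4pt]
      \operatorname{Bias}\bigl[\ln \widehat{Z}_N\bigr]
        &\approx -\,\frac{\operatorname{Var}\!\left(e^{-W}\right)}{2\,N\,Z^{2}} .
    \end{align}
    ```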

  14. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    SciTech Connect

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  15. Convergence and error estimation in free energy calculations using the weighted histogram analysis method

    PubMed Central

    Zhu, Fangqiang; Hummer, Gerhard

    2012-01-01

    The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this paper, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimal allocation of the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354
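
    As a point of reference for the discussion above, a minimal sketch of the traditional fixed-point (direct iteration) solution of the coupled WHAM equations is given below; variable names and the convergence test are illustrative, and a production code would instead use the superlinear likelihood maximization that the authors advocate.

    ```python
    import numpy as np

    def wham_fixed_point(hist, n_samples, bias, beta, tol=1e-8, max_iter=50000):
        """Self-consistent WHAM iteration (direct fixed point, for illustration).

        hist      : (n_bins,) total histogram counts pooled over all windows
        n_samples : (n_windows,) number of samples drawn in each window
        bias      : (n_windows, n_bins) biasing potential u_k(x_j)
        beta      : 1 / (kB * T)
        Returns unbiased probabilities p_j and window free energies f_k (in kT).
        """
        c = np.exp(-beta * bias)                     # Boltzmann factors of the biases
        f = np.zeros(bias.shape[0])                  # window free energies
        for _ in range(max_iter):
            denom = (n_samples[:, None] * np.exp(f)[:, None] * c).sum(axis=0)
            p = hist / denom                         # unbiased probability per bin
            p /= p.sum()
            f_new = -np.log(c @ p)                   # update window free energies
            if np.max(np.abs(f_new - f)) < tol:
                return p, f_new
            f = f_new
        return p, f
    ```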

  16. A missing error term in benefit-cost analysis.

    PubMed

    Farrow, R Scott

    2012-03-01

    Benefit-cost models are frequently used to inform environmental policy and management decisions. However, they typically omit a random or pure error which biases downward any estimated forecast variance. Ex-ante benefit-cost analyses create a particular problem because there are no historically observed values of the dependent variable, such as net present social value, on which to construct a historically based variance as is the usual statistical approach. To correct this omission, an estimator for the random error variance in this situation is developed based on analysis of variance measures and the coefficient of determination, R(2). A larger variance may affect decision-maker's choices if they are risk averse, consider confidence intervals, exceedance probabilities, or other measures related to the variance. When applied to a model of the net benefits of the Clean Air Act, although the probability of large net benefits increases, the probability that the net present value is negative also increases from 0.2 to 4.5%. A framework is also provided to assist in determining when a variance estimate would be better, in a utility sense, than using the current default of a zero error variance. PMID:22145927
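
    The abstract does not reproduce the estimator itself. One algebraically consistent reading, in which R^2 is interpreted as the fraction of total variance explained by the model and Var(Y-hat) denotes the variance of the model-predicted net benefits, is sketched below; this is an illustrative form, not necessarily the paper's exact estimator.

    ```latex
    % Illustrative only: recovering an omitted random-error variance from R^2,
    % using R^2 = Var(\widehat{Y}) / [Var(\widehat{Y}) + \sigma^2_\varepsilon].
    \sigma^2_\varepsilon \;\approx\; \operatorname{Var}(\widehat{Y})\,\frac{1 - R^2}{R^2},
    \qquad
    \operatorname{Var}(Y_{\text{forecast}}) \;\approx\; \operatorname{Var}(\widehat{Y}) + \sigma^2_\varepsilon .
    ```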

  17. An error analysis of higher-order finite element methods: Effect of degenerate coupling on simulation of elastic wave propagation

    NASA Astrophysics Data System (ADS)

    Hasegawa, Kei; Geller, Robert J.; Hirabayashi, Nobuyasu

    2016-02-01

    We present a theoretical analysis of the error of synthetic seismograms computed by higher-order finite element methods (ho-FEMs). We show the existence of a previously unrecognized type of error due to degenerate coupling between waves with the same frequency but different wavenumbers. These results are confirmed by simple numerical experiments using the spectral element method (SEM) as an example of ho-FEMs. Errors of the type found by this study may occur generally in applications of ho-FEMs.

  18. The Communication Link and Error ANalysis (CLEAN) simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.; Crowe, Shane

    1993-01-01

    During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.

  19. Rasch Analysis of the Student Refractive Error and Eyeglass Questionnaire

    PubMed Central

    Crescioni, Mabel; Messer, Dawn H.; Warholak, Terri L.; Miller, Joseph M.; Twelker, J. Daniel; Harvey, Erin M.

    2014-01-01

    Purpose To evaluate and refine a newly developed instrument, the Student Refractive Error and Eyeglasses Questionnaire (SREEQ), designed to measure the impact of uncorrected and corrected refractive error on vision-related quality of life (VRQoL) in school-aged children. Methods A 38-statement instrument consisting of two parts was developed: Part A relates to perceptions regarding uncorrected vision and Part B relates to perceptions regarding corrected vision and includes other statements regarding VRQoL with spectacle correction. The SREEQ was administered to 200 Native American 6th through 12th grade students known to have previously worn and who currently require eyeglasses. Rasch analysis was conducted to evaluate the functioning of the SREEQ. Statements on Part A and Part B were analyzed to examine the dimensionality and constructs of the questionnaire, how well the items functioned, and the appropriateness of the response scale used. Results Rasch analysis suggested two items be eliminated and the measurement scale for matching items be reduced from a 4-point response scale to a 3-point response scale. With these modifications, categorical data were converted to interval-level data to conduct an item and person analysis. A shortened version of the SREEQ, the SREEQ-R, was constructed with these modifications; it includes the statements that were able to capture changes in VRQoL associated with spectacle wear for those with significant refractive error in our study population. Conclusions While the SREEQ Part B appears to have less than optimal reliability for assessing the impact of spectacle correction on VRQoL in our student population, it is also able to detect statistically significant differences from pretest to posttest on both the group and individual levels to show that the instrument can assess the impact that glasses have on VRQoL. Further modifications to the questionnaire, such as those included in the SREEQ-R, could enhance its functionality

  20. Numerical analysis method for linear induction machines.

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
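
    A minimal sketch of the mesh formulation described above follows: mutual coefficients give the voltage induced at each mesh point by unit current at every other point, and combining them with the mesh resistances yields a linear system for the unknown conductor currents. The coefficient matrix, the sign convention, and the way the stator excitation enters are illustrative placeholders; in the actual method they follow from the machine geometry and the specified phase currents.

    ```python
    import numpy as np

    def solve_mesh_currents(coupling, resistance, applied_emf):
        """Solve for unknown mesh currents in a linear induction machine model.

        coupling    : (n, n) matrix; coupling[i, j] is the voltage induced at mesh
                      point i by unit current at mesh point j (geometry-dependent).
        resistance  : (n,) resistance associated with each mesh point.
        applied_emf : (n,) voltage induced at each mesh point by the specified
                      stator phase currents.
        Returns the (n,) vector of induced conductor currents.
        """
        # Voltage balance at each mesh point (sign convention depends on the
        # formulation): R_i * I_i + sum_j coupling[i, j] * I_j = applied_emf_i
        A = np.diag(resistance) + coupling
        return np.linalg.solve(A, applied_emf)
    ```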

  1. Linearised and non-linearised isotherm models optimization analysis by error functions and statistical means

    PubMed Central

    2014-01-01

    In adsorption studies, describing the sorption process and evaluating the best-fitting isotherm model is a key analysis for investigating the theoretical hypothesis. Hence, numerous statistical analyses have been used extensively to assess the agreement between the experimental equilibrium adsorption values and the predicted equilibrium values. In the present study, several statistical error analyses were carried out to evaluate the fitness of the adsorption isotherm models, including the Pearson correlation, the coefficient of determination and the chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for the linearised and non-linearised models. The adsorption of phenol onto natural soil (local name Kalathur soil) was carried out in batch mode at 30 ± 2 °C. To obtain a holistic view of the analysis when estimating the isotherm parameters, the linear and non-linear isotherm models were compared. The results revealed that the above-mentioned error functions and statistical measures can be used to determine the best-fitting isotherm. PMID:25018878
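
    As an illustration of the goodness-of-fit measures named above, the short sketch below computes the Pearson correlation, the coefficient of determination and the chi-square statistic between experimental and model-predicted equilibrium uptakes; the variable names are illustrative.

    ```python
    import numpy as np

    def isotherm_fit_errors(q_exp, q_pred):
        """Error functions comparing experimental and predicted adsorption uptakes."""
        q_exp, q_pred = np.asarray(q_exp, float), np.asarray(q_pred, float)
        pearson_r = np.corrcoef(q_exp, q_pred)[0, 1]
        ss_res = np.sum((q_exp - q_pred) ** 2)
        ss_tot = np.sum((q_exp - q_exp.mean()) ** 2)
        r_squared = 1.0 - ss_res / ss_tot                    # coefficient of determination
        chi_square = np.sum((q_exp - q_pred) ** 2 / q_pred)  # nonlinear chi-square test
        return {"pearson_r": pearson_r, "R2": r_squared, "chi2": chi_square}
    ```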

  2. Analysis of Random Segment Errors on Coronagraph Performance

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; N'Diaye, Mamadou; Stahl, Mark T.; Stahl, H. Philip

    2016-01-01

    At the 2015 SPIE O&P conference we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance". Key findings: contrast leakage for a 4th-order Sinc2(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt; fewer segments (i.e., 1 ring) or very many segments (> 16 rings) give less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised findings: piston is only 2.5X more sensitive than tip/tilt.

  3. Analysis of Solar Two Heliostat Tracking Error Sources

    SciTech Connect

    Jones, S.A.; Stone, K.W.

    1999-01-28

    This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

  4. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1998-01-01

    We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including the model assessment application and the objective analysis application. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.

  5. Laser measurement and analysis of reposition error in polishing systems

    NASA Astrophysics Data System (ADS)

    Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying

    2015-10-01

    In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented. The geometric error of a robot-based polishing system is analyzed, and a mathematical model of the tilt error is presented. The study shows that errors of less than 1 mm are mainly caused by the tilt error at small incident angles. Marking the spot position with an interference fringe greatly enhances the error measurement precision; the measurement precision of the tilt error can reach 5 um. Measurement results show that the reposition error of the polishing system mainly stems from the tilt error caused by motor A, and the repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in practical error measurement, with low cost and simple operation.

  6. Numerical Analysis of Convection/Transpiration Cooling

    NASA Technical Reports Server (NTRS)

    Glass, David E.; Dilley, Arthur D.; Kelly, H. Neale

    1999-01-01

    An innovative concept utilizing the natural porosity of refractory-composite materials and hydrogen coolant to provide CONvective and TRANspiration (CONTRAN) cooling and oxidation protection has been numerically studied for surfaces exposed to a high heat flux high temperature environment such as hypersonic vehicle engine combustor walls. A boundary layer code and a porous media finite difference code were utilized to analyze the effect of convection and transpiration cooling on surface heat flux and temperature. The boundary layer code determined that transpiration flow is able to provide blocking of the surface heat flux only if it is above a minimum level due to heat addition from combustion of the hydrogen transpirant. The porous media analysis indicated that cooling of the surface is attained with coolant flow rates that are in the same range as those required for blocking, indicating that a coupled analysis would be beneficial.

  8. Three Dimensional Numerical Analysis on Discharge Properties

    NASA Astrophysics Data System (ADS)

    Takaishi, Kenji; Katsurai, Makoto

    2003-10-01

    A three-dimensional simulation code using the finite difference time domain (FDTD) method combined with a two-fluid model for electrons and ions has been developed for the microwave-excited surface wave plasma in the RDL-SWP device. The code permits numerical analysis of the spatial distributions of electric field, power absorption, electron density and electron temperature. At a low gas pressure of about 10 mTorr, the numerical results were compared with experimental measurements, which demonstrates the validity of this 3-D simulation code. A simplified analysis assuming a spatially uniform electron density has also been studied, and its applicability is evaluated with the 3-D simulation. The surface wave eigenmodes are determined by the electron density, and it is found that the structure of the device strongly influences the spatial distribution of the surface wave electric fields in the low-density region. A method of irradiating the microwave onto the whole surface area of the plasma is proposed, which is found to be effective for obtaining a highly uniform electron density distribution.

  9. Improved iterative error analysis for endmember extraction from hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Lixin; Zhang, Ying; Guindon, Bert

    2008-08-01

    Automated image endmember extraction from hyperspectral imagery is a challenge and a critical step in spectral mixture analysis (SMA). Over the past years, great efforts have been made and a large number of algorithms have been proposed to address this issue. Iterative error analysis (IEA) is one of the well-known existing endmember extraction methods. IEA identifies pixel spectra as image endmembers through an iterative process: in each iteration, a fully constrained (abundance nonnegativity and abundance sum-to-one constraints) spectral unmixing based on the previously identified endmembers is performed to model all image pixels, and the pixel spectrum with the largest residual error is then selected as a new image endmember. This paper proposes an updated version of IEA that improves the method in three respects. First, the fully constrained spectral unmixing is replaced by a weakly constrained (abundance nonnegativity and abundance sum-less-than-or-equal-to-one constraints) alternative. This is necessary because, at an intermediate iteration, only a subset of the endmembers present in a hyperspectral image has been extracted, so the abundance sum-to-one constraint is not yet valid. Second, the search strategy for achieving an optimal set of image endmembers is changed from sequential forward selection (SFS) to sequential forward floating selection (SFFS) to reduce the so-called "nesting effect" in the resultant set of endmembers. Third, a pixel spectrum is identified as a new image endmember depending on both its spectral extremity in the feature hyperspace of the dataset and its capacity to characterize other mixed pixels. This is achieved by evaluating the set of extracted endmembers using a criterion function that consists of the mean and standard deviation of the residual error image. Preliminary comparisons between the image endmembers extracted using the improved and the original IEA are conducted based on an airborne visible infrared imaging
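
    A compact sketch of the iterative loop underlying IEA is given below. It uses plain nonnegative least squares for the unmixing step and the original sequential forward selection, so it is an illustrative simplification rather than the authors' weakly constrained, SFFS-based implementation; the seeding rule is also a placeholder.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def iea_endmembers(pixels, n_endmembers):
        """Iterative error analysis (sketch): grow an endmember set by repeatedly
        unmixing all pixels and promoting the worst-modelled pixel spectrum.

        pixels : (n_pixels, n_bands) image spectra.
        """
        # Seed with the pixel farthest from the mean spectrum (illustrative choice).
        mean = pixels.mean(axis=0)
        endmembers = [pixels[np.argmax(np.linalg.norm(pixels - mean, axis=1))]]

        while len(endmembers) < n_endmembers:
            E = np.column_stack(endmembers)              # (n_bands, k) endmember matrix
            residuals = np.empty(len(pixels))
            for i, x in enumerate(pixels):
                abundances, rnorm = nnls(E, x)           # nonnegativity constraint only
                residuals[i] = rnorm                     # residual error of the fit
            endmembers.append(pixels[np.argmax(residuals)])  # largest error -> new endmember
        return np.array(endmembers)
    ```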

  10. Starlight emergence angle error analysis of star simulator

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Zhang, Guo-yu

    2015-10-01

    With the continuing development of key star sensor technologies, the precision of the star simulator must be further improved, because it directly affects the accuracy of star sensor laboratory calibration. To improve the accuracy of the star simulator, a theoretical accuracy analysis model is needed; such a model can be established from the ideal imaging model of the star simulator. Analysis of this model shows that the starlight emergence angle deviation is primarily affected by the star position deviation, principal point position deviation, focal length deviation, distortion deviation and object plane tilt deviation. From these contributing factors, a comprehensive deviation model is established, and formulas for each individual deviation model as well as for the comprehensive model are derived. Analyzing the properties of the individual and comprehensive deviation models yields the characteristics of each factor and the weighting relationships among them. Based on the results of the comprehensive deviation model, reasonable design indexes can be specified by considering the requirements of the star simulator optical system and the achievable precision of machining and alignment. Starlight emergence angle error analysis of the star simulator is therefore significant for determining and demonstrating the simulator's specifications, for analyzing and compensating its errors, and for establishing a theoretical basis for further improving its starlight angle precision.

  11. SIRTF Focal Plane Survey: A Pre-flight Error Analysis

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.

    2003-01-01

    This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.

  12. Analysis of Spherical Form Errors to Coordinate Measuring Machine Data

    NASA Astrophysics Data System (ADS)

    Chen, Mu-Chen

    Coordinate measuring machines (CMMs) are commonly utilized to take measurement data from manufactured surfaces for inspection purposes. The measurement data are then used to evaluate the geometric form errors associated with the surface. Traditionally, the evaluation of spherical form errors involves an optimization process of fitting a substitute sphere to the sampled points. This paper proposes computational strategies for sphericity with respect to the ASME Y14.5M-1994 standard. The proposed methods consider the trade-off between the accuracy of the sphericity evaluation and the efficiency of inspection. Two computational metrology approaches based on genetic algorithms (GAs) are proposed to explore the optimality of sphericity measurements and the sphericity feasibility analysis, respectively. The proposed algorithms are verified using several CMM data sets. The computational results show that the proposed algorithms are practical for on-line implementation in sphericity evaluation. Using the GA-based computational techniques, the accuracy of the sphericity assessment and the efficiency of the sphericity feasibility analysis are satisfactory.
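
    For reference, the conventional least-squares substitute-sphere fit against which such GA-based approaches are typically compared can be written as a linear least-squares problem; the sketch below fits the sphere and reports the sphericity as the width of the radial zone containing all sampled points. It illustrates only this baseline fitting step, not the GA itself.

    ```python
    import numpy as np

    def sphere_fit_sphericity(points):
        """Least-squares substitute sphere and sphericity of CMM sample points.

        points : (n, 3) measured coordinates.
        Solves x^2 + y^2 + z^2 = 2*a*x + 2*b*y + 2*c*z + d in a least-squares sense,
        giving center (a, b, c) and radius sqrt(d + a^2 + b^2 + c^2).
        """
        pts = np.asarray(points, float)
        A = np.column_stack([2.0 * pts, np.ones(len(pts))])
        rhs = np.sum(pts ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        center, d = sol[:3], sol[3]
        radius = np.sqrt(d + np.dot(center, center))
        radial = np.linalg.norm(pts - center, axis=1)
        sphericity = radial.max() - radial.min()      # width of the radial zone
        return center, radius, sphericity
    ```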

  13. Verifying the error bound of numerical computation implemented in computer systems

    SciTech Connect

    Sawada, Jun

    2013-03-12

    A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
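
    A toy illustration of the split-and-bound idea, not of the patented tool itself, is given below: the domain is split into segments and, on each segment, a plain polynomial in x is enclosed by elementary interval arithmetic, and segments whose upper bound exceeds the required error bound are reported. The real tool handles polynomials of bounded functions and produces formally verified results.

    ```python
    def interval_pow(lo, hi, k):
        """Interval [lo, hi] raised to integer power k (k >= 0)."""
        candidates = [lo ** k, hi ** k]
        if k % 2 == 0 and lo < 0.0 < hi:      # even powers attain their minimum at 0
            candidates.append(0.0)
        return min(candidates), max(candidates)

    def poly_interval_bound(coeffs, lo, hi):
        """Interval enclosure of p(x) = sum_k coeffs[k] * x**k over [lo, hi]."""
        p_lo = p_hi = 0.0
        for k, c in enumerate(coeffs):
            t_lo, t_hi = interval_pow(lo, hi, k)
            term_lo, term_hi = sorted([c * t_lo, c * t_hi])
            p_lo, p_hi = p_lo + term_lo, p_hi + term_hi
        return p_lo, p_hi

    def violating_segments(coeffs, domain, n_segments, error_bound):
        """Split the domain, bound the polynomial on each segment, and report
        segments whose enclosure exceeds the required error bound."""
        a, b = domain
        edges = [a + (b - a) * i / n_segments for i in range(n_segments + 1)]
        bad = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            _, upper = poly_interval_bound(coeffs, lo, hi)
            if upper > error_bound:
                bad.append((lo, hi, upper))
        return bad
    ```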

  14. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author

  15. Error analysis for earth orientation recovery from GPS data

    NASA Technical Reports Server (NTRS)

    Zelensky, N.; Ray, J.; Liebrecht, P.

    1990-01-01

    The use of GPS navigation satellites to study earth-orientation parameters in real-time is examined analytically with simulations of network geometries. The Orbit Analysis covariance-analysis program is employed to simulate the block-II constellation of 18 GPS satellites, and attention is given to the budget for tracking errors. Simultaneous solutions are derived for earth orientation given specific satellite orbits, ground clocks, and station positions with tropospheric scaling at each station. Media effects and measurement noise are found to be the main causes of uncertainty in earth-orientation determination. A program similar to the Polaris network using single-difference carrier-phase observations can provide earth-orientation parameters with accuracies similar to those for the VLBI program. The GPS concept offers faster data turnaround and lower costs in addition to more accurate determinations of UT1 and pole position.

  16. Soft X Ray Telescope (SXT) focus error analysis

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees

    1991-01-01

    The analysis performed on the soft x-ray telescope (SXT) to determine the correct thickness of the spacer needed to position the CCD camera at the best focus of the telescope, and to determine the maximum uncertainty in this focus position due to a number of metrology and experimental errors and to thermal and humidity effects, is presented. This type of analysis has been performed by the SXT prime contractor, Lockheed Palo Alto Research Lab (LPARL). The SXT project office at MSFC formed an independent team of experts to review the LPARL work and verify the analysis performed by them. Based on the recommendation of this team, the project office will decide whether an end-to-end focus test is required for the SXT prior to launch. The metrology and experimental data and the spreadsheets provided by LPARL are used as the basis of the analysis presented. The data entries in these spreadsheets have been verified as far as feasible, and the format of the spreadsheets has been improved to make them easier to understand. The results obtained from this analysis are very close to the results obtained by LPARL. However, due to the lack of organized documentation, the analysis uncovered a few areas of possibly erroneous metrology data, which may affect the results obtained by this analytical approach.

  17. Numerical Analysis of a Finite Element/Volume Penalty Method

    NASA Astrophysics Data System (ADS)

    Maury, Bertrand

    The penalty method makes it possible to incorporate a large class of constraints in general-purpose finite element solvers like freeFEM++. We present here some contributions to the numerical analysis of this method. We propose an abstract framework for this approach, together with some general error estimates based on the penalty parameter ε and the space discretization parameter h. As this work is motivated by the possibility to handle constraints like rigid motion for fluid-particle flows, we shall pay special attention to a model problem of this kind, where the constraint is prescribed over a subdomain. We show how the abstract estimate can be applied to this situation in the case where a non-body-fitted mesh is used. In addition, we describe how this method provides an approximation of the Lagrange multiplier associated with the constraint.
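
    A minimal finite-difference sketch of the penalty idea (my own illustration, not freeFEM++ or the paper's variational formulation): the rows associated with a constrained subdomain receive an extra 1/ε term, so the constraint u = 0 on that subdomain is enforced approximately and recovered as ε tends to zero.

```python
import numpy as np

n, eps = 101, 1e-8
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Standard finite-difference Laplacian for -u'' = 1 with u(0) = u(1) = 0
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)
A[0, :], A[-1, :], b[0], b[-1] = 0.0, 0.0, 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0

# Penalty: enforce u = 0 on the (illustrative) subdomain 0.4 <= x <= 0.6
constrained = (x >= 0.4) & (x <= 0.6)
A[constrained, constrained] += 1.0 / eps

u = np.linalg.solve(A, b)
print("max |u| on constrained subdomain:", np.abs(u[constrained]).max())
```

    Halving ε roughly halves the residual violation of the constraint, which is the behaviour the abstract error estimates quantify.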

  18. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and tropospheric correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  19. Design and analysis of vector color error diffusion halftoning systems.

    PubMed

    Damera-Venkata, N; Evans, B L

    2001-01-01

    Traditional error diffusion halftoning is a high quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we propose three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, esp. image sharpening and noise shaping. The proposed model includes linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and diffuse quantization error across color channels in an opponent color representation. Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations which we solve by using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank. PMID:18255498
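
    A minimal sketch of vector error diffusion with matrix-valued error-filter coefficients (for illustration the matrices are simply Floyd-Steinberg weights times the 3x3 identity, i.e., no cross-channel diffusion; the optimized, HVS-based filters of the paper are not reproduced).

```python
import numpy as np

def vector_error_diffusion(img, filters):
    """Binary-halftone an RGB image in [0, 1] using matrix-valued error filters.
    `filters` maps a neighbor offset (dy, dx) to a 3x3 diffusion matrix."""
    img = img.astype(float).copy()
    out = np.zeros_like(img)
    h, w, _ = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = (old >= 0.5).astype(float)      # per-channel binary quantizer
            out[y, x] = new
            err = old - new                        # 3-vector quantization error
            for (dy, dx), M in filters.items():
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    img[yy, xx] += M @ err         # matrix-valued diffusion
    return out

# Floyd-Steinberg weights on the identity (no cross-channel terms)
I = np.eye(3)
fs = {(0, 1): 7/16 * I, (1, -1): 3/16 * I, (1, 0): 5/16 * I, (1, 1): 1/16 * I}
halftone = vector_error_diffusion(np.random.rand(32, 32, 3), fs)
```

    Replacing the identity blocks with full matrices is what allows quantization error to be diffused across color channels, e.g., in an opponent-color representation.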

  20. Analysis of infusion pump error logs and their significance for health care.

    PubMed

    Lee, Paul T; Thompson, Frankle; Thimbleby, Harold

    Infusion therapy is one of the largest practised therapies in any healthcare organisation, and infusion pumps are used to deliver millions of infusions every year in the NHS. The aircraft industry downloads information from 'black boxes' to help design better systems and reduce risk; however, the same cannot be said about error logs and data logs from infusion pumps. This study downloaded and analysed approximately 360 000 hours of infusion pump error logs from 131 infusion pumps used for up to 2 years in one large acute hospital. Staff had to manage 260 129 alarms; this accounted for approximately 5% of total infusion time, costing about £1000 per pump per year. This paper describes many such insights, including numerous technical errors, propensity for certain alarms in clinical conditions, logistical issues and how infrastructure problems can lead to an increase in alarm conditions. Routine use of error log analysis, combined with appropriate management of pumps to help identify improved device design, use and application is recommended. PMID:22629592

  1. Numerical analysis of flows in reciprocating engines

    NASA Astrophysics Data System (ADS)

    Takata, H.; Kojima, M.

    1986-07-01

    A numerical method of the analysis for three-dimensional turbulent flow in cylinders of reciprocating engines with arbitrary geometry is described. A scheme of the finite volume/finite element methods is used, employing a large number of small elements of arbitrary shapes to form a cylinder. The fluid dynamic equations are expressed in integral form for each element, taking into account the deformation of the element shape according to the piston movements, and are solved in the physical space using rectangular coordinates. The conventional k-epsilon two-equation model is employed to describe the flow turbulence. Example calculations are presented for simple pancake-type combustion chambers having an annular intake port at either center or asymmetric position of the cylinder head. The suction inflow direction is also changed in several ways. The results show a good simulation of overall fluid movements within the engine cylinder.

  2. Incremental communication for multilayer neural networks: error analysis.

    PubMed

    Ghorbani, A A; Bhavsar, V C

    1998-01-01

    Artificial neural networks (ANNs) involve a large amount of internode communications. To reduce the communication cost as well as the time of learning process in ANNs, we earlier proposed (1995) an incremental internode communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent to a communication link. In this paper, the effects of the limited precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not cause instability. The analysis is supported by simulation studies of two problems. The simulation results demonstrate that the limited precision errors are bounded and do not seriously affect the convergence of multilayer neural networks. PMID:18252431
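
    A toy sketch of the incremental-communication idea (my illustration, not the authors' formulation): each link transmits only the quantized change in a node's output, the receiver accumulates the increments, and the reconstructed activation stays within half a quantization step of the exact one.

```python
import numpy as np

def quantize(delta, step=2 ** -8):
    """Represent an increment with limited precision (fixed-point step)."""
    return np.round(delta / step) * step

rng = np.random.default_rng(1)
exact = 0.0        # activation actually computed by the sending node
recon = 0.0        # activation reconstructed at the receiving node
sent_prev = 0.0    # last value the sender has communicated

for t in range(1000):
    exact += rng.normal(0.0, 0.05)          # activation drifts during training
    inc = quantize(exact - sent_prev)       # send only the quantized increment
    sent_prev += inc
    recon += inc
    assert abs(exact - recon) <= 2 ** -9 + 1e-12   # error stays bounded

print("final reconstruction error:", abs(exact - recon))
```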

  3. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  4. Error Analysis in Composition of Iranian Lower Intermediate Students

    ERIC Educational Resources Information Center

    Taghavi, Mehdi

    2012-01-01

    Learners make errors during the process of learning languages. This study examines errors in the writing task of twenty Iranian lower intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

  5. Analysis of personnel error occurrence reports across Defense Program facilities

    SciTech Connect

    Stock, D.A.; Shurberg, D.A.; O'Brien, J.N.

    1994-05-01

    More than 2,000 reports from the Occurrence Reporting and Processing System (ORPS) database were examined in order to identify weaknesses in the implementation of the guidance for the Conduct of Operations (DOE Order 5480.19) at Defense Program (DP) facilities. The analysis revealed recurrent problems involving procedures, training of employees, the occurrence of accidents, planning and scheduling of daily operations, and communications. Changes to DOE 5480.19 and modifications of the Occurrence Reporting and Processing System are recommended to reduce the frequency of these problems. The primary tool used in this analysis was a coding scheme based on the guidelines in 5480.19, which was used to classify the textual content of occurrence reports. The occurrence reports selected for analysis came from across all DP facilities, and listed personnel error as a cause of the event. A number of additional reports, specifically from the Plutonium Processing and Handling Facility (TA55), and the Chemistry and Metallurgy Research Facility (CMR), at Los Alamos National Laboratory, were analyzed separately as a case study. In total, 2070 occurrence reports were examined for this analysis. A number of core issues were consistently found in all analyses conducted, and all subsets of data examined. When individual DP sites were analyzed, including some sites which have since been transferred, only minor variations were found in the importance of these core issues. The same issues also appeared in different time periods, in different types of reports, and at the two Los Alamos facilities selected for the case study.

  6. The Impact of Text Genre on Iranian Intermediate EFL Students' Writing Errors: An Error Analysis Perspective

    ERIC Educational Resources Information Center

    Moqimipour, Kourosh; Shahrokhi, Mohsen

    2015-01-01

    The present study aimed at analyzing writing errors caused by the interference of the Persian language, regarded as the first language (L1), in three writing genres, namely narration, description, and comparison/contrast by Iranian EFL students. 65 English paragraphs written by the participants, who were at the intermediate level based on their…

  7. Reduction of S-parameter errors using singular spectrum analysis.

    PubMed

    Ozturk, Turgut; Uluer, İhsan; Ünal, İlhami

    2016-07-01

    A free-space measurement method, which consists of two horn antennas, a network analyzer, two frequency extenders, and a sample holder, is used to measure transmission (S21) coefficients in the 75-110 GHz (W-band) frequency range. A singular spectrum analysis method is presented to eliminate the error and noise of raw S21 data after the calibration and measurement processes. The proposed model can be applied easily to remove the repeated calibration process for each sample measurement. Hence, smooth, reliable, and accurate data are obtained to determine the dielectric properties of materials. In addition, the dielectric constant of materials (paper, polyvinylchloride-PVC, Ultralam® 3850HT, and glass) is calculated by thin sheet approximation and Newton-Raphson extracting techniques using a filtered S21 transmission parameter. PMID:27475579
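
    A compact sketch of singular spectrum analysis used as a denoising filter (generic SSA, not the authors' specific window or rank choices): the S21 trace is embedded in a Hankel trajectory matrix, the leading singular components are kept, and the series is reconstructed by anti-diagonal averaging.

```python
import numpy as np

def ssa_filter(x, window, rank):
    """Denoise a 1D series by truncated SVD of its Hankel trajectory matrix."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # rank-r approximation
    # Diagonal (Hankel) averaging back to a 1D series
    y = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        y[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return y / counts

# Noisy synthetic "S21 magnitude" trace over the W-band (illustrative values)
rng = np.random.default_rng(2)
f = np.linspace(75, 110, 351)                     # GHz
s21 = 0.8 + 0.05 * np.sin(2 * np.pi * f / 7) + 0.02 * rng.normal(size=f.size)
smooth = ssa_filter(s21, window=40, rank=3)
```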

  8. Reduction of S-parameter errors using singular spectrum analysis

    NASA Astrophysics Data System (ADS)

    Ozturk, Turgut; Uluer, Ihsan; Ünal, Ilhami

    2016-07-01

    A free-space measurement method, which consists of two horn antennas, a network analyzer, two frequency extenders, and a sample holder, is used to measure transmission (S21) coefficients in the 75-110 GHz (W-band) frequency range. A singular spectrum analysis method is presented to eliminate the error and noise of raw S21 data after the calibration and measurement processes. The proposed model can be applied easily to remove the repeated calibration process for each sample measurement. Hence, smooth, reliable, and accurate data are obtained to determine the dielectric properties of materials. In addition, the dielectric constant of materials (paper, polyvinylchloride-PVC, Ultralam® 3850HT, and glass) is calculated by thin sheet approximation and Newton-Raphson extracting techniques using a filtered S21 transmission parameter.

  9. Error analysis and data reduction for interferometric surface measurements

    NASA Astrophysics Data System (ADS)

    Zhou, Ping

    High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.

  10. Error analysis of exponential integrators for oscillatory second-order differential equations

    NASA Astrophysics Data System (ADS)

    Grimm, Volker; Hochbruck, Marlis

    2006-05-01

    In this paper, we analyse a family of exponential integrators for second-order differential equations in which high-frequency oscillations in the solution are generated by a linear part. Conditions are given which guarantee that the integrators allow second-order error bounds independent of the product of the step size with the frequencies. Our convergence analysis generalizes known results on the mollified impulse method by García-Archilla, Sanz-Serna and Skeel (1998, SIAM J. Sci. Comput. 30 930-63) and on Gautschi-type exponential integrators (Hairer E, Lubich Ch and Wanner G 2002 Geometric Numerical Integration (Berlin: Springer), Hochbruck M and Lubich Ch 1999 Numer. Math. 83 403-26).
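
    For orientation, here is a minimal scalar sketch of a Gautschi-type two-step method for y'' = -ω²y + g(t) (the classical unfiltered variant; the filter functions analysed in the paper are omitted, and the starting step is a simple variation-of-constants approximation).

```python
import numpy as np

def gautschi(omega, g, y0, v0, h, n_steps):
    """Two-step Gautschi-type exponential integrator for y'' = -omega^2 y + g(t).
    Uses y_{n+1} - 2 cos(h*omega) y_n + y_{n-1} = h^2 sinc^2(h*omega/2) g(t_n)."""
    psi = np.sinc(h * omega / (2 * np.pi)) ** 2     # np.sinc(x) = sin(pi x)/(pi x)
    ys = [y0,
          np.cos(h * omega) * y0 + np.sin(h * omega) / omega * v0
          + 0.5 * h**2 * psi * g(0.0)]              # approximate starting step
    for n in range(1, n_steps):
        t = n * h
        ys.append(2 * np.cos(h * omega) * ys[n] - ys[n - 1] + h**2 * psi * g(t))
    return np.array(ys)

# Stiff oscillator (omega = 50) with slow forcing, step size not tied to 1/omega
sol = gautschi(omega=50.0, g=lambda t: np.cos(t), y0=1.0, v0=0.0, h=0.05, n_steps=200)
```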

  11. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  12. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimates derived from an iterative grid-convergence refinement, are presented. Computational results are based on an unstructured-grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the longitudinal aerodynamic coefficients computed on the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal-force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the aerodynamic coefficients computed on the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
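
    The "infinite-size grid" values in such studies are typically obtained by Richardson extrapolation from solutions on successively refined grids; a generic sketch with illustrative numbers (not the Ares I data) follows.

```python
import math

def richardson(f_coarse, f_medium, f_fine, r):
    """Extrapolate a grid-converging quantity to an 'infinite-size' grid.
    r is the constant grid refinement ratio between successive grids."""
    # Observed order of convergence from three solutions
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    f_inf = f_fine + (f_fine - f_medium) / (r ** p - 1.0)   # extrapolated value
    err_fine = abs((f_fine - f_inf) / f_inf)                # relative error estimate
    return f_inf, p, err_fine

# Illustrative normal-force coefficients on three grids (refinement ratio 2)
f_inf, p, err = richardson(f_coarse=1.32, f_medium=1.25, f_fine=1.22, r=2.0)
print(f"extrapolated CN = {f_inf:.4f}, observed order = {p:.2f}, error = {err:.1%}")
```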

  13. Two numerical models for landslide dynamic analysis

    NASA Astrophysics Data System (ADS)

    Hungr, Oldrich; McDougall, Scott

    2009-05-01

    Two microcomputer-based numerical models (Dynamic ANalysis (DAN) and the three-dimensional model DAN3D) have been developed and extensively used for analysis of landslide runout, specifically for the purposes of practical landslide hazard and risk assessment. The theoretical basis of both models is a system of depth-averaged governing equations derived from the principles of continuum mechanics. Original features developed specifically during this work include: an open rheological kernel; explicit use of tangential strain to determine the tangential stress state within the flowing sheet, which is both more realistic and beneficial to the stability of the model; orientation of principal tangential stresses parallel with the direction of motion; inclusion of the centripetal forces corresponding to the true curvature of the path in the motion direction; and the use of very simple and highly efficient free-surface interpolation methods. Both models yield similar results when applied to the same sets of input data. Both algorithms are designed to work within the semi-empirical framework of the "equivalent fluid" approach. This approach requires selection of a material rheology and calibration of input parameters through back-analysis of real events. Although approximate, it facilitates simple and efficient operation while accounting for the most important characteristics of extremely rapid landslides. The two models have been verified against several controlled laboratory experiments with known physical basis. A large number of back-analyses of real landslides of various types have also been carried out. One example is presented. Calibration patterns are emerging, which give promise of predictive capability.

  14. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check. PMID:21102793

  15. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-01-01

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates following a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required by each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective. PMID:26178457
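
    A compact sketch of a naming game with a learning-error probability on a fully connected population (a generic implementation of the standard rules with word corruption; not the exact NGLE specification or the network topologies used in the paper).

```python
import random

def naming_game(n_agents=200, p_error=0.01, max_rounds=200_000, seed=0):
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]        # each agent's word inventory
    next_word = 0
    for t in range(max_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)   # fully connected topology
        if not vocab[speaker]:                      # speaker invents a word if needed
            vocab[speaker].add(next_word); next_word += 1
        word = rng.choice(sorted(vocab[speaker]))
        if rng.random() < p_error:                  # learning error corrupts the word
            word = next_word; next_word += 1
        if word in vocab[hearer]:                   # success: both collapse to the word
            vocab[speaker] = {word}; vocab[hearer] = {word}
        else:                                       # failure: hearer memorises the word
            vocab[hearer].add(word)
        if all(len(v) == 1 for v in vocab) and len(set.union(*vocab)) == 1:
            return t                                # interactions until global consensus
    return max_rounds                               # convergence not reached / impaired

print("interactions to consensus:", naming_game(p_error=0.0))
```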

  16. Error analysis for encoding a qubit in an oscillator

    SciTech Connect

    Glancy, S.; Knill, E.

    2006-01-15

    In Phys. Rev. A 64, 012310 (2001), Gottesman, Kitaev, and Preskill described a method to encode a qubit in the continuous Hilbert space of an oscillator's position and momentum variables. This encoding provides a natural error-correction scheme that can correct errors due to small shifts of the position or momentum wave functions (i.e., use of the displacement operator). We present bounds on the size of correctable shift errors when both qubit and ancilla states may contain errors. We then use these bounds to constrain the quality of input qubit and ancilla states.

  17. Flight instrumentation specification for parameter identification: Program user's guide. [instrument errors/error analysis

    NASA Technical Reports Server (NTRS)

    Mohr, R. L.

    1975-01-01

    A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.

  18. Nonclassicality thresholds for multiqubit states: Numerical analysis

    SciTech Connect

    Gruca, Jacek; Zukowski, Marek; Laskowski, Wieslaw; Kiesel, Nikolai; Wieczorek, Witlef; Weinfurter, Harald; Schmid, Christian

    2010-07-15

    States that strongly violate Bell's inequalities are required in many quantum-informational protocols as, for example, in cryptography, secret sharing, and the reduction of communication complexity. We investigate families of such states with a numerical method which allows us to reveal nonclassicality even without direct knowledge of Bell's inequalities for the given problem. An extensive set of numerical results is presented and discussed.

  19. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  20. Analysis of Children's Computational Errors: A Qualitative Approach

    ERIC Educational Resources Information Center

    Engelhardt, J. M.

    1977-01-01

    This study was designed to replicate and extend Roberts' (1968) efforts at classifying computational errors. 198 elementary school students were administered an 84-item arithmetic computation test. Eight types of errors were described which led to several tentative generalizations. (Editor/RK)

  1. English Majors' Errors in Translating Arabic Endophora: Analysis and Remedy

    ERIC Educational Resources Information Center

    Abdellah, Antar Solhy

    2007-01-01

    Egyptian English majors in the faculty of Education, South Valley University tend to mistranslate the plural inanimate Arabic pronoun with the singular inanimate English pronoun. A diagnostic test was designed to analyze this error. Results showed that a large number of students (first year and fourth year students) make this error, that the error…

  2. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology. PMID:26851473
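
    To make the ranking step concrete, here is the core TOPSIS computation in crisp form (the fuzzification of linguistic ratings used in the study is omitted; the criteria, weights and scores are invented for illustration).

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix: alternatives x criteria scores; benefit[j]=True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)          # vector normalisation
    v = norm * weights                                       # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                      # closeness coefficient

# Three candidate error factors scored on four criteria (illustrative values)
scores = np.array([[7.0, 6.0, 8.0, 5.0],
                   [5.0, 8.0, 6.0, 7.0],
                   [8.0, 5.0, 7.0, 6.0]])
weights = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([True, True, False, False])   # cost-type criteria marked False
print(topsis(scores, weights, benefit))          # higher closeness = address first
```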

  3. Analysis on the alignment errors of segmented Fresnel lens

    NASA Astrophysics Data System (ADS)

    Zhou, Xudong; Wu, Shibin; Yang, Wei; Wang, Lihua

    2014-09-01

    Stitching Fresnel lenses are designed for application in micro-focus X-ray optics, but splicing errors between sub-apertures will affect the optical performance of the entire mirror. The offset error tolerances of the different degrees of freedom between the sub-apertures are analyzed theoretically according to wave-front aberration theory, with the Rayleigh criterion as the evaluation criterion, and the correctness of the theory is then validated using the ZEMAX simulation software. The results show that the Z-axis piston error tolerance and the XY-axis translation error tolerance increase with increasing F-number of the stitching Fresnel lens, and the XY-axis tilt error tolerance decreases with increasing diameter. The results provide a theoretical basis and guidance for the design, detection and alignment of stitching Fresnel lenses.

  4. Error Analysis of Stereophotoclinometry in Support of the OSIRIS-REx Mission

    NASA Astrophysics Data System (ADS)

    Palmer, Eric; Gaskell, Robert W.; Weirich, John R.

    2015-11-01

    Stereophotoclinometry has been used on numerous planetary bodies to derive shape models, most recently 67P/Churyumov-Gerasimenko (Jorda et al., 2014), the Earth (Palmer et al., 2014) and Vesta (Gaskell, 2012). SPC is planned to create the ultra-high-resolution topography for the upcoming OSIRIS-REx mission that will sample the asteroid Bennu, arriving in 2018. This shape model will be used both for scientific analysis and for operational navigation, including providing the topography that will ensure a safe collection of the surface. We present the initial results of an error analysis of SPC, with specific focus on how both systematic and non-systematic errors propagate through SPC into the shape model. For this testing, we have created a notional global truth model at 5 cm and a single region at 2.5 mm ground sample distance. These truth models were used to create images using GSFC's software Freespace. Then these images were used by SPC to form a derived shape model with a ground sample distance of 5 cm. We will report on both the absolute and relative error of the derived shape model compared to the original truth model, as well as other empirical and theoretical measurements of errors within SPC. Jorda, L. et al. (2014) "The Shape of Comet 67P/Churyumov-Gerasimenko from Rosetta/Osiris Images", AGU Fall Meeting, #P41C-3943. Gaskell, R. (2012) "SPC Shape and Topography of Vesta from DAWN Imaging Data", DPS Meeting #44, #209.03. Palmer, L., Sykes, M. V., Gaskell, R. W. (2014) "Mercator - Autonomous Navigation Using Panoramas", LPSC 45, #1777.

  5. Diagnosing non-Gaussianity of forecast and analysis errors in a convective-scale model

    NASA Astrophysics Data System (ADS)

    Legrand, R.; Michel, Y.; Montmerle, T.

    2016-01-01

    In numerical weather prediction, the problem of estimating initial conditions with a variational approach is usually based on a Bayesian framework associated with a Gaussianity assumption on the probability density functions of both observation and background errors. In practice, Gaussianity of errors is tied to linearity, in the sense that a nonlinear model will yield non-Gaussian probability density functions. In this context, standard methods relying on the Gaussian assumption may perform poorly. This study aims to describe some aspects of non-Gaussianity of forecast and analysis errors in a convective-scale model using a Monte Carlo approach based on an ensemble of data assimilations. For this purpose, an ensemble of 90 members of cycled perturbed assimilations has been run over a highly precipitating case of interest. Non-Gaussianity is measured using the K2 statistic from the D'Agostino test, which is related to the sum of the squares of the univariate skewness and kurtosis. Results confirm that specific humidity is the least Gaussian variable according to that measure, and also that non-Gaussianity is generally more pronounced in the boundary layer and in cloudy areas. The dynamical control variables used in our data assimilation, namely vorticity and divergence, also show distinct non-Gaussian behaviour. It is shown that while non-Gaussianity increases with forecast lead time, it is efficiently reduced by the data assimilation step, especially in areas well covered by observations. Our findings may have implications for the choice of the control variables.
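
    The K2 statistic referred to here is what SciPy's normaltest computes; a generic usage sketch (not the study's diagnostic code) combining transformed sample skewness and kurtosis into a single omnibus measure of non-Gaussianity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gaussian_sample = rng.normal(size=2000)                  # e.g., a well-observed area
skewed_sample = rng.gamma(shape=2.0, size=2000)          # e.g., humidity-like errors

for name, sample in [("gaussian", gaussian_sample), ("skewed", skewed_sample)]:
    k2, p = stats.normaltest(sample)                     # D'Agostino-Pearson K2 test
    print(f"{name}: K2 = {k2:.1f}, p = {p:.3g}")         # large K2 -> non-Gaussian
```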

  6. Numerical analysis and measurement in corner-fired furnace

    SciTech Connect

    Zhengjun, S.; Rongsheng, G.

    1999-07-01

    For several years, numerical analysis has been successfully used by Dongfang Boiler (Group) Co., Ltd. (DBC) on a 200 MW boiler, a 300 MW boiler and other units designed and made by DBC. The distributions of results obtained from numerical analysis and from measurement agree with each other. In conclusion, it is considered that numerical analysis can be used as an important reference method in pulverized-coal boiler design and testing.

  7. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of the 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence on the positioning errors derived from laser point cloud spacing is small. In the model, the positioning errors follow a simple harmonic vibration whose amplitude envelope gradually decreases with increasing vibration frequency. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the planar error, and within the plane the error in the scanning direction is less than the error in the flight direction. The conclusions are verified through the analysis of flight test data.

  8. The slider motion error analysis by positive solution method in parallel mechanism

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoqing; Zhang, Lisong; Zhu, Liang; Yang, Wenguo; Hu, Penghao

    2016-01-01

    The motion error of the slider plays a key role in the performance of a 3-PUU parallel coordinate measuring machine (CMM) and influences the CMM accuracy, which has attracted wide attention from experts. Generally, the analysis method is based on a spatial 6-DOF view. Here, a new analysis method is provided. First, the structural relation between the slider and the guideway is abstracted as a 4-bar parallel mechanism, so the slider can be considered as the moving platform of a parallel kinematic mechanism (PKM), and its motion error analysis is transferred to the position analysis of the moving platform in the PKM. Then, after establishing the positive and negative solutions, existing theory and techniques for PKMs can be applied to analyze the slider's straightness motion error and angular motion error simultaneously. Third, experiments with an autocollimator are carried out to capture the original error data of the guideway itself; the data can be described as straightness error functions by fitting curvilinear equations. Finally, the straightness errors of the two guideways are treated as variations of the rod lengths in the parallel mechanism, and the slider's straightness error and angular error are obtained by substituting the data into the established model. The calculated results are generally consistent with the experimental results. The idea will be beneficial for accuracy calibration and error correction of the 3-PUU CMM and also provides a new way to analyze the kinematic error of guideways in precision machine tools and precision instruments.

  9. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

    ERIC Educational Resources Information Center

    Monagle, E. Brette

    The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…

  10. Errors Analysis of Solving Linear Inequalities among the Preparatory Year Students at King Saud University

    ERIC Educational Resources Information Center

    El-khateeb, Mahmoud M. A.

    2016-01-01

    This study aims to investigate the classes of errors made by the preparatory year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios of common errors that occurred in solving inequalities. In the collection of the data,…

  11. Numerical analysis of granular soil fabrics

    NASA Astrophysics Data System (ADS)

    Torbahn, L.; Huhn, K.

    2012-04-01

    Soil stability strongly depends on the material strength, which is in general influenced by deformation processes and vice versa. Hence, investigation of material strength is of great interest in many geoscientific studies where soil deformations occur, e.g. the destabilization of slopes or the evolution of fault gouges. Particularly in the former case, slope failure occurs if the applied forces exceed the shear strength of the slope material. Hence, the soil resistance, or respectively the material strength, acts contrary to deformation processes. Furthermore, geotechnical experiments, e.g. direct shear or ring shear tests, suggest that shear resistance mainly depends on properties of soil structure, texture and fabric. Although laboratory tests enable investigations of soil structure and texture during shear, detailed observations inside the sheared specimen during the failure process, as well as of fabric effects, are very limited. So, high-resolution information in space and time regarding texture evolution and/or grain behavior during shear is not available. However, such data are essential to gain a deeper insight into the key role of soil structure, texture, etc. on material strength and the physical processes occurring during material deformation on a micro-scaled level. Additionally, laboratory tests are not completely reproducible, which hampers a detailed statistical investigation of fabric during shear. So, almost identical setups to run methodical tests investigating the impact of fabric on soil resistance are hard to achieve under laboratory conditions. Hence, we used numerical shear test experiments utilizing the Discrete Element Method to quantify the impact of different material fabrics on the shear resistance of soil, as this granular model approach enables us to investigate failure processes on a grain-scaled level. Our numerical setup adapts general settings from laboratory tests while the model characteristics are fixed except for the soil structure particularly the used

  12. Systematic error analysis for 3D nanoprofiler tracing normal vector

    NASA Astrophysics Data System (ADS)

    Kudo, Ryota; Tokuta, Yusuke; Nakano, Motohiro; Yamamura, Kazuya; Endo, Katsuyoshi

    2015-10-01

    In recent years, demand for optical elements with a high degree of freedom in shape has increased. High-precision aspherical shapes are required for X-ray focusing mirrors, and free-form surface optical elements are used in head-mounted displays and similar devices. For the fabrication of such optical devices, measurement technology is essential. We have developed a high-precision 3D nanoprofiler. With the nanoprofiler, normal vector information of the sample surface is obtained on the basis of the linearity of light. Normal vector information is the differential of the shape, so the shape can be determined by integration. Sub-nanometer repeatability has been achieved with the nanoprofiler. To pursue shape accuracy further, the systematic errors are analyzed; they comprise the figure error of the sample and the assembly errors of the device. The method utilizes the information of the ideal shape of the sample, from which the measurement point coordinates and normal vectors are calculated. However, the measured figure differs from the ideal shape because of the systematic errors. Therefore, the measurement point coordinates and normal vectors are calculated again by feeding back the measured figure, and correction of the errors is attempted by re-deriving the figure. The effectiveness of this approach was confirmed theoretically by simulation. The approach was also applied to the experiment, which confirmed the possibility of a figure correction of about 4 nm PV for the employed sample.

  13. The impact of response measurement error on the analysis of designed experiments

    DOE PAGESBeta

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2015-12-21

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
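
    A small simulation in the spirit of the study (a generic two-level comparison with invented effect size and variances; not the authors' code) shows how additive response measurement error inflates the variance seen by a standard t-test and erodes its power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_rep, effect, sigma_process = 8, 1.0, 1.0

def power(sigma_meas, n_sim=4000):
    """Fraction of simulated experiments in which the t-test detects the effect."""
    rejections = 0
    for _ in range(n_sim):
        low = rng.normal(0.0, sigma_process, n_rep) + rng.normal(0.0, sigma_meas, n_rep)
        high = (effect + rng.normal(0.0, sigma_process, n_rep)
                + rng.normal(0.0, sigma_meas, n_rep))
        if stats.ttest_ind(low, high).pvalue < 0.05:
            rejections += 1
    return rejections / n_sim

for sm in (0.0, 0.5, 1.0):     # increasing response measurement error
    print(f"measurement sd = {sm}: power = {power(sm):.2f}")
```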

  14. The impact of response measurement error on the analysis of designed experiments

    SciTech Connect

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2015-12-21

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  15. SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to insure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.

  16. Numerical Analysis of the SCHOLAR Supersonic Combustor

    NASA Technical Reports Server (NTRS)

    Rodriguez, Carlos G.; Cutler, Andrew D.

    2003-01-01

    The SCHOLAR scramjet experiment is the subject of an ongoing numerical investigation. The facility nozzle and combustor were solved separately and sequentially, with the exit conditions of the former used as inlet conditions for the latter. A baseline configuration for the numerical model was compared with the available experimental data. It was found that ignition delay was underpredicted and fuel-plume penetration overpredicted, while the pressure rise was close to experimental values. In addition, grid convergence by means of grid sequencing could not be established. The effects of the different turbulence parameters were quantified. It was found that it was not possible to simultaneously predict the three main parameters of this flow: pressure rise, ignition delay, and fuel-plume penetration.

  17. A Numerical Model for Atomtronic Circuit Analysis

    SciTech Connect

    Chow, Weng W.; Straatsma, Cameron J. E.; Anderson, Dana Z.

    2015-07-16

    A model for studying atomtronic devices and circuits based on finite-temperature Bose-condensed gases is presented. The approach involves numerically solving equations of motion for atomic populations and coherences, derived using the Bose-Hubbard Hamiltonian and the Heisenberg picture. The resulting cluster expansion is truncated at a level giving balance between physics rigor and numerical demand mitigation. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to nonequilibrium state evolution, as well as the much longer time durations typical for reaching steady-state device operation. This model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration phase locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied.

  18. Numerical model for atomtronic circuit analysis

    NASA Astrophysics Data System (ADS)

    Chow, Weng W.; Straatsma, Cameron J. E.; Anderson, Dana Z.

    2015-07-01

    A model for studying atomtronic devices and circuits based on finite-temperature Bose-condensed gases is presented. The approach involves numerically solving equations of motion for atomic populations and coherences, derived using the Bose-Hubbard Hamiltonian and the Heisenberg picture. The resulting cluster expansion is truncated at a level giving balance between physics rigor and numerical demand mitigation. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to nonequilibrium state evolution, as well as the much longer time durations typical for reaching steady-state device operation. The model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration phase locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied.

  19. Normal-reciprocal error models for quantitative ERT in permafrost environments: bin analysis versus histogram analysis

    NASA Astrophysics Data System (ADS)

    Verleysdonk, Sarah; Flores-Orozco, Adrian; Krautblatter, Michael; Kemna, Andreas

    2010-05-01

    Electrical resistivity tomography (ERT) has been used for the monitoring of permafrost-affected rock walls for some years now. To further enhance the interpretation of ERT measurements a deeper insight into error sources and the influence of error model parameters on the imaging results is necessary. Here, we present the effect of different statistical schemes for the determination of error parameters from the discrepancies between normal and reciprocal measurements - bin analysis and histogram analysis - using a smoothness-constrained inversion code (CRTomo) with an incorporated appropriate error model. The study site is located in galleries adjacent to the Zugspitze North Face (2800 m a.s.l.) at the border between Austria and Germany. A 20 m * 40 m rock permafrost body and its surroundings have been monitored along permanently installed transects - with electrode spacings of 1.5 m and 4.6 m - from 2007 to 2009. For data acquisition, a conventional Wenner survey was conducted as this array has proven to be the most robust array in frozen rock walls. Normal and reciprocal data were collected directly one after another to ensure identical conditions. The ERT inversion results depend strongly on the chosen parameters of the employed error model, i.e., the absolute resistance error and the relative resistance error. These parameters were derived (1) for large normal/reciprocal data sets by means of bin analyses and (2) for small normal/reciprocal data sets by means of histogram analyses. Error parameters were calculated independently for each data set of a monthly monitoring sequence to avoid the creation of artefacts (over-fitting of the data) or unnecessary loss of contrast (under-fitting of the data) in the images. The inversion results are assessed with respect to (1) raw data quality as described by the error model parameters, (2) validation via available (rock) temperature data and (3) the interpretation of the images from a geophysical as well as a
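
    A schematic of the bin-analysis step (generic implementation with synthetic data; the bin count and error parameters are illustrative): normal-reciprocal misfits are grouped into bins of increasing mean resistance, and the per-bin spread is fitted with the linear error model s(R) = a + b*|R| used to weight the inversion.

```python
import numpy as np

rng = np.random.default_rng(3)
R = 10 ** rng.uniform(0, 3, 2000)                 # synthetic mean resistances [ohm]
true_a, true_b = 0.02, 0.05
dR = rng.normal(0.0, true_a + true_b * R)         # normal-reciprocal misfits

def bin_error_model(R, dR, n_bins=20):
    """Fit the error model s(R) = a + b*|R| from binned normal/reciprocal misfits."""
    order = np.argsort(R)
    R_sorted, dR_sorted = R[order], dR[order]
    bins = np.array_split(np.arange(R.size), n_bins)    # equal-count bins
    R_bin = np.array([R_sorted[idx].mean() for idx in bins])
    s_bin = np.array([dR_sorted[idx].std() for idx in bins])
    b, a = np.polyfit(R_bin, s_bin, 1)                  # slope = relative, intercept = absolute
    return a, b

print(bin_error_model(R, dR))                           # should be close to (0.02, 0.05)
```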

  20. Direct numerical simulations in solid mechanics for quantifying the macroscale effects of microstructure and material model-form error

    DOE PAGESBeta

    Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.

    2016-03-16

    Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered-cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.

  1. Direct Numerical Simulations in Solid Mechanics for Quantifying the Macroscale Effects of Microstructure and Material Model-Form Error

    NASA Astrophysics Data System (ADS)

    Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.

    2016-05-01

    Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered-cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Ultimately, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.

  2. Error Analysis of Weekly Station Coordinates in the DORIS Network

    NASA Astrophysics Data System (ADS)

    Williams, Simon D. P.; Willis, Pascal

    2006-11-01

    Twelve years of DORIS data from 31 selected sites of the IGN/JPL (Institut Géographique National/Jet Propulsion Laboratory) solution IGNWD05 have been analysed using maximum likelihood estimation (MLE) in an attempt to understand the nature of the noise in the weekly station coordinate time-series. Six alternative noise models in a total of 12 different combinations were used as possible descriptions of the noise. The six noise models can be divided into two natural groups, temporally uncorrelated (white) noise and temporally correlated (coloured) noise. The noise can be described as a combination of variable white noise and one of flicker, first-order Gauss-Markov or power-law noise. The data set as a whole is best described as a combination of variable white noise plus flicker noise. The variable white noise, which is white noise with variable amplitude that is a function of the weekly formal errors multiplied by an estimated scale factor, shows a dependence on site latitude and the number of DORIS-equipped satellites used in the solution. The latitude dependence is largest in the east component due to the near-polar orbit of the SPOT satellites. The amplitude of the flicker noise is similar in all three components and equal to about 20 mm/yr^(1/4). There appears to be no latitude dependence of the flicker noise amplitude. The uncertainty in rates (site velocities) after 12 years is just under 1 mm/year. These uncertainties are around 3-4 times larger than if only variable white noise had been assumed, i.e., no temporally correlated noise. A rate uncertainty of 1 mm/year after 12 years in the vertical is similar to that achieved using Global Positioning System (GPS) data, but it takes DORIS twice as long as GPS to reach 1 mm/year in the horizontal. The analysis has also helped to identify sites with either anomalous noise characteristics or large noise amplitudes, and tested the validity of previously proposed discontinuities. In addition, several new offsets

  3. Error analysis of householder transformations as applied to the standard and generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Ward, R. C.

    1974-01-01

    Backward error analyses of the application of Householder transformations to both the standard and the generalized eigenvalue problems are presented. The analysis for the standard eigenvalue problem determines the error from the application of an exact similarity transformation, and the analysis for the generalized eigenvalue problem determines the error from the application of an exact equivalence transformation. Bounds for the norms of the resulting perturbation matrices are presented and compared with existing bounds when known.
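
    For reference, here is a standard Householder reflector and its use to annihilate sub-diagonal entries, the textbook construction to which such backward error analyses apply (not the paper's own routines).

```python
import numpy as np

def householder_vector(x):
    """Return v, beta such that (I - beta v v^T) x = -sign(x[0]) * ||x|| * e1."""
    v = x.astype(float).copy()
    v[0] += np.copysign(np.linalg.norm(x), x[0])     # sign choice avoids cancellation
    beta = 2.0 / (v @ v)
    return v, beta

def qr_householder(A):
    """Reduce A to upper-triangular form by successive Householder reflections."""
    R = A.astype(float).copy()
    m, n = R.shape
    for k in range(min(m - 1, n)):
        v, beta = householder_vector(R[k:, k])
        R[k:, k:] -= beta * np.outer(v, v @ R[k:, k:])   # apply the orthogonal reflector
    return R

A = np.random.rand(5, 4)
R = qr_householder(A)
print(np.allclose(np.tril(R, -1), 0.0, atol=1e-12))      # sub-diagonal annihilated
```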

  4. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    NASA Astrophysics Data System (ADS)

    Pan, B.; Wang, B.; Lubineau, G.

    2016-07-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work.

  5. Pitch Error Analysis of Young Piano Students' Music Reading Performances

    ERIC Educational Resources Information Center

    Rut Gudmundsdottir, Helga

    2010-01-01

    This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

  6. Analysis of Students' Error in Learning of Quadratic Equations

    ERIC Educational Resources Information Center

    Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

    2010-01-01

    The purpose of the study was to determine students' errors in learning quadratic equations. The sample consisted of 30 Form Three students from a secondary school in Jambi, Indonesia. A diagnostic test was used as the instrument of this study; it covered three components: factorization, completing the square and the quadratic formula. Diagnostic interview…
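
    For readers unfamiliar with the three procedures covered by the diagnostic test, the short sketch below applies each of them to x^2 - 5x + 6 = 0 (the example equation is an assumption chosen so that it factors cleanly, and is not taken from the study).

      import math

      a, b, c = 1.0, -5.0, 6.0

      # Quadratic formula: x = (-b +/- sqrt(b^2 - 4ac)) / (2a)
      disc = b * b - 4 * a * c
      roots_formula = ((-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a))

      # Completing the square: (x - h)^2 = k^2 with h = -b/(2a), k = sqrt(b^2 - 4ac)/(2a)
      h = -b / (2 * a)
      k = math.sqrt(disc) / (2 * a)
      roots_completed = (h + k, h - k)

      # Factorization check: x^2 - 5x + 6 = (x - 2)(x - 3), so the roots are 2 and 3
      print(roots_formula, roots_completed)   # both give (3.0, 2.0)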

  7. Shape error analysis for reflective nano focusing optics

    SciTech Connect

    Modi, Mohammed H.; Idir, Mourad

    2010-06-23

    Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields, so the converging beam at the focal spot differs from the desired one. The effect of these errors on focusing performance can be calculated with a wave-optical approach, considering coherent illumination of the optical elements. We have developed a wave-optics simulator based on the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as the aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide spatial-frequency range, covering the high-, mid- and low-frequency bands. The results obtained with a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak-to-valley) is tolerable for diffraction-limited performance. It is desirable to remove shape errors at very low spatial frequencies, such as 0.1 mm⁻¹, which would otherwise broaden the beam waist or generate satellite peaks. Frequencies above this limit do not affect the focused beam profile but only cause a loss in intensity.
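
    A stripped-down 1-D version of such a wave-optics calculation can be sketched as follows (an illustration, not the authors' simulator; wavelength, grazing angle, aperture and error amplitude are assumed values). The pupil phase error produced by a height error h(x) on a grazing-incidence mirror is roughly 4*pi*h(x)*sin(theta)/lambda, and the focal-plane intensity is obtained from a discretized diffraction integral of the pupil function.

      import numpy as np

      lam = 0.1e-9                  # wavelength, m (hard x-rays; assumed)
      theta = 3e-3                  # grazing-incidence angle, rad (assumed)
      focal = 0.1                   # focal distance, m (assumed)
      aperture = 1e-3               # illuminated pupil length, m (assumed)
      k = 2 * np.pi / lam

      x = np.linspace(-aperture / 2, aperture / 2, 4096)   # pupil coordinate
      h = 2e-9 * np.sin(2 * np.pi * x / 0.2e-3)            # 2 nm sinusoidal height error, 0.2 mm period
      phase_err = 4 * np.pi * h * np.sin(theta) / lam      # path-length error 2*h*sin(theta) as a phase

      u = np.linspace(-200e-9, 200e-9, 1001)               # focal-plane coordinate, m
      kernel = np.exp(-1j * k * np.outer(u, x) / focal)    # Fourier kernel of an ideal focusing pupil

      def focal_intensity(phi):
          """Focal-plane intensity from the aberrated pupil function exp(i*phi)."""
          field = kernel @ np.exp(1j * phi)
          return np.abs(field) ** 2

      I_ideal = focal_intensity(np.zeros_like(x))
      I_err = focal_intensity(phase_err)
      print("Strehl-like peak-intensity ratio:", I_err.max() / I_ideal.max())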

  8. Reading and Spelling Error Analysis of Native Arabic Dyslexic Readers

    ERIC Educational Resources Information Center

    Abu-rabia, Salim; Taha, Haitham

    2004-01-01

    This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated words and pseudowords. Two…

  9. Oral Definitions of Newly Learned Words: An Error Analysis

    ERIC Educational Resources Information Center

    Steele, Sara C.

    2012-01-01

    This study examined and compared patterns of errors in the oral definitions of newly learned words. Fifteen 9- to 11-year-old children with language learning disability (LLD) and 15 typically developing age-matched peers inferred the meanings of 20 nonsense words from four novel reading passages. After reading, children provided oral definitions…

  10. Analysis of Errors Made by Students Solving Genetics Problems.

    ERIC Educational Resources Information Center

    Costello, Sandra Judith

    The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…