Accounting for Errors in Model Analysis Theory: A Numerical Approach
NASA Astrophysics Data System (ADS)
Sommer, Steven R.; Lindell, Rebecca S.
2004-09-01
By studying the patterns of a group of individuals' responses to a series of multiple-choice questions, researchers can utilize Model Analysis Theory to create a probability distribution of mental models for a student population. The eigenanalysis of this distribution yields information about what mental models the students possess, as well as how consistently they utilize these mental models. Although the theory considers the probabilistic distribution to be fundamental, there exist opportunities for random errors to occur. In this paper we will discuss a numerical approach for mathematically accounting for these random errors. As an example of this methodology, analysis of data obtained from the Lunar Phases Concept Inventory will be presented. Limitations and applicability of this numerical approach will be discussed.
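The density-matrix construction behind Model Analysis Theory can be sketched numerically. The response counts below are hypothetical, and the code follows the standard published formulation rather than the authors' own implementation:

```python
import numpy as np

# Hypothetical data: rows are students, columns are mental models;
# each entry counts how many of the n questions a student answered
# using that model.
counts = np.array([[8.0, 2.0, 0.0],
                   [5.0, 5.0, 0.0],
                   [1.0, 2.0, 7.0]])
n = counts.sum(axis=1, keepdims=True)

# Per-student state vector with components sqrt(count / n)
u = np.sqrt(counts / n)

# Class density matrix: average of the outer products u u^T
D = np.mean([np.outer(v, v) for v in u], axis=0)

vals = np.linalg.eigvalsh(D)   # ascending eigenvalues
primary = vals[-1]             # a large value indicates consistent model use
```

Random response errors perturb `counts` and hence the spectrum of `D`; the numerical approach described in the abstract concerns bounding that effect.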
Minimizing Errors in Numerical Analysis of Chemical Data.
ERIC Educational Resources Information Center
Rusling, James F.
1988-01-01
Investigates minimizing errors in computational methods commonly used in chemistry. Provides a series of examples illustrating the propagation of errors, finite difference methods, and nonlinear regression analysis. Includes illustrations to explain these concepts. (MVL)
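As an illustration of the propagation-of-errors topic (with invented numbers, not data from the article), first-order Gaussian propagation through Beer's law can be checked against a Monte Carlo sample:

```python
import numpy as np

# Beer's law: c = A / (eps * l); the values below are illustrative only
A, sA = 0.80, 0.01            # absorbance and its standard error
eps, seps = 1.2e4, 2.0e2      # molar absorptivity, L mol^-1 cm^-1
l = 1.0                       # path length (cm), taken as exact

c = A / (eps * l)

# For a quotient, relative variances add in quadrature
sc = c * np.sqrt((sA / A) ** 2 + (seps / eps) ** 2)

# Monte Carlo check of the linearized propagation formula
rng = np.random.default_rng(0)
mc = rng.normal(A, sA, 100_000) / (rng.normal(eps, seps, 100_000) * l)
```

The sample standard deviation of `mc` should agree with `sc` to within a few percent, confirming that the linearization is adequate at these error levels.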
NASA Astrophysics Data System (ADS)
Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef
2013-01-01
The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ~100-200 M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing over the binary parameters.
Numerical errors in the presence of steep topography: analysis and alternatives
Lundquist, K A; Chow, F K; Lundquist, J K
2010-04-15
It is well known in computational fluid dynamics that grid quality affects the accuracy of numerical solutions. When assessing grid quality, properties such as aspect ratio, orthogonality of coordinate surfaces, and cell volume are considered. Mesoscale atmospheric models generally use terrain-following coordinates with large aspect ratios near the surface. As high resolution numerical simulations are increasingly used to study topographically forced flows, a high degree of non-orthogonality is introduced, especially in the vicinity of steep terrain slopes. Numerical errors associated with the use of terrain-following coordinates can adversely affect the accuracy of the solution in steep terrain. Inaccuracies from the coordinate transformation are present in each spatially discretized term of the Navier-Stokes equations, as well as in the conservation equations for scalars. In particular, errors in the computation of horizontal pressure gradients, diffusion, and horizontal advection terms have been noted in the presence of sloping coordinate surfaces and steep topography. In this work we study the effects of these spatial discretization errors on the flow solution for three canonical cases: scalar advection over a mountain, an atmosphere at rest over a hill, and forced advection over a hill. This study is completed using the Weather Research and Forecasting (WRF) model. Simulations with terrain-following coordinates are compared to those using a flat coordinate, where terrain is represented with the immersed boundary method. The immersed boundary method is used as a tool which allows us to eliminate the terrain-following coordinate transformation, and quantify numerical errors through a direct comparison of the two solutions. Additionally, the effects of related issues such as the steepness of terrain slope and grid aspect ratio are studied in an effort to gain an understanding of numerical domains where terrain-following coordinates can successfully be used.
Some Surprising Errors in Numerical Differentiation
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2012-01-01
Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
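The error pattern described here is easy to reproduce. A minimal sketch (ours, not the article's code) applies the Newton difference quotient to sin at x = 1 over a range of step sizes:

```python
import numpy as np

x = 1.0
hs = np.array([1e-1, 1e-2, 1e-4, 1e-8, 1e-12])
approx = (np.sin(x + hs) - np.sin(x)) / hs     # Newton difference quotient
err = np.abs(approx - np.cos(x))
# In the truncation-dominated regime the error falls like h/2 * |f''(x)|,
# but for very small h round-off in the subtraction takes over and the
# error grows again, so the error is not monotone in h.
```

Plotting `err` against `hs` on log-log axes exposes the V-shaped pattern the article analyzes with Taylor polynomials and L'Hopital's rule.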
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
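A toy version of the interval idea can be written in a few lines of ordinary Python; unlike INTLAB, this sketch does not perform the outward rounding a rigorous interval package requires:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# Two measurements, 2.0 +/- 0.1 and 3.0 +/- 0.2, as intervals
a = Interval(1.9, 2.1)
b = Interval(2.8, 3.2)
area = a * b    # every possible product lies inside this interval
diff = b - a    # likewise for the difference
```

Propagating intervals through a formula bounds the result of every combination of input errors at once, which is the effort saving the abstract refers to.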
Numerical error in groundwater flow and solute transport simulation
NASA Astrophysics Data System (ADS)
Woods, Juliette A.; Teubner, Michael D.; Simmons, Craig T.; Narayan, Kumar A.
2003-06-01
Models of groundwater flow and solute transport may be affected by numerical error, leading to quantitative and qualitative changes in behavior. In this paper we compare and combine three methods of assessing the extent of numerical error: grid refinement, mathematical analysis, and benchmark test problems. In particular, we assess the popular solute transport code SUTRA [Voss, 1984] as a typical finite element code. Our numerical analysis suggests that SUTRA incorporates a numerical dispersion error and that its mass-lumped numerical scheme increases the numerical error. This is confirmed using a Gaussian test problem. A modified SUTRA code, in which the numerical dispersion is calculated and subtracted, produces better results. The much more challenging Elder problem [Elder, 1967; Voss and Souza, 1987] is then considered. Calculation of its numerical dispersion coefficients and numerical stability shows that the Elder problem is prone to error. We confirm that Elder problem results are extremely sensitive to the simulation method used.
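The numerical dispersion at issue can be seen in a one-dimensional toy problem. The sketch below (unrelated to SUTRA) advects a Gaussian pulse with a first-order upwind scheme, whose leading truncation error acts like an artificial diffusivity of roughly u·Δx·(1 − C)/2:

```python
import numpy as np

nx, u, dx, dt, steps = 200, 1.0, 1.0, 0.5, 100
C = u * dt / dx                      # Courant number, stable for C <= 1
x = np.arange(nx) * dx
phi0 = np.exp(-0.5 * ((x - 30.0) / 5.0) ** 2)   # Gaussian pulse
phi = phi0.copy()
for _ in range(steps):
    # periodic first-order upwind update (u > 0)
    phi = phi - C * (phi - np.roll(phi, 1))
# The pulse has advected u*dt*steps = 50 units but has also spread:
# the scheme's truncation error mimics a physical diffusion term.
```

Mass is conserved exactly, but the peak decays even though the exact solution is pure translation; "calculating and subtracting" this spurious diffusion is the same idea the modified SUTRA code exploits.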
Error Analysis of Quadrature Rules. Classroom Notes
ERIC Educational Resources Information Center
Glaister, P.
2004-01-01
Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
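The truncation-error analysis can be checked empirically: for a smooth integrand, halving the step in composite Simpson's rule should reduce the error by about 2⁴ = 16. A minimal sketch (not from the article):

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    x = np.linspace(a, b, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2] = 4          # odd interior nodes
    w[2:-1:2] = 2          # even interior nodes
    return (b - a) / (3 * n) * np.dot(w, f(x))

exact = 1 - np.cos(1.0)    # integral of sin on [0, 1]
e8 = abs(simpson(np.sin, 0, 1, 8) - exact)
e16 = abs(simpson(np.sin, 0, 1, 16) - exact)
ratio = e8 / e16           # should be close to 16 for O(h^4) convergence
```

The observed ratio near 16 confirms the h⁴ truncation-error term derived in the article's analysis.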
Numerical Simulation of Coherent Error Correction
NASA Astrophysics Data System (ADS)
Crow, Daniel; Joynt, Robert; Saffman, Mark
A major goal in quantum computation is the implementation of error correction to produce a logical qubit with an error rate lower than that of the underlying physical qubits. Recent experimental progress demonstrates that physical qubits can achieve error rates sufficiently low for error correction, particularly for codes with relatively high thresholds such as the surface code and color code. Motivated by the experimental capabilities of neutral atom systems, we use numerical simulation to investigate whether coherent error correction can be effectively used with the 7-qubit color code. The results indicate that coherent error correction does not work at the 10-qubit level in neutral atom array quantum computers. By adding more qubits, there is a possibility of making the encoding circuits fault-tolerant, which could improve performance.
Error Analysis of Composite Shock Interaction Problems
Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.
2004-07-26
We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.
Errata: Papers in Error Analysis.
ERIC Educational Resources Information Center
Svartvik, Jan, Ed.
Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…
NASA Technical Reports Server (NTRS)
Kia, T.; Longuski, J. M.
1984-01-01
Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.
NASA Astrophysics Data System (ADS)
Gregory, Roger B.
1991-05-01
We have recently described modifications to the program CONTIN [S.W. Provencher, Comput. Phys. Commun. 27 (1982) 229] for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data [R.B. Gregory and Yongkang Zhu, Nucl. Instr. and Meth. A290 (1990) 172]. In this article, modifications to the program to correct for source terms in the sample and reference decay curves and for shifts in the position of the zero-time channel of the sample and reference data are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation rate probability density functions is improved by employing a reference material with a short lifetime and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the short-lived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminum (166 ps), do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene.
The Insufficiency of Error Analysis
ERIC Educational Resources Information Center
Hammarberg, B.
1974-01-01
The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…
Uncertainty quantification and error analysis
Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
NASA Technical Reports Server (NTRS)
1984-01-01
The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.
Error Estimates for Numerical Integration Rules
ERIC Educational Resources Information Center
Mercer, Peter R.
2005-01-01
The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
Analysis of discretization errors in LES
NASA Technical Reports Server (NTRS)
Ghosal, Sandip
1995-01-01
All numerical simulations of turbulence (DNS or LES) involve some discretization errors. The integrity of such simulations therefore depends on our ability to quantify and control such errors. In the classical literature on analysis of errors in partial differential equations, one typically studies simple linear equations (such as the wave equation or Laplace's equation). The qualitative insight gained from studying such simple situations is then used to design numerical methods for more complex problems such as the Navier-Stokes equations. Though such an approach may seem reasonable as a first approximation, it should be recognized that strongly nonlinear problems, such as turbulence, have a feature that is absent in linear problems. This feature is the simultaneous presence of a continuum of space and time scales. Thus, in an analysis of errors in the one-dimensional wave equation, one may, without loss of generality, rescale the equations so that the dependent variable is always of order unity. This is not possible in the turbulence problem since the amplitudes of the Fourier modes of the velocity field have a continuous distribution. The objective of the present research is to provide some quantitative measures of numerical errors in such situations. Though the focus of this work is LES, the methods introduced here can be just as easily applied to DNS. Errors due to discretization of the time variable are neglected for the purpose of this analysis.
Error analysis in laparoscopic surgery
NASA Astrophysics Data System (ADS)
Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.
1998-06-01
Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which often are brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.
Managing numerical errors in random sequential adsorption
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Nowak, Aleksandra
2016-09-01
The aim of this study is to examine the influence of a finite surface size and a finite simulation time on the packing fraction estimated using random sequential adsorption simulations. Of particular interest is providing guidance on simulation setup to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous, flat surfaces of different sizes.
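A toy random sequential adsorption simulation (ours, not the paper's code) makes the finite-size and finite-time effects concrete; the saturation packing fraction for disks is about 0.547, and a finite number of attempts leaves the estimate below it:

```python
import numpy as np

def rsa_disks(radius, box, attempts, seed=0):
    """Random sequential adsorption of equal disks on a periodic square."""
    rng = np.random.default_rng(seed)
    centers = np.empty((0, 2))
    for _ in range(attempts):
        p = rng.uniform(0.0, box, 2)
        if centers.size:
            d = centers - p
            d -= box * np.round(d / box)   # minimum-image convention
            if (np.hypot(d[:, 0], d[:, 1]) < 2.0 * radius).any():
                continue                   # overlap: reject this trial
        centers = np.vstack([centers, p])
    return len(centers) * np.pi * radius ** 2 / box ** 2

phi = rsa_disks(radius=1.0, box=50.0, attempts=20000)
# phi sits below the saturation value (~0.547 for disks); the gap
# closes slowly with more attempts.
```

Varying `box` and `attempts` in such a sketch is exactly the kind of setup question the study addresses: how large and how long a simulation must be for a target accuracy.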
Orbital and Geodetic Error Analysis
NASA Technical Reports Server (NTRS)
Felsentreger, T.; Maresca, P.; Estes, R.
1985-01-01
Results that previously required several runs are determined in a more computer-efficient manner. Multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.
Human Error: A Concept Analysis
NASA Technical Reports Server (NTRS)
Hansen, Frederick D.
2007-01-01
Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.
The characteristics of key analysis errors
NASA Astrophysics Data System (ADS)
Caron, Jean-Francois
This thesis investigates the characteristics of corrections to the initial state of the atmosphere. The technique employed is the key analysis error algorithm, recently developed to estimate the initial-state errors responsible for poor short-range to medium-range numerical weather prediction (NWP) forecasts. The main goal of this work is to determine to what extent the initial corrections obtained with this method can be associated with analysis errors. A secondary goal is to understand their dynamics in improving the forecast. In the first part of the thesis, we examine the realism of the initial corrections obtained from the key analysis error algorithm in terms of dynamical balance and closeness to the observations. The results showed that the initial corrections are strongly out of balance and systematically increase the departure between the control analysis and the observations, suggesting that the key analysis error algorithm produces initial corrections that represent more than analysis errors. A significant artificial correction to the initial state seems to be present. The second part of this work examines a few approaches to isolating the balanced component of the initial corrections from the key analysis error method. The best results were obtained with the nonlinear balance potential vorticity (PV) inversion technique. The removal of the unbalanced part of the initial corrections makes the corrected analysis slightly closer to the observations, but it remains systematically further away than the control analysis. Thus the balanced part of the key analysis errors cannot justifiably be associated with analysis errors. In light of the results presented, some recommendations to improve the key analysis error algorithm are proposed. In the third and last part of the thesis, a diagnosis of the evolution of the initial corrections from the key analysis error method is presented using a PV approach. The initial corrections tend to grow rapidly in time
Analysis of Medication Error Reports
Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.
2004-11-15
In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.
A Classroom Note on: Building on Errors in Numerical Integration
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2011-01-01
In both baseball and mathematics education, the conventional wisdom is to avoid errors at all costs. That advice might be on target in baseball, but in mathematics, it is not always the best strategy. Sometimes an analysis of errors provides much deeper insights into mathematical ideas and, rather than something to eschew, certain types of errors…
Having Fun with Error Analysis
ERIC Educational Resources Information Center
Siegel, Peter
2007-01-01
We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…
Condition and Error Estimates in Numerical Matrix Computations
Konstantinov, M. M.; Petkov, P. H.
2008-10-30
This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.
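The role of conditioning in such sensitivity estimates can be illustrated with a small, deliberately ill-conditioned linear system (an illustration of the standard bound, not the paper's examples):

```python
import numpy as np

# Relative error in the solution of Ax = b is bounded (to first order)
# by cond(A) times the relative error in b.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)            # exact solution is [1, 1]

db = np.array([0.0, 1e-4])           # a tiny perturbation of b
x_pert = np.linalg.solve(A, b + db)

kappa = np.linalg.cond(A)            # ~4e4: the system is ill-conditioned
rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
# rel_out can be as large as kappa * rel_in
```

Here a perturbation of a few parts in 10⁵ in `b` produces an order-one change in `x`, which is the amplification the condition number quantifies.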
Numerical analysis of bifurcations
Guckenheimer, J.
1996-06-01
This paper is a brief survey of numerical methods for computing bifurcations of generic families of dynamical systems. Emphasis is placed upon algorithms that reflect the structure of the underlying mathematical theory while retaining numerical efficiency. Significant improvements in the computational analysis of dynamical systems are to be expected from greater reliance on the geometric insight coming from dynamical systems theory. © 1996 American Institute of Physics.
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545
Numerical study of error propagation in Monte Carlo depletion simulations
Wyant, T.; Petrovic, B.
2012-07-01
Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs was used to investigate the true and apparent variance in k_eff, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters. (authors)
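The replica-run idea generalizes beyond depletion codes. This toy sketch (ours, with an uncorrelated stand-in for a Monte Carlo tally) compares the apparent single-run uncertainty with the true spread across 19 reseeded replicas:

```python
import numpy as np

def noisy_tally(seed, n=10_000):
    # Stand-in for one Monte Carlo run: returns an estimate and the
    # "apparent" standard error reported by that single run.
    rng = np.random.default_rng(seed)
    x = rng.normal(1.0, 0.2, n)
    return x.mean(), x.std(ddof=1) / np.sqrt(n)

results = np.array([noisy_tally(s) for s in range(19)])  # 19 replicas
true_spread = results[:, 0].std(ddof=1)   # spread of replica means
apparent = results[:, 1].mean()           # average single-run error bar
# Here the two agree because runs are independent; in coupled depletion
# steps the true spread can exceed the apparent one as errors propagate.
```

A systematic gap between `true_spread` and `apparent` in a real depletion calculation is the signature of error propagating between time steps, which is what the study's replica runs were designed to expose.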
Analyzing Numerical Errors in Domain Heat Transport Models Using the CVBEM
Hromadka, T.V., II
1987-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.
Error Analysis and the EFL Classroom Teaching
ERIC Educational Resources Information Center
Xie, Fang; Jiang, Xue-mei
2007-01-01
This paper makes a study of error analysis and its implementation in the EFL (English as Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis), the various reasons causing errors are comprehensively explored. The author proposes that teachers should employ…
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
Ketkar, S.P.
1999-07-01
This new volume is written both for practicing engineers who want to refresh their knowledge of the fundamentals of numerical thermal analysis and for students of numerical heat transfer. It is a handy desktop reference that covers all the basics of finite difference, finite element, and control volume methods. In this volume, the author presents a unique hybrid method that combines the best features of finite element modeling with the computational efficiency of finite difference network solution techniques. It is a robust technique that is used in commercially available software. The contents include: heat conduction: fundamentals and governing equations; the finite difference method; the control volume method; the finite element method; the hybrid method; and software selection.
Error analysis of friction drive elements
NASA Astrophysics Data System (ADS)
Wang, Guomin; Yang, Shihai; Wang, Daxing
2008-07-01
Friction drives have been used in some large astronomical telescopes in recent years. Compared to a direct drive, a friction drive train consists of more built-up parts: typically a motor-tachometer unit, coupling, reducer, driving roller, big wheel, encoder, and encoder coupling. These parts inevitably introduce errors into the drive system, some random and some systematic. For the random errors, the effective approach is to estimate their contributions and find proper ways to decrease their influence. For the systematic errors, the useful approach is to analyse and test them quantitatively, and then feed the errors back to the control system for correction. The main task of this paper is to analyse these error sources and characterize them, determining whether each is random or systematic and what its contribution is. The methods and equations used in the analysis are also presented in detail.
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V., II
1985-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.
Analysis and classification of human error
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Rouse, S. H.
1983-01-01
The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.
Error analysis using organizational simulation.
Fridsma, D. B.
2000-01-01
Organizational simulations have been used by project organizations in the civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when the rate of unexpected events was high, the oncology fellow became differentially backlogged with work compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links" and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885
Synthetic aperture interferometry: error analysis
Biswas, Amiya; Coupland, Jeremy
2010-07-10
Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003), doi:10.1364/AO.42.000701]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008), doi:10.1364/AO.47.001705]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.
Error analysis of finite element solutions for postbuckled cylinders
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.
1989-01-01
A general method of error analysis and correction is investigated for the discrete finite-element results for cylindrical shell structures. The method for error analysis is an adaptation of the method of successive approximation. When applied to the equilibrium equations of shell theory, successive approximations derive an approximate continuous solution from the discrete finite-element results. The advantage of this continuous solution is that it contains continuous partial derivatives of an order higher than the basis functions of the finite-element solution. Preliminary numerical results are presented in this paper for the error analysis of finite-element results for a postbuckled stiffened cylindrical panel modeled by a general purpose shell code. Numerical results from the method have previously been reported for postbuckled stiffened plates. A procedure for correcting the continuous approximate solution by Newton's method is outlined.
Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei
2010-01-01
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
Numerical Errors in Coupling Micro- and Macrophysics in the Community Atmosphere Model
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Caldwell, P.; Sexton, J. M.; Woodward, C. S.
2014-12-01
In this study, we investigate numerical errors in version 2 of the Morrison-Gettelman microphysics scheme (MG2) and its coupling to a development version of the macrophysics (condensation/evaporation) scheme used in version 5 of the Community Atmosphere Model (CAM5). Our analysis is performed using a modified version of the Kinematic Driver (KiD) framework, which combines the full macro- and microphysics schemes from CAM5 with idealizations of all other model components. The benefit of this framework is that its simplicity makes diagnosing problems easier and its efficiency allows us to test a variety of numerical schemes. Initial results suggest that numerical convergence requires time steps much shorter than those typically used in CAM5.
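A time-step convergence check of the kind underlying such a study can be sketched generically: solve at step dt and dt/2, then estimate the observed order from the error ratio. The integrator below is a toy forward-Euler decay problem, not the CAM5 macro/microphysics:

```python
import math

def euler_decay(dt):
    # Toy integrator: forward Euler for y' = -y, y(0) = 1, integrated to t = 1.
    y = 1.0
    for _ in range(round(1.0 / dt)):
        y += dt * (-y)
    return y

def observed_order(solver, dt, exact):
    # Observed convergence order p from errors at dt and dt/2: p = log2(e1/e2).
    e1 = abs(solver(dt) - exact)
    e2 = abs(solver(dt / 2) - exact)
    return math.log2(e1 / e2)

p = observed_order(euler_decay, 0.01, math.exp(-1.0))  # forward Euler: p near 1
```

A scheme is judged convergent when p approaches its theoretical order as dt shrinks; failure to do so at operational step sizes is the kind of problem the KiD-based analysis is designed to expose.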
Numerical Relativity meets Data Analysis
NASA Astrophysics Data System (ADS)
Schmidt, Patricia
2016-03-01
Gravitational waveforms (GW) from coalescing black hole binaries obtained by Numerical Relativity (NR) play a crucial role in the construction and validation of waveform models used as templates in GW matched filter searches and parameter estimation. In previous efforts, notably the NINJA and NINJA-2 collaborations, NR groups and data analysts worked closely together to use NR waveforms as mock GW signals to test the search and parameter estimation pipelines employed by LIGO. Recently, however, NR groups have been able to simulate hundreds of different binary black hole systems. It is desirable to use these waveforms directly in GW data analysis, for example to assess systematic errors in waveform models, to test general relativity, or to appraise the limitations of aligned-spin searches, among many other applications. In this talk, I will introduce recent developments that aim to fully integrate NR waveforms into the data analysis pipelines through a standardized interface. I will highlight a number of select applications for this new infrastructure.
Schumacher, R.F.
1992-01-24
Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis by a commercial laboratory. The following effort provides additional quantitative information on the variability of frit analyses at two commercial laboratories.
Measurement error analysis of taxi meter
NASA Astrophysics Data System (ADS)
He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu
2011-12-01
The error test of a taximeter covers two aspects: (1) a test of the taximeter's time error, and (2) a test of the distance (usage) error of the machine. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taximeter, and the detection methods for time error and distance error are discussed as well. Under identical conditions, the Type A standard uncertainty components are evaluated from repeated measurements, while under differing conditions the Type B standard uncertainty components are evaluated. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, improving accuracy and efficiency. In practice, the meter not only compensates for limited accuracy but also ensures that the transaction between drivers and passengers is fair, enriching the value of the taxi as a mode of transportation.
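The Type A evaluation mentioned above, a standard uncertainty obtained from repeated measurements under identical conditions, follows the standard recipe: the experimental standard deviation of the mean. A sketch with hypothetical repeated readings (the values are illustrative, not from the paper):

```python
import math
import statistics

def type_a_uncertainty(readings):
    """Type A standard uncertainty: the experimental standard deviation
    of the mean of n repeated readings, s / sqrt(n)."""
    n = len(readings)
    mean = statistics.mean(readings)
    u = statistics.stdev(readings) / math.sqrt(n)
    return mean, u

# hypothetical repeated distance readings over a test track (km)
readings = [10.02, 10.05, 9.98, 10.01, 10.03]
mean, u = type_a_uncertainty(readings)
```

Type B components (from calibration certificates, resolution limits, and the like) would be combined with this value in quadrature to form the combined standard uncertainty.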
Error analysis of finite element solutions for postbuckled plates
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.
1988-01-01
An error analysis of results from finite-element solutions of problems in shell structures is further developed, incorporating the results of an additional numerical analysis by which oscillatory behavior is eliminated. The theory is extended to plates with initial geometric imperfections, and this novel analysis is programmed as a postprocessor for a general-purpose finite-element code. Numerical results are given for the case of a stiffened panel in compression and a plate loaded in shear by a 'picture-frame' test fixture.
Analysis of field errors in existing undulators
Kincaid, B.M.
1990-01-01
The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs.
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
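The double-precision accumulation error described above can be demonstrated in miniature with compensated (Kahan) summation, which recovers most of the precision lost by naive accumulation. This is a generic illustration of finite-precision error growth, not the GRACE processing chain:

```python
def kahan_sum(values):
    # Compensated summation: carry the rounding error lost at each
    # addition (c) and feed it back into the next term.
    total = 0.0
    c = 0.0
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

# 0.1 is not exactly representable in binary, so naive accumulation
# of a million terms drifts away from the mathematically exact 100000.0
vals = [0.1] * 1_000_000
naive = sum(vals)
compensated = kahan_sum(vals)
```

The same mechanism, rounding error compounding over many operations, is what makes long numerical integrations and large least squares solves sensitive to working precision.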
Error analysis of quartz crystal resonator applications
Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.
1996-12-31
Quartz crystal resonators in chemical sensing applications are usually configured as the frequency determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating needs a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure to determine shear parameters.
TOA/FOA geolocation error analysis.
Mason, John Jeffrey
2008-08-01
This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Renard, Benjamin; Clark, Martyn P.; Fenicia, Fabrizio; Thyer, Mark; Kuczera, George; Franks, Stewart W.
2010-05-01
Confronted with frequently poor model performance, rainfall-runoff modellers have in the past blamed a plethora of sources of uncertainty, including rainfall and runoff errors, non-Gaussianities, model nonlinearities, parameter uncertainty, and just about everything else from Pandora's box. Moreover, recent work has suggested that astonishing numerical artifacts may arise from poor model numerics and confound the Hydrologist. There is a growing recognition that maintaining the lumped nebulous conspiracy of these errors is impeding progress in terms of understanding and, when possible, reducing predictive errors and gaining insights into catchment dynamics. In this study, we take the hydrological bull by its horns and begin disentangling individual sources of error. First, we outline robust and efficient error-control methods that ensure adequate numerical accuracy. We then demonstrate that the formidable interaction between data and structural errors, irresolvable in the absence of independent knowledge of data accuracy, can be tackled using geostatistical analysis of rainfall gauge networks and rating curve data. Structural model deficiencies can then begin being identified using flexible model configurations, paving the way for meaningful model comparison and improvement. Importantly, informative diagnostic measures are available for each component of the analysis. This paper surveys several recent developments along these research directions, summarized in a series of real-data case studies, and indicates areas of future interest.
Numeracy, Literacy and Newman's Error Analysis
ERIC Educational Resources Information Center
White, Allan Leslie
2010-01-01
Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…
Study of geopotential error models used in orbit determination error analysis
NASA Technical Reports Server (NTRS)
Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.
1991-01-01
The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
Error Propagation Analysis for Quantitative Intracellular Metabolomics
Tillack, Jana; Paczia, Nicole; Nöh, Katharina; Wiechert, Wolfgang; Noack, Stephan
2012-01-01
Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of intracellular metabolites, including error propagation during metabolome sample processing. Focusing on one specific protocol, we comprehensively investigate all currently known and accessible factors that ultimately impact the accuracy of intracellular metabolite concentration data. All intermediate steps are modeled, and their uncertainty with respect to the final concentration data is rigorously quantified. Finally, on the basis of a comprehensive metabolome dataset of Corynebacterium glutamicum, an integrated error propagation analysis for all parts of the model is conducted, and the most critical steps for intracellular metabolite quantification are detected. PMID:24957773
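For independent multiplicative processing steps, first-order Gaussian error propagation combines relative errors in quadrature, which is the basic mechanism behind an accumulation of single errors across a protocol. A sketch with hypothetical per-step errors (the step names and values are illustrative assumptions, not numbers from the paper):

```python
import math

def propagate_relative(step_rel_errors):
    """Combine independent relative errors of sequential multiplicative
    processing steps in quadrature (first-order Gaussian propagation)."""
    return math.sqrt(sum(r * r for r in step_rel_errors))

# hypothetical relative errors for quenching, extraction,
# dilution, and LC-MS quantification, respectively
total_rel_error = propagate_relative([0.02, 0.05, 0.01, 0.03])
```

Because terms add in quadrature, the largest single step dominates the total, which is why identifying the most critical processing step pays off most.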
NASA Astrophysics Data System (ADS)
Fiorentini, Marcello; Orlandini, Stefano; Paniconi, Claudio
2015-07-01
A process-based numerical model of integrated surface-subsurface flow is analyzed in order to identify, track, and reduce the mass balance errors affiliated with the model's coupling scheme. The sources of coupling error include a surface-subsurface grid interface that requires node-to-cell and cell-to-node interpolation of exchange fluxes and ponding heads, and a sequential iterative time matching procedure that includes a time lag in these same exchange terms. Based on numerical experiments carried out for two synthetic test cases and for a complex drainage basin in northern Italy, it is shown that the coupling mass balance error increases during the flood recession limb when the rate of change in the fluxes exchanged between the surface and subsurface is highest. A dimensionless index that quantifies the degree of coupling and a saturated area index are introduced to monitor the sensitivity of the model to coupling error. Error reduction is achieved through improvements to the heuristic procedure used to control and adapt the time step interval and to the interpolation algorithm used to pass exchange variables from nodes to cells. The analysis presented illustrates the trade-offs between a flexible description of surface and subsurface flow processes and the numerical errors inherent in sequential iterative coupling with staggered nodal points at the land surface interface, and it reveals mitigation strategies that are applicable to all integrated models sharing this coupling and discretization approach.
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of ranking error. PMID:24083315
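The flavor of least-squares SGD ranking can be sketched with a linear (rather than kernel) scoring function: learn w so that w·(x_i - x_j) tracks the label difference y_i - y_j over sampled pairs. This is a simplified analogue on assumed toy data, not the authors' algorithm:

```python
import random

def sgd_rank(pairs, dim, step=0.1, reg=0.01, epochs=200, seed=0):
    """Linear least-squares SGD on preference pairs: each pair is
    (x_i, y_i, x_j, y_j); the loss is (w.(x_i - x_j) - (y_i - y_j))^2."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        xi, yi, xj, yj = pairs[rng.randrange(len(pairs))]
        diff = [a - b for a, b in zip(xi, xj)]
        pred = sum(wk * dk for wk, dk in zip(w, diff))
        grad = pred - (yi - yj)  # derivative of the pairwise squared loss
        # gradient step with L2 regularization (shrinkage factor 1 - step*reg)
        w = [(1 - step * reg) * wk - step * grad * dk for wk, dk in zip(w, diff)]
    return w

# toy data: the true score is the first coordinate, second is irrelevant
pairs = [([1.0, 0.0], 1.0, [0.0, 0.0], 0.0),
         ([0.5, 1.0], 0.5, [0.0, 1.0], 0.0)]
w = sgd_rank(pairs, dim=2)
```

The step size and regularization parameter here play the same roles as in the paper's convergence analysis: too large a step diverges, too heavy a regularizer biases w toward zero.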
Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2010-01-01
The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.
Numerical analysis of engine instability
NASA Astrophysics Data System (ADS)
Habiballah, M.; Dubois, I.
Following a literature review of numerical analyses of combustion instability, which establishes the state of the art in the area, the paper describes the ONERA methodology used to analyze combustion instability in liquid propellant engines. Attention is also given to a model (named Phedre) which describes the unsteady turbulent two-phase reacting flow in a liquid rocket engine combustion chamber. The model formulation includes axial or radial propellant injection, baffles, and acoustic resonator modeling, and makes it possible to treat different engine types. A numerical analysis of cryogenic engine stability is presented, and the results are compared with test results for the Viking engine and for the gas generator of the Vulcain engine, showing good qualitative agreement and some general trends common to experiments and numerical analysis.
Statistical Error Analysis for Digital Recursive Filters
NASA Astrophysics Data System (ADS)
Wu, Kevin Chi-Rung
The study of arithmetic roundoff error has attracted many researchers to investigate how the signal-to-noise ratio (SNR) is affected by algorithmic parameters, especially since VLSI (Very Large Scale Integrated circuit) technologies have become more promising for digital signal processing. Typically, digital signal processing, either with or without matrix inversion, involves tradeoffs between speed and processor cost. Hence, the problems of area-time efficient matrix computation and roundoff error behavior analysis play an important role in this dissertation. A newly developed non-Cholesky square-root matrix will be discussed which precludes arithmetic roundoff error over some interesting operations, such as complex-valued matrix inversion, together with its SNR analysis and error propagation effects. A non-CORDIC parallelism approach for complex-valued matrices will be presented to upgrade speed at the cost of a moderate increase in processor count. The lattice filter will also be examined in such a way that one can understand the SNR behavior under different input conditions in the joint process system. A pipelining technique will be demonstrated to show the possibility of a high-speed non-matrix-inversion lattice filter. The floating point arithmetic models used in this study focus on effective methodologies that have proved to be reliable and feasible. With these models in hand, we study the roundoff error behavior based on some statistical assumptions. Results are demonstrated through simulation to show the feasibility of the SNR analysis. We observe that the non-Cholesky square-root matrix has the advantage of saving O(n^3) time as well as a reduced realization cost. It is apparent that for a Kalman filter the register size increases significantly if a pole of the system matrix moves closer to the edge of the unit circle. By comparing roundoff error effects due to floating-point and fixed-point arithmetics, we
NASA Astrophysics Data System (ADS)
Bao, WeiZhu; Cai, YongYong; Jia, XiaoWei; Yin, Jia
2016-08-01
We present several numerical methods and establish their error estimates for the discretization of the nonlinear Dirac equation in the nonrelativistic limit regime, involving a small dimensionless parameter $0<\varepsilon\ll 1$ which is inversely proportional to the speed of light. In this limit regime, the solution is highly oscillatory in time, i.e. there are propagating waves with wavelength $O(\varepsilon^2)$ and $O(1)$ in time and space, respectively. We begin with the conservative Crank-Nicolson finite difference (CNFD) method and establish rigorously its error estimate which depends explicitly on the mesh size $h$ and time step $\tau$ as well as the small parameter $0<\varepsilon\le 1$. Based on the error bound, in order to obtain `correct' numerical solutions in the nonrelativistic limit regime, i.e. $0<\varepsilon\ll 1$, the CNFD method requires the $\varepsilon$-scalability: $\tau=O(\varepsilon^3)$ and $h=O(\sqrt{\varepsilon})$. Then we propose and analyze two numerical methods for the discretization of the nonlinear Dirac equation by using the Fourier spectral discretization for spatial derivatives combined with the exponential wave integrator and time-splitting technique for temporal derivatives, respectively. Rigorous error bounds for the two numerical methods show that their $\varepsilon$-scalability is improved to $\tau=O(\varepsilon^2)$ and $h=O(1)$ when $0<\varepsilon\ll 1$ compared with the CNFD method. Extensive numerical results are reported to confirm our error estimates.
Microlens assembly error analysis for light field camera based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping
2016-08-01
This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming no manufacturing errors, a home-built program was used to simulate the images produced by the coupling-distance, movement, and rotation errors that can arise during microlens installation. By examining these images, the sub-aperture images, and the refocused images, we found that the images exhibit different degrees of blur and deformation for the different microlens assembly errors, while the sub-aperture images exhibit aliasing, obscured regions, and other distortions that result in unclear refocused images.
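The Monte Carlo procedure described above can be sketched as follows. The error magnitudes, the toy blur metric, and the focal length below are illustrative assumptions, not values from the paper:

```python
import random
import math

def sample_assembly_error(sigma_dist=0.01, sigma_shift=0.005, sigma_rot=0.1):
    """Draw one random microlens assembly error.

    Returns (coupling-distance error, lateral movement error, rotation error)
    in (mm, mm, degrees). The sigmas are illustrative, not from the paper.
    """
    return (random.gauss(0.0, sigma_dist),
            random.gauss(0.0, sigma_shift),
            random.gauss(0.0, sigma_rot))

def blur_metric(dz, dx, rot, f=0.5):
    """Toy image-degradation score: defocus grows with distance error,
    while lateral shift and rotation add geometric deformation."""
    return math.hypot(dz / f, dx / f) + abs(math.radians(rot))

random.seed(0)
trials = [blur_metric(*sample_assembly_error()) for _ in range(10000)]
mean_blur = sum(trials) / len(trials)
print(f"mean blur over {len(trials)} trials: {mean_blur:.4f}")
```

Each trial stands in for one simulated installation; in the paper the per-trial output is a rendered light-field image rather than a scalar score.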
Error Analysis of Modified Langevin Dynamics
NASA Astrophysics Data System (ADS)
Redon, Stephane; Stoltz, Gabriel; Trstanova, Zofia
2016-06-01
We consider Langevin dynamics associated with a modified kinetic energy vanishing for small momenta. This allows us to freeze slow particles, and hence avoid the re-computation of inter-particle forces, which leads to computational gains. On the other hand, the statistical error may increase since there are a priori more correlations in time. The aim of this work is first to prove the ergodicity of the modified Langevin dynamics (which fails to be hypoelliptic), and next to analyze how the asymptotic variance on ergodic averages depends on the parameters of the modified kinetic energy. Numerical results illustrate the approach, both for low-dimensional systems where we resort to a Galerkin approximation of the generator, and for more realistic systems using Monte Carlo simulations.
Accumulation of errors in numerical simulations of chemically reacting gas dynamics
NASA Astrophysics Data System (ADS)
Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.
2015-12-01
The aim of the present study is to investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, as was the accumulation of errors in simulations employing different computation strategies.
"Error Analysis." A Hard Look at Method in Madness.
ERIC Educational Resources Information Center
Brown, Cheryl
1976-01-01
The origins of error analysis as a pedagogical tool can be traced to the beginnings of the notion of interference and the use of contrastive analysis (CA) to predict learners' errors. With the focus narrowing to actual errors committed by students, it was found that all learners of English as a second language seemed to make errors in the same…
First-order approximation error analysis of Risley-prism-based beam directing system.
Zhao, Yanyan; Yuan, Yan
2014-12-01
To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to determine the direction of the outgoing beam with high accuracy. Previous works have analyzed error sources and their impact on the performance of the Risley-prism system, but with limited numerical approximation accuracy; moreover, pointing-error analyses of the Risley-prism system have provided results only for the case in which the component errors, prism orientation errors, and assembly errors take fixed, known values. In this work, a prototype Risley-prism system was designed. First-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of the errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate the beam directing errors of any Risley-prism beam directing system with a similar configuration. PMID:25607958
Error analysis for the Fourier domain offset estimation algorithm
NASA Astrophysics Data System (ADS)
Wei, Ling; He, Jieling; He, Yi; Yang, Jinsheng; Li, Xiqi; Shi, Guohua; Zhang, Yudong
2016-02-01
The offset estimation algorithm is crucial for the accuracy of the Shack-Hartmann wave-front sensor. Recently, the Fourier Domain Offset (FDO) algorithm has been proposed for offset estimation. Similar to other algorithms, the accuracy of FDO is affected by noise such as background noise, photon noise, and 'fake' spots. However, no adequate quantitative error analysis has been performed for FDO in previous studies, which is of great importance for practical applications of the FDO. In this study, we quantitatively analysed how the estimation error of FDO is affected by noise based on theoretical deduction, numerical simulation, and experiments. The results demonstrate that the standard deviation of the wobbling error is: (1) inversely proportional to the raw signal to noise ratio, and proportional to the square of the sub-aperture size in the presence of background noise; and (2) proportional to the square root of the intensity in the presence of photonic noise. Furthermore, the upper bound of the estimation error is proportional to the intensity of 'fake' spots and the sub-aperture size. The results of the simulation and experiments agreed with the theoretical analysis.
Towards a More Rigorous Analysis of Foreign Language Errors.
ERIC Educational Resources Information Center
Abbott, Gerry
1980-01-01
Presents a precise and detailed process to be used in error analysis. The process is proposed as a means of making research in error analysis more accessible and useful to others, as well as assuring more objectivity. (Author/AMH)
Error propagation in the numerical solutions of the differential equations of orbital mechanics
NASA Technical Reports Server (NTRS)
Bond, V. R.
1982-01-01
The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell, Encke, and Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast, an element formulation has zero eigenvalues and is numerically stable.
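The amplification effect can be illustrated on a toy linear system. The matrix below is an illustrative stand-in with one real positive eigenvalue, not the actual Cowell or Encke linearization:

```python
# Toy linearized system dx/dt = A x with eigenvalues +1 and -1: the positive
# eigenvalue amplifies any error component along its eigenvector, as in the
# Cowell/Encke case described in the abstract.
A = [[0.0, 1.0],
     [1.0, 0.0]]

def euler_step(x, h):
    """One explicit Euler step of dx/dt = A x."""
    return [x[0] + h * (A[0][0] * x[0] + A[0][1] * x[1]),
            x[1] + h * (A[1][0] * x[0] + A[1][1] * x[1])]

# Propagate a small initial error along the unstable eigenvector (1, 1).
err = [1e-8, 1e-8]
h, steps = 0.01, 1000
for _ in range(steps):
    err = euler_step(err, h)

growth = err[0] / 1e-8
print(f"error amplification after t = 10: {growth:.3e}")
```

The discrete growth factor (1 + h)^1000 closely tracks the analytic e^10, showing that the instability is a property of the equations, not of the integrator: a zero-eigenvalue element formulation would leave the error unamplified.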
Numerical analysis of Stirling engine
NASA Astrophysics Data System (ADS)
Sekiya, Hiroshi
1992-11-01
A simulation model of the Stirling engine based on the third-order method of analysis is presented. The fundamental equations are derived by applying the conservation laws of physics to the machine model, the characteristic equations for heat transfer and gas flow are presented, and a numerical calculation technique using these equations is discussed. A numerical model of the system for balancing pressure in four cylinders is included in the simulation model. Calculation results from the model are compared with experimental results. A comparative study of engine performance using helium and hydrogen as working gases is conducted, clarifying the heat transfer and gas flow characteristics and the effects of temperature conditions in the hot and cold engine sections on driving conditions. The design optimization of the heat exchanger is also addressed.
Trends in MODIS Geolocation Error Analysis
NASA Technical Reports Server (NTRS)
Wolfe, R. E.; Nishihama, Masahiro
2009-01-01
Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.
Experimental and numerical study of error fields in the CNT stellarator
NASA Astrophysics Data System (ADS)
Hammond, K. C.; Anichowski, A.; Brenner, P. W.; Pedersen, T. S.; Raftopoulos, S.; Traverso, P.; Volpe, F. A.
2016-07-01
Sources of error fields were indirectly inferred in a stellarator by reconciling experimentally measured and numerically computed flux surfaces. Sources considered so far include the displacements and tilts of the four circular coils featured in the simple CNT stellarator. The flux surfaces were measured by means of an electron beam and fluorescent rod, and were computed by means of a Biot–Savart field-line tracing code. If the ideal coil locations and orientations are used in the computation, agreement with measurements is poor. The discrepancies are ascribed to errors in the positioning and orientation of the in-vessel interlocked coils. To that end, an iterative numerical method was developed: a Newton–Raphson algorithm searches for the coil displacements and tilts that minimize the discrepancy between the measured and computed flux surfaces. This method was verified by misplacing and tilting the coils in a numerical model of CNT, calculating the flux surfaces they generated, and testing the algorithm's ability to deduce the coils' displacements and tilts. Subsequently, the numerical method was applied to the experimental data, arriving at a set of coil displacements whose resulting field errors exhibited significantly improved agreement with the experimental results.
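The verification step (perturb a parameter in a forward model, then recover it from the measurement mismatch) can be sketched in one dimension. The forward model below is a hypothetical stand-in for the Biot–Savart field-line tracer, not the CNT code itself:

```python
# Recover a "coil displacement" by Newton-Raphson on the mismatch between
# a measured quantity and a forward model, mirroring the paper's verification
# of its algorithm on synthetic data. The model is an illustrative stand-in.

def forward(displacement):
    """Toy 'flux surface radius' as a smooth function of coil displacement."""
    return 1.0 + 0.3 * displacement + 0.05 * displacement**2

true_displacement = 0.42               # the deliberately introduced error
measured = forward(true_displacement)  # synthetic "measurement"

# Newton-Raphson on f(d) = forward(d) - measured,
# with the derivative taken by central finite difference.
d, eps = 0.0, 1e-6
for _ in range(20):
    f = forward(d) - measured
    df = (forward(d + eps) - forward(d - eps)) / (2 * eps)
    d -= f / df

print(f"recovered displacement: {d:.6f}")
```

In the real problem the unknown is a vector of displacements and tilts for all coils and the residual is a whole flux-surface shape, so the scalar Newton update becomes a multidimensional least-squares step, but the iteration structure is the same.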
Pathway Analysis Software: Annotation Errors and Solutions
Henderson-MacLennan, Nicole K.; Papp, Jeanette C.; Talbot, C. Conover; McCabe, Edward R.B.; Presson, Angela P.
2010-01-01
Genetic databases contain a variety of annotation errors that often go unnoticed due to the large size of modern genetic data sets. Interpretation of these data sets requires bioinformatics tools that may contribute to this problem. While providing gene symbol annotations for identifiers (IDs) such as microarray probeset, RefSeq, GenBank and Entrez Gene is seemingly trivial, the accuracy is fundamental to any subsequent conclusions. We examine gene symbol annotations and results from three commercial pathway analysis software (PAS) packages: Ingenuity Pathways Analysis, GeneGO and Pathway Studio. We compare gene symbol annotations and canonical pathway results over time and among different input ID types. We find that PAS results can be affected by variation in gene symbol annotations across software releases and the input ID type analyzed. As a result, we offer suggestions for using commercial PAS and reporting microarray results to improve research quality. We propose a wiki type website to facilitate communication of bioinformatics software problems within the scientific community. PMID:20663702
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
Reducing the error growth in the numerical propagation of satellite orbits
NASA Astrophysics Data System (ADS)
Ferrandiz, Jose M.; Vigo, Jesus; Martin, P.
1991-12-01
An algorithm especially designed for the long-term numerical integration of perturbed oscillators, in one or several frequencies, is presented. The method is applied to the numerical propagation of satellite orbits, using focal variables, and results for highly eccentric and nearly circular cases are reported. The method performs particularly well for high eccentricity: for e = 0.99 with J2 + J3 perturbations it locates the last perigee after 1000 revolutions with an error of less than 1 cm, using only 80 derivative evaluations per revolution. In general the approach provides about a hundred times more accuracy than Bettis methods over one thousand revolutions.
Investigating Convergence Patterns for Numerical Methods Using Data Analysis
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2013-01-01
The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
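A minimal version of this kind of convergence-pattern analysis estimates the order p from successive iteration errors, assuming e_{n+1} ≈ C·e_n^p; Newton's method for sqrt(2) serves as the example sequence:

```python
import math

def estimated_order(errors):
    """Estimate convergence order p from successive errors:
    if e_{n+1} ~ C e_n^p, then p ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    return [math.log(errors[i + 1] / errors[i]) /
            math.log(errors[i] / errors[i - 1])
            for i in range(1, len(errors) - 1)]

# Newton's method for sqrt(2): errors should shrink quadratically (p ~ 2).
root, x = math.sqrt(2.0), 1.0
errors = []
for _ in range(5):
    x = 0.5 * (x + 2.0 / x)
    errors.append(abs(x - root))

# Drop the last error, which is already at the rounding floor.
orders = estimated_order(errors[:-1])
print(orders)
```

The same three-error ratio applied to a bisection sequence would hover near p = 1, which is how a data-driven plot distinguishes linear from quadratic methods without any analysis of the iteration formula.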
NASA Technical Reports Server (NTRS)
Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)
2001-01-01
Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the errors of the GOES winds range from approximately 3 to 10 m/s and generally increase with height. However, taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that depend on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A
Solar Tracking Error Analysis of Fresnel Reflector
Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie
2014-01-01
Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analyzing the influence of the main factors on the sun-tracking rotation-angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by the difference between the rotation axis and the true-north meridian is, under certain conditions, largest at noon and decreases gradually through the morning and afternoon. The tracking errors caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, are positive in the morning, negative in the afternoon, and zero at a certain moment around noon. PMID:24895664
Using PASCAL for numerical analysis
NASA Technical Reports Server (NTRS)
Volper, D.; Miller, T. C.
1978-01-01
The data structures and control structures of PASCAL enhance the coding ability of the programmer. Proposed extensions to the language further increase its usefulness in writing numeric programs and support packages for numeric programs.
Analysis of modeling errors in system identification
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1986-01-01
This paper is concerned with the identification of a system in the presence of several error sources. Following some basic definitions, the notion of 'near-equivalence in probability' is introduced using the concept of near-equivalence between a model and process. Necessary and sufficient conditions for the identifiability of system parameters are given. The effect of structural error on the parameter estimates for both deterministic and stochastic cases are considered.
Analysis of thematic map classification error matrices.
Rosenfield, G.H.
1986-01-01
The classification error matrix expresses the counts of agreement and disagreement between the classified categories and their verification. Thematic mapping experiments compare variables such as multiple photointerpretation or scales of mapping, and produce one or more classification error matrices. This paper presents a tutorial to implement a typical problem of a remotely sensed data experiment for solution by the linear model method.-from Author
Error Analysis of Terrestrial Laser Scanning Data by Means of Spherical Statistics and 3D Graphs
Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G.; Arias, Pedro
2010-01-01
This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs) whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to produce the graphics automatically. The results indicated that the proposed method is advantageous, as it offers a more complete analysis of positional accuracy, including the angular error component, the uniformity of the vector distribution, and error isotropy, in addition to the modular error component obtained by linear statistics. PMID:22163461
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
ELIASSI,MEHDI; GLASS JR.,ROBERT J.
2000-03-08
The authors consider the ability of the numerical solution of Richards equation to model gravity-driven fingers. Although gravity-driven fingers can be easily simulated using a partial downwind averaging method, they find the fingers are purely artificial, generated by the combined effects of truncation error induced oscillations and capillary hysteresis. Since Richards equation can only yield a monotonic solution for standard constitutive relations and constant flux boundary conditions, it is not the valid governing equation to model gravity-driven fingers, and therefore is also suspect for unsaturated flow in initially dry, highly nonlinear, and hysteretic media where these fingers occur. However, analysis of truncation error at the wetting front for the partial downwind method suggests the required mathematical behavior of a more comprehensive and physically based modeling approach for this region of parameter space.
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
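The law of error propagation mentioned above maps the covariance matrix C of the orbital elements through the linearized element-to-position mapping J as J C J^T. A minimal sketch with illustrative numbers (not from any real orbit fit):

```python
# Linear error propagation: given the covariance C of two "orbital elements"
# and the Jacobian J of the element-to-position map at some epoch, the
# propagated position covariance is J C J^T. Numbers are illustrative only.

def mat_mul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

C = [[1e-6, 2e-7],   # covariance of the two elements (with correlation)
     [2e-7, 4e-6]]
J = [[1.0, 0.5],     # Jacobian of position w.r.t. elements
     [0.0, 2.0]]

position_cov = mat_mul(mat_mul(J, C), transpose(J))
print(position_cov)
```

The eigenvectors and eigenvalues of the propagated covariance give the orientation and semi-axes of the positional uncertainty ellipsoid at that epoch; repeating the propagation at past and future epochs traces how the ellipsoid stretches along the orbit.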
Numerical likelihood analysis of cosmic ray anisotropies
Carlos Hojvat et al.
2003-07-02
A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.
Two Ways of Looking at Error-Analysis.
ERIC Educational Resources Information Center
Strevens, Peter
In this paper the author discusses "error-analysis"; its emergence as a recognized technique in applied linguistics, with a function in the preparation of new or improved teaching materials; and its new place in relation to theories of language learning and language teaching. He believes that error-analysis has suddenly found a new importance, and…
Dose error analysis for a scanned proton beam delivery system
NASA Astrophysics Data System (ADS)
Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.
2010-12-01
All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
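The statistical procedure (deliver the same plan many times with random fluctuations, then compute the rms dose deviation in each voxel) can be sketched in one dimension. The 2% spot-intensity noise and the toy geometry are assumptions for illustration, not the measured Loma Linda beam data:

```python
import random
import math

# Repeat the same "treatment" many times with random per-spot intensity
# fluctuations, then compute the rms dose deviation in each voxel, mirroring
# the paper's multiple-delivery statistical analysis. One spot per voxel here.

random.seed(1)
n_voxels, n_treatments = 20, 200
prescribed = 2.0  # Gy

def deliver(noise=0.02):
    """One simulated treatment: each voxel's spot delivers its prescribed
    dose perturbed by Gaussian intensity noise (2% is an assumed value)."""
    return [prescribed * random.gauss(1.0, noise) for _ in range(n_voxels)]

doses = [deliver() for _ in range(n_treatments)]
rms_error = []
for v in range(n_voxels):
    vals = [doses[t][v] for t in range(n_treatments)]
    mean = sum(vals) / n_treatments
    rms_error.append(math.sqrt(sum((d - mean) ** 2 for d in vals) / n_treatments))

worst_pct = 100 * max(rms_error) / prescribed
print(f"worst-voxel rms dose error: {worst_pct:.2f}% of prescription")
```

In the full simulation each voxel receives contributions from many overlapping Gaussian pencil beams, so position and energy errors couple neighboring voxels; this sketch keeps only the intensity-fluctuation term that the rms statistic is built on.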
Numerical Package in Computer Supported Numeric Analysis Teaching
ERIC Educational Resources Information Center
Tezer, Murat
2007-01-01
At universities in the faculties of Engineering, Sciences, Business and Economics together with higher education in Computing, it is stated that because of the difficulty, calculators and computers can be used in Numerical Analysis (NA). In this study, the learning computer supported NA will be discussed together with important usage of the…
Size and Shape Analysis of Error-Prone Shape Data
Du, Jiejun; Dryden, Ian L.; Huang, Xianzheng
2015-01-01
We consider the problem of comparing sizes and shapes of objects when landmark data are prone to measurement error. We show that naive implementation of ordinary Procrustes analysis that ignores measurement error can compromise inference. To account for measurement error, we propose the conditional score method for matching configurations, which guarantees consistent inference under mild model assumptions. The effects of measurement error on inference from naive Procrustes analysis and the performance of the proposed method are illustrated via simulation and application in three real data examples. Supplementary materials for this article are available online. PMID:26109745
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
NASA Technical Reports Server (NTRS)
Fiske, David R.
2004-01-01
In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently to data with explicit reflection symmetries.
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1997-01-01
We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a
The error analysis and online measurement of linear slide motion error in machine tools
NASA Astrophysics Data System (ADS)
Su, H.; Hong, M. S.; Li, Z. J.; Wei, Y. L.; Xiong, S. B.
2002-06-01
A new, accurate two-probe time-domain method is put forward to measure the straightness component of motion error in machine tools. The non-periodic, non-closing character of the straightness profile error is liable to introduce higher-order harmonic distortion into the measurement results. This distortion is avoided by the new two-probe time-domain method through a symmetry-continuation algorithm together with uniformity and least-squares methods. The harmonic suppression is analysed in detail through modern control theory. Both the straightness component of the motion error in machine tools and the profile error of a workpiece manufactured on the machine can be measured at the same time, and all of this information is available for diagnosing the origin of faults in machine tools. The analysis is shown to be correct through experiment.
Mode error analysis of impedance measurement using twin wires
NASA Astrophysics Data System (ADS)
Huang, Liang-Sheng; Yoshiro, Irie; Liu, Yu-Dong; Wang, Sheng
2015-03-01
Both the longitudinal and transverse coupling impedances of certain critical components need to be measured for accelerator design. The twin-wires method is widely used to measure longitudinal and transverse impedance on the bench. A mode error is induced when the twin-wires method is used with a two-port network analyzer. Here, the mode error is analyzed theoretically and an example analysis is given. The mode error in the measurement is a few percent when a hybrid with no less than 25 dB isolation and a splitter with no less than 20 dB magnitude error are used. Supported by the Natural Science Foundation of China (11175193, 11275221)
A Numerical Study of Some Potential Sources of Error in Side-by-Side Seismometer Evaluations
Holcomb, L. Gary
1990-01-01
This report presents the results of a series of computer simulations of potential errors in test data, which might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNR's). This report extends this analysis to high SNR's to determine the limitations of the direct method for calculating the noise levels at signal-to-noise levels which are much higher than presented previously (see Holcomb 1989). Next, the method is used to analyze a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of bandshaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real world data because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and bandshaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that
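The noise-separation idea for two colocated sensors recording a common signal plus independent self-noise can be sketched with the standard cross-spectral identity: the cross-spectrum estimates the common signal power, and each sensor's noise PSD is the remainder of its auto-spectrum. Whether this matches Holcomb's exact formulation is an assumption of the sketch; the numbers below are synthetic:

```python
import numpy as np
from scipy.signal import welch, csd

rng = np.random.default_rng(1)
n, fs = 2 ** 16, 100.0
common = rng.standard_normal(n)                   # shared ground motion (white)
out1 = common + 0.3 * rng.standard_normal(n)      # sensor 1: signal + own noise
out2 = common + 0.3 * rng.standard_normal(n)      # sensor 2: signal + own noise

f, p11 = welch(out1, fs=fs, nperseg=1024)
_, p22 = welch(out2, fs=fs, nperseg=1024)
_, p12 = csd(out1, out2, fs=fs, nperseg=1024)

# Common signal power ~ |P12|; each sensor's noise PSD is the remainder.
noise1 = p11 - np.abs(p12)
noise2 = p22 - np.abs(p12)
```

As the SNR grows, the remainder becomes a small difference of two large, noisy estimates, which is exactly the high-SNR limitation the report investigates.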
Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation
Barbero, Sergio; Thibos, Larry N.
2007-01-01
Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
Classification error analysis in stereo vision
NASA Astrophysics Data System (ADS)
Gross, Eitan
2015-07-01
Depth perception in humans is obtained by comparing images generated by the two eyes to each other. Given the highly stochastic nature of neurons in the brain, this comparison requires maximizing the mutual information (MI) between the neuronal responses in the two eyes by distributing the coding information across a large number of neurons. Unfortunately, MI is not an extensive quantity, making it very difficult to predict how the accuracy of depth perception will vary with the number of neurons (N) in each eye. To address this question we present a two-arm, distributed decentralized sensors detection model. We demonstrate how the system can extract depth information from a pair of discrete valued stimuli represented here by a pair of random dot-matrix stereograms. Using the theory of large deviations we calculated the rates at which the global average error probability of our detector and the MI between the two arms' outputs vary with N. We found that MI saturates exponentially with N at a rate which decays as 1/N. The rate function approaches the Chernoff distance between the two probability distributions asymptotically. Our results may have implications in computer stereo vision that uses Hebbian-based algorithms for terrestrial navigation.
Attitude Determination Error Analysis System (ADEAS) mathematical specifications document
NASA Technical Reports Server (NTRS)
Nicholson, Mark; Markley, F.; Seidewitz, E.
1988-01-01
The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
Error analysis of large aperture static interference imaging spectrometer
NASA Astrophysics Data System (ADS)
Li, Fan; Zhang, Guo
2015-12-01
The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux, and a wide spectral range, which overcomes the contradiction between high flux and high stability and therefore has important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, LASIS exhibits different error laws in the imaging process, and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging must be understood. In this paper, LASIS errors are classified as interferogram error, radiometric-correction error, and spectral-inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined temporal and spatial modulation is experimentally analyzed, together with the errors from the radiometric-correction and spectral-inversion processes.
Errors of DWPF frit analysis: Final report
Schumacher, R.F.
1993-01-20
Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis from a commercial analytical laboratory. The following effort provides additional quantitative information on the variability of frit chemical analyses at two commercial laboratories. Identical samples of IDMS Frit 202 were chemically analyzed at two commercial laboratories and at three different times over a period of four months. The SRL-ADS analyses, after correction with the reference standard and normalization, provided confirmatory information, but did not detect the low silica level in one of the frit samples. A methodology utilizing elliptical limits for confirming the certificate of conformance or confirmatory analysis was introduced and recommended for use when the analysis values are close but not within the specification limits. It was also suggested that the lithia specification limits might be reduced as long as CELS is used to confirm the analysis.
Recent results on parametric analysis of differential Omega error
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.; Piserchia, P. V.
1974-01-01
Previous tests of the differential Omega concept and an analysis of the characteristics of VLF propagation make it possible to delineate various factors which might contribute to the variation of errors in phase measurements at an Omega receiver site. An experimental investigation is conducted to determine the effect of each of a number of parameters on differential Omega accuracy and to develop prediction equations. The differential Omega error form is considered and preliminary results are presented of the regression analysis used to study differential error.
Dose error analysis for a scanned proton beam delivery system.
Coutrakon, G; Wang, N; Miller, D W; Yang, Y
2010-12-01
All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy. PMID:21076200
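The procedure of delivering many simulated treatments and taking the per-voxel rms can be sketched in one dimension with Gaussian pencil beams. All geometry and error magnitudes below are assumptions for illustration, not Loma Linda machine data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(0, 80, 2.5)            # voxel centers, mm (assumed geometry)
spots = np.arange(0, 80, 5.0)        # planned spot positions, mm (assumed)
sigma = 8.0                           # pencil-beam width, mm (assumed)

def deliver(pos_err_mm=0.5, int_err=0.01):
    # one simulated treatment with random spot-position and intensity errors
    dose = np.zeros_like(x)
    for s in spots:
        c = s + rng.normal(0.0, pos_err_mm)
        w = 1.0 + rng.normal(0.0, int_err)
        dose += w * np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
    return dose

runs = np.array([deliver() for _ in range(200)])
mean_dose = runs.mean(axis=0)
rms_err = runs.std(axis=0)            # per-voxel rms delivery error
rel = rms_err / mean_dose.max()       # relative to the plateau dose
```

With sub-millimeter position errors and ~1% intensity fluctuations, the overlapping spots average out and the relative rms stays in the low single-percent range, mirroring the paper's conclusion.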
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
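The core multiple-regression step can be sketched as follows. The predictor names follow the abstract (files radiated, workload, novelty), but the data values and coefficients are invented for illustration; the JPL dataset is not public in this record:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120                                    # synthetic monthly records (invented)
files = rng.integers(10, 100, n)           # files radiated
workload = rng.uniform(0, 1, n)            # subjective workload score
novelty = rng.uniform(0, 1, n)             # operational novelty score
errors = 0.02 * files + 1.5 * workload + 0.8 * novelty + rng.normal(0, 0.3, n)

# Multiple linear regression via least squares, with an intercept column.
X = np.column_stack([np.ones(n), files, workload, novelty])
coef, *_ = np.linalg.lstsq(X, errors, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((errors - pred) ** 2) / np.sum((errors - errors.mean()) ** 2)
```

The R² computed this way is the "how much of the variability can be explained" quantity the abstract refers to.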
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Abundance recovery error analysis using simulated AVIRIS data
NASA Technical Reports Server (NTRS)
Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.
1992-01-01
Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario which is modeled as being derived from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
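For a linear mixing model x = Ea + n with white noise of variance σ², the closed-form covariance of the least-squares abundances is σ²(EᵀE)⁻¹, and a Monte Carlo run should reproduce it. The endmember spectra below are synthetic stand-ins, not AVIRIS data:

```python
import numpy as np

rng = np.random.default_rng(3)
bands, mats = 50, 3
E = rng.uniform(0, 1, (bands, mats))       # endmember spectra (synthetic)
a_true = np.array([0.5, 0.3, 0.2])         # true abundances
sigma = 0.01                                # measurement-noise std

# Covariance analysis: closed form, no trials needed.
cov_closed = sigma ** 2 * np.linalg.inv(E.T @ E)

# Monte Carlo: thousands of noisy unmixing trials.
P = np.linalg.pinv(E)
trials = np.array([P @ (E @ a_true + sigma * rng.standard_normal(bands))
                   for _ in range(5000)])
cov_mc = np.cov(trials.T)
```

The two covariance estimates agree to within Monte Carlo sampling noise, which is the efficiency argument of the abstract: the closed form replaces thousands of trials.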
Errors of DWPF Frit analysis. Final report
Schumacher, R.F.
1992-01-24
Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis by a commercial laboratory. The following effort provides additional quantitative information on the variability of frit analyses at two commercial laboratories.
Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.
2004-01-01
The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961
Parameter estimation and error analysis in environmental modeling and computation
NASA Technical Reports Server (NTRS)
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
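A minimal sketch of nonlinear least-squares parameter estimation with parameter standard errors is below. The first-order decay model and all numbers are illustrative assumptions, not the program described in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 40)
k_true, c0_true = 0.35, 5.0
obs = c0_true * np.exp(-k_true * t) + rng.normal(0, 0.05, t.size)

def model(t, c0, k):
    # first-order decay, a stand-in environmental model
    return c0 * np.exp(-k * t)

popt, pcov = curve_fit(model, t, obs, p0=(1.0, 0.1))
perr = np.sqrt(np.diag(pcov))        # standard errors of the fitted parameters
```

The "dynamically changing error structure" of the abstract corresponds here to curve_fit's `sigma` and `absolute_sigma` arguments, which attach per-observation weights to the fit.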
Optimization design and error analysis of photoelectric autocollimator
NASA Astrophysics Data System (ADS)
Gao, Lei; Yan, Bixi; Hu, Mingjun; Dong, Mingli
2012-11-01
A photoelectric autocollimator employing an area charge-coupled device (CCD) as its target receiver, specially used in numerical stage calibration, is optimized, and the various error factors are analyzed. By using the ZEMAX software, the image quality is optimized to ensure that the spherical and coma aberrations of the collimating system are less than 0.27 mm and 0.035 mm respectively; the root-mean-square (RMS) radius is close to 6.45 microns, which matches the resolution of the CCD, and the modulation transfer function (MTF) is greater than 0.3 in the full field of view and 0.5 in the centre field at the corresponding frequency. The errors originate mainly from fabrication and alignment, and are all about 0.4". The error synthesis shows that the instrument can meet the demands of the design accuracy, which is also consistent with the experiment.
Bahşı, Ayşe Kurt; Yalçınbaş, Salih
2016-01-01
In this study, the Fibonacci collocation method, based on Fibonacci polynomials, is presented to solve fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials. With this treatment of the fractional derivative, the equation can be reduced to a set of linear algebraic equations. Also, an error estimation algorithm based on the residual functions is presented for this method, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function of the problem can be approximately computed by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more efficient than direct numerical solutions. Numerical examples, figures, tables, and comparisons are presented to show the efficiency and usability of the proposed method. PMID:27610294
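The residual-correction idea (solve at low order, form the residual, solve the error equation at higher order, add the estimated error) can be sketched on a much simpler problem. The sketch below substitutes a monomial basis and the ordinary ODE u' + u = 0, u(0) = 1, for the Fibonacci basis and fractional diffusion equation of the paper; that substitution is an assumption for brevity:

```python
import numpy as np

def solve_collocation(N, rhs, u0):
    # Polynomial collocation: u(x) = sum_k c_k x^k with u' + u = rhs(x) at
    # N points on [0, 1], plus the initial condition u(0) = u0.
    xs = np.linspace(0.0, 1.0, N)
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0], b[0] = 1.0, u0                      # initial-condition row
    for i, x in enumerate(xs, start=1):
        for k in range(N + 1):
            du = k * x ** (k - 1) if k > 0 else 0.0
            A[i, k] = du + x ** k                # (u' + u) coefficient of c_k
        b[i] = rhs(x)
    return np.polynomial.Polynomial(np.linalg.solve(A, b))

u4 = solve_collocation(4, lambda x: 0.0, 1.0)    # low-order solution
residual = lambda x: u4.deriv()(x) + u4(x)       # residual function of u4

# Error problem e' + e = -R, e(0) = 0, solved at higher order; adding the
# estimated error function gives the improved solution.
e8 = solve_collocation(8, lambda x: -residual(x), 0.0)
improved = lambda x: u4(x) + e8(x)
```

Since the exact error of u4 satisfies the error equation exactly, the higher-order estimate of it both bounds the absolute error and sharply improves the solution, which is the mechanism the abstract describes.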
Numerical analysis of wave scattering
NASA Astrophysics Data System (ADS)
Beran, Mark J.
1994-12-01
The following topics were studied in detail during the report period: (1) Combined volume and surface scattering in a channel, using a modal formulation. (2) Two-way formulation to account for backscattering in a channel. (3) Data analysis to determine vertical and horizontal correlation lengths of the random index-of-refraction fluctuations in a channel. (4) The effect of random fluctuations on the two-frequency coherence function in a shallow channel. (5) Approximate eigenfunctions and eigenvalues for linear sound-speed profiles. (6) The effect of sea-water absorption on scattering in a shallow channel.
Sensitivity analysis of DOA estimation algorithms to sensor errors
NASA Astrophysics Data System (ADS)
Li, Fu; Vaccaro, Richard J.
1992-07-01
A unified statistical performance analysis using subspace perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in the presence of sensor errors. In particular, the multiple signal classification (MUSIC), min-norm, state-space realization (TAM and DDA) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms are analyzed. This analysis assumes that only a finite amount of data is available. An analytical expression for the mean-squared error of the DOA estimates is developed for theoretical comparison in a simple and self-contained fashion. The tractable formulas provide insight into the algorithms. Simulation results verify the analysis.
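Of the algorithms analyzed, MUSIC is the easiest to sketch, including its sensitivity to sensor errors: generate data with perturbed element positions but steer with the nominal array model. The array geometry, SNR, and error magnitudes below are illustrative assumptions, not the paper's analytical setup:

```python
import numpy as np

def music_spectrum(R, pos, grid):
    # Noise-subspace MUSIC pseudo-spectrum for a linear array; one source assumed.
    w, V = np.linalg.eigh(R)
    En = V[:, :-1]                                   # noise subspace
    p = []
    for th in grid:
        a = np.exp(2j * np.pi * pos * np.sin(th))    # steering vector (pos in wavelengths)
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)

rng = np.random.default_rng(5)
m, snaps = 8, 200
pos_nom = 0.5 * np.arange(m)                         # nominal half-wavelength ULA
theta = np.deg2rad(20.0)                             # true DOA

def estimate(pos_actual):
    # Data generated with the ACTUAL positions, processed with the NOMINAL model.
    a = np.exp(2j * np.pi * pos_actual * np.sin(theta))
    s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
    n = 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
    x = np.outer(a, s) + n
    R = x @ x.conj().T / snaps
    grid = np.deg2rad(np.linspace(-90, 90, 1801))
    return np.rad2deg(grid[np.argmax(music_spectrum(R, pos_nom, grid))])

doa_clean = estimate(pos_nom)                              # no sensor error
doa_pert = estimate(pos_nom + rng.normal(0, 0.02, m))      # 0.02-wavelength errors
```

Repeating `estimate` over many noise and perturbation draws gives the empirical mean-squared DOA error that the paper's perturbation expansions predict analytically.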
Error control in the GCF: An information-theoretic model for error analysis and coding
NASA Technical Reports Server (NTRS)
Adeyemi, O.
1974-01-01
The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.
Application of human error analysis to aviation and space operations
Nelson, W.R.
1998-03-01
For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.
Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors
ERIC Educational Resources Information Center
Sarcevic, Aleksandra
2009-01-01
An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…
Bootstrap Standard Error Estimates in Dynamic Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Browne, Michael W.
2010-01-01
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
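The bootstrap standard-error idea the abstract relies on can be sketched on a deliberately simple estimator, where the algebraic formula exists for comparison. In the paper the statistic would be a dynamic factor analysis parameter estimate; the sample mean below is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(10.0, 2.0, 80)               # observed sample

def statistic(sample):
    # any estimator; in the paper this would be a DFA parameter estimate
    return np.mean(sample)

# Resample with replacement, recompute the statistic, take the spread.
boot = np.array([statistic(rng.choice(x, size=x.size, replace=True))
                 for _ in range(2000)])
se_boot = boot.std(ddof=1)                   # bootstrap standard error
se_formula = x.std(ddof=1) / np.sqrt(x.size) # algebraic SE for the mean
```

For the mean the two agree closely; the point of the paper is that the bootstrap still works when the algebraic formula is too difficult to obtain.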
Cloud retrieval using infrared sounder data - Error analysis
NASA Technical Reports Server (NTRS)
Wielicki, B. A.; Coakley, J. A., Jr.
1981-01-01
An error analysis is presented for cloud-top pressure and cloud-amount retrieval using infrared sounder data. Rms and bias errors are determined for instrument noise (typical of the HIRS-2 instrument on Tiros-N) and for uncertainties in the temperature profiles and water vapor profiles used to estimate clear-sky radiances. Errors are determined for a range of test cloud amounts (0.1-1.0) and cloud-top pressures (920-100 mb). Rms errors vary by an order of magnitude depending on the cloud height and cloud amount within the satellite's field of view. Large bias errors are found for low-altitude clouds. These bias errors are shown to result from physical constraints placed on retrieved cloud properties, i.e., cloud amounts between 0.0 and 1.0 and cloud-top pressures between the ground and tropopause levels. Middle-level and high-level clouds (above 3-4 km) are retrieved with low bias and rms errors.
Linear error analysis of slope-area discharge determinations
Kirby, W.H.
1987-01-01
The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
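The Taylor-series variance propagation can be sketched with Manning's equation as the discharge formula. The channel values and standard deviations below are assumed for illustration; the covariance matrix is diagonal here, but off-diagonal terms slot straight into the same quadratic form:

```python
import numpy as np

def discharge(n, A, R, S):
    # Manning's equation (SI units): Q = (1/n) A R^(2/3) S^(1/2)
    return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(S)

p = np.array([0.035, 120.0, 2.5, 0.0008])   # n, area m^2, hydraulic radius m, slope
sd = np.array([0.005, 6.0, 0.1, 0.0001])    # assumed measurement std devs
cov = np.diag(sd ** 2)                       # correlated errors would add off-diagonals

# First-order (Taylor-series) gradient of Q, by central differences.
g = np.zeros(4)
for i in range(4):
    dp = np.zeros(4); dp[i] = 1e-6 * p[i]
    g[i] = (discharge(*(p + dp)) - discharge(*(p - dp))) / (2 * dp[i])

var_q = g @ cov @ g                          # weighted sum of (co)variances
rel_err = np.sqrt(var_q) / discharge(*p)
```

The gradient components are the "weights" of the abstract; here the n and slope terms dominate, since their relative uncertainties enter with the largest sensitivities.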
Geometric error analysis for shuttle imaging spectrometer experiment
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.
1984-01-01
The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.
NASA Astrophysics Data System (ADS)
Privé, N. C.; Errico, R. M.; Tai, K.-S.
2013-06-01
The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.
Simple Numerical Analysis of Longboard Speedometer Data
ERIC Educational Resources Information Center
Hare, Jonathan
2013-01-01
Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…
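A minimal version of that spreadsheet workflow, using hypothetical time/distance samples rather than the article's actual speedometer voltages, is:

```python
import numpy as np

# Hypothetical samples: time (s) and distance (m); here distance grows
# quadratically, i.e. constant acceleration of 0.5 m/s^2 from rest.
t = np.linspace(0.0, 10.0, 101)
x = 0.25 * t ** 2

# Central differences (np.gradient) mirror the spreadsheet recipe of
# differencing adjacent cells and dividing by the time step.
v = np.gradient(x, t)   # velocity, m/s
a = np.gradient(v, t)   # acceleration, m/s^2
```

Away from the endpoints, the central differences recover v = 0.5 t and a = 0.5 exactly for this quadratic track; with real, noisy voltage data some smoothing would be needed before differencing.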
Numerical analysis of randomly forced glycolytic oscillations
Ryashko, Lev
2015-03-10
Randomly forced glycolytic oscillations in the Higgins model are studied both numerically and analytically. The numerical analysis is based on direct simulation of solutions of the stochastic system. Non-uniformity of the stochastic bundle along the deterministic cycle is shown. For the analytical investigation of the randomly forced Higgins model, the stochastic sensitivity function technique and the confidence domains method are applied. Results on the influence of additive noise on the cycle of this model are given.
SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE
NASA Technical Reports Server (NTRS)
Costello, F. A.
1994-01-01
The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two-phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component, such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the second component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third. The computations follow for the rest of the system, back to the first component.
A Case of Error Disclosure: A Communication Privacy Management Analysis
Petronio, Sandra; Helft, Paul R.; Child, Jeffrey T.
2013-01-01
To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way clinicians make choices in telling patients about a mistake have the potential to address reasons for that resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, an analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices when revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information
NASA Astrophysics Data System (ADS)
Žitko, Rok
2011-08-01
In the numerical renormalization-group (NRG) calculations of spectral functions of quantum impurity models, the results are always affected by discretization and truncation errors. The discretization errors can be alleviated by averaging over different discretization meshes (“z-averaging”), but since each partial calculation is performed for a finite discrete system, there are always some residual discretization and finite-size errors. The truncation errors affect the energies of the states and result in the displacement of the delta-peak spectral contributions from their correct positions. The two types of errors are interrelated: for coarser discretization, the discretization errors increase, but the truncation errors decrease since the separation of energy scales is enhanced. In this work, it is shown that by calculating a series of spectral functions for a range of the total number of states kept in the NRG truncation, it is possible to estimate the errors and determine the error bars for spectral functions, which is important when making accurate comparison to the results obtained by other methods and for determining the errors in the extracted quantities (such as peak positions, heights, and widths). The closely related problem of spectral broadening is also discussed: it is shown that overbroadening contorts the results without, surprisingly, reducing the variance of the curves. It is thus important to determine the results in the limit of zero broadening. The method is applied to determine the error bounds for the Kondo peak splitting in an external magnetic field. For moderately strong fields, the results are consistent with the Bethe ansatz study by Moore and Wen [Phys. Rev. Lett. 85, 1722 (2000)]. We also discuss the regime of large U/Γ ratio. It is shown that in the strong-field limit, a spectral step is observed in the spectrum precisely at the Zeeman frequency until the field becomes so strong that
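The error-bar idea above (the spread of results over a family of calculations with different numbers of kept states) can be illustrated schematically; the curves below are toy Lorentzians with small random peak shifts standing in for genuine NRG spectral functions:

```python
import numpy as np

rng = np.random.default_rng(1)
omega = np.linspace(-1.0, 1.0, 201)

def spectral_curve(shift, gamma=0.05):
    # Stand-in for A(omega) from one truncation: a Lorentzian whose peak
    # position carries a small truncation-dependent displacement.
    return (gamma / np.pi) / ((omega - shift) ** 2 + gamma ** 2)

# One curve per choice of the number of kept states.
curves = np.array([spectral_curve(s) for s in rng.normal(0.0, 0.005, size=8)])

# Pointwise mean and standard deviation give the curve and its error bars.
mean_curve = curves.mean(axis=0)
error_bar = curves.std(axis=0)
```

The same pointwise-statistics step applies whatever produced the family of curves; only the physics of how the family is generated (here, faked with random shifts) differs from the NRG procedure.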
Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks
NASA Astrophysics Data System (ADS)
Johnson, Joseph
2016-03-01
We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called "metanumbers") support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a "supernet" of all numerical information, supporting new initiatives in AI.
Monte Carlo analysis of localization errors in magnetoencephalography
Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.
1989-01-01
In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
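The Monte Carlo recipe described above (re-fit many synthetic data sets generated from the fitted model plus fresh realizations of the measurement error, then summarize the scatter of the fitted parameters) is model-agnostic. A sketch with a linear model standing in for the paper's nonlinear dipole fits:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy measurement: straight line plus Gaussian noise (the paper fits
# nonlinear dipole models via Levenberg-Marquardt, but the Monte Carlo
# propagation step is identical).
x = np.linspace(0.0, 1.0, 50)
sigma = 0.05
data = 2.0 * x + 1.0 + rng.normal(0.0, sigma, x.size)

# Step 1: fit the measured data.
slope, intercept = np.polyfit(x, data, 1)

# Step 2: generate many synthetic data sets from the fitted model plus
# fresh noise realizations, and re-fit each one.
trials = np.array([
    np.polyfit(x, slope * x + intercept + rng.normal(0.0, sigma, x.size), 1)
    for _ in range(500)
])

# Step 3: the scatter of the re-fitted parameters estimates their
# variances/covariances under the assumed error statistics.
param_std = trials.std(axis=0)   # [std(slope), std(intercept)]
```

Multidimensional confidence regions, as mentioned in the abstract, follow from the full covariance of `trials` rather than the marginal standard deviations alone.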
The ∇ · B = 0 Constraint Versus Minimization of Numerical Errors in MHD Simulations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern; Mansour, Nagi (Technical Monitor)
2002-01-01
The MHD equations are a system of non-strictly hyperbolic conservation laws. The non-convexity of the inviscid flux vector results in corresponding Jacobian matrices with undesirable properties. It has previously been shown by Powell et al. (1995) that an 'almost' equivalent MHD system in non-conservative form can be derived. This non-conservative system has a better conditioned eigensystem. Aside from Powell et al., the MHD equations can be derived from basic principles in either conservative or non-conservative form. The ∇ · B = 0 constraint of the MHD equations is only an initial condition constraint; it is very different from the incompressible Navier-Stokes equations, in which the divergence condition is needed to close the system (i.e., to have the same number of equations as unknowns). In the MHD formulations, if ∇ · B = 0 initially, all one needs is to construct appropriate numerical schemes that preserve this constraint at later times. In other words, one does not need the ∇ · B condition to close the MHD system. We formulate our new scheme together with the Cargo & Gallice (1997) form of the MHD approximate Riemann solver in curvilinear grids for both versions of the MHD equations. A novel feature of our new method is that the well-conditioned eigen-decomposition of the non-conservative MHD equations is used to solve the conservative equations. This new feature of the method provides well-conditioned eigenvectors for the conservative formulation, so that correct wave speeds for discontinuities are assured. The justification for using the non-conservative eigen-decomposition to solve the conservative equations is that our scheme has better control of the numerical error associated with the divergence-free condition on the magnetic field. Consequently, computing both forms of the equations with the same eigen-decomposition is almost equivalent. It will be shown that this approach, using the non-conservative eigensystem when
Error analysis for momentum conservation in Atomic-Continuum Coupled Model
NASA Astrophysics Data System (ADS)
Yang, Yantao; Cui, Junzhi; Han, Tiansi
2016-04-01
The Atomic-Continuum Coupled Model (ACCM) is a multiscale computational model proposed by Xiang et al. (in IOP Conference Series: Materials Science and Engineering, 2010), which is used to study and simulate the dynamics and thermal-mechanical coupling behavior of crystal materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, to implement the computation of the ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error in the momentum conservation equation introduced by the ACCM, and derive a sequence of inequalities that bound the error. A numerical experiment is carried out to verify our result.
Error analysis of sub-aperture stitching interferometry
NASA Astrophysics Data System (ADS)
Jia, Xin; Xu, Fuchao; Xie, Weimin; Xing, Tingwen
2012-10-01
Large-aperture optical elements are widely employed in high-power laser systems, astronomy, and outer-space technology. Sub-aperture stitching is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. With the aim of assessing the accuracy of such equipment, this paper simulates the stitching algorithm to analyze its errors. The selection of the stitching mode and the setting of the number of sub-apertures are given, and stitching is simulated with the programmed algorithms in order to test them. The simulations are implemented in Matlab. The sub-aperture stitching method can also be used to test free-form surfaces, which are generated here from Zernike polynomials. The stitching accuracy depends on tilt and positioning errors, and stitching makes it possible to test the mid-spatial-frequency content of the surface. The Matlab error analysis shows how tilt and positioning errors influence the testing accuracy, and the analysis can also be applied to other interferometer systems.
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2013-03-01
In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to refine the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.
Analysis of systematic errors in lateral shearing interferometry for EUV optical testing
Miyakawa, Ryan; Naulleau, Patrick; Goldberg, Kenneth A.
2009-02-24
Lateral shearing interferometry (LSI) provides a simple means for characterizing the aberrations in optical systems at EUV wavelengths. In LSI, the test wavefront is incident on a low-frequency grating, which causes the resulting diffracted orders to interfere on the CCD. Due to its simple experimental setup and high photon efficiency, LSI is an attractive alternative to point diffraction interferometry and other methods that require spatially filtering the wavefront through small pinholes, which notoriously suffer from low-contrast fringes and alignment difficulties. In order to demonstrate that LSI can be accurate and robust enough to meet industry standards, analytic models are presented to study the effects of unwanted grating and detector tilt on the system aberrations, and a method for identifying and correcting these alignment errors is proposed. The models are subsequently verified by numerical simulation. Finally, an analysis is performed of how errors in the identification and correction of grating and detector misalignment propagate to errors in fringe analysis.
[The analysis of the medication error, in practice].
Didelot, Nicolas; Cistio, Céline
2016-01-01
By performing a systemic analysis of medication errors which occur in practice, the multidisciplinary teams can avoid a reoccurrence with the aid of an improvement action plan. The methods must take into account all the factors which might have contributed to or favoured the occurrence of a medication incident or accident. PMID:27177485
Analysis of possible systematic errors in the Oslo method
Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.
2011-03-15
In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and {gamma}-ray transmission coefficient from a set of particle-{gamma} coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.
Canonical Correlation Analysis that Incorporates Measurement and Sampling Error Considerations.
ERIC Educational Resources Information Center
Thompson, Bruce; Daniel, Larry
Multivariate methods are being used with increasing frequency in educational research because these methods control "experimentwise" error rate inflation, and because the methods best honor the nature of the reality to which the researcher wishes to generalize. This paper: explains the basic logic of canonical analysis; illustrates that canonical…
Numerical Analysis of Robust Phase Estimation
NASA Astrophysics Data System (ADS)
Rudinger, Kenneth; Kimmel, Shelby
Robust phase estimation (RPE) is a new technique for estimating rotation angles and axes of single-qubit operations, steps necessary for developing useful quantum gates [arXiv:1502.02677]. As RPE only diagnoses a few parameters of a set of gate operations while at the same time achieving Heisenberg scaling, it requires relatively few resources compared to traditional tomographic procedures. In this talk, we present numerical simulations of RPE that show both Heisenberg scaling and robustness against state preparation and measurement errors, while also demonstrating numerical bounds on the procedure's efficacy. We additionally compare RPE to gate set tomography (GST), another Heisenberg-limited tomographic procedure. While GST provides a full gate set description, it is more resource-intensive than RPE, leading to potential tradeoffs between the procedures. We explore these tradeoffs and numerically establish criteria to guide experimentalists in deciding when to use RPE or GST to characterize their gate sets. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
An analysis of pilot error-related aircraft accidents
NASA Technical Reports Server (NTRS)
Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.
1974-01-01
A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable; cluster analysis, an exploratory research tool that will lead to increased understanding, improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses; and pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.
Star tracker error analysis: Roll-to-pitch nonorthogonality
NASA Technical Reports Server (NTRS)
Corson, R. W.
1979-01-01
An error analysis is described on an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases which implied that either one or both of the star tracker measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of error.
Numerical error in electron orbits with large ω_ce Δt
Parker, S.E.; Birdsall, C.K.
1989-12-20
We have found that running electrostatic particle codes with relatively large ω_ce Δt in some circumstances does not significantly affect the physical results. We first present results from a single particle mover finding the correct first order drifts for large ω_ce Δt. We then characterize the numerical orbit of the Boris algorithm for rotation when ω_ce Δt ≫ 1. Next, an analysis of the guiding center motion is given showing why the first order drift is retained at large ω_ce Δt. Lastly, we present a plasma simulation of a one-dimensional cross-field sheath, with large and small ω_ce Δt, with very little difference in the results. 15 refs., 7 figs., 1 tab.
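The rotation step of the Boris algorithm referred to above can be written down directly. For B along z, a single step rotates the velocity by 2 arctan(ω_ce Δt / 2) while conserving kinetic energy exactly, a known property of the Boris rotation that makes large ω_ce Δt degrade the rotation angle rather than the stability:

```python
import numpy as np

def boris_rotate(v, omega_dt):
    """One magnetic rotation step of the Boris algorithm, B along z.
    omega_dt = omega_ce * Delta_t (dimensionless)."""
    t = np.array([0.0, 0.0, omega_dt / 2.0])   # half-angle tangent vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)
    return v + np.cross(v_prime, s)

v0 = np.array([1.0, 0.0, 0.0])
v1 = boris_rotate(v0, 10.0)                    # omega_ce * dt >> 1

speed_error = abs(np.linalg.norm(v1) - 1.0)    # zero: rotation is exact in |v|
angle = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
```

Per step the numerical rotation angle saturates below π instead of aliasing, here 2 arctan(5) ≈ 2.747 rad instead of the true 10 rad.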
Analysis of the Error Associated With the Domenico Solution
NASA Astrophysics Data System (ADS)
Srinivasan, V.; Clement, T.; Lee, K.
2006-12-01
The Domenico solution is one of the most widely used analytical solutions in screening-level ground water contaminant transport models, e.g., BIOCHLOR and BIOSCREEN. This approximate solution describes the transport of a decaying contaminant subjected to advection in one dimension and dispersion in all three dimensions. However, the development of this solution as presented by the original authors involves approximations that are more heuristic than rigorous. This makes it difficult to predict the nature of the error associated with these approximations. Hence, several ground water modelers have expressed skepticism regarding the validity of this solution. To address these issues, it is necessary to perform a rigorous mathematical analysis of the Domenico solution. In this work a rigorous mathematical approach to derive the Domenico solution is presented. Furthermore, the limits of this approximation are explored to provide a qualitative assessment of the error associated with the Domenico solution. The analysis indicates that the Domenico solution is an exact analytical solution when the value of the longitudinal dispersivity is zero. For all non-zero longitudinal dispersivity values, the Domenico solution will have a finite error. The results of our analysis also indicate that this error is highly sensitive to the value of the longitudinal dispersivity and the position of the advective front. Based on these inferences some general guidelines for the appropriate use of this solution are suggested.
A Method for Treating Discretization Error in Nondeterministic Analysis
Alvin, K.F.
1999-01-27
A response surface methodology-based technique is presented for treating discretization error in non-deterministic analysis. The response surface, or metamodel, is estimated from computer experiments that vary both uncertain physical parameters and the fidelity of the computational mesh. The resultant metamodel is then used to propagate the variabilities in the continuous input parameters, while the mesh size is taken to zero, its asymptotic limit. With respect to mesh size, the metamodel is equivalent to Richardson extrapolation, in which solutions on coarser and finer meshes are used to estimate discretization error. The method is demonstrated on a one-dimensional prismatic bar, in which uncertainty in the third vibration frequency is estimated by propagating variations in material modulus, density, and bar length. The results demonstrate the efficiency of the method for combining non-deterministic analysis with error estimation to obtain estimates of total simulation uncertainty. The results also show the relative sensitivity of failure estimates to solution bias errors in a reliability analysis, particularly when the physical variability of the system is low.
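Richardson extrapolation, the mesh-size limiting case of the metamodel above, works as follows for a known convergence order p. The function below is an illustrative stand-in whose error is a pure C·h^p term, not the bar-vibration problem from the abstract:

```python
# Model problem: a mesh-dependent result f(h) with a leading discretization
# error C*h^p; values are illustrative, with order p = 2.
def f(h, exact=1.0, C=0.4, p=2):
    return exact + C * h ** p

p = 2
f_h, f_h2 = f(0.1), f(0.05)       # coarse and half-size meshes

# Richardson extrapolation eliminates the leading error term...
f_extrap = (2 ** p * f_h2 - f_h) / (2 ** p - 1)

# ...and the difference of the two solutions estimates the fine-mesh error.
err_est = (f_h - f_h2) / (2 ** p - 1)
```

Because the toy error is exactly C·h^p with no higher-order terms, the extrapolated value here recovers the exact solution and `err_est` equals the true fine-mesh error of 0.001.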
Key Curriculum Reform Research on Numerical Analysis
NASA Astrophysics Data System (ADS)
Li, Zhong; Peng, Chensong
Based on current undergraduate teaching characteristics and the actual teaching situation of the numerical analysis curriculum, this paper discusses and makes appropriate adjustments to this course's teaching content and style, and proposes some new curriculum reform plans to improve teaching effectiveness and develop students' abilities in mathematical thinking and computational practice.
How psychotherapists handle treatment errors – an ethical analysis
2013-01-01
Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503
Systems Improved Numerical Fluids Analysis Code
NASA Technical Reports Server (NTRS)
Costello, F. A.
1990-01-01
Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to April, 1983, version of SINDA. Additional routines provide for mathematical modeling of active heat-transfer loops. Simulates steady-state and pseudo-transient operations of 16 different components of heat-transfer loops, including radiators, evaporators, condensers, mechanical pumps, reservoirs, and many types of valves and fittings. Program contains property-analysis routine used to compute thermodynamic properties of 20 different refrigerants. Source code written in FORTRAN 77.
Davidson, R. L.; Earle, G. D.; Heelis, R. A.; Klenzing, J. H.
2010-08-15
Planar retarding potential analyzers (RPAs) have been utilized numerous times on high-profile missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellite Program to measure plasma composition, temperature, density, and the velocity component perpendicular to the plane of the instrument aperture. These instruments use biased grids to approximate ideal biased planes. These grids introduce perturbations in the electric potential distribution inside the instrument and, when unaccounted for, cause errors in the measured plasma parameters. Traditionally, the grids utilized in RPAs have been made of fine wires woven into a mesh. Previous studies of the errors caused by grids in RPAs have approximated woven grids with a truly flat grid. Using a commercial ion optics software package, errors in inferred parameters caused by both woven and flat grids are examined. A flat grid geometry shows the smallest temperature and density errors, while the double-thick flat grid displays minimal errors for velocities over the temperature and velocity range used. Wire thickness along the dominant flow direction is found to be a critical design parameter in regard to errors in all three inferred plasma parameters. The results shown for each case provide valuable design guidelines for future RPA development.
Manufacturing in space: Fluid dynamics numerical analysis
NASA Technical Reports Server (NTRS)
Robertson, S. J.; Nicholson, L. A.; Spradley, L. W.
1981-01-01
Natural convection in a spherical container with cooling at the center was numerically simulated using the Lockheed-developed General Interpolants Method (GIM) numerical fluid dynamic computer program. The numerical analysis was simplified by assuming axisymmetric flow in the spherical container, with the symmetry axis being a sphere diagonal parallel to the gravity vector. This axisymmetric spherical geometry was intended as an idealization of the proposed Lal/Kroes growing experiments to be performed on board Spacelab. Results were obtained for a range of Rayleigh numbers from 25 to 10,000. For a temperature difference of 10 C from the cooling sting at the center to the container surface, and a gravitational loading of 10^-6 g, a computed maximum fluid velocity of about 2.4 x 10^-5 cm/sec was reached after about 250 sec. The computed velocities were found to be approximately proportional to the Rayleigh number over the range of Rayleigh numbers investigated.
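The governing parameter in such buoyancy-driven simulations is the Rayleigh number, Ra = g β ΔT L³ / (ν α). A quick check with illustrative water-like properties (not the actual Lal/Kroes experiment values) shows that a microgravity loading of 10^-6 g does push Ra into the low range studied above:

```python
# Rayleigh number: Ra = g * beta * dT * L^3 / (nu * alpha).
def rayleigh(g, beta, dT, L, nu, alpha):
    return g * beta * dT * L ** 3 / (nu * alpha)

# Illustrative values only (water-like fluid, 5 cm container, 10 C difference).
g_micro = 9.81e-6   # 1e-6 g gravitational loading, m/s^2
beta = 2.1e-4       # thermal expansion coefficient, 1/K
dT = 10.0           # temperature difference, K
L = 0.05            # length scale, m
nu = 1.0e-6         # kinematic viscosity, m^2/s
alpha = 1.4e-7      # thermal diffusivity, m^2/s

Ra = rayleigh(g_micro, beta, dT, L, nu, alpha)   # roughly 18 for these inputs
```

At full 1 g the same configuration would sit near Ra ~ 2 x 10^7, which is why the convection regime on Spacelab differs so sharply from the ground.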
Error analysis of a 3D imaging system based on fringe projection technique
NASA Astrophysics Data System (ADS)
Zhang, Zonghua; Dai, Jie
2013-12-01
In the past few years, optical metrology has found numerous applications in scientific and commercial fields owing to its non-contact nature. One of the most popular methods is the measurement of 3D surfaces based on fringe projection techniques, because of the advantages of non-contact operation, full-field and fast acquisition, and automatic data processing. In surface profilometry using a digital light processing (DLP) projector, many factors affect the accuracy of the 3D measurement. However, no published work gives a complete error analysis of such a 3D imaging system. This paper analyzes some possible error sources of a 3D imaging system, for example, the nonlinear response of the CCD camera and DLP projector, the sampling error of the sinusoidal fringe pattern, variation of ambient light, and marker extraction during calibration. These error sources are simulated in a software environment to demonstrate their effects on measurement. Possible compensation methods are proposed to obtain highly accurate shape data. Some experiments were conducted to evaluate the effects of these error sources on 3D shape measurement. Experimental results and performance evaluation show that these errors have a great effect on the measured 3D shape and that it is necessary to compensate for them for accurate measurement.
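Of the error sources listed, the nonlinear response of the projector and camera is straightforward to simulate. The sketch below distorts an ideal sinusoidal fringe with an assumed power-law (gamma) response; the gamma value is an assumption, not a number from the paper, and the harmonic leakage it measures is what shows up as periodic phase error:

```python
import numpy as np

def ideal_fringe(x, period=32.0):
    """Ideal sinusoidal fringe intensity in [0, 1]."""
    return 0.5 + 0.5 * np.cos(2 * np.pi * x / period)

def apply_gamma(intensity, gamma=2.2):
    """Model projector/camera nonlinearity as a power-law response."""
    return intensity ** gamma

x = np.arange(256)
ideal = ideal_fringe(x)
distorted = apply_gamma(ideal)  # what the camera would actually record

# Gamma distortion injects harmonics into the fringe; energy at twice the
# fundamental frequency relative to the fundamental quantifies the leakage.
spectrum = np.abs(np.fft.rfft(distorted - distorted.mean()))
fundamental = int(np.argmax(spectrum))
harmonic_leakage = spectrum[2 * fundamental] / spectrum[fundamental]
```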
NASA Astrophysics Data System (ADS)
Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin
2016-09-01
To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS) as the key payload needs to offer the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass is addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution for the three-dimensional position can be attained. Third, with the assumption of Gaussian beams, the error propagation models for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of beam direction are given respectively. Finally, numerical simulations taking into account the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each sensor.
NASA Technical Reports Server (NTRS)
Snow, L. S.; Kuhn, A. E.
1975-01-01
Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.
ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS
NASA Technical Reports Server (NTRS)
Putney, B.
1994-01-01
The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and
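The decomposition described above, final errors split into a measurement-noise component and a component from errors in the unadjusted ("consider") parameters, can be sketched with the standard linear consider-covariance formulas. The notation below is generic textbook notation, not taken from the ORAN documentation:

```python
import numpy as np

def consider_covariance(H_x, H_c, R, C_cc):
    """Split the estimation error covariance of a linear least-squares fit
    into noise-only and consider-parameter components.

    H_x  : measurement partials w.r.t. adjusted parameters
    H_c  : measurement partials w.r.t. unadjusted (consider) parameters
    R    : measurement noise covariance
    C_cc : assumed covariance of the consider parameters
    """
    W = np.linalg.inv(R)
    P_noise = np.linalg.inv(H_x.T @ W @ H_x)   # formal (noise-only) covariance
    S = -P_noise @ H_x.T @ W @ H_c             # sensitivity to consider params
    P_consider = S @ C_cc @ S.T                # consider contribution
    return P_noise, P_noise + P_consider       # (noise part, total)

# Toy problem: 10 measurements, 3 adjusted and 2 consider parameters.
rng = np.random.default_rng(0)
H_x = rng.normal(size=(10, 3))
H_c = rng.normal(size=(10, 2))
P_noise, P_total = consider_covariance(H_x, H_c, np.eye(10), 0.1 * np.eye(2))
```

The consider term can only inflate the variances, which is why an analysis that evaluates just the noise component (as most orbit determination programs do) understates the true error.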
Ferrofluids: Modeling, numerical analysis, and scientific computation
NASA Astrophysics Data System (ADS)
Tomas, Ignacio
This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the micropolar model proposed by R.E. Rosensweig. The micropolar Navier-Stokes equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point for this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
Structure function analysis of mirror fabrication and support errors
NASA Astrophysics Data System (ADS)
Hvisc, Anastacia M.; Burge, James H.
2007-09-01
Telescopes are ultimately limited by atmospheric turbulence, which is commonly characterized by a structure function. The telescope optics will not further degrade the performance if their errors are small compared to the atmospheric effects. Any further improvement to the mirrors is not economical since there is no increased benefit to performance. Typically the telescope specification is written in terms of an image size or encircled energy and is derived from the best seeing that is expected at the site. Ideally, the fabrication and support errors should never exceed atmospheric turbulence at any spatial scale, so it is instructive to look at how these errors affect the structure function of the telescope. The fabrication and support errors are most naturally described by Zernike polynomials or by bending modes for the active mirrors. This paper illustrates an efficient technique for relating this modal analysis to wavefront structure functions. Data is provided for efficient calculation of structure function given coefficients for Zernike annular polynomials. An example of this procedure for the Giant Magellan Telescope primary mirror is described.
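A wavefront structure function D(r) = <[phi(x) - phi(x+r)]^2> can be estimated numerically from any phase map. The sketch below uses a simple low-order wavefront (tilt plus defocus) as a stand-in; a real analysis would assemble the map from Zernike annular polynomial coefficients as the paper describes:

```python
import numpy as np

def structure_function_1d(phase, max_lag):
    """Estimate D(r) = <[phi(x) - phi(x+r)]^2> along one axis of a phase map."""
    lags = np.arange(1, max_lag + 1)
    d = np.empty(len(lags))
    for i, r in enumerate(lags):
        diff = phase[:, r:] - phase[:, :-r]   # all pairs separated by lag r
        d[i] = np.mean(diff ** 2)
    return lags, d

# Stand-in wavefront on a square grid: tilt + defocus, in waves.
# (Hypothetical coefficients chosen for illustration only.)
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phase = 0.3 * x + 0.2 * (2 * (x**2 + y**2) - 1)
lags, D = structure_function_1d(phase, max_lag=16)
```

Comparing such a curve against the atmospheric structure function at each spatial scale is the check the paper advocates: fabrication and support errors are acceptable wherever D stays below the atmospheric curve.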
Eigenvector method for umbrella sampling enables error analysis.
Thiede, Erik H; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R
2016-08-28
Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912
Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis
NASA Technical Reports Server (NTRS)
Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl
2009-01-01
The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.
Error analysis of 3D laser scanning system for gangue monitoring
NASA Astrophysics Data System (ADS)
Hu, Shaoxing; Xia, Yuyang; Zhang, Aiwu
2012-01-01
This paper puts forward a system-error evaluation method for a 3D laser scanning system for gangue monitoring. The system errors analyzed include an integrated error, which can be avoided, and a measurement error, which requires full analysis. The system equation is first established from the relationships among the structural components. The laws of independent error effects and error propagation are then used to set up the complete error analysis, and the trend of the error along the X, Y, and Z directions is simulated. The analysis shows that the laser rangefinder contributes substantially to the system error, and that the horizontal and vertical scanning angles also influence the system error for given vertical and horizontal scanning parameters.
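For a scanner of this kind, the independent-error propagation step can be sketched with the Jacobian of an assumed spherical measurement geometry (range plus two scan angles); the actual instrument geometry and error budget may differ from this illustration:

```python
import numpy as np

def scanner_point_sigma(r, theta, phi, sigma_r, sigma_theta, sigma_phi):
    """Propagate independent rangefinder and scan-angle errors to XYZ.

    Assumed geometry (an illustration, not the paper's exact model):
      x = r cos(phi) cos(theta), y = r cos(phi) sin(theta), z = r sin(phi)
    with theta the horizontal and phi the vertical scanning angle.
    """
    J = np.array([
        [np.cos(phi) * np.cos(theta), -r * np.cos(phi) * np.sin(theta), -r * np.sin(phi) * np.cos(theta)],
        [np.cos(phi) * np.sin(theta),  r * np.cos(phi) * np.cos(theta), -r * np.sin(phi) * np.sin(theta)],
        [np.sin(phi),                  0.0,                              r * np.cos(phi)],
    ])
    cov_in = np.diag([sigma_r**2, sigma_theta**2, sigma_phi**2])
    cov_xyz = J @ cov_in @ J.T            # first-order error propagation
    return np.sqrt(np.diag(cov_xyz))      # (sigma_x, sigma_y, sigma_z)

# Hypothetical numbers: 50 m range, 2 cm range sigma, 0.1 mrad angle sigmas.
sig = scanner_point_sigma(r=50.0, theta=0.3, phi=0.1,
                          sigma_r=0.02, sigma_theta=1e-4, sigma_phi=1e-4)
```

Sweeping r, theta, and phi over the scan field with this function reproduces the kind of X/Y/Z error-trend curves the paper reports.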
Multiple boundary condition testing error analysis. [for large flexible space structures
NASA Technical Reports Server (NTRS)
Glaser, R. J.; Kuo, C. P.; Wada, B. K.
1989-01-01
Techniques for interpreting data from multiple-boundary-condition (MBC) ground tests of large space structures are developed analytically and demonstrated. The use of MBC testing to validate structures too large to stand alone on the ground is explained; the generalized least-squares mass and stiffness curve-fitting methods typically applied to MBC test data are reviewed; and a detailed error analysis is performed. Consideration is given to sensitivity coefficients, covariance-matrix theory, the correspondence between test and analysis modes, constraints and step sizes, convergence criteria, and factor-analysis theory. Numerical results for a simple beam problem are presented in tables and briefly characterized. The improved error-updating capabilities of MBC testing are confirmed, and it is concluded that reasonably accurate results can be obtained using a diagonal covariance matrix.
NASA Technical Reports Server (NTRS)
Luers, J. K.
1980-01-01
An initial value of pressure is required to derive the density and pressure profiles of the rocketborne rocketsonde sensor. This tie-on pressure value is obtained from the nearest rawinsonde launch at an altitude where overlapping rawinsonde and rocketsonde measurements occur. An error analysis was performed on the error sources in these sensors that contribute to the error in the tie-on pressure. It was determined that significant tie-on pressure errors result from radiation errors in the rawinsonde rod thermistor and from temperature calibration bias errors. To minimize the effect of these errors, radiation corrections should be made to the rawinsonde temperature and the tie-on altitude should be chosen at the lowest altitude of overlapping data. Under these conditions the tie-on error, and consequently the resulting error in the Datasonde pressure and density profiles, is less than 1%. The effect of rawinsonde pressure and temperature errors on the wind and temperature versus height profiles of the rawinsonde was also determined.
Computing the surveillance error grid analysis: procedure and examples.
Kovatchev, Boris P; Wakeman, Christian A; Breton, Marc D; Kost, Gerald J; Louie, Richard F; Tran, Nam K; Klonoff, David C
2014-07-01
The surveillance error grid (SEG) analysis is a tool for analysis and visualization of blood glucose monitoring (BGM) errors, based on the opinions of 206 diabetes clinicians who rated 4 distinct treatment scenarios. Resulting from this large-scale inquiry is a matrix of 337 561 risk ratings, 1 for each pair of (reference, BGM) readings ranging from 20 to 580 mg/dl. The computation of the SEG is therefore complex and in need of automation. The SEG software introduced in this article automates the task of assigning a degree of risk to each data point for a set of measured and reference blood glucose values so that the data can be distributed into 8 risk zones. The software's 2 main purposes are to (1) distribute a set of BG Monitor data into 8 risk zones ranging from none to extreme and (2) present the data in a color coded display to promote visualization. Besides aggregating the data into 8 zones corresponding to levels of risk, the SEG computes the number and percentage of data pairs in each zone and the number/percentage of data pairs above/below the diagonal line in each zone, which are associated with BGM errors creating risks for hypo- or hyperglycemia, respectively. To illustrate the action of the SEG software we first present computer-simulated data stratified along error levels defined by ISO 15197:2013. This allows the SEG to be linked to this established standard. Further illustration of the SEG procedure is done with a series of previously published data, which reflect the performance of BGM devices and test strips under various environmental conditions. We conclude that the SEG software is a useful addition to the SEG analysis presented in this journal, developed to assess the magnitude of clinical risk from analytically inaccurate data in a variety of high-impact situations such as intensive care and disaster settings. PMID:25562887
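The above/below-diagonal bookkeeping described above can be sketched in a few lines. The clinician-derived 337,561-entry risk matrix that drives the actual zone assignment is not reproduced here, so this sketch covers only the diagonal-split step:

```python
def diagonal_split(pairs):
    """Count (reference, BGM) pairs above, below, and on the diagonal.

    Readings above the diagonal (meter reads high) risk driving treatment
    toward hypoglycemia; readings below it risk hyperglycemia. The real SEG
    additionally assigns each pair a risk score from its clinician-rated
    lookup matrix, which is omitted here.
    """
    above = sum(1 for ref, bgm in pairs if bgm > ref)
    below = sum(1 for ref, bgm in pairs if bgm < ref)
    on = len(pairs) - above - below
    return above, below, on

# Toy data in mg/dl: one high reading, one low reading, one exact match.
above, below, on = diagonal_split([(100, 120), (200, 180), (150, 150)])
```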
Beam line error analysis, position correction, and graphic processing
NASA Astrophysics Data System (ADS)
Wang, Fuhua; Mao, Naifeng
1993-12-01
A beam transport line error analysis and beam position correction code called "EAC" has been developed, together with a graphics and data post-processing package for TRANSPORT. Based on the linear optics design using TRANSPORT or other general optics codes, EAC independently analyzes the effects of magnet misalignments, systematic and statistical errors of magnetic fields, and initial beam positions on the central trajectory and on transverse beam emittance dilution. EAC also provides an efficient way to develop beam line trajectory correcting schemes. The post-processing package generates various types of graphics such as the beam line geometrical layout, plots of the Twiss parameters, beam envelopes, etc. It also generates an EAC input file, thus connecting EAC with general optics codes. EAC and the post-processing package are small codes that are easy to access and use. They have become useful tools for the design of transport lines at SSCL.
Fast computation of Lagrangian coherent structures: algorithms and error analysis
NASA Astrophysics Data System (ADS)
Brunton, Steven; Rowley, Clarence
2009-11-01
This work investigates a number of efficient methods for computing finite time Lyapunov exponent (FTLE) fields in unsteady flows by approximating the particle flow map and eliminating redundant particle integrations in neighboring flow maps. Ridges of the FTLE fields are Lagrangian coherent structures (LCS) and provide an unsteady analogue of invariant manifolds from dynamical systems theory. The fast methods fall into two categories, unidirectional and bidirectional, depending on whether flow maps in one or both time directions are composed to form an approximate flow map. An error analysis is presented which shows that the unidirectional methods are accurate while the bidirectional methods have significant error which is aligned with the opposite time coherent structures. This relies on the fact that material from the positive time LCS attracts onto the negative time LCS near time-dependent saddle points.
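A minimal FTLE computation, without the flow-map reuse optimizations the paper studies, looks like the following. The double-gyre velocity field is a standard test flow for LCS work and is an assumption here, not a flow taken from the paper:

```python
import numpy as np

def double_gyre(t, xy, A=0.1, eps=0.25, om=2 * np.pi / 10):
    """Standard time-dependent double-gyre test velocity field."""
    x, y = xy
    a = eps * np.sin(om * t)
    b = 1 - 2 * a
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    return np.array([-np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y),
                      np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx])

def flow_map(p0, t0, T, n=200):
    """Advect one particle with fixed-step RK4 to approximate the flow map."""
    dt = T / n
    p = np.array(p0, dtype=float)
    t = t0
    for _ in range(n):
        k1 = double_gyre(t, p)
        k2 = double_gyre(t + dt / 2, p + dt / 2 * k1)
        k3 = double_gyre(t + dt / 2, p + dt / 2 * k2)
        k4 = double_gyre(t + dt, p + dt * k3)
        p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return p

def ftle(x0, y0, t0=0.0, T=5.0, h=1e-4):
    """FTLE from a centered finite-difference flow-map gradient."""
    dphi = np.column_stack([
        (flow_map((x0 + h, y0), t0, T) - flow_map((x0 - h, y0), t0, T)) / (2 * h),
        (flow_map((x0, y0 + h), t0, T) - flow_map((x0, y0 - h), t0, T)) / (2 * h)])
    C = dphi.T @ dphi                      # Cauchy-Green deformation tensor
    return np.log(np.linalg.eigvalsh(C)[-1]) / (2 * abs(T))

val = ftle(1.0, 0.5)
```

Evaluating `ftle` on a grid gives the field whose ridges mark the LCS; the fast methods in the paper amortize the `flow_map` integrations that this brute-force version repeats for every grid point.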
Jason-2 systematic error analysis in the GPS derived orbits
NASA Astrophysics Data System (ADS)
Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.
2011-12-01
Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinate adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding the geophysical high-frequency variations to the linear ITRF model. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS versus our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking, and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced
Sequential analysis of the numerical Stroop effect reveals response suppression.
Cohen Kadosh, Roi; Gevers, Wim; Notebaert, Wim
2011-09-01
Automatic processing of irrelevant stimulus dimensions has been demonstrated in a variety of tasks. Previous studies have shown that conflict between relevant and irrelevant dimensions can be reduced when a feature of the irrelevant dimension is repeated. The specific level at which the automatic process is suppressed (e.g., perceptual repetition, response repetition), however, is less understood. In the current experiment we used the numerical Stroop paradigm, in which the processing of irrelevant numerical values of 2 digits interferes with the processing of their physical size, to pinpoint the precise level of the suppression. Using a sequential analysis, we dissociated perceptual repetition from response repetition of the relevant and irrelevant dimension. Our analyses of reaction times, error rates, and diffusion modeling revealed that the congruity effect is significantly reduced or even absent when the response sequence of the irrelevant dimension, rather than the numerical value or the physical size, is repeated. These results suggest that automatic activation of the irrelevant dimension is suppressed at the response level. The current results shed light on the level of interaction between numerical magnitude and physical size as well as the effect of variability of responses and stimuli on automatic processing. PMID:21500951
Numerical Analysis of Rocket Exhaust Cratering
NASA Technical Reports Server (NTRS)
2008-01-01
Supersonic jet exhaust impinging onto a flat surface is a fundamental flow encountered in space or with a missile launch vehicle system. The flow is important because it can endanger launch operations. The purpose of this study is to evaluate the effect of a landing rocket's exhaust on soils. From numerical simulations and analysis, we developed characteristic expressions and curves, which we can use, along with rocket nozzle performance, to predict cratering effects during a soft-soil landing. We conducted a series of multiphase flow simulations with two phases: exhaust gas and sand particles. The main objective of the simulation was to obtain numerical results as close to the experimental results as possible. After several simulation test runs, the results showed that the packing limit and the angle of internal friction are the two critical and dominant factors in the simulations.
Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR.
Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao
2016-01-01
The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and then, the focusing is analyzed theoretically and numerically by using the Systems Tool Kit (STK) software. The accurate GEO SAR slant range histories can be calculated according to the perturbed orbit positions in STK. The perturbed slant range errors are mainly the first and second derivatives, leading to image drifts and defocusing. Simulations of the point target imaging are performed to validate the aforementioned analysis. In the GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time are different and dependent on the geometry configurations. Thus, the influences are varying at different orbit positions: at the equator, the first-order phase errors should be mainly considered; at the perigee and apogee, the second-order phase errors should be mainly considered; at other positions, first-order and second-order exist simultaneously. PMID:27598168
Theoretical and Numerical Assessment of Strain Pattern Analysis
NASA Astrophysics Data System (ADS)
Milne, R. D.; Simpson, A.
1996-04-01
The Strain Pattern Analysis (SPA) method was conceived at the RAE in the 1970s as a means of estimating the displacement shape of a helicopter rotor blade by using only strain gauge data, but no attempt was made to provide theoretical justification for the procedure. In this paper, the SPA method is placed on a firm mathematical basis by the use of vector space theory. It is shown that the natural norm which underlies the SPA projection is the strain energy functional of the structure under consideration. The natural norm is a weighted version of the original SPA norm. Numerical experiments on simple flexure and coupled flexure-torsion systems indicate that the use of the natural norm yields structural deflection estimates of significantly greater accuracy than those obtained from the original SPA procedure and that measurement error tolerance is also enhanced. Extensive numerical results are presented for an emulation of the SPA method as applied to existing mathematical models of the main rotor of the DRA Lynx ZD559 helicopter. The efficacy of SPA is demonstrated by using a quasi-linear rotor model in the frequency domain and a fully non-linear, kinematically exact model in the time domain: the procedure based on the natural (or weighted) norm is again found to be superior to that based on the original SPA method, both in respect of displacement estimates and measurement error tolerance.
Prior-predictive value from fast-growth simulations: Error analysis and bias estimation
NASA Astrophysics Data System (ADS)
Favaro, Alberto; Nickelsen, Daniel; Barykina, Elena; Engel, Andreas
2015-01-01
Variants of fluctuation theorems recently discovered in the statistical mechanics of nonequilibrium processes may be used for the efficient determination of high-dimensional integrals as typically occurring in Bayesian data analysis. In particular for multimodal distributions, Monte Carlo procedures not relying on perfect equilibration are advantageous. We provide a comprehensive statistical error analysis for the determination of the prior-predictive value (the evidence) in a Bayes problem, building on a variant of the Jarzynski equation. Special care is devoted to the characterization of the bias intrinsic to the method and statistical errors arising from exponential averages. We also discuss the determination of averages over multimodal posterior distributions with the help of a consequence of the Crooks relation. All our findings are verified by extensive numerical simulations of two model systems with bimodal likelihoods.
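The bias intrinsic to exponential averages can be demonstrated on a toy Gaussian work distribution, for which the exact Jarzynski free energy (log-evidence) is known in closed form. This illustrates the effect the paper characterizes; it is not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(42)

def jarzynski_free_energy(work_samples):
    """Free-energy estimate from the exponential work average,
    Delta F = -ln <exp(-W)>. For W ~ N(mu, sigma^2) the exact value is
    mu - sigma^2 / 2, so the finite-sample bias can be measured directly."""
    return -np.log(np.mean(np.exp(-work_samples)))

mu, sigma = 2.0, 1.0
exact = mu - sigma**2 / 2

def bias(n_samples, n_repeats=2000):
    """Average estimator error over many repetitions of size n_samples."""
    estimates = [jarzynski_free_energy(rng.normal(mu, sigma, n_samples))
                 for _ in range(n_repeats)]
    return np.mean(estimates) - exact

# The exponential average is dominated by rare low-work samples, so small
# sample sizes systematically overestimate Delta F; the bias shrinks with N.
bias_small, bias_large = bias(10), bias(1000)
```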
Lon N. Haney; David I. Gertman
2003-04-01
Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center, and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, is offered as a means to help direct useful data collection strategies.
Zhu, Fangqiang; Hummer, Gerhard
2012-01-01
The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this paper, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimal allocation of the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354
A missing error term in benefit-cost analysis.
Farrow, R Scott
2012-03-01
Benefit-cost models are frequently used to inform environmental policy and management decisions. However, they typically omit a random (pure) error term, which biases downward any estimated forecast variance. Ex-ante benefit-cost analyses pose a particular problem because there are no historically observed values of the dependent variable, such as net present social value, on which to base a variance estimate, as in the usual statistical approach. To correct this omission, an estimator for the random error variance in this situation is developed based on analysis-of-variance measures and the coefficient of determination, R². A larger variance may affect decision-makers' choices if they are risk averse or consider confidence intervals, exceedance probabilities, or other measures related to the variance. When applied to a model of the net benefits of the Clean Air Act, the probability of large net benefits increases, but the probability that the net present value is negative also increases, from 0.2% to 4.5%. A framework is also provided to assist in determining when a variance estimate would be better, in a utility sense, than the current default of a zero error variance. PMID:22145927
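One simple way to back out such a variance, assuming R² is read as the share of total variance explained by the model (so that Var_error = Var_model · (1 − R²) / R²), can be sketched as follows. This is an illustrative reading with invented numbers, not necessarily the paper's exact estimator:

```python
# Hedged sketch: implied pure-error variance from R^2, assuming
# R^2 = Var_model / (Var_model + Var_error).
def error_variance(model_variance, r_squared):
    """Pure-error variance implied by explained variance and R^2."""
    if not 0.0 < r_squared <= 1.0:
        raise ValueError("R^2 must be in (0, 1]")
    return model_variance * (1.0 - r_squared) / r_squared

var_npv = 4.0e18        # variance of simulated net present value (illustrative)
r2 = 0.8                # assumed coefficient of determination
extra = error_variance(var_npv, r2)   # the omitted random-error variance
total = var_npv + extra               # corrected, larger forecast variance
```

Wider confidence intervals and exceedance probabilities would then be computed from `total` rather than from `var_npv` alone.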
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed, including: (1) soft decision Viterbi decoding; (2) node synchronization for the soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) a convolutional encoder; (5) programs to investigate new convolutional codes; (6) a pseudo-noise sequence generator; (7) a soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) a percent complete indicator when a program is executed; (11) header documentation; and (12) a help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links, including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders, and many other satellite system processes. In addition to this development work, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. The TDRSS downlink is subject to RFI with several duty cycles. We conclude that the PCI does not improve performance for any of these interferers except possibly one, which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
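A channel with memory of the kind item (9) models can be sketched with a two-state (Gilbert-style) Markov chain: a "good" state with a low bit error rate and a "bad" bursty state with a high one. All parameters below are invented for illustration and do not come from the CLEAN simulator:

```python
import random

# Minimal Gilbert-style two-state Markov channel: bursty bit errors
# (transition and error probabilities are illustrative only).
def markov_channel(bits, p_gb=0.01, p_bg=0.2, e_good=1e-4, e_bad=0.2, seed=1):
    rng = random.Random(seed)
    state = "good"
    out = []
    for b in bits:
        err = rng.random() < (e_good if state == "good" else e_bad)
        out.append(b ^ int(err))                  # flip the bit on error
        flip = rng.random() < (p_gb if state == "good" else p_bg)
        if flip:
            state = "bad" if state == "good" else "good"
    return out

rx = markov_channel([0] * 10000)
ber = sum(rx) / len(rx)   # bursty average error rate between e_good and e_bad
```

Because errors cluster while the chain sits in the bad state, the resulting error process has memory, unlike an i.i.d. binary symmetric channel with the same average rate.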
NASA Astrophysics Data System (ADS)
Hasegawa, Kei; Geller, Robert J.; Hirabayashi, Nobuyasu
2016-02-01
We present a theoretical analysis of the error of synthetic seismograms computed by higher-order finite element methods (ho-FEMs). We show the existence of a previously unrecognized type of error due to degenerate coupling between waves with the same frequency but different wavenumbers. These results are confirmed by simple numerical experiments using the spectral element method (SEM) as an example of ho-FEMs. Errors of the type found by this study may occur generally in applications of ho-FEMs.
Rasch Analysis of the Student Refractive Error and Eyeglass Questionnaire
Crescioni, Mabel; Messer, Dawn H.; Warholak, Terri L.; Miller, Joseph M.; Twelker, J. Daniel; Harvey, Erin M.
2014-01-01
Purpose. To evaluate and refine a newly developed instrument, the Student Refractive Error and Eyeglasses Questionnaire (SREEQ), designed to measure the impact of uncorrected and corrected refractive error on vision-related quality of life (VRQoL) in school-aged children. Methods. A 38-statement instrument consisting of two parts was developed: Part A relates to perceptions regarding uncorrected vision, and Part B relates to perceptions regarding corrected vision and includes other statements regarding VRQoL with spectacle correction. The SREEQ was administered to 200 Native American 6th through 12th grade students known to have previously worn eyeglasses and who currently require them. Rasch analysis was conducted to evaluate the functioning of the SREEQ. Statements on Part A and Part B were analyzed to examine the dimensionality and constructs of the questionnaire, how well the items functioned, and the appropriateness of the response scale used. Results. Rasch analysis suggested that two items be eliminated and that the measurement scale for matching items be reduced from a 4-point to a 3-point response scale. With these modifications, categorical data were converted to interval-level data to conduct an item and person analysis. A shortened version of the SREEQ, the SREEQ-R, was constructed with these modifications; it included the statements that were able to capture changes in VRQoL associated with spectacle wear for those with significant refractive error in our study population. Conclusions. While SREEQ Part B appears to have less than optimal reliability for assessing the impact of spectacle correction on VRQoL in our student population, it is also able to detect statistically significant differences from pretest to posttest at both the group and individual levels, showing that the instrument can assess the impact that glasses have on VRQoL. Further modifications to the questionnaire, such as those included in the SREEQ-R, could enhance its functionality.
Numerical analysis method for linear induction machines.
NASA Technical Reports Server (NTRS)
Elliott, D. G.
1972-01-01
A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
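The final algebraic step described above, combining the induced-voltage coupling coefficients with the mesh resistances into simultaneous equations solved for the unknown currents, amounts to a dense linear solve. A minimal sketch with invented 3-mesh values (a real analysis would compute the coupling matrix from the machine geometry and phase excitation):

```python
# Solve (R + M) i = v for the unknown mesh currents, where R holds the mesh
# resistances and M the mutual-coupling coefficients (values illustrative).
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

R = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]]  # mesh resistances
M = [[0.0, 0.5, 0.1], [0.5, 0.0, 0.5], [0.1, 0.5, 0.0]]  # coupling coefficients
v = [1.0, 0.0, 0.0]                                       # applied voltages
A = [[R[i][j] + M[i][j] for j in range(3)] for i in range(3)]
currents = solve(A, v)
```

For the fine meshes of a real machine the system is large, but the structure is the same: one equation per mesh point, coupled through the induced-voltage coefficients.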
Analysis of Random Segment Errors on Coronagraph Performance
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; N'Diaye, Mamadou; Stahl, Mark T.; Stahl, H. Philip
2016-01-01
At the 2015 SPIE O&P conference we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order Sinc²(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt; apertures with few segments (i.e., 1 ring) or very many segments (> 16 rings) have less contrast leakage as a function of piston or tip/tilt than apertures with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.
2014-01-01
In adsorption studies, describing the sorption process and identifying the best-fitting isotherm model are key steps in testing theoretical hypotheses. Numerous statistical analyses are therefore used to compare experimental equilibrium adsorption values with the predicted equilibrium values. In the present study, several error analyses were carried out to evaluate isotherm model fitness, including the Pearson correlation, the coefficient of determination, and the Chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for linearized and non-linearized models. The adsorption of phenol onto natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2 °C. To obtain a holistic view of the isotherm parameter estimates, linear and non-linear isotherm models were compared. The results revealed which of the above-mentioned error and statistical functions best determined the best-fitting isotherm. PMID:25018878
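Two of the error functions named above can be written compactly; the data values below are illustrative, not from the study:

```python
# Minimal forms of two goodness-of-fit measures used in isotherm comparison.
def chi_square(obs, pred):
    """Chi-square statistic between observed and model-predicted uptake."""
    return sum((o - p) ** 2 / p for o, p in zip(obs, pred))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

q_exp = [1.2, 2.1, 2.9, 3.4, 3.8]    # measured equilibrium uptake (made up)
q_fit = [1.1, 2.2, 2.8, 3.5, 3.7]    # uptake predicted by an isotherm model
chi2 = chi_square(q_exp, q_fit)       # smaller is better
r2 = r_squared(q_exp, q_fit)          # closer to 1 is better
```

Computing both measures for each candidate isotherm, in linearized and non-linearized form, is the comparison the abstract describes.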
Analysis of Solar Two Heliostat Tracking Error Sources
Jones, S.A.; Stone, K.W.
1999-01-28
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1998-01-01
We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including the model assessment application and the objective analysis application. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.
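As a hedged 1-D illustration of the distortion idea (not the report's actual algorithm), the sketch below models forecast error as a displacement plus an amplification of the forecast anomaly and recovers both by grid search; the fields and parameter grids are invented:

```python
import math

def field(x):                                     # "truth": a smooth anomaly
    return math.exp(-((x - 5.0) ** 2) / 2.0)

def forecast(x):                                  # displaced, damped forecast
    return 0.8 * field(x + 0.7)

xs = [0.1 * k for k in range(100)]
obs = [field(x) for x in xs]

def misfit(d, a):
    # Compare the distorted forecast a * forecast(x - d) with the observations
    return sum((a * forecast(x - d) - o) ** 2 for x, o in zip(xs, obs))

# Grid search over displacement d and amplification a
best = min(((d * 0.05, 1.0 + a * 0.05) for d in range(-20, 21)
            for a in range(-10, 11)), key=lambda p: misfit(*p))
# Shifting by +0.7 and amplifying by 1/0.8 = 1.25 removes the forecast error
```

The recovered (d, a) pair is exactly the decomposition the report uses for skill assessment: how much of the error is a misplaced feature versus an amplitude bias.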
Laser measurement and analysis of reposition error in polishing systems
NASA Astrophysics Data System (ADS)
Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying
2015-10-01
In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented. The geometric error of a robot-based polishing system is analyzed, and a mathematical model of the tilt error is derived. The study shows that errors of less than 1 mm are mainly caused by the tilt error at small incident angles. Marking the spot position with an interference fringe greatly enhances the error measurement precision; the measurement precision of the tilt error can reach 5 µm. Measurement results show that the reposition error of the polishing system arises mainly from the tilt error caused by motor A, and the repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in practical error measurement, offering low cost and simple operation.
Improved iterative error analysis for endmember extraction from hyperspectral imagery
NASA Astrophysics Data System (ADS)
Sun, Lixin; Zhang, Ying; Guindon, Bert
2008-08-01
Automated image endmember extraction from hyperspectral imagery is a challenge and a critical step in spectral mixture analysis (SMA). Over the past years, great efforts have been made and a large number of algorithms have been proposed to address this issue. Iterative error analysis (IEA) is one of the well-known existing endmember extraction methods. IEA identifies pixel spectra as image endmembers through an iterative process. In each iteration, a fully constrained (abundance nonnegativity and abundance sum-to-one constraints) spectral unmixing based on previously identified endmembers is performed to model all image pixels. The pixel spectrum with the largest residual error is then selected as a new image endmember. This paper proposes an updated version of IEA that improves the method in three respects. First, fully constrained spectral unmixing is replaced by a weakly constrained (abundance nonnegativity and abundance sum-less-or-equal-to-one constraints) alternative. This is necessary because, up to an intermediate iteration, only a subset of the endmembers present in a hyperspectral image has been extracted, so the abundance sum-to-one constraint is not yet valid. Second, the search strategy for achieving an optimal set of image endmembers is changed from sequential forward selection (SFS) to sequential forward floating selection (SFFS) to reduce the so-called "nesting effect" in the resulting set of endmembers. Third, a pixel spectrum is identified as a new image endmember based on both its spectral extremity in the feature hyperspace of a dataset and its capacity to characterize other mixed pixels. This is achieved by evaluating a set of extracted endmembers using a criterion function, which consists of the mean and standard deviation of the residual error image. A preliminary comparison between the image endmembers extracted using the improved and the original IEA is conducted based on an airborne visible infrared imaging
Numerical Analysis of Convection/Transpiration Cooling
NASA Technical Reports Server (NTRS)
Glass, David E.; Dilley, Arthur D.; Kelly, H. Neale
1999-01-01
An innovative concept utilizing the natural porosity of refractory-composite materials and hydrogen coolant to provide CONvective and TRANspiration (CONTRAN) cooling and oxidation protection has been numerically studied for surfaces exposed to a high heat flux high temperature environment such as hypersonic vehicle engine combustor walls. A boundary layer code and a porous media finite difference code were utilized to analyze the effect of convection and transpiration cooling on surface heat flux and temperature. The boundary layer code determined that transpiration flow is able to provide blocking of the surface heat flux only if it is above a minimum level due to heat addition from combustion of the hydrogen transpirant. The porous media analysis indicated that cooling of the surface is attained with coolant flow rates that are in the same range as those required for blocking, indicating that a coupled analysis would be beneficial.
Three Dimensional Numerical Analysis on Discharge Properties
NASA Astrophysics Data System (ADS)
Takaishi, Kenji; Katsurai, Makoto
2003-10-01
A three-dimensional simulation code using the finite difference time domain (FDTD) method combined with a two-fluid model for electrons and ions has been developed for the microwave-excited surface wave plasma in the RDL-SWP device. The code permits numerical analysis of the spatial distributions of electric field, power absorption, electron density, and electron temperature. At a low gas pressure of about 10 mTorr, the numerical results were compared with experimental measurements, which demonstrates the validity of the 3-D simulation code. A simplified analysis assuming a spatially uniform electron density has also been studied, and its applicability is evaluated against the 3-D simulation. The surface wave eigenmodes are determined by the electron density, and it is found that the structure of the device strongly influences the spatial distribution of the surface-wave electric fields in low-density regions. A method of irradiating the microwave over the whole surface area of the plasma is proposed and found to be effective for obtaining a highly uniform electron density distribution.
Starlight emergence angle error analysis of star simulator
NASA Astrophysics Data System (ADS)
Zhang, Jian; Zhang, Guo-yu
2015-10-01
With the continuous development of key star sensor technologies, the precision of star simulators must be further improved, since it directly affects the accuracy of star sensor laboratory calibration. To improve the accuracy of a star simulator, a theoretical accuracy analysis model is needed, and such a model can be established from the ideal imaging model of the star simulator. Analysis of this model shows that the starlight emergence angle deviation is primarily affected by star position deviation, principal point position deviation, focal length deviation, distortion, and object plane tilt. From these factors a comprehensive deviation model is established, and formulas for each individual deviation model and for the comprehensive model are derived. By analyzing the properties of these models, the characteristics of each factor and the weight relationships among them are obtained. Based on the comprehensive deviation model, reasonable design indexes can be given that account for the star simulator's optical system requirements and the achievable precision of machining and adjustment. Starlight emergence angle error analysis is thus significant for determining and demonstrating star simulator design indexes, for analyzing and compensating star simulator errors to improve accuracy, and for establishing a theoretical basis for further improving the starlight angle precision of star simulators.
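A hedged sketch of the ideal imaging relation behind such an analysis: a star point at radial position r on the target plane of a collimator with focal length f emerges at angle θ = atan(r/f). Differentiating gives the first-order sensitivity to star-position and focal-length deviations; the other terms of the paper's comprehensive model (distortion, principal point, tilt) are omitted, and all numbers are illustrative:

```python
import math

def emergence_angle(r, f):
    """Ideal starlight emergence angle for star offset r, focal length f."""
    return math.atan2(r, f)

def angle_deviation(r, f, dr, df):
    """First-order d(theta) from position error dr and focal-length error df."""
    # d/dr atan(r/f) = f/(f^2+r^2);  d/df atan(r/f) = -r/(f^2+r^2)
    return (f * dr - r * df) / (f * f + r * r)

f = 550.0               # focal length, mm (illustrative)
r = 20.0                # star position on the reticle, mm
dtheta = angle_deviation(r, f, dr=0.005, df=0.05)   # radians
arcsec = math.degrees(dtheta) * 3600.0              # deviation in arcseconds
```

Summing such terms (with distortion and tilt contributions added) in quadrature or linearly is one common way to build the kind of comprehensive deviation budget the abstract describes.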
SIRTF Focal Plane Survey: A Pre-flight Error Analysis
NASA Technical Reports Server (NTRS)
Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.
2003-01-01
This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames meet their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.
Analysis of Spherical Form Errors to Coordinate Measuring Machine Data
NASA Astrophysics Data System (ADS)
Chen, Mu-Chen
Coordinate measuring machines (CMMs) are commonly utilized to take measurement data from manufactured surfaces for inspection purposes. The measurement data are then used to evaluate the geometric form errors associated with the surface. Traditionally, the evaluation of spherical form errors involves an optimization process of fitting a substitute sphere to the sampled points. This paper proposes computational strategies for sphericity with respect to the ASME Y14.5M-1994 standard. The proposed methods consider the trade-off between the accuracy of sphericity evaluation and the efficiency of inspection. Two computational metrology approaches based on genetic algorithms (GAs) are proposed to explore the optimality of sphericity measurements and sphericity feasibility analysis, respectively. The proposed algorithms are verified using several CMM data sets. The computational results indicate that the proposed algorithms are practical for on-line implementation of sphericity evaluation. Using the GA-based computational techniques, the accuracy of sphericity assessment and the efficiency of sphericity feasibility analysis are satisfactory.
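The underlying optimization can be illustrated with a minimum-zone sphericity sketch: find the center that minimizes (max radius − min radius) over the sampled points. A simple random-perturbation search stands in here for the paper's genetic algorithm, and the "CMM data" are synthetic points on a unit sphere with a small invented form error:

```python
import math, random

def sphericity(points, iters=2000, seed=0):
    """Minimum-zone sphericity by naive random-perturbation search."""
    cx = sum(p[0] for p in points) / len(points)   # start at the centroid
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    def zone(c):
        rs = [math.dist(c, p) for p in points]
        return max(rs) - min(rs)
    rng = random.Random(seed)
    best, val, step = (cx, cy, cz), zone((cx, cy, cz)), 0.1
    for _ in range(iters):
        cand = tuple(b + rng.uniform(-step, step) for b in best)
        v = zone(cand)
        if v < val:
            best, val = cand, v
        else:
            step *= 0.999                           # slowly shrink the search
    return val, best

# Synthetic points: unit sphere plus radial form error of up to +/- 0.01
rng = random.Random(42)
pts = []
for _ in range(100):
    t, u = rng.uniform(0, 2 * math.pi), rng.uniform(-1, 1)
    s = math.sqrt(1 - u * u)
    r = 1.0 + rng.uniform(-0.01, 0.01)
    pts.append((r * s * math.cos(t), r * s * math.sin(t), r * u))
form_error, center = sphericity(pts)
```

A GA replaces the single candidate with a population and adds crossover and mutation, which is what gives the paper's approach its robustness on real CMM data.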
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
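A minimal sketch of the segment-splitting idea: bound a polynomial on each non-overlapping subinterval with naive interval arithmetic, then flag segments whose upper bound violates a required error bound. The error polynomial and the bound below are invented; the patent's simplified formulas and bounding conditions are more elaborate:

```python
# Bound a polynomial over split segments and report bounding-condition
# violations (toy polynomial and bound; illustrative only).
def poly_interval(coeffs, lo, hi):
    """Enclosing interval of sum_k c_k x^k on [lo, hi] via interval Horner."""
    acc_lo = acc_hi = 0.0
    for c in reversed(coeffs):
        prods = [acc_lo * lo, acc_lo * hi, acc_hi * lo, acc_hi * hi]
        acc_lo, acc_hi = c + min(prods), c + max(prods)
    return acc_lo, acc_hi

def check_segments(coeffs, lo, hi, n_seg, bound):
    """Split [lo, hi] into n_seg segments; return those exceeding the bound."""
    width = (hi - lo) / n_seg
    bad = []
    for k in range(n_seg):
        a, b = lo + k * width, lo + (k + 1) * width
        _, upper = poly_interval(coeffs, a, b)
        if upper > bound:                         # bounding condition violated
            bad.append((a, b, upper))
    return bad

# Error polynomial e(x) = 1e-7 + 1e-6 * x^2 on [0, 1], required bound 5e-7
violations = check_segments([1e-7, 0.0, 1e-6], 0.0, 1.0, 10, 5e-7)
```

Splitting into more segments tightens the interval bounds, which is why the verification tool recurses on the domain rather than bounding it in one piece.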
Hill, M.C.
1989-01-01
Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported.
Error analysis for earth orientation recovery from GPS data
NASA Technical Reports Server (NTRS)
Zelensky, N.; Ray, J.; Liebrecht, P.
1990-01-01
The use of GPS navigation satellites to study earth-orientation parameters in real-time is examined analytically with simulations of network geometries. The Orbit Analysis covariance-analysis program is employed to simulate the block-II constellation of 18 GPS satellites, and attention is given to the budget for tracking errors. Simultaneous solutions are derived for earth orientation given specific satellite orbits, ground clocks, and station positions with tropospheric scaling at each station. Media effects and measurement noise are found to be the main causes of uncertainty in earth-orientation determination. A program similar to the Polaris network using single-difference carrier-phase observations can provide earth-orientation parameters with accuracies similar to those for the VLBI program. The GPS concept offers faster data turnaround and lower costs in addition to more accurate determinations of UT1 and pole position.
Soft X Ray Telescope (SXT) focus error analysis
NASA Technical Reports Server (NTRS)
Ahmad, Anees
1991-01-01
The analysis performed on the Soft X-ray Telescope (SXT) to determine the correct thickness of the spacer positioning the CCD camera at the best focus of the telescope, and to determine the maximum uncertainty in this focus position due to a number of metrology and experimental errors as well as thermal and humidity effects, is presented. This type of analysis has been performed by the SXT prime contractor, Lockheed Palo Alto Research Lab (LPARL). The SXT project office at MSFC formed an independent team of experts to review the LPARL work and verify the analysis performed by them. Based on the recommendation of this team, the project office will decide whether an end-to-end focus test is required for the SXT prior to launch. The metrology and experimental data, and the spreadsheets provided by LPARL, are used as the basis of the analysis presented. The data entries in these spreadsheets have been verified as far as feasible, and the format of the spreadsheets has been improved to make them easier to understand. The results obtained from this analysis are very close to the results obtained by LPARL. However, due to a lack of organized documentation, the analysis uncovered a few areas of possibly erroneous metrology data, which may affect the results obtained by this analytical approach.
Numerical Analysis of a Finite Element/Volume Penalty Method
NASA Astrophysics Data System (ADS)
Maury, Bertrand
The penalty method makes it possible to incorporate a large class of constraints in general purpose Finite Element solvers like freeFEM++. We present here some contributions to the numerical analysis of this method. We propose an abstract framework for this approach, together with some general error estimates based on the penalty parameter ɛ and the space discretization parameter h. As this work is motivated by the possibility to handle constraints like rigid motion for fluid-particle flows, we pay special attention to a model problem of this kind, where the constraint is prescribed over a subdomain. We show how the abstract estimate can be applied to this situation in the case where a non-body-fitted mesh is used. In addition, we describe how this method provides an approximation of the Lagrange multiplier associated with the constraint.
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects at higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
Design and analysis of vector color error diffusion halftoning systems.
Damera-Venkata, N; Evans, B L
2001-01-01
Traditional error diffusion halftoning is a high quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we propose three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, especially image sharpening and noise shaping. The proposed model includes the linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and to diffuse quantization error across color channels in an opponent color representation. Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations, which we solve using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank. PMID:18255498
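The scalar baseline that the paper generalizes to matrix-valued filters is classic Floyd-Steinberg error diffusion (standard weights 7/16, 3/16, 5/16, 1/16). A minimal grayscale sketch on a tiny synthetic patch:

```python
# Classic scalar Floyd-Steinberg error diffusion on a grayscale image
# with pixel values in [0, 1].
def floyd_steinberg(img):
    h, w = len(img), len(img[0])
    x = [row[:] for row in img]                  # working copy
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = 1 if x[i][j] >= 0.5 else 0
            err = x[i][j] - out[i][j]            # quantization error
            # Diffuse the error to unprocessed neighbors
            if j + 1 < w:               x[i][j + 1]     += err * 7 / 16
            if i + 1 < h and j > 0:     x[i + 1][j - 1] += err * 3 / 16
            if i + 1 < h:               x[i + 1][j]     += err * 5 / 16
            if i + 1 < h and j + 1 < w: x[i + 1][j + 1] += err * 1 / 16
    return out

gray = [[0.25] * 8 for _ in range(8)]            # flat 25% gray patch
halftone = floyd_steinberg(gray)
ink = sum(map(sum, halftone)) / 64               # average tone is preserved
```

In the vector color case, the scalar weights become matrices and `err` becomes a color-error vector diffused across channels, which is exactly the extension the paper analyzes and optimizes.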
Analysis of infusion pump error logs and their significance for health care.
Lee, Paul T; Thompson, Frankle; Thimbleby, Harold
Infusion therapy is one of the largest practised therapies in any healthcare organisation, and infusion pumps are used to deliver millions of infusions every year in the NHS. The aircraft industry downloads information from 'black boxes' to help design better systems and reduce risk; however, the same cannot be said about error logs and data logs from infusion pumps. This study downloaded and analysed approximately 360 000 hours of infusion pump error logs from 131 infusion pumps used for up to 2 years in one large acute hospital. Staff had to manage 260 129 alarms; this accounted for approximately 5% of total infusion time, costing about £1000 per pump per year. This paper describes many such insights, including numerous technical errors, a propensity for certain alarms in particular clinical conditions, logistical issues, and how infrastructure problems can lead to an increase in alarm conditions. Routine use of error log analysis, combined with appropriate management of pumps, is recommended to help identify improved device design, use and application. PMID:22629592
Incremental communication for multilayer neural networks: error analysis.
Ghorbani, A A; Bhavsar, V C
1998-01-01
Artificial neural networks (ANNs) involve a large amount of internode communications. To reduce the communication cost as well as the time of learning process in ANNs, we earlier proposed (1995) an incremental internode communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent to a communication link. In this paper, the effects of the limited precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not cause instability. The analysis is supported by simulation studies of two problems. The simulation results demonstrate that the limited precision errors are bounded and do not seriously affect the convergence of multilayer neural networks. PMID:18252431
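The core idea above can be sketched in a few lines: instead of communicating a node's full output each step, only the quantized increment over the previously communicated value is sent, so the receiver's view drifts from the true output by at most half a quantization step. The precision parameter below is invented for illustration:

```python
# Minimal sketch of incremental internode communication with limited precision.
def quantize(value, step=1.0 / 256):             # limited-precision increment
    return round(value / step) * step

class IncrementalLink:
    def __init__(self):
        self.sent = 0.0                           # receiver's view of the value
    def send(self, value):
        inc = quantize(value - self.sent)         # transmit only this increment
        self.sent += inc
        return inc

link = IncrementalLink()
outputs = [0.10, 0.12, 0.13, 0.50, 0.51]          # node activations over time
received = []
for v in outputs:
    link.send(v)
    received.append(link.sent)
# The reconstruction error stays bounded by half a quantization step
errors = [abs(r - v) for r, v in zip(received, outputs)]
```

This bounded, non-accumulating error is what the paper's perturbation analysis formalizes when it shows that limited-precision increments do not destabilize backpropagation training.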
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Numerical analysis of flows in reciprocating engines
NASA Astrophysics Data System (ADS)
Takata, H.; Kojima, M.
1986-07-01
A numerical method for the analysis of three-dimensional turbulent flow in the cylinders of reciprocating engines with arbitrary geometry is described. A scheme of the finite volume/finite element methods is used, employing a large number of small elements of arbitrary shapes to form a cylinder. The fluid dynamic equations are expressed in integral form for each element, taking into account the deformation of the element shape according to the piston movements, and are solved in the physical space using rectangular coordinates. The conventional k-epsilon two-equation model is employed to describe the flow turbulence. Example calculations are presented for simple pancake-type combustion chambers having an annular intake port at either the center or an asymmetric position of the cylinder head. The suction inflow direction is also varied in several ways. The results show a good simulation of overall fluid movements within the engine cylinder.
Error Analysis in Composition of Iranian Lower Intermediate Students
ERIC Educational Resources Information Center
Taghavi, Mehdi
2012-01-01
Learners make errors during the process of learning languages. This study examines errors in the writing tasks of twenty Iranian lower intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…
Analysis of personnel error occurrence reports across Defense Program facilities
Stock, D.A.; Shurberg, D.A.; O'Brien, J.N.
1994-05-01
More than 2,000 reports from the Occurrence Reporting and Processing System (ORPS) database were examined in order to identify weaknesses in the implementation of the guidance for the Conduct of Operations (DOE Order 5480.19) at Defense Program (DP) facilities. The analysis revealed recurrent problems involving procedures, training of employees, the occurrence of accidents, planning and scheduling of daily operations, and communications. Changes to DOE 5480.19 and modifications of the Occurrence Reporting and Processing System are recommended to reduce the frequency of these problems. The primary tool used in this analysis was a coding scheme based on the guidelines in 5480.19, which was used to classify the textual content of occurrence reports. The occurrence reports selected for analysis came from across all DP facilities, and listed personnel error as a cause of the event. A number of additional reports, specifically from the Plutonium Processing and Handling Facility (TA55), and the Chemistry and Metallurgy Research Facility (CMR), at Los Alamos National Laboratory, were analyzed separately as a case study. In total, 2070 occurrence reports were examined for this analysis. A number of core issues were consistently found in all analyses conducted, and all subsets of data examined. When individual DP sites were analyzed, including some sites which have since been transferred, only minor variations were found in the importance of these core issues. The same issues also appeared in different time periods, in different types of reports, and at the two Los Alamos facilities selected for the case study.
ERIC Educational Resources Information Center
Moqimipour, Kourosh; Shahrokhi, Mohsen
2015-01-01
The present study aimed at analyzing writing errors caused by the interference of the Persian language, regarded as the first language (L1), in three writing genres, namely narration, description, and comparison/contrast by Iranian EFL students. 65 English paragraphs written by the participants, who were at the intermediate level based on their…
Reduction of S-parameter errors using singular spectrum analysis.
Ozturk, Turgut; Uluer, İhsan; Ünal, İlhami
2016-07-01
A free space measurement method, which consists of two horn antennas, a network analyzer, two frequency extenders, and a sample holder, is used to measure transmission (S21) coefficients in the 75-110 GHz (W-band) frequency range. The singular spectrum analysis method is presented to eliminate the error and noise of raw S21 data after the calibration and measurement processes. The proposed model can be applied easily, removing the need to repeat the calibration process for each sample measurement. Hence, smooth, reliable, and accurate data are obtained to determine the dielectric properties of materials. In addition, the dielectric constant of materials (paper, polyvinylchloride-PVC, Ultralam® 3850HT, and glass) is calculated by thin sheet approximation and Newton-Raphson extracting techniques using a filtered S21 transmission parameter. PMID:27475579
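The embedding/decomposition/reconstruction cycle of SSA can be sketched generically (not the authors' implementation; the window length and rank below are arbitrary choices for a synthetic signal): the raw series is embedded in a Hankel trajectory matrix, truncated to its leading singular components, and diagonal-averaged back to a smoothed series.

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Singular spectrum analysis: embed, decompose, reconstruct.

    Keeps the `rank` leading singular components of the trajectory
    (Hankel) matrix and diagonal-averages back to a 1-D series.
    """
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]                # low-rank trajectory
    out = np.zeros(n)        # anti-diagonal averaging back to a series
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
clean = np.cos(2 * np.pi * 5 * t)                 # stand-in for a smooth S21 trace
noisy = clean + 0.3 * rng.standard_normal(t.size)
smooth = ssa_denoise(noisy, window=60, rank=2)
```

A rank of 2 suffices here because a single sinusoid occupies two singular components; measurement data would need the rank chosen from the singular value spectrum.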
Error analysis and data reduction for interferometric surface measurements
NASA Astrophysics Data System (ADS)
Zhou, Ping
High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
Error analysis of exponential integrators for oscillatory second-order differential equations
NASA Astrophysics Data System (ADS)
Grimm, Volker; Hochbruck, Marlis
2006-05-01
In this paper, we analyse a family of exponential integrators for second-order differential equations in which high-frequency oscillations in the solution are generated by a linear part. Conditions are given which guarantee that the integrators allow second-order error bounds independent of the product of the step size with the frequencies. Our convergence analysis generalizes known results on the mollified impulse method by García-Archilla, Sanz-Serna and Skeel (1998, SIAM J. Sci. Comput. 30 930-63) and on Gautschi-type exponential integrators (Hairer E, Lubich Ch and Wanner G 2002 Geometric Numerical Integration (Berlin: Springer), Hochbruck M and Lubich Ch 1999 Numer. Math. 83 403-26).
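For a scalar model problem x'' = -ω²x + g(x), a Gautschi-type integrator of this family can be sketched as follows (a minimal illustration using the classical filter ψ(ξ) = sinc²(ξ/2); the integrators analysed in the paper differ in their filter functions):

```python
import math

def gautschi_step(x_prev, x_curr, h, omega, g_curr):
    """One step of a Gautschi-type two-step method for x'' = -omega^2 x + g:

        x_{n+1} = 2 cos(h*omega) x_n - x_{n-1} + h^2 sinc^2(h*omega/2) g_n,

    which is exact for g = 0 regardless of how large h*omega is.
    """
    half = 0.5 * h * omega
    sinc2 = 1.0 if half == 0.0 else (math.sin(half) / half) ** 2
    return 2.0 * math.cos(h * omega) * x_curr - x_prev + h * h * sinc2 * g_curr

# Pure oscillation (g = 0): the recurrence reproduces cos(omega * t) exactly,
# even though h*omega = 5 is far beyond a classical step-size restriction.
omega, h, steps = 50.0, 0.1, 200
xs = [1.0, math.cos(omega * h)]        # x(0), x(h) for the solution cos(omega t)
for n in range(1, steps):
    xs.append(gautschi_step(xs[n - 1], xs[n], h, omega, 0.0))
err = abs(xs[-1] - math.cos(omega * steps * h))
```

The step-size independence shown here for the linear part is exactly the property the paper's second-order error bounds extend to the nonlinear case.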
Fixed-point error analysis of Winograd Fourier transform algorithms
NASA Technical Reports Server (NTRS)
Patterson, R. W.; Mcclellan, J. H.
1978-01-01
The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.
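The data-quantization component of such a study can be reproduced in miniature (a toy experiment quantizing only the input data; a true fixed-point FFT or WFTA would also quantize the coefficients and every intermediate sum):

```python
import numpy as np

def dft_with_quantized_input(x, bits):
    """DFT of an input rounded to `bits` fractional bits (fixed-point input).

    Only input-data quantization is modelled; coefficient quantization and
    round-off in the butterflies are ignored in this sketch.
    """
    scale = 2.0 ** bits
    xq = np.round(x * scale) / scale
    return np.fft.fft(xq)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 256)
exact = np.fft.fft(x)                 # double precision as the reference
err8 = np.max(np.abs(dft_with_quantized_input(x, 8) - exact))
err12 = np.max(np.abs(dft_with_quantized_input(x, 12) - exact))
```

Each extra bit roughly halves the input quantization step, so the output error shrinks accordingly, which is the sense in which the WFTA "requires one or two more bits" to match the FFT.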
Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2012-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
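The extrapolation from a base grid toward an infinite-size grid is, in essence, Richardson extrapolation. A minimal sketch on a model problem (trapezoidal quadrature of known order p = 2 standing in for the flow solver) shows the mechanics:

```python
import math

def richardson(f_h, f_2h, p):
    """Extrapolate toward the infinite-resolution value from two grid levels,
    assuming the leading error term scales as h^p."""
    return f_h + (f_h - f_2h) / (2 ** p - 1)

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (second order in h)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# Integrate sin on [0, pi]; the exact value is 2.
coarse = trapezoid(math.sin, 0.0, math.pi, 32)    # step 2h ("base grid")
fine = trapezoid(math.sin, 0.0, math.pi, 64)      # step h  ("refined grid")
extrap = richardson(fine, coarse, p=2)            # "infinite-size" estimate
err_fine = abs(fine - 2.0)
err_extrap = abs(extrap - 2.0)
```

The gap between the fine-grid value and the extrapolated value also serves as a discretization-error estimate for the fine grid, which is how the base-grid deviations quoted above can be attributed to grid resolution.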
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check. PMID:21102793
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-01-01
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; and 3) without applying any strategy to eliminate learning errors, there is a threshold of learning error beyond which convergence is impaired. These new findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective. PMID:26178457
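A minimal version of such a model can be sketched as follows (a complete-graph toy, not the NGLE model itself; the corruption rule below is an arbitrary stand-in for a learning error):

```python
import random

def naming_game(n_agents=20, p_err=0.0, max_rounds=20000, seed=1):
    """Minimal naming game on a complete graph, with hearer learning errors.

    With probability p_err the hearer stores a corrupted copy of the spoken
    word (modelled here as a distinct negative token) instead of the word.
    Returns (converged, rounds_used).
    """
    rng = random.Random(seed)
    vocab = [[] for _ in range(n_agents)]   # each agent's word inventory
    next_word = 0                           # counter for inventing fresh words
    for rounds in range(1, max_rounds + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not vocab[speaker]:              # empty inventory: invent a word
            vocab[speaker].append(next_word)
            next_word += 1
        word = rng.choice(vocab[speaker])
        heard = -word - 1 if rng.random() < p_err else word   # learning error
        if heard in vocab[hearer]:          # success: both collapse to the word
            vocab[speaker] = [heard]
            vocab[hearer] = [heard]
        else:                               # failure: hearer learns the word
            vocab[hearer].append(heard)
        if len({tuple(v) for v in vocab}) == 1 and len(vocab[0]) == 1:
            return True, rounds             # global consensus on a single word
    return False, max_rounds

converged, rounds = naming_game(p_err=0.0)
```

Raising `p_err` in this sketch lengthens inventories before consensus, mirroring the paper's finding that learning errors mainly increase the memory demand during lexicon propagation.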
Two numerical models for landslide dynamic analysis
NASA Astrophysics Data System (ADS)
Hungr, Oldrich; McDougall, Scott
2009-05-01
Two microcomputer-based numerical models (Dynamic ANalysis (DAN) and the three-dimensional model DAN3D) have been developed and extensively used for analysis of landslide runout, specifically for the purposes of practical landslide hazard and risk assessment. The theoretical basis of both models is a system of depth-averaged governing equations derived from the principles of continuum mechanics. Original features developed specifically during this work include: an open rheological kernel; explicit use of tangential strain to determine the tangential stress state within the flowing sheet, which is both more realistic and beneficial to the stability of the model; orientation of principal tangential stresses parallel with the direction of motion; inclusion of the centripetal forces corresponding to the true curvature of the path in the motion direction; and the use of very simple and highly efficient free surface interpolation methods. Both models yield similar results when applied to the same sets of input data. Both algorithms are designed to work within the semi-empirical framework of the "equivalent fluid" approach. This approach requires selection of a material rheology and calibration of input parameters through back-analysis of real events. Although approximate, it facilitates simple and efficient operation while accounting for the most important characteristics of extremely rapid landslides. The two models have been verified against several controlled laboratory experiments with a known physical basis. A large number of back-analyses of real landslides of various types have also been carried out. One example is presented. Calibration patterns are emerging, which give promise of predictive capability.
Error analysis for encoding a qubit in an oscillator
Glancy, S.; Knill, E.
2006-01-15
In Phys. Rev. A 64, 012310 (2001), Gottesman, Kitaev, and Preskill described a method to encode a qubit in the continuous Hilbert space of an oscillator's position and momentum variables. This encoding provides a natural error-correction scheme that can correct errors due to small shifts of the position or momentum wave functions (i.e., use of the displacement operator). We present bounds on the size of correctable shift errors when both qubit and ancilla states may contain errors. We then use these bounds to constrain the quality of input qubit and ancilla states.
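The structure at stake can be summarized compactly (a sketch of the standard GKP construction; the constant √π/6 is the shift-error bound associated with this analysis and should be checked against the paper before reuse):

```latex
% Ideal GKP codewords: superpositions of position eigenstates on a sqrt(pi) grid
|\bar{0}\rangle \;\propto\; \sum_{s \in \mathbb{Z}} \left| q = 2s\sqrt{\pi} \right\rangle,
\qquad
|\bar{1}\rangle \;\propto\; \sum_{s \in \mathbb{Z}} \left| q = (2s+1)\sqrt{\pi} \right\rangle .

% A shift error e^{-iu\hat{p}}\, e^{iv\hat{q}} displaces (q, p) by (u, v);
% error correction succeeds when the accumulated shifts satisfy
|u| < \frac{\sqrt{\pi}}{6}, \qquad |v| < \frac{\sqrt{\pi}}{6}.
```

Because the grid spacing is √π, small displacements move a codeword only part-way toward its neighbour, which is why bounded shifts on both the qubit and the ancilla remain correctable.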
NASA Technical Reports Server (NTRS)
Mohr, R. L.
1975-01-01
A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.
Nonclassicality thresholds for multiqubit states: Numerical analysis
Gruca, Jacek; Zukowski, Marek; Laskowski, Wieslaw; Kiesel, Nikolai; Wieczorek, Witlef; Weinfurter, Harald; Schmid, Christian
2010-07-15
States that strongly violate Bell's inequalities are required in many quantum-informational protocols as, for example, in cryptography, secret sharing, and the reduction of communication complexity. We investigate families of such states with a numerical method which allows us to reveal nonclassicality even without direct knowledge of Bell's inequalities for the given problem. An extensive set of numerical results is presented and discussed.
English Majors' Errors in Translating Arabic Endophora: Analysis and Remedy
ERIC Educational Resources Information Center
Abdellah, Antar Solhy
2007-01-01
Egyptian English majors in the faculty of Education, South Valley University tend to mistranslate the plural inanimate Arabic pronoun with the singular inanimate English pronoun. A diagnostic test was designed to analyze this error. Results showed that a large number of students (first year and fourth year students) make this error, that the error…
Factor Rotation and Standard Errors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
Analysis of Children's Computational Errors: A Qualitative Approach
ERIC Educational Resources Information Center
Engelhardt, J. M.
1977-01-01
This study was designed to replicate and extend Roberts' (1968) efforts at classifying computational errors. 198 elementary school students were administered an 84-item arithmetic computation test. Eight types of errors were described which led to several tentative generalizations. (Editor/RK)
Chiu, Ming-Chuan; Hsieh, Min-Chih
2016-05-01
The purposes of this study were to develop a latent human error analysis process, to explore the factors behind latent human errors in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings of existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology. PMID:26851473
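The distance-to-ideal ranking at the heart of TOPSIS can be sketched in its crisp form (the paper uses a fuzzy variant layered on the same idea; the scores, weights, and criteria below are entirely hypothetical):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: rank alternatives by closeness to the ideal solution.

    matrix: alternatives x criteria scores; benefit[j] is True when a
    higher score on criterion j is better. Returns closeness coefficients.
    """
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector normalization
    v = norm * np.asarray(weights, dtype=float)       # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))   # distance to ideal
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                    # closeness coefficient

# Hypothetical ratings of three error factors on four criteria (higher =
# better candidate for intervention), with equal weights.
scores = [[0.9, 0.8, 0.7, 0.9],
          [0.4, 0.5, 0.6, 0.4],
          [0.7, 0.7, 0.8, 0.6]]
cc = topsis(scores, [0.25, 0.25, 0.25, 0.25], [True, True, True, True])
best = int(np.argmax(cc))
```

In the fuzzy variant, crisp scores are replaced by fuzzy numbers (often triangular) aggregated across expert raters before the same ideal/anti-ideal distance computation.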
Analysis on the alignment errors of segmented Fresnel lens
NASA Astrophysics Data System (ADS)
Zhou, Xudong; Wu, Shibin; Yang, Wei; Wang, Lihua
2014-09-01
Segmented (stitched) Fresnel lenses are designed for application in micro-focus X-ray optics, but splicing errors between sub-apertures affect the optical performance of the entire mirror. The offset error tolerances for the different degrees of freedom between the sub-apertures are analyzed theoretically according to wave-front aberration theory, with the Rayleigh criterion as the evaluation criterion, and the correctness of the theory is then validated using the ZEMAX simulation software. The results show that the Z-axis piston error tolerance and the XY-axis translation error tolerances increase with increasing F-number of the segmented Fresnel lens, and the XY-axis tilt error tolerances decrease with increasing diameter. The results provide a theoretical basis and guidance for the design, detection and alignment of segmented Fresnel lenses.
Error Analysis of Stereophotoclinometry in Support of the OSIRIS-REx Mission
NASA Astrophysics Data System (ADS)
Palmer, Eric; Gaskell, Robert W.; Weirich, John R.
2015-11-01
Stereophotoclinometry (SPC) has been used on numerous planetary bodies to derive shape models, most recently 67P/Churyumov-Gerasimenko (Jorda et al., 2014), the Earth (Palmer et al., 2014) and Vesta (Gaskell, 2012). SPC is planned to create the ultra-high resolution topography for the upcoming OSIRIS-REx mission, which will sample the asteroid Bennu, arriving in 2018. This shape model will be used both for scientific analysis and for operational navigation, including providing the topography that will ensure a safe collection of the surface. We present the initial results of an error analysis of SPC, with specific focus on how both systematic and non-systematic errors propagate through SPC into the shape model. For this testing, we have created a notional global truth model at 5 cm and a single region at 2.5 mm ground sample distance. These truth models were used to create images using GSFC's software Freespace; these images were then used by SPC to form a derived shape model with a ground sample distance of 5 cm. We will report both the absolute and relative error of the derived shape model compared with the original truth model, as well as other empirical and theoretical measures of error within SPC. Jorda, L. et al. (2014) "The Shape of Comet 67P/Churyumov-Gerasimenko from Rosetta/Osiris Images", AGU Fall Meeting, #P41C-3943. Gaskell, R. (2012) "SPC Shape and Topography of Vesta from DAWN Imaging Data", DPS Meeting #44, #209.03. Palmer, L., Sykes, M. V., Gaskell, R. W. (2014) "Mercator — Autonomous Navigation Using Panoramas", LPSC 45, #1777.
Diagnosing non-Gaussianity of forecast and analysis errors in a convective-scale model
NASA Astrophysics Data System (ADS)
Legrand, R.; Michel, Y.; Montmerle, T.
2016-01-01
In numerical weather prediction, the problem of estimating initial conditions with a variational approach is usually based on a Bayesian framework associated with a Gaussianity assumption of the probability density functions of both observations and background errors. In practice, Gaussianity of errors is tied to linearity, in the sense that a nonlinear model will yield non-Gaussian probability density functions. In this context, standard methods relying on Gaussian assumption may perform poorly. This study aims to describe some aspects of non-Gaussianity of forecast and analysis errors in a convective-scale model using a Monte Carlo approach based on an ensemble of data assimilations. For this purpose, an ensemble of 90 members of cycled perturbed assimilations has been run over a highly precipitating case of interest. Non-Gaussianity is measured using the K2 statistics from the D'Agostino test, which is related to the sum of the squares of univariate skewness and kurtosis. Results confirm that specific humidity is the least Gaussian variable according to that measure and also that non-Gaussianity is generally more pronounced in the boundary layer and in cloudy areas. The dynamical control variables used in our data assimilation, namely vorticity and divergence, also show distinct non-Gaussian behaviour. It is shown that while non-Gaussianity increases with forecast lead time, it is efficiently reduced by the data assimilation step especially in areas well covered by observations. Our findings may have implication for the choice of the control variables.
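The abstract's description of K2 as related to the sum of squared skewness and kurtosis can be illustrated with a simplified proxy (the actual D'Agostino test first transforms each sample moment to an approximate z-score before summing the squares; the "humidity-like" surrogate below is purely illustrative):

```python
import numpy as np

def k2_proxy(x):
    """Simplified non-Gaussianity measure: squared sample skewness plus
    squared excess kurtosis. The full D'Agostino K2 statistic applies
    normalizing transformations to each term before summing the squares."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)
    ex_kurt = np.mean(z ** 4) - 3.0       # excess kurtosis (0 for a Gaussian)
    return skew ** 2 + ex_kurt ** 2

rng = np.random.default_rng(0)
gaussian = rng.standard_normal(5000)            # well-behaved error surrogate
humidity_like = rng.exponential(1.0, 5000)      # skewed, bounded-below surrogate
```

An exponential sample (skewness 2, excess kurtosis 6) scores far higher than a Gaussian one, matching the finding that bounded, skewed fields such as specific humidity are the least Gaussian.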
Motion error analysis of the 3D coordinates of airborne lidar for typical terrains
NASA Astrophysics Data System (ADS)
Peng, Tao; Lan, Tian; Ni, Guoqiang
2013-07-01
A motion error model of 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence on the positioning errors derived from laser point cloud spacing is small. For the model the positioning errors obey simple harmonic vibration whose amplitude envelope gradually reduces with the increase of the vibration frequency. When the vibration period number is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the plane error and in the plane the error in the scanning direction is less than the error in the flight direction. Through the analysis of flight test data, the conclusion is verified.
Numerical analysis and measurement in corner-fired furnace
Zhengjun, S.; Rongsheng, G.
1999-07-01
For several years, numerical analysis has been successfully applied by Dongfang Boiler (Group) Co., Ltd. (DBC) to a 200 MW boiler, a 300 MW boiler and other units designed and manufactured by DBC. The distributions obtained from numerical analysis and from measurement agree well with each other. It is concluded that numerical analysis can serve as an important reference method in pulverized coal boiler design and testing.
The slider motion error analysis by positive solution method in parallel mechanism
NASA Astrophysics Data System (ADS)
Ma, Xiaoqing; Zhang, Lisong; Zhu, Liang; Yang, Wenguo; Hu, Penghao
2016-01-01
The motion error of the slider plays a key role in the performance of a 3-PUU parallel coordinate measuring machine (CMM) and influences the CMM accuracy, so it has attracted wide attention. The usual analysis method is based on the view of spatial six degrees of freedom; here, a new analysis method is provided. First, the structural relation between slider and guideway is abstracted as a 4-bar parallel mechanism, so the slider can be considered as the moving platform of a parallel kinematic mechanism (PKM), and its motion error analysis is transformed into a position analysis of the moving platform of the PKM. Then, after establishing the positive (forward) and negative (inverse) solutions, existing theory and techniques for PKMs can be applied to analyze the slider's straightness motion error and angular motion error simultaneously. Third, experiments with an autocollimator are carried out to capture the original error data of the guideway itself; these data are described as a straightness error function by fitting a curvilinear equation. Finally, the straightness errors of the two guideways are treated as variations of rod length in the parallel mechanism, and the slider's straightness and angular errors are obtained by substituting the data into the established model. The calculated results are generally consistent with the experimental results. This idea will be beneficial for the accuracy calibration and error correction of the 3-PUU CMM and also provides a new way to analyze the kinematic errors of guideways in precision machine tools and precision instruments.
Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.
ERIC Educational Resources Information Center
Monagle, E. Brette
The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
ERIC Educational Resources Information Center
El-khateeb, Mahmoud M. A.
2016-01-01
The purpose of this study is to investigate the classes of errors made by preparatory-year students at King Saud University, by analyzing student responses to the items of the study test, and to identify the varieties of common errors and the rates at which those errors occurred in solving inequalities. In the collection of the data,…
Numerical analysis of granular soil fabrics
NASA Astrophysics Data System (ADS)
Torbahn, L.; Huhn, K.
2012-04-01
Soil stability strongly depends on the material strength, which is in general influenced by deformation processes and vice versa. Hence, the investigation of material strength is of great interest in many geoscientific studies where soil deformations occur, e.g. the destabilization of slopes or the evolution of fault gouges. Particularly in the former case, slope failure occurs if the applied forces exceed the shear strength of the slope material; the soil resistance, or respectively the material strength, acts contrary to the deformation processes. Geotechnical experiments, e.g. direct shear or ring shear tests, suggest that shear resistance mainly depends on properties of soil structure, texture and fabric. Although laboratory tests enable investigations of soil structure and texture during shear, detailed observations inside the sheared specimen during the failure process, as well as of fabric effects, are very limited, so high-resolution information in space and time regarding texture evolution and/or grain behavior during shear is unavailable. However, such data are essential to gain a deeper insight into the key role of soil structure, texture, etc. on material strength and the physical processes occurring during material deformation at the micro scale. Additionally, laboratory tests are not completely reproducible, which prevents a detailed statistical investigation of fabric during shear; almost identical setups for methodical tests investigating the impact of fabric on soil resistance are hard to achieve under laboratory conditions. Hence, we used numerical shear test experiments utilizing the Discrete Element Method to quantify the impact of different material fabrics on the shear resistance of soil, as this granular model approach enables the investigation of failure processes at the grain scale. Our numerical setup adapts general settings from laboratory tests while the model characteristics are fixed except for the soil structure, particularly the used
Systematic error analysis for 3D nanoprofiler tracing normal vector
NASA Astrophysics Data System (ADS)
Kudo, Ryota; Tokuta, Yusuke; Nakano, Motohiro; Yamamura, Kazuya; Endo, Katsuyoshi
2015-10-01
In recent years, demand for optical elements with a high degree of freedom in shape has increased. High-precision aspherical shapes are required for X-ray focusing mirrors, and free-form surface optics are used in devices such as head-mounted displays. Measurement technology is essential for fabricating such optical devices. We have developed a high-precision 3D nanoprofiler that obtains the normal vector information of the sample surface on the basis of the linearity of light. Since the normal vector is the differential of the shape, the shape can be determined by integration. Sub-nanometer repeatability has been achieved with the nanoprofiler. To pursue figure accuracy, the systematic errors are analyzed; they comprise the figure error of the sample and the assembly errors of the device. The method uses the ideal shape of the sample to calculate the measurement point coordinates and normal vectors; however, the measured figure deviates from the ideal shape because of the systematic errors. Therefore, the measurement point coordinates and normal vectors are recalculated by feeding back the measured figure, and correction of the errors is attempted by re-deriving the figure. The effectiveness of this approach was confirmed theoretically by simulation. Applying it to experiment confirmed the possibility of correcting a figure error of about 4 nm PV in the employed sample.
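The shape-from-normals step described above can be sketched in one dimension: the measured slopes (the differential of the profile) are integrated, here with the trapezoidal rule, and the free integration constant is anchored separately. The parabolic test profile is illustrative only, not real profiler data.

```python
import numpy as np

# 1D sketch of shape recovery from measured slopes: integrate the slope
# data with the trapezoidal rule, then fix the integration constant.
x = np.linspace(-1.0, 1.0, 201)
slope = 2.0 * x                       # dz/dx of the test profile z = x**2

dz = 0.5 * (slope[1:] + slope[:-1]) * np.diff(x)   # trapezoidal increments
z = np.concatenate(([0.0], np.cumsum(dz)))
z += x[0] ** 2 - z[0]                 # anchor the free integration constant

print(np.max(np.abs(z - x ** 2)))     # trapezoid is exact for a linear slope
```

In the real instrument the normals are 2D vector data, so the reconstruction is a surface integration rather than this 1D cumulative sum.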
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2015-12-21
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
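The two ingredients of the simulation study — additive response measurement error and repeat measurements — can be sketched with a small simulation; the effect size, noise levels and run counts below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Simulated one-factor experiment: true effect delta, process noise
# sigma_p, additive response measurement error sigma_m (all illustrative).
rng = np.random.default_rng(0)
delta, sigma_p, sigma_m = 1.0, 1.0, 1.5
n_runs, n_repeats = 20, 5

true_low = rng.normal(0.0, sigma_p, n_runs)
true_high = rng.normal(delta, sigma_p, n_runs)

# One noisy measurement per run vs. the mean of n_repeats measurements;
# averaging shrinks the measurement-error variance by 1/n_repeats.
single_low = true_low + rng.normal(0.0, sigma_m, n_runs)
single_high = true_high + rng.normal(0.0, sigma_m, n_runs)
repeat_low = true_low + rng.normal(0.0, sigma_m, (n_repeats, n_runs)).mean(axis=0)
repeat_high = true_high + rng.normal(0.0, sigma_m, (n_repeats, n_runs)).mean(axis=0)

def t_stat(a, b):
    """Two-sample t statistic with pooled variance (equal group sizes)."""
    sp2 = (a.var(ddof=1) + b.var(ddof=1)) / 2.0
    return (b.mean() - a.mean()) / np.sqrt(2.0 * sp2 / len(a))

print(t_stat(single_low, single_high))   # t from single noisy measurements
print(t_stat(repeat_low, repeat_high))   # t from repeat-averaged responses
```

Repeating this over many simulated experiments would reproduce the power comparison the abstract describes; a Bayesian treatment would instead model the measurement-error variance explicitly.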
NASA Astrophysics Data System (ADS)
Verleysdonk, Sarah; Flores-Orozco, Adrian; Krautblatter, Michael; Kemna, Andreas
2010-05-01
Electrical resistivity tomography (ERT) has been used for the monitoring of permafrost-affected rock walls for some years now. To further enhance the interpretation of ERT measurements a deeper insight into error sources and the influence of error model parameters on the imaging results is necessary. Here, we present the effect of different statistical schemes for the determination of error parameters from the discrepancies between normal and reciprocal measurements - bin analysis and histogram analysis - using a smoothness-constrained inversion code (CRTomo) with an incorporated appropriate error model. The study site is located in galleries adjacent to the Zugspitze North Face (2800 m a.s.l.) at the border between Austria and Germany. A 20 m * 40 m rock permafrost body and its surroundings have been monitored along permanently installed transects - with electrode spacings of 1.5 m and 4.6 m - from 2007 to 2009. For data acquisition, a conventional Wenner survey was conducted as this array has proven to be the most robust array in frozen rock walls. Normal and reciprocal data were collected directly one after another to ensure identical conditions. The ERT inversion results depend strongly on the chosen parameters of the employed error model, i.e., the absolute resistance error and the relative resistance error. These parameters were derived (1) for large normal/reciprocal data sets by means of bin analyses and (2) for small normal/reciprocal data sets by means of histogram analyses. Error parameters were calculated independently for each data set of a monthly monitoring sequence to avoid the creation of artefacts (over-fitting of the data) or unnecessary loss of contrast (under-fitting of the data) in the images. The inversion results are assessed with respect to (1) raw data quality as described by the error model parameters, (2) validation via available (rock) temperature data and (3) the interpretation of the images from a geophysical as well as a
A Numerical Model for Atomtronic Circuit Analysis
Chow, Weng W.; Straatsma, Cameron J. E.; Anderson, Dana Z.
2015-07-16
A model for studying atomtronic devices and circuits based on finite-temperature Bose-condensed gases is presented. The approach involves numerically solving equations of motion for atomic populations and coherences, derived using the Bose-Hubbard Hamiltonian and the Heisenberg picture. The resulting cluster expansion is truncated at a level giving balance between physics rigor and numerical demand mitigation. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to nonequilibrium state evolution, as well as the much longer time durations typical for reaching steady-state device operation. This model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration phase locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied.
Numerical model for atomtronic circuit analysis
NASA Astrophysics Data System (ADS)
Chow, Weng W.; Straatsma, Cameron J. E.; Anderson, Dana Z.
2015-07-01
A model for studying atomtronic devices and circuits based on finite-temperature Bose-condensed gases is presented. The approach involves numerically solving equations of motion for atomic populations and coherences, derived using the Bose-Hubbard Hamiltonian and the Heisenberg picture. The resulting cluster expansion is truncated at a level giving balance between physics rigor and numerical demand mitigation. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to nonequilibrium state evolution, as well as the much longer time durations typical for reaching steady-state device operation. The model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration phase locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied.
Numerical Analysis of the SCHOLAR Supersonic Combustor
NASA Technical Reports Server (NTRS)
Rodriguez, Carlos G.; Cutler, Andrew D.
2003-01-01
The SCHOLAR scramjet experiment is the subject of an ongoing numerical investigation. The facility nozzle and combustor were solved separately and sequentially, with the exit conditions of the former used as inlet conditions for the latter. A baseline configuration for the numerical model was compared with the available experimental data. It was found that ignition delay was underpredicted and fuel-plume penetration overpredicted, while the pressure rise was close to experimental values. In addition, grid convergence by means of grid sequencing could not be established. The effects of the different turbulence parameters were quantified. It was found that it was not possible to simultaneously predict the three main parameters of this flow: pressure rise, ignition delay, and fuel-plume penetration.
SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to insure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.
NASA Astrophysics Data System (ADS)
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.
2016-05-01
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered-cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Ultimately, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.
Error Analysis of Weekly Station Coordinates in the DORIS Network
NASA Astrophysics Data System (ADS)
Williams, Simon D. P.; Willis, Pascal
2006-11-01
Twelve years of DORIS data from 31 selected sites of the IGN/JPL (Institut Géographique National/Jet Propulsion Laboratory) solution IGNWD05 have been analysed using maximum likelihood estimation (MLE) in an attempt to understand the nature of the noise in the weekly station coordinate time-series. Six alternative noise models in a total of 12 different combinations were used as possible descriptions of the noise. The six noise models can be divided into two natural groups, temporally uncorrelated (white) noise and temporally correlated (coloured) noise. The noise can be described as a combination of variable white noise and one of flicker, first-order Gauss-Markov or power-law noise. The data set as a whole is best described as a combination of variable white noise plus flicker noise. The variable white noise, which is white noise with variable amplitude that is a function of the weekly formal errors multiplied by an estimated scale factor, shows a dependence on site latitude and the number of DORIS-equipped satellites used in the solution. The latitude dependence is largest in the east component due to the near-polar orbit of the SPOT satellites. The amplitude of the flicker noise is similar in all three components and equal to about 20 mm/yr^(1/4). There appears to be no latitude dependence of the flicker noise amplitude. The uncertainty in rates (site velocities) after 12 years is just under 1 mm/year. These uncertainties are around 3-4 times larger than if only variable white noise had been assumed, i.e., no temporally correlated noise. A rate uncertainty of 1 mm/year after 12 years in the vertical is similar to that achieved using Global Positioning System (GPS) data, but it takes DORIS twice as long as GPS to reach 1 mm/year in the horizontal. The analysis has also helped to identify sites with either anomalous noise characteristics or large noise amplitudes, and tested the validity of previously proposed discontinuities. In addition, several new offsets
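The abstract's central point — that assuming purely white noise understates rate uncertainties several-fold — can be illustrated with generalized least squares under one of the named noise models, first-order Gauss-Markov. The series length, unit variance and correlation parameter below are illustrative assumptions, not the DORIS values.

```python
import numpy as np

# GLS variance of a fitted rate under white vs. first-order Gauss-Markov
# (exponentially correlated) noise. All numbers are illustrative.
n = 624                                # ~12 years of weekly coordinates
t = np.arange(n) / 52.0                # time in years
X = np.column_stack([np.ones(n), t])   # intercept + rate design matrix

def rate_sigma(C):
    """1-sigma uncertainty of the rate (slope) under noise covariance C."""
    param_cov = np.linalg.inv(X.T @ np.linalg.solve(C, X))
    return float(np.sqrt(param_cov[1, 1]))

i = np.arange(n)
C_white = np.eye(n)
C_gm = 0.9 ** np.abs(np.subtract.outer(i, i))   # unit-variance Gauss-Markov

print(rate_sigma(C_white), rate_sigma(C_gm))    # correlated noise inflates the rate error
```

Flicker or power-law covariances (as actually fitted in the study) would replace `C_gm`; the qualitative conclusion — temporally correlated noise inflates velocity uncertainties — is the same.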
NASA Technical Reports Server (NTRS)
Ward, R. C.
1974-01-01
Backward error analyses of the application of Householder transformations to both the standard and the generalized eigenvalue problems are presented. The analysis for the standard eigenvalue problem determines the error from the application of an exact similarity transformation, and the analysis for the generalized eigenvalue problem determines the error from the application of an exact equivalence transformation. Bounds for the norms of the resulting perturbation matrices are presented and compared with existing bounds when known.
Analysis of Students' Error in Learning of Quadratic Equations
ERIC Educational Resources Information Center
Zakaria, Effandi; Ibrahim; Maat, Siti Mistima
2010-01-01
The purpose of the study was to determine the students' error in learning quadratic equation. The samples were 30 form three students from a secondary school in Jambi, Indonesia. Diagnostic test was used as the instrument of this study that included three components: factorization, completing the square and quadratic formula. Diagnostic interview…
Pitch Error Analysis of Young Piano Students' Music Reading Performances
ERIC Educational Resources Information Center
Rut Gudmundsdottir, Helga
2010-01-01
This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…
Reading and Spelling Error Analysis of Native Arabic Dyslexic Readers
ERIC Educational Resources Information Center
Abu-rabia, Salim; Taha, Haitham
2004-01-01
This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated words and pseudowords. Two…
Oral Definitions of Newly Learned Words: An Error Analysis
ERIC Educational Resources Information Center
Steele, Sara C.
2012-01-01
This study examined and compared patterns of errors in the oral definitions of newly learned words. Fifteen 9- to 11-year-old children with language learning disability (LLD) and 15 typically developing age-matched peers inferred the meanings of 20 nonsense words from four novel reading passages. After reading, children provided oral definitions…
Analysis of Errors Made by Students Solving Genetics Problems.
ERIC Educational Resources Information Center
Costello, Sandra Judith
The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…
Shape error analysis for reflective nano focusing optics
Modi, Mohammed H.; Idir, Mourad
2010-06-23
Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields, so the converging beam at the focal spot will differ from the desired performance. The effect of these errors on focusing performance can be calculated by a wave-optical approach assuming coherent wave-field illumination of the optical elements. We have developed a wave-optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as an aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range in the high, mid and low frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak to valley) is tolerable to achieve diffraction-limited performance. It is desirable to remove shape error at very low frequencies, around 0.1 mm^-1, which otherwise will generate a beam waist or satellite peaks. All other frequencies above this limit will not affect the focused beam profile but only cause a loss in intensity.
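The wave-optical reasoning can be sketched in 1D: a surface height error h adds a reflection phase error of roughly 4π·h·sin(θ)/λ at grazing angle θ, and the far-field focal profile is the Fourier transform of the pupil function. This is a generic Fraunhofer sketch, not the authors' Fresnel-Kirchhoff simulator, and the wavelength, grazing angle and error shape are illustrative assumptions.

```python
import numpy as np

# 1D pupil-function sketch: shape error -> reflection phase error -> focal
# spot via FFT. All parameter values are illustrative.
lam = 0.1e-9                     # 0.1 nm x-ray wavelength
theta = 3e-3                     # grazing incidence angle (rad)
npts = 1024
x = np.linspace(-1.0, 1.0, npts)
h = (lam / 8) * np.sin(2 * np.pi * 3 * x)      # lambda/4 PV height error

pupil_ideal = np.ones(npts)
pupil_err = np.exp(1j * 4 * np.pi * h * np.sin(theta) / lam)

psf_ideal = np.abs(np.fft.fftshift(np.fft.fft(pupil_ideal))) ** 2
psf_err = np.abs(np.fft.fftshift(np.fft.fft(pupil_err))) ** 2

strehl = psf_err.max() / psf_ideal.max()
print(strehl)   # near 1: at grazing incidence this error barely degrades the focus
```

The sin(θ) factor is why a λ/4 PV height error can remain tolerable for grazing-incidence x-ray mirrors: the induced phase error is far smaller than the height error alone suggests.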
NASA Astrophysics Data System (ADS)
Pan, B.; Wang, B.; Lubineau, G.
2016-07-01
Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been investigated using both numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computational efficiency. Here we investigate the theoretical origin of this behavior and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than those associated with the subset-based local DIC, which is consistent with our conclusions from previous work.
Eigenvalue error analysis of viscously damped structures using a Ritz reduction method
NASA Technical Reports Server (NTRS)
Chu, Cheng-Chih; Milman, Mark H.
1992-01-01
The efficient solution of the eigenvalue problem that results from inserting passive dampers with variable stiffness and damping coefficients into a structure is addressed. Eigenanalysis of reduced models obtained by retaining a number of normal modes augmented with Ritz vectors corresponding to the static solutions resulting from the load patterns introduced by the dampers has been empirically shown to yield excellent approximations to the full eigenvalue problem. An analysis of this technique in the case of a single damper is presented. A priori and a posteriori error estimates are generated and tested on numerical examples. Comparison theorems with modally truncated models and a Markov parameter matching reduced-order model are derived. These theorems corroborate the heuristic that residual flexibility methods improve low-frequency approximation of the system. The analysis leads to other techniques for eigenvalue approximation. Approximate closed-form solutions are derived that include a refinement to eigenvalue derivative methods for approximation. An efficient Newton scheme is also developed. A numerical example is presented demonstrating the effectiveness of each of these methods.
Numerical Analysis of the Symmetric Methods
NASA Astrophysics Data System (ADS)
Xu, Ji-Hong; Zhang, A.-Li
1995-03-01
For the initial value problem of the special second-order ordinary differential equation y″ = f(x, y), the symmetric methods (Quinlan and Tremaine, 1990) and our methods (Xu and Zhang, 1994) have been compared in detail by integrating artificial earth satellite orbits in this paper. In the end, we point out clearly that the accuracy of numerical integration of satellite orbits with our methods is obviously higher than with the same-order formulas of the symmetric methods when the integration time-interval is not greater than 12000 periods.
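A minimal instance of a symmetric scheme for y″ = f(x, y) is the classical second-order Störmer-Verlet method; this is a sketch of the method class only, not the specific high-order formulas compared in the paper.

```python
import numpy as np

def stoermer_verlet(f, x0, y0, v0, h, n_steps):
    """Classical symmetric (time-reversible) second-order scheme for the
    special second-order ODE y'' = f(x, y)."""
    # Start-up step from a Taylor expansion, then the two-step recursion
    # y_{k+1} = 2*y_k - y_{k-1} + h^2 * f(x_k, y_k).
    ys = [y0, y0 + h * v0 + 0.5 * h * h * f(x0, y0)]
    for k in range(1, n_steps):
        ys.append(2.0 * ys[k] - ys[k - 1] + h * h * f(x0 + k * h, ys[k]))
    return np.array(ys)

# Test problem: y'' = -y with y(0) = 1, y'(0) = 0, i.e. y = cos(x); one
# full period is a crude stand-in for one satellite orbit.
h = 2.0 * np.pi / 1000.0
ys = stoermer_verlet(lambda x, y: -y, 0.0, 1.0, 0.0, h, 1000)
print(abs(ys[-1] - 1.0))   # small error after one period
```

Time-reversibility is what gives such symmetric methods their good long-term behavior on oscillatory orbit problems, which is why they are the natural baseline for the comparison above.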
ERIC Educational Resources Information Center
McGuire, Patrick
2013-01-01
This article describes how a free, web-based intelligent tutoring system, (ASSISTment), was used to create online error analysis items for preservice elementary and secondary mathematics teachers. The online error analysis items challenged preservice teachers to analyze, diagnose, and provide targeted instructional remediation intended to help…
Template Construction as a Basis for Error-Analysis Packages in Language Learning Programs.
ERIC Educational Resources Information Center
Helmreich, Stephen C.
1987-01-01
An "intelligent" system for constructing computer-assisted pattern drills to be used in second language instruction is proposed. First, some of the difficulties in designing intelligent error analysis are discussed briefly. Two major approaches to error analysis in computer-assisted instruction, pattern matching and parsing, are described, and…
Numerical analysis of slender vortex motion
Zhou, H.
1996-02-01
Several numerical methods for slender vortex motion (the local induction equation, the Klein-Majda equation, and the Klein-Knio equation) are compared on the specific example of sideband instability of Kelvin waves on a vortex. Numerical experiments on this model problem indicate that all these methods yield qualitatively similar behavior, and this behavior is different from the behavior of a non-slender vortex with variable cross-section. It is found that the boundaries between stable, recurrent, and chaotic regimes in the parameter space of the model problem depend on the method used. The boundaries of these domains in the parameter space for the Klein-Majda equation and for the Klein-Knio equation are closely related to the core size. When the core size is large enough, the Klein-Majda equation always exhibits stable solutions for our model problem. Various conclusions are drawn; in particular, the behavior of turbulent vortices cannot be captured by these local approximations, and probably cannot be captured by any slender vortex model with constant vortex cross-section. Speculations about the differences between classical and superfluid hydrodynamics are also offered.
Waveform error analysis for bistatic synthetic aperture radar systems
NASA Astrophysics Data System (ADS)
Adams, J. W.; Schifani, T. M.
The signal phase histories at the transmitter, receiver, and radar signal processor in bistatic SAR systems are described. The fundamental problem of mismatches in the waveform generators for the illuminating and receiving radar systems is analyzed. The effects of errors in carrier frequency and chirp slope are analyzed for bistatic radar systems which use linear FM waveforms. It is shown that the primary effect of a mismatch in carrier frequencies is an azimuth displacement of the image.
Digital floodplain mapping and an analysis of errors involved
Hamblen, C.S.; Soong, D.T.; Cai, X.
2007-01-01
Mapping floodplain boundaries using geographical information system (GIS) and digital elevation models (DEMs) was completed in a recent study. However convenient this method may appear at first, the resulting maps potentially can have unaccounted errors. Mapping the floodplain using GIS is faster than mapping manually, and digital mapping is expected to be more common in the future. When mapping is done manually, the experience and judgment of the engineer or geographer completing the mapping and the contour resolution of the surface topography are critical in determining the floodplain and floodway boundaries between cross sections. When mapping is done digitally, discrepancies can result from the use of the computing algorithm and digital topographic datasets. Understanding the possible sources of error and how the error accumulates through these processes is necessary for the validation of automated digital mapping. This study will evaluate the procedure of floodplain mapping using GIS and a 3 m by 3 m resolution DEM with a focus on the accumulated errors involved in the process. Within the GIS environment of this mapping method, the procedural steps of most interest, initially, include: (1) the accurate spatial representation of the stream centerline and cross sections, (2) properly using a triangulated irregular network (TIN) model for the flood elevations of the studied cross sections, the interpolated elevations between them and the extrapolated flood elevations beyond the cross sections, and (3) the comparison of the flood elevation TIN with the ground elevation DEM, from which the appropriate inundation boundaries are delineated. The study area involved is of relatively low topographic relief, thereby making it representative of common suburban development and a prime setting for the need of accurately mapped floodplains. This paper emphasizes the impacts of integrating supplemental digital terrain data between cross sections on floodplain delineation
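Step (3) above — comparing the flood-elevation TIN with the ground-elevation DEM to delineate inundation — reduces to a cell-wise comparison of the two surfaces. The tiny grids and elevation values here are purely illustrative.

```python
import numpy as np

# Toy delineation: a cell is inundated where the interpolated flood
# surface sits above the ground DEM. Grids and values are illustrative.
dem = np.array([[5.0, 4.5, 4.0],
                [4.8, 4.2, 3.9],
                [4.6, 4.1, 3.7]])          # ground elevations (m)
flood_tin = np.full_like(dem, 4.3)         # flat water surface for simplicity

inundated = flood_tin > dem                # boolean floodplain mask
depth = np.where(inundated, flood_tin - dem, 0.0)

print(inundated.sum(), depth.max())
```

In practice the flood surface varies between cross sections, so the TIN interpolation in step (2) — not this final comparison — is where most of the accumulated error enters.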
Magnetic error analysis of recycler pbar injection transfer line
Yang, M.J.; /Fermilab
2007-06-01
Detailed study of the Fermilab Recycler Ring anti-proton injection line became feasible with its BPM system upgrade, though the beamline has been in existence and operational since 2000. Previous attempts were not fruitful due to limitations in the BPM system. Among the objectives are the assessment of beamline optics and the presence of error fields. In particular, the field region of the permanent Lambertson magnets at both ends of the R22 transfer line will be scrutinized.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current error-analysis methods are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds; moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method for estimating the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. Then we use the fast algorithm of the midpoint method to derive the mathematical relationships between the target point and these parameters. Thus the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we turn back to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
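The midpoint method at the core of the analysis can be sketched: given two viewing rays, take the point halfway between their closest points. The camera centers and ray directions below are illustrative; the paper's contribution is the error propagation around this step, which the sketch omits.

```python
import numpy as np

def midpoint_triangulate(c1, d1, c2, d2):
    """Midpoint method: the 3D point halfway between the closest points of
    the rays p = c1 + t*d1 and q = c2 + s*d2."""
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

# Two rays that intersect exactly at (1, 1, 5):
p = midpoint_triangulate(np.array([0., 0., 0.]), np.array([1., 1., 5.]),
                         np.array([2., 0., 0.]), np.array([-1., 1., 5.]))
print(p)
```

With noisy pixel coordinates the rays are skew rather than intersecting, and the covariance of `p` as a function of the input errors is exactly what the paper's five-parameter analysis characterizes.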
Procedures for numerical analysis of circadian rhythms
Refinetti, Roberto; Cornélissen, Germaine; Halberg, Franz
2010-01-01
This article reviews various procedures used in the analysis of circadian rhythms at the populational, organismal, cellular and molecular levels. The procedures range from visual inspection of time plots and actograms to several mathematical methods of time series analysis. Computational steps are described in some detail, and additional bibliographic resources and computer programs are listed. PMID:23710111
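One of the standard procedures reviewed in such articles is the single cosinor: fit y(t) = M + A·cos(2πt/τ + φ) by ordinary least squares after linearizing into β·cos + γ·sin terms. A sketch with synthetic data; the period, mesor and amplitude values are illustrative.

```python
import numpy as np

def cosinor(t, y, tau=24.0):
    """Single-cosinor fit of y = M + A*cos(2*pi*t/tau + phi) via linear
    least squares on beta*cos + gamma*sin."""
    w = 2.0 * np.pi * t / tau
    X = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
    m, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(beta, gamma)
    acrophase = np.arctan2(-gamma, beta)   # since beta = A*cos(phi), gamma = -A*sin(phi)
    return m, amplitude, acrophase

# Synthetic 48-hour record sampled every 30 min, 24-h rhythm:
t = np.arange(0.0, 48.0, 0.5)
y = 10.0 + 3.0 * np.cos(2.0 * np.pi * t / 24.0 + 1.0)
mesor, amp, phase = cosinor(t, y)
print(mesor, amp, phase)
```

Real records add noise and possibly multiple periodic components, in which case the fit is repeated per candidate period or extended to a multiple-component cosinor.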
Processing and Analysis of the Measured Alignment Errors for RHIC
Pilat, F.; Hemmer, M.; Ptitsin, V.; Tepikian, S.; Trbojevic, D.
1999-03-29
All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine has been surveyed and the resulting as-built measured positions of the fiducials have been stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.
Calculating Internal Avalanche Velocities From Correlation With Error Analysis.
NASA Astrophysics Data System (ADS)
McElwaine, J. N.; Tiefenbacher, F.
Velocities inside avalanches have been calculated for many years by calculating the cross-correlation between light sensitive sensors using a method pioneered by Dent. His approach has been widely adopted but suffers from four shortcomings. (i) Correlations are studied between pairs of sensors rather than between all sensors simultaneously. This can result in inconsistent velocities and does not extract the maximum information from the data. (ii) The longer the time that the correlations are taken over the better the noise rejection, but errors due to non-constant velocity increase. (iii) The errors are hard to quantify. (iv) The calculated velocities are usually widely scattered and discontinuous. A new approach is described that produces a continuous velocity field from any number of sensors at arbitrary locations. The method is based on a variational principle that reconstructs the underlying signal as it is advected past the sensors and enforces differentiability on the velocity. The errors in the method are quantified and applied to the problem of optimal sensor positioning and design. Results on SLF data from chute experiments are discussed.
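The classical Dent-style pairwise estimate that the new method improves upon can be sketched as the lag of the peak cross-correlation between two sensor signals; the sensor spacing, sampling interval, and synthetic pulse below are illustrative assumptions:

```python
import numpy as np

def velocity_from_xcorr(s1, s2, spacing, dt):
    """Estimate flow velocity from the lag (in samples) at which the
    cross-correlation between an upstream and a downstream sensor peaks."""
    s1 = s1 - s1.mean()
    s2 = s2 - s2.mean()
    corr = np.correlate(s2, s1, mode="full")
    lag = np.argmax(corr) - (len(s1) - 1)  # samples by which s2 trails s1
    return spacing / (lag * dt)

# synthetic test: the same pulse passes the second sensor 5 samples later
rng = np.random.default_rng(0)
pulse = np.exp(-0.5 * ((np.arange(200) - 80) / 5.0) ** 2)
s1 = pulse + 0.001 * rng.standard_normal(200)
s2 = np.roll(pulse, 5) + 0.001 * rng.standard_normal(200)
v = velocity_from_xcorr(s1, s2, spacing=0.1, dt=0.001)  # 10 cm apart, 1 kHz
```

The integer-sample lag is exactly what makes the classical estimates "widely scattered and discontinuous": velocity can only take the discrete values spacing/(k*dt).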
Alignment error analysis of the snapshot imaging polarimeter.
Liu, Zhen; Yang, Wei-Feng; Ye, Qing-Hao; Hong, Jin; Gong, Guan-Yuan; Zheng, Xiao-Bing
2016-03-10
A snapshot imaging polarimeter (SIP) system is able to reconstruct two-dimensional spatial polarization information through a single interferogram. In this system, the alignment errors of the half-wave plate (HWP) and the analyzer have a predominant impact on the accuracies of the reconstructed complete Stokes parameters. A theoretical model for analyzing the alignment errors in the SIP system is presented in this paper. Based on this model, the accuracy of the reconstructed Stokes parameters has been evaluated by using different incident states of polarization. An optimum thickness of the Savart plate for alleviating the perturbation introduced by the alignment error of the HWP is found by using the condition number of the system measurement matrix as an objective function in a minimization procedure. The result shows that when the thickness of the Savart plate is 23 mm, corresponding to a condition number of 2.06, the precision of the SIP system can reach 0.21% at 1° alignment tolerance of the HWP. PMID:26974785
Probability analysis of position errors using uncooled IR stereo camera
NASA Astrophysics Data System (ADS)
Oh, Jun Ho; Lee, Sang Hwa; Lee, Boo Hwan; Park, Jong-Il
2016-05-01
This paper analyzes the random phenomena of 3D positions when tracking moving objects using an infrared (IR) stereo camera, and proposes a probability model of 3D positions. The proposed probability model integrates two random error phenomena. One is the pixel quantization error, which is caused by the discrete sampling pixels used in estimating the disparity values of the stereo camera. The other is the timing jitter, which results from the irregular acquisition timing of uncooled IR cameras. This paper derives a probability distribution function by combining the jitter model with the pixel quantization error. To verify the proposed probability function of 3D positions, experiments on tracking fast moving objects are performed using an IR stereo camera system. The 3D depths of the moving object are estimated by stereo matching and compared with the ground truth obtained by a laser scanner system. According to the experiments, the 3D depths of the moving object are estimated within the statistically reliable range derived from the proposed probability distribution. It is expected that the proposed probability model of 3D positions can be applied to various IR stereo camera systems that deal with fast moving objects.
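The pixel quantization component can be illustrated for a rectified stereo rig with an assumed focal length and baseline: an integer disparity d pins the true depth Z = fB/d only to within a half-pixel interval, and a first-order estimate of the depth spread follows from the derivative dZ/dd:

```python
import numpy as np

def depth_interval(d, f=1000.0, B=0.5):
    """Depth and the interval implied by +/- half-pixel disparity quantization."""
    z = f * B / d
    return z, f * B / (d + 0.5), f * B / (d - 0.5)

def depth_sigma(z, f=1000.0, B=0.5, sigma_d=1.0 / np.sqrt(12.0)):
    """First-order spread: |dZ/dd| * sigma_d = Z**2/(f*B) * sigma_d, taking
    sigma_d as the std of a uniform half-pixel quantization error."""
    return z * z / (f * B) * sigma_d

z, z_near, z_far = depth_interval(20.0)  # 20-pixel disparity
sigma_z = depth_sigma(z)
```

Note the interval is asymmetric (z_far - z exceeds z - z_near), which is why the depth error distribution of a quantized disparity is skewed at long range.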
Numerical Analysis of Magnetic Sail Spacecraft
Sasaki, Daisuke; Yamakawa, Hiroshi; Usui, Hideyuki; Funaki, Ikkoh; Kojima, Hirotsugu
2008-12-31
To capture the kinetic energy of the solar wind by creating a large magnetosphere around the spacecraft, magneto-plasma sail injects a plasma jet into a strong magnetic field produced by an electromagnet onboard the spacecraft. The aim of this paper is to investigate the effect of the IMF (interplanetary magnetic field) on the magnetosphere of magneto-plasma sail. First, using an axi-symmetric two-dimensional MHD code, we numerically confirm the magnetic field inflation, and the formation of a magnetosphere by the interaction between the solar wind and the magnetic field. The expansion of an artificial magnetosphere by the plasma injection is then simulated, and we show that the magnetosphere is formed by the interaction between the solar wind and the magnetic field expanded by the plasma jet from the spacecraft. This simulation indicates the size of the artificial magnetosphere becomes smaller when applying the IMF.
Symbolic dynamics-based error analysis on chaos synchronization via noisy channels
NASA Astrophysics Data System (ADS)
Lin, Da; Zhang, Fuchen; Liu, Jia-Ming
2014-07-01
In this study, symbolic dynamics is used to study the error of chaos synchronization via noisy channels. The theory of symbolic dynamics reduces chaos to a shift map that acts on a discrete set of symbols, each of which contains information about the system state. Using this transformation, a coder-decoder scheme is proposed. A model for the relationship among word length, region number of a partition, and synchronization error is provided. According to the model, the fundamental trade-off between word length and region number can be optimized to minimize the synchronization error. Numerical simulations provide support for our results.
Research in applied mathematics, numerical analysis, and computer science
NASA Technical Reports Server (NTRS)
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.
Tolerance analysis of optical telescopes using coherent addition of wavefront errors
NASA Technical Reports Server (NTRS)
Davenport, J. W.
1982-01-01
A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to determine a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
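Representing a wavefront error by a Zernike expansion amounts to a linear least-squares fit over the pupil. The sketch below uses a small, illustrative subset of unnormalized terms (conventions such as Noll or OSA indexing differ by constant factors) rather than the Ramsey-Korsch package's full set:

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order (unnormalized) Zernike terms on the unit pupil."""
    return np.column_stack([
        np.ones_like(rho),                  # piston
        rho * np.cos(theta),                # tilt x
        rho * np.sin(theta),                # tilt y
        2.0 * rho**2 - 1.0,                 # defocus
        rho**2 * np.cos(2.0 * theta),       # astigmatism
        6.0 * rho**4 - 6.0 * rho**2 + 1.0,  # primary spherical
    ])

rng = np.random.default_rng(1)
rho = np.sqrt(rng.uniform(0.0, 1.0, 500))   # uniform sampling over the disk
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
# synthetic wavefront: 0.2 waves of defocus + 0.05 waves of spherical
w = 0.2 * (2.0 * rho**2 - 1.0) + 0.05 * (6.0 * rho**4 - 6.0 * rho**2 + 1.0)
coeffs = np.linalg.lstsq(zernike_basis(rho, theta), w, rcond=None)[0]
```

A tolerance study of the kind described would perturb the system (tilt, decenter, despace), recompute the traced wavefront, and watch which of these coefficients move.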
Analysis of instrumentation error effects on the identification accuracy of aircraft parameters
NASA Technical Reports Server (NTRS)
Sorensen, J. A.
1972-01-01
An analytical investigation is presented of the effect of unmodeled measurement system errors on the accuracy of aircraft stability and control derivatives identified from flight test data. Such error sources include biases, scale factor errors, instrument position errors, misalignments, and instrument dynamics. Two techniques (ensemble analysis and simulated data analysis) are formulated to determine the quantitative variations to the identified parameters resulting from the unmodeled instrumentation errors. The parameter accuracy that would result from flight tests of the F-4C aircraft with typical quality instrumentation is determined using these techniques. It is shown that unmodeled instrument errors can greatly increase the uncertainty in the value of the identified parameters. General recommendations are made of procedures to be followed to ensure that the measurement system associated with identifying stability and control derivatives from flight test provides sufficient accuracy.
Errors in logic and statistics plague a meta-analysis
Technology Transfer Automated Retrieval System (TEKTRAN)
The non-target effects of transgenic insecticidal crops has been a topic of debate for over a decade and many laboratory and field studies have addressed the issue in numerous countries. In 2009 Lovei et al. (Transgenic Insecticidal Crops and Natural Enemies: A Detailed Review of Laboratory Studies)...
Close-range radar rainfall estimation and error analysis
NASA Astrophysics Data System (ADS)
van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.
2012-04-01
It is well-known that quantitative precipitation estimation (QPE) is affected by many sources of error. The most important of these are 1) radar calibration, 2) wet radome attenuation, 3) rain attenuation, 4) vertical profile of reflectivity, 5) variations in drop size distribution, and 6) sampling effects. The study presented here is an attempt to separate and quantify these sources of error. For this purpose, QPE is performed very close to the radar (~1-2 km) so that 3), 4), and 6) will only play a minor role. Error source 5) can be corrected for because of the availability of two disdrometers (instruments that measure the drop size distribution). A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm in De Bilt, The Netherlands is analyzed. Radar, rain gauge, and disdrometer data from De Bilt are used for this. It is clear from the analyses that without any corrections, the radar severely underestimates the total rain amount (only 25 mm). To investigate the effect of wet radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to ~4 dB. The calibration of the radar is checked by looking at received power from the sun. This turns out to cause another 1 dB of underestimation. The effect of variability of drop size distributions is shown to cause further underestimation. Correcting for all of these effects yields a good match between radar QPE and gauge measurements.
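The correction chain described above is multiplicative in linear reflectivity, i.e. additive in dB. A minimal sketch, where the 1 dB calibration and ~4 dB wet-radome figures echo the abstract's estimates, and the Marshall-Palmer Z-R coefficients are a common default rather than the study's fitted values:

```python
import numpy as np

def rain_rate(dbz, cal_db=1.0, radome_db=4.0, a=200.0, b=1.6):
    """Add dB-scale corrections (calibration, wet-radome attenuation), then
    invert a Marshall-Palmer style Z = a * R**b relation. Returns mm/h."""
    z_lin = 10.0 ** ((dbz + cal_db + radome_db) / 10.0)  # Z in mm^6 m^-3
    return (z_lin / a) ** (1.0 / b)

r = rain_rate(np.array([30.0, 40.0]))
r_uncorrected = rain_rate(np.array([30.0]), cal_db=0.0, radome_db=0.0)
```

Because the Z-R inversion is a power law, a 5 dB deficit roughly halves the retrieved rain rate, which is consistent with the severe underestimation reported before correction.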
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Dongarra, J. |; Rosener, B.
1991-12-01
This report describes a facility called NA-NET created to allow numerical analysts (na) an easy method of communicating with one another. The main advantage of the NA-NET is uniformity of addressing. All mail is addressed to the Internet host "na-net.ornl.gov" at Oak Ridge National Laboratory. Hence, members of the NA-NET do not need to remember complicated addresses or even where a member is currently located. As long as moving members change their e-mail address in the NA-NET everything works smoothly. The NA-NET system is currently located at Oak Ridge National Laboratory. It is running on the same machine that serves netlib. Netlib is a separate facility that distributes mathematical software via electronic mail. For more information on netlib consult, or send the one-line message "send index" to netlib@ornl.gov. The following report describes the current NA-NET system from both a user's perspective and from an implementation perspective. Currently, there are over 2100 members in the NA-NET. An average of 110 mail messages pass through this facility daily.
Error analysis of combined stereo/optical-flow passive ranging
NASA Technical Reports Server (NTRS)
Barniv, Yair
1991-01-01
The motion of an imaging sensor causes each imaged point of the scene to correspondingly describe a time trajectory on the image plane. The trajectories of all imaged points are reminiscent of a flow (e.g., of liquid), which is the source of the term 'optical flow'. Optical-flow ranging is a method by which the stream of two-dimensional images obtained from a forward-looking, forward-moving passive sensor is used to compute depth (or range) to points in the field of view. Another well-known ranging method consists of triangulation based on stereo images obtained from at least two stationary sensors. In this paper we analyze the potential accuracies of a combined optical-flow and stereo passive-ranging system in the context of helicopter nap-of-the-earth obstacle avoidance. The Cramer-Rao lower bound is developed for the combined system under the assumption of an unknown angular bias error common to both cameras of a stereo pair. It is shown that the depth accuracy degradation caused by a bias error is negligible for a combined optical-flow and stereo system as compared to a monocular optical-flow system.
ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.
Rosenfield, George H.; Fitzpatrick-Lins, Katherine
1984-01-01
Summary form only given. A classification error matrix typically contains the tabulated results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the correctly classified counts, and the usual designation of classification accuracy has been the total percent correct. The nondiagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a problem of remotely sensed data concerning two photointerpreters and four categories of classification indicated that there is no significant difference in interpretation between the two photointerpreters, and that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy which have already been presented in the remote sensing literature.
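A coefficient of agreement of this kind is computed from the full matrix, not just its diagonal; a minimal sketch of overall percent correct and Cohen's kappa, using a hypothetical three-category error matrix:

```python
import numpy as np

def accuracy_and_kappa(m):
    """Overall percent correct and Cohen's kappa for a classification
    error matrix (rows: reference categories, columns: mapped categories)."""
    m = np.asarray(m, dtype=float)
    n = m.sum()
    p_o = np.trace(m) / n                                  # observed agreement
    p_e = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2   # chance agreement
    return p_o, (p_o - p_e) / (1.0 - p_e)

matrix = [[50, 2, 3],
          [4, 40, 6],
          [1, 5, 39]]
p_o, kappa = accuracy_and_kappa(matrix)
```

Kappa discounts the agreement expected by chance from the marginal totals, which is exactly the off-diagonal information the total-percent-correct measure throws away.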
Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques
NASA Astrophysics Data System (ADS)
Amoush, Ahmad
The American Association of Physicists in Medicine Task Group Report 43 (AAPM-TG43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze the uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and one dose MOSFET detectors† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainty analyses associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and the Tandem effect. Absolute dose calculations for clinical use are based on a Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated due to the low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm
Analysis of star camera errors in GRACE data and their impact on monthly gravity field models
NASA Astrophysics Data System (ADS)
Inácio, Pedro; Ditmar, Pavel; Klees, Roland; Farahani, Hassan Hashemi
2015-06-01
Star cameras (SCs) on board the GRACE satellites provide information about the attitudes of the spacecraft. This information is needed to reduce the K-band ranging data to the centre of mass of the satellites. In this paper, we analyse GRACE SC errors using two months of real data from the primary and secondary SCs. We show that the errors consist of a harmonic component, which is highly correlated with the satellite's true anomaly, and a stochastic component. We build models of both error components, and use these models for error propagation studies. Firstly, we analyse the propagation of SC errors into inter-satellite accelerations. A spectral analysis reveals that the stochastic component exceeds the harmonic component, except in the 3-10 mHz frequency band. In this band, which contains most of the geophysically relevant signal, the harmonic error component is larger than the random component. Secondly, we propagate SC errors into optimally filtered monthly mass anomaly maps and compare them with the total error. We find that SC errors account for about 18 % of the total error. Moreover, gaps in the SC data series amplify the effect of SC errors by a factor of . Finally, an analysis of inter-satellite pointing angles for GRACE data between 2003 and 2010 reveals that inter-satellite ranging errors were exceptionally large during the period from February 2003 till May 2003. During these months, SC noise is amplified by a factor of 3 and is a considerable source of errors in monthly GRACE mass anomaly maps. In the context of future satellite gravity missions, the noise models developed in this paper may be valuable for mission performance studies.
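The separation into a harmonic component (correlated with the true anomaly) and a stochastic remainder can be sketched as a least-squares fit of once- and twice-per-revolution terms, with the residual taken as the stochastic part. The synthetic error series below is illustrative, not GRACE data:

```python
import numpy as np

def split_harmonic(nu, err, n_harmonics=2):
    """Least-squares fit of harmonics of the true anomaly nu (plus a bias);
    the residual is treated as the stochastic error component."""
    cols = [np.ones_like(nu)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * nu), np.sin(k * nu)]
    X = np.column_stack(cols)
    coef = np.linalg.lstsq(X, err, rcond=None)[0]
    harmonic = X @ coef
    return harmonic, err - harmonic

rng = np.random.default_rng(2)
nu = np.linspace(0.0, 4.0 * np.pi, 2000)  # two revolutions of true anomaly
err = (5e-5 * np.sin(nu) + 2e-5 * np.cos(2.0 * nu)
       + 1e-5 * rng.standard_normal(2000))
harm, stoch = split_harmonic(nu, err)
```

The two components can then be propagated separately, as in the paper's study of inter-satellite accelerations, where their relative size depends on the frequency band.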
The energetics of error-growth and the predictability analysis in precipitation prediction
NASA Astrophysics Data System (ADS)
Luo, Yu; Zhang, Lifeng; Zhang, Yun
2012-02-01
Sensitivity simulations are conducted in AREM (Advanced Regional Eta-coordinate numerical heavy-rain prediction Model) for a torrential precipitation event in June 2008 over South China to investigate the effect of initial uncertainty on precipitation predictability. It is found that the strong initial-condition sensitivity of the precipitation prediction can be attributed to the upscale evolution of error growth. However, different modes of error growth are observed in the lower and upper layers. Compared with the lower level, significant error growth in the upper layer appears over both the convective area and the upper-level jet stream. This indicates that error growth depends on both moist convection, due to convective instability, and wind shear, associated with dynamic instability. As a heavy-rainfall process can be described as a series of energy conversions, the analysis reveals that the advection term and latent heating serve as significant energy sources. Moreover, the dominant source terms of error-energy growth are nonlinear advection (ADVT) and the difference in latent heating (DLHT), with the latter being largely responsible for the rapid error growth in the initial stage. In this sense, the occurrence of precipitation and error growth share the same energy source, which implies the inherent predictability of heavy rainfall. In addition, a decomposition of ADVT further indicates that the flow-dependent error growth is closely related to the atmospheric instability. Thus a system growing from an unstable flow regime has its intrinsic predictability.
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
Investigation of Biogrout processes by numerical analysis at pore scale
NASA Astrophysics Data System (ADS)
Bergwerff, Luke; van Paassen, Leon A.; Picioreanu, Cristian; van Loosdrecht, Mark C. M.
2013-04-01
Biogrout is a soil improving process that aims to improve the strength of sandy soils. The process is based on microbially induced calcite precipitation (MICP). In this study the main process is based on denitrification facilitated by bacteria indigenous to the soil using substrates, which can be derived from pretreated waste streams containing calcium salts of fatty acids and calcium nitrate, making it a cost effective and environmentally friendly process. The goal of this research is to improve the understanding of the process by numerical analysis so that it may be improved and applied properly for varying applications, such as borehole stabilization, liquefaction prevention, levee fortification and mitigation of beach erosion. During the denitrification process there are many phases present in the pore space, including a liquid phase containing solutes, crystals, bacteria forming biofilms and gas bubbles. Due to the number of phases and their dynamic changes (multiphase flow with (non-linear) reactive transport), there are many interactions, making the process very complex. To understand this complexity in the system, the interactions between these phases are studied in a reductionist approach, increasing the complexity of the system by one phase at a time. The model will initially include flow, solute transport, crystal nucleation and growth in 2D at pore scale. The flow will be described by the Navier-Stokes equations. Initial study and simulations have revealed that describing crystal growth for this application on a fixed grid can introduce significant fundamental errors. Therefore a level set method will be employed to better describe the interface of developing crystals in between sand grains. Afterwards the model will be expanded to 3D to provide more realistic flow, nucleation and clogging behaviour at pore scale. Next biofilms and lastly gas bubbles may be added to the model. From the results of these pore scale models the behaviour of the system may be
Numerical Analysis of the Sea State Bias for Satellite Altimetry
NASA Technical Reports Server (NTRS)
Glazman, R. E.; Fabrikant, A.; Srokosz, M. A.
1996-01-01
Theoretical understanding of the dependence of sea state bias (SSB) on wind wave conditions has been achieved only for the case of a unidirectional wind-driven sea. Recent analysis of Geosat and TOPEX altimeter data showed that additional factors, such as swell, ocean currents, and complex directional properties of realistic wave fields, may influence SSB behavior. Here we investigate effects of two-dimensional multimodal wave spectra using a numerical model of radar reflection from a random, non-Gaussian surface. A recently proposed ocean wave spectrum is employed to describe sea surface statistics. The following findings appear to be of particular interest: (1) Sea swell has an appreciable effect in reducing the SSB coefficient compared with the pure wind sea case but has less effect on the actual SSB owing to the corresponding increase in significant wave height. (2) Hidden multimodal structure (the two-dimensional wavenumber spectrum contains separate peaks, for swell and wind seas, while the frequency spectrum looks unimodal) results in an appreciable change of SSB. (3) For unimodal, purely wind-driven seas, the influence of the angular spectral width is relatively unimportant; that is, a unidirectional sea provides a good qualitative model for SSB if the swell is absent. (4) The pseudo wave age is generally much better for parametrizing the SSB coefficient than the actual wave age (which is ill-defined for a multimodal sea) or wind speed. (5) SSB can be as high as 5% of the significant wave height, which is significantly greater than predicted by present empirical model functions tuned on global data sets. (6) Parameterization of SSB in terms of wind speed is likely to lead to errors due to the dependence on the (in practice, unknown) fetch.
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed by examining the error frequency and applying the analysis-of-variance method from mathematical statistics. The paper also addresses the determination of measured-data accuracy and the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors. By analysing the measured data on the basis of error frequency, the paper provides reference elements that may help promote the development of the garment industry.
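The analysis-of-variance step mentioned above reduces to comparing between-group and within-group variability. A minimal sketch with hypothetical repeated measurements of one body dimension taken by three different measurers:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way analysis of variance: ratio of the
    between-group mean square to the within-group mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k = len(groups)
    n = len(all_x)
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical repeated chest-girth measurements (cm) by three measurers
g1 = np.array([170.1, 170.4, 169.8])
g2 = np.array([171.0, 171.3, 170.9])
g3 = np.array([169.5, 169.9, 169.7])
f = one_way_anova_f([g1, g2, g3])
```

A large F relative to the F distribution's critical value indicates that the measurers differ systematically, i.e. the differences are not just random measurement scatter.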
Numerical analysis and design of upwind sails
NASA Astrophysics Data System (ADS)
Shankaran, Sriram
The use of computational techniques that solve the Euler or the Navier-Stokes equations is increasingly common among competing syndicates in races like the America's Cup. For sail configurations, this desire stems from a need to understand the influence of the mast on the boundary layer and pressure distribution on the main sail, the effect of camber and planform variations of the sails on the driving and heeling forces produced by them, and the interaction of the boundary layer profile of the air over the surface of the water and the gap between the boom and the deck on the performance of the sail. Traditionally, experimental methods along with potential flow solvers have been widely used to quantify these effects. While these approaches are invaluable either for validation purposes or during the early stages of design, the potential advantages of high fidelity computational methods make them attractive candidates during the later stages of the design process. The aim of this study is to develop and validate numerical methods that solve the inviscid field equations (Euler) to simulate and design upwind sails. The three-dimensional compressible Euler equations are modified using the idea of artificial compressibility and discretized on unstructured tetrahedral grids to provide estimates of lift and drag for upwind sail configurations. Convergence acceleration techniques like multigrid and residual averaging are used along with parallel computing platforms to enable these simulations to be performed in a few minutes. To account for the elastic nature of the sail cloth, this flow solver was coupled to NASTRAN to provide estimates of the deflections caused by the pressure loading. The results of this aeroelastic simulation showed that the major effect of the sail elasticity was to alter the pressure distribution around the leading edge of the head and the main sail. Adjoint-based design methods were developed next and were used to induce changes to the camber
Analysis and Correction of Systematic Height Model Errors
NASA Astrophysics Data System (ADS)
Jacobsen, K.
2016-06-01
The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As is now standard, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites, and in some cases caused by a small base length, such an image orientation does not lead to the possible accuracy of the height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and attitude recording at only 4 Hz, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with a small base length. The small base length enlarges small systematic errors in object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of the height models, but low frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition a model deformation at GCP locations may lead to a non-optimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are used.
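The shift-and-tilt (leveling) correction against ground control points is a small least-squares problem: fit a plane to the DHM-minus-GCP height differences and subtract it. A minimal sketch, with hypothetical GCP coordinates and a simulated bias:

```python
import numpy as np

def level_dhm(x, y, h_dhm, h_gcp):
    """Fit a tilt plane a + b*x + c*y to the DHM-minus-GCP differences
    (least squares); subtracting it levels the height model."""
    d = h_dhm - h_gcp
    A = np.column_stack([np.ones_like(x), x, y])
    return np.linalg.lstsq(A, d, rcond=None)[0]  # [shift, tilt_x, tilt_y]

# hypothetical GCPs (metres) with a simulated shift-plus-tilt bias
x = np.array([0.0, 1000.0, 0.0, 1000.0, 500.0])
y = np.array([0.0, 0.0, 1000.0, 1000.0, 500.0])
h_gcp = np.array([100.0, 120.0, 110.0, 130.0, 115.0])
h_dhm = h_gcp + 2.0 + 0.001 * x - 0.0005 * y
shift, tx, ty = level_dhm(x, y, h_dhm, h_gcp)
```

The caveat in the abstract applies directly: if the GCPs are poorly distributed or sit on locally deformed parts of the model, the fitted plane absorbs those local errors and the leveling is wrong.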
Error analysis for Mariner Venus/Mercury 1973 conducted at the JPL Mesa west antenna range
NASA Technical Reports Server (NTRS)
Vincent, N. L.; Smith, C. A.; Brejcha, A. J.; Curtis, H. A.
1973-01-01
Theoretical analysis and experimental data are combined to yield the errors to be used with antenna gain, antenna patterns, and RF cable insertion loss measurements for the Mariner Venus-Mercury 1973 Flight Project. These errors apply to measurements conducted at the JPL Mesa, West Antenna Range, on the high gain antenna, low gain antenna, and RF coaxial cables.
Software reliability: Application of a reliability model to requirements error analysis
NASA Technical Reports Server (NTRS)
Logan, J.
1980-01-01
The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.
Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students
ERIC Educational Resources Information Center
Muzangwa, Jonatan; Chifamba, Peter
2012-01-01
This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 B.Ed. Mathematics students at Great Zimbabwe University. Data were gathered through the use of two exercises on Calculus 1 and 2. The analysis of the results from the tests showed that a majority of the errors were due…
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.
ERIC Educational Resources Information Center
Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki
2000-01-01
Describes a new component called the "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The WEAM can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)
ERIC Educational Resources Information Center
Kingsdorf, Sheri; Krawec, Jennifer
2014-01-01
Solving word problems is a common area of struggle for students with learning disabilities (LD). In order for instruction to be effective, we first need to have a clear understanding of the specific errors exhibited by students with LD during problem solving. Error analysis has proven to be an effective tool in other areas of math but has had…
Composing Boolean Search Statements: Self-Confidence, Concept Analysis, Search Logic, and Errors.
ERIC Educational Resources Information Center
Nahl, Diane; Harada, Violet H.
1996-01-01
A study of 191 juniors and seniors from 6 Oahu high schools tested their ability to interpret and construct search statements after reading brief instructions on concept analysis, Boolean operators, and search statement format. On average, students made two errors per statement; scores and types of errors are examined for influences of gender and…
Carriage Error Identification Based on Cross-Correlation Analysis and Wavelet Transformation
Mu, Donghui; Chen, Dongju; Fan, Jinwei; Wang, Xiaofeng; Zhang, Feihu
2012-01-01
This paper proposes a novel method for identifying carriage errors. A general mathematical model of a guideway system is developed based on the multi-body system method, and with this model most error sources in the guideway system can be measured. The flatness of a workpiece measured by the PGI1240 profilometer is represented by a wavelet. Cross-correlation analysis is performed to identify the error sources of the carriage. The error model is developed based on experimental results on the low-frequency components of the signals. With the use of wavelets, the identification precision of the test signals is very high. PMID:23012558
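As a rough illustration of the cross-correlation step (a generic sketch under assumed signals, not the authors' code), one can correlate a measured profile against a candidate carriage error signal; a strong peak near zero lag flags that signal as a likely error source:

```python
import numpy as np

def normalized_xcorr(x, y):
    """Normalized cross-correlation; peak value approximates the
    correlation coefficient at the best-matching lag."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return np.correlate(x, y, mode="full")

rng = np.random.default_rng(0)
t = np.arange(1000)
carriage = np.sin(2 * np.pi * t / 50)                          # simulated low-frequency carriage error
profile = 0.8 * carriage + 0.2 * rng.standard_normal(t.size)   # measured profile = error + noise

c = normalized_xcorr(profile, carriage)
lag = c.argmax() - (len(t) - 1)   # index len(t)-1 of the "full" output is zero lag
print(lag, c.max())
```

A peak value near 1 at (or near) zero lag indicates the candidate error signal dominates the measured profile, which is the logic behind attributing low-frequency workpiece form error to the carriage.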
Error Analysis of Remotely-Acquired Mossbauer Spectra
NASA Technical Reports Server (NTRS)
Schaefer, Martha W.; Dyar, M. Darby; Agresti, David G.; Schaefer, Bradley E.
2005-01-01
On the Mars Exploration Rovers, Mossbauer spectroscopy has recently been called upon to assist in the task of mineral identification, a job for which it is rarely used in terrestrial studies. For example, Mossbauer data were used to support the presence of olivine in Martian soil at Gusev and jarosite in the outcrop at Meridiani. The strength (and uniqueness) of these interpretations lies in the assumption that peak positions can be determined with high degrees of both accuracy and precision. We summarize here what we believe to be the major sources of error associated with peak positions in remotely-acquired spectra, and speculate on their magnitudes. Our discussion here is largely qualitative because the necessary background information on MER calibration sources, geometries, etc., has not yet been released to the PDS; we anticipate that a more quantitative discussion can be presented by March 2005.
Detecting medication errors: analysis based on a hospital's incident reports.
Härkänen, Marja; Turunen, Hannele; Saano, Susanna; Vehviläinen-Julkunen, Katri
2015-04-01
The aim of this paper is to analyse how medication incidents are detected in different phases of the medication process. The study design is a retrospective register study. The material was collected from one university hospital's web-based incident reporting database in Finland. In 2010, 1617 incident reports were made, 671 of which were medication incidents and were analysed in this study. Statistical methods were used to analyse the material. Results were reported using frequencies and percentages. Twenty-one percent of all medication incidents were detected while documenting or reading the documents. One-sixth of medication incidents were detected while medicating the patients, and approximately one-tenth were detected while verifying the medicines. It is important to learn how to break the chain of medication errors as early as possible. Findings showed that for nurses, the ability to concentrate on documenting and medicating the patient is essential. PMID:24256158
Comet Tempel 2: Orbit, ephemerides and error analysis
NASA Technical Reports Server (NTRS)
Yeomans, D. K.
1978-01-01
The dynamical behavior of comet Tempel 2 is investigated and the comet is found to be very well behaved and easily predictable. The nongravitational forces affecting the motion of this comet are the smallest of any comet that is affected by nongravitational forces. The sign and time history of these nongravitational forces imply (1) a direct rotation of the comet's nucleus and (2) the comet's ability to outgas has not changed substantially over its entire observational history. The well behaved dynamical motion of the comet, the well observed past apparitions, the small nongravitational forces and the excellent 1988 ground based observing conditions all contribute to relatively small position and velocity errors in 1988 -- the year of a proposed rendezvous space mission to this comet. To assist in planned ground based and earth orbital observations of this comet, ephemerides are given for the 1978-79, 1983-84 and 1988 apparitions.
Error analysis and implementation issues for energy density probe
NASA Astrophysics Data System (ADS)
Locey, Lance L.; Woolford, Brady L.; Sommerfeldt, Scott D.; Blotter, Jonathan D.
2001-05-01
Previous research has demonstrated the utility of acoustic energy density measurements as a means to gain a greater understanding of acoustic fields. Three spherical energy density probe designs are under development. The first probe design has three orthogonal pairs of surface mounted microphones. The second probe design utilizes a similarly sized sphere with four surface mounted microphones. The four microphones are located at the origin and unit vectors of a Cartesian coordinate system, where the origin and the tips of the three unit vectors all lie on the surface of the sphere. The third probe design consists of a similarly sized sphere, again with four surface microphones, each placed at the vertices of a regular tetrahedron. The sensing elements of all three probes are Panasonic electret microphones. The work presented here will expand on previously reported work, and address bias errors, spherical scattering effects, and practical implementation issues. [Work supported by NASA.]
Numerical bifurcation analysis of immunological models with time delays
NASA Astrophysics Data System (ADS)
Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady
2005-12-01
In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.
Numerical Uncertainty Quantification for Radiation Analysis Tools
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha
2007-01-01
Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to get an accurate result. So convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
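The ray-count trade-off above can be sketched with a toy Monte Carlo: estimate a mean shielding path length with N random rays and watch the standard error shrink roughly as 1/sqrt(N) while the cost grows linearly. The slab geometry and angle distribution here are assumptions for illustration only, not the actual vehicle ray-tracing setup:

```python
import numpy as np

def mc_estimate(n_rays, rng):
    """Hypothetical shield: a slab of unit thickness traversed at random angles.
    Returns the mean slant path and its standard error."""
    mu = rng.uniform(0.1, 1.0, n_rays)   # cosine of incidence angle (assumed range)
    path = 1.0 / mu                      # slant path length through the slab
    return path.mean(), path.std(ddof=1) / np.sqrt(n_rays)

rng = np.random.default_rng(1)
for n in (100, 1000, 10000):
    est, se = mc_estimate(n, rng)
    print(n, round(est, 3), round(se, 4))   # uncertainty falls ~1/sqrt(n), cost rises ~n
```

Plotting standard error against run time for increasing N gives exactly the cost-benefit curve the abstract describes.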
A numerical comparison of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analyses methodologies, but none as comprehensive as the current work.
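Two of the sensitivity measures such comparisons typically cover can be sketched side by side: a one-at-a-time relative sensitivity from finite differences, and a sampling-based standardized regression coefficient (SRC). The toy dose model and parameter values below are assumptions for illustration, not the tritium model of the report:

```python
import numpy as np

def model(x):            # hypothetical multiplicative dose model Y = a * b^2 / c
    a, b, c = x
    return a * b**2 / c

x0 = np.array([1.0, 2.0, 4.0])   # nominal parameter values (made up)

def relative_sensitivity(i, h=1e-6):
    """One-at-a-time relative sensitivity (dY/Y)/(dX/X) via finite differences."""
    xp = x0.copy()
    xp[i] *= (1 + h)
    return (model(xp) - model(x0)) / model(x0) / h

# Sampling-based ranking: standardized regression coefficients from a random sample
rng = np.random.default_rng(2)
X = x0 * (1 + 0.1 * rng.standard_normal((5000, 3)))   # 10% relative input variability
Y = np.array([model(x) for x in X])
Z = (X - X.mean(0)) / X.std(0)
src, *_ = np.linalg.lstsq(Z, (Y - Y.mean()) / Y.std(), rcond=None)

print([round(relative_sensitivity(i), 2) for i in range(3)], np.abs(src).round(2))
```

Both measures rank parameter b first (its squared role gives relative sensitivity 2), which is the kind of agreement-or-disagreement across methods the report assesses.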
Numerical analysis on pump turbine runaway points
NASA Astrophysics Data System (ADS)
Guo, L.; Liu, J. T.; Wang, L. Q.; Jiao, L.; Li, Z. F.
2012-11-01
To research the character of pump turbine runaway points with different guide vane openings, a hydraulic model was established based on a pumped storage power station. The RNG k-ε model and the SIMPLEC algorithm were used to simulate the internal flow fields. The result of the simulation was compared with the test data, and good agreement was obtained between the experimental data and the CFD results. Based on this model, an internal flow analysis was carried out. The results show that when the pump turbine ran at the runaway speed, many vortexes appeared in the flow passage of the runner. These vortexes could always be observed even if the guide vane opening changed, and they are an important source of energy loss in the runaway condition. The pressures on the two sides of the runner blades were almost the same, so the runner power is very low. The high speed induced a large centrifugal force, and the small guide vane opening gave the water velocity a large tangential component, so an obvious water ring could be observed between the runner blades and guide vanes in the small guide vane opening condition. That ring disappeared when the opening was larger than 20°. These conclusions can provide a theoretical basis for the analysis and simulation of pump turbine runaway points.
Corina, David P.; Loudermilk, Brandon C.; Detwiler, Landon; Martin, Richard F.; Brinkley, James F.; Ojemann, George
2011-01-01
This study reports on the characteristics and distribution of naming errors of patients undergoing cortical stimulation mapping (CSM). During the procedure, electrical stimulation is used to induce temporary functional lesions and locate ‘essential’ language areas for preservation. Under stimulation, patients are shown slides of common objects and asked to name them. Cortical stimulation can lead to a variety of naming errors. In the present study, we aggregate errors across patients to examine the neuroanatomical correlates and linguistic characteristics of six common errors: semantic paraphasias, circumlocutions, phonological paraphasias, neologisms, performance errors, and no-response errors. Aiding analysis, we relied on a suite of web-based querying and imaging tools that enabled the summative mapping of normalized stimulation sites. Errors were visualized and analyzed by type and location. We provide descriptive statistics to characterize the commonality of errors across patients and location. The errors observed suggest a widely distributed and heterogeneous cortical network that gives rise to differential patterning of paraphasic errors. Data are discussed in relation to emerging models of language representation that honor distinctions between frontal, parietal, and posterior temporal dorsal implementation systems and ventral-temporal lexical semantic and phonological storage and assembly regions; the latter of which may participate both in language comprehension and production. PMID:20452661
NASA Astrophysics Data System (ADS)
Hasegawa, Kei; Geller, Robert J.; Hirabayashi, Nobuyasu
2016-06-01
We present a theoretical analysis of the error of synthetic seismograms computed by higher-order finite-element methods (ho-FEMs). We show the existence of a previously unrecognized type of error due to degenerate coupling between waves with the same frequency but different wavenumbers. These results are confirmed by simple numerical experiments using the spectral element method as an example of ho-FEMs. Errors of the type found by this study may occur generally in applications of ho-FEMs.
Cooper, S.E.; Wreathall, J.; Thompson, C.M., Drouin, M.; Bley, D.C.
1996-10-01
This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, "A Technique for Human Error Analysis" (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.
NASA Astrophysics Data System (ADS)
Bramich, D. M.; Bachelet, E.; Alsubai, K. A.; Mislis, D.; Parley, N.
2015-05-01
Context. Understanding the source of systematic errors in photometry is essential for their calibration. Aims: We investigate how photometry performed on difference images can be influenced by errors in the photometric scale factor. Methods: We explore the equations for difference image analysis (DIA), and we derive an expression describing how errors in the difference flux, the photometric scale factor and the reference flux are propagated to the object photometry. Results: We find that the error in the photometric scale factor is important, and while a few studies have shown that it can be at a significant level, it is currently neglected by the vast majority of photometric surveys employing DIA. Conclusions: Minimising the error in the photometric scale factor, or compensating for it in a post-calibration model, is crucial for reducing the systematic errors in DIA photometry.
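The propagation the abstract refers to can be written down compactly. The sketch below uses a common DIA reconstruction, f = f_ref + f_diff / p, which is my notation for illustration rather than necessarily the paper's exact equations; the flux values and error levels are made up. First-order propagation of independent errors in the reference flux, difference flux and scale factor p gives:

```python
import numpy as np

def dia_flux_error(f_diff, p, sig_ref, sig_diff, sig_p):
    """sigma_f for f = f_ref + f_diff/p, assuming independent errors:
    sigma_f^2 = sigma_ref^2 + (sigma_diff/p)^2 + (f_diff * sigma_p / p^2)^2."""
    return np.sqrt(sig_ref**2
                   + (sig_diff / p)**2
                   + (f_diff * sig_p / p**2)**2)

# Even a 1% scale-factor error can dominate for a large difference flux:
err_no_p   = dia_flux_error(f_diff=5e4, p=1.0, sig_ref=100.0, sig_diff=150.0, sig_p=0.0)
err_with_p = dia_flux_error(f_diff=5e4, p=1.0, sig_ref=100.0, sig_diff=150.0, sig_p=0.01)
print(err_no_p, err_with_p)
```

The third term scales with the difference flux itself, which is why neglecting the scale-factor error produces systematics precisely for the variable objects a DIA survey cares about.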
Error analysis of the articulated flexible arm CMM based on geometric method
NASA Astrophysics Data System (ADS)
Wang, Xueying; Liu, Shugui; Zhang, Guoxiong; Wang, Bin
2006-11-01
In order to overcome the disadvantages of the traditional CMM (Coordinate Measuring Machine), a new type of CMM with rotational joints and flexible arms, named the articulated arm flexible CMM, has been developed, in which linear measurements are replaced by angular ones. Firstly, a quasi-spherical coordinate system is put forward and the ideal mathematical model of the articulated arm flexible CMM is established. On the basis of a full analysis of the factors affecting the measurement accuracy, the ideal mathematical model is modified into an error model according to the structural parameters and geometric errors. A geometric method is proposed to verify the feasibility of the error model, and the results convincingly show its validity. Position errors caused by different types of error sources are analyzed, and a theoretical basis is established for introducing error compensation and improving the accuracy of the articulated arm flexible CMM.
Numerical analysis of human dental occlusal contact
NASA Astrophysics Data System (ADS)
Bastos, F. S.; Las Casas, E. B.; Godoy, G. C. D.; Meireles, A. B.
2010-06-01
The purpose of this study was to obtain real contact areas, forces, and pressures acting on human dental enamel as a function of the nominal pressure during dental occlusal contact. The described development consisted of three steps: characterization of the surface roughness by a 3D contact profilometry test, finite element analysis of micro responses for each pair of main asperities in contact, and homogenization of macro responses using an assumed probability density function. The inelastic deformation of enamel was considered, adjusting the stress-strain relationship of sound enamel to that obtained from instrumented indentation tests conducted with a spherical tip. A mechanical part of the static friction coefficient was estimated as the ratio between tangential and normal components of the overall resistive force, resulting in μd = 0.057. Less than 1% of contact pairs reached the yield stress of enamel, indicating that the occlusal contact is essentially elastic. The micro-models indicated an average hardness of 6.25 GPa, and the homogenized result for the macroscopic interface was around 9 GPa. Further refinements of the methodology and verification using experimental data can provide a better understanding of processes related to contact, friction and wear of human tooth enamel.
Combustion irreversibilities: Numerical simulation and analysis
NASA Astrophysics Data System (ADS)
Silva, Valter; Rouboa, Abel
2012-08-01
An exergy analysis was performed considering the combustion of methane and of agro-industrial residues produced in Portugal (forest residues and vine pruning). Since the irreversibilities of a thermodynamic process are path dependent, the combustion process was considered as resulting from different hypothetical paths, each one characterized by four main sub-processes: reactant mixing, fuel oxidation, internal thermal energy exchange (heat transfer), and product mixing. The exergetic efficiency was computed using a zero-dimensional model developed in an in-house Visual Basic code. It was concluded that the exergy losses were mainly due to the internal thermal energy exchange sub-process. The exergy losses from this sub-process are higher when the reactants are preheated up to the ignition temperature without previous fuel oxidation. On the other hand, the global exergy destruction can be reduced by increasing the pressure, the reactant temperature and the oxygen content of the oxidant stream. This methodology allows the identification of the phenomena and processes that have the larger exergy losses, the understanding of why these losses occur, and of how the exergy changes with the parameters associated with each system, which is crucial for implementing syngas combustion from biomass products as a competitive technology.
NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET
NASA Technical Reports Server (NTRS)
Kumar, A.
1994-01-01
The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Some suggestions for reducing the likelihood of errors, especially by modifying the performance shaping factors and the dependencies among tasks, are provided. PMID:27014485
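The SPAR-H quantification step can be sketched as follows. A nominal human error probability (HEP) is scaled by the product of performance-shaping-factor (PSF) multipliers, with the SPAR-H adjustment factor applied when three or more negative PSFs are present so the result stays below 1. The multiplier values below are hypothetical, not those assigned in the study:

```python
NOMINAL_HEP = 0.01   # SPAR-H nominal HEP for diagnosis tasks

def spar_h_hep(nominal, psf_multipliers):
    """HEP = nominal * product(PSFs); with >= 3 negative PSFs apply the
    SPAR-H adjustment: nominal*comp / (nominal*(comp - 1) + 1)."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        return nominal * composite / (nominal * (composite - 1) + 1)
    return min(nominal * composite, 1.0)

# e.g. hypothetical multipliers for inadequate time, high stress, high complexity
print(spar_h_hep(NOMINAL_HEP, [2, 5, 10]))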
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
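A generic sketch of the kind of predictive equation used in dimension analysis, with an empirically derived estimator of prediction error: fit ln(biomass) = a + b·ln(dbh) by least squares and take the residual standard error on the log scale. All numbers below are made up for illustration; they are not the paper's trees or coefficients:

```python
import numpy as np

# Hypothetical tree data: diameter at breast height (cm) and dry biomass (kg)
dbh = np.array([5., 8., 12., 15., 20., 25., 30.])
biomass = np.array([4.1, 12.5, 35.0, 62.0, 130.0, 240.0, 390.0])

# Allometric fit: ln(biomass) = a + b * ln(dbh)
X = np.column_stack([np.ones_like(dbh), np.log(dbh)])
coef, *_ = np.linalg.lstsq(X, np.log(biomass), rcond=None)
resid = np.log(biomass) - X @ coef
s_pred = np.sqrt(resid @ resid / (len(dbh) - 2))   # residual standard error (log scale)

def predict(d):
    return np.exp(coef[0] + coef[1] * np.log(d))

print(coef.round(2), round(s_pred, 3), round(predict(10.0), 1))
```

The systematic prediction errors the abstract notes for small aspen would show up here as residuals trending with dbh, which is the diagnostic motivating separate within-species models.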
Application of Bayesian statistical techniques in the analysis of spacecraft pointing errors
NASA Astrophysics Data System (ADS)
Dungate, D. G.
1993-09-01
A key problem in the statistical analysis of spacecraft pointing performance is the justifiable identification of a Probability Density Function (PDF) for each contributing error source. The drawbacks of Gaussian distributions are well known, and more flexible families of distributions have been identified, but often only limited data is available to support PDF assignment. Two methods based on Bayesian statistical principles, each working from alternative viewpoints, are applied to the problem here, and appear to offer significant advantages in the analysis of many error types. In particular, errors such as time-varying thermal distortions, where data is only available via a small number of Finite Element Analyses, appear to be satisfactorily dealt with via one of these methods, which also explicitly allows for the inclusion of estimated errors in quantities formed from the data available for a particular error source.
Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD
NASA Astrophysics Data System (ADS)
Yao, Yuan; Niu, Qunjie; Liang, Kun
2016-09-01
A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an Intensified Charge-Coupled Device (ICCD) is capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely the laser frequency instability, the calibration error of the F-P etalon and the random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and the F-P etalon cause about 4 MHz of error in both the Brillouin shift and the linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive and comparative analysis of the overall errors under various conditions proved that a colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, and a warmer ocean (30 °C) is better measured with the Brillouin shift.
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1996-01-01
We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
Error analysis of penetrator impacts on bodies without atmospheres
NASA Technical Reports Server (NTRS)
Davis, D. R.
1975-01-01
Penetrators are missile-shaped objects designed to implant electronic instrumentation in a variety of surface materials, with a nominal impact speed around 150 m/sec. There is interest in applying this concept to in situ subsurface studies of extraterrestrial bodies and planetary satellites. Since many of these objects do not have atmospheres, the feasibility of successfully guiding penetrators to the required near-zero angle-of-attack impact conditions in the absence of an atmosphere was analyzed. Two potential targets, the moon and Mercury, were considered, and several different penetrator deployment modes were involved. Impact errors arising from open-loop and closed-loop deployment control systems were given particular attention. Successful penetrator emplacement requires: (1) that the impact speed be controlled, nominally to 150 m/sec, (2) that the angle of attack be in the range 0 deg - 11 deg at impact, and (3) that the impact flight path angle be within 15 deg of vertical.
Confirmation of standard error analysis techniques applied to EXAFS using simulations
Booth, Corwin H; Hu, Yung-Jin
2009-12-14
Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance matrix representation of real parameter errors in typical fitting routines. Despite major advances in EXAFS analysis and in understanding all potential uncertainties, these methods are not routinely applied by all EXAFS users. Consequently, reported parameter errors are not reliable in many EXAFS studies in the literature. This situation has made many EXAFS practitioners leery of conventional error analysis applied to EXAFS data. However, conventional error analysis, if properly applied, can teach us more about our data, and even about the power and limitations of the EXAFS technique. Here, we describe the proper application of conventional error analysis to r-space fitting to EXAFS data. Using simulations, we demonstrate the veracity of this analysis by, for instance, showing that the number of independent data points from Stern's rule is balanced by the degrees of freedom obtained from a χ² statistical analysis. By applying such analysis to real data, we determine the quantitative effect of systematic errors. In short, this study is intended to remind the EXAFS community about the role of fundamental noise distributions in interpreting our final results.
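The bookkeeping behind Stern's rule can be shown in a few lines. As commonly quoted, the number of independent data points in an EXAFS fit over ranges Δk and ΔR is N_idp = 2·Δk·ΔR/π + 2 (some practitioners omit the +2), and the degrees of freedom are N_idp minus the number of fitted parameters. The fit ranges below are a typical example, not taken from this study:

```python
import math

def n_independent(dk, dr):
    """Stern's rule: N_idp = 2*dk*dr/pi + 2."""
    return 2.0 * dk * dr / math.pi + 2.0

def degrees_of_freedom(dk, dr, n_params):
    """Degrees of freedom available after fitting n_params variables."""
    return n_independent(dk, dr) - n_params

# e.g. a fit over k = 3-13 A^-1 (dk = 10) and R = 1-3 A (dr = 2)
nidp = n_independent(10.0, 2.0)
print(round(nidp, 1), round(degrees_of_freedom(10.0, 2.0, 8), 1))
```

A fit whose parameter count approaches N_idp has essentially no degrees of freedom left, which is why the chi-squared bookkeeping the abstract describes matters for honest error bars.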
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Errors of DWPF frit analysis: Final report. Revision 1
Schumacher, R.F.
1993-01-20
Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis from a commercial analytical laboratory. The following effort provides additional quantitative information on the variability of frit chemical analyses at two commercial laboratories. Identical samples of IDMS Frit 202 were chemically analyzed at two commercial laboratories and at three different times over a period of four months. The SRL-ADS analyses, after correction with the reference standard and normalization, provided confirmatory information, but did not detect the low silica level in one of the frit samples. A methodology utilizing elliptical limits for confirming the certificate of conformance or confirmatory analysis was introduced and recommended for use when the analysis values are close but not within the specification limits. It was also suggested that the lithia specification limits might be reduced as long as CELS is used to confirm the analysis.
Method of error analysis for phase-measuring algorithms applied to photoelasticity.
Quiroga, J A; González-Cano, A
1998-07-10
We present a method of error analysis that can be applied to phase-measuring algorithms used in photoelasticity. We calculate the contributions to the measurement error of the different elements of a circular polariscope as perturbations of the Jones matrices associated with each element. The Jones matrix of the real polariscope can then be calculated as a sum of the nominal matrix and a series of contributions that depend on the errors associated with each element separately. We apply this method to the analysis of phase-measuring algorithms for the determination of isoclinics and isochromatics, including comparisons with real measurements. PMID:18285900
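The perturbation idea behind this abstract can be illustrated with a toy polariscope leg; the matrices and error values below are illustrative assumptions, not the paper's polariscope model. To first order, a retardance error eps in one element adds eps times the derivative of that element's Jones matrix, with a residual of order eps squared:

```python
import cmath
import math

def matmul(a, b):
    """Product of two 2x2 complex Jones matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def retarder(delta):
    """Jones matrix of a linear retarder of retardance delta, fast axis horizontal."""
    return [[cmath.exp(-1j * delta / 2), 0.0],
            [0.0, cmath.exp(1j * delta / 2)]]

POLARIZER_X = [[1.0, 0.0], [0.0, 0.0]]  # horizontal linear polarizer

def leg(delta):
    """One polariscope leg: polarizer followed by a retarder."""
    return matmul(retarder(delta), POLARIZER_X)

delta0 = math.pi / 2  # nominal quarter-wave plate

def residual(eps):
    """Max element-wise residual after subtracting the first-order term."""
    exact = leg(delta0 + eps)
    nominal = leg(delta0)
    # analytic derivative of the retarder, propagated through the polarizer
    d_ret = [[-0.5j * cmath.exp(-1j * delta0 / 2), 0.0],
             [0.0, 0.5j * cmath.exp(1j * delta0 / 2)]]
    linear = matmul(d_ret, POLARIZER_X)
    return max(abs(exact[i][j] - nominal[i][j] - eps * linear[i][j])
               for i in range(2) for j in range(2))

r_small = residual(1e-3)  # O(eps^2), roughly eps**2 / 8 here
```

Shrinking eps by a factor of ten shrinks the residual by about a hundred, confirming that summing the per-element perturbation terms is a valid first-order model.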
Error analysis of rigid body posture measurement system based on circular feature points
NASA Astrophysics Data System (ADS)
Huo, Ju; Cui, Jishan; Yang, Ning
2015-02-01
To determine pose parameters with monocular vision, with target feature points arranged on a planar quadrilateral, an improved two-stage iterative algorithm is proposed to optimize the calculation model for rigid body posture measurement. A monocular vision rigid body posture measurement system is designed; a unified method is determined experimentally to bring the measured coordinates of each feature point into a common coordinate system; and the sources of error in the rigid body posture measurement system are analyzed theoretically through simulation experiments. Combined with analysis of actual experiments under simulated error conditions, the pose measurement accuracy and the comprehensive error of the measurement system are given, providing theoretical guidance for improving the measurement precision.
Generalized multiplicative error models: Asymptotic inference and empirical analysis
NASA Astrophysics Data System (ADS)
Li, Qian
This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the feature of a nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish the consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, the interaction between trading variables, and the time needed for price equilibrium after a perturbation in each market. The clustering effect is studied through the use of a univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the impulse response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the differences between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
Quantitative error analysis for computer assisted navigation: a feasibility study
Güler, Ö.; Perwög, M.; Kral, F.; Schwarm, F.; Bárdosi, Z. R.; Göbel, G.; Freysinger, W.
2013-01-01
Purpose The benefit of computer-assisted navigation depends on the registration process, at which patient features are correlated to some preoperative imagery. The operator-induced uncertainty in localizing patient features, the User Localization Error (ULE), is unknown and most likely dominates the application accuracy. This initial feasibility study aims at providing first data for ULE with a research navigation system. Methods Active optical navigation was done in CT images of a plastic skull, an anatomic specimen (both with implanted fiducials) and a volunteer with anatomical landmarks exclusively. Each object was registered ten times with 3, 5, 7, and 9 registration points. Measurements were taken at 10 targets (anatomic specimen and volunteer) and 11 targets (plastic skull). The active NDI Polaris system was used under ideal working conditions (tracking accuracy 0.23 mm root mean square, RMS; probe tip calibration 0.18 mm RMS). Variances of tracking along the principal directions were measured as 0.18 mm2, 0.32 mm2, and 0.42 mm2. ULE was calculated from predicted application accuracy with isotropic and anisotropic models and from experimental variances, respectively. Results The ULE was determined from the variances as 0.45 mm (plastic skull), 0.60 mm (anatomic specimen), and 4.96 mm (volunteer). The predicted application accuracy did not yield consistent values for the ULE. Conclusions Quantitative data of application accuracy could be tested against prediction models with iso- and anisotropic noise models and revealed some discrepancies. This could potentially be due to the facts that navigation and one prediction model wrongly assume isotropic noise (tracking is anisotropic), while the anisotropic noise prediction model assumes an anisotropic registration strategy (registration is isotropic in typical navigation systems). The ULE data are presumably the first quantitative values for the precision of localizing anatomical landmarks and implanted fiducials.
NASA Astrophysics Data System (ADS)
Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim
2012-12-01
This article deals with the application of the Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.
On the error analysis of quantum repeaters with encoding
NASA Astrophysics Data System (ADS)
Epping, Michael; Kampermann, Hermann; Bruß, Dagmar
2016-03-01
Losses of optical signals scale exponentially with the distance. Quantum repeaters are devices that tackle these losses in quantum communication by splitting the total distance into shorter parts. Today, two types of quantum repeater are the subject of research in the field of quantum information: those that use two-way communication and those that only use one-way communication. Here we explain the details of the performance analysis for repeaters of the second type. Furthermore, we compare the two different schemes. Finally, we show how the performance analysis generalizes to large-scale quantum networks.
Analysis of the operational error of heat flux transducers placed on wall surfaces
NASA Astrophysics Data System (ADS)
Baba, Tetsuya; Ono, Akira; Hattori, Susumu
1985-07-01
The operational error in the heat flux measurements is theoretically investigated when the heat flux from a furnace wall to the environment is measured by a heat flux transducer. Change of the original heat flux, which is caused by placing a transducer on the furnace wall, is clarified by solving a three-dimensional heat transfer problem. The operational error is explicitly given by a simple equation taking into account the thermal properties of the furnace wall and the transducer. Numerical results are also provided for a typical application to industrial furnaces.
Quantitative analysis of numerical solvers for oscillatory biomolecular system models
Quo, Chang F; Wang, May D
2008-01-01
Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges from 10-15 to 1010, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluation, partial derivative, LU decomposition, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with quantitative performance assessment metric, we show that it is possible
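The MATLAB solver study above is not reproduced here, but the core stiffness phenomenon it rests on can be sketched in plain Python: on a stiff test problem, an explicit method run above its stability limit diverges, while an implicit method at the same step size stays accurate. The test equation and step size are illustrative choices, not the Oregonator model:

```python
import math

def f(t, y):
    """Classic stiff test problem: y' = -1000*(y - cos(t)) - sin(t),
    with exact solution y = cos(t) for y(0) = 1."""
    return -1000.0 * (y - math.cos(t)) - math.sin(t)

def explicit_euler(h, t_end):
    t, y = 0.0, 1.0
    while t < t_end - 1e-12:
        y += h * f(t, y)
        t += h
    return y

def implicit_euler(h, t_end):
    t, y = 0.0, 1.0
    while t < t_end - 1e-12:
        t_new = t + h
        # The problem is linear, so the implicit update
        # y_new = y + h*f(t_new, y_new) solves in closed form:
        y = (y + h * (1000.0 * math.cos(t_new) - math.sin(t_new))) / (1.0 + 1000.0 * h)
        t = t_new
    return y

h = 0.01  # far above the explicit stability limit h < 2/1000
exact = math.cos(1.0)
err_implicit = abs(implicit_euler(h, 1.0) - exact)  # small
err_explicit = abs(explicit_euler(h, 1.0) - exact)  # astronomically large
```

This is the same qualitative behavior the abstract reports for MATLAB's nonstiff versus stiff solvers (e.g. ode45 versus ode15s): the answer, not just the cost, can depend on the solver.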
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast-turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to
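G-CAT itself is MATLAB-based and far richer than can be shown here, but the underlying idea of propagating statistics instead of trajectories can be sketched with a toy linear covariance propagation. The dynamics matrix, noise levels, and step count below are illustrative assumptions, not G-CAT's 120-state filter:

```python
import math

def mat_mul(a, b):
    """Dense matrix product for small list-of-lists matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def transpose(a):
    return [list(col) for col in zip(*a)]

# Toy 1-D kinematic state [position, velocity] propagated over dt.
# A covariance tool evolves P = F P F^T + Q once per step instead of
# running thousands of Monte Carlo trajectories.
dt = 1.0
F = [[1.0, dt], [0.0, 1.0]]
Q = [[0.0, 0.0], [0.0, 0.01]]   # assumed process noise on velocity
P = [[1.0, 0.0], [0.0, 0.25]]   # assumed initial knowledge covariance

for _ in range(10):
    P = mat_add(mat_mul(mat_mul(F, P), transpose(F)), Q)

sigma_pos = math.sqrt(P[0][0])  # 1-sigma position knowledge after 10 steps
```

A single pass over the recursion yields the full error ellipse (here, the 2x2 covariance P), which is the "single run versus thousands of simulations" contrast the abstract describes.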
Modeling decenter, wedge, and tilt errors in optical tolerance analysis and simulation
NASA Astrophysics Data System (ADS)
Youngworth, Richard N.; Herman, Eric
2014-09-01
Many optical designs have lenses with circular outer profiles that are mounted in cylindrical barrels. This geometry leads to errors on mounting parameters such as decenter and tilt, and to component errors such as wedge, which are best modeled with a cylindrical or spherical coordinate system. In the absence of clocking registration, this class of errors is effectively reduced to an error magnitude with a random clocking azimuth. Optical engineers consequently must fully understand how cylindrical or spherical basis geometry relates to the Cartesian representation. Understanding these factors, as well as how optical design codes can differ in error application for Monte Carlo simulations, produces the most effective statistical simulations for tolerance assignment, analysis, and verification. This paper covers these topics to aid practicing optical engineers and designers.
Error analysis for relay type satellite-aided search and rescue systems
NASA Technical Reports Server (NTRS)
Marini, J. W.
1977-01-01
An analysis was made of the errors in the determination of the position of an emergency transmitter in a satellite aided search and rescue system. The satellite was assumed to be at a height of 820 km in a near circular near polar orbit. Short data spans of four minutes or less were used. The error sources considered were measurement noise, transmitter frequency drift, ionospheric effects and error in the assumed height of the transmitter. The errors were calculated for several different transmitter positions, data rates and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies. In a typical case, in which four Doppler measurements were taken over a span of two minutes, the position error was about 1.2 km.
Unified Analysis for Antenna Pointing and Structural Errors. Part 1. Review
NASA Technical Reports Server (NTRS)
Abichandani, K.
1983-01-01
A necessary step in the design of a high accuracy microwave antenna system is to establish the signal error budget due to structural, pointing, and environmental parameters. A unified approach in performing error budget analysis as applicable to ground-based microwave antennas of different size and operating frequency is discussed. Major error sources contributing to the resultant deviation in antenna boresighting in pointing and tracking modes and the derivation of the governing equations are presented. Two computer programs (SAMCON and EBAP) were developed in-house, including the antenna servo-control program, as valuable tools in the error budget determination. A list of possible errors giving their relative contributions and levels is presented.
NASA Technical Reports Server (NTRS)
Borgia, Andrea; Spera, Frank J.
1990-01-01
This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
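The two skill scores used above can be written down directly from a 2x2 contingency table of dichotomous forecasts. The counts below are made-up illustrations, not the paper's synthetic data:

```python
def percent_correct(hits, false_alarms, misses, correct_negatives):
    """PC: fraction of all yes/no forecasts (e.g. contrail occurrence) that were right."""
    total = hits + false_alarms + misses + correct_negatives
    return (hits + correct_negatives) / total

def hanssen_kuipers(hits, false_alarms, misses, correct_negatives):
    """HKD: probability of detection minus probability of false detection."""
    pod = hits / (hits + misses)
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod - pofd

# Hypothetical contingency counts for a dichotomous contrail forecast
pc = percent_correct(40, 10, 5, 45)    # (40 + 45) / 100 = 0.85
hkd = hanssen_kuipers(40, 10, 5, 45)   # 40/45 - 10/55, about 0.71
```

Unlike PC, the HKD is insensitive to the climatological base rate, which is why the two measures can rank the same probability threshold differently, as the abstract reports.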
Milne, A D; Lee, J M
1999-01-01
The direct current electromagnetic tracking device has seen increasing use in biomechanics studies of joint kinematics and anatomical surface geometry. In these applications, a stylus is attached to a sensor to measure the spatial location of three-dimensional landmarks. Stylus calibration is performed by rotating the stylus about a fixed point in space and using regression analysis to determine the tip offset vector. Measurement errors can be induced via several pathways, including intrinsic system errors in sensor position or angle and tip-offset calibration errors. A detailed study was performed to determine the errors introduced in digitizing small surfaces with different stylus lengths (35, 55, and 65 mm) and approach angles (30 and 45 degrees) using a plastic calibration board and hemispherical models. Two-point discrimination errors increased to an average of 1.93 mm for a 254 mm step size. Rotation about a single point produced mean errors of 0.44 to 1.18 mm. Statistically significant differences in error were observed with increasing approach angles (p < 0.001). Errors of less than 6% were observed in determining the curvature of a 19 mm hemisphere. This study demonstrates that the "Flock of Birds" can be used as a digitizing tool with accuracy better than 0.76% over 254 mm step sizes. PMID:11143353
NASA Astrophysics Data System (ADS)
Raggio, Leandro Iglesias; Etcheverry, Javier; Sánchez, Gustavo; Bonadeo, Nicolás
2010-01-01
The knowledge of the acoustic velocities in solid materials is crucial for several nondestructive evaluation techniques such as wall thickness measurement, materials characterization, determination of the location of cracks and inclusions, TOFD, etc. The longitudinal wave velocity is easily measured using the ultrasonic pulse-echo technique, while a simple and accurate way to measure the shear wave speed would be a useful addition to the commonly available tools. In this work we use the impulse excitation of vibration, a well-known technique for determining the elastic constants of solid materials from the measurement of the lowest resonant frequencies excited by an impulse, to determine both the longitudinal and transverse sound velocities of steel samples. Significant differences were found when comparing the longitudinal wave velocity with the one determined by a standard pulse-echo technique. Part of the difference was traced back to the use of analytical formulas for the resonant frequencies, and corrected through the use of accurate numerical simulations. In this paper the systematic analysis of the possible error sources is reported.
NASA Astrophysics Data System (ADS)
Tao, R.; Wang, X.; Zhou, Pu; Si, Lei
2016-01-01
A theoretical model of coherent beam combining (CBC) based on a self-imaging waveguide (SIW) is built, and the effects of mismatched errors on SIW-based CBC are simulated and analysed numerically. Combining the theoretical model with the finite difference beam propagation method, two main categories of errors, assembly and non-assembly errors, are numerically studied to investigate their effect on the beam quality as measured by the M2 factor. The optimisation of the SIW and the error control principle of the system are briefly discussed. The generalised methodology offers a good reference for investigating waveguide-based high-power coherent combining of fibre lasers in a comprehensive way.
A stochastic dynamic model for human error analysis in nuclear power plants
NASA Astrophysics Data System (ADS)
Delgado-Loperena, Dharma
Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it includes concepts derived from fractal and chaos theory and suggests re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulas used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from simulation of a steam generator tube rupture (SGTR) event provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on an understanding of the patterns of human error can be gleaned, helping to reduce and prevent undesirable events.
Analysis of transmission error effects on the transfer of real-time simulation data
NASA Technical Reports Server (NTRS)
Credeur, L.
1977-01-01
An analysis was made to determine the effect of transmission errors on the quality of data transferred from the Terminal Area Air Traffic Model to a remote site. Data formatting schemes feasible within the operational constraints of the data link were proposed, and their susceptibility to both random bit errors and noise bursts was investigated. It was shown that satisfactory reliability is achieved by a scheme formatting the simulation output into three data blocks, which carries the priority data triply redundantly in the first block and, in addition, assigns retransmission priority to that first block when it is received in error.
Error-free DWDM transmission and crosstalk analysis for a silicon photonics transmitter.
Seyedi, M Ashkan; Chen, Chin-Hui; Fiorentino, Marco; Beausoleil, Ray
2015-12-28
Individual channels of a five-channel microring silicon photonics transmitter are used for bit error ratio analysis and demonstrate error-free transmission at 10Gb/s. Two channels of the same transmitter are concurrently modulated using an 80GHz channel spacing comb laser and demonstrate open eye diagrams at 10Gb/s and 12.5Gb/s. Finally, concurrent modulation with tunable lasers is done to quantify optical power penalty for link bit error ratio versus channel spacing from +100GHz to -100GHz. When using a comb laser for concurrent modulation, no direct power penalty is observed for an 80GHz channel separation. PMID:26831964
Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.
2014-01-01
The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.
Probability of error analysis for FHSS/CDMA communications in the presence of fading
NASA Astrophysics Data System (ADS)
Wickert, Mark A.; Turcotte, Randy L.
1992-04-01
Expressions are found for the error probability of a slow frequency-hopped spread-spectrum (FHSS) M-ary FSK multiple-access system in the presence of slow-nonselective Rayleigh or single-term Rician fading. The approach is general enough to allow for the consideration of independent power levels; that is to say, the power levels of the interfering signals can be varied with respect to the power level of the desired signal. The exact analysis is carried out for one and two multiple-access interferers using BFSK modulation. The analysis is general enough for the consideration of the near/far problem under the specified channel conditions. Comparisons between the error expressions developed here and previously published upper bounds (Geraniotis and Pursley, 1982) show that, under certain conditions, the previous upper bounds on the error probability may exceed the true error probability by an order of magnitude.
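For orientation, the single-user baseline without multiple-access interference has a well-known closed form: noncoherent BFSK over slow, nonselective Rayleigh fading has an average bit error probability of 1/(2 + mean SNR). The paper's multiple-access and Rician-fading expressions are more involved and are not reproduced here:

```python
def bfsk_rayleigh_pb(mean_snr_db):
    """Average bit error probability of noncoherent BFSK on a slow,
    nonselective Rayleigh fading channel: Pb = 1 / (2 + mean SNR)."""
    snr = 10.0 ** (mean_snr_db / 10.0)
    return 1.0 / (2.0 + snr)

# On a fading channel the error rate falls only inversely with SNR
# (versus exponentially on an AWGN channel), so fading dominates the link budget.
pb_10db = bfsk_rayleigh_pb(10.0)   # 1/12
pb_30db = bfsk_rayleigh_pb(30.0)   # about 1e-3
```

Against this baseline, the value of exact multiple-access expressions like those in the abstract is clear: an upper bound loose by an order of magnitude can badly misjudge a near/far scenario.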
Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data
NASA Technical Reports Server (NTRS)
Wilson, R. G.
1975-01-01
The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with ones determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.
Numerical and semiclassical analysis of some generalized Casimir pistons
Schaden, M.
2009-05-15
The Casimir force due to a scalar field in a cylinder of radius r with a spherical cap of radius R>r is computed numerically in the world-line approach. A geometrical subtraction scheme gives the finite interaction energy that determines the Casimir force. The spectral function of convex domains is obtained from a probability measure on convex surfaces that is induced by the Wiener measure on Brownian bridges the convex surfaces are the hulls of. Due to reflection positivity, the vacuum force on the piston by a scalar field satisfying Dirichlet boundary conditions is attractive in these geometries, but the strength and short-distance behavior of the force depend strongly on the shape of the piston casing. For a cylindrical casing with a hemispherical head, the force on the piston does not depend on the dimension of the casing at small piston elevation a<
A general numerical model for wave rotor analysis
NASA Technical Reports Server (NTRS)
Paxson, Daniel W.
1992-01-01
Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.; Thurman, S. W.
1992-01-01
An error covariance analysis methodology is used to investigate different weighting schemes for two-way (coherent) Doppler data in the presence of transmission-media and observing-platform calibration errors. The analysis focuses on orbit-determination performance in the interplanetary cruise phase of deep-space missions. Analytical models for the Doppler observable and for transmission-media and observing-platform calibration errors are presented, drawn primarily from previous work. Previously published analytical models were improved upon by the following: (1) considering the effects of errors in the calibration of radio signal propagation through the troposphere and ionosphere as well as station-location errors; (2) modelling the spacecraft state transition matrix using a more accurate piecewise-linear approximation to represent the evolution of the spacecraft trajectory; and (3) incorporating Doppler data weighting functions that are functions of elevation angle, which reduce the sensitivity of the estimated spacecraft trajectory to troposphere and ionosphere calibration errors. The analysis is motivated by the need to develop suitable weighting functions for two-way Doppler data acquired at 8.4 GHz (X-band) and 32 GHz (Ka-band). This weighting is likely to be different from that in the weighting functions currently in use; the current functions were constructed originally for use with 2.3 GHz (S-band) Doppler data, which are affected much more strongly by the ionosphere than are the higher frequency data.
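Elevation-dependent Doppler data weighting of the kind described above can be sketched as an inverse-variance weight whose assumed noise grows toward the horizon, where troposphere and ionosphere calibration errors are largest. The functional form below is purely an assumption for illustration, not the weighting function developed in the study:

```python
import numpy as np

def doppler_weight(elevation_deg, sigma0=1.0, k=2.0):
    """Hypothetical elevation-dependent weighting for two-way Doppler data:
    inflate the assumed noise at low elevation (longer path through the
    troposphere/ionosphere) and weight each point by the inverse variance.
    sigma0 and k are illustrative parameters, not calibrated values."""
    el = np.radians(elevation_deg)
    sigma = sigma0 * np.sqrt(1.0 + (k / np.tan(el)) ** 2)
    return 1.0 / sigma**2

# Weights rise monotonically from low to high elevation
weights = doppler_weight(np.array([10.0, 30.0, 60.0, 90.0]))
```

Down-weighting low-elevation passes in this way reduces the sensitivity of the estimated trajectory to media calibration errors at the cost of some data strength.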
Estimating error cross-correlations in soil moisture data sets using extended collocation analysis
NASA Astrophysics Data System (ADS)
Gruber, A.; Su, C.-H.; Crow, W. T.; Zwieback, S.; Dorigo, W. A.; Wagner, W.
2016-02-01
Global soil moisture records are essential for studying the role of hydrologic processes within the larger earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multisource soil moisture retrievals into a unified data set. However, this requires an appropriate parameterization of the error structures of the underlying data sets. While triple collocation (TC) analysis has been widely recognized as a powerful tool for estimating random error variances of coarse-resolution soil moisture data sets, the estimation of error cross covariances remains an unresolved challenge. Here we propose a method—referred to as extended collocation (EC) analysis—for estimating error cross-correlations by generalizing the TC method to an arbitrary number of data sets and relaxing the assumption, made therein, of zero error cross-correlation for certain data set combinations. A synthetic experiment shows that EC analysis is able to reliably recover true error cross-correlation levels. Applied to real soil moisture retrievals from Advanced Microwave Scanning Radiometer-EOS (AMSR-E) C-band and X-band observations together with advanced scatterometer (ASCAT) retrievals, modeled data from Global Land Data Assimilation System (GLDAS)-Noah and in situ measurements drawn from the International Soil Moisture Network, EC yields reasonable and strong nonzero error cross-correlations between the two AMSR-E products. Contrary to expectation, nonzero error cross-correlations are also found between ASCAT and AMSR-E. We conclude that the proposed EC method represents an important step toward a fully parameterized error covariance matrix for coarse-resolution soil moisture data sets, which is vital for any rigorous data assimilation framework or data merging scheme.
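The classical TC estimator that EC analysis generalizes can be sketched in a few lines. This is an illustration of standard triple collocation, not the EC method of the paper: it assumes three collocated data sets whose errors are mutually uncorrelated (exactly the assumption EC relaxes) and recovers each set's random error variance from pairwise covariances.

```python
import numpy as np

def triple_collocation_error_var(x, y, z):
    """Classical triple collocation: estimate the random error variance of
    each of three collocated data sets under the zero error
    cross-correlation assumption."""
    c = np.cov(np.vstack([x, y, z]))
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return var_x, var_y, var_z

# Synthetic check: a common truth plus independent noise of known variance
rng = np.random.default_rng(0)
truth = rng.normal(0.0, 1.0, 100_000)
x = truth + rng.normal(0.0, 0.3, truth.size)   # error variance 0.09
y = truth + rng.normal(0.0, 0.5, truth.size)   # error variance 0.25
z = truth + rng.normal(0.0, 0.4, truth.size)   # error variance 0.16
ex, ey, ez = triple_collocation_error_var(x, y, z)
```

When the errors of two of the data sets are in fact cross-correlated (as found here for the two AMSR-E products), these estimates become biased, which is what motivates the extension to more data sets and a relaxed covariance structure.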
NASA Technical Reports Server (NTRS)
Spar, J.; Notario, J. J.; Quirk, W. J.
1978-01-01
Monthly mean global forecasts for January 1975 have been computed with the Goddard Institute for Space Studies model from four slightly different sets of initial conditions - a 'control' state and three random perturbations thereof - to simulate the effects of initial state uncertainty on forecast quality. Differences among the forecasts are examined in terms of energetics, synoptic patterns and forecast statistics. The 'noise level' of the model predictions is depicted on global maps of standard deviations of sea level pressures, 500 mb heights and 850 mb temperatures for the set of four forecasts. Initial small-scale random errors do not appear to result in any major degradation of the large-scale monthly mean forecast beyond that generated by the model itself, nor do they appear to represent the major source of large-scale forecast error.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2014-12-01
Physically based models provide insights into key hydrologic processes, but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology. Here we employ global sensitivity analysis to explore how different error types (i.e., bias, random errors), different error distributions, and different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use Sobol' global sensitivity analysis, which is typically used for model parameters, but adapted here for testing model sensitivity to co-existing errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 520 000 Monte Carlo simulations across four sites and four different scenarios. Model outputs were generally (1) more sensitive to forcing biases than random errors, (2) less sensitive to forcing error distributions, and (3) sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a significant impact depending on forcing error magnitudes. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
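First-order Sobol' indices of the kind used in the study can be estimated with a plain pick-and-freeze Monte Carlo scheme. The sketch below uses the standard Saltelli estimator on a toy additive model (a hypothetical stand-in, not the Utah Energy Balance model or its forcing-error scenarios):

```python
import numpy as np

def sobol_first_order(model, n, d, rng):
    """First-order Sobol' indices via the pick-and-freeze (Saltelli) scheme:
    two independent sample matrices A and B, plus d matrices AB_i in which
    only column i of A is replaced by column i of B."""
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = model(A), model(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # vary only input i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy model: output dominated by the first input (think: precipitation bias)
model = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2]
rng = np.random.default_rng(1)
S = sobol_first_order(model, 200_000, 3, rng)
```

For this additive model the first input carries nearly all of the output variance, mirroring the finding that precipitation bias dominates the snow water equivalent response for typical error magnitudes.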
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, research in this field has lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Scilab and Maxima Environment: Towards Free Software in Numerical Analysis
ERIC Educational Resources Information Center
Mora, Angel; Galan, Jose Luis; Aguilera, Gabriel; Fernandez, Alvaro; Merida, Enrique; Rodriguez, Pedro
2010-01-01
In this work we will present the ScilabUMA environment we have developed as an alternative to Matlab. This environment connects Scilab (for numerical analysis) and Maxima (for symbolic computations). Furthermore, the developed interface is, in our opinion at least, as powerful as the interface of Matlab. (Contains 3 figures.)
Error budget analysis for advanced X-ray Astrophysics Facility (AXAF)
NASA Technical Reports Server (NTRS)
Korsch, D.
1980-01-01
The AXAF telescope was analytically investigated during the period from September 1979 to March 1980. The results of a performance evaluation in the presence of alignment errors and surface defects, a sensitivity analysis of every individual subsystem, and a diffraction analysis of the telescope assembly are presented.
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle variation detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to achieve datum unity and high-precision attitude output. Finally, we construct the low frequency error model and obtain optimal estimates of the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model describes the law of low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 Coordinate System is markedly improved after the step-wise calibration.
The design and analysis of single flank transmission error tester for loaded gears
NASA Technical Reports Server (NTRS)
Bassett, Duane E.; Houser, Donald R.
1987-01-01
To strengthen the understanding of gear transmission error and to verify the mathematical models that predict it, a test stand that will measure the transmission error of gear pairs under design loads has been investigated. While most transmission error testers have been used to test gear pairs under unloaded conditions, the goal of this report was to design and perform dynamic analysis of a unique tester with the capability of measuring the transmission error of gears under load. This test stand will have the capability to continuously load a gear pair at torques up to 16,000 in-lb at shaft speeds from 0 to 5 rpm. Error measurement will be accomplished with high resolution optical encoders and the accompanying signal processing unit from an existing unloaded transmission error tester. Input power to the test gear box will be supplied by a dc torque motor while the load will be applied with a similar torque motor. A dual input, dual output control system will regulate the speed and torque of the system. This control system's accuracy and dynamic response were analyzed, and it was determined that proportional plus derivative speed control is needed in order to provide the precisely constant torque necessary for error-free measurement.
Dynamic error analysis based on flexible shaft of wind turbine gearbox
NASA Astrophysics Data System (ADS)
Liu, H.; Zhao, R. Z.
2013-12-01
In view of the asynchrony between excitation and response in the transmission system, a study of the system dynamic error caused by the sun axis suspended in the gearbox of a 1.5 MW wind turbine was carried out, taking the flexibility of components into account. Firstly, a numerical recursive model was established using D'Alembert's principle; MATLAB was then used to simulate and analyze the model, which was verified against the equivalent system. The results show that the dynamic error is related not only to the inherent parameters of the system but also to the external load imposed on it. The modulus of the dynamic error is represented as a linear superposition of a synchronization error component and a harmonic vibration component, and the latter can cause random fluctuations of the gears. However, the dynamic error can be partly compensated if the stiffness coefficient of the sun axis is increased, which is beneficial for improving the stability and accuracy of the transmission system.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
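The defining feature of a multiplicative error model — error spread proportional to the signal — is easy to see in simulation. The following is a minimal illustrative sketch with hypothetical terrain heights, not the paper's LiDAR data or its LS derivations:

```python
import numpy as np

rng = np.random.default_rng(2)
true_heights = np.linspace(10.0, 100.0, 50_000)   # hypothetical terrain heights (m)

# Multiplicative error model: y = h * (1 + eps), eps ~ N(0, sigma^2),
# so the error standard deviation of y scales with the true value h
sigma = 0.01
y = true_heights * (1.0 + rng.normal(0.0, sigma, true_heights.size))

residuals = y - true_heights
low = residuals[true_heights < 30].std()    # spread over low terrain
high = residuals[true_heights > 80].std()   # spread over high terrain
```

Treating such residuals as if they had a single additive variance, as conventional DEM construction does, misstates the measurement precision wherever the signal magnitude varies.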
Error analysis and optimization of a 3-degree of freedom translational Parallel Kinematic Machine
NASA Astrophysics Data System (ADS)
Shankar Ganesh, S.; Koteswara Rao, A. B.
2014-06-01
In this paper, error modeling and analysis of a typical 3-degree-of-freedom translational Parallel Kinematic Machine is presented. This mechanism provides translational motion along the Cartesian X-, Y- and Z-axes. It consists of three limbs, each having an arm and forearm with prismatic-revolute-revolute-revolute joints. The moving or tool platform maintains the same orientation in the entire workspace due to its joint arrangement. From inverse kinematics, the joint angles for a given position of the tool platform, necessary for the error modeling and analysis, are obtained. Error modeling is done based on the differentiation of the inverse kinematic equations. Variation of pose errors along the X, Y and Z directions for a set of dimensions of the parallel kinematic machine is presented. A non-dimensional performance index, namely, the global error transformation index, is used to study the influence of dimensions, and its corresponding global maximum pose error is reported. An attempt is made to find the optimal dimensions of the Parallel Kinematic Machine using Genetic Algorithms in MATLAB. The methodology presented and the results obtained are useful for predicting the performance capability of the Parallel Kinematic Machine under study.
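Error modeling by differentiating the inverse kinematic equations amounts to first-order Jacobian propagation. The sketch below uses a hypothetical 3-DOF inverse-kinematics map (a stand-in, not the machine analyzed in the paper) to show how small joint errors dq map to pose errors via dx ≈ J⁻¹ dq:

```python
import numpy as np

def numeric_jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of the inverse-kinematics map f: pose -> joints."""
    x = np.asarray(x, dtype=float)
    J = np.empty((len(f(x)), len(x)))
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2.0 * h)
    return J

# Hypothetical 3-DOF inverse kinematics: pose (x, y, z) -> joint variables
def inverse_kinematics(pose):
    x, y, z = pose
    return np.array([np.arctan2(y, x), np.hypot(x, y), z])

pose = np.array([0.3, 0.2, 0.5])
J = numeric_jacobian(inverse_kinematics, pose)

# First-order pose error produced by small joint errors dq: dx ≈ J^{-1} dq
dq = np.array([1e-4, -2e-4, 5e-5])
dx = np.linalg.solve(J, dq)
```

Sweeping such a linearized map over the workspace is what yields pose-error variation plots and, after normalization, a global error transformation index of the kind the paper uses.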
A comparative analysis of errors in long-term econometric forecasts
Tepel, R.
1986-04-01
The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.
NON-GAUSSIAN ERROR CONTRIBUTION TO LIKELIHOOD ANALYSIS OF THE MATTER POWER SPECTRUM
Takahashi, Ryuichi; Yoshida, Naoki; Takada, Masahiro; Sugiyama, Naoshi; Kayo, Issha; Nishimichi, Takahiro; Taruya, Atsushi; Matsubara, Takahiko; Saito, Shun
2011-01-01
We study the sample variance of the matter power spectrum for the standard Λ cold dark matter universe. We use a total of 5000 cosmological N-body simulations to study in detail the distribution of best-fit cosmological parameters and the baryon acoustic peak positions. The obtained distribution is compared with the results from the Fisher matrix analysis with and without including non-Gaussian errors. For the Fisher matrix analysis, we compute the derivatives of the matter power spectrum with respect to cosmological parameters using directly full nonlinear simulations. We show that the non-Gaussian errors increase the unmarginalized errors by up to a factor of five for k_max = 0.4 h Mpc^-1 if there is only one free parameter, provided other parameters are well determined by external information. On the other hand, for multi-parameter fitting, the impact of the non-Gaussian errors is significantly mitigated due to severe parameter degeneracies in the power spectrum. The distribution of the acoustic peak positions is well described by a Gaussian distribution, with its width being consistent with the statistical interval predicted from the Fisher matrix. We also examine systematic bias in the best-fit parameters due to the non-Gaussian errors. The bias is found to be smaller than the 1σ statistical error for both the cosmological parameters and the acoustic scale positions.
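For a single free parameter with Gaussian errors, the Fisher forecast reduces to one sum over wavenumber bins. The sketch below is a generic one-parameter Fisher estimate on a toy power-spectrum-like model; the power-law form and the assumed 10% band errors are illustrative, not the paper's simulation setup:

```python
import numpy as np

def fisher_1param(model, theta, sigma, h=1e-5):
    """Unmarginalized 1-sigma error on one parameter from a Gaussian Fisher
    forecast: F = sum_k (dP_k/dtheta)^2 / sigma_k^2, error = 1/sqrt(F)."""
    dP = (model(theta + h) - model(theta - h)) / (2.0 * h)  # numerical derivative
    F = np.sum(dP**2 / sigma**2)
    return 1.0 / np.sqrt(F)

# Toy spectrum P(k) = A * k^{-1.5} with a single amplitude parameter A
k = np.linspace(0.05, 0.4, 30)
model = lambda A: A * k**-1.5
sigma = 0.1 * model(1.0)            # assume 10% Gaussian errors per band
err_A = fisher_1param(model, 1.0, sigma)
```

Non-Gaussian band-power covariances enter this framework by replacing the diagonal `sigma**2` with a full covariance matrix, which is what inflates the unmarginalized error in the single-parameter case.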
Formulation of numerical procedures for dynamic analysis of spinning structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1986-01-01
The paper presents the descriptions of recently developed numerical algorithms that prove to be useful for the solution of the free vibration problem of spinning structures. First, a generalized procedure for the computation of nodal centrifugal forces in a finite element owing to any specified spin rate is derived in detail. This is followed by a description of an improved eigenproblem solution procedure that proves to be economical for the free vibration analysis of spinning structures. Numerical results are also presented which indicate the efficacy of the currently developed procedures.
Error analysis in post linac to driver linac transport beam line of RAON
NASA Astrophysics Data System (ADS)
Kim, Chanmi; Kim, Eun-San
2016-07-01
We investigated the effects of magnet errors in the beam transport line connecting the post linac to the driver linac (P2DT) in the Rare Isotope Accelerator in Korea (RAON). The P2DT beam line is bent by 180 degrees to send the radioactive Isotope Separation On-line (ISOL) beams accelerated in Linac-3 to Linac-2. This beam line transports beams of multi-charge-state 132Sn45+,46+,47+. The P2DT beam line includes 42 quadrupole, 4 dipole and 10 sextupole magnets. We evaluate the effects of errors on the beam trajectory by using the TRACK code, including the translational and rotational errors of the quadrupole, dipole and sextupole magnets in the beam line. The purpose of this error analysis is to reduce the rate of beam loss in the P2DT beam line. The distorted beam trajectories can be corrected by using six correctors and seven monitors.
NASA Astrophysics Data System (ADS)
Shao, Lina; Cao, Zhaoliang; Mu, Quanquan; Zhang, Peiguang; Yao, Lishuang; Wang, Shaoxin; Hu, Lifa; Xuan, Li
2016-05-01
An experimental analysis was conducted to investigate the fitting error of diffractive liquid crystal wavefront correctors (LCWFCs). First, an experiment was performed to validate the theoretical equations presented in our previous work (Cao et al., 2009 [9]). The results showed an apparent discrepancy between the theoretical and measured fitting errors. This difference was examined, and the influence of nonlinearities and rounding errors generated by the LCWFC was analyzed and discussed. Finally, the fitting error formula of the LCWFC was modified to obtain a more effective tool for the design of LCWFCs for atmospheric turbulence correction. These results will be useful for researchers who design liquid crystal adaptive optics systems for large-aperture ground-based telescopes.
Longwave surface radiation over the globe from satellite data - An error analysis
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Wilber, A. C.; Darnell, W. L.; Suttles, J. T.
1993-01-01
Errors have been analyzed for monthly-average downward and net longwave surface fluxes derived on a 5-deg equal-area grid over the globe, using a satellite technique. Meteorological data used in this technique are available from the TIROS Operational Vertical Sounder (TOVS) system flown aboard NOAA's operational sun-synchronous satellites. The data used are for February 1982 from NOAA-6 and NOAA-7 satellites. The errors in the parametrized equations were estimated by comparing their results with those from a detailed radiative transfer model. The errors in the TOVS-derived surface temperature, water vapor burden, and cloud cover were estimated by comparing these meteorological parameters with independent measurements obtained from other satellite sources. Analysis of the overall errors shows that the present technique could lead to underestimation of downward fluxes by 5 to 15 W/sq m and net fluxes by 4 to 12 W/sq m.
NASA Astrophysics Data System (ADS)
Allen, S. E.; Dinniman, M. S.; Klinck, J. M.; Gorby, D. D.; Hewett, A. J.; Hickey, B. M.
2003-01-01
Submarine canyons which indent the continental shelf are frequently regions of steep (up to 45°), three-dimensional topography. Recent observations have delineated the flow over several submarine canyons during 2-4 day long upwelling episodes. Thus upwelling episodes over submarine canyons provide an excellent flow regime for evaluating numerical and physical models. Here we compare a physical and numerical model simulation of an upwelling event over a simplified submarine canyon. The numerical model being evaluated is a version of the S-Coordinate Rutgers University Model (SCRUM). Careful matching between the models is necessary for a stringent comparison. Results show a poor comparison for the homogeneous case due to nonhydrostatic effects in the laboratory model. Results for the stratified case are better but show a systematic difference between the numerical results and laboratory results. This difference is shown not to be due to nonhydrostatic effects. Rather, the difference is due to truncation errors in the calculation of the vertical advection of density in the numerical model. The calculation is inaccurate due to the terrain-following coordinates combined with a strong vertical gradient in density, vertical shear in the horizontal velocity and topography with strong curvature.
NASA Astrophysics Data System (ADS)
Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua
2014-07-01
Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures in the robots to a single link. As such an error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account error factors that are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
Numerical analysis of the big bounce in loop quantum cosmology
Laguna, Pablo
2007-01-15
Loop quantum cosmology (LQC) homogeneous models with a massless scalar field show that the big-bang singularity can be replaced by a big quantum bounce. To gain further insight on the nature of this bounce, we study the semidiscrete loop quantum gravity Hamiltonian constraint equation from the point of view of numerical analysis. For illustration purposes, we establish a numerical analogy between the quantum bounces and reflections in finite difference discretizations of wave equations triggered by the use of nonuniform grids or, equivalently, reflections found when solving numerically wave equations with varying coefficients. We show that the bounce is closely related to the method for the temporal update of the system and demonstrate that explicit time-updates in general yield bounces. Finally, we present an example of an implicit time-update devoid of bounces and show back-in-time, deterministic evolutions that reach and partially jump over the big-bang singularity.
Numerical analysis of 3-D potential flow in centrifugal turbomachines
NASA Astrophysics Data System (ADS)
Daiguji, H.
1983-09-01
A numerical method is developed for analyzing three-dimensional steady incompressible potential flow through an impeller in centrifugal turbomachines. The method is the same as the previous method developed for axial flow turbomachines, except for some treatments in the downstream region. In order to clarify the validity and limitations of the method, a comparison with existing experimental data and numerical results is made for radial flow compressor impellers. The calculated blade surface pressure distributions almost coincide with the quasi-3-D calculation by Krimerman and Adler (1978), but differ in part from the quasi-3-D calculation based on a single meridional flow analysis. This comparison suggests that the flow through an impeller with high efficiency near the design point can be predicted by this fully 3-D numerical method.
Numerical Analysis of Deflections of Multi-Layered Beams
NASA Astrophysics Data System (ADS)
Biliński, Tadeusz; Socha, Tomasz
2015-03-01
The paper concerns the rheological bending problem of wooden beams reinforced with embedded composite bars. A theoretical model of the behaviour of a multi-layered beam is presented. The component materials of this beam are described with equations for the linear viscoelastic five-parameter rheological model. Two numerical analysis methods for the long-term response of wood structures are presented. The first method has been developed with SCILAB software. The second one has been developed with the finite element calculation software ABAQUS and user subroutine UMAT. Laboratory investigations were conducted on sample beams of natural dimensions in order to validate the proposed theoretical model and verify numerical simulations. Good agreement between experimental measurements and numerical results is observed.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
Integrated numerical methods for hypersonic aircraft cooling systems analysis
NASA Technical Reports Server (NTRS)
Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.
1992-01-01
Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general-purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady-state solution is obtained by a successive point iterative method, and the transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.
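The implicit forward-backward differencing mentioned above is in the Crank-Nicolson family of schemes. A minimal one-step sketch for 1D heat conduction (illustrative names and dense linear algebra; this is not the NASA code, and a production solver would use a tridiagonal routine) might look like:

```python
import numpy as np

def crank_nicolson_step(T, alpha, dx, dt):
    """One implicit forward-backward (Crank-Nicolson) step for 1D heat
    conduction with fixed-temperature end points (sketch only)."""
    n = len(T)
    r = alpha * dt / (2 * dx**2)
    # build the tridiagonal system (I - r*L) T_new = (I + r*L) T_old
    A = np.eye(n) * (1 + 2 * r)
    B = np.eye(n) * (1 - 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    # Dirichlet boundaries: end temperatures are held fixed
    A[0, :] = 0; A[0, 0] = 1; B[0, :] = 0; B[0, 0] = 1
    A[-1, :] = 0; A[-1, -1] = 1; B[-1, :] = 0; B[-1, -1] = 1
    return np.linalg.solve(A, B @ T)
```

Averaging the explicit (forward) and implicit (backward) operators in this way gives second-order accuracy in time and unconditional stability.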
Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables
NASA Technical Reports Server (NTRS)
Fenyes, Peter A.; Lust, Robert V.
1989-01-01
Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations for the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods for use with complex finite element formulations and facilitate their implementation into structural optimization programs using general finite element analysis codes, the semi-analytic method was developed. In this method the matrix ∂K/∂b_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method is dependent on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. The accuracy of the semi-analytic method is investigated. A general framework was developed for the error analysis, and it is then shown analytically that the errors in the method are entirely accounted for by errors in ∂K/∂b_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
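The finite-difference approximation at the heart of the semi-analytic method can be sketched as follows; `K_func`, the design vector `b`, and the step `h` are illustrative stand-ins, not the paper's code:

```python
import numpy as np

def stiffness_derivative(K_func, b, i, h=1e-6):
    """Forward-difference approximation of dK/db_i (semi-analytic method).

    K_func is a user-supplied routine that assembles the stiffness matrix
    for a given design-variable vector b; h is the finite difference
    parameter whose choice governs the accuracy discussed above.
    """
    b_pert = b.copy()
    b_pert[i] += h
    return (K_func(b_pert) - K_func(b)) / h

def displacement_sensitivity(K_func, f, b, i, h=1e-6):
    # differentiate K u = f at fixed load f:  du/db_i = -K^{-1} (dK/db_i) u
    K = K_func(b)
    u = np.linalg.solve(K, f)
    dK = stiffness_derivative(K_func, b, i, h)
    return -np.linalg.solve(K, dK @ u)
```

For the toy case K(b) = diag(b), the analytic sensitivity is du_i/db_i = -f_i/b_i^2, which the sketch reproduces.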
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present
Analysis of coupling errors in a physically-based integrated surface water-groundwater model
NASA Astrophysics Data System (ADS)
Dagès, Cécile; Paniconi, Claudio; Sulis, Mauro
2012-12-01
Several physically-based models that couple 1D or 2D surface and 3D subsurface flow have recently been developed, but few studies have evaluated the errors directly associated with the different coupling schemes. In this paper we analyze the causes of mass balance error for a conventional and a modified sequential coupling scheme in worst-case scenario simulations of Hortonian runoff generation on a sloping plane catchment. The conventional scheme is noniterative, whereas for the modified scheme the surface-subsurface exchange fluxes are determined via a boundary condition switching procedure that is performed iteratively during resolution of the nonlinear subsurface flow equation. It is shown that the modified scheme generates much lower coupling mass balance errors than the conventional sequential scheme. While both coupling schemes are sensitive to time discretization, the iterative control of infiltration in the modified scheme greatly limits its sensitivity to temporal resolution. Little sensitivity to spatial discretization is observed for both schemes. For the modified scheme the different factors contributing to coupling error are isolated, and the error is observed to be highly correlated to the flood recession duration. More testing, under broader hydrologic contexts and including other coupling schemes, is recommended so that the findings from this first analysis of coupling errors can be extended to other surface water-groundwater models.
Analysis and modeling of radiometric error caused by imaging blur in optical remote sensing systems
NASA Astrophysics Data System (ADS)
Xie, Xufen; Zhang, Yuncui; Wang, Hongyuan; Zhang, Wei
2016-07-01
Imaging blur changes the digital output values of imaging systems. It leads to radiometric errors when the system is used for measurement. In this paper, we focus on the radiometric error due to imaging blur in remote sensing imaging systems. First, in accordance with the radiometric response calibration of imaging systems, we provide a theoretical analysis on the evaluation standard of radiometric errors caused by imaging blur. Then, we build a radiometric error model for imaging blur based on the natural stochastic fractal characteristics of remote sensing images. Finally, we verify the model by simulations and physical defocus experiments. The simulation results show that the modeling estimation result approaches to the simulation computation. The maximum difference of relative MSE (Mean Squared Error) between simulation computation and modeling estimation can achieve 1.6%. The physical experimental results show that the maximum difference of relative MSE between experimental results and modeling estimation is only 1.29% under experimental conditions. Simulations and experiments demonstrate that the proposed model is correct, which can be used to estimate the radiometric error caused by imaging blur in remote sensing images. This research is of great importance for radiometric measurement system evaluation and application.
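The relative MSE figure of merit used above can be illustrated with a small sketch; the metric definition and the boxcar blur stand-in for defocus are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def relative_mse(reference, blurred):
    """Relative mean-squared radiometric error between a reference signal
    and its blurred counterpart (illustrative definition)."""
    ref = np.asarray(reference, dtype=float)
    deg = np.asarray(blurred, dtype=float)
    return np.mean((ref - deg) ** 2) / np.mean(ref ** 2)

def boxcar_blur(img, width=3):
    # simple 1D moving-average blur as a stand-in for imaging defocus
    kernel = np.ones(width) / width
    return np.convolve(img, kernel, mode="same")
```

An unblurred signal gives a relative MSE of exactly zero, and any blur produces a positive radiometric error, which is the quantity the paper's model estimates.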
An analysis of error patterns in children's backward digit recall in noise
Osman, Homira; Sullivan, Jessica R.
2015-01-01
The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for the decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and in multitalker babble at -5 dB signal-to-noise ratio (SNR) were analyzed. Errors were classified into two categories: item errors (digits that were not presented in a list were repeated) and order errors (correct digits were repeated but in an incorrect order). Fifty-five children with normal hearing, aged between 7 and 10 years, were included. Repeated-measures analysis of variance (RM-ANOVA) revealed main effects for error type and digit span length, and an error type by listening condition interaction: order errors occurred more frequently than item errors in the degraded listening condition compared to quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. The decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors were related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise. PMID:26168949
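The item/order classification described above can be sketched in a few lines; scoring conventions for backward span vary, so this is one plausible reading, not the study's actual scoring code:

```python
def classify_recall_errors(presented, recalled):
    """Classify backward-digit-span responses into item and order errors.

    Backward span: the correct response is the presented list reversed.
    Item error: a recalled digit that was never presented.
    Order error: a presented digit recalled in the wrong serial position.
    """
    target = list(reversed(presented))
    item_errors = sum(1 for d in recalled if d not in presented)
    order_errors = sum(
        1 for pos, d in enumerate(recalled)
        if d in presented and (pos >= len(target) or target[pos] != d)
    )
    return {"item": item_errors, "order": order_errors}
```

For example, after hearing 1-2-3, recalling 3-2-1 is perfect, recalling 3-1-2 yields two order errors, and recalling 3-2-9 yields one item error.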
Hart, Pamela; Scherz, Julie; Apel, Kenn; Hodson, Barbara
2007-03-01
The purpose of this study was to examine the relationships between patterns of spelling error and related linguistic abilities of four persons with complex communication needs and physical impairments, compared to younger individuals without disabilities matched by spelling age. All participants completed a variety of spelling and linguistic tasks to determine overall spelling age, patterns of spelling errors, and abilities across phonemic, orthographic, and morphological awareness. Performance of the spelling-age matched pairs was similar across most of the phonemic, orthographic, and morphological awareness tasks. Analysis of the participants' spelling errors, however, revealed different patterns of spelling errors for three of the spelling-age matched pairs. Within these three pairs, the participants with complex communication needs and physical impairments made most of their spelling errors due to phonemic awareness difficulties, while most of the errors on the part of the participants without disabilities were due to orthographic difficulties. The results of this study lend support to the findings of previous investigations that reported difficulties among individuals with complex communication needs and physical impairments evidence when applying phonemic knowledge to literacy tasks. PMID:17364485
Lü, Li-hui; Liu, Wen-qing; Zhang, Tian-shu; Lu, Yi-huai; Dong, Yun-sheng; Chen, Zhen-yi; Fan, Guang-qiang; Qi, Shao-shuai
2015-07-01
Atmospheric aerosols have important impacts on human health, the environment, and the climate system. Micro Pulse Lidar (MPL) is a new, effective tool for detecting the horizontal distribution of atmospheric aerosol. Extinction coefficient inversion and error analysis are important aspects of the data processing. In order to detect the horizontal distribution of atmospheric aerosol near the ground, the slope and Fernald algorithms were both used to invert horizontal MPL data, and the results were compared. The error analysis showed that the errors of the slope and Fernald algorithms arise mainly from the theoretical model and from the underlying assumptions, respectively. Although some problems still exist in these two horizontal extinction coefficient inversions, both can represent the spatial and temporal distribution of aerosol particles accurately, and the correlations with the forward-scattering visibility sensor are both high, with a value of 95%. Relatively speaking, the Fernald algorithm is more suitable for the inversion of the horizontal extinction coefficient. PMID:26717723
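Of the two algorithms compared above, the slope method is the simpler: for a horizontally homogeneous atmosphere, the log of the range-corrected signal is linear in range and its slope gives the extinction coefficient. A minimal sketch (variable names illustrative):

```python
import numpy as np

def slope_method_extinction(r, P):
    """Slope-method retrieval of a single extinction coefficient from a
    range-corrected lidar return, assuming a homogeneous atmosphere:
    ln(r^2 P(r)) = const - 2*alpha*r, so alpha = -slope/2."""
    S = np.log(r**2 * P)            # range-corrected log signal
    slope, _ = np.polyfit(r, S, 1)  # least-squares linear fit of S vs r
    return -0.5 * slope
```

Applied to a synthetic homogeneous return P(r) = C exp(-2 alpha r)/r^2, the fit recovers alpha exactly; the Fernald method generalizes this to inhomogeneous profiles at the cost of an assumed lidar ratio and boundary value.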
EAC: A program for the error analysis of STAGS results for plates
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.
1989-01-01
A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example of application of the code is presented and instructions on its usage on the Cyber and the VAX machines have been provided.
A numerical analysis of the unsteady flow past bluff bodies
NASA Astrophysics Data System (ADS)
Fernando, M. S. U. K.; Modi, V. J.
1990-01-01
The paper describes in detail a relatively sophisticated numerical approach, using the Boundary Element Method in conjunction with the Discrete Vortex Model, to represent the complex unsteady flow field around a bluff body with separating shear layers. Important steps in the numerical analysis of this challenging problem are discussed and a performance evaluation algorithm established. Of considerable importance is the effect of computational parameters such as number of elements representing the geometry, time-step size, location of the nascent vortices, etc., on the accuracy of results and the associated cost. As an example, the method is applied to the analysis of the flow around a stationary Savonius rotor. A detailed parametric study provides fundamental information concerning the starting torque time histories, evolution of the wake, Strouhal number, etc. A comparison with the wind tunnel test data shows remarkable correlation suggesting considerable promise for the approach.
Error Analysis In Explicit Finite Element Analysis Of Incremental Sheet Forming
Bambach, M.; Hirt, G.
2007-05-17
Asymmetric incremental sheet forming (AISF) is a relatively new manufacturing process for the production of low volumes of sheet metal parts. Forming is accomplished by the CNC controlled movements of a simple ball-headed tool that follows a 3D trajectory to gradually shape a sheet metal blank. The local plastic deformation under the tool leads to a number of challenges for the Finite Element Modeling. Previous work indicates that implicit finite element methods are at present not efficient enough to allow for the simulation of AISF for industrially relevant parts, mostly due to the fact that the moving contact requires a very small time step. Explicit Finite Element methods can be speeded up by means of mass or load scaling to enable the simulation of large scale sheet metal forming problems, even for AISF. However, it is well known that the methods used to speed up the FE calculations can entail poor results when dynamic effects start to dominate the solution. Typically, the ratio of kinetic to internal energy is used as an assessment of the influence of dynamical effects. It has already been shown in the past that this global criterion can easily be violated locally for a patch of elements of the finite element mesh. This is particularly important for AISF with its highly localised loading and complex tool kinematics. The present paper details an investigation of dynamical effects in explicit Finite Element analysis of AISF. The interplay of mass or time scaling scheme and the smoothness of the tool trajectory is analysed with respect to the resulting errors. Models for tool path generation will be presented allowing for a generation of tool trajectories with predefined maximum speed and acceleration. Based on this, a strategy for error control is proposed which helps reduce the time for setting up reliable explicit finite element models for AISF.
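The global kinetic-to-internal energy criterion discussed above, and the paper's point that it can be satisfied globally while violated on a local patch of elements, can be sketched as follows (the 5% threshold is a common rule of thumb, assumed here for illustration):

```python
def check_energy_ratios(kin_elems, int_elems, threshold=0.05):
    """Kinetic/internal energy check for explicit FE dynamics.

    Returns the usual global verdict plus the indices of elements that
    individually violate the ratio, illustrating how a healthy global
    ratio can hide local dynamic effects under localised loading.
    """
    global_ok = sum(kin_elems) <= threshold * sum(int_elems)
    local_bad = [i for i, (k, w) in enumerate(zip(kin_elems, int_elems))
                 if w > 0 and k > threshold * w]
    return global_ok, local_bad
```

In the example below the global ratio passes, yet the first element, carrying almost all the kinetic energy relative to its internal energy, is flagged.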
Proper handling of random errors and distortions in astronomical data analysis
NASA Astrophysics Data System (ADS)
Cardiel, Nicolas; Gorgas, Javier; Gallego, Jesús; Serrano, Angel; Zamorano, Jaime; Garcia-Vargas, Maria-Luisa; Gomez-Cambronero, Pedro; Filgueira, Jose M.
2002-12-01
The aim of a data reduction process is to minimize the influence of data acquisition imperfections on the estimation of the desired astronomical quantity. For this purpose, one must perform appropriate manipulations with data and calibration frames. In addition, random-error frames (computed from first principles: expected statistical distribution of photo-electrons, detector gain, readout-noise, etc.), corresponding to the raw-data frames, can also be properly reduced. This parallel treatment of data and errors guarantees the correct propagation of random errors due to the arithmetic manipulations throughout the reduction procedure. However, due to the unavoidable fact that the information collected by detectors is physically sampled, this approach collides with a major problem: errors are correlated when applying image manipulations involving non-integer pixel shifts of data. Since this is actually the case for many common reduction steps (wavelength calibration into a linear scale, image rectification when correcting for geometric distortions,...), we discuss the benefits of considering the data reduction as the full characterization of the raw-data frames, but avoiding, as far as possible, the arithmetic manipulation of that data until the final measure of the image properties with a scientific meaning for the astronomer. For this reason, it is essential that the software tools employed for the analysis of the data perform their work using that characterization. In that sense, the real reduction of the data should be performed during the analysis, and not before, in order to guarantee the proper treatment of errors.
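The parallel treatment of data and random-error frames described above amounts to standard propagation of independent Gaussian errors through frame arithmetic; the abstract's caveat is that this independence breaks down once non-integer pixel shifts correlate neighbouring pixels. A per-pixel sketch (illustrative API, assuming independent errors and nonzero pixels for multiply/divide):

```python
import numpy as np

def propagate_error_frames(a, da, b, db, op):
    """Combine two data frames a, b with standard-error frames da, db,
    returning the result frame and its propagated error frame."""
    if op in ("add", "sub"):
        r = a + b if op == "add" else a - b
        return r, np.hypot(da, db)                       # errors add in quadrature
    if op in ("mul", "div"):
        r = a * b if op == "mul" else a / b
        return r, np.abs(r) * np.hypot(da / a, db / b)   # relative errors in quadrature
    raise ValueError(op)
```

For instance, adding pixels with errors 3 and 4 yields a combined error of 5, the familiar quadrature sum.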
The CarbonSat Earth Explorer 8 candidate mission: Error analysis for carbon dioxide and methane
NASA Astrophysics Data System (ADS)
Buchwitz, Michael; Bovensmann, Heinrich; Reuter, Maximilian; Gerilowski, Konstantin; Meijer, Yasjka; Sierk, Bernd; Caron, Jerome; Loescher, Armin; Ingmann, Paul; Burrows, John P.
2015-04-01
CarbonSat is one of two candidate missions for ESA's Earth Explorer 8 (EE8) satellite to be launched around 2022. The main goal of CarbonSat is to advance our knowledge on the natural and man-made sources and sinks of the two most important anthropogenic greenhouse gases (GHGs) carbon dioxide (CO2) and methane (CH4) on various temporal and spatial scales (e.g., regional, city and point source scale), as well as related climate feedbacks. CarbonSat will be the first satellite mission optimised to detect emission hot spots of CO2 (e.g., cities, industrialised areas, power plants) and CH4 (e.g., oil and gas fields) and to quantify their emissions. Furthermore, CarbonSat will deliver a number of important by-products such as Vegetation Chlorophyll Fluorescence (VCF, also called Solar Induced Fluorescence (SIF)) at 755 nm. These applications require appropriate retrieval algorithms which are currently being optimized and used for error analysis. The status of this error analysis will be presented based on the latest version of the CO2 and CH4 retrieval algorithm and taking the current instrument specification into account. An overview will be presented focusing on nadir observations over land. Focus will be on specific issues such as errors of the CO2 and CH4 products due to residual polarization related errors and errors related to inhomogeneous ground scenes.
Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto
2006-01-01
We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry with a satellite tandem pair. Because orbital dynamics cause the baseline length and orientation to evolve spatially and temporally, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations for height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, as well as slope effects on height errors. The analysis also accounts for non-overlapping spectra and non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a tandem satellite mission at 514 km altitude and 97.4 degree inclination, with a 300 m baseline separation and an X-band SAR. Results from our model indicate that global DTED level 3 accuracy can be achieved.
On the relationship between anxiety and error monitoring: a meta-analysis and conceptual framework
Moser, Jason S.; Moran, Tim P.; Schroder, Hans S.; Donnellan, M. Brent; Yeung, Nick
2013-01-01
Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, “small-to-medium” relationship with enhanced ERN (r = −0.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = −0.35) than those utilizing other measures of anxiety (r = −0.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur. PMID:23966928
A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers
NASA Technical Reports Server (NTRS)
Green, R. N.; Hoffman, L. H.; Young, G. R.
1972-01-01
A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.
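The Monte Carlo approach described above, sampling off-nominal initial orbits from a covariance matrix and mapping each through the targeted maneuver, can be sketched generically; the `propagate` callable stands in for the program's targeting and trajectory integration, which are not reproduced here:

```python
import numpy as np

def monte_carlo_final_errors(nominal, cov, propagate, n=4000, seed=0):
    """Monte Carlo error analysis sketch: draw off-nominal initial states
    from a covariance matrix, push each through a user-supplied maneuver
    model, and estimate the covariance of the final-orbit errors."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(nominal, cov, size=n)
    finals = np.array([propagate(s) for s in samples])
    return np.cov(finals, rowvar=False)
```

For a linear maneuver model f(s) = A s with unit input covariance, the estimated output covariance converges to A A^T, which provides a quick sanity check of the sampling.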
NASA Astrophysics Data System (ADS)
Wang, Jian; Zhang, Fang; Song, Qiang; Zeng, Aijun; Zhu, Jing; Huang, Huijie
2015-04-01
With the constant shrinking of printable critical dimensions in photolithography, off-axis illumination (OAI) has become one of the effective resolution-enhancement methods for meeting these challenges. This, in turn, drives much stricter requirements, such as higher diffractive efficiency of the diffractive optical elements (DOEs) used in the OAI system. As the design algorithms that optimize the DOEs' phase profiles have improved, the fabrication process has become the main limiting factor leading to energy loss. Tolerance analysis is the general method for evaluating fabrication accuracy requirements, and it is especially useful for highly specialized deep-UV applications with small structures and tight tolerances. A subpixel DOE simulation model is applied for tolerance analysis of DOEs by converting abstract fabrication structure errors into quantifiable subpixel phase matrices. With the proposed model, four kinds of fabrication errors can be investigated: misetch, misalignment, feature size error, and feature rounding error. In the simulation experiments, systematic fabrication error studies of five typical DOEs used in a 90-nm scanning photolithography illumination system are carried out. These results are valuable for high-precision DOE design algorithms and fabrication process optimization.
NASA Astrophysics Data System (ADS)
Kavetski, D.; Clark, M. P.; Fenicia, F.
2011-12-01
Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc, are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
Near-field visualization of plasmonic lenses: an overall analysis of characterization errors
Wang, Jing; Xu, Zongwei; Fang, Fengzhou
2015-01-01
Summary Many factors influence the near-field visualization of plasmonic structures based on perforated elliptical slits. Here, characterization errors are analyzed experimentally in detail from both fabrication and measurement points of view. Issues such as geometrical parameters, probe-sample surface interaction, misalignment, stigmation, and internal stress influence the final near-field probing results. For comparison with the theoretically ideal near-field probing of the structures, numerical calculation is carried out on the basis of a finite-difference time-domain (FDTD) algorithm to support the error analyses. The analyses, based on both theoretical calculation and experimental probing, provide a helpful reference for researchers probing plasmonic structures and nanophotonic devices. PMID:26665078
ERIC Educational Resources Information Center
Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki
2013-01-01
In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…
Error Ratio Analysis: Alternate Mathematics Assessment for General and Special Educators.
ERIC Educational Resources Information Center
Miller, James H.; Carr, Sonya C.
1997-01-01
Eighty-seven elementary students in grades four, five, and six, were administered a 30-item multiplication instrument to assess performance in computation across grade levels. An interpretation of student performance using error ratio analysis is provided and the use of this method with groups of students for instructional decision making is…
Formulation and error analysis for a generalized image point correspondence algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar
1992-01-01
A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.
Advanced GIS Exercise: Performing Error Analysis in ArcGIS ModelBuilder
ERIC Educational Resources Information Center
Hall, Steven T.; Post, Christopher J.
2009-01-01
Knowledge of Geographic Information Systems is quickly becoming an integral part of the natural resource professionals' skill set. With the growing need of professionals with these skills, we created an advanced geographic information systems (GIS) exercise for students at Clemson University to introduce them to the concept of error analysis,…
ERIC Educational Resources Information Center
Isik, Cemalettin; Kar, Tugrul
2012-01-01
The present study aimed to make an error analysis in the problems posed by pre-service elementary mathematics teachers about fractional division operation. It was carried out with 64 pre-service teachers studying in their final year in the Department of Mathematics Teaching in an eastern university during the spring semester of academic year…
Table-lookup algorithms for elementary functions and their error analysis
Tang, Ping Tak Peter.
1991-01-01
Table-lookup algorithms for calculating elementary functions offer superior speed and accuracy when compared with more traditional algorithms. With careful design, we show that it is feasible to implement table-lookup algorithms in hardware. Furthermore, we present a uniform approach to carry out tight error analysis for such implementations.
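The table-lookup-plus-polynomial idea can be sketched in a few lines. The following is an illustrative Python sketch, not Tang's hardware design; the table size and polynomial degree are arbitrary choices:

```python
import math

# Illustrative table-lookup sketch for exp(x): reduce
# x = (k*2^B + m)*C + r with C = ln(2)/2^B, look up exp(m*C) in a
# precomputed table, and correct with a short polynomial in r.
TABLE_BITS = 5
C = math.log(2) / (1 << TABLE_BITS)
EXP_TABLE = [math.exp(i * C) for i in range(1 << TABLE_BITS)]

def exp_lookup(x):
    n = round(x / C)                    # x = n*C + r with |r| <= C/2
    r = x - n * C
    k, m = divmod(n, 1 << TABLE_BITS)   # exp(n*C) = 2^k * exp(m*C)
    # degree-4 polynomial for exp(r); |r| <= ~0.011, so the truncation
    # error is of order r^5/120, i.e. roughly 1e-12
    poly = 1.0 + r * (1.0 + r * (0.5 + r * (1.0 / 6.0 + r / 24.0)))
    return math.ldexp(EXP_TABLE[m] * poly, k)
```

Because the reduced argument r is tiny, a very short polynomial suffices; the error analysis the abstract refers to bounds exactly these table, polynomial, and rounding contributions.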
Robustness of Type I Error and Power in Set Correlation Analysis of Contingency Tables.
ERIC Educational Resources Information Center
Cohen, Jacob; Nee, John C. M.
1990-01-01
The analysis of contingency tables via set correlation allows the assessment of subhypotheses involving contrast functions of the categories of the nominal scales. The robustness of such methods with regard to Type I error and statistical power was studied via a Monte Carlo experiment. (TJH)
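A Monte Carlo study of Type I error like the one described works by generating many data sets under a true null hypothesis and counting how often the test rejects. A minimal sketch using an ordinary chi-square test of independence on 2x2 tables (not Cohen and Nee's set-correlation method):

```python
import random

def chi2_stat(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            e = r * c / n                 # expected count under independence
            stat += (table[i][j] - e) ** 2 / e
    return stat

def type1_rate(n=200, trials=2000, crit=3.841, seed=1):
    """Estimated rejection rate under the null; crit is the
    chi-square(1 df) critical value at alpha = 0.05."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        # independent 50/50 margins, so the null hypothesis is true
        t = [[0, 0], [0, 0]]
        for _ in range(n):
            t[int(rng.random() < 0.5)][int(rng.random() < 0.5)] += 1
        if chi2_stat(t) > crit:
            rejections += 1
    return rejections / trials
```

If the test is well calibrated, the estimated rate should be close to the nominal 0.05; robustness studies like the one above repeat this under various violations of assumptions.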
Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis
ERIC Educational Resources Information Center
Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara
2014-01-01
This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…
NASA Technical Reports Server (NTRS)
Smith, D. R.; Leslie, F. W.
1984-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
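A successive correction scheme of the kind PROAM uses repeatedly blends distance-weighted observation increments into a gridded first guess, typically with a shrinking influence radius on each pass. A hypothetical one-dimensional sketch using Cressman weights (the weight function, radii, and zero first guess are assumptions for illustration, not PROAM's parameters):

```python
def cressman_analysis(obs, grid_x, radii=(4.0, 2.0, 1.0)):
    """obs: list of (location, value); grid_x: analysis grid locations."""
    field = [0.0] * len(grid_x)           # first-guess field (zero here)
    for R in radii:                       # successive passes, shrinking radius
        R2 = R * R
        # observation increments against the current analysis
        incs = []
        for xo, vo in obs:
            gi = min(range(len(grid_x)), key=lambda i: abs(grid_x[i] - xo))
            incs.append((xo, vo - field[gi]))
        new = []
        for x, f in zip(grid_x, field):
            num = den = 0.0
            for xo, dv in incs:
                d2 = (x - xo) ** 2
                if d2 < R2:
                    w = (R2 - d2) / (R2 + d2)   # Cressman weight
                    num += w * dv
                    den += w
            new.append(f + num / den if den > 0 else f)
        field = new
    return field
```

Testing such a scheme against a known analytic field, as the abstract describes, quantifies how the multiple passes tighten the analysis around the observations.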
Asymptotic/numerical analysis of supersonic propeller noise
NASA Technical Reports Server (NTRS)
Myers, M. K.; Wydeven, R.
1989-01-01
An asymptotic analysis based on the Mach surface structure of the field of a supersonic helical source distribution is applied to predict thickness and loading noise radiated by high speed propeller blades. The theory utilizes an integral representation of the Ffowcs-Williams Hawkings equation in a fully linearized form. The asymptotic results are used for chordwise strips of the blade, while required spanwise integrations are performed numerically. The form of the analysis enables predicted waveforms to be interpreted in terms of Mach surface propagation. A computer code developed to implement the theory is described and found to yield results in close agreement with more exact computations.
Numerical analysis on thermal drilling of aluminum metal matrix composite
NASA Astrophysics Data System (ADS)
Hynes, N. Rajesh Jesudoss; Maheshwaran, M. V.
2016-05-01
The work-material deformation is very large and both the tool and workpiece temperatures are high in thermal drilling. Modeling is a necessary tool to understand the material flow, temperatures, stress, and strains, which are difficult to measure experimentally during thermal drilling. The numerical analysis of thermal drilling process of aluminum metal matrix composite has been done in the present work. In this analysis the heat flux of different stages is calculated. The calculated heat flux is applied on the surface of work piece and thermal distribution is predicted in different stages during the thermal drilling process.
Error analysis of marker-based object localization using a single-plane XRII
Habets, Damiaan F.; Pollmann, Steven I.; Yuan, Xunhua; Peters, Terry M.; Holdsworth, David W.
2009-01-15
The role of imaging and image guidance is increasing in surgery and therapy, including treatment planning and follow-up. Fluoroscopy is used for two-dimensional (2D) guidance or localization; however, many procedures would benefit from three-dimensional (3D) guidance or localization. Three-dimensional computed tomography (CT) using a C-arm mounted x-ray image intensifier (XRII) can provide high-quality 3D images; however, patient dose and the required acquisition time restrict the number of 3D images that can be obtained. C-arm based 3D CT is therefore limited in applications for x-ray based image guidance or dynamic evaluations. 2D-3D model-based registration, using a single-plane 2D digital radiographic system, does allow for rapid 3D localization. It is our goal to investigate - over a clinically practical range - the impact of x-ray exposure on the resulting range of 3D localization precision. In this paper it is assumed that the tracked instrument incorporates a rigidly attached 3D object with a known configuration of markers. A 2D image is obtained by a digital fluoroscopic x-ray system and corrected for XRII distortions (±0.035 mm) and mechanical C-arm shift (±0.080 mm). A least-squares projection-Procrustes analysis is then used to calculate the 3D position using the measured 2D marker locations. The effect of x-ray exposure on the precision of 2D marker localization and on 3D object localization was investigated using numerical simulations and x-ray experiments. The results show a nearly linear relationship between 2D marker localization precision and 3D localization precision. However, a significant amplification of error, nonuniformly distributed among the three major axes, is demonstrated. To obtain a 3D localization error of less than ±1.0 mm for an object with 20 mm marker spacing, the 2D localization precision must be better than ±0.07 mm. This requirement was met for all investigated nominal x-ray exposures at 28 cm
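The Procrustes step at the core of such marker-based localization solves for the rigid transform that best maps the known marker configuration onto the measured positions. Below is a simplified 3D-3D sketch via the SVD (Kabsch) solution; the paper's actual method works from projected 2D marker locations, so this is illustrative only:

```python
import numpy as np

def rigid_fit(model, measured):
    """Least-squares rigid fit: R, t minimizing sum ||R @ m_i + t - p_i||^2.
    model, measured: (N, 3) arrays of corresponding marker positions."""
    mc, pc = model.mean(axis=0), measured.mean(axis=0)
    H = (model - mc).T @ (measured - pc)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, pc - R @ mc
```

With noisy measured positions, the residual of this fit is where the error amplification discussed in the abstract shows up: small per-marker noise maps nonuniformly into pose error depending on the marker geometry.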
The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications
Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em
2008-11-20
Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L² error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
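The collocation idea is easy to illustrate in one random dimension: partition the parameter space into elements and apply a Gauss quadrature rule on each to estimate statistics of the model output. A minimal sketch (the element boundaries, quadrature order, and exp(xi) test model are arbitrary illustrative choices):

```python
import numpy as np

def pcm_mean(u, elements, order):
    """Estimate E[u(xi)] for xi ~ Uniform(-1, 1) by Gauss-Legendre
    collocation on each element (a, b) of a partition of [-1, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    total = 0.0
    for a, b in elements:
        # map reference nodes/weights from [-1, 1] to the element [a, b]
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
        w = 0.5 * (b - a) * weights
        total += np.sum(w * u(x)) * 0.5   # density of Uniform(-1, 1) is 1/2
    return total

# two elements, 4 nodes each; the exact mean of exp(xi) is sinh(1)
est = pcm_mean(np.exp, [(-1.0, 0.0), (0.0, 1.0)], order=4)
```

As the abstract notes, refining the element mesh or raising the degree of exactness of each rule drives the error down; the same construction extends to tensor or sparse grids in several random dimensions.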
Numerical analysis of volcanic SO₂ plume transport
Uno, Itsushi
1996-12-31
Mt. Sakurajima volcano (1060 m), located in the southern part of Kyushu Island, Japan, emits a huge amount of volcanic gas (e.g., 1000-2000 tons of SO₂ per day) and has a strong impact on ambient SO₂ concentrations. This volcanic SO₂ plume transport process over Kyushu Island was simulated by a random walk model based on the wind and turbulence fields produced by a mesoscale numerical model using four-dimensional data assimilation (FDDA). A continuous four-day numerical simulation covered the period from May 7 to May 10, 1987. Gridded global analyses from ECMWF and special pilot-balloon observation data were used in the FDDA. The mesoscale numerical model with FDDA simulated well the general wind fields during the passage of a high pressure system, as well as the complicated local wind circulation within the planetary boundary layer (PBL). Simulated surface wind variation was quantitatively compared with the observation data and showed good agreement. Numerical results of the plume transport process were compared with SO₂ surface and 3-D airborne measurements. It was revealed that the simulated three-dimensional plume behavior explained well the observed SO₂ variation, and the daytime development of the PBL played an important role in the transport of the volcanic SO₂ aloft down to the surface level. The transformation rate from SO₂ to sulfate was also determined from the trajectories of the random walk calculation.
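The random-walk transport idea can be sketched simply: each pollutant "particle" is advected by the mean wind and given a random turbulent displacement each time step. The wind speed, diffusion strength, and step sizes below are arbitrary illustrative values, not those of the paper's model:

```python
import math
import random

def random_walk_plume(n_particles=5000, steps=100, dt=60.0,
                      u=5.0, sigma=1.0, seed=42):
    """Lagrangian random-walk plume: mean wind u (m/s) along x plus
    Gaussian turbulent displacements with diffusion scale sigma."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        x = y = 0.0
        for _ in range(steps):
            # advection by the mean wind + random turbulent displacement
            x += u * dt + rng.gauss(0.0, sigma * math.sqrt(dt))
            y += rng.gauss(0.0, sigma * math.sqrt(dt))
        positions.append((x, y))
    return positions

particles = random_walk_plume()
mean_x = sum(p[0] for p in particles) / len(particles)
```

Concentrations are then estimated by counting particles in grid cells; in the study above, the wind and turbulence driving the walk came from the FDDA mesoscale model rather than constants.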
Execution-Error Modeling and Analysis of the GRAIL Spacecraft Pair
NASA Technical Reports Server (NTRS)
Goodson, Troy D.
2013-01-01
The GRAIL spacecraft, Ebb and Flow (aka GRAIL-A and GRAIL-B), completed their prime mission in June and extended mission in December 2012. The excellent performance of the propulsion and attitude control subsystems contributed significantly to the mission's success. In order to better understand this performance, the Navigation Team has analyzed and refined the execution-error models for delta-v maneuvers. There were enough maneuvers in the prime mission to form the basis of a model update that was used in the extended mission. This paper documents the evolution of the execution-error models along with the analysis and software used.
Throughput Analysis of IEEE 802.11 DCF in the Presence of Transmission Errors
NASA Astrophysics Data System (ADS)
Alshanyour, Ahed; Agarwal, Anjali
This paper introduces an accurate analysis using three-dimensional Markov chain modeling to compute the IEEE 802.11 DCF throughput under heavy traffic conditions and in the absence of hidden terminals, for both access modes, basic and RTS/CTS. The proposed model considers the joint impact of the retry counts of control and data frames on the saturated throughput. Moreover, it considers the impact of transmission errors by taking into account the strength of the received signal and using the BER model to convert the SNR to a bit error probability.
Error Analysis for Estimation of Trace Vapor Concentration Pathlength in Stack Plumes
Gallagher, Neal B.; Wise, Barry M.; Sheen, David M.
2003-06-01
Near-infrared hyperspectral imaging is finding utility in remote sensing applications such as detection and quantification of chemical vapor effluents in stack plumes. Optimizing the sensing system or quantification algorithms is difficult since reference images are rarely well characterized. The present work uses a radiance model for a down-looking scene and a detailed noise model for dispersive and Fourier transform spectrometers to generate well-characterized synthetic data. These data were used in conjunction with a classical least-squares-based estimation procedure in an error analysis to provide estimates of different sources of concentration-pathlength quantification error in the remote sensing problem.
NASA Astrophysics Data System (ADS)
Flynn, Lawrence E.; Labow, Gordon J.; Beach, Robert A.; Rawlins, Michael A.; Flittner, David E.
1996-10-01
Inexpensive devices to measure solar UV irradiance are available to monitor atmospheric ozone, for example, total ozone portable spectroradiometers (TOPS instruments). A procedure to convert these measurements into ozone estimates is examined. For well-characterized filters with 7-nm FWHM bandpasses, the method provides ozone values (from 304- and 310-nm channels) with less than 0.4% error attributable to inversion of the theoretical model. Analysis of sensitivity to model assumptions and parameters yields estimates of 3% bias in total ozone results with dependence on total ozone and path length. Unmodeled effects of atmospheric constituents and instrument components can result in additional 2% errors.
Error analysis for duct leakage tests in ASHRAE standard 152P
Andrews, J.W.
1997-06-01
This report presents an analysis of random uncertainties in the two methods of testing for duct leakage in Standard 152P of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). The test method is titled Standard Method of Test for Determining Steady-State and Seasonal Efficiency of Residential Thermal Distribution Systems. Equations have been derived for the uncertainties in duct leakage at given levels of uncertainty in the measured quantities used as inputs to the calculations. Tables of allowed errors in each of these independent variables, consistent with fixed criteria of overall allowed error, have been developed.
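Deriving uncertainty in a computed quantity from uncertainties in its measured inputs follows the standard first-order propagation formula sigma_f² = sum_i (df/dx_i)² sigma_i² for independent errors. A generic sketch with finite-difference derivatives; the leakage-as-flow-difference example is hypothetical, not the Standard 152P equations:

```python
import math

def propagated_error(f, x, sigmas, h=1e-6):
    """First-order propagation of independent input errors sigmas
    through f(x), with df/dx_i from central finite differences."""
    total = 0.0
    for i, (xi, si) in enumerate(zip(x, sigmas)):
        xp, xm = list(x), list(x)
        xp[i] = xi + h
        xm[i] = xi - h
        dfdx = (f(xp) - f(xm)) / (2 * h)
        total += (dfdx * si) ** 2
    return math.sqrt(total)

# hypothetical example: leakage estimated as supply flow minus return
# flow, each measured with its own standard uncertainty (cfm)
leak_sigma = propagated_error(lambda v: v[0] - v[1],
                              [1200.0, 1100.0], [20.0, 15.0])
```

Working this formula backwards, from a fixed criterion on the overall error to allowed errors in each input, gives tables like those the report describes.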
Failure mode and effect analysis: a technique to prevent chemotherapy errors.
Sheridan-Leos, Norma; Schulmeister, Lisa; Hartranft, Steve
2006-06-01
Complex, multidrug chemotherapy protocols commonly are administered to patients with cancer. At every step of the chemotherapy administration process, from the point that chemotherapy is ordered to the point that it is infused and beyond, potential for error exists. FMEA, a proactive process that promotes systematic thinking about the safety of patient care, is a risk analysis technique that can be used to evaluate the process of chemotherapy administration. Error prevention is an ongoing quality improvement process that requires institutional commitment and support, and nurses play a vital role in the process. PMID:16789584
NASA Astrophysics Data System (ADS)
Montanari, A.; Grossi, G.
2007-12-01
It is well known that uncertainty assessment in hydrological forecasting is a topical issue. Already in 1905 W.E. Cooke, who was issuing daily weather forecasts in Australia, stated: "It seems to me that the condition of confidence or otherwise form a very important part of the prediction, and ought to find expression". Uncertainty assessment in hydrology involves the analysis of multiple sources of error. The contributions of the latter to the global uncertainty cannot be quantified independently, unless (a) one is willing to introduce subjective assumptions about the nature of the individual error components or (b) independent observations are available for estimating input error, model error, parameter error and state error. An alternative approach, applied in this study and still requiring the introduction of some assumptions, is to quantify the global hydrological uncertainty in an integrated way, without attempting to quantify each independent contribution. This methodology can be applied in situations characterized by limited data availability and is therefore gaining increasing attention from end users. This work aims to propose a statistically based approach for assessing the global uncertainty in hydrological forecasting, by building a statistical model for the forecast error xt,d, where t is the forecast time and d is the lead time. Accordingly, the probability distribution of xt,d is inferred through a nonlinear multiple regression, depending on an arbitrary number of selected conditioning variables. These include the current forecast issued by the hydrological model, the past forecast error and internal state variables of the model. The final goal is to indirectly relate the forecast error to the sources of uncertainty, through a probabilistic link with the conditioning variables. Any statistical model is based on assumptions whose fulfilment is to be checked in order to assure the validity of the underlying theory. Statistical
Woontner, Michael; Goodman, Stephen I
2006-11-01
This unit describes methods for the preparation of samples for analysis of physiological amino acids and organic acids. Amino acids are analyzed by ion-exchange chromatography using an automated system. Organic acids are analyzed by gas-chromatography/mass spectrometry (GC-MS). Analysis of amino and organic acids is necessary to detect and monitor the treatment of many inborn errors of metabolism. PMID:18428392
Feature analysis of singleton consonant errors in developmental verbal dyspraxia (DVD).
Thoonen, G; Maassen, B; Gabreëls, F; Schreuder, R
1994-12-01
The aim of this study is to quantify diagnostic characteristics related to consonant production of developmental verbal dyspraxia (DVD). For this, a paradigmatic and syntagmatic feature-value analysis of the consonant substitution and omission errors in DVD speech was conducted. Following a three-step procedure, eleven clear cases were selected from a group of 24 children with DVD. The consonants produced in a word and nonsense-word imitation task were phonetically transcribed and transferred to confusion matrices, which allows for a feature and feature-value analysis. The analysis revealed that children with DVD (a) show low percentages of retention for place and manner of articulation and voicing, due to high substitution and omission rates; (b) show a particularly low percentage of retention of place of articulation in words, which, together with error rate, is strongly related to severity of involvement; (c) are inconsistent in their feature realization and feature preference; and (d) show a high syntagmatic error rate. These results form a quantification of diagnostic characteristics. Unexpectedly, however, very few qualitative differences in error pattern were found between children with DVD and a group of 11 age-matched children with normal speech. Thus, although the children with DVD produced higher substitution and omission rates than children with normal speech, the speech profiles of both subject groups are similar. This result stresses the importance of interpreting profiles, not isolated symptoms. The hypothesis to consider DVD as a deficit in the phonological encoding process is discussed. PMID:7533219
Analysis of the screw compressor rotors’ non-uniform thermal field effect on transmission error
NASA Astrophysics Data System (ADS)
Mustafin, T. N.; Yakupov, R. R.; Burmistrov, A. V.; Khamidullin, M. S.; Khisameev, I. G.
2015-08-01
The vibrational state of the screw compressor is largely dependent on the gearing of the rotors and on the possibility of angular backlash in the gears. The latter leads to a transmission error and arises from the need to bias the actual profile downward relative to the theoretical one. The loss of contact between rotors and, as a consequence, the current value of the quantity characterizing the transmission error are affected by a large number of different factors. In particular, a major influence on the amount of possible movement in the gearing is exerted by thermal deformations of the rotor and the housing parts in the working mode of the machine. The present work is devoted to the analysis of the thermal state of the screw oil-flooded compressor in operation and its impact on the transmission error and on the possibility of the rotors losing contact during the operating cycle.
Error analysis of a fast partial pivoting method for structured matrices
NASA Astrophysics Data System (ADS)
Sweet, Douglas R.; Brent, Richard P.
1995-06-01
Many matrices that arise in signal processing problems have a special displacement structure. For example, adaptive filtering and direction-of-arrival estimation yield matrices of Toeplitz type. A recent method of Gohberg, Kailath, and Olshevsky (GKO) allows fast Gaussian elimination with partial pivoting for such structured matrices. In this paper, a rounding error analysis is performed on the Cauchy and Toeplitz variants of the GKO method. It is shown that the error growth depends on the growth in certain auxiliary vectors, the generators, which are computed by the GKO algorithms. It is also shown that in certain circumstances the growth in the generators can be large, so the error growth is much larger than would be encountered with normal Gaussian elimination with partial pivoting. A modification of the algorithm to perform a type of row-column pivoting is proposed which may circumvent this problem.
Analysis of the orbit errors in the CERN accelerators using model simulation
Lee, M.; Kleban, S.; Clearwater, S.; Scandale, W.; Pettersson, T.; Kugler, H.; Riche, A.; Chanel, M.; Martensson, E.; Lin, In-Ho
1987-09-01
This paper describes the use of the PLUS program to find various types of machine and beam errors, such as quadrupole strength, dipole strength, beam position monitor (BPM) errors, energy profile, and beam launch. We refer to this procedure as the GOLD (Generic Orbit and Lattice Debugger) Method, a general technique that can be applied to the analysis of errors in storage rings and transport lines. One useful feature of the Method is that it analyzes segments of a machine at a time, so that its application and efficiency are independent of the size of the overall machine. Because the techniques are the same for all the types of problems it solves, the user need learn only how to find one type of error in order to use the program.
OuYang, Xiaoying; Wang, Ning; Wu, Hua; Li, Zhao-Liang
2010-01-18
A sensitivity analysis of the temperature-emissivity separation method commonly applied to hyperspectral data is performed in this paper with respect to various sources of error. In terms of the resulting errors in the process of retrieving surface temperature, results show that: (1) satisfactory results can be obtained for heterogeneous land surfaces, and the retrieval error of surface temperature is small enough to be neglected for all atmospheric conditions; (2) separation of atmospheric downwelling radiance from at-ground radiance is not very sensitive to the uncertainty of column water vapor (WV) in the atmosphere, and the errors in land surface temperature retrievals from at-ground radiance with the DRRI method due to the uncertainty in atmospheric downwelling radiance vary from -0.2 to 0.6 K if the uncertainty of WV is within 50% of the actual WV; (3) the impact of the errors generated by poor atmospheric corrections is significant, implying that a well-done atmospheric correction is indeed required to obtain accurate at-ground radiance from at-satellite radiance for successful separation of land-surface temperature and emissivity. PMID:20173873
Lamar, Melissa; Libon, David J; Ashley, Angela V; Lah, James J; Levey, Allan I; Goldstein, Felicia C
2010-01-01
Recent evidence suggests that patients with Alzheimer's disease (AD) and vascular comorbidities (VC) perform worse across measures of verbal reasoning and abstraction when compared to patients with AD alone. We performed a qualitative error analysis of Wechsler Adult Intelligence Scale-III Similarities zero-point responses in 45 AD patients with varying numbers of VC, including diabetes, hypertension, and hypercholesterolemia. Errors were scored in set if the answer was vaguely related to how the word pair was alike (e.g., dog-lion: "they can be trained") and out of set if the response was unrelated ("a lion can eat a dog"). AD patients with 2-3 VC did not differ on Similarities total score or qualitative errors from AD patients with 0-1 VC. When analyzing the group as a whole, we found that increasing numbers of VC were significantly associated with increasing out of set errors and decreasing in set errors in AD. Of the vascular diseases investigated, it was only the severity of diastolic blood pressure that significantly correlated with out of set responses. Understanding the contribution of VC to patterns of impairment in AD may provide support for directed patient and caregiver education concerning the presentation of a more severe pattern of cognitive impairment in affected individuals. PMID:19835657
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid
NASA Technical Reports Server (NTRS)
VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)
1997-01-01
The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
Tian, Zengshan; Xu, Kunjie; Yu, Xiang
2014-01-01
This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a Wi-Fi environment with logarithmically varying received signal strength (RSS). To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs), as well as ubiquitous context-awareness in Wi-Fi environments, much attention must be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength-varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future. PMID:24683349
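The log-distance RSS model and nearest-neighbor matching referred to above can be sketched as follows; the path-loss exponent, reference power, and single-access-point geometry are illustrative assumptions, not the paper's setup:

```python
import math

def rss(d, p0=-40.0, n=3.0, d0=1.0):
    """Log-distance path-loss model: RSS(d) = P0 - 10*n*log10(d/d0)."""
    return p0 - 10.0 * n * math.log10(d / d0)

def nearest_neighbor(rp_positions, ap, target_pos):
    """Locate the target as the reference point (RP) whose modeled RSS
    best matches the RSS modeled at the target's true position."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    target_rss = rss(dist(target_pos, ap))
    return min(rp_positions,
               key=lambda rp: abs(rss(dist(rp, ap)) - target_rss))

rps = [(x, 0.0) for x in range(1, 11)]   # linearly calibrated RPs
ap = (0.0, 0.0)                          # single access point
est = nearest_neighbor(rps, ap, (4.2, 0.0))
```

The localization error of such a scheme is bounded below by the RP spacing, which is exactly the kind of RP-deployment dependence the paper analyzes in closed form.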
Error analysis of look-up-table implementations in device-independent color imaging systems
NASA Astrophysics Data System (ADS)
Jennings, Eddie; Holland, R. D.; Lee, C. C.
1994-04-01
In device-independent color imaging systems, it is necessary to relate device color coordinates to and from standard colorimetric or appearance based color spaces. Such relationships are determined by mathematical modeling techniques with error estimates commonly quoted with the CIELAB ΔE metric. Due to performance considerations, a lookup table (LUT) is commonly used to approximate the model. LUT approximation accuracy is affected by the number of LUT entries, the distribution of the LUT data, and the interpolation technique used (full linear interpolation using cubes or hypercubes versus partial linear interpolation using tetrahedrons or hypertetrahedrons). Error estimates of such LUT approximations are not widely known. An overview of the modeling process and lookup table approximation technique is given with a study of relevant error analysis techniques. The application of such error analyses is shown for two common problems (converting scanner RGB and prepress proofing CMYK color definitions to CIELAB). In each application, ΔE statistics are shown for LUTs based on the above contributing factors. An industry recommendation is made for a standard way of communicating error information about interpolation solutions that will be meaningful to both vendors and end users.
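The effect of LUT size on approximation error is easy to demonstrate in one dimension: build a uniformly sampled table of a nonlinear device curve, interpolate linearly between entries, and measure the worst-case error as a stand-in for ΔE statistics. The gamma curve and table sizes here are illustrative choices:

```python
def build_lut(f, n):
    """Uniformly sampled n-entry table of f on [0, 1]."""
    return [f(i / (n - 1)) for i in range(n)]

def lut_eval(lut, x):
    """Piecewise-linear interpolation in a uniform table, x in [0, 1]."""
    pos = x * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    t = pos - i
    return (1 - t) * lut[i] + t * lut[i + 1]

def max_error(f, n, samples=10001):
    """Worst-case LUT approximation error over a dense sample of [0, 1]."""
    lut = build_lut(f, n)
    return max(abs(lut_eval(lut, k / (samples - 1)) - f(k / (samples - 1)))
               for k in range(samples))

gamma = lambda x: x ** 2.2       # a hypothetical nonlinear device curve
err_17 = max_error(gamma, 17)    # 17-entry table
err_33 = max_error(gamma, 33)    # 33-entry table
```

Doubling the table density roughly quarters the worst-case error for a smooth curve, which is why entry count and data distribution dominate the error budget the abstract describes; the cube-versus-tetrahedron distinction is the multidimensional analogue of this interpolation step.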
Numerical analysis of free vibrations of damped rotating structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1977-01-01
This paper is concerned with the efficient numerical solution of damped and undamped free vibration problems of rotating structures. While structural discretization is achieved by the finite element method, the associated eigenproblem solution is effected by a combined Sturm sequence and inverse iteration technique that enables the computation of a few required roots only without having to compute any other. For structures of complex configurations, a modal synthesis technique is also presented, which is based on appropriate combinations of eigenproblem solution of various structural components. Such numerical procedures are general in nature, which fully exploit matrix sparsity inherent in finite element discretizations, and prove to be most efficient for the vibration analysis of any damped rotating structure, such as rotating machineries, helicopter and turbine blades, spinning space stations, among others.
Numerical Ergonomics Analysis in Operation Environment of CNC Machine
NASA Astrophysics Data System (ADS)
Wong, S. F.; Yang, Z. X.
2010-05-01
The performance of an operator is affected by the operation environment [1]; moreover, a poor operation environment may cause health problems for the operator [2]. Physical and psychological considerations are the two main factors that affect operator performance under different operation environment conditions. In this paper, scientific and systematic methods are applied to identify the pivotal elements among the physical and psychological factors. Five main factors are analyzed: light, temperature, noise, air flow and space. A numerical ergonomics model has been built from the analysis results, which can support improvements in the design of the operation environment. Moreover, the output of the numerical ergonomics model can provide safer, more comfortable and more productive conditions for the operator.
Error analysis of deep sequencing of phage libraries: peptides censored in sequencing.
Matochko, Wadim L; Derda, Ratmir
2013-01-01
Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||ni||, where ni is the copy number of the ith sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as the product of an N × N matrix and a stochastic sampling operator (Sa). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of Sa and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = Sa IN, where IN is an N × N unity matrix. Any bias in sequencing changes IN to a nonunity matrix. We identified a diagonal censorship matrix (CEN), which describes the elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
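The operator framework is easy to make concrete at toy scale. Here the stochastic sampling operator Sa is realized as a multinomial draw of reads from the frequency vector (one common interpretation of a random sampling operator; the paper defines Sa as a random diagonal matrix), and the censorship matrix CEN is a diagonal down-weighting. All numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy library: frequency vector n over a diversity of N = 8 sequences
n_vec = np.array([500, 300, 120, 50, 20, 8, 2, 0])

def sampling_operator(n_vec, depth, rng):
    """Stochastic sampling Sa: draw `depth` reads from the library.

    Each read lands on sequence i with probability n_i / sum(n); the
    result is the sampled count vector.
    """
    p = n_vec / n_vec.sum()
    return rng.multinomial(depth, p)

def censorship(cen_diag, n_vec):
    """Diagonal censorship matrix CEN acting on the frequency vector:
    entries < 1 down-sample specific sequences, 0 eliminates them."""
    return np.diag(cen_diag) @ n_vec

reads = sampling_operator(n_vec, depth=1000, rng=rng)
censored = censorship([1, 1, 0.1, 1, 1, 1, 1, 1], n_vec)  # third sequence censored
```

Comparing `reads` from repeated draws against the censored expectation is the kind of test that distinguishes ordinary sampling noise from systematic censorship of specific peptides.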
Numerical Analysis of a Radiant Heat Flux Calibration System
NASA Technical Reports Server (NTRS)
Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.
1998-01-01
A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
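The finite difference core of such a model can be sketched compactly. This stripped-down version solves only the 2-D steady conduction problem with a five-point (second-order) stencil and fixed boundary temperatures; the paper's model is fourth-order and adds radiation, convection and mass-loss terms, all omitted here:

```python
import numpy as np

def steady_temperature(nx=21, ny=21, t_left=300.0, t_right=400.0,
                       t_top=350.0, t_bottom=350.0, iters=5000):
    """Iteratively solve the steady 2-D heat equation on a plate.

    Central differences reduce Laplace's equation to the five-point
    stencil: each interior node relaxes toward the average of its four
    neighbors (Jacobi iteration). Temperatures in kelvin (illustrative).
    """
    T = np.full((ny, nx), 350.0)
    T[:, 0], T[:, -1] = t_left, t_right      # Dirichlet boundaries
    T[0, :], T[-1, :] = t_bottom, t_top
    for _ in range(iters):
        T[1:-1, 1:-1] = 0.25 * (T[1:-1, :-2] + T[1:-1, 2:] +
                                T[:-2, 1:-1] + T[2:, 1:-1])
    return T

T = steady_temperature()
```

Adding a radiative source term and a temperature-dependent erosion model on top of this relaxation loop is what turns the sketch into a gage-plate simulation of the kind described.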
Numeral-Incorporating Roots in Numeral Systems: A Comparative Analysis of Two Sign Languages
ERIC Educational Resources Information Center
Fuentes, Mariana; Massone, Maria Ignacia; Fernandez-Viader, Maria del Pilar; Makotrinsky, Alejandro; Pulgarin, Francisca
2010-01-01
Numeral-incorporating roots in the numeral systems of Argentine Sign Language (LSA) and Catalan Sign Language (LSC), as well as the main features of the number systems of both languages, are described and compared. Informants discussed the use of numerals and roots in both languages (in most cases in natural contexts). Ten informants took part in…
NASA Astrophysics Data System (ADS)
Li, Zexian; Latva-aho, Matti
2004-12-01
Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral with an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
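The "alternative expression for the Q-function" in this line of BER analysis is usually Craig's finite-range form, Q(x) = (1/π) ∫₀^{π/2} exp(−x²/(2 sin²θ)) dθ; its fixed limits are what let the fading average collapse to a single bounded integral. A sketch of that expression evaluated by the midpoint rule (assuming Craig's form is the one meant, which the abstract does not name):

```python
import math

def q_craig(x, steps=2000):
    """Q-function via Craig's finite-range integral:

        Q(x) = (1/pi) * Int_0^{pi/2} exp(-x^2 / (2 sin^2 t)) dt

    evaluated with the midpoint rule. Finite limits make it easy to
    average over a fading distribution inside the same integral.
    """
    h = (math.pi / 2.0) / steps
    s = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        s += math.exp(-x * x / (2.0 * math.sin(t) ** 2))
    return s * h / math.pi

def q_exact(x):
    """Reference value from the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

err = abs(q_craig(2.0) - q_exact(2.0))  # agreement of the two forms
```

For a BER computation, the Gaussian argument x is replaced by a function of the per-subcarrier SNR and the integrand is additionally averaged over the Nakagami-m channel statistics.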
Birge, Jonathan R.; Kaertner, Franz X.
2008-06-15
We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.
A Meta-Analysis for Association of Maternal Smoking with Childhood Refractive Error and Amblyopia
Li, Li; Qi, Ya; Shi, Wei; Wang, Yuan; Liu, Wen; Hu, Man
2016-01-01
Background. We aimed to evaluate the association between maternal smoking and the occurrence of childhood refractive error and amblyopia. Methods. Relevant articles were identified from PubMed and EMBASE up to May 2015. Combined odds ratio (OR) corresponding with its 95% confidence interval (CI) was calculated to evaluate the influence of maternal smoking on childhood refractive error and amblyopia. The heterogeneity was evaluated with the Chi-square-based Q statistic and the I2 test. Potential publication bias was finally examined by Egger's test. Results. A total of 9 articles were included in this meta-analysis. The pooled OR showed that there was no significant association between maternal smoking and childhood refractive error. However, children whose mother smoked during pregnancy were 1.47 (95% CI: 1.12–1.93) times and 1.43 (95% CI: 1.23-1.66) times more likely to suffer from amblyopia and hyperopia, respectively, compared with children whose mother did not smoke, and the difference was significant. Significant heterogeneity was only found among studies involving the influence of maternal smoking on children's refractive error (P < 0.05; I2 = 69.9%). No potential publication bias was detected by Egger's test. Conclusion. The meta-analysis suggests that maternal smoking is a risk factor for childhood hyperopia and amblyopia. PMID:27247800
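The pooling step behind those combined ORs is the standard inverse-variance method on the log odds ratio scale. A sketch with hypothetical study values (not the studies in this meta-analysis), omitting the heterogeneity (Q, I²) and Egger publication-bias tests the authors also ran:

```python
import math

def pooled_or(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of odds ratios.

    Each study is (OR, lower 95% CI, upper 95% CI). The standard error
    of log(OR) is recovered from the CI width, each study is weighted by
    1/SE^2, and the pooled log(OR) is exponentiated back.
    """
    wsum = wlog = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2.0 * z)
        w = 1.0 / se ** 2
        wsum += w
        wlog += w * math.log(or_)
    log_or = wlog / wsum
    se_pooled = math.sqrt(1.0 / wsum)
    return (math.exp(log_or),
            math.exp(log_or - z * se_pooled),
            math.exp(log_or + z * se_pooled))

# Hypothetical per-study ORs with 95% CIs
or_pooled, ci_lo, ci_hi = pooled_or([(1.5, 1.1, 2.0),
                                     (1.3, 0.9, 1.9),
                                     (1.6, 1.2, 2.1)])
```

A pooled CI that excludes 1.0, as here, is what "significant association" means in the abstract; under the heterogeneity found for refractive error, a random-effects weighting would replace the fixed-effect weights.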
Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response
Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios
2016-01-01
In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system’s response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor’s optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, analog to digital converter (ADC) quantization SNR (SNRQ), etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562
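The quantity being calibrated is the PSD's ideal position response, which for a one-dimensional device follows from the ratio of the two electrode currents. A sketch of that textbook relation (the paper's contribution is correcting the electronic deviations from it, which are not modeled here):

```python
def psd_position(i1, i2, length):
    """Ideal spot position on a 1-D PSD of active length L:

        x = (L / 2) * (I2 - I1) / (I1 + I2)

    measured from the detector center. Real devices deviate from this
    due to tolerances, temperature, SNR, amplifier and ADC effects,
    which is what the calibration in the paper corrects.
    """
    return 0.5 * length * (i2 - i1) / (i1 + i2)

# Example currents in amperes on a hypothetical 10 mm device
x = psd_position(i1=0.4e-3, i2=0.6e-3, length=10.0)  # position in mm
```

Because the position depends only on the current ratio, multiplicative drifts common to both channels cancel, while channel-asymmetric electronic errors map directly into position error, motivating the per-channel calibration.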
Unsaturated Shear Strength and Numerical Analysis Methods for Unsaturated Soils
NASA Astrophysics Data System (ADS)
Kim, D.; Kim, G.; Kim, D.; Baek, H.; Kang, S.
2011-12-01
The angle of shearing resistance (φb) and the angle of internal friction (φ') appear identical in the low suction range, but the angle of shearing resistance shows non-linearity as suction increases. In most numerical analyses, however, a fixed value for the angle of shearing resistance is applied even in the low suction range for practical reasons, often leading to a false conclusion. In this study, a numerical analysis has been undertaken employing the shear strength curve of unsaturated soils estimated from the residual water content of the SWCC, as proposed by Vanapalli et al. (1996). The result was also compared with that from a fixed value of φb. It is suggested that, in cases where it is difficult to measure the unsaturated shear strength curve through triaxial soil tests, the shear strength curve estimated from the residual water content can be a useful alternative. This result was applied to analyzing the slope stability of unsaturated soils. The effects of a continuous rainfall on slope stability were analyzed using the commercial program SLOPE/W, coupled with the infiltration analysis program SEEP/W, both from GEO-SLOPE International Ltd. The results show that, prior to infiltration by the intensive rainfall, the safety factors using the estimated shear strength curve were substantially higher than those from the fixed value of φb at all time points. After the intensive infiltration, both methods showed a similar behavior.
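The Vanapalli et al. (1996) estimate replaces the fixed φb with a suction contribution scaled by normalized water content, so the suction term decays as the soil dries toward residual. A sketch of that relation with illustrative parameter values (units in kPa and degrees, not taken from the study):

```python
import math

def vanapalli_strength(c_eff, sigma_net, phi_eff_deg, suction,
                       theta, theta_s, theta_r):
    """Unsaturated shear strength after Vanapalli et al. (1996):

        tau = c' + (sigma - u_a) tan(phi')
                 + (u_a - u_w) * Theta * tan(phi'),
        Theta = (theta - theta_r) / (theta_s - theta_r)

    where Theta is the normalized volumetric water content from the
    SWCC; the suction contribution vanishes as theta -> theta_r.
    """
    tan_phi = math.tan(math.radians(phi_eff_deg))
    theta_norm = (theta - theta_r) / (theta_s - theta_r)
    return c_eff + sigma_net * tan_phi + suction * theta_norm * tan_phi

# Same soil and suction, wet vs. near-residual water content (hypothetical)
tau_wet = vanapalli_strength(5.0, 100.0, 30.0, 50.0,
                             theta=0.40, theta_s=0.40, theta_r=0.05)
tau_dry = vanapalli_strength(5.0, 100.0, 30.0, 50.0,
                             theta=0.06, theta_s=0.40, theta_r=0.05)
```

The gap between `tau_wet` and `tau_dry` at the same suction is exactly the non-linearity a fixed φb misses, and it is why the estimated-curve safety factors diverge from the fixed-φb ones before the rainfall infiltration.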
NASA Technical Reports Server (NTRS)
Bryant, W. H.; Hodge, W. F.
1974-01-01
An error analysis program based on an output error estimation method was used to evaluate the effects of sensor and instrumentation errors on the estimation of aircraft stability and control derivatives. A Monte Carlo analysis was performed using simulated flight data for a high performance military aircraft, a large commercial transport, and a small general aviation aircraft for typical cruise flight conditions. The effects of varying the input sequence and combinations of the sensor and instrumentation errors were investigated. The results indicate that both the parameter accuracy and the corresponding measurement trajectory fit error can be significantly affected. Of the error sources considered, instrumentation lags and control measurement errors were found to be most significant.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
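The Sobol' first-order index used in such analyses can be estimated with the standard pick-freeze scheme: two independent input samples, with one column swapped between them per factor. A toy sketch on a hypothetical two-forcing linear "model" (the study's model is the Utah Energy Balance code, not reproduced here):

```python
import numpy as np

def first_order_sobol(f, d, n, rng):
    """First-order Sobol' indices by the pick-freeze estimator.

    Draw two independent input matrices A and B; for factor i, build C_i
    from B with column i taken from A. The covariance between f(A) and
    f(C_i), normalized by Var f(A), estimates S_i.
    """
    A = rng.uniform(-1.0, 1.0, (n, d))
    B = rng.uniform(-1.0, 1.0, (n, d))
    yA = f(A)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]
        yC = f(C)
        S[i] = (np.mean(yA * yC) - np.mean(yA) * np.mean(yC)) / var
    return S

# Hypothetical output: forcing 1 contributes 3x the amplitude of forcing 2,
# so analytically S = [0.9, 0.1]
model = lambda X: 3.0 * X[:, 0] + 1.0 * X[:, 1]
S = first_order_sobol(model, d=2, n=200_000, rng=np.random.default_rng(1))
```

Replacing the uniform columns with bias and random-error perturbations of each forcing, under different error distributions and magnitudes, is the adaptation the study makes to rank forcing errors rather than model parameters.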
An improved numerical model for wave rotor design and analysis
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Wilson, Jack
1992-01-01
A numerical model has been developed which can predict both the unsteady flows within a wave rotor and the steady averaged flows in the ports. The model is based on the assumptions of one-dimensional, unsteady, and perfect gas flow. Besides the dominant wave behavior, it is also capable of predicting the effects of finite tube opening time, leakage from the tube ends, and viscosity. The relative simplicity of the model makes it useful for design, optimization, and analysis of wave rotor cycles for any application. This paper discusses some details of the model and presents comparisons between the model and two laboratory wave rotor experiments.
Numerical analysis of decoy state quantum key distribution protocols
Harrington, Jim W; Rice, Patrick R
2008-01-01
Decoy state protocols are a useful tool for many quantum key distribution systems implemented with weak coherent pulses, allowing significantly better secret bit rates and longer maximum distances. In this paper we present a method to numerically find optimal three-level protocols, and we examine how the secret bit rate and the optimized parameters are dependent on various system properties, such as session length, transmission loss, and visibility. Additionally, we show how to modify the decoy state analysis to handle partially distinguishable decoy states as well as uncertainty in the prepared intensities.
NASA Astrophysics Data System (ADS)
You, Jiong; Pei, Zhiyuan
2015-01-01
With the development of remote sensing technology, its applications in agriculture monitoring systems, crop mapping accuracy, and spatial distribution are more and more being explored by administrators and users. Uncertainty in crop mapping is profoundly affected by the spatial pattern of spectral reflectance values obtained from the applied remote sensing data. Errors in remotely sensed crop cover information and their propagation into derivative products need to be quantified and handled correctly. Therefore, this study discusses methods of error modeling for uncertainty characterization in crop mapping using GF-1 multispectral imagery. An error modeling framework based on geostatistics is proposed, which introduces the sequential Gaussian simulation algorithm to explore the relationship between classification errors and the spectral signature from the remote sensing data source. On this basis, a misclassification probability model is developed to produce a spatially explicit classification error probability surface for the map of a crop, which realizes the uncertainty characterization for crop mapping. In this process, trend surface analysis was carried out to generate a spatially varying mean response and the corresponding residual response with spatial variation for the spectral bands of GF-1 multispectral imagery. Variogram models were employed to measure the spatial dependence in the spectral bands and the derived misclassification probability surfaces. Simulated spectral data and classification results were quantitatively analyzed. Through experiments using data sets from a region in the low rolling country located in the Yangtze River valley, it was found that GF-1 multispectral imagery can be used for crop mapping with a good overall performance, the proposed error modeling framework can be used to quantify the uncertainty in crop mapping, and the misclassification probability model can summarize the spatial variation in map accuracy and is helpful for
Orbit Determination Error Analysis Results for the Triana Sun-Earth L2 Libration Point Mission
NASA Technical Reports Server (NTRS)
Marr, G.
2003-01-01
Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination error analysis results are presented for all phases of the Triana Sun-Earth L1 libration point mission and for the science data collection phase of a future Sun-Earth L2 libration point mission. The Triana spacecraft was nominally to be released by the Space Shuttle in a low Earth orbit, and this analysis focuses on that scenario. From the release orbit, a transfer trajectory insertion (TTI) maneuver performed using a solid stage would increase the velocity by approximately 3.1 km/sec, sending Triana on a direct trajectory to its mission orbit. The Triana mission orbit is a Sun-Earth L1 Lissajous orbit with a Sun-Earth-vehicle (SEV) angle between 4.0 and 15.0 degrees, which would be achieved after a Lissajous orbit insertion (LOI) maneuver at approximately launch plus 6 months. Because Triana was to be launched by the Space Shuttle, TTI could potentially occur over a 16 orbit range from low Earth orbit. This analysis was performed assuming TTI was performed from a low Earth orbit with an inclination of 28.5 degrees and assuming support from a combination of three Deep Space Network (DSN) stations, Goldstone, Canberra, and Madrid, and four commercial Universal Space Network (USN) stations, Alaska, Hawaii, Perth, and Santiago. These ground stations would provide coherent two-way range and range rate tracking data usable for orbit determination. Larger range and range rate errors were assumed for the USN stations. Nominally, DSN support would end at TTI+144 hours assuming there were no USN problems. Post-TTI coverage for a range of TTI longitudes for a given nominal trajectory case was analyzed. The orbit determination error analysis after the first correction maneuver would be generally applicable to any libration point mission utilizing a direct trajectory.
NASA Technical Reports Server (NTRS)
Byrnes, D. V.; Carney, P. C.; Underwood, J. W.; Vogt, E. D.
1974-01-01
The six month effort was responsible for the development, test, conversion, and documentation of computer software for the mission analysis of missions to halo orbits about libration points in the earth-sun system. The software consisting of two programs called NOMNAL and ERRAN is part of the Space Trajectories Error Analysis Programs. The program NOMNAL targets a transfer trajectory from earth on a given launch date to a specified halo orbit on a required arrival date. Either impulsive or finite thrust insertion maneuvers into halo orbit are permitted by the program. The transfer trajectory is consistent with a realistic launch profile input by the user. The second program ERRAN conducts error analyses of the targeted transfer trajectory. Measurements including range, doppler, star-planet angles, and apparent planet diameter are processed in a Kalman-Schmidt filter to determine the trajectory knowledge uncertainty.
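The filtering step at the heart of ERRAN can be sketched as a single Kalman measurement update; the Schmidt extension for "consider" parameters and the specific measurement models (range, doppler, star-planet angles, apparent planet diameter) are omitted. An illustrative 2-state example with a hypothetical range measurement:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman filter measurement update.

    x, P  : state estimate and its covariance before the measurement
    z     : measurement vector;  H : measurement matrix;  R : noise cov.
    Returns the updated estimate and the reduced covariance.
    """
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # correct state with innovation
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance shrinks
    return x_new, P_new

# Hypothetical 1-D trajectory: state = (position, velocity), range-only z
x0 = np.array([0.0, 1.0])
P0 = np.diag([4.0, 1.0])
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
x1, P1 = kalman_update(x0, P0, z=np.array([2.0]), H=H, R=R)
```

In an error analysis program the interest is less in `x1` than in the propagated covariance `P1`: sequences of such updates along the targeted transfer trajectory yield the knowledge uncertainty the abstract refers to.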
Development of an improved HRA method: A technique for human error analysis (ATHEANA)
Taylor, J.H.; Luckas, W.J.; Wreathall, J.
1996-03-01
Probabilistic risk assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA), while a critical element of PRA, has limitations in the analysis of human actions that have long been recognized as a constraint when using PRA. In fact, better integration of HRA into the PRA process has long been an NRC issue. Of particular concern has been the omission of errors of commission - those errors that are associated with inappropriate interventions by operators with operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification.
The use of failure mode effect and criticality analysis in a medication error subcommittee.
Williams, E; Talley, R
1994-04-01
Failure Mode Effect and Criticality Analysis (FMECA) is the systematic assessment of a process or product that enables one to determine the location and mechanism of potential failures. It has been used by engineers, particularly in the aerospace industry, to identify and prioritize potential failures during product development when there is a lack of data but an abundance of expertise. The Institute for Safe Medication Practices has recommended its use in analyzing the medication administration process in hospitals and in drug product development in the pharmaceutical industry. A medication error subcommittee adopted and modified FMECA to identify and prioritize significant failure modes in its specific medication administration process. Based on this analysis, the subcommittee implemented solutions to four of the five highest ranked failure modes. FMECA provided a method for a multidisciplinary group to address the most important medication error concerns based upon the expertise of the group members. It also facilitated consensus building in a group with varied perceptions. PMID:10133462
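The prioritization step in FMECA is commonly done with a risk priority number, RPN = severity × occurrence × detectability, each scored on a 1-10 scale; the abstract does not give the subcommittee's exact scoring scheme, so the scales and failure modes below are illustrative:

```python
def criticality_ranking(failure_modes):
    """Rank failure modes by risk priority number (RPN).

    RPN = severity * occurrence * detectability; higher RPN means the
    mode should be addressed sooner. Expert scores substitute for hard
    data, which is FMECA's appeal when data are scarce.
    """
    def rpn(m):
        return m["sev"] * m["occ"] * m["det"]
    return [(m["mode"], rpn(m))
            for m in sorted(failure_modes, key=rpn, reverse=True)]

# Hypothetical medication-administration failure modes and expert scores
modes = [
    {"mode": "wrong dose transcribed", "sev": 9,  "occ": 4, "det": 5},
    {"mode": "look-alike drug name",   "sev": 8,  "occ": 3, "det": 7},
    {"mode": "missed allergy check",   "sev": 10, "occ": 2, "det": 4},
]
ranked = criticality_ranking(modes)  # highest-RPN mode first
```

Working down the ranked list and fixing the top modes, as the subcommittee did for four of its top five, concentrates limited improvement effort where expert consensus places the greatest risk.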
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Technical Reports Server (NTRS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-01-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
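The averaging kernel comparison follows the standard retrieval relation x_smoothed = x_a + A(x − x_a): the higher-resolution profile is degraded to what the microwave instrument would retrieve, removing resolution and a priori effects before differencing. A toy 3-level sketch with hypothetical values, assuming both profiles are on the same grid:

```python
import numpy as np

def smooth_profile(x_high, x_apriori, A):
    """Apply a retrieval's averaging kernel matrix A to a
    higher-resolution profile:

        x_smoothed = x_a + A (x_high - x_a)

    so the two data sets are compared at the same effective resolution
    and with the same a priori influence.
    """
    return x_apriori + A @ (x_high - x_apriori)

# Toy kernel: each level only partially resolved, neighbors mixed in
A = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.5, 0.2],
              [0.0, 0.2, 0.8]])
x_apriori = np.array([1.0, 1.0, 1.0])     # a priori ozone (arbitrary units)
x_high = np.array([1.0, 2.0, 1.0])        # sharp feature in the fine profile
x_smoothed = smooth_profile(x_high, x_apriori, A)
```

The sharp middle-level enhancement is spread across adjacent levels and damped, which is exactly the effect that must be applied to SAGE II profiles before a fair comparison with the coarser microwave retrievals.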
Numerical Analysis on Air Ingress Behavior in GTHTR300H
Tetsuaki Takeda; Xing Yan; Kazuhiko Kunitomi
2006-07-01
Japan Atomic Energy Agency (JAEA) has been developing the analytical code for the safety characteristics of the HTGR and carrying out design study of the gas turbine high temperature reactor of 300 MWe nominal-capacity for hydrogen production, the GTHTR300H (Gas Turbine High Temperature Reactor 300 for Hydrogen). The objective of this study is to clarify safety characteristics of the GTHTR300H for the pipe rupture accident. A numerical analysis of heat and mass transfer fluid flow with multi-component gas mixture has been performed to obtain the variation of the density of the gas mixture, and the onset time of natural circulation of air. From the results obtained in this analysis, it was found that the duration time of the air ingress by molecular diffusion would increase due to the existence of the recuperator in the GTHTR300H system. (authors)
Numerical analysis of cocurrent conical and cylindrical axial cyclone separators
NASA Astrophysics Data System (ADS)
Nor, M. A. M.; Al-Kayiem, H. H.; Lemma, T. A.
2015-12-01
The axial cocurrent liquid-liquid separator is seen as an alternative to the traditional tangential countercurrent cyclone due to lower droplet break-up, turbulence and pressure drop. This paper presents a numerical analysis of a new conical axial cocurrent design along with a comparison to the cylindrical axial cocurrent type. The simulation was carried out using CFD techniques in the ANSYS-FLUENT software. The simulation results were validated by comparison with experimental data from the literature, and mesh independency and quality checks were performed. The analysis indicates that the conical version achieves better separation performance than the cylindrical type, with a tangential velocity 8% higher and an axial-velocity recirculation 80% lower. The flow visualization contours also show a smaller recirculation region relative to the cylindrical unit. The proposed conical design appears more efficient and suits crude/water separation in the O&G industry.
NASA Technical Reports Server (NTRS)
Seasholtz, R. G.
1977-01-01
A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self-contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.
Asymptotic and numerical analysis of electrohydrodynamic flows of dielectric liquid.
Suh, Y K; Baek, K H; Cho, D S
2013-08-01
We perform an asymptotic analysis of electrohydrodynamic (EHD) flow of nonpolar liquid subjected to an external, nonuniform electric field. The domain of interest covers the bulk as well as the thin dissociation layers (DSLs) near the electrodes. Outer (i.e., bulk) equations for the ion transport in hierarchical order of perturbation parameters can be expressed in linear form, whereas the inner (i.e., DSL) equations take a nonlinear form. We derive a simple formula in terms of various parameters which can be used to estimate the relative importance of the DSL-driven flow compared with the bulk-driven flow. EHD flow over a pair of cylindrical electrodes is then solved asymptotically and numerically. It is found that in large geometric scale and high ion concentration the EHD flow is dominated by the bulk-charge-induced flow. As the scale and concentration are decreased, the DSL-driven slip velocity increases and the resultant flow tends to dominate the domain and finally leads to flow reversal. We also conduct a flow-visualization experiment to verify the analysis and attain good agreement between the two results with parameter tuning. We finally show, based on the comparison of experimental and numerical solutions, that the rate of free-ion generation (dissociation) should be less than the one predicted from the existing formula. PMID:24032920
Numerical analysis of distortion characteristics of heterojunction bipolar transistor laser
NASA Astrophysics Data System (ADS)
Piramasubramanian, S.; Ganesh Madhan, M.; Nagella, Jyothsna; Dhanapriya, G.
2015-12-01
Numerical analysis of harmonic and third order intermodulation distortion of transistor laser is presented in this paper. The three level rate equations are numerically solved to determine the modulation and distortion characteristics. DC and AC analysis on the device are carried out to determine its power-current and frequency response characteristics. Further, the effects of quantum well recombination time and electron capture time in the quantum well, on the modulation depth and distortion characteristics are examined. It is observed that the threshold current density of the device decreases with increasing electron lifetime, which coincides with earlier findings. Also, the magnitude of harmonic distortion and intermodulation products are found to reduce with increasing current density and with a reduction of spontaneous emission recombination lifetime. However, an increase of electron capture time improves the distortion performance. A maximum modulation depth of 18.42 dB is obtained for 50 ps spontaneous emission life time and 1 ps electron capture time, for 2.4 GHz frequency at a current density of 2Jth. A minimum second harmonic distortion magnitude of -66.8 dBc is predicted for 50 ps spontaneous emission life time and 1 ps electron capture time for 2.4 GHz frequency, at a current density of 7Jth. Similarly, a minimum third order intermodulation distortion of -83.93 dBc is obtained for 150 ps spontaneous emission life time and 5 ps electron capture time under similar biasing conditions.
Numerical analysis and experimental verification of vehicle trajectories
NASA Astrophysics Data System (ADS)
Wekezer, J. W.; Cichocki, K.
2003-09-01
The paper presents research results of a study in which computational mechanics was utilized to predict vehicle trajectories upon traversing standard Florida DOT street curbs. Computational analysis was performed using the LS-DYNA non-linear finite element computer code with two public domain finite element models of motor vehicles: Ford Festiva and Ford Taurus. Shock absorbers were modeled using discrete spring and damper elements. Connections for the modified suspension systems were carefully designed to assure a proper range of motion for the suspension models. Inertia properties of the actual vehicles were collected using tilt-table tests and were used for the LS-DYNA vehicle models. Full-scale trajectory tests were performed at the Texas Transportation Institute to validate the numerical models and predictions from computational mechanics. Experiments were conducted for the Ford Festiva and Ford Taurus, each for two values of approach angle, 15 and 90 degrees, with an impact velocity of 45 mph. Experimental data including accelerations, displacements and overall vehicle behavior were collected by high-speed video cameras and have been compared with the numerical results. Verification indicated a good correlation between the computational analysis and full-scale test data. The study also underlined a strong dependence of the resulting vehicle trajectories on properly modeled suspensions and tires.
Styck, Kara M; Walsh, Shana M
2016-01-01
The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. PMID:26011479
Numerical analysis of modified Central Solenoid insert design
Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; Titus, Peter
2015-06-21
The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, currents and mechanical strain. The USIPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagnetic simulations supported structural calculations, providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies ITER magnet structural design criteria for the following conditions: (1) room temperature, no current, (2) temperature 4K, no current, (3) temperature 4K, current 60 kA direct charge, and (4) temperature 4K, current 60 kA reverse charge. A fatigue life assessment was performed for the alternating conditions of: temperature 4K, no current, and temperature 4K, current 45 kA direct charge. Results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing a one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.
Principal components analysis of reward prediction errors in a reinforcement learning task.
Sambrook, Thomas D; Goslin, Jeremy
2016-01-01
Models of reinforcement learning represent reward and punishment in terms of reward prediction errors (RPEs), quantitative signed terms describing the degree to which outcomes are better than expected (positive RPEs) or worse (negative RPEs). An electrophysiological component known as feedback-related negativity (FRN) occurs at frontocentral sites 240-340 ms after feedback on whether a reward or punishment is obtained, and has been claimed to neurally encode an RPE. An outstanding question, however, is whether the FRN is sensitive to the size of both positive RPEs and negative RPEs. Previous attempts to answer this question have examined the simple effects of RPE size for positive RPEs and negative RPEs separately. However, this methodology can be compromised by overlap from components coding for unsigned prediction error size, or "salience", which are sensitive to the absolute size of a prediction error but not its valence. In our study, positive and negative RPEs were parametrically modulated using both reward likelihood and magnitude, with principal components analysis used to separate out overlying components. This revealed a single RPE encoding component responsive to the size of positive RPEs, peaking at ~330 ms, and occupying the delta frequency band. Other components responsive to unsigned prediction error size were shown, but no component sensitive to negative RPE size was found. PMID:26196667
NASA Astrophysics Data System (ADS)
Yang, Liangen; Wang, Xuanze; Lv, Wei
2011-05-01
A displacement sensor with controlled measuring force, together with its error analysis and precision verification, is discussed in this paper. The displacement sensor consists of a high-resolution electric induction transducer and a voice coil motor (VCM). The measuring principles, structure, method for enlarging the measuring range, and signal processing of the sensor are discussed. The main error sources are analyzed, such as the parallelism error and incline of the framework caused by unequal lengths of the leaf springs, the rigidity of the measuring rods, the shape error of the stylus, friction between the iron core and other parts, damping of the leaf springs, variation of voltage, and the linearity, resolution, and stability of the induction transducer. A measuring system for surface topography with a large measuring range is constructed based on the displacement sensor and a 2D moving platform. The measuring precision and stability of the measuring system are verified. The measuring force of the sensor during surface topography measurement can be controlled at the μN level and hardly changes. The sensor has been used in measurements of bearing balls, bullet marks, etc. It has a measuring range of up to 2 mm and nanometre-level precision.
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.
1994-01-01
Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero-gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
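The error metric described above (distance from each known grid point to its digitized counterpart, expressed relative to the grid size) can be sketched in a few lines. The coordinates and the 10 cm field of view below are invented for illustration:

```python
import numpy as np

# Hypothetical grid-point data: known positions vs. digitized positions (cm).
known = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
digitized = np.array([[0.1, -0.2], [10.4, 0.1], [-0.3, 10.2], [10.6, 10.5]])

# Per-point error is the Euclidean distance between known and digitized.
errors = np.linalg.norm(digitized - known, axis=1)

# Express the worst-case error as a percentage of the grid extent.
field_of_view = 10.0  # cm
percent_error = 100.0 * errors / field_of_view
print(percent_error.max())  # worst-case point, in percent of field size
```

Points near the corners of a wide-angle lens would typically show the largest distances, matching the advice to avoid the outermost image regions.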
An analysis of the effects of initial velocity errors on geometric pairing
NASA Astrophysics Data System (ADS)
Schricker, Bradley C.; Ford, Louis
2007-04-01
For a number of decades, among the most prevalent training media in the military has been Tactical Engagement Simulation (TES) training. TES has allowed troops to train for practical missions in highly realistic combat environments without the associated risks involved with live weaponry and munitions. This has been possible because current TES has relied largely upon the Multiple Integrated Laser Engagement System (MILES) and similar systems for a number of years for direct-fire weapons, using a laser to pair the shooter to the potential target(s). Emerging systems, on the other hand, will use a pairing method called geometric pairing (geo-pairing), which uses a set of data about both the shooter and target, such as locations, weapon orientations, velocities, weapon projectile velocities, and nearby terrain, to resolve an engagement. A previous paper [1] introduces various potential sources of error for a geo-pairing solution. This paper goes into greater depth regarding the impact of errors that originate within initial velocity errors, beginning with a short introduction to the TES system (TESS). The next section will explain the modeling characteristics of the projectile motion, followed by a mathematical analysis illustrating the impacts of errors related to those characteristics. A summary and conclusion containing recommendations will close this paper.
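As a rough illustration of how an initial-velocity error propagates into a geo-pairing error, the following sketch uses a flat-fire, drag-free trajectory rather than the paper's actual projectile model; every number here is invented:

```python
import math

# Flat-fire, drag-free approximation (NOT the paper's model): the gravity
# drop over the flight depends on time of flight, which depends on the
# assumed muzzle velocity, so a velocity error shifts the predicted impact.
g = 9.81           # m/s^2
range_m = 500.0    # shooter-to-target distance, m
v_true = 900.0     # true muzzle velocity, m/s
v_assumed = 880.0  # velocity the geo-pairing model assumes, m/s

def drop(v):
    t = range_m / v         # time of flight under the flat-fire approximation
    return 0.5 * g * t * t  # gravity drop accumulated over the flight, m

miss = abs(drop(v_true) - drop(v_assumed))
print(round(miss, 4))  # vertical pairing error in metres
```

Even this crude model shows the scaling: the drop error grows with the square of range and inversely with the cube of velocity, so long-range engagements are the ones most sensitive to initial-velocity errors.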
Analysis of S-box in Image Encryption Using Root Mean Square Error Method
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-07-01
The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, which include the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to existing algebraic and statistical analysis already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
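The root mean square error metric itself is simple to state; a minimal sketch follows, where the tiny 2x2 "images" are invented stand-ins for a plaintext image and its encrypted counterpart:

```python
import numpy as np

# Invented 2x2 greyscale "images": a plaintext and its encrypted version.
# Real use would load full images; a larger RMSE between plaintext and
# ciphertext suggests the S-box scrambles pixel intensities more strongly.
original = np.array([[10, 20], [30, 40]], dtype=float)
encrypted = np.array([[200, 15], [90, 250]], dtype=float)

rmse = np.sqrt(np.mean((original - encrypted) ** 2))
print(rmse)
```

Comparing this single scalar across candidate S-boxes is what lets the analyst rank them when the algebraic criteria alone are inconclusive.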
A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory
NASA Technical Reports Server (NTRS)
Elander, Valjean; Koshak, William; Phanord, Dieudonne
2004-01-01
The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing ZEUS sites (as well as potential future sites) to simulate arrival time data between each source and site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
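The noise model described above (zero-mean normal timing errors with a 20 microsecond standard deviation, 100 trials per source) can be sketched as follows. The conversion to an equivalent propagation distance is an illustrative bound only, not the IO retrieval itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Timing errors: normal distribution, mean 0 s, standard deviation 20 us,
# with 100 trials per source location as in the study design above.
sigma = 20e-6   # s
n_trials = 100
timing_err = rng.normal(0.0, sigma, n_trials)

# A timing error maps to a propagation-distance error at roughly the speed
# of light, which bounds the achievable location accuracy per baseline.
c = 299_792_458.0  # m/s
range_err_km = np.abs(timing_err) * c / 1000.0
print(range_err_km.mean())  # typical per-baseline distance error, km
```

A 20 microsecond error corresponds to roughly 6 km of propagation distance, which is why the resulting error maps are dominated by receiver geometry rather than by raw timing precision.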
Stochastic algorithms for the analysis of numerical flame simulations
Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.
2004-04-26
Recent progress in simulation methodologies and high-performance parallel computers has made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NOx production for a steady diffusion flame.
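The stochastic particle formulation described above can be sketched in one dimension: advection is deterministic, diffusion is a Brownian increment whose size depends on the current host species, and a reaction is a random switch of host. The velocity, diffusivities, species names, and switching rate below are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented two-species toy: a "carbon atom" hosted by CO or CO2, with
# different diffusivities per host and a constant reaction switching rate.
DIFFUSIVITY = {"CO": 0.1, "CO2": 0.05}

def step(x, species, dt, u=1.0, k_switch=0.5):
    """Advance one tracked atom by dt along a 1-D domain."""
    x = x + u * dt                                                  # deterministic advection
    x = x + np.sqrt(2.0 * DIFFUSIVITY[species] * dt) * rng.normal() # diffusive random walk
    if rng.random() < k_switch * dt:                                # reaction: change host species
        species = "CO2" if species == "CO" else "CO"
    return x, species

x, species = 0.0, "CO"
for _ in range(1000):
    x, species = step(x, species, dt=1e-3)
print(round(x, 3), species)
```

An ensemble of such trajectories, binned in space and time, is what the diagnostic compares against the continuum Eulerian solution.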
Numerical analysis of impact-damaged sandwich composites
NASA Astrophysics Data System (ADS)
Hwang, Youngkeun
Sandwich structures are used in a wide variety of structural applications due to their relative advantages over other conventional structural materials in terms of improved stability, weight savings, and ease of manufacture and repair. Foreign object impact damage in sandwich composites can result in localized damage to the facings, core, and core-facing interface. Such damage may result in drastic reductions in composite strength, elastic moduli, and durability and damage tolerance characteristics. In this study, physically-motivated numerical models have been developed for predicting the residual strength of impact-damaged sandwich composites comprised of woven-fabric graphite-epoxy facesheets and Nomex honeycomb cores subjected to compression-after-impact loading. Results from non-destructive inspection and destructive sectioning of damaged sandwich panels were used to establish initial conditions for damage (residual facesheet indentation, core crush dimension, etc.) in the numerical analysis. Honeycomb core crush test results were used to establish the nonlinear constitutive behavior for the Nomex core. The influence of initial facesheet property degradation and progressive loss of facesheet structural integrity on the residual strength of impact-damaged sandwich panels was examined. The influence of damage of various types and sizes, specimen geometry, support boundary conditions, and variable material properties on the estimated residual strength is discussed. Facesheet strains from material and geometric nonlinear finite element analyses correlated relatively well with experimentally determined values. Moreover, numerical predictions of residual strength are consistent with experimental observations. Using a methodology similar to that presented in this work, it may be possible to develop robust residual strength estimates for complex sandwich composite structural components with varying levels of in-service damage. Such studies may facilitate sandwich
Technique of analysis and error detection for thermo-hydraulic system data
Bordner, G.L.
1985-01-01
Statistical techniques based on estimation theory were developed for the analysis of steady-state data from thermo-hydraulic systems, which could be either experimental loops or operating power plants. The method seeks to resolve errors in the component heat balances which describe the system, to obtain system parameter estimates which are more accurate than the raw data, and to flag possible faulty sensors. Sample results are given for the analysis of test data from the Sodium Loop Safety Facility (SLSF) P3 experiment.
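The reconciliation idea (adjust noisy measurements by weighted least squares so they satisfy the balance exactly, then flag the sensor with the largest adjustment) can be sketched as a generic linear data-reconciliation step. This is not the SLSF analysis itself; the flows, uncertainties, and the single mass-balance constraint are invented:

```python
import numpy as np

# Invented steady-state measurements: one inlet flow and two outlet flows,
# which should satisfy the balance y0 - y1 - y2 = 0 but do not, due to noise.
y = np.array([10.2, 4.9, 5.6])       # measured values
sigma = np.array([0.2, 0.1, 0.1])    # sensor standard deviations
A = np.array([[1.0, -1.0, -1.0]])    # linear balance constraint A @ y = 0

# Constrained weighted least squares: the adjustment is
#   adjust = W A^T (A W A^T)^-1 (A y),  W = diag(sigma^2),
# so less-trusted sensors (larger sigma) absorb more of the imbalance.
W = np.diag(sigma**2)
adjust = W @ A.T @ np.linalg.solve(A @ W @ A.T, A @ y)
reconciled = y - adjust.ravel()

print(reconciled)                    # estimates satisfying the balance exactly
print(np.argmax(np.abs(adjust)))     # sensor flagged for the largest correction
```

Here the inlet sensor, having the largest stated uncertainty, receives the largest correction, which is exactly the kind of flag the method uses to point at a possibly faulty sensor.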
Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; McLauchlan, Lifford
2010-08-01
In this paper, diabetic retinopathy is chosen for a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected, and classification parameters are optimized for minimum false-positive detection in the original and enlarged retinal images. The error analysis demonstrates the advantages as well as shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
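Pixel duplication as contrasted above with interpolation can be sketched with NumPy's `repeat`: every pixel is copied along both axes, so no new intensity values are invented. The tiny test image is made up:

```python
import numpy as np

# Invented 2x2 greyscale image; real use would load a retinal image.
image = np.array([[1, 2],
                  [3, 4]], dtype=np.uint8)

# Duplicate each pixel 'factor' times along rows, then along columns.
factor = 2
enlarged = np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)
print(enlarged)
```

A smoothing (spatial averaging) filter applied afterwards blurs the edges of these duplicated blocks, which is precisely the interaction the error analysis above examines.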
NASA Astrophysics Data System (ADS)
Ross, A.; Czisch, M.; King, G. C.
1997-02-01
A theoretical approach to calculate the time evolution of magnetization during a CPMG pulse sequence of arbitrary parameter settings is developed and verified by experiment. The analysis reveals that off-resonance effects can cause systematic reductions in measured peak amplitudes that commonly lie in the range 5-25%, reaching 50% in unfavorable circumstances. These errors, which are finely dependent upon frequency offset and CPMG parameter settings, are subsequently transferred into erroneous T2 values obtained by curve fitting, where they are reduced or amplified depending upon the magnitude of the relaxation time. Subsequent transfer to Lipari-Szabo model analysis can produce significant errors in derived motional parameters, with τe internal correlation times being affected somewhat more than S2 order parameters. A hazard of this off-resonance phenomenon is its oscillatory nature, so that strongly affected and unaffected signals can be found at various frequencies within a CPMG spectrum. Methods for the reduction of the systematic error are discussed. Relaxation studies on biomolecules, especially at high field strengths, should take account of potential off-resonance contributions.
NASA Technical Reports Server (NTRS)
Byrnes, D. V.; Carney, P. C.; Underwood, J. W.; Vogt, E. D.
1974-01-01
Development, test, conversion, and documentation of computer software for the mission analysis of missions to halo orbits about libration points in the earth-sun system is reported. The software consisting of two programs called NOMNAL and ERRAN is part of the Space Trajectories Error Analysis Programs (STEAP). The program NOMNAL targets a transfer trajectory from Earth on a given launch date to a specified halo orbit on a required arrival date. Either impulsive or finite thrust insertion maneuvers into halo orbit are permitted by the program. The transfer trajectory is consistent with a realistic launch profile input by the user. The second program ERRAN conducts error analyses of the targeted transfer trajectory. Measurements including range, doppler, star-planet angles, and apparent planet diameter are processed in a Kalman-Schmidt filter to determine the trajectory knowledge uncertainty. Execution errors at injection, midcourse correction and orbit insertion maneuvers are analyzed along with the navigation uncertainty to determine trajectory control uncertainties and fuel-sizing requirements. The program is also capable of generalized covariance analyses.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and the results of the parametric studies performed are presented.
Numerical Analysis of Film Cooling at High Blowing Ratio
NASA Technical Reports Server (NTRS)
El-Gabry, Lamyaa; Heidmann, James; Ameri, Ali
2009-01-01
Computational Fluid Dynamics is used in the analysis of a film cooling jet in crossflow. Predictions of film effectiveness are compared with experimental results for a circular jet at blowing ratios ranging from 0.5 to 2.0. Film effectiveness is a surface quantity which alone is insufficient in understanding the source and finding a remedy for shortcomings of the numerical model. Therefore, in addition, comparisons are made to flow field measurements of temperature along the jet centerline. These comparisons show that the CFD model is accurately predicting the extent and trajectory of the film cooling jet; however, there is a lack of agreement in the near-wall region downstream of the film hole. The effects of main stream turbulence conditions, boundary layer thickness, turbulence modeling, and numerical artificial dissipation are evaluated and found to have an insufficient impact in the wake region of separated films (i.e. cannot account for the discrepancy between measured and predicted centerline fluid temperatures). Analyses of low and moderate blowing ratio cases are carried out and results are in good agreement with data.
Cao, Junjie; Jia, Hongzhi
2015-11-15
We propose error analysis using a rotating coordinate system with three parameters of linearly polarized light (incidence angle, azimuth angle on the front surface, and the angle between the incidence and vibration planes), and demonstrate the method on a rotating birefringent prism system. The transmittance and angles are calculated plane-by-plane using a birefringence ellipsoid model and the final transmitted intensity equation is deduced. The effects of oblique incidence, light interference, beam convergence, and misalignment of the rotation and prism axes are discussed. We simulate the entire error model using MATLAB and conduct experiments based on a built polarimeter. The simulation and experimental results are consistent and demonstrate the rationality and validity of this method.
Error analysis of the quadratic nodal expansion method in slab geometry
Penland, R.C.; Turinsky, P.J.; Azmy, Y.Y.
1994-10-01
As part of an effort to develop an adaptive mesh refinement strategy for use in state-of-the-art nodal diffusion codes, the authors derive error bounds on the solution variables of the quadratic Nodal Expansion Method (NEM) in slab geometry. Closure of the system is obtained through flux discontinuity relationships and boundary conditions. In order to verify the analysis presented, the authors compare the quadratic NEM to the analytic solution of a test problem. The test problem for this investigation is a one-dimensional slab [0, 20 cm] with L^2 = 6.495 cm^2 and D = 0.1429 cm. The slab has a unit neutron source distributed uniformly throughout and zero-flux boundary conditions. The analytic solution to this problem is used to compute the node-average fluxes over a variety of meshes, and these are used to compute the NEM maximum error on each mesh.
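The benchmark above is fully specified, so the analytic reference solution and its node-average fluxes can be sketched directly. The formula below is the standard one-group diffusion solution for a uniformly sourced slab with zero-flux boundaries, assuming L^2 = D/Sigma_a as usual:

```python
import math

# Problem data from the test problem above.
L2, D, a, S = 6.495, 0.1429, 20.0, 1.0   # cm^2, cm, cm, unit source
L = math.sqrt(L2)
phi0 = S * L2 / D                        # S / Sigma_a, the asymptotic flux level

def phi(x):
    """Analytic flux of -D*phi'' + (D/L^2)*phi = S with phi(0) = phi(a) = 0."""
    return phi0 * (1.0 - math.cosh((x - a / 2) / L) / math.cosh(a / (2 * L)))

def node_average(x1, x2):
    """Exact average of phi over [x1, x2], using the antiderivative of cosh."""
    integral = phi0 * ((x2 - x1)
                       - L * (math.sinh((x2 - a / 2) / L)
                              - math.sinh((x1 - a / 2) / L))
                       / math.cosh(a / (2 * L)))
    return integral / (x2 - x1)

print(phi(10.0), node_average(0.0, 5.0))  # midplane flux, first-node average
```

Evaluating `node_average` over each node of a given mesh yields exactly the reference values against which the NEM node-average fluxes are compared.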
Errors analysis of dimensional metrology for internal components assembly of EAST
NASA Astrophysics Data System (ADS)
Gu, Yongqi; Liu, Chen; Xi, Weibin; Lu, Kun; Wei, Jing; Song, Yuntao; Yu, Liandong; Ge, Jian; Zheng, Yuanyang; Zhao, Huining; Zheng, Fubin; Wang, Jun
2016-01-01
The precision of dimensional measurement plays an important role in guaranteeing the assembly accuracy of the internal components during the upgrade phase of the EAST device. In this paper, experimental research and analysis were carried out based on a three-dimensional combined measurement system, comprising a Laser Tracker, a flexible Measure ARM, and a measurement fiducials network, which is used for the alignment and measurement of EAST components during the assembly process. The error sources, e.g. temperature, gravity, welding, and so on, were analyzed, and the effective weight of each kind of error source was estimated by simulation. These results were then used to correct and compensate the actual measured data; the stability and consistency of the measurement results were greatly improved across different measurement processes, and the assembly precision of the EAST components was ensured.
A Cartesian parametrization for the numerical analysis of material instability
Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; Ostien, Jakob T.; Lai, Zhengshou
2016-02-25
We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie on a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
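A hedged sketch of the cube-face direction scan follows, using a simple isotropic solid rather than the constitutive models examined in the paper. For an isotropic material the acoustic tensor is Q(n) = mu (n.n) I + (lam + mu) n (x) n, and its determinant stays positive for every direction whenever the material is stable; the moduli below are invented:

```python
import numpy as np

# Invented isotropic moduli satisfying the stability conditions
# mu > 0 and lam + 2*mu > 0, so no singular direction should be found.
lam, mu = 1.0, 0.5

def acoustic_tensor(n):
    """Acoustic tensor of an isotropic solid for a (not necessarily unit) n."""
    return mu * np.dot(n, n) * np.eye(3) + (lam + mu) * np.outer(n, n)

# Scan directions on one face of the cube [-1, 1]^3 (z = 1); the remaining
# faces follow by symmetry. Cube vectors replace unit normals, as proposed.
min_det = min(
    np.linalg.det(acoustic_tensor(np.array([x, y, 1.0])))
    for x in np.linspace(-1, 1, 21)
    for y in np.linspace(-1, 1, 21)
)
print(min_det > 0.0)  # no singular direction: ellipticity holds
```

The scan avoids the trigonometric evaluations of spherical parametrizations, which is the source of the efficiency advantage reported above; a softening material model would drive `min_det` through zero in the unstable planes.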
Numerical Analysis for Structural Safety Evaluation of Butterfly Valves
NASA Astrophysics Data System (ADS)
Shin, Myung-Seob; Yoon, Joon-Yong; Park, Han-Yung
2010-06-01
Butterfly valves are widely used in industry to control fluid flow. They are used for both on-off and throttling applications involving large flows at relatively low operating pressures, especially in large pipelines. For industrial application, it must be ensured that the valve can be used safely with respect to fatigue life and the deformations produced by the fluid pressure. In this study, we carried out a structural analysis of the body and the valve disc of the butterfly valve; the numerical simulation was performed using ANSYS v11.0. The reliability of the valve is evaluated through investigation of the deformation, the leak test, and the durability of the valve.
Preliminary Numerical and Experimental Analysis of the Spallation Phenomenon
NASA Technical Reports Server (NTRS)
Martin, Alexandre; Bailey, Sean C. C.; Panerai, Francesco; Davuluri, Raghava S. C.; Vazsonyi, Alexander R.; Zhang, Huaibao; Lippay, Zachary S.; Mansour, Nagi N.; Inman, Jennifer A.; Bathel, Brett F.; Splinter, Scott C.; Danehy, Paul M.
2015-01-01
The spallation phenomenon was studied through numerical analysis using a coupled Lagrangian particle tracking code and a hypersonic aerothermodynamics computational fluid dynamics solver. The results show that carbon emission from spalled particles results in a significant modification of the gas composition of the post-shock layer. Preliminary results from a test campaign at the NASA Langley HYMETS facility are presented. Using automated image processing of high-speed images, two-dimensional velocity vectors of the spalled particles were calculated. In a 30 second test at 100 W/cm2 of cold-wall heat flux, more than 1300 particles were detected, with an average velocity of 102 m/s and a most frequently observed velocity of 60 m/s.
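The velocity extraction from high-speed imagery can be sketched as follows: a particle's centroid is tracked across consecutive frames, and its speed follows from the pixel displacement, the image scale, and the frame rate. The frame rate, scale, and centroid positions below are invented:

```python
import math

# Invented imaging parameters and centroid positions (pixels).
frame_rate = 10_000.0  # frames per second
scale = 0.2e-3         # metres per pixel
p0 = (120.0, 80.0)     # particle centroid in frame n
p1 = (155.0, 92.0)     # particle centroid in frame n + 1

# Displacement in metres between consecutive frames, then speed.
dx = (p1[0] - p0[0]) * scale
dy = (p1[1] - p0[1]) * scale
speed = math.hypot(dx, dy) * frame_rate  # m/s
print(round(speed, 1))
```

Repeating this over all detected particles yields the velocity distribution summarized above (average 102 m/s, mode near 60 m/s).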
Numerical analysis of boosting scheme for scalable NMR quantum computation
SaiToh, Akira; Kitagawa, Masahiro
2005-02-01
Among initialization schemes for ensemble quantum computation beginning at thermal equilibrium, the scheme proposed by Schulman and Vazirani [in Proceedings of the 31st ACM Symposium on Theory of Computing (STOC'99) (ACM Press, New York, 1999), pp. 322-329] is known for its simple quantum circuit for redistributing the biases (polarizations) of qubits and for its small time complexity. However, our numerical simulation shows that the number of qubits initialized by the scheme is rather smaller than expected from the von Neumann entropy, because of an increase in the sum of the binary entropies of the individual qubits, which indicates a growth in the total classical correlation. This result--namely, that there is such a significant growth in the total binary entropy--disagrees with that of their analysis.
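The quantity at issue, the sum of single-qubit binary entropies growing relative to the joint entropy, can be illustrated classically. A minimal sketch using an arbitrary correlated two-bit distribution (illustrative only, not the Schulman-Vazirani circuit):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p*math.log2(p) - (1.0 - p)*math.log2(1.0 - p)

def joint_entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p*math.log2(p) for p in probs if p > 0)

# Correlated two-bit distribution: P(00)=P(11)=0.4, P(01)=P(10)=0.1
probs = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_first = probs[(1, 0)] + probs[(1, 1)]    # marginal P(first bit = 1) = 0.5
p_second = probs[(0, 1)] + probs[(1, 1)]   # marginal P(second bit = 1) = 0.5

marginal_sum = h2(p_first) + h2(p_second)  # sum of binary entropies = 2.0 bits
joint = joint_entropy(probs.values())      # joint entropy < 2.0 bits
total_correlation = marginal_sum - joint   # > 0 signals classical correlation
```

The gap `marginal_sum - joint` is exactly the total (classical) correlation the abstract refers to; for an uncorrelated distribution it would be zero.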
Stability analysis and numerical simulation of simplified solid rocket motors
NASA Astrophysics Data System (ADS)
Boyer, G.; Casalis, G.; Estivalèzes, J.-L.
2013-08-01
This paper investigates the Parietal Vortex Shedding (PVS) instability that significantly influences the Pressure Oscillations of the long and segmented solid rocket motors. The eigenmodes resulting from the stability analysis of a simplified configuration, namely, a cylindrical duct with sidewall injection, are presented. They are computed taking into account the presence of a wall injection defect, which is shown to induce hydrodynamic instabilities at discrete frequencies. These instabilities exhibit eigenfunctions in good agreement with the measured PVS vortical structures. They are successfully compared in terms of temporal evolution and frequencies to the unsteady hydrodynamic fluctuations computed by numerical simulations. In addition, this study has shown that the hydrodynamic instabilities associated with the PVS are the driving force of the flow dynamics, since they are responsible for the emergence of pressure waves propagating at the same frequency.
Random dynamic load identification based on error analysis and weighted total least squares method
NASA Astrophysics Data System (ADS)
Jia, You; Yang, Zhichun; Guo, Ning; Wang, Le
2015-12-01
Random dynamic load identification problems in structural dynamics are in general ill-posed. A common approach is to reformulate them into well-posed problems by numerical regularization methods. In a previous paper by the authors, a random dynamic load identification model was built, and a weighted regularization approach based on the proper orthogonal decomposition (POD) was proposed to identify the random dynamic loads. In this paper, the upper bound of the relative load identification error in the frequency domain is derived. The selection condition and the specific form of the weighting matrix are also proposed and validated analytically and experimentally. To improve the accuracy of random dynamic load identification, a weighted total least squares method is proposed to reduce the impact of these errors. To further validate the feasibility and effectiveness of the proposed method, a comparative experimental study of the proposed method and other methods is conducted. The experimental results demonstrate that the weighted total least squares method is more effective than the other methods for random dynamic load identification.
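The paper's weighted total least squares operates on the full matrix load-identification equations; as a minimal illustration of the TLS principle (errors in both sides of the equation are accounted for, unlike ordinary least squares), here is the closed-form orthogonal regression fit of a line, the scalar special case:

```python
import math

def tls_line(xs, ys):
    """Total-least-squares (orthogonal regression) line fit y = a + b*x.
    Minimizes perpendicular distances, so errors in BOTH xs and ys count."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar)**2 for x in xs)
    syy = sum((y - ybar)**2 for y in ys)
    sxy = sum((x - xbar)*(y - ybar) for x, y in zip(xs, ys))
    # Closed-form TLS slope (Deming regression with equal error variances)
    b = (syy - sxx + math.sqrt((syy - sxx)**2 + 4.0*sxy**2)) / (2.0*sxy)
    return ybar - b*xbar, b

a, b = tls_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])  # exact line y = 1 + 2x
```

For noise-free data TLS and ordinary least squares agree; they diverge when the "input" side is noisy, which is the situation the weighted TLS of the paper addresses in matrix form.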
Experimental and Numerical Analysis of Notched Composites Under Tension Loading
NASA Astrophysics Data System (ADS)
Aidi, Bilel; Case, Scott W.
2015-12-01
Experimental quasi-static tests were performed on center-notched carbon fiber reinforced polymer (CFRP) composites having different stacking sequences made of G40-600/5245C prepreg. The three-dimensional Digital Image Correlation (DIC) technique was used during quasi-static tests conducted on quasi-isotropic notched samples to obtain the distribution of strains as a function of applied stress. A finite element model was built within Abaqus to predict the notched strength and the strain profiles for comparison with measured results. A user-material subroutine using the multi-continuum theory (MCT) as a failure initiation criterion and an energy-based damage evolution law, as implemented by Autodesk Simulation Composite Analysis (ASCA), was used to conduct a quantitative comparison of the strain components predicted by the analysis and obtained in the experiments. Good agreement between the experimental data and the numerical results is observed. Modal analysis was carried out to investigate the effect of static damage on the dominant frequencies of the notched structure using the resulting degraded material elements. The first in-plane mode was found to be a good candidate for tracking the level of damage.
Ginting, Victor
2014-03-15
It was demonstrated that a posteriori analyses in general, and in particular those that use adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.
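The core adjoint identity behind such a posteriori QoI error estimates can be shown on a small linear system: for a QoI q = cᵀx, the error of any approximate solution equals the adjoint-weighted residual, cᵀ(x − x̃) = wᵀr with Aᵀw = c. A self-contained sketch with illustrative numbers (not taken from the report):

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - A[0][1]*b[1]) / det,
            (A[0][0]*b[1] - b[0]*A[1][0]) / det]

A = [[4.0, 1.0], [2.0, 3.0]]
b = [1.0, 2.0]
c = [1.0, -1.0]                       # quantity of interest q = c . x

x_exact = solve2(A, b)
# A perturbed "numerical" solution standing in for a discretization error
x_approx = [x + dx for x, dx in zip(x_exact, [0.01, -0.02])]

# Residual of the approximate solution: r = b - A x_approx
r = [b[i] - sum(A[i][j]*x_approx[j] for j in range(2)) for i in range(2)]
# Adjoint problem: A^T w = c
At = [[A[j][i] for j in range(2)] for i in range(2)]
w = solve2(At, c)

# Adjoint-weighted residual recovers the QoI error exactly (linear problem)
qoi_error_estimate = sum(wi*ri for wi, ri in zip(w, r))
qoi_error_true = sum(ci*(xe - xa) for ci, xe, xa in zip(c, x_exact, x_approx))
```

For linear problems the identity is exact; for the nonlinear and time-dependent problems listed above, linearization introduces higher-order remainder terms, which is where the cited analysis effort lies.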
Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update
NASA Technical Reports Server (NTRS)
1971-01-01
Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.
Elliptic systems and numerical transformations
NASA Technical Reports Server (NTRS)
Mastin, C. W.; Thompson, J. F.
1976-01-01
Properties of a transformation method, which was developed for solving fluid dynamic problems on general two dimensional regions, are discussed. These include construction error of the transformation and applications to mesh generation. An error and stability analysis for the numerical solution of a model parabolic problem is also presented.
Error Analysis System for Spacecraft Navigation Using the Global Positioning System (GPS)
NASA Technical Reports Server (NTRS)
Truong, S. H.; Hart, R. C.; Hartman, K. R.; Tomcsik, T. L.; Searl, J. E.; Bernstein, A.
1997-01-01
The Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) is currently developing improved space-navigation filtering algorithms to use the Global Positioning System (GPS) for autonomous real-time onboard orbit determination. In connection with a GPS technology demonstration on the Small Satellite Technology Initiative (SSTI)/Lewis spacecraft, FDD analysts and programmers have teamed with the GSFC Guidance, Navigation, and Control Branch to develop the GPS Enhanced Orbit Determination Experiment (GEODE) system. The GEODE system consists of a Kalman filter operating as a navigation tool for estimating the position, velocity, and additional states required to accurately navigate the orbiting Lewis spacecraft by using astrodynamic modeling and GPS measurements from the receiver. A parallel effort at the FDD is the development of a GPS Error Analysis System (GEAS) that will be used to analyze and improve navigation filtering algorithms during development phases and during in-flight calibration. For GEAS, the Kalman filter theory is extended to estimate the errors in position, velocity, and other error states of interest. The estimation of errors in physical variables at regular intervals will allow the time, cause, and effect of navigation system weaknesses to be identified. In addition, by modeling a sufficient set of navigation system errors, a system failure that causes an observed error anomaly can be traced and accounted for. The GEAS software is formulated using Object Oriented Design (OOD) techniques implemented in the C++ programming language on a Sun SPARC workstation. The Phase 1 of this effort is the development of a basic system to be used to evaluate navigation algorithms implemented in the GEODE system. This paper presents the GEAS mathematical methodology, systems and operations concepts, and software design and implementation. Results from the use of the basic system to evaluate
Libon, David J; Bondi, Mark W; Price, Catherine C; Lamar, Melissa; Eppig, Joel; Wambach, Denene M; Nieves, Christine; Delano-Wood, Lisa; Giovannetti, Tania; Lippa, Carol; Kabasakalian, Anahid; Cosentino, Stephanie; Swenson, Rod; Penney, Dana L
2011-09-01
Using cluster analysis Libon et al. (2010) found three verbal serial list-learning profiles involving delay memory test performance in patients with mild cognitive impairment (MCI). Amnesic MCI (aMCI) patients presented with low scores on delay free recall and recognition tests; mixed MCI (mxMCI) patients scored higher on recognition compared to delay free recall tests; and dysexecutive MCI (dMCI) patients generated relatively intact scores on both delay test conditions. The aim of the current research was to further characterize memory impairment in MCI by examining forgetting/savings, interference from a competing word list, intrusion errors/perseverations, intrusion word frequency, and recognition foils in these three statistically determined MCI groups compared to normal control (NC) participants. The aMCI patients exhibited little savings, generated more highly prototypic intrusion errors, and displayed indiscriminate responding to delayed recognition foils. The mxMCI patients exhibited higher saving scores, fewer and less prototypic intrusion errors, and selectively endorsed recognition foils from the interference list. dMCI patients also selectively endorsed recognition foils from the interference list but performed similarly compared to NC participants. These data suggest the existence of distinct memory impairments in MCI and caution against the routine use of a single memory test score to operationally define MCI. PMID:21880171
Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo
2016-01-01
The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385
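A single-point error ellipsoid of the kind described is commonly obtained by propagating range/azimuth/elevation standard deviations through the Jacobian of the spherical-to-Cartesian transform, C = J diag(σr², σaz², σel²) Jᵀ. A sketch under that standard assumption (not necessarily the paper's exact LRMS model):

```python
import math

def error_ellipsoid_cov(r, az, el, sr, saz, sel):
    """Cartesian covariance of a single scanned point, propagated from
    range (r), azimuth (az), elevation (el) standard deviations via the
    Jacobian J of x = r*cos(el)*cos(az), y = r*cos(el)*sin(az), z = r*sin(el):
    C = J * diag(sr^2, saz^2, sel^2) * J^T.
    Eigenvectors/eigenvalues of C give the error ellipsoid axes."""
    ce, se = math.cos(el), math.sin(el)
    ca, sa = math.cos(az), math.sin(az)
    J = [[ce*ca, -r*ce*sa, -r*se*ca],
         [ce*sa,  r*ce*ca, -r*se*sa],
         [se,     0.0,      r*ce]]
    S = [sr*sr, saz*saz, sel*sel]   # independent instrument errors assumed
    return [[sum(J[i][k]*S[k]*J[j][k] for k in range(3))
             for j in range(3)] for i in range(3)]

# 10 m range, boresight direction; 10 mm range SD, 1 and 2 mrad angle SDs
C = error_ellipsoid_cov(10.0, 0.0, 0.0, 0.01, 0.001, 0.002)
```

Note how the angular uncertainties scale with range (the ellipsoid elongates transversely at distance), which is exactly why diameter estimates of a cylinder degrade with standoff.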
Error Analysis and Measurement Uncertainty for a Fiber Grating Strain-Temperature Sensor
Tang, Jaw-Luen; Wang, Jian-Neng
2010-01-01
A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155T + 2.90 × 10⁻⁶ε and 3.59 × 10⁻⁵ε + 0.01887T, respectively. Using the estimation of expanded uncertainty at the 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor. PMID:22163567
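Dual-wavelength strain-temperature discrimination typically inverts a 2×2 sensitivity matrix relating the two Bragg wavelength shifts to ΔT and Δε. A sketch with hypothetical sensitivity coefficients (illustrative placeholders, not the paper's calibrated values):

```python
def solve_strain_temp(dl1, dl2, K):
    """Invert the 2x2 sensitivity system
       [dl1]   [K[0][0]  K[0][1]] [dT]
       [dl2] = [K[1][0]  K[1][1]] [de]
    for temperature change dT and strain change de."""
    det = K[0][0]*K[1][1] - K[0][1]*K[1][0]
    dT = (dl1*K[1][1] - K[0][1]*dl2) / det
    de = (K[0][0]*dl2 - dl1*K[1][0]) / det
    return dT, de

# Hypothetical sensitivities (pm/degC, pm/microstrain) for the two gratings;
# discrimination works because the rows are not proportional.
K = [[10.0, 1.2],
     [ 6.5, 1.0]]

# Synthesize shifts for dT = 5 degC, de = 20 microstrain, then recover them
dl1 = K[0][0]*5.0 + K[0][1]*20.0
dl2 = K[1][0]*5.0 + K[1][1]*20.0
dT, de = solve_strain_temp(dl1, dl2, K)
```

The maximum-error expressions quoted in the abstract follow from propagating the wavelength-measurement errors through this same inverse, so the conditioning of K (how far det is from zero) controls the achievable uncertainty.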
An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis
NASA Technical Reports Server (NTRS)
Wenger, David Paul
1991-01-01
The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
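The dominant temperature effect in ultrasound ranging enters through the speed of sound; with the standard ideal-gas approximation c(T) ≈ 331.3·sqrt(1 + T/273.15) m/s, the distance error from assuming the wrong air temperature can be sketched as follows (illustrative values, not the thesis's experimental numbers):

```python
import math

def speed_of_sound(T_celsius):
    """Approximate speed of sound in dry air (m/s), ideal-gas formula."""
    return 331.3 * math.sqrt(1.0 + T_celsius / 273.15)

def range_error(true_dist, T_true, T_assumed):
    """Distance error from converting time-of-flight with the wrong
    temperature: the time-of-flight is fixed by the true conditions."""
    tof = true_dist / speed_of_sound(T_true)
    return tof * speed_of_sound(T_assumed) - true_dist

# A 10 degC temperature error over a 1 m path: roughly -1.7 cm of bias
err = range_error(1.0, 30.0, 20.0)
```

The error is proportional to path length, which is why slant-range measurement accuracy in such a system degrades with distance unless the temperature is measured or compensated.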
NASA Astrophysics Data System (ADS)
Jones, Reese E.; Mandadapu, Kranthi K.
2012-04-01
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
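The Green-Kubo estimate is the time integral of a flux autocorrelation function. A minimal sketch using a synthetic exponential correlation, whose exact integral C0·tau makes the quadrature and truncation behaviour directly checkable (illustrative only; the paper's on-the-fly stationarity and error bounds are not reproduced here):

```python
import math

def green_kubo_integral(corr, dt):
    """Trapezoidal estimate of the Green-Kubo integral of a correlation
    function sampled at uniform spacing dt."""
    return dt * (0.5*corr[0] + sum(corr[1:-1]) + 0.5*corr[-1])

# Synthetic heat-flux autocorrelation C(t) = C0 * exp(-t/tau); its exact
# time integral is C0 * tau, so the estimate can be checked directly.
C0, tau, dt = 2.0, 1.5, 0.01
corr = [C0 * math.exp(-k*dt/tau) for k in range(5000)]  # out to ~33 tau

estimate = green_kubo_integral(corr, dt)  # close to C0*tau = 3.0
```

In practice the correlation is a noisy ensemble average rather than a clean exponential, and the integration cutoff must balance truncation bias against accumulated noise; that trade-off is precisely what the cited error analyses of Zwanzig-Ailawadi and Frenkel quantify.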
ERIC Educational Resources Information Center
Valdman, Albert
Errors in second language learning are viewed as evidence of the learner's hypotheses and strategies about the new data. Error observation and analysis are important to the formulation of theories about language learning and the preparation of teaching materials. Learning a second language proceeds by a series of approximative reorganizations…
ERIC Educational Resources Information Center
Yang, Jie Chi; Akahori, Kanji
1998-01-01
Describes development and evaluation of an error analysis procedure for a computer-assisted language learning program using natural language processing techniques. The program can be used for learning passive voice in Japanese on any World Wide Web browser. The program enables learners to type sentences freely, detects errors, and displays…
The measurement error analysis when a pitot probe is used in supersonic air flow
NASA Astrophysics Data System (ADS)
Zhang, XiWen; Hao, PengFei; Yao, ZhaoHui
2011-04-01
Pitot probes enable a simple and convenient way of measuring mean velocity in air flow. A contrastive numerical simulation of free supersonic airflow and of a pitot tube at different positions in supersonic air flow was performed using the Navier-Stokes equations, the ENN scheme with time-dependent boundary conditions (TDBC), and the Spalart-Allmaras turbulence model. Physical experimental results, including pitot pressure and shadowgraphs, are also presented, and the numerical results coincide with the experimental data. Analysis of the pitot probe's effect on the supersonic flow structure shows that when the probe is used to measure total pressure, it actually measures the total pressure behind the detached shock wave; nevertheless, the measured distribution of total pressure can still represent the real free jet flow, and similar features of the intersection and reflection of shock waves can be identified. The difference between the measured and actual values is smaller than 10%. When the pitot probe is used in the region L = 0-4D, the measurement is smaller than the real value owing to the increase in shock wave strength, and the difference becomes larger where the waves intersect. If the pitot probe is placed at L = 8D-10D, where the flow changes from supersonic to subsonic, its presence turns the originally supersonic flow region subsonic and causes larger measurement errors.
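The statement that the probe reads the total pressure behind its detached shock is quantified by the classical Rayleigh pitot formula of gas dynamics (a standard normal-shock relation, not specific to this study):

```python
def rayleigh_pitot_ratio(M, gamma=1.4):
    """Rayleigh pitot formula: ratio of the stagnation pressure measured
    behind the probe's detached (normal) shock to the freestream static
    pressure, for freestream Mach number M > 1."""
    a = ((gamma + 1.0)**2 * M*M
         / (4.0*gamma*M*M - 2.0*(gamma - 1.0)))**(gamma / (gamma - 1.0))
    b = (1.0 - gamma + 2.0*gamma*M*M) / (gamma + 1.0)
    return a * b

ratio = rayleigh_pitot_ratio(2.0)   # about 5.64 for Mach 2 air
```

Because the shock loss is a known function of Mach number, the measured pitot pressure can be corrected back to freestream conditions, which is why the measured total-pressure distribution still represents the free jet to within the quoted 10%.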
Error analysis and modeling for the time grating length measurement system
NASA Astrophysics Data System (ADS)
Gao, Zhonghua; Fen, Jiqin; Zheng, Fangyan; Chen, Ziran; Peng, Donglin; Liu, Xiaokang
2013-10-01
Through analyzing the errors of a length measurement system whose principal measuring component is a linear time grating, we found that studying the error law is very important for reducing system errors and optimizing the system structure. The main error sources in the length measurement system, including the time grating sensor, the slideway, and the cantilever, were studied, and the total errors were obtained. We then established a mathematical model of the errors of the length measurement system and used it to calibrate the system errors. We also developed a set of experimental devices in which a laser interferometer was used to calibrate the length measurement system errors. After error calibration, the accuracy of the measurement system was improved from the original 36 μm/m to 14 μm/m. The agreement between the experimental results and the simulation results shows that the error model is suitable for the length measuring system.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated
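The Monte Carlo propagation step for a single grid point can be sketched as follows. The PET function here is a hypothetical scalar stand-in, not the paper's model, and the interpolation errors are treated as independent Gaussians sized by the kriging SDs quoted in the abstract (the study also accounted for correlations among the three inputs):

```python
import math
import random
import statistics

def pet_proxy(temp_c, rh_pct, wind_ms):
    """Hypothetical stand-in for a PET model (NOT the paper's formulation):
    increases with temperature and wind, decreases with humidity."""
    return max(0.0, 0.2*temp_c + 0.05*wind_ms*(100.0 - rh_pct)/100.0)

def mc_uncertainty(temp, rh, wind, sd_t=2.6, sd_rh=8.7, sd_w=0.38,
                   n=10000, seed=1):
    """Monte Carlo propagation of interpolation errors (kriging SDs from
    the abstract, here treated as independent) through the proxy model.
    Returns the output mean and coefficient of variation."""
    rng = random.Random(seed)
    outs = [pet_proxy(temp + rng.gauss(0.0, sd_t),
                      min(100.0, max(0.0, rh + rng.gauss(0.0, sd_rh))),
                      max(0.0, wind + rng.gauss(0.0, sd_w)))
            for _ in range(n)]
    mean = statistics.fmean(outs)
    return mean, statistics.pstdev(outs) / mean

mean, cv = mc_uncertainty(15.0, 50.0, 3.0)
```

Repeating this at every grid point, with the local kriged means and SDs, yields exactly the kind of CV map the abstract describes.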
NASA Astrophysics Data System (ADS)
Korteland, Suze-Anne; Heimovaara, Timo
2015-03-01
Electrical resistivity tomography (ERT) is a geophysical technique that can be used to obtain three-dimensional images of the bulk electrical conductivity of the subsurface. Because the electrical conductivity is strongly related to properties of the subsurface and the flow of water it has become a valuable tool for visualization in many hydrogeological and environmental applications. In recent years, ERT is increasingly being used for quantitative characterization, which requires more detailed prior information than a conventional geophysical inversion for qualitative purposes. In addition, the careful interpretation of measurement and modelling errors is critical if ERT measurements are to be used in a quantitative way. This paper explores the quantitative determination of the electrical conductivity distribution of a cylindrical object placed in a water bath in a laboratory-scale tank. Because of the sharp conductivity contrast between the object and the water, a standard geophysical inversion using a smoothness constraint could not reproduce this target accurately. Better results were obtained by using the ERT measurements to constrain a model describing the geometry of the system. The posterior probability distributions of the parameters describing the geometry were estimated with the Markov chain Monte Carlo method DREAM(ZS). Using the ERT measurements this way, accurate estimates of the parameters could be obtained. The information quality of the measurements was assessed by a detailed analysis of the errors. Even for the uncomplicated laboratory setup used in this paper, errors in the modelling of the shape and position of the electrodes and the shape of the domain could be identified. The results indicate that the ERT measurements have a high information content which can be accessed by the inclusion of prior information and the consideration of measurement and modelling errors.
Numerical analysis of sandstone composition, provenance, and paleogeography
Smosma, R.; Bruner, K.R.; Burns, A.
1999-09-01
Cretaceous deltaic sandstones of the National Petroleum Reserve in Alaska exhibit an extreme variability in their mineral makeup. A series of numerical techniques, however, provides some order to the petrographic characteristics of these complex rocks. Ten mineral constituents occur in the sandstones, including quartz, chert, feldspar, mica, and organic matter, plus rock fragments of volcanics, carbonates, shale, phyllite, and schist. A mixing coefficient quantifies the degree of heterogeneity in each sample. Hierarchical cluster analysis then groups sandstones on the basis of similarities among all ten mineral components--in the Alaskan example, six groupings characterized mainly by the different rock fragments. Multidimensional scaling shows how the clusters relate to one another and arranges them along compositional gradients--two trends in Alaska based on varying proportions of metamorphic/volcanic and shale/carbonate rock fragments. The resulting sandstone clusters and petrographic gradients can be mapped across the study area and compared with the stratigraphic section. This study confirms the presence of three different source areas that provided diverse sediment to the Cretaceous deltas, as well as the general transport directions and distances. In addition, the sand composition is shown to have changed over time, probably related to erosional unroofing in the source areas. This combination of multivariate-analysis techniques proves to be a powerful tool, revealing subtle spatial and temporal relationships among the sandstones and allowing one to enhance provenance and paleogeographic conclusions made from compositional data.
A hybrid neurocomputing/numerical strategy for nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Szewczyk, Z. Peter; Noor, Ahmed K.
1995-01-01
A hybrid neurocomputing/numerical strategy is presented for geometrically nonlinear analysis of structures. The strategy combines model-free data processing capabilities of computational neural networks with a Pade approximants-based perturbation technique to predict partial information about the nonlinear response of structures. In the hybrid strategy, multilayer feedforward neural networks are used to extend the validity of solutions by using training samples produced by Pade approximations to the Taylor series expansion of the response function. The range of validity of the training samples is taken to be the radius of convergence of Pade approximants and is estimated by setting a tolerance on the diverging approximants. The norm of residual vector of unbalanced forces in a given element is used as a measure to assess the quality of network predictions. To further increase the accuracy and the range of network predictions, additional training data are generated by either applying linear regression to weight matrices or expanding the training data by using predicted coefficients in a Taylor series. The effectiveness of the hybrid strategy is assessed by performing large-deflection analysis of a doubly-curved composite panel with a circular cutout, and postbuckling analyses of stiffened composite panels subjected to an in-plane edge shear load. In all the problems considered, the hybrid strategy is used to predict selective information about the structural response, namely the total strain energy and the maximum displacement components only.
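The Padé step used to extend the range of the Taylor training data can be illustrated with the [1/1] approximant of exp(x), built directly from its Taylor coefficients (a generic illustration, not the structural response functions of the paper):

```python
import math

def pade_1_1(c):
    """[1/1] Pade approximant from Taylor coefficients c = [c0, c1, c2]:
    f(x) ~ (p0 + p1*x) / (1 + q1*x), matching the series through order 2."""
    q1 = -c[2] / c[1]
    p0 = c[0]
    p1 = c[1] + c[0]*q1
    return p0, p1, q1

# Taylor coefficients of exp(x): 1, 1, 1/2  ->  (1 + x/2) / (1 - x/2)
p0, p1, q1 = pade_1_1([1.0, 1.0, 0.5])

x = -2.0
pade_val = (p0 + p1*x) / (1.0 + q1*x)   # 0.0
taylor2 = 1.0 + x + 0.5*x*x             # 1.0
# exp(-2) ~ 0.135: the Pade value is far closer than the same-order
# Taylor polynomial, which is the rationale for Pade-generated samples.
```

The poles of the denominator (here x = 2) bound the useful range, which parallels the paper's use of the diverging approximants to estimate the radius of validity of the training samples.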
A stable and efficient numerical algorithm for unconfined aquifer analysis.
Keating, Elizabeth; Zyvoloski, George
2009-01-01
The nonlinearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well. PMID:19341374
A stable and efficient numerical algorithm for unconfined aquifer analysis
Keating, Elizabeth; Zyvoloski, George
2008-01-01
The non-linearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem, as well.
Error analysis of motion correction method for laser scanning of moving objects
NASA Astrophysics Data System (ADS)
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has driven the development of new methods capable of generating correct 3D geometry of moving objects. The literature documents only a few methods that address the problem of object motion during scanning, each relying on its own models or sensors, and studies on error modelling or error analysis for any of these motion correction methods are lacking. In this paper, we develop the error budget and present the analysis of one such motion correction method. This method assumes that position and orientation information for the moving object is available, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to correct the laser data, resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships moving or parked at sea, and in scanning objects such as hot air balloons or aerostats. Notably, the other motion correction methods described in the literature cannot be applied to the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the motion correction method, as well as a detailed account of the behavior and variation of the error due to different sensor components, both alone and in combination with each other. The analysis can be used to gain insight into the optimal utilization of available components for achieving the best results.
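The core of such a motion correction is a frame transformation: each laser return, time-stamped in the world frame, is mapped into the moving object's body frame using the pose reported by the POS at the acquisition time. The following 2-D sketch illustrates the idea under that assumption; the function and parameter names are hypothetical and not from the paper.

```python
import math

def correct_point(scan_pt, obj_pos, obj_yaw):
    """Map a laser return (world frame) into the moving object's body frame,
    given the object's position (x, y) and heading (rad) at the acquisition
    time as reported by an on-board POS. 2-D sketch; names illustrative."""
    dx = scan_pt[0] - obj_pos[0]
    dy = scan_pt[1] - obj_pos[1]
    c, s = math.cos(obj_yaw), math.sin(obj_yaw)
    # inverse rotation: world frame -> body frame
    return (c * dx + s * dy, -s * dx + c * dy)
```

Errors in the POS position and orientation propagate directly through this transformation, which is why the paper's error budget is dominated by the accuracy of the pose sensors.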
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species and then applying sensitivity analysis to remove further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions, and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to reproduce the results of the detailed mechanism well in perfectly stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
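The error-propagation step of DRGEP can be summarized as follows: the overall interaction coefficient between a target species and any other species is the maximum, over all graph paths connecting them, of the product of the direct interaction coefficients along the path. Since each coefficient lies in [0, 1], this is naturally computed with a Dijkstra-style max-product search, sketched below; the data layout is an illustrative assumption, not the authors' implementation.

```python
import heapq

def drgep_coefficients(graph, target):
    """Overall interaction coefficients R[target -> B]: the maximum over all
    paths from `target` to each species B of the product of direct interaction
    coefficients along the path (the DRGEP error-propagation rule).
    graph[A][B] holds the direct coefficient r_AB in [0, 1].
    Dijkstra-style max-product search; illustrative sketch only."""
    R = {target: 1.0}
    heap = [(-1.0, target)]          # negate so heapq pops the largest first
    while heap:
        neg_r, a = heapq.heappop(heap)
        r_a = -neg_r
        if r_a < R.get(a, 0.0):      # stale heap entry
            continue
        for b, r_ab in graph.get(a, {}).items():
            cand = r_a * r_ab
            if cand > R.get(b, 0.0):
                R[b] = cand
                heapq.heappush(heap, (-cand, b))
    return R
```

Species whose coefficient falls below a chosen threshold are candidates for removal; in DRGEPSA, the surviving borderline species are then screened by sensitivity analysis.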
NASA Astrophysics Data System (ADS)
Jones, Marvin Quenten, Jr.
The motion and behavior of quantum processes can be described by the Schrödinger equation using the wave function, Ψ(x,t). The use of the Schrödinger equation to study quantum phenomena is known as quantum mechanics, just as classical mechanics is the tool for studying classical physics. This research emphasizes numerical techniques, including finite-difference schemes such as the leapfrog method and the Crank-Nicolson scheme, the fast Fourier transform (spectral method), and second quantization, to solve and analyze the Schrödinger equation for the infinite square well problem, the free particle with periodic boundary conditions, the barrier problem, tight-binding Hamiltonians, and a potential wall problem. We discuss these techniques and the problems devised to test how the different techniques support both physical and numerical conclusions, summarized in tabular form. We examined both numerical stability and quantum stability (conservation of energy, probability, momentum, etc.). We found that the Crank-Nicolson scheme is unconditionally stable and conserves probability (it is unitary) and momentum, though it is dissipative with respect to energy. The time-independent problems conserved energy and momentum and were unitary, which is of interest, but we found that when time dependence was introduced, quantum stability (i.e., conservation of mass, momentum, etc.) was not implied by numerical stability. Hence, we observed schemes that were numerically stable but not quantum stable, as well as schemes that were quantum stable but not numerically stable for all time t. We also observed that second quantization removed the stability issues, as the problem was transformed into a discrete problem. Moreover, all quantum information is conserved in second quantization. This method, however, does not work universally for all problems.
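The probability conservation attributed to Crank-Nicolson above follows from its Cayley-transform structure: (I + iX)ψⁿ⁺¹ = (I − iX)ψⁿ with X Hermitian is exactly unitary. A minimal sketch for the 1-D free Schrödinger equation with hard walls (ħ = m = 1), using a complex Thomas algorithm for the tridiagonal solve; grid parameters are illustrative.

```python
def crank_nicolson_step(psi, dx, dt):
    """One Crank-Nicolson step of the free 1-D Schrödinger equation
    (hbar = m = 1) with psi = 0 at both walls (infinite square well).
    The scheme is unitary, so the discrete norm is conserved. Sketch only."""
    n = len(psi)
    a = 1j * dt / (4 * dx * dx)
    # Right-hand side: (I - i dt H / 2) psi, with H = -(1/2 dx^2) tridiag(1,-2,1)
    rhs = [0j] * n
    for k in range(n):
        left = psi[k - 1] if k > 0 else 0j
        right = psi[k + 1] if k < n - 1 else 0j
        rhs[k] = (1 - 2 * a) * psi[k] + a * (left + right)
    # Thomas algorithm for A x = rhs, A = tridiag(-a, 1 + 2a, -a)
    c = [0j] * n   # modified superdiagonal
    d = [0j] * n   # modified right-hand side
    c[0] = -a / (1 + 2 * a)
    d[0] = rhs[0] / (1 + 2 * a)
    for k in range(1, n):
        denom = (1 + 2 * a) + a * c[k - 1]
        c[k] = -a / denom
        d[k] = (rhs[k] + a * d[k - 1]) / denom
    x = [0j] * n
    x[-1] = d[-1]
    for k in range(n - 2, -1, -1):
        x[k] = d[k] - c[k] * x[k + 1]
    return x
```

Checking the discrete norm before and after a step verifies the unitarity claim numerically, which is exactly the kind of quantum-stability test the abstract describes.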
NASA Technical Reports Server (NTRS)
Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.
1999-01-01
Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided, which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.
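For context, the standard DIAL retrieval on which such error analyses are built derives the absorber number density in a range cell from the ratio of on-line and off-line backscatter powers at the cell boundaries. A minimal sketch of that textbook formula (not the authors' processing code) follows; parameter names are illustrative.

```python
import math

def dial_number_density(p_on_1, p_on_2, p_off_1, p_off_2, dsigma, dr):
    """Textbook DIAL retrieval of absorber number density in the range cell
    [r1, r2] from on-line and off-line backscatter powers at r1 and r2.
    dsigma: on-line minus off-line absorption cross section (m^2).
    dr: cell length r2 - r1 (m). Illustrative sketch only."""
    return math.log((p_on_1 * p_off_2) / (p_on_2 * p_off_1)) / (2 * dsigma * dr)
```

Because the retrieval takes a logarithm of power ratios, errors in the background level, line parameters, and laser energy enter the result directly, which motivates the correction terms analyzed in the paper.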