Minimizing Errors in Numerical Analysis of Chemical Data.
ERIC Educational Resources Information Center
Rusling, James F.
1988-01-01
Investigates minimizing errors in computational methods commonly used in chemistry. Provides a series of examples illustrating the propagation of errors, finite difference methods, and nonlinear regression analysis. Includes illustrations to explain these concepts. (MVL)
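The propagation-of-errors rule alluded to above can be sketched in a few lines of Python; the measured values and uncertainties below are illustrative, not taken from the article:

```python
import math

# First-order (Gaussian) propagation of independent errors through f = x * y:
# relative uncertainties add in quadrature: (sf/f)^2 = (sx/x)^2 + (sy/y)^2.
x, sx = 10.0, 0.1   # measured value and its standard uncertainty (assumed data)
y, sy = 5.0, 0.05

f = x * y
sf = f * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
```

With both relative uncertainties at 1%, the product carries a relative uncertainty of about 1.4%.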
Numerical errors in the real-height analysis of ionograms at high latitudes
Titheridge, J.E.
1987-10-01
A simple dual-range integration method for maintaining accuracy in the analysis of real-height ionograms at high latitudes up to a dip angle of 89 deg is presented. Numerical errors are reduced to zero for the start and valley calculations at all dip angles up to 89.9 deg. It is noted that the extreme errors which occur at high latitudes can alternatively be reduced by using a decreased value for the dip angle. An expression for the optimum dip angle for different integration orders and frequency intervals is given. 17 references.
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Input data, as well as the results of elementary operations, have to be represented by machine numbers, the subset of the real numbers used by the arithmetic unit of today's computers. Generally this introduces rounding errors. This kind of numerical error can in principle be avoided by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results of more complex operations, like square roots or trigonometric functions, can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of calculated results and the stability of simple algorithms.
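The rounding errors described here are easy to observe directly; a minimal Python illustration (the specific values assume IEEE-754 double precision, the machine-number format of essentially all current hardware):

```python
import math

# 0.1 and 0.2 have no exact binary representation; each literal is rounded
# to the nearest machine number, so the sum misses 0.3 slightly.
assert 0.1 + 0.2 != 0.3

# Rounding errors accumulate in a naive sum; math.fsum tracks the lost
# low-order bits and applies only a single final rounding.
values = [0.1] * 10
naive_err = abs(sum(values) - 1.0)              # nonzero: errors accumulate
compensated_err = abs(math.fsum(values) - 1.0)  # zero for this example
```

This mirrors the chapter's point: exact (compensated or arbitrary-precision) arithmetic removes the error, at the cost of extra work per operation.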
Numerical errors in the presence of steep topography: analysis and alternatives
Lundquist, K A; Chow, F K; Lundquist, J K
2010-04-15
It is well known in computational fluid dynamics that grid quality affects the accuracy of numerical solutions. When assessing grid quality, properties such as aspect ratio, orthogonality of coordinate surfaces, and cell volume are considered. Mesoscale atmospheric models generally use terrain-following coordinates with large aspect ratios near the surface. As high-resolution numerical simulations are increasingly used to study topographically forced flows, a high degree of non-orthogonality is introduced, especially in the vicinity of steep terrain slopes. Numerical errors associated with the use of terrain-following coordinates can adversely affect the accuracy of the solution in steep terrain. Inaccuracies from the coordinate transformation are present in each spatially discretized term of the Navier-Stokes equations, as well as in the conservation equations for scalars. In particular, errors in the computation of horizontal pressure gradients, diffusion, and horizontal advection terms have been noted in the presence of sloping coordinate surfaces and steep topography. In this work we study the effects of these spatial discretization errors on the flow solution for three canonical cases: scalar advection over a mountain, an atmosphere at rest over a hill, and forced advection over a hill. This study is completed using the Weather Research and Forecasting (WRF) model. Simulations with terrain-following coordinates are compared to those using a flat coordinate, where terrain is represented with the immersed boundary method. The immersed boundary method is used as a tool which allows us to eliminate the terrain-following coordinate transformation and quantify numerical errors through a direct comparison of the two solutions. Additionally, the effects of related issues such as the steepness of terrain slope and grid aspect ratio are studied in an effort to gain an understanding of numerical domains where terrain-following coordinates can successfully be used and
Some Surprising Errors in Numerical Differentiation
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2012-01-01
Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
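The pattern this abstract refers to — truncation error shrinking as the step h decreases, until rounding error takes over and the approximation gets worse again — can be reproduced with a short experiment; the step sizes below are illustrative:

```python
import math

def forward_diff(f, x, h):
    """Newton difference-quotient approximation to f'(x)."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # true derivative of sin at x

# Truncation error ~ h/2, but rounding error ~ eps/h grows as h -> 0,
# so a moderate h beats both a large one and a tiny one.
errs = {h: abs(forward_diff(math.sin, x, h) - exact)
        for h in (1e-1, 1e-8, 1e-15)}
```

For double precision the sweet spot sits near h of roughly the square root of machine epsilon, about 1e-8, which is the "surprising" behavior students first meet here.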
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
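INTLAB is a MATLAB toolbox; as a language-neutral sketch of the same idea, here is a minimal, hypothetical Python interval type. Production interval libraries additionally use directed (outward) rounding on every operation, which this sketch omits:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: take the min/max over all endpoint combinations.
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

# x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2; the enclosure of x*y is automatic,
# with no hand-derived propagation formula required.
x = Interval(1.9, 2.1)
y = Interval(2.8, 3.2)
z = x * y
```

The enclosure [5.32, 6.72] is guaranteed to contain every possible product, which is exactly the "much less effort" advantage the abstract describes for complicated formulas.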
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of a numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model for estimating the cutting error is proposed that computes the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. To verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of a workpiece during the machining process.
Correcting numerical integration errors caused by small aliasing errors
Smallwood, D.O.
1997-11-01
Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
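The sensitivity of integrated waveforms to small acceleration errors is easy to demonstrate. The following sketch uses a constant bias rather than the near-Nyquist aliasing error analyzed in the paper, but it shows the same amplification mechanism through double integration:

```python
def trapezoid_integrate(samples, dt):
    """Cumulative trapezoidal integration of evenly sampled data."""
    out = [0.0]
    for a, b in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

dt = 0.001
n = 10_001                              # 10 s of data at 1 kHz
accel = [0.0] * n                       # true acceleration: identically zero
biased = [a + 1e-3 for a in accel]      # tiny 1e-3 m/s^2 error per sample

velocity = trapezoid_integrate(biased, dt)
displacement = trapezoid_integrate(velocity, dt)
# A bias that is invisible in the acceleration grows linearly in velocity
# and quadratically in displacement: ~0.05 m of spurious drift after 10 s.
```

The choice of integration method hardly matters here, consistent with the paper's observation that such errors are essentially independent of the numerical integration scheme.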
Error Analysis of Quadrature Rules. Classroom Notes
ERIC Educational Resources Information Center
Glaister, P.
2004-01-01
Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
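Simpson's rule has a truncation error of order h^4, so doubling the number of subintervals should shrink the error by roughly a factor of 16. A short check (the integrand and interval are illustrative, not from the article):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

exact = 2.0  # integral of sin over [0, pi]
e1 = abs(simpson(math.sin, 0.0, math.pi, 8) - exact)
e2 = abs(simpson(math.sin, 0.0, math.pi, 16) - exact)
ratio = e1 / e2  # close to 16, consistent with O(h^4) truncation error
```

Observing the error ratio empirically like this is a standard classroom companion to the truncation-error analysis the article advocates.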
ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.
Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.
2004-07-26
We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations, we introduce a composition law, which allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones, as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.
Errata: Papers in Error Analysis.
ERIC Educational Resources Information Center
Svartvik, Jan, Ed.
Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…
NASA Technical Reports Server (NTRS)
Kia, T.; Longuski, J. M.
1984-01-01
Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.
Uncertainty quantification and error analysis
Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Error Estimates for Numerical Integration Rules
ERIC Educational Resources Information Center
Mercer, Peter R.
2005-01-01
The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.
Error estimates of numerical solutions for a cyclic plasticity problem
NASA Astrophysics Data System (ADS)
Han, W.
A cyclic plasticity problem is numerically analyzed in [13], where a sub-optimal order error estimate is shown for a spatially discrete scheme. In this note, we prove an optimal order error estimate for the spatially discrete scheme under the same solution regularity condition. We also derive an error estimate for a fully discrete scheme for solving the plasticity problem.
Error Analysis in Mathematics Education.
ERIC Educational Resources Information Center
Rittner, Max
1982-01-01
The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
Analysis of discretization errors in LES
NASA Technical Reports Server (NTRS)
Ghosal, Sandip
1995-01-01
All numerical simulations of turbulence (DNS or LES) involve some discretization errors. The integrity of such simulations therefore depends on our ability to quantify and control such errors. In the classical literature on analysis of errors in partial differential equations, one typically studies simple linear equations (such as the wave equation or Laplace's equation). The qualitative insight gained from studying such simple situations is then used to design numerical methods for more complex problems such as the Navier-Stokes equations. Though such an approach may seem reasonable as a first approximation, it should be recognized that strongly nonlinear problems, such as turbulence, have a feature that is absent in linear problems. This feature is the simultaneous presence of a continuum of space and time scales. Thus, in an analysis of errors in the one-dimensional wave equation, one may, without loss of generality, rescale the equations so that the dependent variable is always of order unity. This is not possible in the turbulence problem since the amplitudes of the Fourier modes of the velocity field have a continuous distribution. The objective of the present research is to provide some quantitative measures of numerical errors in such situations. Though the focus of this work is LES, the methods introduced here can be just as easily applied to DNS. Errors due to discretization of the time variable are neglected for the purpose of this analysis.
Wood, William Monford
2015-02-23
A systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities is presented, with suggestions for improving the methodology of extracting quantitative information from radiographed objects.
An Ensemble-type Approach to Numerical Error Estimation
NASA Astrophysics Data System (ADS)
Ackmann, J.; Marotzke, J.; Korn, P.
2015-12-01
The estimation of the numerical error in a specific physical quantity of interest (goal) is of key importance in geophysical modelling. Towards this aim, we have formulated an algorithm that combines elements of the classical dual-weighted error estimation with stochastic methods. Our algorithm is based on the Dual-weighted Residual method, in which the residual of the model solution is weighted by the adjoint solution, i.e. by the sensitivities of the goal towards the residual. We extend this method by modelling the residual as a stochastic process. Parameterizing the residual by a stochastic process was motivated by the Mori-Zwanzig formalism from statistical mechanics. Here, we apply our approach to two-dimensional shallow-water flows with lateral boundaries and an eddy viscosity parameterization. We employ different parameters of the stochastic process for different dynamical regimes in different regions. We find that for each region the temporal fluctuations of local truncation errors (discrete residuals) can be interpreted stochastically by a Laplace-distributed random variable. Assuming that these random variables are fully correlated in time leads to a stochastic process that parameterizes a problem-dependent temporal evolution of local truncation errors. The parameters of this stochastic process are estimated from short, near-initial, high-resolution simulations. Under the assumption that the estimated parameters can be extrapolated to the full time window of the error estimation, the estimated stochastic process is proven to be a valid surrogate for the local truncation errors. Replacing the local truncation errors by a stochastic process puts our method within the class of ensemble methods and makes the resulting error estimator a random variable. The result of our error estimator is thus a confidence interval on the error in the respective goal. We will show error estimates for two 2D ocean-type experiments and provide an outlook for the 3D case.
A Classroom Note on: Building on Errors in Numerical Integration
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2011-01-01
In both baseball and mathematics education, the conventional wisdom is to avoid errors at all costs. That advice might be on target in baseball, but in mathematics, it is not always the best strategy. Sometimes an analysis of errors provides much deeper insights into mathematical ideas and, rather than something to eschew, certain types of errors…
Human Error: A Concept Analysis
NASA Technical Reports Server (NTRS)
Hansen, Frederick D.
2007-01-01
Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.
Antenna trajectory error analysis in backprojection-based SAR images
NASA Astrophysics Data System (ADS)
Wang, Ling; Yazıcı, Birsen; Yanik, H. Cagri
2014-06-01
We present an analysis of the positioning errors in Backprojection (BP)-based Synthetic Aperture Radar (SAR) images due to antenna trajectory errors for a monostatic SAR traversing a straight linear trajectory. Our analysis is developed using microlocal analysis, which can provide an explicit quantitative relationship between the trajectory error and the positioning error in BP-based SAR images. The analysis is applicable to arbitrary trajectory errors in the antenna and can be extended to arbitrary imaging geometries. We present numerical simulations to demonstrate our analysis.
Orbit IMU alignment: Error analysis
NASA Technical Reports Server (NTRS)
Corson, R. W.
1980-01-01
A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
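The Monte Carlo reasoning behind a probability bound like the 99.7% figure can be sketched as follows, assuming independent Gaussian per-axis errors with the quoted 68-arc-second standard deviation (a simplification of the actual hardware/software simulation):

```python
import math
import random

random.seed(42)
SIGMA = 68.0      # per-axis alignment error std (arc seconds), from the study
BOUND = 258.0     # magnitude bound on the total alignment error (arc seconds)
N = 100_000       # Monte Carlo trials

within = 0
for _ in range(N):
    # Draw one 3-axis error sample and test its magnitude against the bound.
    ex, ey, ez = (random.gauss(0.0, SIGMA) for _ in range(3))
    if math.sqrt(ex * ex + ey * ey + ez * ez) < BOUND:
        within += 1

fraction = within / N  # close to the quoted 99.7% probability
```

The same sampling machinery, driven by models of the trackers and alignment software instead of plain Gaussians, yields the per-axis standard deviation reported above.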
Error Analysis and Remedial Teaching.
ERIC Educational Resources Information Center
Corder, S. Pit
The purpose of this paper is to analyze the role of error analysis in specifying and planning remedial treatment in second language learning. Part 1 discusses situations that demand remedial action. This is a quantitative assessment that requires measurement of the varying degrees of disparity between the learner's knowledge and the demands of the…
Having Fun with Error Analysis
ERIC Educational Resources Information Center
Siegel, Peter
2007-01-01
We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…
Measurement Error and Equating Error in Power Analysis
ERIC Educational Resources Information Center
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
Condition and Error Estimates in Numerical Matrix Computations
Konstantinov, M. M.; Petkov, P. H.
2008-10-30
This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of results computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.
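A small self-contained example of the sensitivity being discussed: for an ill-conditioned system, a tiny perturbation of the data moves the computed solution by a large amount. The 2x2 matrix below is illustrative, not drawn from the paper:

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system Ax = b by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Ill-conditioned matrix: nearly parallel rows, so det is tiny (1e-4).
x1 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)  # exact solution (1, 1)
x2 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002)  # b perturbed by 1e-4

# The 1e-4 change in b moves the solution by O(1): from (1, 1) to (0, 2).
shift = abs(x2[0] - x1[0])
```

The ratio of solution change to data change here is on the order of the condition number, which is the quantity the sensitivity estimates in the paper bound.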
Numerical study of an error model for a strap-down INS
NASA Astrophysics Data System (ADS)
Grigorie, T. L.; Sandu, D. G.; Corcau, C. L.
2016-10-01
The paper presents a numerical study related to a mathematical error model developed for a strap-down inertial navigation system. The study aims to validate the error model by using Matlab/Simulink software models implementing the inertial navigator and the error model mathematics. To generate the inputs for the evaluation, Matlab/Simulink software models of the inertial sensors are used. The sensor models were developed based on the IEEE equivalent models for inertial sensors and on analysis of the data sheets of real inertial sensors. The paper successively presents the inertial navigation equations (attitude, position, and speed), the mathematics of the inertial navigator error model, the software implementations, and the numerical evaluation results.
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.
1987-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
GP-B error modeling and analysis
NASA Technical Reports Server (NTRS)
Hung, J. C.
1982-01-01
Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.; ,
1985-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.
Error Analysis and the EFL Classroom Teaching
ERIC Educational Resources Information Center
Xie, Fang; Jiang, Xue-mei
2007-01-01
This paper makes a study of error analysis and its implementation in the EFL (English as Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis), the various reasons causing errors are comprehensively explored. The author proposes that teachers should employ…
Error Analysis in Mathematics. Technical Report #1012
ERIC Educational Resources Information Center
Lai, Cheng-Fei
2012-01-01
Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…
Analysis and classification of human error
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Rouse, S. H.
1983-01-01
The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.
Error analysis using organizational simulation.
Fridsma, D. B.
2000-01-01
Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885
ISMP Medication Error Report Analysis.
2013-10-01
These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.
ISMP Medication Error Report Analysis.
2014-01-01
ISMP Medication Error Report Analysis.
2013-05-01
ISMP Medication Error Report Analysis.
2013-12-01
ISMP Medication Error Report Analysis.
2013-11-01
ISMP Medication Error Report Analysis.
2013-04-01
ISMP Medication Error Report Analysis.
2013-06-01
ISMP Medication Error Report Analysis.
2013-01-01
ISMP Medication Error Report Analysis.
2013-02-01
ISMP Medication Error Report Analysis.
2013-03-01
ISMP Medication Error Report Analysis.
2013-09-01
ISMP Medication Error Report Analysis.
2013-07-01
Error Analysis: Past, Present, and Future
ERIC Educational Resources Information Center
McCloskey, George
2017-01-01
This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…
GP-B error modeling and analysis
NASA Technical Reports Server (NTRS)
1984-01-01
The analysis and modeling for the Gravity Probe B (GP-B) experiment is reported. The finite-wordlength-induced errors in the Kalman filtering computation were refined. Errors in the crude result were corrected, improved derivation steps were taken, and better justifications were given. The errors associated with the suppression of the 1/f noise were analyzed by rolling the spacecraft and then performing a derolling operation by computation.
Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei
2010-01-01
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
Symbolic Error Analysis and Robot Planning,
1982-09-01
A.I. Memo No. 685, September 1982, Massachusetts Institute of Technology, Artificial Intelligence Laboratory. Symbolic Error Analysis and Robot Planning, Rodney A. Brooks. Abstract: A program to control a robot manipulator ... a human robot programmer. Acknowledgements: This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology.
Phonological error analysis, development and empirical evaluation.
Roeltgen, D P
1992-08-01
A method of error analysis, designed to examine phonological and nonphonological reading and spelling processes, was developed from preliminary studies and theoretical background, including a linguistic model and the relationships between articulatory features of phonemes. The usefulness of this method as an assessment tool for phonological ability was tested on a group of normal subjects. The results from the error analysis helped clarify similarities and differences in phonological performance among the subjects and helped delineate differences between phonological performance in spelling (oral and written) and reading within the group of subjects. These results support the usefulness of this method of error analysis in assessing phonological ability. Also, these results support the position that phonological approximation of responses is an important diagnostic feature and merely cataloging errors as phonologically accurate or inaccurate is inadequate for assessing phonological ability.
Empirical Error Analysis of GPS RO Atmospheric Profiles
NASA Astrophysics Data System (ADS)
Scherllin-Pirscher, B.; Steiner, A. K.; Foelsche, U.; Kirchengast, G.; Kuo, Y.
2010-12-01
In the upper troposphere and lower stratosphere (UTLS) region the radio occultation (RO) technique provides accurate profiles of atmospheric parameters. These profiles can be used in operational meteorology (i.e., numerical weather prediction), atmospheric and climate research. We present results of an empirical error analysis of GPS RO data retrieved at UCAR and at WEGC and compare data characteristics of CHAMP, GRACE-A, and Formosat-3/COSMIC. Retrieved atmospheric profiles of bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature are compared to reference profiles extracted from ECMWF analysis fields. This statistical error characterization yields a combined (RO observational plus ECMWF model) error. We restrict our analysis to the years 2007 to 2009 due to known ECMWF deficiencies prior to 2007 (e.g., deficiencies in the representation of the austral polar vortex or the weak representation of tropopause height variability). The GPS RO observational error is determined by subtracting the estimated ECMWF error from the combined error in terms of variances. Our results indicate that the estimated ECMWF error and the GPS RO observational error are approximately of the same order of magnitude. Differences between different satellites are small below 35 km. The GPS RO observational error features latitudinal and seasonal variations, which are most pronounced at stratospheric altitudes at high latitudes. We present simplified models for the observational error, which depend on a few parameters only (Steiner and Kirchengast, JGR 110, D15307, 2005). These global error models are derived from fitting simple analytical functions to the GPS RO observational error. From the lower troposphere up to the tropopause, the model error decreases closely proportional to an inverse height law. Within a core "tropopause region" of the upper troposphere/lower stratosphere the model error is constant and above this region it increases exponentially with
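The simplified observational-error models described above (an inverse height law below the tropopause, a constant core region, and exponential growth above it) can be sketched as a piecewise function. The parameter values below (`s0`, the region bounds, and the scale height `H`) are illustrative placeholders, not the fitted values of Steiner and Kirchengast (2005):

```python
import math

def ro_observational_error(z_km, s0=0.2, z_bot=10.0, z_top=20.0, H=15.0):
    """Piecewise GPS RO observational-error model (illustrative parameters).

    Inverse height law below z_bot, constant between z_bot and z_top,
    exponential growth above z_top.
    """
    if z_km < z_bot:
        return s0 * z_bot / z_km                   # ~1/z, continuous at z_bot
    if z_km <= z_top:
        return s0                                  # constant "tropopause region" core
    return s0 * math.exp((z_km - z_top) / H)       # exponential increase above
```

The three branches are continuous at the region boundaries, mirroring the qualitative shape described in the abstract.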
Error analysis of quartz crystal resonator applications
Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.
1996-12-31
Quartz crystal resonators in chemical sensing applications are usually configured as the frequency determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating needs a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure to determine shear parameters.
TOA/FOA geolocation error analysis.
Mason, John Jeffrey
2008-08-01
This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
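As a rough illustration of how percentile error ellipses can be derived from a horizontal position covariance, the sketch below scales the covariance eigenvalues by the chi-square quantile factor for two degrees of freedom. This is a generic textbook construction, not the paper's algorithm:

```python
import math

def confidence_ellipse_axes(cov, p=0.95):
    """Semi-axes of the p-confidence ellipse of a 2-D Gaussian position error.

    cov is a 2x2 covariance [[sxx, sxy], [sxy, syy]]; the scale factor
    sqrt(-2 ln(1 - p)) is the chi-square quantile for 2 degrees of freedom.
    """
    sxx, sxy = cov[0]
    _, syy = cov[1]
    # eigenvalues of the symmetric 2x2 covariance matrix
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    k = math.sqrt(-2.0 * math.log(1.0 - p))
    return k * math.sqrt(lam1), k * math.sqrt(lam2)
```

For an isotropic covariance the two semi-axes coincide, reducing the ellipse to the percentile circle mentioned in the abstract.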
NASA Technical Reports Server (NTRS)
Weir, Kent A.; Wells, Eugene M.
1990-01-01
The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
Numeracy, Literacy and Newman's Error Analysis
ERIC Educational Resources Information Center
White, Allan Leslie
2010-01-01
Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…
Error analysis in nuclear density functional theory
NASA Astrophysics Data System (ADS)
Schunck, Nicolas; McDonnell, Jordan D.; Sarich, Jason; Wild, Stefan M.; Higdon, Dave
2015-03-01
Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the Universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.
Study of geopotential error models used in orbit determination error analysis
NASA Technical Reports Server (NTRS)
Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.
1991-01-01
The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
Error Analysis and Propagation in Metabolomics Data Analysis.
Moseley, Hunter N B
2013-01-01
Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
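Monte Carlo error analysis, one of the methodologies reviewed above, can be illustrated with a minimal sketch that propagates independent Gaussian measurement errors through an arbitrary function (here a ratio of two hypothetical peak intensities) and recovers a standard deviation close to the first-order analytical estimate:

```python
import math
import random

def mc_propagate(f, means, sigmas, n=100000, seed=1):
    """Monte Carlo propagation of independent Gaussian input errors through f."""
    rng = random.Random(seed)
    samples = [f(*[rng.gauss(m, s) for m, s in zip(means, sigmas)])
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var)

# Ratio of two measured intensities (hypothetical values), x = 10 +/- 0.1, y = 5 +/- 0.1;
# first-order propagation predicts sd = 2 * sqrt((0.1/10)^2 + (0.1/5)^2) = 0.0447.
mean, sd = mc_propagate(lambda x, y: x / y, [10.0, 5.0], [0.1, 0.1])
```

The same routine works unchanged for functions where analytical derivation of the propagated error is impractical.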
Error Analysis of Stochastic Gradient Descent Ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2012-12-31
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
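The flavor of a least-squares ranking loss trained by stochastic gradient descent can be seen in the toy sketch below. It uses a plain linear scorer rather than the paper's kernel formulation, and the data, step size, and regularization values are illustrative only:

```python
import random

def sgd_rank(pairs, dim, step=0.1, epochs=50, lam=0.01, seed=0):
    """Pairwise least-squares ranking by SGD with a linear scorer w.x
    (a kernel version would replace the dot product with a kernel expansion)."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        rng.shuffle(pairs)
        for xi, xj, ydiff in pairs:
            pred = sum(wk * (a - b) for wk, a, b in zip(w, xi, xj))
            g = pred - ydiff                     # gradient of 0.5 * (pred - ydiff)^2
            for k in range(dim):
                w[k] -= step * (g * (xi[k] - xj[k]) + lam * w[k])
    return w

# Toy data: the true score is the first feature; each pair carries a label difference.
pairs = [((1.0, 0.0), (0.0, 0.0), 1.0), ((2.0, 0.0), (1.0, 0.0), 1.0)]
w = sgd_rank(pairs, dim=2)
```

With ridge penalty `lam` the weight on the informative feature converges to 1/(1 + lam) rather than exactly 1, which is the usual regularization bias.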
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Accumulation of errors in numerical simulations of chemically reacting gas dynamics
NASA Astrophysics Data System (ADS)
Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.
2015-12-01
The aim of the present study is to investigate problems of numerical simulation precision and stochastic error accumulation in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, as was the accumulation of errors for simulations employing different computational strategies.
A constrained-gradient method to control divergence errors in numerical MHD
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-10-01
In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, `divergence-cleaning' schemes reduce the ∇·B errors; however they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike `locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or `8-wave' cleaning can produce order-of-magnitude errors.
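As background for why the ∇·B diagnostic matters, a minimal finite-difference sketch (unrelated to the GIZMO implementation itself) shows how the divergence of a discretized field can be monitored: an analytically divergence-free field yields machine-zero values, while a field with Bx = x yields ∇·B = 1 everywhere:

```python
def div_b(Bx, By, dx):
    """Central-difference divergence of a 2-D vector field on a uniform grid
    (computed at interior points only)."""
    n = len(Bx)
    out = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            out[i][j] = ((Bx[i + 1][j] - Bx[i - 1][j])
                         + (By[i][j + 1] - By[i][j - 1])) / (2.0 * dx)
    return out

n, dx = 5, 0.5
Bx = [[i * dx for _ in range(n)] for i in range(n)]  # Bx = x  ->  div B = 1
By = [[0.0 for _ in range(n)] for _ in range(n)]
div = div_b(Bx, By, dx)
```

In a real MHD code this diagnostic would be evaluated every step to track how large the unphysical monopole terms have grown.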
Error analysis of aspheric surface with reference datum.
Peng, Yanglin; Dai, Yifan; Chen, Shanyong; Song, Ci; Shi, Feng
2015-07-20
Severe requirements of location tolerance pose new challenges for optical component measurement, evaluation, and manufacture. Form error, location error, and the relationship between them need to be analyzed together during error analysis of an aspheric surface with a reference datum. Based on the least-squares optimization method, we develop a least-squares local optimization method to evaluate the form error of an aspheric surface with a reference datum, and then calculate the location error. Error analysis of a machined aspheric surface reveals the relationship between form error and location error and its influence on the machining process. For aspheric surfaces of different radius and aperture, the trends are simulated by superimposing normally distributed random noise on an ideal surface. This establishes linkages between machining and error analysis, and provides an effective guideline for error correction.
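The idea of separating form error from location error can be hinted at with a much simpler one-dimensional analogue: a least-squares fit removes the piston and tilt (location-like) terms so that the residual represents form error alone. This is an illustrative sketch, not the paper's least-squares local optimization method:

```python
def remove_tilt(z):
    """Least-squares removal of piston and tilt from a 1-D profile; the
    residual is the form error relative to the best-fit line."""
    n = len(z)
    xs = list(range(n))
    xbar = sum(xs) / n
    zbar = sum(z) / n
    slope = (sum((x - xbar) * (v - zbar) for x, v in zip(xs, z))
             / sum((x - xbar) ** 2 for x in xs))
    return [v - (zbar + slope * (x - xbar)) for x, v in zip(xs, z)]
```

A pure tilt profile leaves a zero residual, while a symmetric bump survives the fit, which is exactly the form/location split.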
Error propagation in the numerical solutions of the differential equations of orbital mechanics
NASA Technical Reports Server (NTRS)
Bond, V. R.
1982-01-01
The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell, Encke, and Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast an element formulation has zero eigenvalues and is numerically stable.
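The amplification effect described above can be demonstrated with a toy experiment: integrating y' = λy, a system with a real positive eigenvalue λ, by explicit Euler, a small perturbation of the initial condition grows roughly like e^{λt}. This is a minimal sketch of the mechanism, not the Cowell or Encke formulations themselves:

```python
def euler(f, y0, t0, t1, n):
    """Fixed-step explicit Euler integration of y' = f(t, y)."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y has eigenvalue +1: a small initial error eps is amplified ~ e^t.
lam, eps = 1.0, 1e-6
y_ref = euler(lambda t, y: lam * y, 1.0, 0.0, 5.0, 1000)
y_pert = euler(lambda t, y: lam * y, 1.0 + eps, 0.0, 5.0, 1000)
growth = (y_pert - y_ref) / eps   # ~ e^5 ~ 148 for the exact flow
```

For a formulation with zero eigenvalues (as the abstract notes for element formulations) the same perturbation would not be amplified.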
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are improved.
Performance analysis of ARQ error controls under Markovian block error pattern
NASA Astrophysics Data System (ADS)
Cho, Young Jong; Un, Chong Kwan
1994-02-01
In this paper, we investigate the effect of forward/backward channel memory (statistical dependence in the occurrence of transmission errors) on ARQ error controls. To take into account the effect of backward channel errors in the performance analysis, we consider modified ARQ schemes with an effective retransmission strategy that prevents the deadlock incurred by errors on acknowledgments. In the study, we consider two modified go-back-N schemes, one with timer control and one with buffer control.
Localization algorithm and error analysis for micro radio-localizer
NASA Astrophysics Data System (ADS)
Li, Xudong; Wang, Xiaohao; Li, Qiang; Zhao, Huijie
2006-11-01
After more than ten years of research effort on the Micro Aerial Vehicle (MAV) since it was proposed in the 1990s, the stable flying platform has matured. The next reasonable goal is to implement more practical applications for MAVs. Equipped with a micro radio-localizer, an MAV can localize a target that transmits radio signals and can thus serve as a novel, promising anti-radiation device. A micro radio-localizer prototype, its localization principle, and its localization algorithm are proposed, and an error analysis of the algorithm is discussed. Based on a comparison of commonly used radio localization methods, and considering the MAV's inherent limitation on antenna dimensions, a localization method based on signal intensity and guidance information is proposed. Under the assumptions that the electromagnetic wave obeys the free-space spreading model and that the signal's power remains unchanged, the measuring equations under different target motions are established and the localization algorithm is derived. The determination of several factors, such as the number of measuring positions, the numerical solution method, and the initial solution, is discussed. An error analysis of the localization algorithm is also presented using error analysis theory. A radio-localizer prototype was developed, and experimental results are shown as well.
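A hedged sketch of the signal-intensity idea (not the authors' algorithm): under the free-space inverse-square spreading model with constant source power, two intensity measurements taken a known baseline apart along the flight path determine the range without knowing the source strength. All names and numbers here are hypothetical.

```python
import math

def measured_power(r, p0=100.0):
    # Free-space inverse-square spreading model (assumed, per the abstract);
    # p0 is a hypothetical source strength at unit distance.
    return p0 / r**2

def range_from_two_powers(p_far, p_near, d):
    # After closing a baseline d toward the target: sqrt(p_far/p_near) = (r - d)/r,
    # so the unknown source power cancels and r = d / (1 - sqrt(p_far/p_near)).
    return d / (1.0 - math.sqrt(p_far / p_near))

r_true = 250.0                            # metres, hypothetical
p_far = measured_power(r_true)
p_near = measured_power(r_true - 10.0)    # measured after closing 10 m
r_est = range_from_two_powers(p_far, p_near, 10.0)
print(r_est)                              # recovers 250.0
```

In practice the intensity measurements are noisy, which is why the abstract's error analysis of the measuring equations matters.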
Biomedical model fitting and error analysis.
Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri
2011-09-20
This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
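The lecture's example programs are in MATLAB; purely as an illustration of steps (ii) and (iii), a Python sketch with a synthetic exponential-decay model (a stand-in, not the BrdU data) might look like this:

```python
import math

# Synthetic "measurements": an exponential decay with a known rate,
# standing in for real labeling data (all values are illustrative).
k_true, n0 = 0.35, 1000.0
t_data = [0, 1, 2, 4, 8, 16]
y_data = [n0 * math.exp(-k_true * t) for t in t_data]

def model(t, k):
    # Nonlinear model fitted directly, avoiding linearization schemes.
    return n0 * math.exp(-k * t)

def chi2(k):
    # Step (ii): a figure-of-merit quantifying model-data mismatch.
    return sum((y - model(t, k)) ** 2 for t, y in zip(t_data, y_data))

def fit(lo=0.01, hi=1.0, iters=60):
    # Step (iii): adjust the parameter for a best fit (ternary search,
    # valid here because chi2 is unimodal in k).
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if chi2(m1) < chi2(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

k_fit = fit()
print(k_fit)          # ≈ 0.35
```

Steps (iv)-(vi) would then examine residuals, alternative fits, and parameter uncertainty, which this sketch omits.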
Nonlinear grid error effects on numerical solution of partial differential equations
NASA Technical Reports Server (NTRS)
Dey, S. K.
1980-01-01
Finite difference solutions of nonlinear partial differential equations require discretization, and consequently grid errors are generated. These errors strongly affect the stability and convergence properties of difference models. Previously, such errors were analyzed by linearizing the difference equations for solutions. Properties of mappings of decadence were used to analyze nonlinear instabilities. Such an analysis is directly affected by initial/boundary conditions. An algorithm was developed, applied to the nonlinear Burgers equation, and verified computationally. A preliminary test shows that the Navier-Stokes equations may be treated similarly.
Trends in MODIS Geolocation Error Analysis
NASA Technical Reports Server (NTRS)
Wolfe, R. E.; Nishihama, Masahiro
2009-01-01
Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.
Investigating Convergence Patterns for Numerical Methods Using Data Analysis
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2013-01-01
The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
Using PASCAL for numerical analysis
NASA Technical Reports Server (NTRS)
Volper, D.; Miller, T. C.
1978-01-01
The data structures and control structures of PASCAL enhance the coding ability of the programmer. Proposed extensions to the language further increase its usefulness in writing numeric programs and support packages for numeric programs.
NASA Technical Reports Server (NTRS)
Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)
2001-01-01
Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the errors of the GOES winds range from approximately 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A
Naming in aphasic children: analysis of paraphasic errors.
van Dongen, H R; Visch-Brink, E G
1988-01-01
In the spontaneous speech of aphasic children, paraphasias have been described. This analysis of naming errors during recovery showed that neologisms and literal and verbal paraphasias occurred. The etiology affected the recovery course of neologisms, but not that of the other error types.
Xue, Hongqi; Miao, Hongyu; Wu, Hulin
2010-01-01
This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^(-1/(p∧4)), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance in selection of the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic co-variance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics.
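As a toy illustration of the step-size result (assuming a scalar linear ODE and a classical fourth-order Runge-Kutta scheme, so p = 4; not the article's HIV model): halving the step size cuts the numerical solution error by about 2⁴, which is what allows the numerical error to be driven well below the measurement error.

```python
import math

theta = 0.8                      # decay-rate parameter in x' = -theta*x

def rk4_solve(x0, t_end, h):
    # Fourth-order Runge-Kutta solution of x' = -theta*x on [0, t_end].
    f = lambda x: -theta * x
    x = x0
    for _ in range(round(t_end / h)):
        k1 = f(x)
        k2 = f(x + h/2*k1)
        k3 = f(x + h/2*k2)
        k4 = f(x + h*k3)
        x += h/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

exact = math.exp(-theta)         # closed-form solution at t = 1
err_coarse = abs(rk4_solve(1.0, 1.0, 0.05) - exact)
err_fine = abs(rk4_solve(1.0, 1.0, 0.025) - exact)
ratio = err_coarse / err_fine
print(ratio)                     # ≈ 16 = 2**4 for a 4th-order method
```

With measurement noise of, say, 1% of the signal, any step size for which the solver error is orders of magnitude smaller makes the numerical error negligible, as the abstract's rate condition formalizes.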
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
ELIASSI,MEHDI; GLASS JR.,ROBERT J.
2000-03-08
The authors consider the ability of the numerical solution of Richards equation to model gravity-driven fingers. Although gravity-driven fingers can be easily simulated using a partial downwind averaging method, they find the fingers are purely artificial, generated by the combined effects of truncation error induced oscillations and capillary hysteresis. Since Richards equation can only yield a monotonic solution for standard constitutive relations and constant flux boundary conditions, it is not the valid governing equation to model gravity-driven fingers, and therefore is also suspect for unsaturated flow in initially dry, highly nonlinear, and hysteretic media where these fingers occur. However, analysis of truncation error at the wetting front for the partial downwind method suggests the required mathematical behavior of a more comprehensive and physically based modeling approach for this region of parameter space.
Analysis of Pronominal Errors: A Case Study.
ERIC Educational Resources Information Center
Oshima-Takane, Yuriko
1992-01-01
Reports on a study of a normally developing boy who made pronominal errors for about 10 months. Comprehension and production data clearly indicate that the child persistently made pronominal errors because of semantic confusion in the use of first- and second-person pronouns. (28 references) (GLR)
Errors, correlations and fidelity for noisy Hamilton flows. Theory and numerical examples
NASA Astrophysics Data System (ADS)
Turchetti, G.; Sinigardi, S.; Servizi, G.; Panichi, F.; Vaienti, S.
2017-02-01
We analyse the asymptotic growth of the error for Hamiltonian flows due to small random perturbations. We compare the forward error with the reversibility error, showing their equivalence for linear flows on a compact phase space. The forward error, given by the root mean square deviation σ(t) of the noisy flow, grows according to a power law if the system is integrable and according to an exponential law if it is chaotic. The autocorrelation and the fidelity, defined as the correlation of the perturbed flow with respect to the unperturbed one, exhibit an exponential decay as exp(-2π²σ²(t)). Some numerical examples such as the anharmonic oscillator and the Hénon-Heiles model confirm these results. We finally consider the effect of the observational noise on an integrable system, and show that the decay of correlations can only be observed after a sequence of measurements and that the multiplicative noise is more effective if the delay between two measurements is large.
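A minimal sketch of the power-law forward-error growth for an integrable system (a noisy linear rotation with additive Gaussian kicks; illustrative only, not the paper's Hénon-Heiles example): the rms deviation σ(t) grows diffusively, so quadrupling the number of steps doubles σ.

```python
import math, random

random.seed(1)
eps = 1e-3          # noise amplitude of the random perturbation
omega = 0.37        # rotation frequency (integrable linear flow)

def noisy_orbit(n_steps):
    # Rotation on the line/circle with a small Gaussian kick per step.
    x = 0.0
    for _ in range(n_steps):
        x += omega + random.gauss(0.0, eps)
    return x

def sigma(n_steps, n_samples=2000):
    # Root mean square deviation of the noisy flow from the unperturbed one.
    dev2 = 0.0
    for _ in range(n_samples):
        dev2 += (noisy_orbit(n_steps) - n_steps * omega) ** 2
    return math.sqrt(dev2 / n_samples)

s1, s4 = sigma(100), sigma(400)
ratio = s4 / s1
print(ratio)        # ≈ 2: sigma ~ sqrt(t), a power law, not exponential growth
```

A chaotic map in place of the fixed rotation would instead show σ(t) growing exponentially, which is the dichotomy the abstract states.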
Perez-Benito, Joaquin F; Mulero-Raichs, Mar
2016-10-06
Many kinetic studies concerning homologous reaction series report the existence of an activation enthalpy-entropy linear correlation (compensation plot), its slope being the temperature at which all the members of the series have the same rate constant (the isokinetic temperature). Unfortunately, it has been demonstrated by statistical methods that the experimental errors associated with the activation enthalpy and entropy are mutually interdependent. Therefore, the possibility that some of those correlations might be caused by accidental errors has been explored by numerical simulations. As a result of this study, a computer program has been developed to evaluate the probability that experimental errors might lead to a linear compensation plot starting from an initial randomly scattered set of activation parameters (p-test). Application of this program to kinetic data for 100 homologous reaction series extracted from bibliographic sources leads to the conclusion that most of the reported compensation plots can hardly be explained by the accumulation of experimental errors, and thus require the existence of a pre-existing, physically meaningful correlation.
Numerical errors in the computation of subfilter scalar variance in large eddy simulations
NASA Astrophysics Data System (ADS)
Kaul, C. M.; Raman, V.; Balarac, G.; Pitsch, H.
2009-05-01
Subfilter scalar variance is a key quantity for scalar mixing at the small scales of a turbulent flow and thus plays a crucial role in large eddy simulation of combustion. While prior studies have mainly focused on the physical aspects of modeling subfilter variance, the current work discusses variance models in conjunction with the numerical errors due to their implementation using finite-difference methods. A priori tests on data from direct numerical simulation of homogeneous turbulence are performed to evaluate the numerical implications of specific model forms. Like other subfilter quantities, such as kinetic energy, subfilter variance can be modeled according to one of two general methodologies. In the first of these, an algebraic equation relating the variance to gradients of the filtered scalar field is coupled with a dynamic procedure for coefficient estimation. Although finite-difference methods substantially underpredict the gradient of the filtered scalar field, the dynamic method is shown to mitigate this error through overestimation of the model coefficient. The second group of models utilizes a transport equation for the subfilter variance itself or for the second moment of the scalar. Here, it is shown that the model formulation based on the variance transport equation is consistently biased toward underprediction of the subfilter variance. The numerical issues in the variance transport equation stem from discrete approximations to chain-rule manipulations used to derive convection, diffusion, and production terms associated with the square of the filtered scalar. These approximations can be avoided by solving the equation for the second moment of the scalar, suggesting that formulation's numerical superiority.
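The finite-difference gradient underprediction has a simple one-dimensional analogue (a sketch, not the LES implementation): for a sinusoidal scalar field, a second-order central difference returns the exact gradient scaled by sin(kΔ)/(kΔ) < 1, exactly the kind of deficit the dynamic procedure is observed to compensate for.

```python
import math

k = 4.0            # wavenumber of the (filtered) scalar field
dx = 0.1           # grid spacing

def exact_grad(x):
    # Analytic derivative of sin(k x).
    return k * math.cos(k * x)

def fd_grad(x):
    # Second-order central difference of sin(k x).
    return (math.sin(k * (x + dx)) - math.sin(k * (x - dx))) / (2 * dx)

x = 0.3
ratio = fd_grad(x) / exact_grad(x)
print(ratio)       # = sin(k*dx)/(k*dx) ≈ 0.974 < 1: gradient underpredicted
```

The deficit worsens as kΔ grows, i.e. for scalar features close to the grid scale, which is where subfilter models are exercised hardest.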
An Error Analysis of Elementary School Children's Number Production Abilities
ERIC Educational Resources Information Center
Skwarchuk, Sheri-Lynn; Betts, Paul
2006-01-01
Translating numerals into number words is a tacit task requiring linguistic and mathematical knowledge. This project expanded on previous number production models by examining developmental differences in children's number naming errors. Ninety-six children from grades one, three, five, and seven translated a random set of numerals into number…
Numerical likelihood analysis of cosmic ray anisotropies
Carlos Hojvat et al.
2003-07-02
A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.
Solar Tracking Error Analysis of Fresnel Reflector
Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie
2014-01-01
Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analyzing the influence of the main factors on the sun-tracking rotation angle error, the pattern and extent of the influence were revealed. It is concluded that the tracking errors caused by the difference between the rotation axis and the true north meridian are, under certain conditions, maximum at noon and decrease gradually through the morning and afternoon. The tracking error caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment near noon. PMID:24895664
Numerical method to solve Cauchy type singular integral equation with error bounds
NASA Astrophysics Data System (ADS)
Setia, Amit; Sharma, Vaishali; Liu, Yucheng
2017-01-01
Cauchy-type singular integral equations with index zero occur naturally in the field of aerodynamics. The literature on these equations is well developed, and Chebyshev polynomials are most frequently used to solve them. In this paper, a residual-based Galerkin method using Legendre polynomials as basis functions is proposed to solve the Cauchy singular integral equation of index zero. It converts the Cauchy singular integral equation into a system of equations that can be easily solved. Test examples are given to illustrate the proposed numerical method, and error bounds are derived and implemented in all the test examples.
Nuclear numerical range and quantum error correction codes for non-unitary noise models
NASA Astrophysics Data System (ADS)
Lipka-Bartosik, Patryk; Życzkowski, Karol
2017-01-01
We introduce a notion of nuclear numerical range defined as the set of expectation values of a given operator A among normalized pure states, which belong to the nucleus of an auxiliary operator Z. This notion proves to be applicable to investigate models of quantum noise with block-diagonal structure of the corresponding Kraus operators. The problem of constructing a suitable quantum error correction code for this model can be restated as a geometric problem of finding intersection points of certain sets in the complex plane. This technique, worked out in the case of two-qubit systems, can be generalized for larger dimensions.
Numerical Package in Computer Supported Numeric Analysis Teaching
ERIC Educational Resources Information Center
Tezer, Murat
2007-01-01
At universities in the faculties of Engineering, Sciences, Business and Economics together with higher education in Computing, it is stated that because of the difficulty, calculators and computers can be used in Numerical Analysis (NA). In this study, the learning computer supported NA will be discussed together with important usage of the…
Error Analysis of Terrestrial Laser Scanning Data by Means of Spherical Statistics and 3D Graphs
Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G.; Arias, Pedro
2010-01-01
This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to produce the graphics automatically. The results indicate that the proposed method is advantageous, as it offers a more complete analysis of the positional accuracy: the angular error component, the uniformity of the vector distribution, and error isotropy, in addition to the modular error component given by linear statistics. PMID:22163461
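A sketch of the decomposition described above, one module and two angles per error vector, plus one basic spherical statistic (the mean resultant length of the error directions); the vectors below are hypothetical, not the paper's 53 check points.

```python
import math

# Hypothetical 3D positional errors (dx, dy, dz) at a few check points, in metres.
errors = [(0.002, 0.001, 0.003), (-0.001, 0.002, 0.004),
          (0.001, -0.001, 0.002), (0.000, 0.001, 0.003)]

def spherical(v):
    # Decompose an error vector into one module and two angles.
    dx, dy, dz = v
    module = math.sqrt(dx*dx + dy*dy + dz*dz)
    colatitude = math.acos(dz / module)
    longitude = math.atan2(dy, dx)
    return module, colatitude, longitude

modules = [spherical(v)[0] for v in errors]

# Mean resultant length of the unit direction vectors: near 1 means the
# error directions cluster (anisotropy); near 0 means a uniform spread.
ux = sum(v[0] / m for v, m in zip(errors, modules)) / len(errors)
uy = sum(v[1] / m for v, m in zip(errors, modules)) / len(errors)
uz = sum(v[2] / m for v, m in zip(errors, modules)) / len(errors)
R_bar = math.sqrt(ux*ux + uy*uy + uz*uz)
print(R_bar)        # ≈ 0.89 here: these hypothetical errors point mostly +z
```

Modular statistics alone would use only `modules`; the angular components are what reveal isotropy or directional bias.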
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
Application of Interval Analysis to Error Control.
1976-09-01
We give simple examples of ways in which interval arithmetic can be used to detect instabilities in computer algorithms, roundoff error accumulation, and even the effects of hardware inadequacies. This paper is primarily tutorial. (Author)
Analysis of thematic map classification error matrices.
Rosenfield, G.H.
1986-01-01
The classification error matrix expresses the counts of agreement and disagreement between the classified categories and their verification. Thematic mapping experiments compare variables such as multiple photointerpretation or scales of mapping, and produce one or more classification error matrices. This paper presents a tutorial to implement a typical problem of a remotely sensed data experiment for solution by the linear model method.-from Author
Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G
2014-01-27
Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase information lost in detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and the amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
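In the linearized Gaussian case, the law of error propagation maps the covariance matrix through the state transition matrix as C' = Φ C Φᵀ. A toy 2×2 sketch in pure Python (the matrices are illustrative, not a real orbit model):

```python
# Linearized covariance propagation: if the state maps as x' = Phi x,
# the covariance of the estimated elements maps as C' = Phi C Phi^T.

def matmul(a, b):
    # Plain nested-list matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

phi = [[1.0, 0.5],    # hypothetical state transition matrix over one epoch
       [0.0, 1.0]]
cov = [[4.0, 0.0],    # hypothetical initial covariance of the elements
       [0.0, 1.0]]

cov_new = matmul(matmul(phi, cov), transpose(phi))
print(cov_new)        # [[4.25, 0.5], [0.5, 1.0]]
```

The off-diagonal terms that appear show how propagation correlates initially independent element errors, which is what shapes the positional uncertainty ellipsoids.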
Error analysis of system mass properties
NASA Technical Reports Server (NTRS)
Brayshaw, J.
1984-01-01
An attempt is made to verify the margin of system mass properties over values that are sufficient for the support of such other critical system requirements as those of dynamic control. System nominal mass properties are designed on the basis of an imperfect understanding of the mass and location of constituent elements; the effect of such element errors is to introduce net errors into calculated system mass properties. The direct measurement of system mass properties is, however, impractical. Attention is given to these issues in the case of the Galileo spacecraft.
NASA Technical Reports Server (NTRS)
Fiske, David R.
2004-01-01
In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently to data with explicit reflection symmetries.
Dose error analysis for a scanned proton beam delivery system
NASA Astrophysics Data System (ADS)
Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.
2010-12-01
All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
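The per-voxel rms procedure described in the abstract can be sketched as follows (a one-dimensional stand-in with an assumed 1% random fluctuation per delivery, not the Loma Linda beam model):

```python
import math, random

random.seed(7)
prescribed = 2.0        # Gy, prescribed dose
n_voxels = 50           # 1-D stand-in for the 3-D 2.5 mm voxel grid
n_deliveries = 200      # repeated simulated treatments of the same volume
rel_sigma = 0.01        # assumed 1% random per-delivery fluctuation

# Each delivery gives every voxel the prescription plus a random error
# (standing in for energy, spot-position, and intensity fluctuations).
doses = [[prescribed * (1 + random.gauss(0, rel_sigma))
          for _ in range(n_voxels)] for _ in range(n_deliveries)]

def rms_error(voxel):
    # rms variation of delivered dose at one voxel across deliveries.
    devs = [doses[d][voxel] - prescribed for d in range(n_deliveries)]
    return math.sqrt(sum(e * e for e in devs) / n_deliveries)

worst = max(rms_error(v) for v in range(n_voxels)) / prescribed
print(worst)            # worst-case relative rms stays near the 1% input level
```

With correlated spot-overlap effects included, as in the paper, the per-voxel rms can differ from the raw input fluctuation, which is why the full treatment simulation is needed.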
Size and Shape Analysis of Error-Prone Shape Data
Du, Jiejun; Dryden, Ian L.; Huang, Xianzheng
2015-01-01
We consider the problem of comparing sizes and shapes of objects when landmark data are prone to measurement error. We show that naive implementation of ordinary Procrustes analysis that ignores measurement error can compromise inference. To account for measurement error, we propose the conditional score method for matching configurations, which guarantees consistent inference under mild model assumptions. The effects of measurement error on inference from naive Procrustes analysis and the performance of the proposed method are illustrated via simulation and application in three real data examples. Supplementary materials for this article are available online. PMID:26109745
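As a concrete (and much simplified) illustration of Procrustes matching of landmark configurations, here is a minimal two-dimensional ordinary Procrustes fit using complex arithmetic; it recovers rotation and translation only, without the measurement-error correction the paper proposes:

```python
import cmath
import math

def procrustes_2d(x, y):
    """Ordinary Procrustes match of landmark set y onto x (2-D,
    rotation + translation only; no scaling). x, y: lists of complex
    numbers, one landmark per entry."""
    # Centre both configurations.
    cx = sum(x) / len(x)
    cy = sum(y) / len(y)
    xc = [z - cx for z in x]
    yc = [w - cy for w in y]
    # Optimal rotation aligning y onto x: angle of sum x_i * conj(y_i).
    theta = cmath.phase(sum(a * b.conjugate() for a, b in zip(xc, yc)))
    rot = cmath.exp(1j * theta)
    y_fit = [rot * w for w in yc]
    # Residual distance after matching.
    d = math.sqrt(sum(abs(a - b) ** 2 for a, b in zip(xc, y_fit)))
    return d, theta

# A square and the same square rotated by 30 degrees: the residual
# should vanish once the rotation is recovered.
square = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
rot30 = cmath.exp(1j * math.pi / 6)
rotated = [rot30 * z for z in square]
d, theta = procrustes_2d(square, rotated)
```

The paper's point is that when each landmark also carries measurement error, this naive fit becomes biased; the conditional score method corrects for that, which this sketch does not attempt.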
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1997-01-01
We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval decline the along-track prediction errors, and amplitudes of the radial and cross-track errors, increase.
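The distinction drawn above, growing along-track errors versus radial and cross-track errors oscillating about a near-zero mean, can be illustrated with synthetic error series (the numbers below are invented, not the study's data):

```python
import math
import random

random.seed(1)

# Synthetic 36-hour prediction errors at hourly points: along-track
# errors grow roughly linearly; radial errors oscillate about zero with
# a ~96-minute (1.6 h) orbital period. Units: km, purely illustrative.
hours = range(37)
along = [0.05 * t + random.gauss(0, 0.02) for t in hours]
radial = [0.1 * math.sin(2 * math.pi * t / 1.6) for t in hours]

def mean(xs):
    return sum(xs) / len(xs)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

along_mean = mean(along)    # drifts well away from zero
radial_mean = mean(radial)  # stays near zero; only the rms is large
```

The along-track series "leaps ahead" monotonically (nonzero, growing mean), while the radial series has a near-zero mean and its error shows up only in the rms amplitude, matching the qualitative behavior reported.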
A Numerical Study of Some Potential Sources of Error in Side-by-Side Seismometer Evaluations
Holcomb, L. Gary
1990-01-01
INTRODUCTION This report presents the results of a series of computer simulations of potential errors in test data which might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNR's). This report extends this analysis to high SNR's to determine the limitations of the direct method for calculating the noise levels at signal-to-noise levels which are much higher than presented previously (see Holcomb 1989). Next, the method is used to analyze a simulated side-by-side test in which the outputs of both sensors consist of bandshaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real-world data because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and bandshaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real-world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that
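The core idea behind such a direct method, that the cross-power of two side-by-side sensors estimates the common ground signal while each auto-power adds that sensor's own noise, can be sketched in the time domain with white-noise outputs (the signal and noise levels below are arbitrary, and real analyses work per frequency band rather than on raw variances):

```python
import random

random.seed(4)

# Side-by-side model: both sensors see the same signal plus independent
# instrument noise. Since <s, n_i> = 0, the cross-covariance of the two
# outputs estimates the common signal power, so each sensor's noise
# power is its auto-power minus the cross-power.
n = 20000
signal = [random.gauss(0, 1.0) for _ in range(n)]
x1 = [s + random.gauss(0, 0.3) for s in signal]   # sensor 1 output
x2 = [s + random.gauss(0, 0.5) for s in signal]   # sensor 2 output

def mean(v):
    return sum(v) / len(v)

m1, m2 = mean(x1), mean(x2)
p11 = mean([(a - m1) ** 2 for a in x1])                       # auto-power 1
p22 = mean([(b - m2) ** 2 for b in x2])                       # auto-power 2
p12 = mean([(a - m1) * (b - m2) for a, b in zip(x1, x2)])     # cross-power

noise1 = p11 - p12   # estimates 0.3**2 = 0.09
noise2 = p22 - p12   # estimates 0.5**2 = 0.25
```

At high SNR the two terms being subtracted become nearly equal, so small statistical fluctuations in the cross-power swamp the noise estimate, which is the limitation the report probes.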
Empirical Analysis of Systematic Communication Errors.
1981-09-01
human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links...phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission...communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share
Exploratory Factor Analysis of Reading, Spelling, and Math Errors
ERIC Educational Resources Information Center
O'Brien, Rebecca; Pan, Xingyu; Courville, Troy; Bray, Melissa A.; Breaux, Kristina; Avitia, Maria; Choi, Dowon
2017-01-01
Norm-referenced error analysis is useful for understanding individual differences in students' academic skill development and for identifying areas of skill strength and weakness. The purpose of the present study was to identify underlying connections between error categories across five language and math subtests of the Kaufman Test of…
Implications of Error Analysis Studies for Academic Interventions
ERIC Educational Resources Information Center
Mather, Nancy; Wendling, Barbara J.
2017-01-01
We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…
The Use of Trigram Analysis for Spelling Error Detection.
ERIC Educational Resources Information Center
Zamora, E. M.; And Others
1981-01-01
Describes work performed under the Spelling Error Detection Correction Project (SPEEDCOP) at Chemical Abstracts Service to devise effective automatic methods of detecting and correcting misspellings in scholarly and scientific text. The trigram analysis technique developed determined sites but not types of errors. Thirteen references are listed.…
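A minimal sketch of the trigram technique, flagging the sites of trigrams never seen in a reference corpus, might look like this (the tiny corpus is illustrative only; SPEEDCOP's actual tables were built from large scholarly corpora):

```python
def trigrams(word):
    """Trigrams of a word, space-padded so edge letters are counted."""
    padded = " " + word.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def build_index(corpus_words):
    """Set of every trigram observed in the reference corpus."""
    seen = set()
    for w in corpus_words:
        seen |= trigrams(w)
    return seen

def suspect_sites(word, index):
    """Positions of trigrams never seen in the corpus -- the technique
    locates *where* an error probably is, not what kind it is."""
    padded = " " + word.lower() + " "
    return [i for i in range(len(padded) - 2)
            if padded[i:i + 3] not in index]

corpus = ["chemical", "chemistry", "analysis", "spelling",
          "detection", "correction", "abstracts", "service"]
index = build_index(corpus)
good = suspect_sites("chemical", index)   # -> no suspect sites
bad = suspect_sites("chemcial", index)    # transposition error flagged
```

The transposed word produces a cluster of unseen trigrams around the error site, while the correctly spelled word produces none, mirroring the "sites but not types" finding.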
Error analysis for a laser differential confocal radius measurement system.
Wang, Xu; Qiu, Lirong; Zhao, Weiqian; Xiao, Yang; Wang, Zhongyu
2015-02-10
In order to further improve the measurement accuracy of the laser differential confocal radius measurement system (DCRMS) developed previously, a DCRMS error compensation model is established for the error sources, including laser source offset, test sphere position adjustment offset, test sphere figure, and motion error, based on analyzing the influences of these errors on the measurement accuracy of radius of curvature. Theoretical analyses and experiments indicate that the expanded uncertainty of the DCRMS is reduced to U=0.13 μm+0.9 ppm·R (k=2) through the error compensation model. The error analysis and compensation model established in this study can provide the theoretical foundation for improving the measurement accuracy of the DCRMS.
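The k = 2 expanded-uncertainty figure quoted above is the kind of number produced by root-sum-square combination of independent error contributions; a generic sketch, with hypothetical contribution values rather than the paper's actual budget:

```python
import math

# Illustrative standard-uncertainty contributions (micrometres) for a
# radius measurement -- hypothetical values, not the DCRMS budget.
contributions = {
    "laser source offset": 0.03,
    "sphere position adjustment": 0.04,
    "sphere figure": 0.02,
    "motion error": 0.03,
}

def expanded_uncertainty(contribs, k=2):
    """Root-sum-square of independent standard uncertainties, scaled by
    the coverage factor k (k = 2 corresponds to ~95 % coverage)."""
    u_c = math.sqrt(sum(u * u for u in contribs.values()))
    return k * u_c

U = expanded_uncertainty(contributions)   # micrometres
```

A radius-proportional term like the paper's 0.9 ppm·R would be added on top of this constant part, growing with the measured radius.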
Orbital error analysis for comet Encke, 1980
NASA Technical Reports Server (NTRS)
Yeomans, D. K.
1976-01-01
Before a particular comet is selected as a flyby target, the following criteria should be considered in determining its ephemeris uncertainty: (1) A target comet should have good observability during the apparition of the proposed intercept; and (2) A target comet should have a good observational history. Several well observed and consecutive apparitions allow an accurate determination of a comet's mean motion and nongravitational parameters. Using these criteria, along with statistical and empirical error analyses, it has been demonstrated that the 1980 apparition of comet Encke is an excellent opportunity for a cometary flyby space probe. For this particular apparition, a flyby to within 1,000 km of comet Encke seems possible without the use of sophisticated and expensive onboard navigation instrumentation.
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Bahşı, Ayşe Kurt; Yalçınbaş, Salih
2016-01-01
In this study, the Fibonacci collocation method, based on the Fibonacci polynomials, is presented to solve fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; using the Caputo definition of the fractional derivative, the equation can be reduced to a set of linear algebraic equations. Also, an error estimation algorithm based on the residual functions is presented for this method, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function can be approximately computed by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more efficient than direct numerical solutions. Numerical examples, figures, and comparison tables are presented to show the efficiency and usability of the proposed method.
Classification error analysis in stereo vision
NASA Astrophysics Data System (ADS)
Gross, Eitan
2015-07-01
Depth perception in humans is obtained by comparing the images generated by the two eyes. Given the highly stochastic nature of neurons in the brain, this comparison requires maximizing the mutual information (MI) between the neuronal responses of the two eyes by distributing the coding information across a large number of neurons. Unfortunately, MI is not an extensive quantity, making it very difficult to predict how the accuracy of depth perception will vary with the number of neurons (N) in each eye. To address this question we present a two-arm, distributed decentralized sensors detection model. We demonstrate how the system can extract depth information from a pair of discrete-valued stimuli, represented here by a pair of random dot-matrix stereograms. Using the theory of large deviations we calculated the rates at which the global average error probability of our detector and the MI between the two arms' outputs vary with N. We found that MI saturates exponentially with N at a rate which decays as 1/N. The rate function approaches the Chernoff distance between the two probability distributions asymptotically. Our results may have implications for computer stereo vision systems that use Hebbian-based algorithms for terrestrial navigation.
Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.
2004-01-01
The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.
Error analysis of large aperture static interference imaging spectrometer
NASA Astrophysics Data System (ADS)
Li, Fan; Zhang, Guo
2015-12-01
The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux and a wide spectral range; it overcomes the contradiction between high flux and high stability, giving it important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, its errors follow different laws and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographic surface features, the error laws of LASIS imaging must be studied. In this paper, LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined temporal and spatial modulation is experimentally analyzed, together with the errors from the radiometric correction and spectral inversion processes.
Attitude Determination Error Analysis System (ADEAS) mathematical specifications document
NASA Technical Reports Server (NTRS)
Nicholson, Mark; Markley, F.; Seidewitz, E.
1988-01-01
The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is described, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
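A one-variable stand-in for the regression modeling described above, fitting error counts against files radiated by ordinary least squares, can be sketched as follows (the data points are invented for illustration):

```python
# Simple least-squares fit of error counts against files radiated --
# a one-variable stand-in for the paper's multiple-regression model.
files_radiated = [10, 25, 40, 55, 80, 120]
error_counts = [0, 1, 1, 2, 3, 5]

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_line(files_radiated, error_counts)
# Expected error count for a hypothetical 100-file workload:
predicted_at_100 = intercept + slope * 100
```

The actual study fits several predictors at once (commands, blocks, workload, novelty) and checks goodness of fit, but the evaluate-against-expected-rate idea is the same.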
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Simple numerical analysis of longboard speedometer data
NASA Astrophysics Data System (ADS)
Hare, Jonathan
2013-11-01
Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 Phys. Educ. 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as simple numerical differentiation and integration. This is an interesting, fun and instructive way to start to explore data manipulation at GCSE and A-level—analysis and skills so essential for the engineer and scientist.
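The spreadsheet operations described, integrating velocity to get distance and differentiating it to get acceleration, reduce to a trapezoid rule and central differences; a sketch with synthetic samples:

```python
# Numerical differentiation (central differences) and integration
# (trapezoid rule) of a sampled velocity signal, mirroring the
# spreadsheet analysis. The samples below are synthetic.
dt = 0.5  # seconds between samples
velocity = [0.0, 1.0, 2.0, 3.0, 3.5, 3.5, 3.0]  # m/s

# Distance: cumulative trapezoid integral of velocity.
distance = [0.0]
for i in range(1, len(velocity)):
    distance.append(distance[-1] + 0.5 * (velocity[i] + velocity[i - 1]) * dt)

# Acceleration: central differences, one-sided at the endpoints.
accel = [(velocity[1] - velocity[0]) / dt]
for i in range(1, len(velocity) - 1):
    accel.append((velocity[i + 1] - velocity[i - 1]) / (2 * dt))
accel.append((velocity[-1] - velocity[-2]) / dt)
```

In the classroom version the velocity column itself comes from scaling the speedometer's voltage data; the differentiation step amplifies noise, which is a useful talking point at this level.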
The Use of Error Analysis to Assess Resident Performance
D’Angelo, Anne-Lise D.; Law, Katherine E.; Cohen, Elaine R.; Greenberg, Jacob A.; Kwan, Calvin; Greenberg, Caprice; Wiegmann, Douglas A.; Pugh, Carla M.
2015-01-01
Background The aim of this study is to assess validity of a human factors error assessment method for evaluating resident performance during a simulated operative procedure. Methods Seven PGY4-5 residents had 30 minutes to complete a simulated laparoscopic ventral hernia (LVH) repair on Day 1 of a national, advanced laparoscopic course. Faculty provided immediate feedback on operative errors and residents participated in a final product analysis of their repairs. Residents then received didactic and hands-on training regarding several advanced laparoscopic procedures during a lecture session and animate lab. On Day 2, residents performed a nonequivalent LVH repair using a simulator. Three investigators reviewed and coded videos of the repairs using previously developed human error classification systems. Results Residents committed 121 total errors on Day 1 compared to 146 on Day 2. One of seven residents successfully completed the LVH repair on Day 1 compared to all seven residents on Day 2 (p=.001). The majority of errors (85%) committed on Day 2 were technical and occurred during the last two steps of the procedure. There were significant differences in error type (p<.001) and level (p=.019) from Day 1 to Day 2. The proportion of omission errors decreased from Day 1 (33%) to Day 2 (14%). In addition, there were more technical and commission errors on Day 2. Conclusion The error assessment tool was successful in categorizing performance errors, supporting known-groups validity evidence. Evaluating resident performance through error classification has great potential in facilitating our understanding of operative readiness. PMID:26003910
Simple Numerical Analysis of Longboard Speedometer Data
ERIC Educational Resources Information Center
Hare, Jonathan
2013-01-01
Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…
Analysis and Numerical Treatment of Elliptic Equations with Stochastic Data
NASA Astrophysics Data System (ADS)
Cheng, Shi
Many science and engineering applications are impacted by a significant amount of uncertainty in the model. Examples include groundwater flow, microscopic biological systems, material science and chemical engineering systems. Common mathematical problems in these applications are elliptic equations with stochastic data. In this dissertation, we examine two types of stochastic elliptic partial differential equations (SPDEs), namely nonlinear stochastic diffusion-reaction equations and general linearized elastostatic problems in random media. We begin with the construction of an analysis framework for this class of SPDEs, extending prior work of Babuska in 2010. We then use the framework both for establishing well-posedness of the continuous problems and for posing Galerkin-type numerical methods. In order to solve these two types of problems, single integral weak formulations and stochastic collocation methods are applied. Moreover, a priori error estimates for stochastic collocation methods are derived, which imply that the rate of convergence is exponential as the polynomial order increases in the space of random variables. As expected, numerical experiments show the exponential rate of convergence, verified by a posteriori error analysis. Finally, an adaptive strategy driven by a posteriori error indicators is designed.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.
Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks
NASA Astrophysics Data System (ADS)
Johnson, Joseph
2016-03-01
We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called ``metanumbers'') support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a ``supernet'' of all numerical information, supporting new initiatives in AI.
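A toy version of a ``metanumber'', a value carrying its unit and error through arithmetic, might look like the following sketch; the real system described above also tracks defining metadata and performs full dimensional analysis, which this fragment does not:

```python
import math

class MetaNumber:
    """Toy 'metanumber': a value that carries its unit and a standard
    error, propagating both through multiplication. Sketch only."""

    def __init__(self, value, unit, error):
        self.value, self.unit, self.error = value, unit, error

    def __mul__(self, other):
        value = self.value * other.value
        unit = self.unit + "*" + other.unit
        # Independent relative errors add in quadrature.
        rel = math.sqrt((self.error / self.value) ** 2 +
                        (other.error / other.value) ** 2)
        return MetaNumber(value, unit, abs(value) * rel)

length = MetaNumber(2.0, "m", 0.02)   # 2.00 m +/- 0.02 m
force = MetaNumber(10.0, "N", 0.5)    # 10.0 N +/- 0.5 N
work = length * force                  # carries unit "m*N" and error
```

Carrying units and errors alongside every value is what makes tables of such numbers machine-comparable, which is the starting point for the network construction the abstract describes.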
Error analysis of two methods for range-images registration
NASA Astrophysics Data System (ADS)
Liu, Xiaoli; Yin, Yongkai; Li, Ameng; He, Dong; Peng, Xiang
2010-08-01
With the improvements in range image registration techniques, this paper focuses on error analysis of two registration methods generally applied in industrial metrology, covering algorithm comparison, matching error, computational complexity and application areas. One method is iterative closest points (ICP), which achieves accurate matching results with small error; however, some limitations restrict its application in automatic, fast metrology. The other method is based on landmarks. We also present an algorithm for registering multiple range images with non-coding landmarks, including automatic landmark identification and sub-pixel location, 3D rigid motion, point pattern matching, and global iterative optimization techniques. The registration results of the two methods are illustrated and a thorough error analysis is performed.
Error Analysis of Variations on Larsen's Benchmark Problem
Azmy, YY
2001-06-27
Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem differ in the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method, and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N) and of the Characteristic type (AHOT-C). The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L∞ norm does not, due to the solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods in spite of the fact that AHOT-C is locally exact, suggesting that numerical diffusion across the singular characteristic is the major source of error on the global scale. However, AHOT-C possesses a given accuracy in a larger fraction of computational cells than DD.
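The observation that integral norms converge under mesh refinement while the pointwise norm does not can be reproduced with a toy cell-wise error field holding an O(1) jump in a single cell (a crude stand-in for the discontinuity across the singular characteristic):

```python
import math

def error_norms(computed, exact):
    """Cell-averaged L1, L2 and L-infinity norms of the flux error."""
    errs = [abs(c - e) for c, e in zip(computed, exact)]
    n = len(errs)
    l1 = sum(errs) / n
    l2 = math.sqrt(sum(e * e for e in errs) / n)
    linf = max(errs)
    return l1, l2, linf

# Refine the mesh while keeping a fixed-height error in one cell:
# the integral norms shrink with n, the pointwise norm does not.
for n in (10, 100, 1000):
    exact = [1.0] * n
    computed = [1.0] * n
    computed[n // 2] = 1.5          # O(1) error confined to one cell
    l1, l2, linf = error_norms(computed, exact)
```

At the finest level the L1 and L2 norms have dropped by orders of magnitude while L∞ is still 0.5, the same qualitative picture the benchmark study reports.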
A simple and efficient error analysis for multi-step solution of the Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Fithen, R. M.
2002-02-01
A simple error analysis is used within the context of a segregated finite element solution scheme to solve incompressible fluid flow. An error indicator is defined based on the difference between a numerical solution on an original mesh and an approximated solution on a related mesh. This error indicator is based on satisfying the steady-state momentum equations. The advantages of this error indicator are simplicity of implementation (a post-processing step), the ability to show regions of high and/or low error, and the property that as the indicator approaches zero the solution approaches convergence. Two examples are chosen for solution: first, the lid-driven cavity problem, followed by the solution of flow over a backward-facing step. The solutions are compared to previously published data for validation purposes. It is shown that this rather simple error estimate, when used as a re-meshing guide, can be very effective in obtaining accurate numerical solutions.
Error control in the GCF: An information-theoretic model for error analysis and coding
NASA Technical Reports Server (NTRS)
Adeyemi, O.
1974-01-01
The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.
Zollanvari, Amin; Genton, Marc G
2013-08-01
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
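The resubstitution idea, re-scoring the training sample with the fitted discriminant, is easy to see in the simplest univariate case with equal priors and a common known variance (a much reduced version of the multivariate Gaussian model analyzed in the paper, and without the smoothing refinement):

```python
import random

random.seed(2)

# Two univariate Gaussian classes with a common variance -- the
# simplest instance of the model class studied. Resubstitution simply
# re-scores the training data with the fitted classification rule.
n = 200
class0 = [random.gauss(0.0, 1.0) for _ in range(n)]
class1 = [random.gauss(2.0, 1.0) for _ in range(n)]

m0 = sum(class0) / n
m1 = sum(class1) / n
threshold = 0.5 * (m0 + m1)   # LDA decision boundary, equal priors

errors = (sum(x > threshold for x in class0)
          + sum(x <= threshold for x in class1))
resub_error = errors / (2 * n)   # resubstitution error estimate
```

Because the rule is evaluated on the same data used to fit it, the resubstitution estimate is optimistically biased on average; the smoothed variant studied in the paper replaces the hard 0/1 count with a smooth score to reduce variance, and the optimal smoothing parameter derived there removes the bias.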
Application of human error analysis to aviation and space operations
Nelson, W.R.
1998-03-01
For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.
The Use of Contrastive and Error Analysis to Practicing Teachers.
ERIC Educational Resources Information Center
Filipovic, Rudolf
A major problem in learning a second language is the interference of a structurally different native language. Contrastive analysis (CA) combined with learner error analysis (EA) provides an excellent basis for preparation of language instructional materials. The Yugoslav Serbo-Croatian-English Contrastive Project proved that full application of CA…
Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors
ERIC Educational Resources Information Center
Sarcevic, Aleksandra
2009-01-01
An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…
Bootstrap Standard Error Estimates in Dynamic Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Browne, Michael W.
2010-01-01
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
Numerical Analysis of Robust Phase Estimation
NASA Astrophysics Data System (ADS)
Rudinger, Kenneth; Kimmel, Shelby
Robust phase estimation (RPE) is a new technique for estimating rotation angles and axes of single-qubit operations, steps necessary for developing useful quantum gates [arXiv:1502.02677]. As RPE only diagnoses a few parameters of a set of gate operations while at the same time achieving Heisenberg scaling, it requires relatively few resources compared to traditional tomographic procedures. In this talk, we present numerical simulations of RPE that show both Heisenberg scaling and robustness against state preparation and measurement errors, while also demonstrating numerical bounds on the procedure's efficacy. We additionally compare RPE to gate set tomography (GST), another Heisenberg-limited tomographic procedure. While GST provides a full gate set description, it is more resource-intensive than RPE, leading to potential tradeoffs between the procedures. We explore these tradeoffs and numerically establish criteria to guide experimentalists in deciding when to use RPE or GST to characterize their gate sets. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
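The decision-error computation described above can be sketched by Monte Carlo. The log-linear coefficients, scatter, and the total P criterion below are hypothetical placeholders; only the 20 µg/L chlorophyll a threshold comes from the abstract.

```python
import numpy as np

# Hypothetical log-linear stressor-response model:
#   log10(chl) = a + b*log10(TP) + noise
# Coefficients and scatter are illustrative, not USEPA's fitted values.
a, b, sigma = -0.15, 0.85, 0.2
chl_threshold = 20.0   # designated-use chlorophyll-a threshold, ug/L
tp_criterion = 30.0    # hypothetical numeric TP criterion, ug/L

rng = np.random.default_rng(1)
tp = 10.0 ** rng.uniform(0.5, 2.5, size=100_000)   # simulated lake TP, ug/L
chl = 10.0 ** (a + b * np.log10(tp) + rng.normal(0.0, sigma, tp.size))

impaired = chl > chl_threshold    # true designated-use status
flagged = tp > tp_criterion       # status implied by the TP criterion

type_i = np.mean(flagged & ~impaired)    # flagged although attaining
type_ii = np.mean(~flagged & impaired)   # impairment missed
print(type_i, type_ii)
```

Sweeping `tp_criterion` over a range and plotting the two rates is one way to visualize the balancing of decision errors that the abstract describes.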
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2013-03-01
In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques such as decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally compare the accuracy of two kinds of finite element methods. In this case, our approach sharpens the classical Bramble-Hilbert theorem: where the theorem gives a global error estimate, our approach gives a local one.
NASA Astrophysics Data System (ADS)
Sarojkumar, K.; Krishna, S.
2016-08-01
Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are certain not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.
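A toy version of the idea of using an energy function to measure numerical error: integrate an undamped pendulum (a one-machine swing-equation analogue) with two methods and compare their energy drift. The model, step size, and methods are illustrative, not the paper's screening setup.

```python
import numpy as np

# Energy drift of a numerical method on an undamped pendulum
# (theta'' = -sin(theta)), used as an error measure: the exact flow
# conserves E = 0.5*w**2 + (1 - cos(theta)).
def energy_drift(dt, steps, step):
    th, w = 0.5, 0.0
    e0 = 0.5 * w**2 + (1.0 - np.cos(th))
    for _ in range(steps):
        th, w = step(th, w, dt)
    return abs(0.5 * w**2 + (1.0 - np.cos(th)) - e0)

def euler_step(th, w, dt):                  # explicit Euler, first order
    return th + dt * w, w - dt * np.sin(th)

def heun_step(th, w, dt):                   # Heun (RK2), second order
    k1t, k1w = w, -np.sin(th)
    k2t, k2w = w + dt * k1w, -np.sin(th + dt * k1t)
    return th + 0.5 * dt * (k1t + k2t), w + 0.5 * dt * (k1w + k2w)

print(energy_drift(0.01, 1000, euler_step), energy_drift(0.01, 1000, heun_step))
```

The higher-order method shows far smaller energy drift, so the drift serves as a cheap, reference-free ranking of methods, which is the spirit of the proposed screening error measure.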
Linear error analysis of slope-area discharge determinations
Kirby, W.H.
1987-01-01
The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
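The Taylor-series variance propagation described above can be sketched for a Manning-type slope-area formula, where the exponent of each factor acts as a sensitivity weight on its log-scale error. The error levels and correlation structure below are illustrative, not the paper's values.

```python
import numpy as np

# First-order (Taylor-series) error propagation for a Manning-type
# slope-area formula Q = (1.486/n) * A * R**(2/3) * S**(1/2).
def discharge_relative_variance(rel_sd, corr=None):
    """rel_sd: relative standard deviations of (n, A, R, S).
    The exponent of each factor in Q is its log-scale sensitivity."""
    c = np.array([-1.0, 1.0, 2.0 / 3.0, 0.5])  # d(ln Q)/d(ln x_i)
    if corr is None:
        corr = np.eye(4)                       # assume uncorrelated errors
    cov = np.outer(rel_sd, rel_sd) * corr      # covariance of log-errors
    return c @ cov @ c                         # variance of relative Q error

rel_sd = np.array([0.15, 0.05, 0.05, 0.20])    # n, A, R, S (illustrative)
print(np.sqrt(discharge_relative_variance(rel_sd)))  # relative SD of Q
```

Passing a non-identity `corr` matrix reproduces the "variance of a sum of correlated random variates" structure mentioned in the abstract.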
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.
NASA Astrophysics Data System (ADS)
Privé, N. C.; Errico, R. M.; Tai, K.-S.
2013-06-01
The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
Geometric error analysis for shuttle imaging spectrometer experiment
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.
1984-01-01
The demand for more powerful tools for remote sensing and management of Earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.
Numerical error in electron orbits with large ω_ce Δt
Parker, S.E.; Birdsall, C.K.
1989-12-20
We have found that running electrostatic particle codes with relatively large ω_ce Δt in some circumstances does not significantly affect the physical results. We first present results from a single-particle mover finding the correct first-order drifts for large ω_ce Δt. We then characterize the numerical orbit of the Boris algorithm for rotation when ω_ce Δt ≫ 1. Next, an analysis of the guiding center motion is given, showing why the first-order drift is retained at large ω_ce Δt. Lastly, we present a plasma simulation of a one-dimensional cross-field sheath, with large and small ω_ce Δt, with very little difference in the results. 15 refs., 7 figs., 1 tab.
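The rotation behavior of the Boris pusher at large ω_ce Δt can be sketched as follows. This is a minimal two-dimensional version (B along z, E = 0): the per-step rotation angle is 2·arctan(ω_ce Δt / 2) rather than ω_ce Δt, so the gyro-phase is wrong at large step size, but the speed |v| is conserved exactly, which is why the averaged drifts survive.

```python
import numpy as np

# Magnetic-rotation step of the Boris particle pusher, B along z, E = 0.
def boris_rotation(vx, vy, omega_dt):
    t = 0.5 * omega_dt                      # half-angle "t-vector" magnitude
    s = 2.0 * t / (1.0 + t * t)
    vpx = vx + vy * t                       # v' = v + v x t
    vpy = vy - vx * t
    return vx + vpy * s, vy - vpx * s       # v+ = v + v' x s

vx, vy = 1.0, 0.0
for _ in range(1000):
    vx, vy = boris_rotation(vx, vy, 10.0)   # omega_ce * dt = 10 >> 1
print(np.hypot(vx, vy))                     # speed stays 1 to rounding
```

At ω_ce Δt = 10 the numerical gyro-angle per step is 2·arctan(5) ≈ 2.75 instead of 10, yet the orbit never gains or loses energy.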
Error analysis for momentum conservation in Atomic-Continuum Coupled Model
NASA Astrophysics Data System (ADS)
Yang, Yantao; Cui, Junzhi; Han, Tiansi
2016-08-01
The Atomic-Continuum Coupled Model (ACCM) is a multiscale computation model proposed by Xiang et al. (in IOP Conference Series: Materials Science and Engineering, 2010), which is used to study and simulate the dynamics and thermal-mechanical coupling behavior of crystal materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, to implement the computation of the ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error of the momentum conservation equation introduced by the ACCM, and derive a sequence of inequalities that bound the error. A numerical experiment is carried out to verify our result.
The notion of error in Langevin dynamics. I. Linear analysis
NASA Astrophysics Data System (ADS)
Mishra, Bimal; Schlick, Tamar
1996-07-01
The notion of error in practical molecular and Langevin dynamics simulations of large biomolecules is far from understood because of the relatively large value of the timestep used, the short simulation length, and the low-order methods employed. We begin to examine this issue with respect to equilibrium and dynamic time-correlation functions by analyzing the behavior of selected implicit and explicit finite-difference algorithms for the Langevin equation. We derive: local stability criteria for these integrators; analytical expressions for the averages of the potential, kinetic, and total energy; and various limiting cases (e.g., timestep and damping constant approaching zero), for a system of coupled harmonic oscillators. These results are then compared to the corresponding exact solutions for the continuous problem, and their implications to molecular dynamics simulations are discussed. New concepts of practical and theoretical importance are introduced: scheme-dependent perturbative damping and perturbative frequency functions. Interesting differences in the asymptotic behavior among the algorithms become apparent through this analysis, and two symplectic algorithms, "LIM2" (implicit) and "BBK" (explicit), appear most promising on theoretical grounds. One result of theoretical interest is that for the Langevin/implicit-Euler algorithm ("LI") there exist timesteps for which there is neither numerical damping nor shift in frequency for a harmonic oscillator. However, this idea is not practical for more complex systems because these special timesteps can account only for one frequency of the system, and a large damping constant is required. We therefore devise a more practical, delay-function approach to remove the artificial damping and frequency perturbation from LI. Indeed, a simple MD implementation for a system of coupled harmonic oscillators demonstrates very satisfactory results in comparison with the velocity-Verlet scheme. We also define a
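The explicit "BBK" discretization mentioned above can be sketched for a single harmonic oscillator coupled to a Langevin bath. The parameter values and the equipartition check are illustrative, not the paper's analysis.

```python
import numpy as np

# BBK integrator for a 1-D harmonic Langevin oscillator; at equilibrium
# the mean-square position should approach kT/omega**2 (equipartition).
def bbk_mean_sq_position(steps, dt, omega=1.0, gamma=0.5, kT=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, v = 1.0, 0.0
    sigma = np.sqrt(2.0 * gamma * kT / dt)       # random-force amplitude
    R = sigma * rng.normal()
    acc, count = 0.0, 0
    for i in range(steps):
        # half-kick with force, friction, and noise, then drift
        v_half = v + 0.5 * dt * (-omega**2 * x - gamma * v + R)
        x = x + dt * v_half
        R = sigma * rng.normal()
        # closing half-kick; friction enters through the denominator
        v = (v_half + 0.5 * dt * (-omega**2 * x + R)) / (1.0 + 0.5 * gamma * dt)
        if i >= steps // 5:                      # discard burn-in
            acc += x * x
            count += 1
    return acc / count

msq = bbk_mean_sq_position(100_000, 0.05)
print(msq)   # should be close to kT/omega**2 = 1 at equilibrium
```

Repeating the run with a larger timestep exposes the scheme-dependent bias in the sampled averages, which is the kind of error the paper quantifies analytically.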
A case of error disclosure: a communication privacy management analysis.
Petronio, Sandra; Helft, Paul R; Child, Jeffrey T
2013-12-01
To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way choices are made by the clinicians in telling patients about the mistake have the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices of revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information
Unbiased bootstrap error estimation for linear discriminant analysis.
Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R
2014-12-01
Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
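The convex bootstrap estimator discussed above can be sketched with the classical fixed weight w = 0.632; the paper instead derives weights that depend on sample size and Bayes error. The nearest-mean classifier and the synthetic data below are illustrative stand-ins for LDA.

```python
import numpy as np

def fit_nearest_mean(X, y):
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def error_rate(clf, X, y):
    m0, m1 = clf
    pred = np.linalg.norm(X - m1, axis=1) < np.linalg.norm(X - m0, axis=1)
    return float(np.mean(pred != y.astype(bool)))

def convex_bootstrap_error(X, y, B=100, w=0.632, seed=0):
    """w * (basic bootstrap) + (1 - w) * (resubstitution)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    resub = error_rate(fit_nearest_mean(X, y), X, y)  # optimistic term
    boot = []
    for _ in range(B):
        idx = rng.integers(0, n, n)               # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag test points
        if np.unique(y[idx]).size == 2 and oob.size:
            boot.append(error_rate(fit_nearest_mean(X[idx], y[idx]),
                                   X[oob], y[oob]))
    return w * np.mean(boot) + (1.0 - w) * resub  # convex combination

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.repeat([0, 1], 50)
print(convex_bootstrap_error(X, y))
```

The bootstrap term is pessimistic and the resubstitution term optimistic; the paper's contribution is choosing `w` so the combination is exactly unbiased at finite sample sizes.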
Manufacturing in space: Fluid dynamics numerical analysis
NASA Technical Reports Server (NTRS)
Robertson, S. J.; Nicholson, L. A.; Spradley, L. W.
1981-01-01
Natural convection in a spherical container with cooling at the center was numerically simulated using the Lockheed-developed General Interpolants Method (GIM) numerical fluid dynamic computer program. The numerical analysis was simplified by assuming axisymmetric flow in the spherical container, with the symmetry axis being a sphere diagonal parallel to the gravity vector. This axisymmetric spherical geometry was intended as an idealization of the proposed Lal/Kroes growing experiments to be performed on board Spacelab. Results were obtained for a range of Rayleigh numbers from 25 to 10,000. For a temperature difference of 10 C from the cooling sting at the center to the container surface, and a gravitational loading of 10⁻⁶ g, a computed maximum fluid velocity of about 2.4 × 10⁻⁵ cm/sec was reached after about 250 sec. The computed velocities were found to be approximately proportional to the Rayleigh number over the range of Rayleigh numbers investigated.
Explicit a posteriori error estimates for eigenvalue analysis of heterogeneous elastic structures.
Walsh, Timothy Francis; Reese, Garth M.; Hetmaniuk, Ulrich L.
2005-07-01
An a posteriori error estimator is developed for the eigenvalue analysis of three-dimensional heterogeneous elastic structures. It constitutes an extension of a well-known explicit estimator to heterogeneous structures. We prove that our estimates are independent of the variations in material properties and independent of the polynomial degree of finite elements. Finally, we study numerically the effectivity of this estimator on several model problems.
Error analysis of flux limiter schemes at extrema
NASA Astrophysics Data System (ADS)
Kriel, A. J.
2017-01-01
Total variation diminishing (TVD) schemes have been an invaluable tool for the solution of hyperbolic conservation laws. One of the major shortcomings of commonly used TVD methods is the loss of accuracy near extrema. Although large amounts of anti-diffusion usually benefit the resolution of discontinuities, a balanced limiter such as Van Leer's performs better at extrema. Reliable criteria, however, for the performance of a limiter near extrema are not readily apparent. This work provides theoretical quantitative estimates for the local truncation errors of flux limiter schemes at extrema for a uniform grid. Moreover, the component of the error attributed to the flux limiter was obtained. This component is independent of the problem and grid spacing, and may be considered a property of the limiter that reflects the performance at extrema. Numerical test problems validate the results.
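The behavior the abstract analyzes can be seen directly in the limiter function itself. The sketch below uses Van Leer's limiter: at an extremum the ratio of consecutive slopes is non-positive, so the limiter returns zero and the scheme locally falls back to first order, which is the loss of accuracy at extrema.

```python
import numpy as np

# Van Leer flux limiter: phi(r) = (r + |r|) / (1 + |r|), where r is the
# ratio of consecutive solution slopes. phi(r) = 0 for all r <= 0, the
# situation at a local extremum.
def van_leer(r):
    r = np.asarray(r, dtype=float)
    return (r + np.abs(r)) / (1.0 + np.abs(r))

# phi(-1) = 0 (extremum), phi(1) = 1 (smooth region), phi(3) = 1.5
print(van_leer([-1.0, 0.0, 1.0, 3.0]))
```

Near smooth monotone data phi(1) = 1 recovers second-order accuracy, while the zero response for r ≤ 0 is shared by every TVD limiter, so the truncation-error comparison at extrema is what distinguishes limiters in practice.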
Analysis of possible systematic errors in the Oslo method
Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.
2011-03-15
In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.
Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis
ERIC Educational Resources Information Center
Sass, Daniel A.
2010-01-01
Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…
Listening Comprehension Strategies and Autonomy: Why Error Analysis?
ERIC Educational Resources Information Center
Henner-Stanchina, Carolyn
An experiment combining listening comprehension training and error analysis was conducted with students at the English Language Institute, Queens College, the City University of New York. The purpose of the study was to investigate how to take learners who were primarily dependent on perceptive skills for comprehension and widen their…
An analysis of pilot error-related aircraft accidents
NASA Technical Reports Server (NTRS)
Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.
1974-01-01
A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: Critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable. Cluster analysis, which was an exploratory research tool that will lead to increased understanding and improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses. Pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.
Star tracker error analysis: Roll-to-pitch nonorthogonality
NASA Technical Reports Server (NTRS)
Corson, R. W.
1979-01-01
An error analysis is described on an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases which implied that either one or both of the star tracker measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of error.
Gamma Ray Observatory (GRO) OBC attitude error analysis
NASA Technical Reports Server (NTRS)
Harman, R. R.
1990-01-01
This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of TRW error analysis and necessary ground simulations to understand the onboard attitude determination process are performed. In addition, a plan is generated for the in-flight calibration and validation of OBC computed attitudes. Pre-mission expected accuracies are summarized and sensitivity of onboard algorithms to sensor anomalies and filter tuning parameters are addressed.
Hybrid Experimental-Numerical Stress Analysis.
1983-04-01
components, biomechanics, and fracture mechanics. ELASTIC ANALYSIS OF STRUCTURAL COMPONENTS: The numerical techniques used in modern hybrid technique for... measured [24] relations of probe force versus probe area under applanation tonometry. ELASTIC-PLASTIC FRACTURE MECHANICS: Fracture parameters governing... models of the crack. Strain energy release rate and stress intensity factor in linear elastic fracture mechanics, which is a well-established analog
Numerical Analysis Of Flows With FIDAP
NASA Technical Reports Server (NTRS)
Sohn, Jeong L.
1990-01-01
Report presents an evaluation of accuracy of Fluid Dynamics Package (FIDAP) computer program. Finite-element code for analysis of flows of incompressible fluids and transfers of heat in multidimensional domains. Includes both available methods for treatment of spurious numerical coupling between simulated velocity and simulated pressure; namely, penalty method and mixed-interpolation method with variable choices of interpolation polynomials for velocity and pressure. Streamwise upwind (STU) method included as option for flows dominated by convection.
How psychotherapists handle treatment errors – an ethical analysis
2013-01-01
Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503
NASA Astrophysics Data System (ADS)
Prive, N.; Errico, R. M.; Tai, K.
2012-12-01
A global observing system simulation experiment (OSSE) has been developed at the NASA Global Modeling and Assimilation Office using the Global Earth Observing System (GEOS-5) forecast model and Gridpoint Statistical Interpolation data assimilation. A 13-month integration of the European Centre for Medium-Range Weather Forecasts operational forecast model is used as the Nature Run. Synthetic observations for conventional and radiance data types are interpolated from the Nature Run, with calibrated observation errors added to reproduce realistic statistics of analysis increment and observation innovation. It is found that correlated observation errors are necessary in order to replicate the statistics of analysis increment and observation innovation found with real data. The impact of these observation errors is explored in a series of OSSE experiments in which the magnitude of the applied observation error is varied from zero to double the calibrated values while the observation error covariances of the GSI are held fixed. Increased observation error has a strong effect on the variance of the analysis increment and observation innovation fields, but a much weaker impact on the root mean square (RMS) analysis error. For the 120 hour forecast, only slight degradation of forecast skill in terms of anomaly correlation and RMS forecast error is observed in the midlatitudes, and there is no appreciable impact of observation error on forecast skill in the tropics.
Error Estimation and h-Adaptivity for Optimal Finite Element Analysis
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John
1997-01-01
The objective of adaptive meshing and automatic error control in finite element analysis is to eliminate the need for the application engineer to re-mesh and re-run design simulations to verify numerical accuracy. The user should only need to enter the component geometry and a coarse finite element mesh. The software will then autonomously and adaptively refine this mesh where needed, reducing the error in the fields to a user prescribed value. The ideal end result of the simulation is a measurable quantity (e.g. scattered field, input impedance), calculated to a prescribed error, in less time and less machine memory than if the user applied typical uniform mesh refinement by hand. It would also allow for the simulation of larger objects since an optimal mesh is created.
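The refine-where-needed loop described above can be sketched in one dimension. The error indicator here (the jump in slope of a piecewise-linear interpolant) and the tolerance are illustrative stand-ins for a true finite element error estimator.

```python
import numpy as np

# Toy 1-D h-adaptivity loop: bisect cells where the error indicator
# exceeds a tolerance, stop when the indicator is everywhere below it.
def adapt_grid(f, tol, lo=0.0, hi=1.0, max_iter=30):
    x = np.linspace(lo, hi, 5)                    # coarse starting mesh
    for _ in range(max_iter):
        slopes = np.diff(f(x)) / np.diff(x)
        jump = np.abs(np.diff(slopes))            # indicator at interior nodes
        flagged = np.where(jump > tol)[0]
        if flagged.size == 0:
            break                                 # indicator below tolerance
        cells = np.unique(np.concatenate([flagged, flagged + 1]))
        mids = 0.5 * (x[cells] + x[cells + 1])    # bisect neighboring cells
        x = np.unique(np.concatenate([x, mids]))
    return x

grid = adapt_grid(lambda u: np.tanh(20.0 * (u - 0.5)), tol=1.0)
print(grid.size)   # points cluster around the steep layer at u = 0.5
```

The loop mirrors the estimate-mark-refine cycle of h-adaptive finite element codes: the mesh ends up fine only near the steep layer, so accuracy is reached with far fewer unknowns than uniform refinement.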
Doctors' duty to disclose error: a deontological or Kantian ethical analysis.
Bernstein, Mark; Brown, Barry
2004-05-01
Medical (surgical) error is being talked about more openly and, besides being the subject of retrospective reviews, is now the subject of prospective research. Disclosure of error has been a difficult issue because of fear of embarrassment for doctors in the eyes of their peers, and fear of punitive action by patients in the form of medicolegal action and/or complaints to doctors' governing bodies. This paper examines physicians' and surgeons' duty to disclose error from an ethical standpoint, specifically by applying the moral philosophical theory espoused by Immanuel Kant (i.e., deontology). The purpose of this discourse is to apply moral philosophical analysis to a delicate but important issue that all physicians and surgeons will have to confront, probably numerous times, in their professional careers.
NASA Astrophysics Data System (ADS)
Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin
2016-09-01
To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to offer the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass are addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution to the three-dimensional position can be attained. Third, with the assumption of Gaussian beams, the error propagation models for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of beam direction are given respectively. Finally, numerical simulations, taking into account the model uncertainty of beam divergence, spherical edge, and beam diffraction, are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of the three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each sensor.
Numerical flow analysis for axial flow turbine
NASA Astrophysics Data System (ADS)
Sato, T.; Aoki, S.
Some numerical flow analysis methods adopted in the gas turbine interactive design system, TDSYS, are described. In TDSYS, a streamline curvature program for axisymmetric flows and quasi-3-D and fully 3-D time-marching programs are used, respectively, for blade-to-blade flows and annular cascade flows. The streamline curvature method has some advantages in that it can include the effects of coolant mixing and choking flow conditions. Comparison of experimental results with calculated results shows that the overall accuracy is determined more by the empirical correlations used for loss and deviation than by the numerical scheme. The time-marching methods are the best choice for the analysis of turbine cascade flows because they can handle mixed subsonic-supersonic flows with automatic inclusion of shock waves in a single calculation. Some experimental results show that a time-marching method can predict the airfoil surface Mach number distribution more accurately than a finite difference method. One weak point of the time-marching methods is long computing time; they usually require several times as much CPU time as other methods. But reductions in computer costs and improvements in numerical methods have made the quasi-3-D and fully 3-D time-marching methods usable as design tools, and they are now used in TDSYS.
Numerical Analysis of Rocket Exhaust Cratering
NASA Technical Reports Server (NTRS)
2008-01-01
Supersonic jet exhaust impinging onto a flat surface is a fundamental flow encountered in space or with a missile launch vehicle system. The flow is important because it can endanger launch operations. The purpose of this study is to evaluate the effect of a landing rocket's exhaust on soils. From numerical simulations and analysis, we developed characteristic expressions and curves, which we can use, along with rocket nozzle performance, to predict cratering effects during a soft-soil landing. We conducted a series of multiphase flow simulations with two phases: exhaust gas and sand particles. The main objective of the simulation was to obtain numerical results as close to the experimental results as possible. After several simulation test runs, the results showed that the packing limit and the angle of internal friction are the two critical and dominant factors in the simulations.
Treatment of numerical overflow in simulating error performance of free-space optical communication
NASA Astrophysics Data System (ADS)
Li, Fei; Hou, Zaihong; Wu, Yi
2012-11-01
The gamma-gamma distribution model is widely used in numerical simulations of free-space optical communication systems. The simulations are often interrupted by numerical overflow exceptions caused by excessively large parameter values. Building on earlier research, two modified models are presented using mathematical calculation software and a computer program. By means of substitution and recurrence, factors of the original model are transformed into corresponding logarithmic forms, and potential overflow in the calculation is eliminated. Numerical verification demonstrates the practicability and accuracy of the modified models, and their advantages and disadvantages are listed. The proper model should be selected according to practical conditions. The two models are also applicable to other numerical simulations based on the gamma-gamma distribution, such as outage probability and mean fade time of free-space optical communication.
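The logarithmic transformation can be sketched on one piece of the gamma-gamma model: its PDF prefactor 2(αβ)^((α+β)/2)/(Γ(α)Γ(β)) overflows a double for large α, β if evaluated directly, but is routine in log space via `math.lgamma`; a log-sum-exp helper then combines series terms without leaving log space. This is a generic sketch of the technique, not the paper's exact modified models.

```python
import math

def log_gg_prefactor(alpha, beta):
    """log of the gamma-gamma PDF prefactor 2*(a*b)^((a+b)/2) / (Gamma(a)*Gamma(b)).

    Direct evaluation overflows for large alpha, beta (math.gamma(200) alone
    raises OverflowError); with math.lgamma every intermediate stays finite.
    """
    return (math.log(2.0)
            + 0.5 * (alpha + beta) * math.log(alpha * beta)
            - math.lgamma(alpha) - math.lgamma(beta))

def logsumexp(logs):
    """Combine terms given by their logarithms without leaving log space."""
    m = max(logs)
    return m + math.log(sum(math.exp(x - m) for x in logs))

# Large turbulence parameters: the prefactor itself is astronomically small,
# yet its logarithm is an ordinary, finite number.
lp = log_gg_prefactor(200.0, 150.0)
```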
Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR
Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao
2016-01-01
The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and the focusing performance is then analyzed theoretically and numerically using the Systems Tool Kit (STK) software. Accurate GEO SAR slant range histories can be calculated from the perturbed orbit positions in STK. The perturbed slant range errors are mainly first and second derivatives, leading to image drifts and defocusing. Simulations of point target imaging are performed to validate the aforementioned analysis. In a GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time differ with the geometry configuration, so the influences vary at different orbit positions: at the equator, first-order phase errors should be mainly considered; at the perigee and apogee, second-order phase errors should be mainly considered; at other positions, first-order and second-order errors exist simultaneously. PMID:27598168
A numerical model and spreadsheet interface for pumping test analysis.
Johnson, G S; Cosgrove, D M; Frederick, D B
2001-01-01
Curve-matching techniques have been the standard method of aquifer test analysis for several decades. A variety of techniques provide the capability of evaluating test data from confined, unconfined, leaky aquitard, and other conditions. Each technique, however, is accompanied by a set of assumptions, and evaluation of a combination of conditions can be complicated or impossible due to intractable mathematics or nonuniqueness of the solution. Numerical modeling of pumping tests provides two major advantages: (1) the user can choose which properties to calibrate and what assumptions to make; and (2) in the calibration process the user gains insight into the conceptual model of the flow system and the uncertainties in the analysis. Routine numerical modeling of pumping tests is now practical due to computer hardware and software advances of the last decade. The RADFLOW model and spreadsheet interface presented in this paper is an easy-to-use numerical model for estimation of aquifer properties from pumping test data. Layered conceptual models and their properties are evaluated in a trial-and-error estimation procedure. The RADFLOW model can treat most combinations of confined, unconfined, leaky aquitard, partial penetration, and borehole storage conditions. RADFLOW is especially useful in stratified aquifer systems with no identifiable lateral boundaries. It has been verified against several analytical solutions and has been applied in the Snake River Plain Aquifer to develop and test conceptual models and provide estimates of aquifer properties. Because the model assumes axially symmetrical flow, it is limited to representing multiple aquifer layers that are laterally continuous.
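One analytical solution such a model is typically verified against is the Theis solution for a confined aquifer. Its well function W(u) has a convergent series that is easy to evaluate; a minimal sketch follows (SI units assumed; the parameter values in the drawdown helper would come from the test being analyzed).

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=60):
    """Theis well function W(u) = -gamma - ln(u) - sum_k (-u)^k / (k * k!).

    The series converges quickly for the small u typical of pumping tests.
    """
    s = -EULER_GAMMA - math.log(u)
    p = 1.0                     # running term p = (-u)^k / k!
    for k in range(1, terms + 1):
        p *= -u / k
        s -= p / k
    return s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q/(4*pi*T) * W(r^2*S / (4*T*t)), consistent SI units."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Illustrative values: pumping rate, transmissivity, storativity, radius, time.
s_drawdown = theis_drawdown(Q=0.01, T=1e-3, S=1e-4, r=10.0, t=3600.0)
```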
Numerical analysis for finite Fresnel transform
NASA Astrophysics Data System (ADS)
Aoyagi, Tomohiro; Ohtsubo, Kouichi; Aoyagi, Nobuo
2016-10-01
The Fresnel transform is a bounded, linear, additive, and unitary operator in Hilbert space with many applications. In this study, a sampling theorem for a Fresnel transform pair in polar coordinate systems is derived. According to the sampling theorem, any function in the complex plane can be expressed by taking the products of the values of a function and sampling function systems. Sampling function systems are constituted by Bessel functions and their zeros. Using computer simulations, we apply the sampling theorem to a function-approximation problem to demonstrate its validity. Our approximating function is a circularly symmetric function defined in the complex plane. Counting the number of sampling points requires the zeros of Bessel functions, which are calculated by an approximation formula and numerical tables; our sampling points are therefore nonuniform. The number of sampling points, the normalized mean square error between the original function and its approximation, and the phases are calculated, and the relationships among them are examined.
Error analysis for NMR polymer microstructure measurement without calibration standards.
Qiu, XiaoHua; Zhou, Zhe; Gobbi, Gian; Redwine, Oscar D
2009-10-15
We report an error analysis method for primary analytical methods in the absence of calibration standards. Quantitative (13)C NMR analysis of ethylene/1-octene (E/O) copolymers is given as an example. Because the method is based on a self-calibration scheme established by counting, it is a measure of accuracy rather than precision. We demonstrate that it is self-consistent and neither underestimates nor excessively overestimates the experimental errors. We also show that the method identified previously unknown systematic biases in an NMR instrument. The method can eliminate unnecessary data averaging, saving valuable NMR resources. The accuracy estimate proposed is not unique to (13)C NMR spectroscopy of E/O copolymers but should be applicable to all other measurement systems where the accuracy of a subset of the measured responses can be established.
ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS
NASA Technical Reports Server (NTRS)
Putney, B.
1994-01-01
The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and range rate.
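The decomposition of the final error into a measurement-noise component and an unadjusted-parameter ("consider") component can be shown with a one-parameter least-squares toy problem: estimating a slope while a bias term is held fixed at an uncertain assumed value. All numbers below are invented, and the model is far simpler than ORAN's orbital estimator.

```python
import math

# Measurements y_i = a*t_i + b + noise. Only the slope a is adjusted; the
# bias b is an unadjusted "consider" parameter with assumed uncertainty.
t = [1.0, 2.0, 3.0, 4.0, 5.0]
sigma_noise = 0.1      # 1-sigma measurement noise
sigma_b = 0.05         # assumed 1-sigma error of the unadjusted bias b

stt = sum(ti * ti for ti in t)   # sum t_i^2
st = sum(t)                      # sum t_i

# Least-squares estimator a_hat = sum(t_i*y_i) / sum(t_i^2) gives:
var_noise = sigma_noise ** 2 / stt       # component due to measurement noise
sens_b = st / stt                        # sensitivity d(a_hat)/d(b)
var_consider = (sens_b * sigma_b) ** 2   # component due to the error in b
sigma_total = math.sqrt(var_noise + var_consider)
```

The first term is what an orbit determination program reports; the second is the ORAN-style consider contribution, obtained from the sensitivity of the estimator to the unadjusted parameter.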
Efficient Reduction and Analysis of Model Predictive Error
NASA Astrophysics Data System (ADS)
Doherty, J.
2006-12-01
dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of that dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analysis is demonstrated using a number of real-world and synthetic examples.
Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis
NASA Technical Reports Server (NTRS)
Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl
2009-01-01
The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.
Eigenvector method for umbrella sampling enables error analysis.
Thiede, Erik H; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R
2016-08-28
Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence.
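The casting-as-an-eigenproblem step can be caricatured as finding the stationary left eigenvector of a row-stochastic matrix built from inter-window data, here by power iteration. The 3x3 matrix below is invented and this is not the authors' estimator, only an illustration of why window weights emerge from an eigenproblem.

```python
def stationary_weights(F, iters=2000):
    """Left eigenvector of a row-stochastic matrix with eigenvalue 1,
    found by power iteration and normalized to sum to 1."""
    n = len(F)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(w[i] * F[i][j] for i in range(n)) for j in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical 3-window transition/overlap matrix (rows sum to 1), standing
# in for the matrix that umbrella-sampling window data would define.
F = [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.3, 0.7]]
w = stationary_weights(F)
```

For this matrix the exact stationary weights are (3/11, 6/11, 2/11); the error of the combined estimate then depends on how perturbations of F propagate into this eigenvector, which is the quantity the windowwise error estimator tracks.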
Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.
Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing
2013-09-20
Considering that the wavefront error of KH(2)PO(4) (KDP) crystal is difficult to control through the face fly cutting process because of surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. A three-axis servo technique is then utilized to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as error in the straightness of guideways, spindle rotation error, and error caused by ambient environmental variations, three other errors, the in situ measurement error, the position deviation error, and the servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with a size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error does not worsen when the frequency of the cutting tool trajectory is controlled by use of a low-pass filter.
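Controlling the frequency content of a tool trajectory with a low-pass filter can be illustrated with a plain centered moving average. The paper does not specify its filter, so this choice and the synthetic trajectory (a slow profile plus a high-frequency ripple) are assumptions made for the sketch.

```python
import math

def moving_average(samples, window):
    """Simple centered moving average acting as a crude low-pass filter.

    Edges use a shortened window so the output has the same length as the
    input; a production filter would be designed for a specific cutoff.
    """
    half = window // 2
    n = len(samples)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# Synthetic tool trajectory: slow commanded shape plus high-frequency ripple.
traj = [math.sin(0.05 * i) + 0.3 * math.sin(1.5 * i) for i in range(200)]
smooth = moving_average(traj, 9)
```

The smoothed trajectory retains the slow component while the ripple, whose period is shorter than the averaging window, is strongly attenuated.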
Numerical and experimental analysis of spallation phenomena
NASA Astrophysics Data System (ADS)
Martin, Alexandre; Bailey, Sean C. C.; Panerai, Francesco; Davuluri, Raghava S. C.; Zhang, Huaibao; Vazsonyi, Alexander R.; Lippay, Zachary S.; Mansour, Nagi N.; Inman, Jennifer A.; Bathel, Brett F.; Splinter, Scott C.; Danehy, Paul M.
2016-12-01
The spallation phenomenon was studied through numerical analysis using a coupled Lagrangian particle tracking code and a hypersonic aerothermodynamics computational fluid dynamics solver. The results show that carbon emission from spalled particles results in a significant modification of the gas composition of the post-shock layer. Results from a test campaign at the NASA Langley HYMETS facility are presented. Using an automated image processing of short exposure images, two-dimensional velocity vectors of the spalled particles were calculated. In a 30-s test at 100 W/cm2 of cold-wall heat flux, more than 722 particles were detected, with an average velocity of 110 m/s.
Numerical analysis method for linear induction machines.
NASA Technical Reports Server (NTRS)
Elliott, D. G.
1972-01-01
A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
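Combining the induced-voltage coefficients with the mesh resistances and solving the resulting simultaneous equations is, at heart, a dense linear solve. Below is a minimal Gaussian-elimination sketch on an invented 3-point mesh; in the real method the coefficient matrix would come from the induced-voltage integrals, not the toy numbers used here.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy 3-point "mesh": Z combines induced-voltage coefficients with mesh
# resistances, v holds the applied voltages (illustrative numbers only).
Z = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
v = [1.0, 0.0, 1.0]
currents = solve(Z, v)
```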
NASA Astrophysics Data System (ADS)
Kochukhov, O.
2017-01-01
Context. Doppler imaging (DI) is a powerful spectroscopic inversion technique that enables conversion of a line profile time series into a two-dimensional map of the stellar surface inhomogeneities. DI has been repeatedly applied to reconstruct chemical spot topologies of magnetic Ap/Bp stars with the goal of understanding variability of these objects and gaining an insight into the physical processes responsible for spot formation. Aims: In this paper we investigate the accuracy of chemical abundance DI and assess the impact of several different systematic errors on the reconstructed spot maps. Methods: We have simulated spectroscopic observational data for two different Fe spot distributions with a surface abundance contrast of 1.5 dex in the presence of a moderately strong dipolar magnetic field. We then reconstructed chemical maps using different sets of spectral lines and making different assumptions about line formation in the inversion calculations. Results: Our numerical experiments demonstrate that a modern DI code successfully recovers the input chemical spot distributions comprised of multiple circular spots at different latitudes or an element overabundance belt at the magnetic equator. For the optimal reconstruction based on half a dozen spectral intervals, the average reconstruction errors do not exceed 0.10 dex. The errors increase to about 0.15 dex when abundance distributions are recovered from a few and/or blended spectral lines. Ignoring a 2.5 kG dipolar magnetic field in chemical abundance DI leads to an average relative error of 0.2 dex and maximum errors of 0.3 dex. Similar errors are encountered if a DI inversion is carried out neglecting a non-uniform continuum brightness distribution and variation of the local atmospheric structure. None of the considered systematic effects lead to major spurious features in the recovered abundance maps. Conclusions: This series of numerical DI simulations proves that inversions based on one or two spectral
Computing the surveillance error grid analysis: procedure and examples.
Kovatchev, Boris P; Wakeman, Christian A; Breton, Marc D; Kost, Gerald J; Louie, Richard F; Tran, Nam K; Klonoff, David C
2014-07-01
The surveillance error grid (SEG) analysis is a tool for analysis and visualization of blood glucose monitoring (BGM) errors, based on the opinions of 206 diabetes clinicians who rated 4 distinct treatment scenarios. Resulting from this large-scale inquiry is a matrix of 337,561 risk ratings, 1 for each pair of (reference, BGM) readings ranging from 20 to 580 mg/dl. The computation of the SEG is therefore complex and in need of automation. The SEG software introduced in this article automates the task of assigning a degree of risk to each data point for a set of measured and reference blood glucose values so that the data can be distributed into 8 risk zones. The software's 2 main purposes are to (1) distribute a set of BGM data into 8 risk zones ranging from none to extreme and (2) present the data in a color-coded display to promote visualization. Besides aggregating the data into 8 zones corresponding to levels of risk, the SEG computes the number and percentage of data pairs in each zone and the number/percentage of data pairs above/below the diagonal line in each zone, which are associated with BGM errors creating risks for hypo- or hyperglycemia, respectively. To illustrate the action of the SEG software, we first present computer-simulated data stratified along error levels defined by ISO 15197:2013, which allows the SEG to be linked to this established standard. Further illustration of the SEG procedure is done with a series of previously published data, which reflect the performance of BGM devices and test strips under various environmental conditions. We conclude that the SEG software is a useful addition to the SEG analysis presented in this journal, developed to assess the magnitude of clinical risk from analytically inaccurate data in a variety of high-impact situations such as intensive care and disaster settings.
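The zone-tallying step can be sketched as follows. Note a loud caveat: the real SEG assigns risk from a matrix of clinician ratings, not from a formula, so the relative-error thresholds below are placeholder assumptions used only to show the bookkeeping (per-zone counts and the above/below-diagonal split).

```python
def risk_zone(reference, measured):
    """Assign a (reference, measured) mg/dl pair to one of 8 risk bands.

    Placeholder thresholds on relative error -- the real SEG uses a matrix
    of clinician risk ratings; these bands are illustrative only.
    """
    rel = abs(measured - reference) / reference
    bounds = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50]
    for zone, b in enumerate(bounds):
        if rel < b:
            return zone
    return len(bounds)   # zone 7: extreme

def zone_counts(pairs):
    """Tally pairs per zone, plus the above-diagonal count per zone
    (measured > reference, i.e. errors risking overtreatment)."""
    counts = [0] * 8
    above = [0] * 8
    for ref, meas in pairs:
        z = risk_zone(ref, meas)
        counts[z] += 1
        if meas > ref:
            above[z] += 1
    return counts, above

pairs = [(100, 102), (100, 130), (200, 150), (80, 81)]
counts, above = zone_counts(pairs)
```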
Low noise propeller design using numerical analysis
NASA Astrophysics Data System (ADS)
Humpert, Bryce
The purpose of this study is to explore methods for reducing aircraft propeller noise with minimal losses in performance. Using numerical analysis, a standard two-blade propeller configuration was taken from experiments conducted by Patrick, Finn, and Stich, and implemented in the numerical code XROTOR. The blade design modifications investigated to achieve the proposed goals include increasing the number of blades and adjusting the chord length, beta distribution, blade radius, airfoil shape, and operating RPM. In order to determine the optimal blade design, a baseline case is first developed and the parameters listed earlier are then varied to create a new propeller design that reduces the sound pressure level (SPL) while maintaining performance within a predetermined range of the original specifications. From the analysis, the most significant improvements in lowering the acoustic signature are dominated by operating RPM and blade radius. Three-, four-, and five-blade configurations were developed that reduced the SPL generated by the propeller during cruise flight conditions. The optimum configuration, producing the greatest SPL reduction, was the five-blade configuration. The resulting sound pressure level was reduced from the original 77 dB at 1000 ft above ground level (AGL) to 54 dB at 1000 ft AGL while remaining within 1.4% of the original thrust and efficiency.
Numerical Analysis of Convection/Transpiration Cooling
NASA Technical Reports Server (NTRS)
Glass, David E.; Dilley, Arthur D.; Kelly, H. Neale
1999-01-01
An innovative concept utilizing the natural porosity of refractory-composite materials and hydrogen coolant to provide CONvective and TRANspiration (CONTRAN) cooling and oxidation protection has been numerically studied for surfaces exposed to a high heat flux, high temperature environment such as hypersonic vehicle engine combustor walls. A boundary layer code and a porous media finite difference code were utilized to analyze the effect of convection and transpiration cooling on surface heat flux and temperature. The boundary layer code determined that transpiration flow is able to provide blocking of the surface heat flux only if it is above a minimum level, due to heat addition from combustion of the hydrogen transpirant. The porous media analysis indicated that cooling of the surface is attained with coolant flow rates in the same range as those required for blocking, indicating that a coupled analysis would be beneficial.
Nozzle Numerical Analysis Of The Scimitar Engine
NASA Astrophysics Data System (ADS)
Battista, F.; Marini, M.; Cutrone, L.
2011-05-01
This work describes part of the activities on the LAPCAT-II A2 vehicle, in which, starting from the available conceptual vehicle design and the related pre-cooled turbo-ramjet engine called SCIMITAR, well-thought assumptions made for performance figures of different components during the iteration process within LAPCAT-I will be assessed in more detail. This paper presents a numerical analysis aimed at the design optimization of the nozzle contour of the LAPCAT A2 SCIMITAR engine designed by Reaction Engines Ltd. (REL) (see Figure 1). In particular, the nozzle shape optimization process is presented for cruise conditions. All the computations have been carried out using the CIRA C3NS code under non-equilibrium conditions. The effect of considering detailed or reduced chemical kinetic schemes has been analyzed, with a particular focus on the production of pollutants. An analysis of engine performance parameters, such as thrust and combustion efficiency, has been carried out.
Dynamic analysis of high speed gears by using loaded static transmission error
NASA Astrophysics Data System (ADS)
Özgüven, H. Nevzat; Houser, D. R.
1988-08-01
A single degree of freedom non-linear model is used for the dynamic analysis of a gear pair. Two methods are suggested and a computer program is developed for calculating the dynamic mesh and tooth forces, dynamic factors based on stresses, and dynamic transmission error from measured or calculated loaded static transmission errors. The analysis includes the effects of variable mesh stiffness and mesh damping, gear errors (pitch, profile and runout errors), profile modifications and backlash. The accuracy of the method, which includes the time variation of both mesh stiffness and damping is demonstrated with numerical examples. In the second method, which is an approximate one, the time average of the mesh stiffness is used. However, the formulation used in the approximate analysis allows for the inclusion of the excitation effect of the variable mesh stiffness. It is concluded from the comparison of the results of the two methods that the displacement excitation resulting from a variable mesh stiffness is more important than the change in system natural frequency resulting from the mesh stiffness variation. Although the theory presented is general and applicable to spur, helical and spiral bevel gears, the computer program prepared is for only spur gears.
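The single-degree-of-freedom model with a time-varying mesh stiffness can be sketched as below. Loud caveats: the parameter values are invented, the integrator is a simple semi-implicit Euler rather than the paper's method, and backlash, damping variation, and gear errors are omitted; only the displacement excitation by a periodically varying mesh stiffness is represented.

```python
import math

# Single-DOF gear pair: m*x'' + c*x' + k(t)*x = k(t)*e(t), where k(t) varies
# at the mesh frequency and e(t) is the loaded static transmission error.
m, c, k0, eps = 1.0, 20.0, 1.0e6, 0.2
wm = 800.0                                   # mesh frequency, rad/s
e = lambda t: 1e-5 * math.sin(wm * t)        # static transmission error, m
k = lambda t: k0 * (1.0 + eps * math.cos(wm * t))

def simulate(k_of_t, dt=1e-5, steps=20000):
    """Semi-implicit Euler integration; returns peak |x| over the last half
    of the run (after most of the transient has decayed)."""
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        a = (k_of_t(t) * (e(t) - x) - c * v) / m
        v += a * dt
        x += v * dt
        if i > steps // 2:
            peak = max(peak, abs(x))
    return peak

peak_var = simulate(k)                 # time-varying mesh stiffness
peak_avg = simulate(lambda t: k0)      # time-averaged stiffness (method 2)
```

Comparing `peak_var` with `peak_avg` mirrors the paper's comparison between the exact and the time-averaged-stiffness formulations.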
Error analysis for matrix elastic-net regularization algorithms.
Li, Hong; Chen, Na; Li, Luoqing
2012-05-01
Elastic-net regularization is a successful approach in statistical modeling. It can avoid large variations which occur in estimating complex models. In this paper, elastic-net regularization is extended to a more general setting, the matrix recovery (matrix completion) setting. Based on a combination of the nuclear-norm minimization and the Frobenius-norm minimization, we consider the matrix elastic-net (MEN) regularization algorithm, which is an analog to the elastic-net regularization scheme from compressive sensing. Some properties of the estimator are characterized by the singular value shrinkage operator. We estimate the error bounds of the MEN regularization algorithm in the framework of statistical learning theory. We compute the learning rate by estimates of the Hilbert-Schmidt operators. In addition, an adaptive scheme for selecting the regularization parameter is presented. Numerical experiments demonstrate the superiority of the MEN regularization algorithm.
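A minimal numpy sketch of the singular value shrinkage operator mentioned in the abstract, assuming a penalty of the form τ‖X‖* + (λ/2)‖X‖_F²; this is an illustration of the operator, not the paper's exact MEN estimator:

```python
import numpy as np

def men_shrinkage(Y, tau, lam):
    """Singular value shrinkage operator for a matrix elastic-net penalty.

    Proximal operator of tau*||X||_* + (lam/2)*||X||_F^2: soft-threshold
    the singular values of Y by tau (nuclear-norm part), then scale by
    1/(1 + lam) (Frobenius-norm part).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0) / (1.0 + lam)
    return U @ np.diag(s_shrunk) @ Vt

# A rank-3 matrix plus small noise: shrinkage zeroes the noise-level
# singular values while keeping the dominant low-rank structure.
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
Y = L + 0.01 * rng.standard_normal((20, 15))
X = men_shrinkage(Y, tau=0.5, lam=0.1)
print(np.linalg.matrix_rank(Y), np.linalg.matrix_rank(X))
```

The nuclear-norm thresholding promotes low rank, while the Frobenius scaling stabilizes the estimate, mirroring the two terms of the elastic-net combination.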
Numerical analysis of ellipsometric critical adsorption data
NASA Astrophysics Data System (ADS)
Smith, Dan S. P.; Law, Bruce M.; Smock, Martin; Landau, David P.
1997-01-01
A recent study [Dan S. P. Smith and Bruce M. Law, Phys. Rev. E 54, 2727 (1996)] presented measurements of the ellipsometric coefficient at the Brewster angle ρ-bar on the liquid-vapor surface of four different binary liquid mixtures in the vicinity of their liquid-liquid critical point and analyzed the data analytically for large reduced temperatures t. In the current report we analyze these (ρ-bar, t) data numerically over the entire range of t. Theoretical universal surface scaling functions P±(x) from a Monte Carlo (MC) simulation [M. Smock, H. W. Diehl, and D. P. Landau, Ber. Bunsenges. Phys. Chem. 98, 486 (1994)] and a renormalization-group (RG) calculation [H. W. Diehl and M. Smock, Phys. Rev. B 47, 5841 (1993); 48, 6470(E) (1993)] are used in the numerical integration of Maxwell's equations to provide theoretical (ρ-bar, t) curves that can be compared directly with the experimental data. While both the MC and RG curves are in qualitative agreement with the experimental data, the agreement is generally found to be better for the MC curves. However, systematic discrepancies are found in the quantitative comparison between the MC and experimental (ρ-bar, t) curves, and it is determined that these discrepancies are too large to be due to experimental error. Finally, it is demonstrated that ρ-bar can be rescaled to produce an approximately universal ellipsometric curve as a function of the single variable ξ±/λ, where ξ is the correlation length and λ is the wavelength of light. The position of the maximum of this curve in the one-phase region, (ξ+/λ)peak, is approximately a universal number. It is determined that (ξ+/λ)peak depends primarily on the ratio c+/P∞,+, where P+(x) ≅ c+ x^(-β/ν) for x << 1 and P+(x) ≅ P∞,+ e^(-x) for x >> 1. This enables the experimental estimate c+/P∞,+ = 0.90 ± 0.24, which is significantly larger than the MC and RG values of 0.577 and 0.442, respectively.
Hill, M.C.
1989-01-01
Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author
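A simplified numerical illustration of the idea: approximate confidence intervals for the parameters of a nonlinear model from the linearized covariance of a least-squares fit (individual intervals, not the simultaneous nonlinear intervals of the paper); the exponential model, data, and noise level are invented:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t

def model(x, a, b):
    # Hypothetical nonlinear model, e.g. an exponentially decaying head.
    return a * np.exp(-b * x)

rng = np.random.default_rng(1)
x = np.linspace(0.1, 5.0, 30)
y = model(x, 2.0, 0.7) + 0.02 * rng.standard_normal(x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
se = np.sqrt(np.diag(pcov))                 # linearized standard errors
tval = t.ppf(0.975, df=x.size - popt.size)  # 95% two-sided
ci = [(p - tval * s, p + tval * s) for p, s in zip(popt, se)]
print(popt, ci)
```

Monte Carlo runs, as in the abstract, can then be used to check how well such linearized intervals cover the true parameters when the model is strongly nonlinear.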
Zhu, Fangqiang; Hummer, Gerhard
2012-02-05
The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this article, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimally allocating the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here, we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations.
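The conventional fixed-point iteration that the authors replace with likelihood maximization is easy to sketch on synthetic data. Everything below (a quadratic potential U(x) = 2x², three harmonic windows, β = 1) is an invented illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, kspring, nsamp = 1.0, 10.0, 5000
centers = np.array([-0.5, 0.0, 0.5])            # umbrella window centers

# Synthetic umbrella data: underlying potential U(x) = 2*x**2, so each
# window samples the Gaussian exp(-beta*(U(x) + 0.5*k*(x - c)**2)).
samples = []
for c in centers:
    mean = kspring * c / (4.0 + kspring)
    var = 1.0 / (beta * (4.0 + kspring))
    samples.append(mean + np.sqrt(var) * rng.standard_normal(nsamp))

edges = np.linspace(-1.5, 1.5, 61)
mids = 0.5 * (edges[:-1] + edges[1:])
counts = np.array([np.histogram(s, edges)[0] for s in samples])
bias = 0.5 * kspring * (mids[None, :] - centers[:, None]) ** 2

# Fixed-point iteration of the coupled WHAM equations for the window
# free energies f_i and the unbiased distribution prob(x).
f = np.zeros(len(centers))
for _ in range(2000):
    denom = (nsamp * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
    prob = counts.sum(axis=0) / denom
    f_new = -np.log((prob[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
    f_new -= f_new[0]
    if np.max(np.abs(f_new - f)) < 1e-10:
        break
    f = f_new

free_energy = -np.log(np.where(prob > 0, prob, np.nan)) / beta
free_energy -= np.nanmin(free_energy)           # should recover ~2*x**2
```

Minimizing the equivalent negative log-likelihood over the f_i with a quasi-Newton optimizer gives the same fixed point in far fewer iterations, which is the paper's first point.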
Error threshold in optimal coding, numerical criteria, and classes of universalities for complexity
NASA Astrophysics Data System (ADS)
Saakian, David B.
2005-01-01
The free energy of the random energy model at the transition point between the ferromagnetic and spin glass phases is calculated. At this point, equivalent to the decoding error threshold in optimal codes, the free energy has finite size corrections proportional to the square root of the number of degrees. The response of the magnetization to an external ferromagnetic field is maximal at values of magnetization equal to one-half. We give several criteria of complexity and define different universality classes. According to our classification, at the lowest class of complexity are random graphs, Markov models, and hidden Markov models. At the next level is the Sherrington-Kirkpatrick spin glass, connected to neuron-network models. On a higher level are critical theories, the spin glass phase of the random energy model, percolation, and self-organized criticality. The top level class involves highly optimized tolerance design, error thresholds in optimal coding, language, and, maybe, financial markets. Living systems are also related to the last class. The concept of antiresonance is suggested for complex systems.
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
Error analysis of satellite attitude determination using a vision-based approach
NASA Astrophysics Data System (ADS)
Carozza, Ludovico; Bevilacqua, Alessandro
2013-09-01
Improvements in communication and processing technologies have opened the doors to exploit on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through on-purpose simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for related application domains which require an accurate estimation of three-dimensional orientation parameters (i.e., robotics, airborne stabilization).
Numerical Analysis of a Finite Element/Volume Penalty Method
NASA Astrophysics Data System (ADS)
Maury, Bertrand
The penalty method makes it possible to incorporate a large class of constraints in general purpose Finite Element solvers like freeFEM++. We present here some contributions to the numerical analysis of this method. We propose an abstract framework for this approach, together with some general error estimates based on the discretization parameter ɛ and the space discretization parameter h. As this work is motivated by the possibility to handle constraints like rigid motion for fluid-particle flows, we shall pay special attention to a model problem of this kind, where the constraint is prescribed over a subdomain. We show how the abstract estimate can be applied to this situation, in the case where a non-body-fitted mesh is used. In addition, we describe how this method provides an approximation of the Lagrange multiplier associated to the constraint.
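A minimal sketch of the penalty idea on a model problem of this kind (not the authors' finite element formulation): a 1D finite difference Poisson problem where the constraint u = 0 is prescribed over a subdomain by adding a 1/ε term to the operator. Mesh, subdomain, and right-hand side are invented for illustration.

```python
import numpy as np

def solve_penalized(n=200, eps=1e-6):
    """Solve -u'' = 1 on (0,1), u(0) = u(1) = 0, with u penalized toward
    zero on the subdomain (0.4, 0.6) by adding (1/eps) to the diagonal
    entries of the constrained nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    omega = (x > 0.4) & (x < 0.6)
    A[omega, omega] += 1.0 / eps        # penalty on constrained nodes
    u = np.linalg.solve(A, np.ones(n))
    return x, u, omega

x, u, omega = solve_penalized(eps=1e-6)
print(np.max(np.abs(u[omega])))         # shrinks as eps decreases
```

In line with the abstract's error estimates, the constraint violation decreases with ε, while the overall error also depends on h, since the mesh here is not fitted to the subdomain boundary.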
Lon N. Haney; David I. Gertman
2003-04-01
Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.
On the use of stability regions in the numerical analysis of initial value problems
NASA Astrophysics Data System (ADS)
Lenferink, H. W. J.; Spijker, M. N.
1991-07-01
This paper deals with the stability analysis of one-step methods in the numerical solution of initial (-boundary) value problems for linear, ordinary, and partial differential equations. Restrictions on the stepsize are derived which guarantee the rate of error growth in these methods to be of moderate size. These restrictions are related to the stability region of the method and to numerical ranges of matrices stemming from the differential equation under consideration. The errors in the one-step methods are measured in arbitrary norms (not necessarily generated by an inner product). The theory is illustrated in the numerical solution of the heat equation and some other differential equations, where the error growth is measured in the maximum norm.
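An illustrative sketch in the spirit of the paper (classical RK4 and the heat equation example are our choices, not necessarily the authors'): the stability region of a one-step method follows from its stability function R(z), and intersecting it with the spectrum of the semi-discrete operator yields a stepsize restriction.

```python
import numpy as np

def rk4_stability(z):
    """Stability function of classical RK4 applied to y' = lam*y, z = h*lam."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# Real stability interval: the most negative z with |R(z)| <= 1.
zs = np.linspace(-3.5, 0.0, 1000)
stable = np.abs(rk4_stability(zs)) <= 1.0
z_left = zs[stable][0]                  # about -2.785

# The semi-discrete 1D heat equation u_t = u_xx on a grid of spacing dx
# has eigenvalues in [-4/dx**2, 0]; stability requires h*4/dx**2 <= |z_left|.
dx = 0.01
h_max = abs(z_left) * dx**2 / 4
print(z_left, h_max)
```

For normal operators measured in the 2-norm this eigenvalue test is sharp; the paper's point is what replaces it when error growth is measured in other norms, such as the maximum norm.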
Subramanyam, Busetty; Das, Ashutosh
2014-01-01
In adsorption studies, describing the sorption process and identifying the best-fitting isotherm model are key steps in testing the theoretical hypothesis. Numerous statistical analyses have therefore been used to assess the agreement between experimental equilibrium adsorption values and predicted equilibrium values. In the present study, several statistical measures were used to evaluate adsorption isotherm model fitness: the Pearson correlation, the coefficient of determination, and the Chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for linearised and non-linearised models. The adsorption of phenol onto natural soil (locally named Kalathur soil) was carried out in batch mode at 30 ± 2 °C. To obtain a holistic view of the analysis, the isotherm parameters estimated from the linear and non-linear forms of the models were compared. The results revealed which of the above-mentioned error functions and statistical functions best determined the best-fitting isotherm.
Numerical modeling techniques for flood analysis
NASA Astrophysics Data System (ADS)
Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.
2016-12-01
Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness within grids, were found; these can be improved through a 3D model. Therefore, 3D models were found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D floodplain model be developed by considering all the hydrological and high-resolution topographic parameters discussed in this review, to enhance the understanding of the causes and effects of flooding.
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1998-01-01
We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including the model assessment application and the objective analysis application. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.
Error Analysis for the Airborne Direct Georeferencing Technique
NASA Astrophysics Data System (ADS)
Elsharkawy, Ahmed S.; Habib, Ayman F.
2016-10-01
Direct Georeferencing was shown to be an important alternative to standard indirect image orientation using classical or GPS-supported aerial triangulation. Since direct Georeferencing without ground control relies on an extrapolation process only, particular focus has to be laid on the overall system calibration procedure. The accuracy performance of integrated GPS/inertial systems for direct Georeferencing in airborne photogrammetric environments has been tested extensively in recent years. In this approach, the limiting factor is a correct overall system calibration, including the GPS/inertial component as well as the imaging sensor itself. Remaining errors in the system calibration will therefore significantly decrease the quality of object point determination. This research paper presents an error analysis for the airborne direct Georeferencing technique, where integrated GPS/IMU positioning and navigation systems are used in conjunction with aerial cameras for airborne mapping, compared with GPS/INS-supported AT, through the implementation of a certain amount of error on the EOP and boresight parameters and a study of the effect of these errors on the final ground coordinates. The data set is a block of 32 images distributed over six flight lines; the interior orientation parameters (IOP) are known through a careful camera calibration procedure, and 37 ground control points are known through a terrestrial surveying procedure. The exact location of the camera station at the time of exposure, the exterior orientation parameters (EOP), is known through the GPS/INS integration process. The preliminary results show that, firstly, DG and GPS-supported AT have similar accuracy, and compared with the conventional aerial photography method, the two technologies reduce the dependence on ground control (used only for quality control purposes); and secondly, in the DG, correcting the overall system calibration, including the GPS/inertial component as well as the imaging sensor itself
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
Rasch Analysis of the Student Refractive Error and Eyeglass Questionnaire
Crescioni, Mabel; Messer, Dawn H.; Warholak, Terri L.; Miller, Joseph M.; Twelker, J. Daniel; Harvey, Erin M.
2014-01-01
Purpose. To evaluate and refine a newly developed instrument, the Student Refractive Error and Eyeglasses Questionnaire (SREEQ), designed to measure the impact of uncorrected and corrected refractive error on vision-related quality of life (VRQoL) in school-aged children. Methods. A 38-statement instrument consisting of two parts was developed: Part A relates to perceptions regarding uncorrected vision and Part B relates to perceptions regarding corrected vision, and includes other statements regarding VRQoL with spectacle correction. The SREEQ was administered to 200 Native American 6th through 12th grade students known to have previously worn, and who currently require, eyeglasses. Rasch analysis was conducted to evaluate the functioning of the SREEQ. Statements in Part A and Part B were analyzed to examine the dimensionality and constructs of the questionnaire, how well the items functioned, and the appropriateness of the response scale used. Results. Rasch analysis suggested that two items be eliminated and that the measurement scale for matching items be reduced from a 4-point to a 3-point response scale. With these modifications, categorical data were converted to interval-level data to conduct an item and person analysis. A shortened version of the SREEQ, the SREEQ-R, was constructed with these modifications; it included the statements that were able to capture changes in VRQoL associated with spectacle wear for those with significant refractive error in our study population. Conclusions. While SREEQ Part B appears to have less than optimal reliability for assessing the impact of spectacle correction on VRQoL in our student population, it is able to detect statistically significant differences from pretest to posttest at both the group and individual levels, showing that the instrument can assess the impact that glasses have on VRQoL. Further modifications to the questionnaire, such as those included in the SREEQ-R, could enhance its functionality.
Asymptotic analysis of Bayesian generalization error with Newton diagram.
Yamazaki, Keisuke; Aoyagi, Miki; Watanabe, Sumio
2010-01-01
Statistical learning machines that have singularities in the parameter space, such as hidden Markov models, Bayesian networks, and neural networks, are widely used in the field of information engineering. Singularities in the parameter space determine the accuracy of estimation in the Bayesian scenario. The Newton diagram in algebraic geometry is recognized as an effective method by which to investigate a singularity. The present paper proposes a new technique for applying the diagram in the Bayesian analysis. The proposed technique allows the generalization error to be clarified and provides a foundation for efficient model selection. We apply the proposed technique to mixtures of binomial distributions.
Refractive error and the reading process: a literature analysis.
Grisham, J D; Simons, H D
1986-01-01
The literature analysis of refractive error and reading performance includes only those studies which adhere to the rudiments of scientific investigation. The relative strengths and weaknesses of each study are described and conclusions are drawn where possible. Hyperopia and anisometropia appear to be related to poor reading progress and their correction seems to result in improved performance. Reduced distance visual acuity and myopia are not generally associated with reading difficulties. There is little evidence relating astigmatism and reading, but studies have not been adequately designed to draw conclusions. Implications for school vision screening are discussed.
Pinpointing error analysis of metal detectors under field conditions
NASA Astrophysics Data System (ADS)
Takahashi, Kazunori; Preetz, Holger
2012-06-01
Metal detectors are used not only to detect but also to locate targets. The location performance has previously been evaluated only in the laboratory, and it probably differs from the performance in the field. In this paper, the evaluation of the location performance based on an analysis of pinpointing errors is discussed. The data for the evaluation were collected in a blind test in the field; the analyzed performance can therefore be seen as the performance under field conditions. Further, the performance is discussed in relation to the search-head and footprint dimensions.
Analysis of ionospheric refraction error corrections for GRARR systems
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.
1971-01-01
A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.
Analysis of Random Segment Errors on Coronagraph Performance
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou
2016-01-01
At 2015 SPIE O&P we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance". Key findings: contrast leakage for a 4th-order sinc²(x) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt, and apertures with fewer segments (i.e., 1 ring) or very many segments (>16 rings) have less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.
Fourier analysis of numerical algorithms for the Maxwell equations
NASA Technical Reports Server (NTRS)
Liu, Yen
1993-01-01
The Fourier method is used to analyze the dispersive, dissipative, and isotropy errors of various spatial and time discretizations applied to the Maxwell equations on multi-dimensional grids. Both Cartesian grids and non-Cartesian grids based on hexagons and tetradecahedra are studied and compared. The numerical errors are quantitatively determined in terms of phase speed, wave number, propagation direction, gridspacings, and CFL number. The study shows that centered schemes are more efficient than upwind schemes. The non-Cartesian grids yield superior isotropy and higher accuracy than the Cartesian ones. For the centered schemes, the staggered grids produce less errors than the unstaggered ones. A new unstaggered scheme which has all the best properties is introduced. The study also demonstrates that a proper choice of time discretization can reduce the overall numerical errors due to the spatial discretization.
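A one-dimensional reduction of the technique (the advection equation here stands in for the Maxwell system, and the stencils are standard centered differences, not the paper's schemes): substituting a Fourier mode exp(ikjh) into the spatial stencil gives the numerical phase speed as a function of the nondimensional wavenumber kh.

```python
import numpy as np

# Fourier (von Neumann) analysis of u_t + c*u_x = 0: the symbol of each
# centered stencil gives the ratio of numerical to exact phase speed.
kh = np.linspace(1e-6, np.pi, 500)
phase_2nd = np.sin(kh) / kh                           # (u_{j+1}-u_{j-1})/(2h)
phase_4th = (8 * np.sin(kh) - np.sin(2 * kh)) / (6 * kh)
# 4th order: (-u_{j+2} + 8u_{j+1} - 8u_{j-1} + u_{j-2})/(12h)

print(phase_2nd[-1], phase_4th[-1])  # both stencils fail at kh = pi
```

Both ratios tend to 1 as kh → 0; the 4th-order stencil stays accurate out to much larger wavenumbers, which is the kind of quantitative dispersion comparison the paper carries out for multi-dimensional grids.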
NASA Astrophysics Data System (ADS)
Huang, Bo-Kai; Huang, Po-Hsuan
2016-09-01
This paper presents the finite element and wavefront error analysis, with reverse engineering, of the primary mirror of a small experimental space telescope model. The experimental space telescope, with a 280 mm diameter primary mirror, was assembled and aligned in 2011, but the measured system optical performance and wavefront error did not achieve the goal. In order to find the root causes, static structural finite element analysis (FEA) has been applied to the structural model of the primary mirror assembly. Several effects that may cause deformation of the primary mirror have been proposed, such as the gravity effect, the flexure bonding effect, the thermal expansion effect, etc. For each assumed effect, we establish a corresponding model and boundary-condition setup, and the numerical model is analyzed by finite element method (FEM) software and opto-mechanical analysis software to obtain the numerical wavefront error and Zernike polynomials. A new assumption about the flexure bonding effect is now proposed, and we adopt reverse engineering to verify this effect. Finally, the numerically synthesized system wavefront error is compared with the measured system wavefront error of the telescope. By analyzing and understanding these deformation effects of the primary mirror, the opto-mechanical design and telescope assembly workmanship can be refined, improving the telescope's optical performance.
Analysis of Solar Two Heliostat Tracking Error Sources
Jones, S.A.; Stone, K.W.
1999-01-28
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
Close-range radar rainfall estimation and error analysis
NASA Astrophysics Data System (ADS)
van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.
2016-08-01
Quantitative precipitation estimation (QPE) using ground-based weather radar is affected by many sources of error. The most important of these are (1) radar calibration, (2) ground clutter, (3) wet-radome attenuation, (4) rain-induced attenuation, (5) vertical variability in rain drop size distribution (DSD), (6) non-uniform beam filling and (7) variations in DSD. This study presents an attempt to separate and quantify these sources of error in flat terrain very close to the radar (1-2 km), where (4), (5) and (6) only play a minor role. Other important errors exist, like beam blockage, WLAN interferences and hail contamination and are briefly mentioned, but not considered in the analysis. A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm of precipitation in De Bilt, the Netherlands, is analyzed using radar, rain gauge and disdrometer data. Without any correction, it is found that the radar severely underestimates the total rain amount (by more than 50 %). The calibration of the radar receiver is operationally monitored by analyzing the received power from the sun. This turns out to cause a 1 dB underestimation. The operational clutter filter applied by KNMI is found to incorrectly identify precipitation as clutter, especially at near-zero Doppler velocities. An alternative simple clutter removal scheme using a clear sky clutter map improves the rainfall estimation slightly. To investigate the effect of wet-radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to 4 dB. Finally, a disdrometer is used to derive event and intra-event specific Z-R relations due to variations in the observed DSDs. Such variations may result in errors when applying the operational Marshall-Palmer Z-R relation. Correcting for all of these effects has a large positive impact on the radar-derived precipitation estimates and yields a good match between radar QPE and gauge
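The calibration bias found above propagates into rain rate through the Z-R relation. A small sketch (the Marshall-Palmer coefficients a = 200, b = 1.6 are the standard operational ones; treating the calibration bias as a pure dBZ offset is a simplifying assumption):

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert the Z-R power law Z = a * R**b, with Z the reflectivity
    factor in mm^6 m^-3 (dbz = 10*log10(Z)) and R the rain rate in mm/h."""
    Z = 10.0 ** (dbz / 10.0)
    return (Z / a) ** (1.0 / b)

# A 1 dB receiver calibration bias gives a constant multiplicative
# rain-rate error of 10**(1/(10*b)), about 15% for Marshall-Palmer.
ratio = rain_rate(31.0) / rain_rate(30.0)
print(rain_rate(30.0), ratio)
```

Event-specific (a, b) pairs fitted from disdrometer DSDs, as in the study, simply replace the default coefficients in such a conversion.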
NASA Astrophysics Data System (ADS)
Elliott, David; Johnston, Peter R.
2007-06-01
In the two-dimensional boundary element method, one often needs to evaluate numerically integrals of the form [equation not reproduced] where j2 is a quadratic, g is a polynomial and f is a rational, logarithmic or algebraic function with a singularity at zero. The constants a and b are such that -1 ≤ a ≤ 1 and 0 < b ≤ 1; when b is small, direct application of Gauss-Legendre quadrature can give rise to large truncation errors. By making the transformation x = a + b sinh([mu]u - [eta]), where the constants [mu] and [eta] are chosen so that the interval of integration is again [-1,1], it is found that the truncation errors arising, when the same Gauss-Legendre quadrature is applied to the transformed integral, are much reduced. The asymptotic error analysis for Gauss-Legendre quadrature, as given by Donaldson and Elliott [A unified approach to quadrature rules with asymptotic estimates of their remainders, SIAM J. Numer. Anal. 9 (1972) 573-602], is then used to explain this phenomenon and justify the transformation.
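The effect of the sinh transformation can be reproduced in a few lines. The sketch below (an illustration, not the authors' code) integrates the nearly singular function f(x) = 1/((x-a)^2 + b^2) over [-1, 1] with small b, comparing plain Gauss-Legendre quadrature with the same rule after substituting x = a + b*sinh(mu*u - eta), with mu and eta chosen so that u again runs over [-1, 1]:

```python
import numpy as np

a, b = 0.3, 0.01                        # near-singularity at x = a, distance b
f = lambda x: 1.0 / ((x - a) ** 2 + b ** 2)
exact = (np.arctan((1 - a) / b) - np.arctan((-1 - a) / b)) / b

u, w = np.polynomial.legendre.leggauss(30)   # 30-point Gauss-Legendre rule

# Direct quadrature: the near-singularity at x = a defeats the rule.
direct = np.sum(w * f(u))

# sinh transformation; solving x(+1) = 1 and x(-1) = -1 gives mu and eta.
mu = 0.5 * (np.arcsinh((1 - a) / b) + np.arcsinh((1 + a) / b))
eta = 0.5 * (np.arcsinh((1 + a) / b) - np.arcsinh((1 - a) / b))
x = a + b * np.sinh(mu * u - eta)
dx_du = b * mu * np.cosh(mu * u - eta)       # Jacobian of the substitution
transformed = np.sum(w * f(x) * dx_du)

err_direct = abs(direct - exact)
err_transf = abs(transformed - exact)
print(err_direct, err_transf)   # transformed error is orders of magnitude smaller
```

The substitution clusters quadrature points near the singularity, so the transformed integrand is smooth on the scale the rule can resolve.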
SIRTF Focal Plane Survey: A Pre-flight Error Analysis
NASA Technical Reports Server (NTRS)
Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.
2003-01-01
This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames meet their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.
Error analysis for earth orientation recovery from GPS data
NASA Technical Reports Server (NTRS)
Zelensky, N.; Ray, J.; Liebrecht, P.
1990-01-01
The use of GPS navigation satellites to study earth-orientation parameters in real-time is examined analytically with simulations of network geometries. The Orbit Analysis covariance-analysis program is employed to simulate the block-II constellation of 18 GPS satellites, and attention is given to the budget for tracking errors. Simultaneous solutions are derived for earth orientation given specific satellite orbits, ground clocks, and station positions with tropospheric scaling at each station. Media effects and measurement noise are found to be the main causes of uncertainty in earth-orientation determination. A program similar to the Polaris network using single-difference carrier-phase observations can provide earth-orientation parameters with accuracies similar to those for the VLBI program. The GPS concept offers faster data turnaround and lower costs in addition to more accurate determinations of UT1 and pole position.
Soft X Ray Telescope (SXT) focus error analysis
NASA Technical Reports Server (NTRS)
Ahmad, Anees
1991-01-01
This report presents the analysis performed on the Soft X-ray Telescope (SXT) to determine the correct thickness of the spacer that positions the CCD camera at the best focus of the telescope, and to determine the maximum uncertainty in this focus position due to a number of metrology and experimental errors and to thermal and humidity effects. This type of analysis had been performed by the SXT prime contractor, Lockheed Palo Alto Research Lab (LPARL). The SXT project office at MSFC formed an independent team of experts to review the LPARL work and verify their analysis. Based on the recommendation of this team, the project office will decide whether an end-to-end focus test is required for the SXT prior to launch. The metrology and experimental data and the spreadsheets provided by LPARL are used as the basis of the analysis presented here. The data entries in these spreadsheets have been verified as far as feasible, and the format of the spreadsheets has been improved to make them easier to understand. The results obtained from this analysis are very close to those obtained by LPARL. However, due to the lack of organized documentation, the analysis uncovered a few areas of possibly erroneous metrology data, which may affect the results obtained by this analytical approach.
Field Error Analysis and a Correction Scheme for the KSTAR device
NASA Astrophysics Data System (ADS)
You, K.-I.; Lee, D. K.; Jhang, Hogun; Lee, G.-S.; Kwon, K. H.
2000-10-01
Non-axisymmetric error fields can lead to tokamak plasma performance degradation and ultimately premature plasma disruption, if some error field components are larger than threshold values. The major sources of the field error include the unavoidable winding irregularities of the poloidal field coils during manufacturing, poloidal field and toroidal field coils misalignments during installation, stray fields from bus and lead wires between coils and power supplies, and welded joints of the vacuum vessel. Numerical simulation results are presented for Fourier harmonics of the error field obtained on the (m,n) = (2,1) resonant flux surface with a coil current set for the reference equilibrium configuration. Field error contributions are considered separately for all major error sources. An error correction scheme designed to reduce key components of the total net error field is also discussed in relation to the field error correction coils inside the vacuum vessel.
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
A Framework for Examining Mathematics Teacher Knowledge as Used in Error Analysis
ERIC Educational Resources Information Center
Peng, Aihui; Luo, Zengru
2009-01-01
Error analysis is a basic and important task for mathematics teachers. Unfortunately, in the present literature there is a lack of detailed understanding about teacher knowledge as used in it. Based on a synthesis of the literature in error analysis, a framework for prescribing and assessing mathematics teacher knowledge in error analysis was…
Survey of Available Systems for Identifying Systematic Errors in Numerical Model Weather Forecasts.
1981-02-01
freedom for evaluating the confidence level of the "t" statistic. v) Hovmöller diagrams Advantages - The Hovmöller diagram is useful for indicating... gram which shows ridge and trough lines for analysis and forecast together will make interpretation easier. vi) Meridional cross sections Advantages...requirements. Existing programs which can generate fields for Hovmöller plots using spectral data require 14 cp seconds per latitude per level. In order to
Analysis of errors occurring in large eddy simulation.
Geurts, Bernard J
2009-07-28
We analyse the effect of second- and fourth-order accurate central finite-volume discretizations on the outcome of large eddy simulations of homogeneous, isotropic, decaying turbulence at an initial Taylor-Reynolds number Re(lambda)=100. We determine the implicit filter that is induced by the spatial discretization and show that a higher order discretization also induces a higher order filter, i.e. a low-pass filter that keeps a wider range of flow scales virtually unchanged. The effectiveness of the implicit filtering is correlated with the optimal refinement strategy as observed in an error-landscape analysis based on Smagorinsky's subfilter model. As a point of reference, a finite-volume method that is second-order accurate for both the convective and the viscous fluxes in the Navier-Stokes equations is used. We observe that changing to a fourth-order accurate convective discretization leads to a higher value of the Smagorinsky coefficient C(S) required to achieve minimal total error at given resolution. Conversely, changing only the viscous flux discretization to fourth-order accuracy implies that optimal simulation results are obtained at lower values of C(S). Finally, a fully fourth-order discretization yields an optimal C(S) that is slightly lower than the reference fully second-order method.
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Error analysis of exponential integrators for oscillatory second-order differential equations
NASA Astrophysics Data System (ADS)
Grimm, Volker; Hochbruck, Marlis
2006-05-01
In this paper, we analyse a family of exponential integrators for second-order differential equations in which high-frequency oscillations in the solution are generated by a linear part. Conditions are given which guarantee that the integrators allow second-order error bounds independent of the product of the step size with the frequencies. Our convergence analysis generalizes known results on the mollified impulse method by García-Archilla, Sanz-Serna and Skeel (1998, SIAM J. Sci. Comput. 30 930-63) and on Gautschi-type exponential integrators (Hairer E, Lubich Ch and Wanner G 2002 Geometric Numerical Integration (Berlin: Springer), Hochbruck M and Lubich Ch 1999 Numer. Math. 83 403-26).
Nonclassicality thresholds for multiqubit states: Numerical analysis
Gruca, Jacek; Zukowski, Marek; Laskowski, Wieslaw; Kiesel, Nikolai; Wieczorek, Witlef; Weinfurter, Harald; Schmid, Christian
2010-07-15
States that strongly violate Bell's inequalities are required in many quantum-information protocols, for example in cryptography, secret sharing, and the reduction of communication complexity. We investigate families of such states with a numerical method which allows us to reveal nonclassicality even without direct knowledge of Bell's inequalities for the given problem. An extensive set of numerical results is presented and discussed.
Error Analysis in Composition of Iranian Lower Intermediate Students
ERIC Educational Resources Information Center
Taghavi, Mehdi
2012-01-01
Learners make errors during the process of learning languages. This study examines errors in the writing tasks of twenty Iranian lower-intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…
ERIC Educational Resources Information Center
Moqimipour, Kourosh; Shahrokhi, Mohsen
2015-01-01
The present study aimed at analyzing writing errors caused by the interference of the Persian language, regarded as the first language (L1), in three writing genres, namely narration, description, and comparison/contrast by Iranian EFL students. 65 English paragraphs written by the participants, who were at the intermediate level based on their…
Analysis of personnel error occurrence reports across Defense Program facilities
Stock, D.A.; Shurberg, D.A.; O'Brien, J.N.
1994-05-01
More than 2,000 reports from the Occurrence Reporting and Processing System (ORPS) database were examined in order to identify weaknesses in the implementation of the guidance for the Conduct of Operations (DOE Order 5480.19) at Defense Program (DP) facilities. The analysis revealed recurrent problems involving procedures, training of employees, the occurrence of accidents, planning and scheduling of daily operations, and communications. Changes to DOE 5480.19 and modifications of the Occurrence Reporting and Processing System are recommended to reduce the frequency of these problems. The primary tool used in this analysis was a coding scheme based on the guidelines in 5480.19, which was used to classify the textual content of occurrence reports. The occurrence reports selected for analysis came from across all DP facilities, and listed personnel error as a cause of the event. A number of additional reports, specifically from the Plutonium Processing and Handling Facility (TA55), and the Chemistry and Metallurgy Research Facility (CMR), at Los Alamos National Laboratory, were analyzed separately as a case study. In total, 2070 occurrence reports were examined for this analysis. A number of core issues were consistently found in all analyses conducted, and all subsets of data examined. When individual DP sites were analyzed, including some sites which have since been transferred, only minor variations were found in the importance of these core issues. The same issues also appeared in different time periods, in different types of reports, and at the two Los Alamos facilities selected for the case study.
Kitchen Physics: Lessons in Fluid Pressure and Error Analysis
NASA Astrophysics Data System (ADS)
Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano
2017-02-01
Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four liquids using a waterproof case and a smartphone barometer in a container, sink, or tub. The second experiment determines the relationship between pressure and temperature of an ideal gas in a constant-volume container placed momentarily in a refrigerator freezer. These experiences provide a ripe opportunity both for learning fundamental physics concepts and for investigating a variety of error analysis techniques that are frequently overlooked in introductory physics courses.
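The first experiment rests on hydrostatic pressure, p(h) = p0 + ρgh, so the slope of a linear fit of pressure against depth gives the liquid density. A minimal sketch with made-up smartphone-barometer readings (the values are illustrative, not data from the paper):

```python
import numpy as np

g = 9.81                                    # m/s^2
depth_m = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
# Hypothetical barometer readings in Pa for water (rho ~ 1000 kg/m^3).
pressure_pa = 101325.0 + 1000.0 * g * depth_m

slope, intercept = np.polyfit(depth_m, pressure_pa, 1)   # p = slope*h + p0
rho = slope / g                             # density from the fitted slope
print(rho)                                  # ~1000 kg/m^3
```

With real readings, the scatter of residuals around the fit line is exactly the kind of error-analysis material the paper describes.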
Reduction of S-parameter errors using singular spectrum analysis
NASA Astrophysics Data System (ADS)
Ozturk, Turgut; Uluer, Ihsan; Ünal, Ilhami
2016-07-01
A free space measurement method, which consists of two horn antennas, a network analyzer, two frequency extenders, and a sample holder, is used to measure transmission (S21) coefficients in 75-110 GHz (W-Band) frequency range. Singular spectrum analysis method is presented to eliminate the error and noise of raw S21 data after calibration and measurement processes. The proposed model can be applied easily to remove the repeated calibration process for each sample measurement. Hence, smooth, reliable, and accurate data are obtained to determine the dielectric properties of materials. In addition, the dielectric constant of materials (paper, polyvinylchloride-PVC, Ultralam® 3850HT, and glass) is calculated by thin sheet approximation and Newton-Raphson extracting techniques using a filtered S21 transmission parameter.
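Basic singular spectrum analysis can be sketched in a few lines: embed the measured series in a Hankel trajectory matrix, truncate its SVD, and reconstruct by anti-diagonal averaging. The example below (an illustrative sketch on synthetic data, not the authors' processing chain) denoises a noisy sinusoid standing in for a raw S21 trace:

```python
import numpy as np

def ssa_denoise(series, window, rank):
    """Denoise a 1-D series by rank-truncated SVD of its trajectory matrix."""
    n = len(series)
    k = n - window + 1
    # Hankel (trajectory) matrix: column j holds series[j : j + window].
    traj = np.column_stack([series[j:j + window] for j in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]    # low-rank approximation
    # Reconstruct by averaging over each anti-diagonal (i + j = const).
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 5 * t)                # smooth "S21-like" signal
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = ssa_denoise(noisy, window=50, rank=2)  # a sinusoid needs rank 2

rmse = lambda x: np.sqrt(np.mean((x - clean) ** 2))
print(rmse(noisy), rmse(denoised))               # denoised RMSE is much lower
```

The window length and retained rank are the method's tuning knobs; for measured S-parameters they would be chosen from the singular-value spectrum rather than known in advance.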
Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2012-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
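Extrapolation from a base grid to an "infinite-size" grid follows the standard Richardson-extrapolation recipe: from solutions on systematically refined grids, estimate the observed order of convergence and extrapolate to zero grid spacing. A generic sketch (illustrative only; the paper's actual procedure and values are not reproduced):

```python
import math

def richardson(f_coarse, f_medium, f_fine, r):
    """Observed order p and extrapolated grid-independent value.

    r is the (constant) grid refinement ratio between successive grids."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_inf = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_inf

# Synthetic check: f(h) = 1.0 + 0.5*h**2 (second-order scheme), r = 2.
h = [0.4, 0.2, 0.1]
f = [1.0 + 0.5 * hi ** 2 for hi in h]
p, f_inf = richardson(f[0], f[1], f[2], r=2.0)
print(p, f_inf)                        # p ~ 2, f_inf ~ 1.0
# Discretization-error estimate for the fine-grid solution:
print(abs(f[2] - f_inf))
```

The difference between the fine-grid coefficient and the extrapolated value is the kind of error estimate quoted in the abstract.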
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
Sun, Yuchun; Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Wang, Yong
2015-07-01
A three-axis numerically controlled picosecond laser was used to ablate dentin to investigate the quantitative relationships among the number of additive pulse layers in two-dimensional scans starting from the focal plane, the step size along the normal of the focal plane (focal plane normal), and the ablation depth error. A method to control the ablation depth error, suitable for controlling stepping along the focal plane normal, was preliminarily established. Twenty-four freshly removed mandibular first molars were cut transversely along the long axis of the crown and prepared as 48 tooth sample slices with approximately flat surfaces. Forty-two slices were used in the first section. The picosecond laser was 1,064 nm in wavelength, 3 W in power, and 10 kHz in repetition frequency. For a varying number (n = 5-70) of focal plane additive pulse layers (14 groups, three repetitions each), two-dimensional scanning and ablation were performed on the dentin regions of the tooth sample slices, which were fixed on the focal plane. The ablation depth, d, was measured, and the quantitative function between n and d was established. Six slices were used in the second section. The function was used to calculate and set the timing of stepwise increments, and the single-step size along the focal plane normal was d micrometers after ablation of n layers (n = 5-50; 10 groups, six repetitions each). Each sample underwent three-dimensional scanning and ablation to produce 2 × 2-mm square cavities. The difference, e, between the measured cavity depth and the theoretical value was calculated, along with the difference, e1, between the measured average ablation depth of a single step along the focal plane normal and the theoretical value. Values of n and d corresponding to the minimum values of e and e1, respectively, were obtained. In two-dimensional ablation, d was largest (720.61 μm) when n = 65 and smallest (45.00 μm) when n = 5. Linear regression yielded the quantitative
Modeling error analysis of stationary linear discrete-time filters
NASA Technical Reports Server (NTRS)
Patel, R.; Toda, M.
1977-01-01
The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
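The flavor of such a comparison can be sketched for a scalar system: design a steady-state filter from a mismatched model, then evaluate its actual steady-state mean-squared error on the true system against that of the optimal Kalman filter. This is a generic illustration of suboptimal-versus-optimal filter performance, not the paper's bound computation:

```python
def steady_state_kalman(a, q, r):
    """Steady-state prediction variance and gain for x' = a x + w, y = x + v."""
    p = q
    for _ in range(10000):                 # iterate the Riccati recursion
        p = a * a * p * r / (p + r) + q
    k = p / (p + r)                        # steady-state Kalman gain
    return p, k

def actual_mse(a_true, q, r, k):
    """Steady-state prediction MSE when a fixed gain k runs on the true system.

    Error recursion: e' = a_true*(1-k)*e - a_true*k*v + w."""
    p = q
    for _ in range(10000):
        p = (a_true * (1 - k)) ** 2 * p + (a_true * k) ** 2 * r + q
    return p

a_true, q, r = 0.9, 1.0, 1.0
_, k_opt = steady_state_kalman(a_true, q, r)    # filter built on the true model
_, k_sub = steady_state_kalman(0.3, q, r)       # filter built on a wrong model

mse_opt = actual_mse(a_true, q, r, k_opt)
mse_sub = actual_mse(a_true, q, r, k_sub)
print(mse_opt, mse_sub)        # suboptimal filter has larger steady-state MSE
```

The paper's contribution is to bound `mse_sub` over a whole range of model errors without evaluating each mismatched model individually.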
NASA Technical Reports Server (NTRS)
Mohr, R. L.
1975-01-01
A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.
Error Analysis of Stereophotoclinometry in Support of the OSIRIS-REx Mission
NASA Astrophysics Data System (ADS)
Palmer, Eric; Gaskell, Robert W.; Weirich, John R.
2015-11-01
Stereophotoclinometry (SPC) has been used on numerous planetary bodies to derive shape models, most recently 67P/Churyumov-Gerasimenko (Jorda et al., 2014), the Earth (Palmer et al., 2014) and Vesta (Gaskell, 2012). SPC is planned to create the ultra-high-resolution topography for the upcoming OSIRIS-REx mission, which will sample the asteroid Bennu, arriving in 2018. This shape model will be used both for scientific analysis and for operational navigation, including providing the topography that will ensure safe collection from the surface. We present the initial results of an error analysis of SPC, with specific focus on how both systematic and non-systematic errors propagate through SPC into the shape model. For this testing, we have created a notional global truth model at 5 cm and a single region at 2.5 mm ground sample distance. These truth models were used to create images using GSFC's software Freespace. These images were then used by SPC to form a derived shape model with a ground sample distance of 5 cm. We will report on both the absolute and relative error of the derived shape model compared with the original truth model, as well as other empirical and theoretical measurements of errors within SPC. Jorda, L. et al. (2014) "The Shape of Comet 67P/Churyumov-Gerasimenko from Rosetta/Osiris Images", AGU Fall Meeting, #P41C-3943. Gaskell, R. (2012) "SPC Shape and Topography of Vesta from DAWN Imaging Data", DPS Meeting #44, #209.03. Palmer, L., Sykes, M. V., Gaskell, R. W. (2014) "Mercator — Autonomous Navigation Using Panoramas", LPSC 45, #1777.
Chiu, Ming-Chuan; Hsieh, Min-Chih
2016-05-01
The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology.
Numerical analysis of Swiss roll metamaterials.
Demetriadou, A; Pendry, J B
2009-08-12
A Swiss roll metamaterial is a resonant magnetic medium, with a negative magnetic permeability for a range of frequencies, due to its self-inductance and self-capacitance components. In this paper, we discuss the band structure, S-parameters and effective electromagnetic parameters of Swiss roll metamaterials, with both analytical and numerical results, which show exceptional convergence.
English Majors' Errors in Translating Arabic Endophora: Analysis and Remedy
ERIC Educational Resources Information Center
Abdellah, Antar Solhy
2007-01-01
Egyptian English majors in the faculty of Education, South Valley University tend to mistranslate the plural inanimate Arabic pronoun with the singular inanimate English pronoun. A diagnostic test was designed to analyze this error. Results showed that a large number of students (first year and fourth year students) make this error, that the error…
Factor Rotation and Standard Errors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
An Analysis of Error-Correction Procedures during Discrimination Training.
ERIC Educational Resources Information Center
Rodgers, Teresa A.; Iwata, Brian A.
1991-01-01
Seven adults with severe to profound mental retardation participated in match-to-sample discrimination training under three conditions. Results indicated that error-correction procedures improve performance through negative reinforcement; that error correction may serve multiple functions; and that, for some subjects, trial repetition enhances…
Visual Retention Test: An Analysis of Children's Errors.
ERIC Educational Resources Information Center
Rice, James A.; Bobele, R. Monte
Grade level norms were developed, based on a sample of 678 elementary school students, for various error scores of the Benton Visual Retention Test. Norms were also developed for 201 normal children, 58 minimal brain dysfunction children, and 101 educable mentally retarded children. In both the copying mode and the memory mode, most errors were…
TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS
Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...
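For a budget that sums component fluxes, first-order error propagation combines independent component uncertainties in quadrature, while fully correlated errors add linearly. A generic hedged sketch (the component names and numbers below are invented for illustration, not taken from any inventory):

```python
import math

# Hypothetical carbon-budget components (PgC/yr) with 1-sigma uncertainties.
components = {
    "live biomass": (0.30, 0.05),
    "soil carbon": (0.12, 0.08),
    "harvested products": (0.05, 0.02),
}

total = sum(v for v, _ in components.values())
# Independent errors add in quadrature: sigma_tot = sqrt(sum(sigma_i**2)).
sigma_indep = math.sqrt(sum(s ** 2 for _, s in components.values()))
# Fully correlated errors add linearly (worst case).
sigma_corr = sum(s for _, s in components.values())
print(total, sigma_indep, sigma_corr)
```

The gap between the quadrature and linear totals is why knowing error correlations, not just magnitudes, matters for large-scale budgets.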
Analysis on the alignment errors of segmented Fresnel lens
NASA Astrophysics Data System (ADS)
Zhou, Xudong; Wu, Shibin; Yang, Wei; Wang, Lihua
2014-09-01
Segmented (stitched) Fresnel lenses are designed for application in micro-focus X-ray optics, but stitching errors between sub-apertures affect the optical performance of the entire mirror. The offset error tolerances for the different degrees of freedom between the sub-apertures are analyzed theoretically according to wave-front aberration theory, with the Rayleigh criterion as the evaluation criterion, and the theory is then validated using the ZEMAX simulation software. The results show that the Z-axis piston error tolerance and the XY-axis translation error tolerance increase with increasing F-number of the stitched Fresnel lens, while the XY-axis tilt error tolerance decreases with increasing diameter. The results provide a theoretical basis and guidance for the design, testing and alignment of stitched Fresnel lenses.
NUC correction of IR FPA and error analysis with FPGA
NASA Astrophysics Data System (ADS)
Ge, Cheng-liang; Liu, Zhi-qiang; Wu, Jian-tao; Li, Zheng-dong; Huang, Zhi-wei; Wan, Min; Hu, Xiao-yang; Fan, Guo-bin; Liang, Zheng
2008-02-01
Infrared cameras with IR FPAs (focal plane arrays) are widely used in target detection, temperature measurement, surface inspection, and similar fields. It is essential to first perform non-uniformity correction (NUC) to compensate for the non-uniformity of the FPA, an inherent characteristic of IR FPAs: pixels respond at different rates to the same IR radiance. This non-uniformity decreases the sensitivity of the FPA and reduces the resolution of the sensor. There are two kinds of correction methods: a hardware method using a DSP, and a software method. In this device, the two-point correction method is used to perform the NUC, implemented on a Field Programmable Gate Array (FPGA), which offers good parallel arithmetic performance and programmability. After the NUC correction, an error analysis of the correction is also made. After correction, the BPR (bad pixel replacement) rate can exceed 98%.
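The two-point method itself is simple: record every pixel's raw response at two uniform blackbody levels, then fit a per-pixel gain and offset that maps each pixel onto the array-mean response. A hedged numerical sketch of this correction (synthetic pixel responses, not the paper's FPGA implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 64
gain_true = 1.0 + 0.1 * rng.standard_normal(n_pixels)   # per-pixel responsivity
offset_true = 5.0 * rng.standard_normal(n_pixels)       # per-pixel dark offset

def raw_response(flux):
    """Simulated non-uniform FPA output for a uniform input flux."""
    return gain_true * flux + offset_true

# Calibration: two uniform blackbody references (low and high flux).
x1, x2 = raw_response(100.0), raw_response(200.0)
y1, y2 = x1.mean(), x2.mean()                 # target: the array-mean response
gain_c = (y2 - y1) / (x2 - x1)                # per-pixel correction gain
offset_c = y1 - gain_c * x1                   # per-pixel correction offset

corrected = gain_c * raw_response(150.0) + offset_c
print(corrected.std())     # near zero: the corrected frame is uniform
```

Because the simulated pixel response is exactly linear, the two-point correction removes the non-uniformity completely; on real detectors, residual error comes from response nonlinearity and drift.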
Numerical Analysis of the SCHOLAR Supersonic Combustor
NASA Technical Reports Server (NTRS)
Rodriguez, Carlos G.; Cutler, Andrew D.
2003-01-01
The SCHOLAR scramjet experiment is the subject of an ongoing numerical investigation. The facility nozzle and combustor were solved separately and sequentially, with the exit conditions of the former used as inlet conditions for the latter. A baseline configuration for the numerical model was compared with the available experimental data. It was found that ignition delay was underpredicted and fuel-plume penetration overpredicted, while the pressure rise was close to experimental values. In addition, grid convergence by means of grid sequencing could not be established. The effects of the different turbulence parameters were quantified. It was found that it was not possible to simultaneously predict the three main parameters of this flow: pressure rise, ignition delay, and fuel-plume penetration.
A Numerical Model for Atomtronic Circuit Analysis
Chow, Weng W.; Straatsma, Cameron J. E.; Anderson, Dana Z.
2015-07-16
A model for studying atomtronic devices and circuits based on finite-temperature Bose-condensed gases is presented. The approach involves numerically solving equations of motion for atomic populations and coherences, derived using the Bose-Hubbard Hamiltonian and the Heisenberg picture. The resulting cluster expansion is truncated at a level that balances physical rigor against computational demands. This approach allows parametric studies involving time scales that cover both the rapid population dynamics relevant to nonequilibrium state evolution and the much longer durations typical of steady-state device operation. The model is demonstrated by studying the evolution of a Bose-condensed gas in the presence of atom injection and extraction in a double-well potential. In this configuration, phase locking between condensates in each well of the potential is readily observed, and its influence on the evolution of the system is studied.
SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to ensure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check-out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.
Error analysis of finite element method for Poisson–Nernst–Planck equations
Sun, Yuzhou; Sun, Pengtao; Zheng, Bin; Lin, Guang
2016-08-01
A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
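Optimal-order estimates of this kind are commonly verified by measuring the observed convergence rate on successively refined meshes. The sketch below is our illustration on a much simpler surrogate (1D Poisson with P1 elements and a lumped load), not the paper's PNP solver; it checks the expected second-order L2 rate for linear elements.

```python
import numpy as np

def p1_poisson_error(n):
    """P1 finite elements for -u'' = pi^2 sin(pi x) on (0,1), u(0)=u(1)=0.

    Returns a discrete L2 error against the exact solution sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Tridiagonal stiffness matrix for the n-1 interior nodes
    A = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    f = np.pi**2 * np.sin(np.pi * x[1:-1]) * h   # lumped load vector
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    err = u - np.sin(np.pi * x)
    return np.sqrt(h * np.sum(err**2))           # discrete L2 norm

# Halving h should divide the L2 error by ~4 for linear elements
rate = np.log2(p1_poisson_error(16) / p1_poisson_error(32))
print(round(rate))  # observed convergence order: 2
```

The same refinement-ratio test generalizes to the coupled PNP system, where the paper's theory predicts order 2 in L∞(L2) only for quadratic or higher elements.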
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
ERIC Educational Resources Information Center
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
ERIC Educational Resources Information Center
El-khateeb, Mahmoud M. A.
2016-01-01
The purpose of this study aims to investigate the errors classes occurred by the Preparatory year students at King Saud University, through analysis student responses to the items of the study test, and to identify the varieties of the common errors and ratios of common errors that occurred in solving inequalities. In the collection of the data,…
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.
2016-03-16
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.
Numerical analysis of slender vortex motion
Zhou, H.
1996-02-01
Several numerical methods for slender vortex motion (the local induction equation, the Klein-Majda equation, and the Klein-Knio equation) are compared on the specific example of sideband instability of Kelvin waves on a vortex. Numerical experiments on this model problem indicate that all these methods yield qualitatively similar behavior, and this behavior is different from the behavior of a non-slender vortex with variable cross-section. It is found that the boundaries between stable, recurrent, and chaotic regimes in the parameter space of the model problem depend on the method used. The boundaries of these domains in the parameter space for the Klein-Majda equation and for the Klein-Knio equation are closely related to the core size. When the core size is large enough, the Klein-Majda equation always exhibits stable solutions for our model problem. Various conclusions are drawn; in particular, the behavior of turbulent vortices cannot be captured by these local approximations, and probably cannot be captured by any slender vortex model with constant vortex cross-section. Speculations about the differences between classical and superfluid hydrodynamics are also offered.
Numerical Sensitivity Analysis of a Composite Impact Absorber
NASA Astrophysics Data System (ADS)
Caputo, F.; Lamanna, G.; Scarano, D.; Soprano, A.
2008-08-01
This work deals with a numerical investigation of the energy-absorbing capability of structural composite components. Several difficulties are associated with the numerical simulation of a composite impact absorber, such as strong geometrical non-linearities, boundary contact conditions, failure criteria and material behaviour; all these aspects make the calibration of numerical models, and the evaluation of their sensitivity to the governing geometrical, physical and numerical parameters, among the main objectives of any numerical investigation. The latter aspect is particularly important for designers, in order to make the application of the model to real cases robust from both a physical and a numerical point of view. On the basis of experimental data from the literature, a preliminary calibration of the numerical model of a composite impact absorber was first performed, followed by a sensitivity analysis to variation of the main geometrical and material parameters, using explicit finite element algorithms implemented in the Ls-Dyna code.
The Analysis, Numerical Simulation, and Diagnosis of Extratropical Weather Systems
1999-09-30
respectively, and iv) the numerical simulation and observational validation of high-spatial-resolution (~10 km) numerical predictions. APPROACH: My approach... satellite and targeted dropwindsonde observations; in collaboration with Xiaolie Zou (Fla. State Univ.), Chris Velden (Univ. Wisc./CIMMS), and Arlin... (Univ. Wisc.), and Arlin Krueger (NASA/GSFC). Analysis and numerical simulation of the fine-scale structure of upper-level jet streams from high-spatial
Procedures for numerical analysis of circadian rhythms
REFINETTI, ROBERTO; LISSEN, GERMAINE CORNÉ; HALBERG, FRANZ
2010-01-01
This article reviews various procedures used in the analysis of circadian rhythms at the populational, organismal, cellular and molecular levels. The procedures range from visual inspection of time plots and actograms to several mathematical methods of time series analysis. Computational steps are described in some detail, and additional bibliographic resources and computer programs are listed. PMID:23710111
Numerical Analysis of Magnetic Sail Spacecraft
Sasaki, Daisuke; Yamakawa, Hiroshi; Usui, Hideyuki; Funaki, Ikkoh; Kojima, Hirotsugu
2008-12-31
To capture the kinetic energy of the solar wind by creating a large magnetosphere around the spacecraft, a magneto-plasma sail injects a plasma jet into a strong magnetic field produced by an electromagnet onboard the spacecraft. The aim of this paper is to investigate the effect of the IMF (interplanetary magnetic field) on the magnetosphere of a magneto-plasma sail. First, using an axisymmetric two-dimensional MHD code, we numerically confirm the magnetic field inflation and the formation of a magnetosphere through the interaction between the solar wind and the magnetic field. The expansion of an artificial magnetosphere by the plasma injection is then simulated, and we show that the magnetosphere is formed by the interaction between the solar wind and the magnetic field expanded by the plasma jet from the spacecraft. This simulation indicates that the artificial magnetosphere becomes smaller when the IMF is applied.
Manufacturing in space: Fluid dynamics numerical analysis
NASA Technical Reports Server (NTRS)
Robertson, S. J.; Nicholson, L. A.; Spradley, L. W.
1982-01-01
Numerical computations were performed for natural convection in circular enclosures under various conditions of acceleration. It was found that subcritical acceleration vectors applied in the direction of the temperature gradient will lead to an eventual state of rest regardless of the initial state of motion. Supercritical acceleration vectors will lead to the same steady-state condition of motion regardless of the initial state of motion. Convection velocities were computed for acceleration vectors at various angles to the initial temperature gradient. The results for Rayleigh numbers of 1000 or less were found to closely follow Weinbaum's first-order theory. Higher Rayleigh number results were shown to depart significantly from the first-order theory. Supercritical behavior was confirmed for Rayleigh numbers greater than the known supercritical value of 9216. Response times were determined to provide an indication of the time required to change states of motion for the various cases considered.
Stochastic modelling and analysis of IMU sensor errors
NASA Astrophysics Data System (ADS)
Zaho, Y.; Horemuz, M.; Sjöberg, L. E.
2011-12-01
The performance of a GPS/INS integration system is largely determined by the ability of the stand-alone INS to determine position and attitude during GPS outages. Positional and attitude precision degrades rapidly during an outage because of INS sensor errors. Owing to their low price and small volume, Micro Electrical Mechanical Sensors (MEMS) have been widely used in GPS/INS integration; however, a stand-alone MEMS unit can maintain reasonable positional precision for only a few seconds because of systematic and random sensor errors. The general stochastic error sources in inertial sensors can be modelled (IEEE STD 647, 2006) as quantization noise, random walk, bias instability, rate random walk and rate ramp. Here we apply different methods to analyze the stochastic sensor errors, i.e. autoregressive modelling, the Gauss-Markov process, power spectral density and the Allan variance. Tests on a MEMS-based inertial measurement unit were then carried out with these methods. The results show that the different methods give similar estimates of the stochastic error model parameters. These values can be used further in the Kalman filter for better navigation accuracy and in the Doppler frequency estimate for faster acquisition after a GPS signal outage.
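As a minimal illustration of one of the methods listed, the sketch below computes a non-overlapping Allan deviation and checks the 1/√τ slope expected for pure white (angle-random-walk) noise. The function and the simulated gyro data are our assumptions, not the paper's test setup.

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Non-overlapping Allan deviation of a rate signal sampled at fs Hz."""
    out = []
    for tau in taus:
        m = int(round(tau * fs))              # samples per cluster
        n = len(rate) // m                    # number of clusters
        means = rate[: n * m].reshape(n, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        out.append(np.sqrt(avar))
    return np.array(out)

# Pure white noise: Allan deviation should fall as 1/sqrt(tau)
rng = np.random.default_rng(1)
fs = 100.0
gyro = rng.standard_normal(200_000)           # simulated rate samples
adev = allan_deviation(gyro, fs, [0.1, 1.0, 10.0])
print(np.all(np.diff(adev) < 0))              # monotone -1/2 slope region
```

On real MEMS data the log-log Allan plot flattens at the bias-instability floor and turns up where rate random walk dominates, which is how the error-model parameters are read off.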
A comprehensive analysis of translational missense errors in the yeast Saccharomyces cerevisiae.
Kramer, Emily B; Vallabhaneni, Haritha; Mayer, Lauren M; Farabaugh, Philip J
2010-09-01
The process of protein synthesis must be sufficiently rapid and sufficiently accurate to support continued cellular growth. Failure in speed or accuracy can have dire consequences, including disease in humans. Most estimates of the accuracy come from studies of bacterial systems, principally Escherichia coli, and have involved incomplete analysis of possible errors. We recently used a highly quantitative system to measure the frequency of all types of misreading errors by a single tRNA in E. coli. That study found a wide variation in error frequencies among codons; a major factor causing that variation is competition between the correct (cognate) and incorrect (near-cognate) aminoacyl-tRNAs for the mutant codon. Here we extend that analysis to measure the frequency of missense errors by two tRNAs in a eukaryote, the yeast Saccharomyces cerevisiae. The data show that in yeast errors vary by codon from a low of 4 × 10⁻⁵ to a high of 6.9 × 10⁻⁴ per codon and that error frequency is in general about threefold lower than in E. coli, which may suggest that yeast has additional mechanisms that reduce missense errors. Error rate again is strongly influenced by tRNA competition. Surprisingly, missense errors involving wobble position mispairing were much less frequent in S. cerevisiae than in E. coli. Furthermore, the error-inducing aminoglycoside antibiotic, paromomycin, which stimulates errors on all error-prone codons in E. coli, has a more codon-specific effect in yeast.
Research in applied mathematics, numerical analysis, and computer science
NASA Technical Reports Server (NTRS)
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.
NASA Astrophysics Data System (ADS)
Pan, B.; Wang, B.; Lubineau, G.
2016-07-01
Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work.
Systematic error analysis and correction in quadriwave lateral shearing interferometer
NASA Astrophysics Data System (ADS)
Zhu, Wenhua; Li, Jinpeng; Chen, Lei; Zheng, Donghui; Yang, Ying; Han, Zhigang
2016-12-01
To obtain high-precision, high-resolution measurements of dynamic wavefronts, the systematic error of the quadriwave lateral shearing interferometer (QWLSI) is analyzed and corrected. The interferometer combines a chessboard grating with an order-selection mask to select four replicas of the wavefront under test. A collimating lens is introduced to collimate the replicas, which not only eliminates the coma induced by the shear between any two replicas, but also avoids the astigmatism and defocus caused by CCD tilt. In addition, this configuration permits the shear amount to vary from zero, which aids calibration of the systematic errors. A practical transmitted wavefront was measured by the QWLSI with different shear amounts. The systematic errors of the reconstructed wavefronts are well suppressed: the standard deviation of the root mean square is 0.8 nm, which verifies the stability and reliability of the QWLSI for dynamic wavefront measurement.
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
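A minimal simulation in the spirit of the study's repeat-measurement question (our toy example, not the authors' code): averaging r repeated measurements of the same response reduces the measurement-error variance by roughly a factor of r, which is what improves estimation and power.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_m = 2.0                    # response measurement-error SD
n, r = 10_000, 4                 # responses, repeats per response

# True responses are zero here; observed = true + additive measurement error
single = sigma_m * rng.standard_normal(n)            # one measurement each
repeats = sigma_m * rng.standard_normal((n, r))      # r repeats each
averaged = repeats.mean(axis=1)                      # average the repeats

print(averaged.var() < single.var())   # averaging cuts error variance ~r-fold
```

Ignoring this error in a standard t-test analysis leaves it lumped into the residual variance, inflating the denominator and reducing power, which is the effect the simulation study quantifies.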
Witkowski, W.R.; Eldred, M.S.; Harding, D.C.
1994-09-01
The use of state-of-the-art numerical analysis tools to determine the optimal design of a radioactive material (RAM) transportation container is investigated. The design of a RAM package's components involves a complex coupling of structural, thermal, and radioactive shielding analyses. The final design must adhere to very strict design constraints. The current technique used by cask designers is uncoupled and involves designing each component separately with respect to its driving constraint. With the use of numerical optimization schemes, the complex couplings can be considered directly, and the performance of the integrated package can be maximized with respect to the analysis conditions. This can lead to more efficient package designs. Thermal and structural accident conditions are analyzed in the shape optimization of a simplified cask design. In this paper, details of the integration of numerical analysis tools, development of a process model, nonsmoothness difficulties with the optimization of the cask, and preliminary results are discussed.
Analysis and improvement of gas turbine blade temperature measurement error
NASA Astrophysics Data System (ADS)
Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui
2015-10-01
Gas turbine blades operate in harsh high-temperature, high-pressure environments over extended durations and are easily damaged. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. Each of these sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.
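One of the listed error sources, blade-surface emissivity, admits a compact correction in the Wien approximation. The sketch below is a textbook single-wavelength pyrometer correction, not the paper's iterative method, and it deliberately ignores reflected environmental radiation and combustion-gas effects; the 0.85 emissivity and 1.6 µm wavelength are illustrative assumptions.

```python
import math

C2 = 1.4388e-2   # second radiation constant c2 = h*c/k_B, in m*K

def true_temperature(t_apparent, emissivity, wavelength):
    """Wien-approximation emissivity correction for a spectral pyrometer.

    t_apparent: blackbody-equivalent reading in K; wavelength in metres."""
    inv_t = 1.0 / t_apparent + (wavelength / C2) * math.log(emissivity)
    return 1.0 / inv_t

# Illustrative: a blade of emissivity 0.85 read by a 1.6-um pyrometer
t = true_temperature(1100.0, 0.85, 1.6e-6)
print(t > 1100.0)   # a grey body reads lower than its true temperature
```

The sign of the correction follows directly from Wien's law: emissivity below one lowers the detected radiance, so the apparent temperature always underestimates the true one.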
Bayesian analysis of truncation errors in chiral effective field theory
NASA Astrophysics Data System (ADS)
Melendez, J.; Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.
2016-09-01
In the Bayesian approach to effective field theory (EFT) expansions, truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. By encoding expectations about the naturalness of EFT expansion coefficients for observables, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. We extend and test previous calculations of DOB intervals for chiral EFT observables, examine correlations between contributions at different orders and energies, and explore methods to validate the statistical consistency of the EFT expansion parameter. Supported in part by the NSF and the DOE.
Dongarra, J. (Dept. of Computer Science, Oak Ridge National Lab., TN); Rosener, B. (Dept. of Computer Science)
1991-12-01
This report describes a facility called NA-NET created to allow numerical analysts (na) an easy method of communicating with one another. The main advantage of the NA-NET is uniformity of addressing. All mail is addressed to the Internet host "na-net.ornl.gov" at Oak Ridge National Laboratory. Hence, members of the NA-NET do not need to remember complicated addresses or even where a member is currently located. As long as moving members change their e-mail address in the NA-NET everything works smoothly. The NA-NET system is currently located at Oak Ridge National Laboratory. It is running on the same machine that serves netlib. Netlib is a separate facility that distributes mathematical software via electronic mail. For more information on netlib consult, or send the one-line message "send index" to netlib@ornl.gov. The following report describes the current NA-NET system from both a user's perspective and from an implementation perspective. Currently, there are over 2100 members in the NA-NET. An average of 110 mail messages pass through this facility daily.
Numerical Analysis Of Interlaminar-Fracture Toughness
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.
1988-01-01
Finite-element analysis applied in conjunction with strain-energy and micromechanical concepts. Computational procedure involves local, local-crack-closure, and/or the "unique" local-crack-closure method developed at NASA Lewis Research Center, for mathematical modeling of ENF and MMF. Methods based on three-dimensional finite-element analysis in conjunction with concept of strain-energy-release rate and with micromechanics of composite materials. Assists in interpretation of ENF and MMF fracture tests performed to obtain fracture-toughness parameters, by enabling evaluation of states of stress likely to induce interlaminar fractures.
Analysis of Children's Errors in Comprehension and Expression
ERIC Educational Resources Information Center
Hatcher, Ryan C.; Breaux, Kristina C.; Liu, Xiaochen; Bray, Melissa A.; Ottone-Cross, Karen L.; Courville, Troy; Luria, Sarah R.; Langley, Susan Dulong
2017-01-01
Children's oral language skills typically begin to develop sooner than their written language skills; however, the four language systems (listening, speaking, reading, and writing) then develop concurrently as integrated strands that influence one another. This research explored relationships between students' errors in language comprehension of…
Oral Definitions of Newly Learned Words: An Error Analysis
ERIC Educational Resources Information Center
Steele, Sara C.
2012-01-01
This study examined and compared patterns of errors in the oral definitions of newly learned words. Fifteen 9- to 11-year-old children with language learning disability (LLD) and 15 typically developing age-matched peers inferred the meanings of 20 nonsense words from four novel reading passages. After reading, children provided oral definitions…
Young Children's Mental Arithmetic Errors: A Working-Memory Analysis.
ERIC Educational Resources Information Center
Brainerd, Charles J.
1983-01-01
Presents a stochastic model for distinguishing mental arithmetic errors according to causes of failure. A series of experiments (1) studied questions of goodness of fit and model validity among four and five year olds and (2) used the model to measure the relative contributions of developmental improvements in short-term memory and arithmetical…
Error and Uncertainty Analysis for Ecological Modeling and Simulation
2001-12-01
in GIS has been proposed by Openshaw (1992) based on Monte Carlo simulation (recommended method). As we mentioned above, however, this method is... Modelling, 8: 297-311. Openshaw, S., 1992. Learning to live with errors in spatial databases. Accuracy of spatial databases (Eds. Goodchild, M., & S
Pitch Error Analysis of Young Piano Students' Music Reading Performances
ERIC Educational Resources Information Center
Rut Gudmundsdottir, Helga
2010-01-01
This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…
Analysis of Errors Made by Students Solving Genetics Problems.
ERIC Educational Resources Information Center
Costello, Sandra Judith
The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…
Shape error analysis for reflective nano focusing optics
Modi, Mohammed H.; Idir, Mourad
2010-06-23
Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error into the outgoing wave fields, so the converging beam at the focal spot will differ from the desired performance. The effect of these errors on focusing performance can be calculated by a wave-optical approach, considering coherent wave-field illumination of the optical elements. We have developed a wave-optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as the aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range in the high-, mid- and low-frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak to valley) is tolerable for diffraction-limited performance. It is desirable to remove shape error at very low spatial frequencies, around 0.1 mm⁻¹, which will otherwise generate a beam waist or satellite peaks. All frequencies above this limit do not affect the focused beam profile but only cause a loss in intensity.
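The λ/4 PV tolerance quoted above can be sanity-checked with the Maréchal Strehl estimate. The sketch below is our illustration, with an assumed grazing angle, ripple period and x-ray wavelength: a sinusoidal figure error sized to give a λ/4 PV reflected wavefront yields a Strehl ratio of about 0.73, close to the conventional 0.8 diffraction-limited threshold.

```python
import numpy as np

wavelength = 1.0e-10                 # 1 angstrom x-rays (illustrative)
grazing = np.deg2rad(0.2)            # assumed grazing-incidence angle

# Sinusoidal figure error sized so the reflected wavefront has lambda/4 PV
x = np.linspace(0.0, 0.1, 1000)                       # 100 mm mirror
height_amp = (wavelength / 4) / (2 * np.sin(grazing)) / 2
height = height_amp * np.sin(2 * np.pi * x / 0.01)    # 10 mm period ripple

# Path-length error of the reflected wave at grazing incidence
wavefront = 2 * height * np.sin(grazing)              # PV = lambda/4
phase = 2 * np.pi * wavefront / wavelength
strehl = np.exp(-np.var(phase))                       # Marechal estimate
print(0.7 < strehl < 0.8)
```

Note the grazing-incidence factor 2·sin(θ): at a fraction of a degree, nanometre-scale height errors translate into only picometre-scale wavefront errors, which is why x-ray mirrors tolerate figure errors far larger than the wavelength.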
Uncertainty analysis of selected sources of errors in bioelectromagnetic investigations.
Dlugosz, Tomasz
2014-01-01
The aim of this paper is to focus attention of experimenters on several sources of error that are not taken into account in the majority of bioelectromagnetics experiments, and which may lead to complete falsification of the results of the experiments.
Analysis of Students' Error in Learning of Quadratic Equations
ERIC Educational Resources Information Center
Zakaria, Effandi; Ibrahim; Maat, Siti Mistima
2010-01-01
The purpose of the study was to determine students' errors in learning quadratic equations. The samples were 30 form three students from a secondary school in Jambi, Indonesia. A diagnostic test was used as the instrument of this study; it included three components: factorization, completing the square, and the quadratic formula. Diagnostic interview…
ERIC Educational Resources Information Center
McGuire, Patrick
2013-01-01
This article describes how a free, web-based intelligent tutoring system, (ASSISTment), was used to create online error analysis items for preservice elementary and secondary mathematics teachers. The online error analysis items challenged preservice teachers to analyze, diagnose, and provide targeted instructional remediation intended to help…
Rio Hondo Sediment Assessment Analysis Using SAM. Numerical Model Investigation
1991-05-01
MISCELLANEOUS PAPER HL-91-1: Rio Hondo Sediment Assessment Analysis Using SAM; Numerical Model Investigation (AD-A238 572), by Nolan K. Raphelt, Michael J. Trawle, William A... A sediment assessment analysis of the Rio Hondo through Roswell, NM, was conducted. The investigation represented a sediment assessment level study conducted to test for potential
Numerical Analysis of the Sea State Bias for Satellite Altimetry
NASA Technical Reports Server (NTRS)
Glazman, R. E.; Fabrikant, A.; Srokosz, M. A.
1996-01-01
Theoretical understanding of the dependence of sea state bias (SSB) on wind wave conditions has been achieved only for the case of a unidirectional wind-driven sea. Recent analysis of Geosat and TOPEX altimeter data showed that additional factors, such as swell, ocean currents, and complex directional properties of realistic wave fields, may influence SSB behavior. Here we investigate effects of two-dimensional multimodal wave spectra using a numerical model of radar reflection from a random, non-Gaussian surface. A recently proposed ocean wave spectrum is employed to describe sea surface statistics. The following findings appear to be of particular interest: (1) Sea swell has an appreciable effect in reducing the SSB coefficient compared with the pure wind sea case but has less effect on the actual SSB owing to the corresponding increase in significant wave height. (2) Hidden multimodal structure (the two-dimensional wavenumber spectrum contains separate peaks, for swell and wind seas, while the frequency spectrum looks unimodal) results in an appreciable change of SSB. (3) For unimodal, purely wind-driven seas, the influence of the angular spectral width is relatively unimportant; that is, a unidirectional sea provides a good qualitative model for SSB if the swell is absent. (4) The pseudo wave age is generally much better for parametrizing the SSB coefficient than the actual wave age (which is ill-defined for a multimodal sea) or wind speed. (5) SSB can be as high as 5% of the significant wave height, which is significantly greater than predicted by present empirical model functions tuned on global data sets. (6) Parameterization of SSB in terms of wind speed is likely to lead to errors due to the dependence on the (in practice, unknown) fetch.
Numerical analysis of the sea state bias for satellite altimetry
NASA Astrophysics Data System (ADS)
Glazman, R. E.; Fabrikant, A.; Srokosz, M. A.
1996-02-01
Theoretical understanding of the dependence of sea state bias (SSB) on wind wave conditions has been achieved only for the case of a unidirectional wind-driven sea [Jackson, 1979; Rodriguez et al., 1992; Glazman and Srokosz, 1991]. Recent analysis of Geosat and TOPEX altimeter data showed that additional factors, such as swell, ocean currents, and complex directional properties of realistic wave fields, may influence SSB behavior. Here we investigate effects of two-dimensional multimodal wave spectra using a numerical model of radar reflection from a random, non-Gaussian surface. A recently proposed ocean wave spectrum is employed to describe sea surface statistics. The following findings appear to be of particular interest: (1) Sea swell has an appreciable effect in reducing the SSB coefficient compared with the pure wind sea case but has less effect on the actual SSB, owing to the corresponding increase in significant wave height. (2) Hidden multimodal structure (the two-dimensional wavenumber spectrum contains separate peaks, for swell and wind seas, while the frequency spectrum looks unimodal) results in an appreciable change of SSB. (3) For unimodal, purely wind-driven seas, the influence of the angular spectral width is relatively unimportant; that is, a unidirectional sea provides a good qualitative model for SSB if the swell is absent. (4) The pseudo wave age is generally much better for parametrizing the SSB coefficient than the actual wave age (which is ill-defined for a multimodal sea) or wind speed. (5) SSB can be as high as 5% of the significant wave height, which is significantly greater than predicted by present empirical model functions tuned on global data sets. (6) Parameterization of SSB in terms of wind speed is likely to lead to errors due to the dependence on the (in practice, unknown) fetch.
Numerical analysis of the orthogonal descent method
Shokov, V.A.; Shchepakin, M.B.
1994-11-01
The author of the orthogonal descent method has been testing it since 1977. The results of these tests have only strengthened the need for further analysis and development of orthogonal descent algorithms for various classes of convex programming problems. Systematic testing of orthogonal descent algorithms, and comparison of the test results with other nondifferentiable optimization methods, was conducted at TsEMI RAN in 1991-1992.
Numerical bifurcation analysis of immunological models with time delays
NASA Astrophysics Data System (ADS)
Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady
2005-12-01
In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
Simultaneous control of error rates in fMRI data analysis.
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-12-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
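As a rough illustration of the likelihood paradigm described above, the sketch below computes a voxel-wise likelihood ratio for a simple-vs-simple Gaussian test on simulated data and flags voxels whose evidence exceeds a fixed benchmark k. All numbers (effect size, k = 32, grid size) are assumptions for the sketch, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_scans, n_vox = 100, 1000
truth = np.zeros(n_vox)
truth[:50] = 0.5                         # 50 truly active voxels (assumed effect)
data = rng.normal(truth, 1.0, size=(n_scans, n_vox))

mu0, mu1, k = 0.0, 0.5, 32.0             # null mean, active mean, benchmark k
# voxel-wise log likelihood ratio accumulated over all scans
loglr = (stats.norm.logpdf(data, mu1, 1.0).sum(axis=0)
         - stats.norm.logpdf(data, mu0, 1.0).sum(axis=0))
active = loglr > np.log(k)               # strong evidence for the active mean
print(f"flagged {active.sum()} voxels; false positives: {active[50:].sum()}")
```

Note how, with many scans per voxel, both the per-voxel false-positive and false-negative probabilities become small simultaneously, which is the behavior the abstract exploits.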
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors to five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and the parameters. Thus the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we turn back to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
Palaeomagnetic analysis of plunging fold structures: Errors and a simple fold test
NASA Astrophysics Data System (ADS)
Stewart, Simon A.
1995-02-01
The conventional corrections for bedding dip in palaeomagnetic studies involve untilting either about strike or about some inclined axis; the choice is usually governed by the perceived fold hinge orientation. While it has been recognised that untilting bedding about strike can be erroneous if the beds lie within plunging fold structures, there are several types of fold which have plunging hinges but whose limbs have rotated about horizontal axes. Examples are interference structures and forced folds; restoration about inclined axes may be incorrect in these cases. The angular errors imposed upon palaeomagnetic lineation data by the wrong choice of rotation axis during unfolding are calculated here and presented for lineations in any orientation that could be associated with an upright, symmetrical fold. This extends to palaeomagnetic data previous analyses that were relevant only to bedding-parallel lineations. The numerical analysis highlights the influence of the various parameters describing fold geometry and relative lineation orientation upon the angular error imparted to lineation data by the wrong unfolding method. The effect of each parameter is described, and the interaction of the parameters in producing the final error is discussed. Structural and palaeomagnetic data are cited from two field examples of fold structures which illustrate the alternative kinematic histories. Both are from thin-skinned thrust belts, but the data show that one is a true plunging fold, formed by rotation about its inclined hinge, whereas the other is an interference structure produced by rotation of the limbs about non-parallel horizontal axes. Since the angle between the palaeomagnetic lineations and the inclined fold hinge is equal on both limbs in the former type of structure, but varies from limb to limb in the latter, a simple test can be defined which uses palaeomagnetic lineation data to identify rotation axes and hence fold type. This test can use pre- or syn
Numerical analysis on pump turbine runaway points
NASA Astrophysics Data System (ADS)
Guo, L.; Liu, J. T.; Wang, L. Q.; Jiao, L.; Li, Z. F.
2012-11-01
To investigate the character of pump turbine runaway points with different guide vane openings, a hydraulic model was established based on a pumped storage power station. The RNG k-ε model and SIMPLEC algorithm were used to simulate the internal flow fields. The simulation results were compared with test data, and good agreement was obtained between the experimental data and the CFD results. Based on this model, an internal flow analysis was carried out. The results show that when the pump turbine ran at runaway speed, many vortices appeared in the flow passage of the runner. These vortices could be observed even as the guide vane opening changed, and they are an important source of energy loss in the runaway condition. Pressure on the two sides of the runner blades was almost the same, so the runner power is very low. The high speed induced a large centrifugal force, and the small guide vane opening gave the water velocity a large tangential component; an obvious water ring could then be observed between the runner blades and guide vanes at small guide vane openings. That ring disappeared when the opening was larger than 20°. These conclusions can provide a theoretical basis for the analysis and simulation of pump turbine runaway points.
Numerical Uncertainty Quantification for Radiation Analysis Tools
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha
2007-01-01
Recently a new emphasis has been placed on engineering applications of space radiation analyses and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing, as the number of rays increase the associated uncertainty decreases, but the computational expense increases. Thus, a cost benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is what is the number of thicknesses that is needed to get an accurate result. So convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
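The second uncertainty source above amounts to a classical interpolation convergence test. The sketch below uses a made-up smooth dose-vs-depth surrogate (not output of the NASA radiation tools) to show how the worst-case linear-interpolation error shrinks as the number of sampled shield thicknesses grows.

```python
import numpy as np

def dose(depth):
    # hypothetical smooth dose-vs-depth curve: buildup then attenuation
    return depth * np.exp(-depth / 5.0)

fine = np.linspace(0.0, 30.0, 10001)            # dense reference grid
for n in (5, 10, 20, 40, 80):
    grid = np.linspace(0.0, 30.0, n)            # n sampled shield thicknesses
    approx = np.interp(fine, grid, dose(grid))  # linear interpolation
    err = np.max(np.abs(approx - dose(fine)))
    print(f"{n:3d} thicknesses -> max interpolation error {err:.5f}")
```

For a smooth curve the error of linear interpolation falls roughly quadratically with grid spacing, so doubling the number of thicknesses cuts the worst-case error by about a factor of four.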
Numerical analysis of soil-structure interaction
NASA Astrophysics Data System (ADS)
Vanlangen, Harry
1991-05-01
A study to improve some existing procedures for the finite element analysis of soil deformation and collapse is presented. Special attention is paid to problems of soil-structure interaction. Emphasis is put on the behavior of the soil rather than on that of structures, which seems justifiable when static interaction of stiff structures and soft soil is considered. In such a case nonlinear response stems exclusively from soil deformation, and the quality of the results depends largely on proper modeling of soil flow along structures rather than on modeling of the structure itself. An exception is made when geotextile reinforcement is considered; in that case the structural element, i.e., the geotextile, is highly flexible. The equation of continuum equilibrium, which serves as a starting point for the finite element formulation of large-deformation elastoplasticity, is discussed, with special attention paid to the interpretation of some objective stress rate tensors. The solution of nonlinear finite element equations is addressed. Soil deformation in the prefailure range is discussed, and large-deformation effects in the analysis of soil deformation are touched upon.
Digital floodplain mapping and an analysis of errors involved
Hamblen, C.S.; Soong, D.T.; Cai, X.
2007-01-01
Mapping floodplain boundaries using geographic information systems (GIS) and digital elevation models (DEMs) was completed in a recent study. However convenient this method may appear at first, the resulting maps can potentially have unaccounted errors. Mapping the floodplain using GIS is faster than mapping manually, and digital mapping is expected to become more common in the future. When mapping is done manually, the experience and judgment of the engineer or geographer completing the mapping and the contour resolution of the surface topography are critical in determining the floodplain and floodway boundaries between cross sections. When mapping is done digitally, discrepancies can result from the computing algorithm and the digital topographic datasets. Understanding the possible sources of error and how the error accumulates through these processes is necessary for the validation of automated digital mapping. This study evaluates the procedure of floodplain mapping using GIS and a 3 m by 3 m resolution DEM, with a focus on the accumulated errors involved in the process. Within the GIS environment of this mapping method, the procedural steps of most interest are: (1) the accurate spatial representation of the stream centerline and cross sections; (2) proper use of a triangulated irregular network (TIN) model for the flood elevations of the studied cross sections, the interpolated elevations between them, and the extrapolated flood elevations beyond the cross sections; and (3) the comparison of the flood-elevation TIN with the ground-elevation DEM, from which the appropriate inundation boundaries are delineated. The study area is of relatively low topographic relief, making it representative of common suburban development and a prime setting for the need for accurately mapped floodplains. This paper emphasizes the impacts of integrating supplemental digital terrain data between cross sections on floodplain delineation
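Step (3) of the workflow above reduces to a cell-by-cell surface comparison. A minimal sketch, with synthetic stand-ins for the interpolated flood-elevation surface and the ground DEM:

```python
import numpy as np

ny, nx = 200, 200
y, x = np.mgrid[0:ny, 0:nx]

# synthetic valley DEM (m): ground rises away from the channel centerline
ground = 100.0 + 0.05 * np.abs(x - nx // 2)
# synthetic flood-elevation surface (m): water level falling downstream
flood = 102.0 - 0.005 * y

inundated = flood > ground                    # boolean inundation mask
print(f"inundated cells: {inundated.sum()} of {inundated.size}")
```

In a real GIS workflow the flood surface would come from a TIN built on the modeled cross-section elevations, but the delineation step is exactly this comparison.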
Calculating Internal Avalanche Velocities From Correlation With Error Analysis.
NASA Astrophysics Data System (ADS)
McElwaine, J. N.; Tiefenbacher, F.
Velocities inside avalanches have been calculated for many years by calculating the cross-correlation between light sensitive sensors using a method pioneered by Dent. His approach has been widely adopted but suffers from four shortcomings. (i) Correlations are studied between pairs of sensors rather than between all sensors simultaneously. This can result in inconsistent velocities and does not extract the maximum information from the data. (ii) The longer the time that the correlations are taken over the better the noise rejection, but errors due to non-constant velocity increase. (iii) The errors are hard to quantify. (iv) The calculated velocities are usually widely scattered and discontinuous. A new approach is described that produces a continuous velocity field from any number of sensors at arbitrary locations. The method is based on a variational principle that reconstructs the underlying signal as it is advected past the sensors and enforces differentiability on the velocity. The errors in the method are quantified and applied to the problem of optimal sensor positioning and design. Results on SLF data from chute experiments are discussed.
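For reference, the classical pairwise approach the abstract critiques can be sketched in a few lines: cross-correlate two sensor records to find the transit lag, then divide the sensor spacing by the lag time. The geometry, sampling rate, and signals below are synthetic.

```python
import numpy as np

dt, spacing = 1e-3, 0.05            # 1 kHz sampling, 5 cm sensor spacing (assumed)
t = np.arange(0.0, 1.0, dt)
rng = np.random.default_rng(1)

pulse = np.exp(-((t - 0.3) / 0.02) ** 2)        # feature advected past the sensors
lag_true = 25                                    # 25 samples = 25 ms transit time
s1 = pulse + 0.05 * rng.normal(size=t.size)
s2 = np.roll(pulse, lag_true) + 0.05 * rng.normal(size=t.size)

# peak of the cross-correlation gives the transit lag between the sensors
xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lag = np.argmax(xcorr) - (t.size - 1)
velocity = spacing / (lag * dt)
print(f"estimated velocity: {velocity:.2f} m/s")
```

The single integer-sample lag here illustrates shortcoming (iv): pairwise estimates are quantized and scattered, which motivates the variational, all-sensor reconstruction the abstract proposes.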
PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.
PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.
1999-03-29
All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure, special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials have been stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.
Probability analysis of position errors using uncooled IR stereo camera
NASA Astrophysics Data System (ADS)
Oh, Jun Ho; Lee, Sang Hwa; Lee, Boo Hwan; Park, Jong-Il
2016-05-01
This paper analyzes the random behavior of 3D positions when tracking moving objects with an infrared (IR) stereo camera, and proposes a probability model of the 3D positions. The proposed model integrates two random error phenomena. One is the pixel quantization error, caused by the discrete sampling pixels used in estimating disparity values of the stereo camera. The other is the timing jitter, which results from the irregular acquisition timing of uncooled IR cameras. This paper derives a probability distribution function by combining the jitter model with the pixel quantization error. To verify the proposed probability function of 3D positions, experiments on tracking fast moving objects are performed using an IR stereo camera system. The 3D depths of the moving object are estimated by stereo matching and compared with ground truth obtained by a laser scanner system. According to the experiments, the 3D depths of the moving object are estimated within the statistically reliable range derived from the proposed probability distribution. It is expected that the proposed probability model of 3D positions can be applied to various IR stereo camera systems that deal with fast moving objects.
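The pixel-quantization component discussed above follows directly from the stereo depth relation Z = fB/d. A minimal sketch (camera parameters are assumed, not taken from the paper) showing how a half-pixel disparity step maps to a depth error that grows quadratically with range:

```python
f_px = 800.0        # focal length in pixels (assumed)
baseline = 0.20     # stereo baseline in metres (assumed)
quant = 0.5         # disparity quantization step: half a pixel

for z in (5.0, 10.0, 20.0, 40.0):
    d = f_px * baseline / z                     # disparity at range z
    # first-order propagation of the quantization step through Z = f*B/d:
    # dZ ~ Z^2 * dd / (f*B)
    dz = z ** 2 * quant / (f_px * baseline)
    print(f"Z={z:5.1f} m  disparity={d:6.2f} px  depth error ~ +/-{dz:.3f} m")
```

Doubling the range quadruples the quantization-induced depth uncertainty, which is why the error distribution for fast, distant objects is dominated by this term together with timing jitter.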
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace, and aeronautics mishap incidents are attributed to human error. As part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as analysis tools to identify contributing factors, their impact on human error events, and to predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.
NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET
NASA Technical Reports Server (NTRS)
Kumar, A.
1994-01-01
The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.
Combustion irreversibilities: Numerical simulation and analysis
NASA Astrophysics Data System (ADS)
Silva, Valter; Rouboa, Abel
2012-08-01
An exergy analysis was performed considering the combustion of methane and agro-industrial residues produced in Portugal (forest residues and vine prunings). Given that the irreversibilities of a thermodynamic process are path dependent, the combustion process was considered as resulting from different hypothetical paths, each characterized by four main sub-processes: reactant mixing, fuel oxidation, internal thermal energy exchange (heat transfer), and product mixing. The exergetic efficiency was computed using a zero-dimensional model developed as an in-house Visual Basic code. It was concluded that the exergy losses were mainly due to the internal thermal energy exchange sub-process. The exergy losses from this sub-process are higher when the reactants are preheated up to the ignition temperature without previous fuel oxidation. On the other hand, the global exergy destruction can be reduced by increasing the pressure, the reactant temperature, and the oxygen content of the oxidant stream. This methodology allows identification of the phenomena and processes that have the largest exergy losses, understanding of why these losses occur, and analysis of how the exergy changes with the parameters of each system, which is crucial to establishing syngas combustion from biomass products as a competitive technology.
Numerical analysis of human dental occlusal contact
NASA Astrophysics Data System (ADS)
Bastos, F. S.; Las Casas, E. B.; Godoy, G. C. D.; Meireles, A. B.
2010-06-01
The purpose of this study was to obtain real contact areas, forces, and pressures acting on human dental enamel as a function of the nominal pressure during dental occlusal contact. The described development consisted of three steps: characterization of the surface roughness by a 3D contact profilometry test, finite element analysis of micro responses for each pair of main asperities in contact, and homogenization of macro responses using an assumed probability density function. The inelastic deformation of enamel was considered, adjusting the stress-strain relationship of sound enamel to that obtained from instrumented indentation tests conducted with a spherical tip. The mechanical part of the static friction coefficient was estimated as the ratio between the tangential and normal components of the overall resistive force, resulting in μd = 0.057. Less than 1% of contact pairs reached the yield stress of enamel, indicating that the occlusal contact is essentially elastic. The micro-models indicated an average hardness of 6.25 GPa, and the homogenized result for the macroscopic interface was around 9 GPa. Further refinements of the methodology and verification using experimental data can provide a better understanding of processes related to contact, friction, and wear of human tooth enamel.
Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.
Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M
2012-09-13
Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following committed errors in reaction time tasks as low frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy controls completed a forced-choice task. Single-trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. A comparison of the amplitude and latency of the time-frequency ERN and Pe between the PTSD and control groups was performed. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single-trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders.
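A simplified version of the single-trial procedure can be sketched by convolving one synthetic epoch with complex Morlet wavelets at 4-8 Hz and averaging the magnitudes into a theta-power waveform. The wavelet parameterisation and the epoch below are assumptions; the study's exact transform may differ.

```python
import numpy as np

fs = 250.0                                     # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.8, 1.0 / fs)             # response-locked epoch (s)
rng = np.random.default_rng(2)
# synthetic error trial: a 6 Hz theta burst shortly after the response
eeg = (np.sin(2 * np.pi * 6 * t) * np.exp(-((t - 0.05) / 0.1) ** 2)
       + 0.3 * rng.normal(size=t.size))

def morlet(freq, fs, n_cycles=3):
    # complex Morlet wavelet, n_cycles long (illustrative parameterisation)
    half = n_cycles / (2.0 * freq)
    tw = np.arange(-half, half, 1.0 / fs)
    sigma = n_cycles / (2.0 * np.pi * freq)
    return np.exp(2j * np.pi * freq * tw) * np.exp(-tw ** 2 / (2 * sigma ** 2))

# transform magnitude at 4-8 Hz, averaged into a single theta waveform
theta = np.mean([np.abs(np.convolve(eeg, morlet(f, fs), mode="same"))
                 for f in (4.0, 5.0, 6.0, 7.0, 8.0)], axis=0)
peak_latency = t[np.argmax(theta)]
print(f"theta peak at {1000 * peak_latency:.0f} ms post-response")
```

Amplitude and latency of such theta peaks are the single-trial measures that would be compared between groups.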
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Errors in logic and statistics plague a meta-analysis
Technology Transfer Automated Retrieval System (TEKTRAN)
The non-target effects of transgenic insecticidal crops has been a topic of debate for over a decade and many laboratory and field studies have addressed the issue in numerous countries. In 2009 Lovei et al. (Transgenic Insecticidal Crops and Natural Enemies: A Detailed Review of Laboratory Studies)...
Quantitative analysis of errors in fractionated stereotactic radiotherapy.
Choi, D R; Kim, D Y; Ahn, Y C; Huh, S J; Yeo, I J; Nam, D H; Lee, J I; Park, K; Kim, J H
2001-01-01
Fractionated stereotactic radiotherapy (FSRT) offers a technique to minimize the absorbed dose to normal tissues; therefore, quality assurance is essential for these procedures. In this study, quality assurance for FSRT in 58 cases treated between August 1995 and August 1997 is described, and the errors for each step and the overall accuracy are estimated. Some of the important items for FSRT procedures are: accuracy of CT localization, transferred-image distortion, laser alignment, isocentric accuracy of the linear accelerator, head frame movement, portal verification, and various human errors. A geometric phantom with known coordinates was used to estimate the accuracy of CT localization. A treatment planning computer was used to check the transferred-image distortion. The mechanical isocenter standard (MIS), rectilinear phantom pointer (RLPP), and laser target localizer frame (LTLF) were used for laser alignment and setting of the target coordinates. The head-frame stability check was performed with a depth confirmation helmet (DCH). A film test was done to check isocentric accuracy and portal verification. All measured data for the 58 patients were recorded and analyzed for each item. 4-MV x-rays from a linear accelerator were used for FSRT, along with homemade circular cones with diameters from 20 to 70 mm (5-mm intervals). The accuracy of CT localization was 1.2+/-0.5 mm. The isocentric accuracy of the linear accelerator, including laser alignment, was 0.5+/-0.2 mm. The reproducibility of the head frame was 1.1+/-0.6 mm. The overall accuracy was 1.7+/-0.7 mm, excluding human errors.
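The quoted overall accuracy is consistent with combining the individual step errors in quadrature, assuming they are independent (an assumption on our part; the abstract does not state how the total was formed):

```python
import math

# per-step uncertainties reported in the abstract (mm)
steps = {"CT localization": 1.2, "isocenter incl. laser": 0.5, "head frame": 1.1}

# root-sum-of-squares combination of independent errors
overall = math.sqrt(sum(v ** 2 for v in steps.values()))
print(f"combined error: {overall:.1f} mm")    # agrees with the reported 1.7 mm
```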
Hamming, R W
1965-04-23
I hope I have shown not that mathematicians are incompetent or wrong, but why I believe that their interests, tastes, and objectives are frequently different from those of practicing numerical analysts, and why activity in numerical analysis should be evaluated by its own standards and not by those of pure mathematics. I hope I have also shown you that much of the "art form" of mathematics consists of delicate, "noise-free" results, while many areas of applied mathematics, especially numerical analysis, are dominated by noise. Again, in computing the process is fundamental, and rigorous mathematical proofs are often meaningless in computing situations. Finally, in numerical analysis, as in engineering, choosing the right model is more important than choosing the model with the elegant mathematics.
Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors
NASA Astrophysics Data System (ADS)
San, Bingbing; Yang, Qingshan; Yin, Liwei
2017-03-01
Inflatable antennas are promising candidates to realize future satellite communications and space observations since they are lightweight, low-cost and small-packaged-volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors, and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in elastic modulus of membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation and the interaction between error sources. Analyses are parametrically carried out with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on significance ranking of error sources. The research reveals that RMS (Root Mean Square) of shape error is a random quantity with an exponent probability distribution and features great dispersion; with the increase of F/D and D, both mean value and standard deviation of shape errors are increased; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect with a much higher weight than the others; pressure variation ranks the second; error in thickness and elastic modulus of membrane ranks the last with very close sensitivities to pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors and allowable values of error sources are proposed from the perspective of reliability.
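Latin hypercube sampling coupled with a correlation-based sensitivity ranking, as used in this study, can be sketched in a few lines of pure Python. The response model and its coefficients below are invented for illustration; only the sampling and ranking mechanics mirror the paper's approach:

```python
import random

def latin_hypercube(n, k, rng):
    """n samples in k dimensions: each variable is split into n strata,
    and each stratum is hit exactly once (one random permutation per variable)."""
    cols = []
    for _ in range(k):
        strata = list(range(n))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))  # n rows of k values in [0, 1)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(1)
samples = latin_hypercube(200, 4, rng)

# Toy stand-in for RMS shape error: the "boundary deviation" input (index 2)
# dominates, echoing the paper's ranking; the coefficients are invented.
response = [0.1 * x0 + 0.1 * x1 + 1.0 * x2 + 0.3 * x3 for x0, x1, x2, x3 in samples]

ranking = sorted(range(4), key=lambda j: -abs(pearson([s[j] for s in samples], response)))
print(ranking[0])  # 2 (the dominant "boundary deviation" input)
```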
ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.
Rosenfield, George H.; Fitzpatrick-Lins, Katherine
1984-01-01
Summary form only given. A classification error matrix typically contains tabulation results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the correctly classified counts, and the usual designation of classification accuracy has been the total percent correct. The nondiagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a problem of remotely sensed data concerning two photointerpreters and four categories of classification indicated that there is no significant difference in the interpretation between the two photointerpreters, and that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy which have already been presented in the remote sensing literature.
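The coefficient of agreement referred to here is Cohen's kappa, computed from the diagonal and the row and column totals of the error matrix. A minimal sketch with an invented 2x2 matrix:

```python
# Hypothetical 2x2 error matrix (counts): rows = one interpreter (or the map),
# columns = the other interpreter (or the reference). Values are invented.
matrix = [[20, 5],
          [10, 15]]

n = sum(sum(row) for row in matrix)
p_observed = sum(matrix[i][i] for i in range(len(matrix))) / n  # percent correct

row_totals = [sum(row) for row in matrix]
col_totals = [sum(col) for col in zip(*matrix)]
p_expected = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2

# Agreement beyond what the marginal totals would produce by chance
kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 4))  # 0.4 for this matrix
```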
Error analysis of combined stereo/optical-flow passive ranging
NASA Technical Reports Server (NTRS)
Barniv, Yair
1991-01-01
The motion of an imaging sensor causes each imaged point of the scene to correspondingly describe a time trajectory on the image plane. The trajectories of all imaged points are reminiscent of a flow (e.g., of liquid) which is the source of the term 'optical flow'. Optical-flow ranging is a method by which the stream of two-dimensional images obtained from a forward-looking forward-moving passive sensor is used to compute depth (or range) to points in the field of view. Another well-known ranging method consists of triangulation based on stereo images obtained from at least two stationary sensors. In this paper we analyze the potential accuracies of a combined optical flow and stereo passive-ranging system in the context of helicopter nap-of-the-earth obstacle avoidance. The Cramer-Rao lower bound is developed for the combined system under the assumption of an unknown angular bias error common to both cameras of a stereo pair. It is shown that the depth accuracy degradation caused by a bias error is negligible for a combined optical-flow and stereo system as compared to a monocular optical-flow system.
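The range-accuracy behavior underlying such an analysis follows from standard stereo triangulation (a textbook relation, not taken from this paper): depth is z = fB/d for focal length f, baseline B, and disparity d, so a disparity error δd maps to a depth error of roughly z²/(fB)·δd, growing quadratically with range. The parameter values below are illustrative only:

```python
f, B = 800.0, 0.5   # focal length (pixels) and stereo baseline (m); assumed values
d_err = 0.25        # assumed disparity measurement error (pixels)

def depth(d):
    """Stereo triangulation: range from disparity."""
    return f * B / d

for z in (10.0, 50.0, 100.0):
    d = f * B / z                            # disparity observed at this range
    dz_analytic = z * z / (f * B) * d_err    # first-order error propagation
    dz_numeric = abs(depth(d - d_err) - z)   # finite-difference check
    print(z, round(dz_analytic, 3), round(dz_numeric, 3))
```

The printed rows show the depth error growing from centimeters at 10 m to several meters at 100 m for the same quarter-pixel disparity error.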
Cole, David A; Preacher, Kristopher J
2014-06-01
Despite clear evidence that manifest variable path analysis requires highly reliable measures, path analyses with fallible measures are commonplace even in premier journals. Using fallible measures in path analysis can cause several serious problems: (a) As measurement error pervades a given data set, many path coefficients may be either over- or underestimated. (b) Extensive measurement error diminishes power and can prevent invalid models from being rejected. (c) Even a little measurement error can cause valid models to appear invalid. (d) Differential measurement error in various parts of a model can change the substantive conclusions that derive from path analysis. (e) All of these problems become increasingly serious and intractable as models become more complex. Methods to prevent and correct these problems are reviewed. The conclusion is that researchers should use more reliable measures (or correct for measurement error in the measures they do use), obtain multiple measures for use in latent variable modeling, and test simpler models containing fewer variables.
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed using error frequency together with the analysis-of-variance method from mathematical statistics. The paper also determines the accuracy of the measured data and the difficulty of measuring particular parts of the human body, studies the causes of the data errors, and summarizes the key points for minimizing errors. By analyzing the measured data on the basis of error frequency, it provides reference material for the development of the garment industry.
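The one-way analysis of variance used in such screening reduces to a short computation: compare between-group to within-group variability. The measurement groups below are invented (e.g., repeated waist measurements by three measurers):

```python
# Three hypothetical groups of repeated measurements (cm); values are invented
groups = [
    [70.1, 70.3, 69.9, 70.2],
    [71.0, 71.2, 70.8, 71.1],
    [70.0, 70.1, 69.8, 70.1],
]

k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total observations
grand = sum(sum(g) for g in groups) / n

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(F, 1))  # a large F means the group means differ more than within-group noise explains
```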
Analysis of Covariance with Linear Regression Error Model on Antenna Control Unit Tracking
2015-10-20
412TW-PA-15238: Analysis of Covariance with Linear Regression Error Model on Antenna Control Unit Tracking, by Daniel T. Laird (AIR...). Dates covered: 20 Oct 15 to 23 Oct 15. The report applies analysis of variance (ANOVA) to decide between the null and alternative hypotheses concerning a telemetry antenna control unit's (ACU) ability to track on C-band
Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students
ERIC Educational Resources Information Center
Muzangwa, Jonatan; Chifamba, Peter
2012-01-01
This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 B.Ed. Mathematics students at Great Zimbabwe University. Data were gathered through two exercises on Calculus 1 and 2. The analysis of the results from the tests showed that a majority of the errors were due…
ERIC Educational Resources Information Center
Kingsdorf, Sheri; Krawec, Jennifer
2014-01-01
Solving word problems is a common area of struggle for students with learning disabilities (LD). In order for instruction to be effective, we first need to have a clear understanding of the specific errors exhibited by students with LD during problem solving. Error analysis has proven to be an effective tool in other areas of math but has had…
Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.
ERIC Educational Resources Information Center
Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki
2000-01-01
Describes a new component called "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The Weam can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)
An Analysis of College Students' Attitudes towards Error Correction in EFL Context
ERIC Educational Resources Information Center
Zhu, Honglin
2010-01-01
This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…
A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students
ERIC Educational Resources Information Center
Tizazu, Yoseph
2014-01-01
This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected for two years from students ranging from freshmen to graduating level. The sampled compositions were then coded, described, and explained using error analysis method. Both…
Error analysis of mixed finite element methods for wave propagation in double negative metamaterials
NASA Astrophysics Data System (ADS)
Li, Jichun
2007-12-01
In this paper, we develop both semi-discrete and fully discrete mixed finite element methods for modeling wave propagation in three-dimensional double negative metamaterials. Optimal error estimates are proved for Nedelec spaces under the assumption of smooth solutions. To the best of our knowledge, this is the first error analysis obtained for Maxwell's equations when metamaterials are involved.
Analysis of remote sensing errors of omission and commission under FTP conditions
Stephens, R.D.; Cadle, S.H.; Qian, T.Z.
1996-06-01
Second-by-second modal emissions data from a 73-vehicle fleet of 1990 and 1991 light duty cars and trucks driven on the Federal Test Procedure (FTP) driving cycle were examined to determine remote sensing errors of commission in identifying high emissions vehicles. Results are combined with a similar analysis of errors of omission based on modal FTP data from high emissions vehicles. Extremely low errors of commission combined with modest errors of omission indicate that remote sensing should be very effective in isolating high CO and HC emitting vehicles in a fleet of late model vehicles on the road. 13 refs., 5 figs., 6 tabs.
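For a binary screen like this one (remote sensing flags a vehicle as high-emitting; the FTP result is the reference), errors of omission and commission are the off-diagonal rates of a 2x2 table: the fraction of true high emitters the remote sensor misses, and the fraction of clean vehicles it wrongly flags. A sketch with hypothetical counts:

```python
# Hypothetical counts: rows = FTP reference (high / low emitter),
# columns = remote-sensing decision (flagged / not flagged). Values invented.
tp, fn = 40, 12    # high emitters: correctly flagged / missed
fp, tn = 2, 946    # low emitters: falsely flagged / correctly passed

omission = fn / (tp + fn)     # error of omission: high emitters the screen misses
commission = fp / (fp + tn)   # error of commission: clean vehicles wrongly flagged

print(round(omission, 3), round(commission, 3))  # 0.231 0.002
```

The pattern mirrors the abstract's finding: a modest omission rate paired with an extremely low commission rate.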
Carriage Error Identification Based on Cross-Correlation Analysis and Wavelet Transformation
Mu, Donghui; Chen, Dongju; Fan, Jinwei; Wang, Xiaofeng; Zhang, Feihu
2012-01-01
This paper proposes a novel method for identifying carriage errors. A general mathematical model of a guideway system is developed, based on the multi-body system method. Based on the proposed model, most error sources in the guideway system can be measured. The flatness of a workpiece measured by the PGI1240 profilometer is represented by a wavelet. Cross-correlation analysis is performed to identify the error source of the carriage. The error model is developed based on experimental results on the low-frequency components of the signals. With the use of wavelets, the identification precision of the test signals is very high. PMID:23012558
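Finding the lag at which two signals best align, the core of cross-correlation analysis, can be sketched as follows; the signals are synthetic stand-ins for the profilometer trace and a carriage-error signal, not the paper's data:

```python
import math

def xcorr_lag(x, y, max_lag):
    """Return the lag (in samples) at which y best matches x,
    by maximizing the normalized cross-correlation over candidate lags."""
    def corr_at(lag):
        pairs = [(x[i], y[i + lag]) for i in range(len(x)) if 0 <= i + lag < len(y)]
        n = len(pairs)
        mx = sum(a for a, _ in pairs) / n
        my = sum(b for _, b in pairs) / n
        num = sum((a - mx) * (b - my) for a, b in pairs)
        den = (sum((a - mx) ** 2 for a, _ in pairs) *
               sum((b - my) ** 2 for _, b in pairs)) ** 0.5
        return num / den if den else 0.0
    return max(range(-max_lag, max_lag + 1), key=corr_at)

profile = [math.sin(0.1 * i) for i in range(200)]           # reference waveform
measured = [math.sin(0.1 * (i - 7)) for i in range(200)]    # same waveform, shifted 7 samples

print(xcorr_lag(profile, measured, 20))  # 7
```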
Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong
2014-01-01
We discuss and analyze an H1-Galerkin mixed finite element (H1-GMFE) method to look for the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using finite difference methods and approximate the spatial direction by applying the H1-GMFE method. Based on the discussion of the theoretical error analysis in the L2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H1-norm. Moreover, we derive and analyze the stability of the H1-GMFE scheme and give the results of a priori error estimates in two- or three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculation using a Matlab procedure.
Hu, Juju; Hu, Haijiang; Ji, Yinghua
2010-03-15
Periodic nonlinearity, ranging from a few nanometers to tens of nanometers, limits the use of the heterodyne interferometer in high-accuracy measurement. A novel method is studied to detect the nonlinearity errors, based on electrical subdivision and statistical signal analysis, in a heterodyne Michelson interferometer. With the micropositioning platform moving at uniform velocity, the method detects the nonlinearity errors using regression analysis and Jackknife estimation. Based on analysis of the simulations, the method can estimate the influence of nonlinearity errors and other noises on dimensional measurement in the heterodyne Michelson interferometer.
Quantitative analysis of numerical solvers for oscillatory biomolecular system models
Quo, Chang F; Wang, May D
2008-01-01
Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges from 10-15 to 1010, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluation, partial derivative, LU decomposition, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with quantitative performance assessment metric, we show that it is possible
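The stiffness problem driving this solver comparison can be reproduced on a one-line model: on y' = -1000y, forward Euler diverges at a step size where backward Euler (the prototype of stiff solvers such as MATLAB's ode15s) remains stable. A pure-Python sketch, with the Oregonator itself omitted:

```python
lam, dt, steps = 1000.0, 0.01, 50   # dt is far above the forward-Euler stability limit 2/lam

y_explicit, y_implicit = 1.0, 1.0
for _ in range(steps):
    y_explicit = y_explicit + dt * (-lam * y_explicit)   # forward Euler: factor (1 - lam*dt) = -9
    y_implicit = y_implicit / (1.0 + dt * lam)           # backward Euler, solved exactly for this linear ODE

# The true solution decays to ~0; only the implicit method reflects that.
print(abs(y_explicit) > 1e10, abs(y_implicit) < 1e-10)  # True True
```

The same mechanism, applied componentwise through the Jacobian, is why explicit solvers like ode45 need tiny steps on stiff systems while ode15s does not.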
Corina, David P.; Loudermilk, Brandon C.; Detwiler, Landon; Martin, Richard F.; Brinkley, James F.; Ojemann, George
2011-01-01
This study reports on the characteristics and distribution of naming errors of patients undergoing cortical stimulation mapping (CSM). During the procedure, electrical stimulation is used to induce temporary functional lesions and locate ‘essential’ language areas for preservation. Under stimulation, patients are shown slides of common objects and asked to name them. Cortical stimulation can lead to a variety of naming errors. In the present study, we aggregate errors across patients to examine the neuroanatomical correlates and linguistic characteristics of six common errors: semantic paraphasias, circumlocutions, phonological paraphasias, neologisms, performance errors, and no-response errors. Aiding analysis, we relied on a suite of web-based querying and imaging tools that enabled the summative mapping of normalized stimulation sites. Errors were visualized and analyzed by type and location. We provide descriptive statistics to characterize the commonality of errors across patients and location. The errors observed suggest a widely distributed and heterogeneous cortical network that gives rise to differential patterning of paraphasic errors. Data are discussed in relation to emerging models of language representation that honor distinctions between frontal, parietal, and posterior temporal dorsal implementation systems and ventral-temporal lexical semantic and phonological storage and assembly regions; the latter of which may participate both in language comprehension and production. PMID:20452661
Analysis and Correction of Systematic Height Model Errors
NASA Astrophysics Data System (ADS)
Jacobsen, K.
2016-06-01
The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days the image orientation is available in form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, e.g. those caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and just an attitude recording of 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, as is detailed satellite orientation information. A tendency toward systematic deformation is seen in a Pléiades tri-stereo combination with small base length; the small base length magnifies small systematic errors in object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached with the support of reference height models. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are
Cooper, S.E.; Wreathall, J.; Thompson, C.M., Drouin, M.; Bley, D.C.
1996-10-01
This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, "A Technique for Human Error Analysis" (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.
Error Analysis of Remotely-Acquired Mossbauer Spectra
NASA Technical Reports Server (NTRS)
Schaefer, Martha W.; Dyar, M. Darby; Agresti, David G.; Schaefer, Bradley E.
2005-01-01
On the Mars Exploration Rovers, Mossbauer spectroscopy has recently been called upon to assist in the task of mineral identification, a job for which it is rarely used in terrestrial studies. For example, Mossbauer data were used to support the presence of olivine in Martian soil at Gusev and jarosite in the outcrop at Meridiani. The strength (and uniqueness) of these interpretations lies in the assumption that peak positions can be determined with high degrees of both accuracy and precision. We summarize here what we believe to be the major sources of error associated with peak positions in remotely-acquired spectra, and speculate on their magnitudes. Our discussion here is largely qualitative because necessary background information on MER calibration sources, geometries, etc., has not yet been released to the PDS; we anticipate that a more quantitative discussion can be presented by March 2005.
Testing and error analysis of a real-time controller
NASA Technical Reports Server (NTRS)
Savolaine, C. G.
1983-01-01
Inexpensive ways to organize and conduct system testing that were used on a real-time satellite network control system are outlined. This system contains roughly 50,000 lines of executable source code developed by a team of eight people. For a small investment of staff, the system was thoroughly tested, including automated regression testing, before field release. Detailed records were kept for fourteen months, during which several versions of the system were written. A separate testing group was not established, but testing itself was structured apart from the development process. The errors found during testing are examined by frequency per subsystem, by size and complexity, and by type. The code was released to the user in March 1983. To date, only a few minor problems have been found with the system since its pre-service testing, and user acceptance has been good.
Study on analysis from sources of error for Airborne LIDAR
NASA Astrophysics Data System (ADS)
Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.
2016-11-01
With the advancement of aerial photogrammetry, airborne LIDAR provides a new technical means of obtaining geo-spatial information of high spatial and temporal resolution, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of earth observation technology: mounted on an aviation platform, it emits and receives laser pulses to obtain high-precision, high-density three-dimensional point cloud coordinates and intensity information. In this paper, we briefly describe airborne laser radar systems, analyze in detail several error sources in airborne LIDAR data, and put forward corresponding methods to avoid or eliminate them. Taking into account practical engineering applications, recommendations are developed for these designs, which have crucial theoretical and practical significance in airborne LIDAR data processing.
Comet Tempel 2: Orbit, ephemerides and error analysis
NASA Technical Reports Server (NTRS)
Yeomans, D. K.
1978-01-01
The dynamical behavior of comet Tempel 2 is investigated and the comet is found to be very well behaved and easily predictable. The nongravitational forces affecting the motion of this comet are the smallest of any comet that is affected by nongravitational forces. The sign and time history of these nongravitational forces imply (1) a direct rotation of the comet's nucleus and (2) the comet's ability to outgas has not changed substantially over its entire observational history. The well behaved dynamical motion of the comet, the well observed past apparitions, the small nongravitational forces and the excellent 1988 ground based observing conditions all contribute to relatively small position and velocity errors in 1988 -- the year of a proposed rendezvous space mission to this comet. To assist in planned ground based and earth orbital observations of this comet, ephemerides are given for the 1978-79, 1983-84 and 1988 apparitions.
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
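Regression calibration, one of the approaches applied here, replaces the error-prone covariate by an estimate of its conditional mean before fitting. The sketch below illustrates the attenuation and its correction in a plain linear regression with simulated data (the Cox machinery is omitted, and the reliability ratio, which in practice would be estimated from replicate measures, is taken as known):

```python
import random

rng = random.Random(0)
n, beta = 20000, 1.0
sigma_x2, sigma_u2 = 1.0, 1.0   # variances of the true covariate and of the measurement error

x = [rng.gauss(0, sigma_x2 ** 0.5) for _ in range(n)]        # true covariate (unobserved)
w = [xi + rng.gauss(0, sigma_u2 ** 0.5) for xi in x]         # observed covariate, with error
y = [beta * xi + rng.gauss(0, 0.5) for xi in x]              # outcome depends on the true x

def slope(u, v):
    """Least-squares slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / sum((a - mu) ** 2 for a in u)

naive = slope(w, y)                                # biased toward 0 (attenuation)
reliability = sigma_x2 / (sigma_x2 + sigma_u2)     # here 0.5; estimable from replicates
calibrated = naive / reliability                   # regression-calibration correction

print(round(naive, 2), round(calibrated, 2))
```

With equal covariate and error variances the naive slope lands near half the true value, and the calibrated estimate recovers it.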
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1996-01-01
We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD
NASA Astrophysics Data System (ADS)
Yao, Yuan; Niu, Qunjie; Liang, Kun
2016-09-01
A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an Intensified Charge Coupled Device (ICCD) is capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major error sources, namely laser frequency instability, the calibration error of the F-P etalon and random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and the F-P etalon cause about 4 MHz of error in both the Brillouin shift and the linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive comparative analysis of the overall errors under various conditions showed that a colder ocean (10 °C) is more accurately measured using the Brillouin linewidth, while a warmer ocean (30 °C) is better measured using the Brillouin shift.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
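Both skill measures come from the 2x2 contingency table of dichotomized forecasts against observations: the percent correct is the fraction of correct calls, and the Hanssen-Kuipers discriminant is the probability of detection minus the probability of false detection. A small sketch (the function name is ours):

```python
def pc_and_hkd(hits, misses, false_alarms, correct_negs):
    """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD) from a
    2x2 contingency table of yes/no forecasts versus yes/no observations."""
    total = hits + misses + false_alarms + correct_negs
    pc = (hits + correct_negs) / total
    pod = hits / (hits + misses)                          # prob. of detection
    pofd = false_alarms / (false_alarms + correct_negs)   # false-alarm rate
    hkd = pod - pofd
    return pc, hkd
```

A forecast with no discrimination (equal hit and false-alarm rates) scores HKD = 0 regardless of how often the event occurs, which is why HKD and PC can rank the two probability thresholds differently.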
Error analysis of penetrator impacts on bodies without atmospheres
NASA Technical Reports Server (NTRS)
Davis, D. R.
1975-01-01
Penetrators are missile-shaped objects designed to implant electronic instrumentation in a variety of surface materials, with a nominal impact speed of around 150 m/sec. There is interest in applying this concept to in situ subsurface studies of extraterrestrial bodies and planetary satellites. Since many of these objects do not have atmospheres, the feasibility of successfully guiding penetrators to the required near-zero angle-of-attack impact conditions in the absence of an atmosphere was analyzed. Two potential targets were considered, the moon and Mercury, and several different penetrator deployment modes were examined. Impact errors arising from open-loop and closed-loop deployment control systems were given particular attention. Successful penetrator emplacement requires: (1) that the impact speed be controlled, nominally to 150 m/sec, (2) that the angle of attack be in the range 0 deg - 11 deg at impact, and (3) that the impact flight path angle be within 15 deg of vertical.
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1980-01-01
Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.
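The sampled-power estimate at the heart of such a meter is simply the mean of the instantaneous voltage-current products; the sampling error the abstract describes stays small when the record spans an integer number of signal periods, as in this idealized sketch (amplitudes and sample counts are illustrative):

```python
import math

def average_power(v, i):
    """Average power from simultaneous samples of voltage and current."""
    return sum(vk * ik for vk, ik in zip(v, i)) / len(v)

# In-phase unit-amplitude sinusoids sampled over exactly 5 periods.
N = 1000
v = [math.sin(2 * math.pi * 5 * k / N) for k in range(N)]
i = list(v)
# True average power for in-phase sinusoids is V*I/2 = 0.5
```

When the observation window is not an integer number of periods, the leftover fraction of a cycle biases the mean, which is one source of the error behavior analyzed in the report.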
NASA Astrophysics Data System (ADS)
Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim
2012-12-01
This article deals with the application of the Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is carried out between this method and the one based on the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods perform similarly.
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Ephemeris data and error analysis in support of a Comet Encke intercept mission
NASA Technical Reports Server (NTRS)
Yeomans, D. K.
1974-01-01
Utilizing an orbit determination based upon 65 observations over the 1961 - 1973 interval, ephemeris data were generated for the 1976-77, 1980-81 and 1983-84 apparitions of short period comet Encke. For the 1980-81 apparition, results from a statistical error analysis are outlined. All ephemeris and error analysis computations include the effects of planetary perturbations as well as the nongravitational accelerations introduced by the outgassing cometary nucleus. In 1980, excellent observing conditions and a close approach of comet Encke to the earth permit relatively small uncertainties in the cometary position and provide an excellent opportunity for a close flyby of a physically interesting comet.
An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems.
1981-03-01
Technical Note BN-962: An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems, by I. Babuška and W. G. Szymczak, March 1981. University of Maryland, College Park, Institute for Physical Science and Technology.
Action and phase analysis to determine sextupole errors in RHIC and the SPS
Cardona, J.; Peggs, S.; Satogata, T.; Tomas, R.
2003-05-12
Success in applying action and phase analysis to find linear errors in the RHIC Interaction Regions [1] has encouraged the development of a technique, based on action and phase analysis, to find nonlinear errors. In this paper we show the first attempt to measure the sextupole components at the RHIC interaction regions using the action and phase method. Experiments done by intentionally activating sextupoles in RHIC and in the SPS [2] will also be analyzed with this method. First results have given values for the sextupole errors that are at least of the same order of magnitude as the values found by an alternate technique during the RHIC 2001 run [3].
Error Patterns Analysis of Hearing Aid and Cochlear Implant Users as a Function of Noise
Chun, Hyungi; Ma, Sunmi; Chun, Youngmyoung
2015-01-01
Background and Objectives Hearing-impaired listeners may not all have the same speech perception ability even when they have similar pure-tone thresholds and configurations. For this reason, the present study analyzes error patterns in hearing-impaired listeners compared to normal hearing (NH) listeners as a function of signal-to-noise ratio (SNR). Subjects and Methods Forty-four adults participated: 10 listeners with NH, 20 hearing aid (HA) users and 14 cochlear implant (CI) users. Korean standardized monosyllables were presented as the stimuli in quiet and at three different SNRs. Total error patterns were classified into types of substitution, omission, addition, fail, and no response, using stacked bar plots. Results Total error percent for the three groups significantly increased as the SNRs decreased. In the error pattern analysis, the NH group showed predominantly substitution errors regardless of SNR compared to the other groups. In both the HA and CI groups, substitution errors declined while no-response errors appeared as the SNRs increased. The CI group was characterized by fewer substitution and more fail errors than the HA group. Substitutions of initial and final phonemes in the HA and CI groups were dominated by place-of-articulation errors. However, the HA group missed consonant place cues, such as formant transitions and stop consonant bursts, whereas the CI group usually showed confusions limited to nasal consonants with low-frequency characteristics. Interestingly, all three groups showed /k/ addition in the final phoneme, a trend that magnified as noise increased. Conclusions The HA and CI groups each had unique error patterns even though the aided thresholds of the two groups were similar. We expect that these results will help focus auditory training for hearing-impaired listeners on their most frequent error patterns, thereby reducing those errors and improving speech perception ability. PMID:26771013
Schiff, G D; Amato, M G; Eguale, T; Boehne, J J; Wright, A; Koppel, R; Rashidee, A H; Elson, R B; Whitney, D L; Thach, T-T; Bates, D W; Seger, A C
2015-01-01
Importance Medication computerised provider order entry (CPOE) has been shown to decrease errors and is being widely adopted. However, CPOE also has potential for introducing or contributing to errors. Objectives The objectives of this study are to (a) analyse medication error reports where CPOE was reported as a ‘contributing cause’ and (b) develop ‘use cases’ based on these reports to test the vulnerability of current CPOE systems to these errors. Methods A review of medication errors reported to the United States Pharmacopeia MEDMARX reporting system was conducted, and a taxonomy was developed for CPOE-related errors. For each error we evaluated what went wrong and why, and identified potential prevention strategies and recurring error scenarios. These scenarios were then used to test the vulnerability of leading CPOE systems, asking typical users to enter these erroneous orders to assess the degree to which these problematic orders could be entered. Results Between 2003 and 2010, 1.04 million medication errors were reported to MEDMARX, of which 63 040 were reported as CPOE related. A review of 10 060 CPOE-related cases was used to derive 101 codes describing what went wrong, 67 codes describing reasons why errors occurred, 73 codes describing potential prevention strategies and 21 codes describing recurring error scenarios. The ability to enter these erroneous order scenarios was tested on 13 CPOE systems at 16 sites. Overall, 298 (79.5%) of the erroneous orders could be entered, including 100 (28.0%) that were placed ‘easily’ and another 101 (28.3%) placed with only minor workarounds and no warnings. Conclusions and relevance Medication error reports provide valuable information for understanding CPOE-related errors. Reports were useful for developing a taxonomy and identifying recurring errors to which current CPOE systems are vulnerable. Enhanced monitoring, reporting and testing of CPOE systems are important to improve CPOE safety. PMID:25595599
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
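SIMEX works by deliberately adding extra measurement noise at several levels lambda, refitting the naive estimator each time, and extrapolating the resulting trend back to the no-error point lambda = -1. A toy sketch for a simple linear regression slope (quadratic extrapolant; the function name and defaults are our illustrative choices, not the authors' code):

```python
import numpy as np

def simex_slope(w, y, s2_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=50, seed=0):
    """SIMEX-corrected slope for y = b0 + b1*X + eps when only W = X + U is
    observed, with Var(U) = s2_u. Adds extra noise at each level lam, averages
    the naive slope over B simulations, then extrapolates a quadratic in lam
    back to lam = -1 (the hypothetical error-free point)."""
    rng = np.random.default_rng(seed)
    lams, slopes = [0.0], [np.polyfit(w, y, 1)[0]]   # lam = 0: observed data
    for lam in lambdas:
        b = [np.polyfit(w + rng.normal(0.0, np.sqrt(lam * s2_u), w.size),
                        y, 1)[0]
             for _ in range(B)]
        lams.append(lam)
        slopes.append(np.mean(b))
    coef = np.polyfit(lams, slopes, 2)   # quadratic trend in lambda
    return np.polyval(coef, -1.0)        # extrapolate to no measurement error
```

The quadratic extrapolant is only an approximation to the true attenuation curve, which is one reason RC can beat SIMEX in mean squared error when the error variance is large, as the simulation study reports.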
Numerical analysis of eccentric orifice plate using ANSYS Fluent software
NASA Astrophysics Data System (ADS)
Zahariea, D.
2016-11-01
In this paper the eccentric orifice plate is qualitatively analysed in comparison with the classical concentric orifice plate, from the point of view of the sedimentation tendency of solid particles in the fluid whose flow rate is measured. For this purpose, the numerical streamline patterns are compared for both orifice plates. The numerical analysis has been performed using ANSYS Fluent software. The methodology of the CFD analysis is presented: creating the 3D solid model, fluid domain extraction, meshing, boundary conditions, turbulence model, solving algorithm, convergence criterion, results and validation. Analysing the numerical streamlines, two circumferential regions of separated flow, upstream and downstream of the orifice plate, can be clearly observed for the concentric orifice plate. The bottom parts of these regions are where the solid particles could settle. On the other hand, for the eccentric orifice plate, the streamline pattern suggests that no sedimentation will occur, because there are no separated flows at the bottom of the pipe.
A general numerical model for wave rotor analysis
NASA Technical Reports Server (NTRS)
Paxson, Daniel W.
1992-01-01
Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.
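Roe's approximate Riemann solver for the full Euler system is involved, but for scalar linear advection the Roe flux reduces to plain upwinding, which illustrates the explicit finite-volume update such a model performs each time step. A toy sketch (this is not the wave rotor code itself, just the update pattern):

```python
import numpy as np

def advect_upwind(u, a, dx, dt, steps):
    """Explicit finite-volume update for u_t + a*u_x = 0 on a periodic
    domain: u_i <- u_i - (dt/dx) * (F_{i+1/2} - F_{i-1/2}), where the
    interface flux is the upwind (scalar Roe) flux. Requires |a*dt/dx| <= 1."""
    c = a * dt / dx                       # CFL number
    for _ in range(steps):
        if a >= 0:
            u = u - c * (u - np.roll(u, 1))     # information moves rightward
        else:
            u = u - c * (np.roll(u, -1) - u)    # information moves leftward
    return u
```

At a CFL number of exactly 1 the scheme shifts the profile by one cell per step with no numerical diffusion; below 1 the same update smears sharp fronts, which is why higher-order variants matter for wave rotor simulation.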
Generalized multiplicative error models: Asymptotic inference and empirical analysis
NASA Astrophysics Data System (ADS)
Li, Qian
This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with a linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimators for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed on the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, the interaction between trading variables, and the time needed for price equilibrium after a perturbation in each market. The clustering effect is studied through the use of a univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we use the impulse response function to compute the calendar time needed for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the differences between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
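A linear MEM models a nonnegative series as a conditional mean times a unit-mean innovation; for MEM(1,1) the recursion is mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}. A simulation sketch (exponential innovations are one common choice for durations; the function name and parameter values are ours):

```python
import numpy as np

def simulate_mem11(omega, alpha, beta, n, seed=0):
    """Simulate x_t = mu_t * eps_t with conditional mean
    mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}
    and unit-mean exponential innovations. Requires alpha + beta < 1
    so the unconditional mean omega / (1 - alpha - beta) exists."""
    rng = np.random.default_rng(seed)
    mu = omega / (1.0 - alpha - beta)   # start at the unconditional mean
    x = np.empty(n)
    for t in range(n):
        eps = rng.exponential(1.0)      # E[eps] = 1, so E[x_t | past] = mu_t
        x[t] = mu * eps
        mu = omega + alpha * x[t] + beta * mu
    return x
```

The alpha + beta sum plays the same persistence role as in a GARCH model, which is what produces the clustering effect the dissertation studies in durations and volumes.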
Error Analysis for Discontinuous Galerkin Method for Parabolic Problems
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki
2004-01-01
In the proposal, the following three objectives are stated: (1) A p-version of the discontinuous Galerkin method for a one-dimensional parabolic problem will be established. It should be recalled that the h-version in space was used for the discontinuous Galerkin method. An a priori error estimate as well as an a posteriori estimate of this p-finite element discontinuous Galerkin method will be given. (2) The parameter alpha that describes the behavior of ||u_t(t)||_2 was computed exactly. This was made feasible because of the explicitly specified initial condition. For practical heat transfer problems, the initial condition may have to be approximated. Also, if the parabolic problem is posed on a multi-dimensional region, the parameter alpha would, in most cases, be difficult to compute exactly even when the initial condition is known exactly. The second objective of this proposed research is to establish a method to estimate this parameter. This will be done by computing two discontinuous Galerkin approximate solutions at two different time steps starting from the initial time and using them to derive alpha. (3) The third objective is to consider the heat transfer problem over a two-dimensional thin plate. The technique developed by Vogelius and Babuska will be used to establish a discontinuous Galerkin method in which the p-element will be used for through-thickness approximation. This h-p finite element approach, which results in a dimensional reduction method, was used for elliptic problems, but the application appears new for the parabolic problem. The dimension reduction method will be discussed together with the time discretization method.
Numerical Analysis of Ice Impacts on Azimuth Propeller
2013-09-01
Naval Postgraduate School, Monterey, California. Thesis. Approved for public release; distribution is unlimited. ... coastal ferries, workboats and fishing boats [3]. When considering icebreaking vessels, the likelihood of damage to the propellers becomes
Scilab and Maxima Environment: Towards Free Software in Numerical Analysis
ERIC Educational Resources Information Center
Mora, Angel; Galan, Jose Luis; Aguilera, Gabriel; Fernandez, Alvaro; Merida, Enrique; Rodriguez, Pedro
2010-01-01
In this work we will present the ScilabUMA environment we have developed as an alternative to Matlab. This environment connects Scilab (for numerical analysis) and Maxima (for symbolic computations). Furthermore, the developed interface is, in our opinion at least, as powerful as the interface of Matlab. (Contains 3 figures.)
Numerical analysis of strongly nonlinear extensional vibrations in elastic rods.
Vanhille, Christian; Campos-Pozuelo, Cleofé
2007-01-01
In the framework of transduction, nondestructive testing, and nonlinear acoustic characterization, this article presents the analysis of strongly nonlinear vibrations by means of an original numerical algorithm. In acoustic and transducer applications in extreme working conditions, such as the ones induced by the generation of high-power ultrasound, the analysis of nonlinear ultrasonic vibrations is fundamental. Also, the excitation and analysis of nonlinear vibrations is an emergent technique in nonlinear characterization for damage detection. A third-order evolution equation is derived and numerically solved for extensional waves in isotropic dissipative media. A nine-constant theory of elasticity for isotropic solids is constructed, and the nonlinearity parameters corresponding to extensional waves are proposed. The nonlinear differential equation is solved by using a new numerical algorithm working in the time domain. The finite-difference numerical method proposed is implicit and only requires the solution of a linear set of equations at each time step. The model allows the analysis of strongly nonlinear, one-dimensional vibrations and can be used for prediction as well as characterization. Vibration waveforms are calculated at different points, and results are compared for different excitation levels and boundary conditions. Amplitude distributions along the rod axis for every harmonic component also are evaluated. Special attention is given to the study of high-amplitude damping of vibrations by means of several simulations. Simulations are performed for amplitudes ranging from linear to nonlinear and weak shock.
NASA Technical Reports Server (NTRS)
Borgia, Andrea; Spera, Frank J.
1990-01-01
This work discusses the propagation of errors in the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-squares regression of stress on angular velocity data onto a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) (the 'power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series (the 'infinite radius' approximation) and the power-law approximation may recover the shear rate as accurately as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates its rheology reasonably well. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
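The power-law approximation recovers the shear rate at the inner cylinder from the measured torque-speed curve via the local power-law index n = d ln M / d ln Omega. A hedged sketch of that calculation (the function name is ours, and a global log-log fit stands in for whatever local slope estimate one prefers):

```python
import numpy as np

def shear_rate_power_law(omega, torque, kappa):
    """Power-law approximation for the shear rate at the inner cylinder of a
    wide-gap concentric-cylinder viscometer:
        gamma_dot = (2*Omega/n) / (1 - kappa**(2/n)),
    with n = d ln M / d ln Omega estimated from the torque-vs-angular-velocity
    data and kappa = R_inner / R_outer."""
    ln_w, ln_m = np.log(omega), np.log(torque)
    n = np.polyfit(ln_w, ln_m, 1)[0]          # power-law index from log-log fit
    return (2.0 * omega / n) / (1.0 - kappa ** (2.0 / n))
```

For a Newtonian fluid the fitted index is n = 1 and the formula reduces to the exact Couette result 2*Omega / (1 - kappa**2), which makes a convenient sanity check.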
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies the knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run, in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to
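The core operation of a linear covariance tool is propagating the error covariance through the linearized dynamics, P' = A P A^T + Q, once per time step, instead of simulating thousands of random trajectories. A minimal sketch with an illustrative constant-velocity state (the state choice and numbers are ours, not G-CAT's):

```python
import numpy as np

def propagate_cov(A, P, Q):
    """One linear(ized) propagation step of an error covariance,
    P' = A @ P @ A.T + Q. A covariance tool repeats this along the
    trajectory to get the performance envelope from a single run."""
    return A @ P @ A.T + Q

# Constant-velocity example: state = [position, velocity], time step dt.
dt = 1.0
A = np.array([[1.0, dt],
              [0.0, 1.0]])
P0 = np.diag([1.0, 0.04])    # initial position/velocity variances
Q = np.diag([0.0, 1e-4])     # process noise acting on velocity
P1 = propagate_cov(A, P0, Q)
```

After one step the position variance grows to p_pp + dt**2 * p_vv = 1.04, exactly the analytic answer; a Monte Carlo ensemble would only approach this with many samples.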
An Analysis of Errors in a Reuse-Oriented Development Environment
NASA Technical Reports Server (NTRS)
Thomas, William M.; Delis, Alex; Basili, Victor R.
1995-01-01
Component reuse is widely considered vital for obtaining significant improvement in development productivity. However, as an organization adopts a reuse-oriented development process, the nature of the problems in development is likely to change. In this paper, we use a measurement-based approach to better understand and evaluate an evolving reuse process. More specifically, we study the effects of reuse across seven projects in a narrow domain from a single development organization. An analysis of the errors that occur in new and reused components across all phases of system development provides insight into the factors influencing the reuse process. We found significant differences between errors associated with new and various types of reused components in terms of the types of errors committed, when errors are introduced, and the effect that the errors have on the development process.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.
Numerical analysis of a quasistatic piezoelectric problem with damage*
NASA Astrophysics Data System (ADS)
Fernández, José R.; Martínez, Rebeca; Stavroulakis, Georgios E.
2008-07-01
The quasistatic evolution of the mechanical state of a piezoelectric body with damage is numerically studied in this paper. Both damage and piezoelectric effects are included into the model. The variational formulation leads to a coupled system composed of two linear variational equations for the displacement field and the electric potential, and a nonlinear parabolic variational equation for the damage field. The existence of a unique weak solution is stated. Then, a fully discrete scheme is introduced by using a finite element method to approximate the spatial variable and an Euler scheme to discretize the time derivatives. Error estimates are derived on the approximate solutions, from which the linear convergence of the algorithm is deduced under suitable regularity conditions. Finally, a two-dimensional example is presented to demonstrate the behaviour of the solution. To cite this article: J.R. Fernández et al., C. R. Mecanique 336 (2008).
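The fully discrete scheme described above combines a spatial discretization with an Euler scheme for the time derivatives in the parabolic (damage) equation. As a minimal sketch of the time-discretization idea only, the snippet below applies an implicit (backward) Euler step to a 1D parabolic model problem; the grid, diffusivity, and boundary values are illustrative assumptions and this is not the authors' coupled piezoelectric scheme.

```python
import numpy as np

# Implicit (backward) Euler for u_t = nu * u_xx with zero Dirichlet ends:
# each step solves (I - dt*nu*L) u^{k+1} = u^k, L the discrete Laplacian.
def backward_euler_heat(u0, nu, dx, dt, steps):
    n = len(u0)
    r = nu * dt / dx**2
    A = np.eye(n)
    for i in range(1, n - 1):          # interior rows of (I - dt*nu*L)
        A[i, i - 1] = -r
        A[i, i] = 1 + 2 * r
        A[i, i + 1] = -r
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, u)      # one implicit time step
    return u

u0 = np.zeros(21)
u0[10] = 1.0                           # initial spike diffuses outward
u = backward_euler_heat(u0, nu=1.0, dx=0.05, dt=0.01, steps=10)
```

The implicit step is unconditionally stable, which is why Euler-type schemes of this kind admit the error estimates and linear convergence mentioned in the abstract.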
Numerical analysis of ossicular chain lesion of human ear
NASA Astrophysics Data System (ADS)
Liu, Yingxi; Li, Sheng; Sun, Xiuzhen
2009-04-01
Lesion of ossicular chain is a common ear disease impairing the sense of hearing. A comprehensive numerical model of human ear can provide better understanding of sound transmission. In this study, we propose a three-dimensional finite element model of human ear that incorporates the canal, tympanic membrane, ossicular bones, middle ear suspensory ligaments/muscles, middle ear cavity and inner ear fluid. Numerical analysis is conducted and employed to predict the effects of middle ear cavity, malleus handle defect, hypoplasia of the long process of incus, and stapedial crus defect on sound transmission. The present finite element model is shown to be reasonable in predicting the ossicular mechanics of human ear.
Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi
2012-12-20
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when the batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
Phase error analysis and compensation considering ambient light for phase measuring profilometry
NASA Astrophysics Data System (ADS)
Zhou, Ping; Liu, Xinran; He, Yi; Zhu, Tongjing
2014-04-01
The accuracy of a phase measuring profilometry (PMP) system based on the phase-shifting method is inevitably susceptible to the gamma non-linearity of the projector-camera pair and to uncertain ambient light. Although many studies of gamma models and phase error compensation methods have been carried out, the effect of ambient light has remained unclear. In this paper, we perform theoretical analysis and experiments on phase error compensation that account for both gamma non-linearity and uncertain ambient light. First, a mathematical phase error model is proposed to explain in detail the cause of phase error generation. We show that the phase error is related not only to the gamma non-linearity of the projector-camera pair, but also to the ratio of intensity modulation to average intensity in the fringe patterns captured by the camera, which is affected by the ambient light. Subsequently, an accurate phase error compensation algorithm is proposed based on the mathematical model, in which the relationship between phase error and ambient light is made explicit. Experimental results with a four-step phase-shifting PMP system show that the proposed algorithm alleviates the phase error effectively even in the presence of ambient light.
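For context on the four-step phase-shifting scheme the experiments use, the textbook wrapped-phase formula is sketched below. With captured intensities I_k = A + B·cos(φ + kπ/2), the phase follows from φ = atan2(I4 − I2, I1 − I3); the values of A (average intensity, including ambient light) and B (modulation) are illustrative assumptions, and this is the standard formula, not the paper's gamma/ambient-light compensation algorithm.

```python
import numpy as np

# Four-step phase shifting: I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi),
# so the wrapped phase is recovered with a single arctangent.
def four_step_phase(I1, I2, I3, I4):
    return np.arctan2(I4 - I2, I1 - I3)

phi_true = 0.7
A, B = 100.0, 50.0   # average intensity and modulation (assumed values)
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*I)   # recovers phi_true
```

Note that the recovered phase is independent of A and B in the ideal case; the abstract's point is that gamma distortion and ambient light break this ideality, which is what the proposed compensation addresses.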
Dumas, Raphael; Branemark, Rickard; Frossard, Laurent
2016-08-18
Quantitative assessments of prosthesis performance increasingly rely on gait analysis focusing on prosthetic knee joint forces and moments computed by inverse dynamics. However, this method is prone to errors, as demonstrated in comparisons with direct measurements of these forces and moments. The magnitude of the errors reported in the literature seems to vary depending on the prosthetic components. Therefore, the purposes of this study were (A) to quantify and compare the magnitude of errors in knee joint forces and moments obtained with inverse dynamics and direct measurements on ten participants with transfemoral amputation during walking, and (B) to investigate whether these errors can be characterised for different prosthetic knees. Knee joint forces and moments computed by inverse dynamics presented substantial errors, especially during the swing phase of gait. Indeed, the median errors in percentage of the moment magnitude were 4% and 26% in extension/flexion, 6% and 19% in adduction/abduction, and 14% and 27% in internal/external rotation during the stance and swing phases, respectively. Moreover, the errors varied depending on whether the prosthetic limb was fitted with a mechanical or a microprocessor-controlled knee. This study confirmed that inverse dynamics should be used cautiously when performing gait analysis of amputees. Alternatively, direct measurement of joint forces and moments could be relevant for the mechanical characterisation of components and alignment of prosthetic limbs.
Error analysis and feasibility study of dynamic stiffness matrix-based damping matrix identification
NASA Astrophysics Data System (ADS)
Ozgen, Gokhan O.; Kim, Jay H.
2009-02-01
Developing a method to formulate a damping matrix that represents the actual spatial distribution and mechanism of damping of the dynamic system has been an elusive goal. The dynamic stiffness matrix (DSM)-based damping identification method proposed by Lee and Kim is attractive and promising because it identifies the damping matrix from the measured DSM without relying on any unfounded assumptions. However, in ensuing works it was found that damping matrices identified from the method had unexpected forms and showed traces of large variance errors. The causes and possible remedies of the problem are sought in this work. The variance and leakage errors are identified as the major sources of the problem, which are then related to system parameters through numerical and experimental simulations. An improved experimental procedure is developed to reduce the effect of these errors in order to make the DSM-based damping identification method a practical option.
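The core idea behind DSM-based damping identification can be sketched in a few lines: for viscous damping the dynamic stiffness matrix is D(ω) = K − ω²M + iωC, so the damping matrix follows directly from its imaginary part, C = Im(D(ω))/ω. The matrices below are small illustrative stand-ins, not measured data, and the sketch ignores the variance and leakage errors that the abstract identifies as the practical difficulty.

```python
import numpy as np

# Viscous damping identification from a dynamic stiffness matrix:
# D(w) = K - w^2 * M + i*w*C  =>  C = Im(D(w)) / w
def damping_from_dsm(D, w):
    return np.imag(D) / w

K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # stiffness (illustrative)
M = np.eye(2)                               # mass (illustrative)
C_true = np.array([[0.3, -0.1], [-0.1, 0.3]])
w = 5.0
D = K - w**2 * M + 1j * w * C_true          # noise-free "measured" DSM
C_est = damping_from_dsm(D, w)              # recovers C_true exactly here
```

With real measured DSMs the imaginary part carries measurement noise, which is why the identified C can take unexpected forms; the paper's improved procedure targets exactly that.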
NASA Astrophysics Data System (ADS)
Allen, S. E.; Dinniman, M. S.; Klinck, J. M.; Gorby, D. D.; Hewett, A. J.; Hickey, B. M.
2003-01-01
Submarine canyons which indent the continental shelf are frequently regions of steep (up to 45°), three-dimensional topography. Recent observations have delineated the flow over several submarine canyons during 2-4 day long upwelling episodes. Thus upwelling episodes over submarine canyons provide an excellent flow regime for evaluating numerical and physical models. Here we compare a physical and numerical model simulation of an upwelling event over a simplified submarine canyon. The numerical model being evaluated is a version of the S-Coordinate Rutgers University Model (SCRUM). Careful matching between the models is necessary for a stringent comparison. Results show a poor comparison for the homogeneous case due to nonhydrostatic effects in the laboratory model. Results for the stratified case are better but show a systematic difference between the numerical results and laboratory results. This difference is shown not to be due to nonhydrostatic effects. Rather, the difference is due to truncation errors in the calculation of the vertical advection of density in the numerical model. The calculation is inaccurate due to the terrain-following coordinates combined with a strong vertical gradient in density, vertical shear in the horizontal velocity and topography with strong curvature.
Swain, E.D.; Langevin, C.D.; Wang, J.D.
2008-01-01
In the present study, a spectral analysis was applied to field data and a numerical model of the southeastern Everglades and northeastern Florida Bay, computing and comparing the power spectra of simulated and measured flows at the primary coastal outflow creek. Four dominant power frequencies, corresponding to the S1, S2, M2, and O1 tidal periods, were apparent in the measured outflows. The model reproduced the magnitudes of the S1 and S2 components better than those of the M2 and O1 components. To determine the cause of the relatively poor representation of the M2 and O1 components, we created a steady-base version of the model by setting the time-varying forcing functions (rainfall, evapotranspiration, wind, and inland and tidal boundary conditions) to averaged values. The steady-base model was then modified to produce multiple simulations with only one time-varying forcing function per model run. These experimental simulations approximated the individual effect of each forcing function on the system. The spectral analysis of the experimental simulations indicated that temporal fluctuations in rainfall, evapotranspiration, and inland water-level and discharge boundaries have negligible effects on coastal creek flow fluctuations with periods of less than 48 hours. The tidal boundary appears to be the only forcing function inducing the M2 and O1 frequency flow fluctuations in the creek. An analytical formulation was developed relating the errors induced by the tidal water-level gauge resolution to the errors in the simulated discharge fluctuations at the coastal creek. This formulation yielded a discharge-fluctuation error similar in magnitude to the errors observed when comparing the spectra of the simulated and measured discharge. The dominant source of error in the simulation of discharge fluctuation magnitude is most likely the resolution of the water-level gauges used to create the model boundary.
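A power-spectrum comparison of the kind described above amounts to an FFT of the discharge time series followed by peak identification at the tidal frequencies. The sketch below builds a synthetic hourly series containing M2 (~12.42 h) and O1 (~25.82 h) components and locates the dominant peak; the series and amplitudes are illustrative assumptions, not the study's data.

```python
import numpy as np

# Synthetic discharge series: 30 days of hourly samples with two tidal
# constituents, then a one-sided power spectrum via the real FFT.
dt_hours = 1.0
t = np.arange(0, 30 * 24, dt_hours)
flow = (2.0 * np.sin(2 * np.pi * t / 12.42)      # M2-like component
        + 1.0 * np.sin(2 * np.pi * t / 25.82))   # O1-like component
power = np.abs(np.fft.rfft(flow)) ** 2
freqs = np.fft.rfftfreq(len(t), d=dt_hours)       # cycles per hour
peak_period = 1.0 / freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
```

The dominant peak lands near the M2 period (12.42 h), within the frequency resolution set by the record length, which mirrors how the study's four dominant frequencies were identified in the measured outflows.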
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing finite-difference time-domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical properties, which are calculated as in the previous method. In general, a small number of arithmetic operations, which results in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
The Drop Volume Method for Interfacial Tension Determination: An Error Analysis.
Earnshaw; Johnson; Carroll; Doyle
1996-01-15
An error analysis of the drop volume method of determination of surface or interfacial tension is presented. It is shown that the presence of the empirical correction term may lead to either a decrease or an increase in the final uncertainty of the calculated tension. Recommendations to maximize the precision of measurement are made. It is further shown that the systematic error due to the correction term is less than 0.04%; under the conditions recommended to minimize the statistical uncertainty, the systematic error should be less than half this figure. Tabulations of recommended values of the correction function are given.
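The drop volume method's working equation, to which the correction term discussed above applies, is γ = ρgV/(2πr·f), where f is the empirical (Harkins-Brown) correction factor for the fraction of the drop that actually detaches. A minimal sketch follows; the factor f = 0.6 and the drop parameters are illustrative assumptions, since in practice f is read from tabulated values as a function of r/V^(1/3).

```python
import math

# Drop-volume interfacial tension: gamma = rho * g * V / (2*pi*r*f).
# f is the empirical correction factor whose uncertainty the abstract
# analyzes; 0.6 here is an assumed, illustrative value.
def drop_volume_tension(V, rho, r, f, g=9.81):
    return rho * g * V / (2 * math.pi * r * f)

# Example: a 30 mm^3 water drop from a 1.5 mm radius tip (assumed values);
# result is in N/m, i.e. tens of mN/m for an aqueous interface.
gamma = drop_volume_tension(V=30e-9, rho=998.0, r=1.5e-3, f=0.6)
```

Because f multiplies the denominator, any systematic error in the tabulated correction propagates directly into γ, which is why the abstract's bound of under 0.04% on that systematic error is the key practical result.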
Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.
2004-01-01
Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of error can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.
Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data
NASA Technical Reports Server (NTRS)
Wilson, R. G.
1975-01-01
The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with ones determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.
NASA Astrophysics Data System (ADS)
Yildirim, Murat; Okutucu-Özyurt, Tuba; Dursunkaya, Zafer
2016-11-01
Fiber optic interferometry has been used to detect small displacements in diverse applications. Counting the number of fringes in fiber-optic interferometry is challenging due to the external effects induced in dynamic systems. In this paper, a novel interference fringe counting technique is developed to convert the intensity of interference data into displacements in the range of micrometers to millimeters while simultaneously resolving external dynamic effects. This technique consists of filtering the raw experimental data, converting the filtered optical interference data into displacements, and resolving dynamic effects of the experimental system. Filtering the raw data is performed in time using the moving average method with a window size of 400 data points. The filtered optical data is further converted into displacement by calculating the relative phase difference of each data point with respect to local maximum and minimum points. Next, a linear curve-fit is subtracted from the calculated displacement curve to reveal dynamic effects. The straightness error of the lead-screw driven stage, the dynamics of the stepper motor, and the profile of the reflective surfaces are investigated as the external dynamic effects. The straightness error is characterized by a 9th order polynomial function, and the effect of the stepper motor dynamics is fitted with a sinusoidal function. The remaining part of the measurement is the effect of roughness and waviness of the reflective surfaces. As explained in the experimental setup part, two fiber-optic probes detect vertical relative displacements in the range of 1-50 μm, and the encoder probe detects 13.5 mm horizontal displacement. Thus, this technique can detect dynamic displacements differing by three orders of magnitude with sub-micrometer resolution. The current methodology can be utilized in different applications which require measuring straightness error of lead-screw driven stages, large area surface profile of specimens
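The first processing step above, moving-average smoothing, can be sketched in a few lines: each sample is replaced by the mean of a fixed window of neighbouring points (the paper uses 400 points; a window of 5 is used here only to keep the example small). This is a minimal sketch of that one step, not the full fringe-counting pipeline.

```python
# Moving-average filter: output[i] is the mean of window consecutive
# input samples starting at i, so the output is shorter by window - 1.
def moving_average(data, window):
    out = []
    for i in range(len(data) - window + 1):
        out.append(sum(data[i:i + window]) / window)
    return out

smoothed = moving_average([1, 2, 3, 4, 5, 6, 7], window=5)  # [3.0, 4.0, 5.0]
```

The window length trades noise suppression against temporal resolution: a 400-point window heavily suppresses high-frequency noise but blurs any fringe feature shorter than the window.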
Integrated numerical methods for hypersonic aircraft cooling systems analysis
NASA Technical Reports Server (NTRS)
Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.
1992-01-01
Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady state solution is a successive point iterative method. The transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.
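The steady-state solver described above is a successive point iterative method, i.e. each node of the thermal network is repeatedly updated from its neighbours until the changes fall below a tolerance. The sketch below applies Gauss-Seidel-style point iteration to a small chain of nodes between two fixed-temperature ends with equal conductances; the network and values are illustrative assumptions, not the NASA code's model.

```python
# Successive point iteration on a 1D thermal network: interior node i sits
# between two equal conductances, so its steady temperature is the mean of
# its neighbours; ends are held at fixed temperatures.
def solve_thermal_chain(T_left, T_right, n_interior, tol=1e-10):
    T = [0.0] * n_interior
    while True:
        max_change = 0.0
        for i in range(n_interior):
            left = T_left if i == 0 else T[i - 1]
            right = T_right if i == n_interior - 1 else T[i + 1]
            new = 0.5 * (left + right)
            max_change = max(max_change, abs(new - T[i]))
            T[i] = new          # in-place update: Gauss-Seidel sweep
        if max_change < tol:
            return T

# Three interior nodes between 100-degree and 0-degree boundaries
# converge to the linear profile [75, 50, 25].
T = solve_thermal_chain(T_left=100.0, T_right=0.0, n_interior=3)
```

Using the freshly updated neighbour within the same sweep (rather than the previous iterate) is what distinguishes this successive point iteration from a Jacobi sweep and typically roughly halves the iteration count.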
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J
2014-12-10
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk.
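Regression calibration, one of the three correction approaches compared above, replaces the error-prone variable W = X + U by its best prediction E[X|W] before fitting the outcome model. The simulation below shows the attenuation that the naive fit suffers and how calibration removes it; the sample size, variances, and the assumption that the reliability ratio is known are all illustrative, and this linear sketch is far simpler than the paper's mediation setting.

```python
import numpy as np

# Simulate a true variable X, a mis-measured version W = X + U, and an
# outcome Y depending on X with slope 2.
rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(0.0, 1.0, n)
W = X + rng.normal(0.0, 1.0, n)        # classical error, var(U) = 1
Y = 2.0 * X + rng.normal(0.0, 0.1, n)

naive = np.polyfit(W, Y, 1)[0]          # attenuated toward 0 (about 1.0)
lam = np.var(X) / (np.var(X) + 1.0)     # reliability ratio (known here)
X_hat = lam * W                          # E[X|W] with zero means
calibrated = np.polyfit(X_hat, Y, 1)[0]  # recovers about 2.0
```

The abstract's warning about "unintuitive directions" of bias refers to non-linear models with interactions, where this simple attenuation picture no longer holds; there SIMEX or method-of-moments corrections may be preferred.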
Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.
2014-01-01
The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.
NASA Astrophysics Data System (ADS)
Ye, R. L.; Guo, Z. Z.; Liu, R. Y.; Liu, J. N.
2016-11-01
An energy storage system (ESS) in a wind farm can effectively compensate for the fluctuations of wind power, but how to determine the size of the ESS is an urgent, unsolved problem. A novel method is proposed for designing the optimal size of an ESS considering wind power uncertainty. The approach uses a non-parametric estimation method to analyse the wind power forecast error (WPFE) and the cumulative wind power deviation (CWPD) within the scheduling period. A cost-benefit analysis model is then established to obtain the optimal size of the ESS based on the analysis of the WPFE and CWPD. A series of wind farm data from California is used as numerical cases, showing that the proposed algorithm is feasible and performs well for optimal ESS sizing in wind farms.
One active debris removal control system design and error analysis
NASA Astrophysics Data System (ADS)
Wang, Weilin; Chen, Lei; Li, Kebo; Lei, Yongjun
2016-11-01
The increasing expansion of debris presents a significant challenge to space safety and sustainability. To address it, active debris removal, usually involving a chaser performing autonomous rendezvous with the targeted debris to be removed, is a feasible solution. In this paper, we explore a mid-range autonomous rendezvous control system based on augmented proportional navigation (APN), establishing a three-dimensional kinematic equation set constructed in a rotating coordinate system. In APN, feedback control is applied along the line of sight (LOS), so analytical solutions for the LOS rate and relative motion are obtained. To evaluate the effectiveness of the control system, we adopt the Zero-Effort-Miss (ZEM) as the performance index, whose uncertainty is directly determined by that of the LOS rate. Accordingly, we apply the covariance analysis (CA) method to analyze the propagation of the LOS rate uncertainty. We find that the accuracy of the control system can be verified even under uncertainty, and that the CA method is drastically more computationally efficient than the nonlinear Monte Carlo method. Additionally, to demonstrate the superiority of the system, we discuss further simulation cases showing the robustness and feasibility of the proposed APN.
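The Zero-Effort-Miss index used above has a compact definition: it is the miss distance that would result if neither vehicle accelerated for the remaining time-to-go, ZEM = r + v·t_go, with r and v the relative position and velocity. The vectors below are illustrative assumptions chosen so the encounter is exactly on a collision course.

```python
import numpy as np

# Zero-Effort-Miss: the predicted miss vector under zero future control,
# given relative position r, relative velocity v, and time-to-go t_go.
def zero_effort_miss(r, v, t_go):
    return r + v * t_go

r = np.array([1000.0, 200.0, -50.0])   # relative position, m (assumed)
v = np.array([-10.0, -2.0, 0.5])       # relative velocity, m/s (assumed)
zem = zero_effort_miss(r, v, t_go=100.0)  # zero vector: collision course
```

Because ZEM is linear in r and v, uncertainty in the LOS rate maps directly onto ZEM uncertainty, which is why the covariance analysis of the LOS rate suffices to bound the miss distance.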
Numerical Methods for Harmonic Analysis on the Sphere
1981-03-01
numbers, much as a Monte Carlo type of analysis is conducted. The seeds were chosen widely apart, to ensure that the correlation between "trials" would be... estimate, where the sampling part is the result of a Monte Carlo-like approach, is much easier to obtain than the theoretical one that involves setting up... likely errors in the potential coefficients obtained from 10 x 1 mean anomalies using the quadratures formula [...]. The Monte Carlo
Dynamic error analysis based on flexible shaft of wind turbine gearbox
NASA Astrophysics Data System (ADS)
Liu, H.; Zhao, R. Z.
2013-12-01
In view of the asynchrony between excitation and response in the transmission system, a study of the system dynamic error caused by the sun axis suspended in the gearbox of a 1.5 MW wind turbine was carried out, considering the flexibility of components. First, a numerical recursive model was established using D'Alembert's principle; MATLAB was then used to simulate and analyze the model, which was verified against an equivalent system. The results show that the dynamic error is related not only to the inherent parameters of the system but also to the external load imposed on it. The modulus of the dynamic error is a linear superposition of a synchronization error component and a harmonic vibration component, and the latter can cause random fluctuations of the gears. However, the dynamic error can be partly compensated if the stiffness coefficient of the sun axis is increased, which benefits the stability and accuracy of the transmission system.
Recent advances in numerical analysis of structural eigenvalue problems
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
A wide range of eigenvalue problems encountered in practical structural engineering analyses is defined, in which the structures are assumed to be discretized by any suitable technique such as the finite-element method. A review of the usual numerical procedures for the solution of such eigenvalue problems is presented and is followed by an extensive account of recently developed eigenproblem solution procedures. Particular emphasis is placed on the new numerical algorithms and associated computer programs based on the Sturm sequence method. Eigenvalue algorithms developed for efficient solution of natural frequency and buckling problems of structures are presented, as well as some eigenvalue procedures formulated in connection with the solution of quadratic matrix equations associated with free vibration analysis of structures. A new algorithm is described for natural frequency analysis of damped structural systems.
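The Sturm sequence method emphasized above rests on a simple counting property: for a symmetric tridiagonal matrix, running the recurrence d_i = (a_i − σ) − b_{i−1}²/d_{i−1} and counting negative d_i gives the number of eigenvalues below the shift σ, which enables bisection for natural frequencies and buckling loads. The sketch below shows the count on a small tridiagonal test matrix; the matrix is an illustrative assumption, and a production code would guard against d_i hitting exactly zero.

```python
# Sturm sequence count for a symmetric tridiagonal matrix with diagonal
# `diag` and off-diagonal `off`: the number of negative pivots d_i equals
# the number of eigenvalues strictly below the shift sigma.
def count_eigs_below(diag, off, sigma):
    count, d = 0, 1.0
    for i in range(len(diag)):
        b2 = off[i - 1] ** 2 if i > 0 else 0.0
        d = (diag[i] - sigma) - b2 / d
        if d < 0:
            count += 1
    return count

# Matrix diag(2,2,2) with off-diagonals -1 has eigenvalues
# 2 - sqrt(2), 2, 2 + sqrt(2); two of them lie below sigma = 2.5.
n_below = count_eigs_below([2.0, 2.0, 2.0], [-1.0, -1.0], sigma=2.5)
```

Bisecting on σ with this count isolates any eigenvalue to arbitrary precision without ever forming the characteristic polynomial, which is what makes the method robust for the large structural problems discussed in the abstract.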
NASA Astrophysics Data System (ADS)
Sun, Hong; Wu, Qian-zhong
2013-09-01
To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses gyroscope sensor tracking error and random drift. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established on the basis of a Kalman filter algorithm, and the gyro output signals are filtered repeatedly with the Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead compensation and feed-forward links added to reduce the response lag to angle inputs; the feed-forward link makes the output follow the input closely, while the lead compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) monitor the servo motor state in real time: the module gathers video signals and transmits them wirelessly to the host computer, which displays the motor's running state in a Visual Basic 6.0 window. The main error sources are also analyzed in detail; quantitative analysis of the errors contributed by the bandwidth and the gyro sensor makes the proportion of each error in the total more transparent and consequently helps reduce the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
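The defining feature of the multiplicative error models above is that the noise scales with the true value, y = x(1 + e), so large measurements carry proportionally larger absolute errors, unlike the additive case. The simulation below illustrates this with assumed values and 1% relative noise; it is a sketch of the error structure only, not of the paper's least squares adjustments.

```python
import numpy as np

# Multiplicative errors: y = x * (1 + e) with e ~ N(0, 0.01^2), so the
# absolute error standard deviation grows in proportion to the true value.
rng = np.random.default_rng(42)
x = np.array([1.0, 10.0, 100.0, 1000.0])     # true values (assumed)
e = rng.normal(0.0, 0.01, (5000, x.size))    # 1% relative noise
y = x * (1.0 + e)
abs_err_sd = y.std(axis=0)                    # roughly 0.01 * x
```

Treating such data as if the errors were additive, as the abstract notes for LiDAR-based DEMs, misweights the large measurements and biases downstream quantities like the landslide volume estimate.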
Theoretical Implications of an Error Analysis of Second Language Phonology Production.
ERIC Educational Resources Information Center
Altenberg, Evelyn P.; Vago, Robert M.
1983-01-01
Investigates second language phonology (English) of two native Hungarian speakers. Finds evidence for phonetic and phonological transfer but argues that there are limitations on what can be transferred. Contrasts error analysis approach with autonomous system analysis and concludes that each provides unique information and should be used together…
The design and analysis of single flank transmission error tester for loaded gears
NASA Technical Reports Server (NTRS)
Bassett, Duane E.; Houser, Donald R.
1987-01-01
To strengthen the understanding of gear transmission error and to verify mathematical models that predict it, a test stand that will measure the transmission error of gear pairs under design loads has been investigated. While most transmission error testers have been used to test gear pairs under unloaded conditions, the goal of this report was to design and perform dynamic analysis of a unique tester with the capability of measuring the transmission error of gears under load. This test stand will have the capability to continuously load a gear pair at torques up to 16,000 in-lb at shaft speeds from 0 to 5 rpm. Error measurement will be accomplished with high resolution optical encoders and the accompanying signal processing unit from an existing unloaded transmission error tester. Input power to the test gear box will be supplied by a dc torque motor while the load will be applied with a similar torque motor. A dual input, dual output control system will regulate the speed and torque of the system. This control system's accuracy and dynamic response were analyzed, and it was determined that proportional plus derivative speed control is needed in order to provide the precisely constant torque necessary for error-free measurement.
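Transmission error as measured by such an encoder pair has a one-line definition: TE = θ_out − θ_in/ratio, the deviation of the output gear from its ideal kinematic position. The sketch below evaluates it for an assumed 3:1 reduction with the output lagging its ideal position by 0.001 rad; the numbers are illustrative, not from the test stand.

```python
import math

# Transmission error from input/output shaft angles (radians): the
# difference between the actual output angle and the kinematically
# ideal one, theta_in / ratio.
def transmission_error(theta_in, theta_out, ratio):
    return theta_out - theta_in / ratio

# 3:1 reduction; output lags ideal position by 0.001 rad -> TE = -0.001
te = transmission_error(theta_in=math.pi,
                        theta_out=math.pi / 3 - 0.001,
                        ratio=3.0)
```

Because TE is a small difference of two large angles, encoder resolution dominates the measurement error budget, which is why the tester relies on high-resolution optical encoders.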
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
Low-frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for low-frequency error based on a geometric calibration field. First, we introduce the overall flow of on-orbit low-frequency error analysis and calibration, which includes detection of the optical-axis angle variation of the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Second, we use the optical-axis angle change detection method to analyze the law of low-frequency error variation. Third, we use relative calibration and information fusion among the star sensors to unify the datum and output high-precision attitude. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model describes the law of low-frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.
Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.
Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D
2017-03-28
The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety.
Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System
Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou
2017-01-01
The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirement of the MFNS, the calibration of both intrinsic and extrinsic parameters of the system is assumed to be essential and pivotal. Hence, a novel method based on the geometrical constraints in object space, called checkerboard-fixed post-processing calibration (CPC), is proposed to solve the problem of simultaneously obtaining the intrinsic parameters of the cameras integrated in the MFNS and the transformation between the MFNS coordinate and the cameras’ coordinates. This method utilizes a two-axis turntable, and a prior alignment of the coordinates is needed. Theoretical derivation and practical operation of the CPC method are introduced. The calibration experiment results of the MFNS indicate that the extrinsic parameter accuracy of the CPC reaches 0.1° for each Euler angle and 0.6 mm for each position vector component (1σ). A navigation experiment verifies the calibration result and the performance of the MFNS. The MFNS is found to work properly, and the accuracy of the position vector components and Euler angle reaches 1.82 mm and 0.17° (1σ) respectively. The basic mechanism of the MFNS may be utilized as a reference for the design and analysis of multiple-camera systems. Moreover, the calibration method proposed has practical value owing to its convenience of use and its potential for integration into a toolkit. PMID:28327538
NASA Astrophysics Data System (ADS)
Kavetski, D.; Clark, M. P.; Fenicia, F.
2011-12-01
Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large-scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc., are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
Numerical bifurcation analysis of conformal formulations of the Einstein constraints
NASA Astrophysics Data System (ADS)
Holst, M.; Kungurtsev, V.
2011-12-01
The Einstein constraint equations have been the subject of study for more than 50 years. The introduction of the conformal method in the 1970s as a parametrization of initial data for the Einstein equations led to increased interest in the development of a complete solution theory for the constraints, with the theory for constant mean curvature (CMC) spatial slices and closed manifolds completely developed by 1995. The first general non-CMC existence result was established by Holst et al. in 2008, with extensions to rough data by Holst et al. in 2009, and to vacuum spacetimes by Maxwell in 2009. The non-CMC theory remains mostly open; moreover, recent work of Maxwell on specific symmetry models sheds light on fundamental nonuniqueness problems with the conformal method as a parametrization in non-CMC settings. In parallel with these mathematical developments, computational physicists have uncovered surprising behavior in numerical solutions to the extended conformal thin sandwich formulation of the Einstein constraints. In particular, numerical evidence suggests the existence of multiple solutions with a quadratic fold, and a recent analysis of a simplified model supports this conclusion. In this article, we examine this apparent bifurcation phenomenon in a methodical way, using modern techniques in bifurcation theory and in numerical homotopy methods. We first review the evidence for the presence of bifurcation in the Hamiltonian constraint in the time-symmetric case. We give a brief introduction to the mathematical framework for analyzing bifurcation phenomena, and then develop the main ideas behind the construction of numerical homotopy, or path-following, methods in the analysis of bifurcation phenomena. We then apply the continuation software package AUTO to this problem, and verify the presence of the fold with homotopy-based numerical methods. We discuss these results and their physical significance, which lead to some interesting remaining questions to
Error analysis in post linac to driver linac transport beam line of RAON
NASA Astrophysics Data System (ADS)
Kim, Chanmi; Kim, Eun-San
2016-07-01
We investigated the effects of magnet errors in the beam transport line connecting the post linac to the driver linac (P2DT) in the Rare Isotope Accelerator in Korea (RAON). The P2DT beam line is bent by 180 degrees to send the radioactive Isotope Separation On-line (ISOL) beams accelerated in Linac-3 to Linac-2. This beam line transports beams of 132Sn in the multiple charge states 45+, 46+ and 47+. The P2DT beam line includes 42 quadrupole, 4 dipole and 10 sextupole magnets. We evaluate the effects of errors on the trajectory of the beam by using the TRACK code, which includes the translational and the rotational errors of the quadrupole, dipole and sextupole magnets in the beam line. The purpose of this error analysis is to reduce the rate of beam loss in the P2DT beam line. The distorted beam trajectories can be corrected by using six correctors and seven monitors.
Longwave surface radiation over the globe from satellite data - An error analysis
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Wilber, A. C.; Darnell, W. L.; Suttles, J. T.
1993-01-01
Errors have been analyzed for monthly-average downward and net longwave surface fluxes derived on a 5-deg equal-area grid over the globe, using a satellite technique. Meteorological data used in this technique are available from the TIROS Operational Vertical Sounder (TOVS) system flown aboard NOAA's operational sun-synchronous satellites. The data used are for February 1982 from NOAA-6 and NOAA-7 satellites. The errors in the parametrized equations were estimated by comparing their results with those from a detailed radiative transfer model. The errors in the TOVS-derived surface temperature, water vapor burden, and cloud cover were estimated by comparing these meteorological parameters with independent measurements obtained from other satellite sources. Analysis of the overall errors shows that the present technique could lead to underestimation of downward fluxes by 5 to 15 W/sq m and net fluxes by 4 to 12 W/sq m.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as the failure rate, unavailability and mean time to failure (MTTF). According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
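The reliability figures named in the abstract (failure rate, unavailability, MTTF) follow from standard constant-failure-rate formulas. A minimal sketch, with hypothetical component rates rather than the measured Zynq-7010 data: for an OR-gate (series) combination the rates add, MTTF is the reciprocal of the combined rate, and steady-state unavailability follows from the rate and the mean time to repair.

```python
def series_failure(failure_rates):
    """OR-gate (series) combination with constant failure rates (per hour):
    the rates add, and MTTF is the reciprocal of the combined rate."""
    lam = sum(failure_rates)
    return lam, 1.0 / lam

def steady_unavailability(lam, mttr):
    """Steady-state unavailability for failure rate lam and mean time
    to repair mttr: q = lam*mttr / (1 + lam*mttr)."""
    return lam * mttr / (1.0 + lam * mttr)

# hypothetical component rates (per hour), not measured Zynq-7010 values
lam, mttf = series_failure([1e-6, 2e-6, 5e-7])
q = steady_unavailability(lam, mttr=24.0)
```

The same arithmetic underlies tools such as Isograph once the fault tree gates and base-event rates are specified.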
Asymptotic/numerical analysis of supersonic propeller noise
NASA Technical Reports Server (NTRS)
Myers, M. K.; Wydeven, R.
1989-01-01
An asymptotic analysis based on the Mach surface structure of the field of a supersonic helical source distribution is applied to predict thickness and loading noise radiated by high speed propeller blades. The theory utilizes an integral representation of the Ffowcs Williams-Hawkings equation in a fully linearized form. The asymptotic results are used for chordwise strips of the blade, while required spanwise integrations are performed numerically. The form of the analysis enables predicted waveforms to be interpreted in terms of Mach surface propagation. A computer code developed to implement the theory is described and found to yield results in close agreement with more exact computations.
Numerical analysis of heat transfer in the exhaust gas flow in a diesel power generator
NASA Astrophysics Data System (ADS)
Brito, C. H. G.; Maia, C. B.; Sodré, J. R.
2016-09-01
This work presents a numerical study of heat transfer in the exhaust duct of a diesel power generator. The analysis was performed using two different approaches: the Finite Difference Method (FDM) and the Finite Volume Method (FVM), the latter by means of a commercial computer software, ANSYS CFX®. In FDM, the energy conservation equation was solved taking into account the estimated velocity profile for fully developed turbulent flow inside a tube and literature correlations for heat transfer. In FVM, the mass conservation, momentum, energy and transport equations were solved for turbulent quantities by the K-ω SST model. In both methods, variable properties were considered for the exhaust gas composed of six species: CO2, H2O, H2, O2, CO and N2. The entry conditions for the numerical simulations were given by available experimental data. The results were evaluated for the engine operating under loads of 0, 10, 20, and 37.5 kW. Mesh and convergence tests were performed to determine the numerical error and uncertainty of the simulations. The results showed a trend of increasing temperature gradient with load increase. The general behaviour of the velocity and temperature profiles obtained by the numerical models was similar, with some divergence arising due to the assumptions made for the resolution of the models.
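The FDM approach described, marching the energy equation along the duct, can be illustrated with a minimal one-dimensional sketch; the lumped heat-loss coefficient and all numerical values below are hypothetical stand-ins for the study's variable-property gas model.

```python
import numpy as np

def duct_temperature(T_in, T_amb, k_loss, L, n=1000):
    """Forward-difference march of the 1-D energy balance
    dT/dx = -k_loss * (T - T_amb), where k_loss lumps U*P/(mdot*cp)."""
    dx = L / n
    T = np.empty(n + 1)
    T[0] = T_in
    for i in range(n):
        T[i + 1] = T[i] - dx * k_loss * (T[i] - T_amb)
    return T

# hypothetical values: 700 K gas entering a 2 m duct, 300 K surroundings
T = duct_temperature(T_in=700.0, T_amb=300.0, k_loss=0.1, L=2.0)
# exact solution is T_amb + (T_in - T_amb) * exp(-k_loss * x), so the
# FD result can be checked against it for mesh-convergence testing
```

Refining `n` and comparing against the exponential solution is exactly the kind of mesh/convergence test the abstract mentions.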
An analytical method for low-low satellite-to-satellite tracking (ll-SST) error analysis
NASA Astrophysics Data System (ADS)
Cai, Lin; Zhou, Zebing; Bai, Yanzheng
2014-05-01
The conventional methods of error analysis for low-low satellite-to-satellite tracking (ll-SST) missions are mainly based on the least-squares (LS) method, which addresses the whole effect of measurement errors and estimates the resolution of gravity field models mainly from a numerical point of view. A direct analytical expression between the power spectral density of the ll-SST measurements and the spherical harmonic coefficients of the Earth's gravity model is derived based on the relationship between temporal frequencies and spherical harmonics. In this study much effort has been put into the establishment of the observation equation, which is derived from linear perturbation theory and control theory, and the computation of the average power acceleration in the north direction with respect to a local north-oriented frame, which relates to the orthonormalization of derivatives of the Legendre functions. This method provides a physical insight into the relation between mission parameters, instrument parameters and gravity field parameters. In contrast, the least-squares method is mainly based on a mathematical viewpoint. The result explicitly expresses the relationship, which enables us to estimate the parameters of ll-SST missions quantitatively and directly, especially for analyzing the frequency characteristics of measurement noise. By taking advantage of the analytical expression, we discuss the effects of range, range-rate and non-conservative force measurement errors on the gravity field recovery.
Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables
NASA Technical Reports Server (NTRS)
Fenyes, Peter A.; Lust, Robert V.
1989-01-01
Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations for the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods for use with complex finite element formulations and facilitate their implementation into structural optimization programs using general finite element analysis codes, the semi-analytic method was developed. In this method the matrix ∂K/∂b_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method is dependent on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. The accuracy of the semi-analytic method is investigated. A general framework was developed for the error analysis, and it is then shown analytically that the errors in the method are entirely accounted for by errors in ∂K/∂b_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
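The semi-analytic idea, approximating ∂K/∂b_i by finite differences and propagating it to displacement sensitivities via du/db = -K⁻¹(∂K/∂b)u, can be sketched on a toy two-spring stiffness matrix; this is an illustrative stand-in, not the paper's finite element formulation.

```python
import numpy as np

def stiffness(b):
    """Toy 2-DOF stiffness matrix with one design variable b (the middle
    spring constant); illustrative only."""
    return np.array([[1.0 + b, -b],
                     [-b, 1.0 + b]])

def dK_db(K_func, b, h=1e-6):
    """Semi-analytic core: forward-difference approximation of dK/db.
    Accuracy depends on the finite difference parameter h."""
    return (K_func(b + h) - K_func(b)) / h

b, f = 2.0, np.array([1.0, 0.0])
K = stiffness(b)
u = np.linalg.solve(K, f)          # static displacements from K u = f
dK = dK_db(stiffness, b)
du = -np.linalg.solve(K, dK @ u)   # displacement sensitivity du/db
```

Because the toy K is linear in b, the forward difference here is nearly exact; for realistic shape variables the choice of h drives the errors the abstract discusses.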
[Systemic error analysis as a key element of clinical risk management].
Bartz, Hans-Jürgen
2015-01-01
Systemic error analysis plays a key role in clinical risk management. This includes all clinical and administrative activities which identify, assess and reduce the risks of damage to patients and to the organization. Clinical risk management is an integral part of quality management. This is also the policy of the Federal Joint Committee (Gemeinsamer Bundesausschuss, G-BA) on the fundamental requirements of an internal quality management. The goal of all activities is to improve the quality of medical treatment and patient safety. Primarily this is done by a systemic analysis of incidents and errors. A results-oriented systemic error analysis needs an open and unprejudiced corporate culture. Errors have to be transparent and measures to improve processes have to be taken. Disciplinary action on staff must not be part of the process. If these targets are met, errors and incidents can be analyzed and the process can create added value for the organization. There are some proven instruments to achieve that. This paper discusses in detail the error and risk analysis (ERA), which is frequently used in German healthcare organizations. The ERA goes far beyond the detection of problems due to faulty procedures. It focuses on the analysis of the following contributory factors: patient factors, task and process factors, individual factors, team factors, occupational and environmental factors, psychological factors, organizational and management factors and institutional context. Organizations can only learn from mistakes by analyzing these factors systemically and developing appropriate corrective actions. This article describes the fundamentals and implementation of the method at the University Medical Center Hamburg-Eppendorf.
Numerical Analysis on Double Dome Stretching Tests of Woven Composites
NASA Astrophysics Data System (ADS)
Lee, Wonoh; Cao, Jian; Chen, Julie; Sherwood, James
2007-04-01
As a result of international cooperative benchmark work, material characterization of woven fabric reinforced composites has been examined to better understand their mechanical properties and to provide process design information for numerical analysis. Here, in order to predict the thermo-forming behavior of woven composites, double dome stretching tests have been numerically performed for the balanced plain weave. To account for the change of fiber orientation under large deformation, the non-orthogonal constitutive model has been utilized and nonlinear friction behavior is incorporated in the simulation. Also, equivalent material properties based on the contact status have been used for the thermo-stamping process. Blank draw-in, punch force history and fiber orientation after forming will be reported.
Numerical Ergonomics Analysis in Operation Environment of CNC Machine
NASA Astrophysics Data System (ADS)
Wong, S. F.; Yang, Z. X.
2010-05-01
The performance of an operator is affected by the operation environment [1]. Moreover, a poor operation environment may cause health problems for the operator [2]. Physical and psychological considerations are the two main factors that affect operator performance under different operation environment conditions. In this paper, scientific and systematic methods are applied to identify the pivotal elements among the physical and psychological factors. Five main factors are analyzed: light, temperature, noise, air flow and space. A numerical ergonomics model has been built from the analysis results, which can support improved design of the operation environment. Moreover, the output of the numerical ergonomics model can provide safe, comfortable and more productive conditions for the operator.
1-D Numerical Analysis of ABCC Engine Performance
NASA Technical Reports Server (NTRS)
Holden, Richard
1999-01-01
The ABCC engine combines an air-breathing engine and a rocket engine into a single engine to increase the specific impulse over an entire flight trajectory. Except for the heat source, the basic operation of the ABCC is similar to the basic operation of the RBCC engine. The ABCC is intended to have a higher specific impulse than the RBCC for a single-stage Earth-to-orbit vehicle. Computational fluid dynamics (CFD) is a useful tool for the analysis of complex transport processes in various components of the ABCC propulsion system. The objective of the present research was to develop a transient 1-D numerical model, using the conservation of mass, linear momentum, and energy equations, that could be used to predict flow behavior throughout a generic ABCC engine following a flight path. At specific points during the development of the 1-D numerical model, a myriad of tests were performed to prove that the program produced consistent, realistic numbers that follow compressible flow theory for various inlet conditions.
Numeral-Incorporating Roots in Numeral Systems: A Comparative Analysis of Two Sign Languages
ERIC Educational Resources Information Center
Fuentes, Mariana; Massone, Maria Ignacia; Fernandez-Viader, Maria del Pilar; Makotrinsky, Alejandro; Pulgarin, Francisca
2010-01-01
Numeral-incorporating roots in the numeral systems of Argentine Sign Language (LSA) and Catalan Sign Language (LSC), as well as the main features of the number systems of both languages, are described and compared. Informants discussed the use of numerals and roots in both languages (in most cases in natural contexts). Ten informants took part in…
NASA Astrophysics Data System (ADS)
He, Jianbin; Yu, Simin; Cai, Jianping
2016-12-01
The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate Lyapunov exponents can be obtained by increasing the number of iterations, and the limits exist. However, due to the finite precision of computers and other reasons, the results may overflow, become unrecognizable, or be inaccurate, which can be stated as follows: (1) The number of iterations cannot be too large; otherwise, the simulation result will appear as an error message of NaN or Inf; (2) If the error message of NaN or Inf does not appear, then with increasing iterations all Lyapunov exponents will get close to the largest Lyapunov exponent, which leads to inaccurate calculation results; (3) From the viewpoint of numerical calculation, obviously, if the number of iterations is too small, then the results are also inaccurate. Based on the analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms via QR orthogonal decomposition and SVD orthogonal decomposition approaches so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
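The QR-based approach discussed above can be sketched as follows; this is the generic QR re-orthonormalization scheme applied to the Hénon map as a test case, not the paper's specific improved algorithm. Accumulating log|diag(R)| at every step, rather than multiplying Jacobians first, is precisely what avoids the overflow (Inf/NaN) problem described in the abstract.

```python
import numpy as np

def lyapunov_qr(step, x0, n_iter=5000, n_skip=100):
    """Estimate all Lyapunov exponents of a discrete-time map by QR
    re-orthonormalization: accumulate log|diag(R)| of the QR factors
    of the Jacobian products instead of forming the products directly."""
    x = np.asarray(x0, dtype=float)
    Q = np.eye(x.size)
    sums = np.zeros(x.size)
    for i in range(n_skip + n_iter):
        J, x = step(x)                 # Jacobian at x, then advance the orbit
        Q, R = np.linalg.qr(J @ Q)
        if i >= n_skip:                # discard the transient
            sums += np.log(np.abs(np.diag(R)))
    return sums / n_iter

def henon(x, a=1.4, b=0.3):
    """Henon map and its Jacobian (a standard chaotic test case)."""
    J = np.array([[-2.0 * a * x[0], 1.0], [b, 0.0]])
    return J, np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])

exps = lyapunov_qr(henon, [0.1, 0.1])  # largest exponent is positive (chaos)
```

A useful self-check: for the Hénon map the exponents must sum to ln|det J| = ln(b) at every step, so the returned pair should sum to ln(0.3) to machine precision.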
Lü, Li-hui; Liu, Wen-qing; Zhang, Tian-shu; Lu, Yi-huai; Dong, Yun-sheng; Chen, Zhen-yi; Fan, Guang-qiang; Qi, Shao-shuai
2015-07-01
Atmospheric aerosols have important impacts on human health, the environment and the climate system. Micro Pulse Lidar (MPL) is a new effective tool for detecting the horizontal distribution of atmospheric aerosol, and extinction coefficient inversion and error analysis are important aspects of the data processing. In order to detect the horizontal distribution of atmospheric aerosol near the ground, the slope and Fernald algorithms were both used to invert horizontal MPL data and the results were compared. The error analysis showed that the errors of the slope algorithm and the Fernald algorithm come mainly from the theoretical model and from certain assumptions, respectively. Although some problems still exist in these two horizontal extinction coefficient inversions, both can present the spatial and temporal distribution of aerosol particles accurately, and both correlate highly (95%) with a forward-scattering visibility sensor. Furthermore, relatively speaking, the Fernald algorithm is more suitable for the inversion of the horizontal extinction coefficient.
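The slope algorithm mentioned above exploits the fact that, for a horizontally homogeneous atmosphere, the range-corrected log signal is linear in range. A minimal sketch on synthetic, noise-free data (an idealized lidar equation with constant backscatter; real MPL retrievals must also handle noise, overlap and calibration):

```python
import numpy as np

def slope_extinction(r, P):
    """Slope method: for a homogeneous path, S(r) = ln(P * r^2) is linear
    in r with slope -2*sigma; fit a line and return the extinction sigma."""
    S = np.log(P * r ** 2)
    slope, _intercept = np.polyfit(r, S, 1)
    return -slope / 2.0

# synthetic noise-free return with sigma = 0.1 km^-1, constant backscatter
r = np.linspace(1.0, 5.0, 50)                # range, km
sigma_true = 0.1
P = np.exp(-2.0 * sigma_true * r) / r ** 2   # idealized lidar equation
sigma = slope_extinction(r, P)
```

The homogeneity assumption is exactly where the abstract locates the slope method's model error; the Fernald algorithm relaxes it at the cost of a boundary-value assumption.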
EAC: A program for the error analysis of STAGS results for plates
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.
1989-01-01
A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example of application of the code is presented, and instructions on its usage on the Cyber and the VAX machines are provided.
Numerical Analysis of a Radiant Heat Flux Calibration System
NASA Technical Reports Server (NTRS)
Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.
1998-01-01
A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
Errors in reduction methods. [in dynamic analysis of multi-degree of freedom systems
NASA Technical Reports Server (NTRS)
Utku, S.; Salama, M.; Clemente, J. L. M.
1985-01-01
A mathematical basis is given for comparing the relative merits of various techniques used to reduce the order of large linear and nonlinear dynamics problems during their numerical integration. In such techniques as Guyan-Irons, path derivatives, selected eigenvectors, Ritz vectors, etc., the nth-order initial value problem {ẏ = f(y) for t > 0, y(0) given} is typically reduced to the mth-order (m ≪ n) problem {ż = g(z) for t > 0, z(0) given} by the transformation y = Pz, where P changes from technique to technique. This paper gives an explicit approximate expression for the reduction error e_i in terms of P and the Jacobian of f. It is shown that: (a) reduction techniques are more accurate when the time rate of change of the response y is relatively small; (b) the change in response between two successive stations contributes to the errors at future stations after the change in response is transformed by a filtering matrix H, defined in terms of P; (c) the error committed at a station propagates to future stations by a mixing and scaling matrix G, defined in terms of P, the Jacobian of f, and the time increment h. The paper discusses the conditions under which the reduction errors may be minimized and gives guidelines for selecting the reduction basis vectors, i.e., the columns of P.
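The reduction y = Pz can be sketched with a Galerkin-type projection: for a basis P with orthonormal columns, the reduced right-hand side is g(z) = Pᵀ f(Pz). The toy diagonal linear system below is purely illustrative, not one of the techniques compared in the paper.

```python
import numpy as np

def reduce_rhs(f, P):
    """Galerkin-type reduction of y' = f(y): with an orthonormal basis
    P (n x m, P^T P = I), the reduced system is z' = P^T f(P z), y ~ P z."""
    def g(z):
        return P.T @ f(P @ z)
    return g

n = 10
A = -np.diag(np.arange(1.0, n + 1.0))   # stable diagonal test system y' = A y
P = np.eye(n)[:, :2]                    # keep the two slowest modes
g = reduce_rhs(lambda y: A @ y, P)

z = np.ones(2)
h = 0.01
z = z + h * g(z)                        # one explicit-Euler step of z' = g(z)
```

Here the basis is exact for the retained modes, so the reduction error vanishes; with a poorly chosen P, the neglected dynamics feed back through the filtering and propagation matrices the abstract calls H and G.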
Broder, Joshua S; Fox, James W; Milne, Judy; Theiling, Brent Jason; White, Ann
2016-04-01
Medical errors are commonly multifactorial, with adverse clinical consequences often requiring the simultaneous failure of a series of protective layers, termed the Swiss Cheese model. Remedying and preventing future medical errors requires a series of steps, including detection, mitigation of patient harm, disclosure, reporting, root cause analysis, system modification, regulatory action, and engineering and manufacturing reforms. We describe this process applied to two cases of improper orientation of a Heimlich valve in a thoracostomy tube system, resulting in enlargement of an existing pneumothorax and the development of radiographic features of tension pneumothorax. We analyse elements contributing to the occurrence of the error and depict the implementation of reforms within our healthcare system and with regulatory authorities and the manufacturer. We identify features of the Heimlich valve promoting this error and suggest educational, design, and regulatory reforms for enhanced patient safety.
Analysis of wavelength error in spectral phase shifting of digital holographic microscopy
NASA Astrophysics Data System (ADS)
Wang, Jie; Zhang, Xiangchao; Zhang, Xiaolei; Xiao, Hong; Xu, Min
2016-10-01
Digital holographic microscopy is an attractive technology for precision measurement. Phase shifting is required to correctly reconstruct the measured surfaces from interferograms. The spectral phase shifting scheme, as an alternative approach to phase shifting, has drawn intensive attention in recent years. However, the wavelength modulated by the acousto-optic tunable filter (AOTF) is not sufficiently precise, and as a consequence severe measurement errors are caused. In this paper, an iterative calibration algorithm is proposed. It estimates the unknown wavelength errors in 3-step spectral phase shifting interferometry and then reconstructs the complex object wave. The actual wavelength is obtained by minimizing the difference between the measured and calculated intensities. Numerical examples have demonstrated that this algorithm can achieve very high accuracy over a wide range of wavelengths.
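For reference, the baseline 3-step phase-shifting reconstruction, with ideal shifts of 0, 2π/3 and 4π/3 and hence without the wavelength errors the paper corrects for, recovers the phase from three intensities via the textbook arctangent formula:

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Textbook 3-step phase-shifting formula for shifts 0, 2pi/3, 4pi/3:
    tan(phi) = sqrt(3)*(I3 - I2) / (2*I1 - I2 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (I3 - I2), 2.0 * I1 - I2 - I3)

# synthetic intensities I_k = A + B*cos(phi + delta_k) with ideal shifts
phi_true, A, B = 0.7, 2.0, 1.0
shifts = np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])
I1, I2, I3 = A + B * np.cos(phi_true + shifts)
phi = three_step_phase(I1, I2, I3)
```

In spectral phase shifting the actual shifts depend on the (imprecise) AOTF wavelength, which is why the formula above degrades and an iterative wavelength calibration such as the paper's is needed.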
A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers
NASA Technical Reports Server (NTRS)
Green, R. N.; Hoffman, L. H.; Young, G. R.
1972-01-01
A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.
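The Monte Carlo approach described, sampling off-nominal initial orbits from a covariance and propagating each to the final orbit, can be sketched generically; the plane-rotation map below is a hypothetical stand-in for the program's finite-thrust transfer-trajectory propagation.

```python
import numpy as np

def monte_carlo_final_cov(propagate, x_nom, P0, n_samples=20000, seed=1):
    """Monte Carlo error analysis: draw off-nominal initial states from
    covariance P0, propagate each through the (possibly nonlinear) map,
    and estimate the covariance of the resulting final states."""
    rng = np.random.default_rng(seed)
    X0 = rng.multivariate_normal(x_nom, P0, size=n_samples)
    Xf = np.array([propagate(x) for x in X0])
    return np.cov(Xf, rowvar=False)

# hypothetical stand-in for the transfer propagation: a plane rotation
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P0 = np.diag([4.0, 1.0])                       # initial-state covariance
Pf = monte_carlo_final_cov(lambda x: R @ x, np.zeros(2), P0)
```

For a linear map the sampled result should match the analytic propagation R P0 Rᵀ, which is a convenient sanity check before substituting a nonlinear targeted-trajectory propagator.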
On the relationship between anxiety and error monitoring: a meta-analysis and conceptual framework.
Moser, Jason S; Moran, Tim P; Schroder, Hans S; Donnellan, M Brent; Yeung, Nick
2013-01-01
Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, "small-to-medium" relationship with enhanced ERN (r = -0.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = -0.35) than those utilizing other measures of anxiety (r = -0.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur.
Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto
2006-01-01
We present a flow-down error analysis, from the radar system to topographic height errors, for bi-static single-pass SAR interferometry with a satellite tandem pair. Because the baseline length and baseline orientation evolve spatially and temporally under orbital dynamics, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations for the height and planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, as well as slope effects on height errors. The analysis also accounts for the non-overlapping spectra and non-overlapping bandwidth arising from differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and an X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.
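One term in such a flow-down can be written back-of-the-envelope: the first-order sensitivity of interferometric height to phase noise, δh = λ R sin(θ) δφ / (2π B⊥). The parameter values below are illustrative, not the mission values from the paper, and the bistatic geometry modifies the 2π factor.

```python
import numpy as np

# First-order height sensitivity of across-track InSAR to phase noise.
lam = 0.031                       # X-band wavelength (m)
R = 600e3                         # slant range (m)
theta = np.radians(35.0)          # look angle
B_perp = 300.0                    # perpendicular baseline (m)

dh_dphi = lam * R * np.sin(theta) / (2 * np.pi * B_perp)   # m of height per rad
phase_err = np.radians(10.0)      # interferometric phase noise (rad)
height_err = dh_dphi * phase_err  # resulting topographic height error (m)
```

With these numbers, 10 degrees of phase noise maps to roughly a meter of height error, which shows why the metrology and media terms in the sensitivity equations matter at DTED-level accuracies.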
NASA Astrophysics Data System (ADS)
Wang, Jian; Zhang, Fang; Song, Qiang; Zeng, Aijun; Zhu, Jing; Huang, Huijie
2015-04-01
With the constant shrinking of printable critical dimensions in photolithography, off-axis illumination (OAI) has become one of the effective resolution-enhancement methods for meeting these challenges. This, in turn, drives much stricter requirements, such as higher diffraction efficiency of the diffractive optical elements (DOEs) used in the OAI system. As the design algorithms that optimize the DOEs' phase profiles have improved, the fabrication process has become the main limiting factor leading to energy loss. Tolerance analysis is the general method for evaluating fabrication accuracy requirements, and it is especially useful for highly specialized deep-UV applications with small structures and tight tolerances. A subpixel DOE simulation model is applied to the tolerance analysis of DOEs by converting abstract fabrication structure errors into quantifiable subpixel phase matrices. With the proposed model, four kinds of fabrication error, including mis-etch, misalignment, feature size error, and feature rounding error, can be investigated. In the simulation experiments, systematic fabrication-error studies of five typical DOEs used in a 90-nm scanning photolithography illumination system are carried out. The results are valuable for high-precision DOE design and for fabrication process optimization.
The CarbonSat Earth Explorer 8 candidate mission: Error analysis for carbon dioxide and methane
NASA Astrophysics Data System (ADS)
Buchwitz, Michael; Bovensmann, Heinrich; Reuter, Maximilian; Gerilowski, Konstantin; Meijer, Yasjka; Sierk, Bernd; Caron, Jerome; Loescher, Armin; Ingmann, Paul; Burrows, John P.
2015-04-01
CarbonSat is one of two candidate missions for ESA's Earth Explorer 8 (EE8) satellite to be launched around 2022. The main goal of CarbonSat is to advance our knowledge on the natural and man-made sources and sinks of the two most important anthropogenic greenhouse gases (GHGs) carbon dioxide (CO2) and methane (CH4) on various temporal and spatial scales (e.g., regional, city and point source scale), as well as related climate feedbacks. CarbonSat will be the first satellite mission optimised to detect emission hot spots of CO2 (e.g., cities, industrialised areas, power plants) and CH4 (e.g., oil and gas fields) and to quantify their emissions. Furthermore, CarbonSat will deliver a number of important by-products such as Vegetation Chlorophyll Fluorescence (VCF, also called Solar Induced Fluorescence (SIF)) at 755 nm. These applications require appropriate retrieval algorithms which are currently being optimized and used for error analysis. The status of this error analysis will be presented based on the latest version of the CO2 and CH4 retrieval algorithm and taking the current instrument specification into account. An overview will be presented focusing on nadir observations over land. Focus will be on specific issues such as errors of the CO2 and CH4 products due to residual polarization related errors and errors related to inhomogeneous ground scenes.
Numerical MLPG Analysis of Piezoelectric Sensor in Structures
NASA Astrophysics Data System (ADS)
Staňák, Peter; Sládek, Ján; Sládek, Vladimír; Krahulec, Slavomír
2014-07-01
The paper deals with a numerical analysis of the electro-mechanical response of piezoelectric sensors subjected to an external non-uniform displacement field. The meshless method based on the local Petrov-Galerkin (MLPG) approach is utilized for the numerical solution of a boundary value problem for the coupled electro-mechanical fields that characterize the piezoelectric material. The sensor is modeled as a 3-D piezoelectric solid. The transient effects are not considered. Using the present MLPG approach, the assumed solid of the cylindrical shape is discretized with nodal points only, and a small spherical subdomain is introduced around each nodal point. Local integral equations constructed from the weak form of governing PDEs are defined over these local subdomains. A moving least-squares (MLS) approximation scheme is used to approximate the spatial variations of the unknown field variables, and the Heaviside unit step function is used as a test function. The electric field induced on the sensor is studied in a numerical example for two loading scenarios.
An improved numerical model for wave rotor design and analysis
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Wilson, Jack
1993-01-01
A numerical model has been developed which can predict both the unsteady flows within a wave rotor and the steady averaged flows in the ports. The model is based on the assumptions of one-dimensional, unsteady, and perfect gas flow. Besides the dominant wave behavior, it is also capable of predicting the effects of finite tube opening time, leakage from the tube ends, and viscosity. The relative simplicity of the model makes it useful for design, optimization, and analysis of wave rotor cycles for any application. This paper discusses some details of the model and presents comparisons between the model and two laboratory wave rotor experiments.
Numerical analysis of decoy state quantum key distribution protocols
Harrington, Jim W; Rice, Patrick R
2008-01-01
Decoy state protocols are a useful tool for many quantum key distribution systems implemented with weak coherent pulses, allowing significantly better secret bit rates and longer maximum distances. In this paper we present a method to numerically find optimal three-level protocols, and we examine how the secret bit rate and the optimized parameters are dependent on various system properties, such as session length, transmission loss, and visibility. Additionally, we show how to modify the decoy state analysis to handle partially distinguishable decoy states as well as uncertainty in the prepared intensities.
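A concrete piece of decoy-state analysis is the weak+vacuum lower bound on the single-photon yield Y₁ from the gains of the signal (μ) and decoy (ν) intensities, in the style of the standard Ma et al. formula. The channel model and all parameter values below are assumptions for illustration, not the paper's three-level optimization.

```python
import numpy as np

eta, Y0 = 0.1, 1e-5              # channel transmittance and background yield
mu, nu = 0.5, 0.1                # signal and decoy mean photon numbers (nu < mu)

def gain(m):
    """Overall gain of a phase-randomized coherent state of intensity m."""
    return Y0 + 1.0 - np.exp(-eta * m)

Qmu, Qnu = gain(mu), gain(nu)

# Weak+vacuum decoy-state lower bound on the single-photon yield Y1.
Y1_lower = (mu / (mu * nu - nu**2)) * (
    Qnu * np.exp(nu)
    - Qmu * np.exp(mu) * nu**2 / mu**2
    - (mu**2 - nu**2) / mu**2 * Y0
)
Y1_true = Y0 + eta               # exact single-photon yield in this channel model
```

In this lossy-channel model the bound sits just below the true single-photon yield, which is what makes decoy protocols effective against photon-number-splitting attacks.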
Diffraction patterns from multiple tilted laser apertures: numerical analysis
NASA Astrophysics Data System (ADS)
Kovalev, Anton V.; Polyakov, Vadim M.
2016-03-01
We propose a Rayleigh-Sommerfeld-based method for numerically calculating the near- and far-field diffraction patterns of multiple tilted apertures. The method is based on an iterative procedure of fast-Fourier-transform-based circular convolution of the initial complex field amplitude distribution with an impulse response function modified to account for the mutual tilt of the aperture and observation planes. The method is computationally efficient, agrees well with experimental diffraction patterns, and can be applied to the analysis of spatial noise arising in master oscillator power amplifier laser systems. A diffraction simulation for a Phobos-Ground laser rangefinder amplifier is demonstrated as an example.
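The FFT circular-convolution building block that such methods rest on is standard angular-spectrum propagation between parallel planes, sketched below. The tilt-modified impulse response is the paper's contribution and is not reproduced here; the grid and wavelength values are illustrative.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z between parallel planes."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2          # squared axial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)    # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, dx, lam = 256, 5e-6, 633e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 < (200e-6)**2).astype(complex)  # circular aperture
out = angular_spectrum(aperture, lam, dx, 1e-3)         # field 1 mm downstream
```

Since the transfer function is unit-modulus on the propagating band, the propagated field conserves energy, a useful correctness check before adding tilt handling.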
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed; these are powerful tools in Station design and analysis, but they prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.
ERIC Educational Resources Information Center
Isik, Cemalettin; Kar, Tugrul
2012-01-01
The present study aimed to make an error analysis in the problems posed by pre-service elementary mathematics teachers about fractional division operation. It was carried out with 64 pre-service teachers studying in their final year in the Department of Mathematics Teaching in an eastern university during the spring semester of academic year…
Advanced GIS Exercise: Performing Error Analysis in ArcGIS ModelBuilder
ERIC Educational Resources Information Center
Hall, Steven T.; Post, Christopher J.
2009-01-01
Knowledge of Geographic Information Systems is quickly becoming an integral part of the natural resource professionals' skill set. With the growing need of professionals with these skills, we created an advanced geographic information systems (GIS) exercise for students at Clemson University to introduce them to the concept of error analysis,…
Formulation and error analysis for a generalized image point correspondence algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar
1992-01-01
A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.
Utility of KTEA-3 Error Analysis for the Diagnosis of Specific Learning Disabilities
ERIC Educational Resources Information Center
Flanagan, Dawn P.; Mascolo, Jennifer T.; Alfonso, Vincent C.
2017-01-01
Through the use of excerpts from one of our own case studies, this commentary applied concepts inherent in, but not limited to, the neuropsychological literature to the interpretation of performance on the Kaufman Tests of Educational Achievement-Third Edition (KTEA-3), particularly at the level of error analysis. The approach to KTEA-3 test…
Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis
ERIC Educational Resources Information Center
Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara
2014-01-01
This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…
Error and Feedback: The Relationship between Content Analysis and Confidence of Response.
ERIC Educational Resources Information Center
Dempsey, John V.; Driscoll, Marcy P.
This study examined the relationship between discrimination error (determined by content analysis and tryout data) and confidence of response (determined by self report). Subjects were 63 undergraduate students enrolled in a biology class for nonmajors who received classroom expository information and read a text on the topic before they completed…
Diction and Expression in Error Analysis Can Enhance Academic Writing of L2 University Students
ERIC Educational Resources Information Center
Sajid, Muhammad
2016-01-01
Without proper linguistic competence in English language, academic writing is one of the most challenging tasks, especially, in various genre specific disciplines by L2 novice writers. This paper examines the role of diction and expression through error analysis in English language of L2 novice writers' academic writing in interdisciplinary texts…
The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications
Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em
2008-11-20
Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L^2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
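A one-dimensional sketch of the ME-PCM idea (not the authors' code): partition the parameter space into elements, prescribe a Gauss-Legendre rule on each, and combine the element contributions to obtain solution statistics. The test function and element counts are assumptions for illustration.

```python
import numpy as np

def me_pcm_mean(f, n_elems, n_pts):
    """Mean of f(xi) for xi ~ Uniform(-1, 1), computed element by element."""
    edges = np.linspace(-1.0, 1.0, n_elems + 1)
    nodes, weights = np.polynomial.legendre.leggauss(n_pts)
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # map the rule to [a, b]
        w = 0.5 * (b - a) * weights
        total += np.sum(w * f(x))                   # collocation on this element
    return total / 2.0                              # divide by the measure of [-1, 1]

coarse = me_pcm_mean(np.exp, n_elems=1, n_pts=3)
refined = me_pcm_mean(np.exp, n_elems=4, n_pts=3)
exact = np.sinh(1.0)                                # E[exp(xi)] in closed form
```

Refining the parameter-space mesh while holding the per-element rule fixed sharply reduces the error, mirroring the convergence behavior the paper proves for the degree of exactness of the element rule.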
ERIC Educational Resources Information Center
Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki
2013-01-01
In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…
Error analysis of marker-based object localization using a single-plane XRII
Habets, Damiaan F.; Pollmann, Steven I.; Yuan, Xunhua; Peters, Terry M.; Holdsworth, David W.
2009-01-15
The role of imaging and image guidance is increasing in surgery and therapy, including treatment planning and follow-up. Fluoroscopy is used for two-dimensional (2D) guidance or localization; however, many procedures would benefit from three-dimensional (3D) guidance or localization. Three-dimensional computed tomography (CT) using a C-arm mounted x-ray image intensifier (XRII) can provide high-quality 3D images; however, patient dose and the required acquisition time restrict the number of 3D images that can be obtained. C-arm based 3D CT is therefore limited in applications for x-ray based image guidance or dynamic evaluations. 2D-3D model-based registration, using a single-plane 2D digital radiographic system, does allow for rapid 3D localization. It is our goal to investigate, over a clinically practical range, the impact of x-ray exposure on the resulting range of 3D localization precision. In this paper it is assumed that the tracked instrument incorporates a rigidly attached 3D object with a known configuration of markers. A 2D image is obtained by a digital fluoroscopic x-ray system and corrected for XRII distortions (±0.035 mm) and mechanical C-arm shift (±0.080 mm). A least-squares projection-Procrustes analysis is then used to calculate the 3D position using the measured 2D marker locations. The effect of x-ray exposure on the precision of 2D marker localization and on 3D object localization was investigated using numerical simulations and x-ray experiments. The results show a nearly linear relationship between 2D marker localization precision and 3D localization precision. However, a significant amplification of error, nonuniformly distributed among the three major axes, occurs, and that is demonstrated. To obtain a 3D localization error of less than ±1.0 mm for an object with 20 mm marker spacing, the 2D localization precision must be better than ±0.07 mm. This requirement was met for all investigated nominal x-ray exposures at 28 cm FOV.
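A simulation in the spirit of the abstract: markers with ~20 mm spacing are perspective-projected onto a single image plane, 2D localization noise of ±0.07 mm is added, and the 3D position is re-estimated. The geometry values (marker layout, 1000 mm source-to-image distance) are illustrative, and a plain Gauss-Newton fit stands in for the projection-Procrustes analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

markers = np.array([[0, 0, 0], [20, 0, 0], [0, 20, 0], [0, 0, 20],
                    [20, 20, 0], [20, 0, 20]], dtype=float)
SID = 1000.0                                   # source-to-image distance (mm)

def project(t):
    p = markers + t                            # markers at object translation t
    return SID * p[:, :2] / (SID - p[:, [2]])  # single-plane x-ray projection

def localize(obs, t0):
    """Gauss-Newton estimate of the 3D translation from observed 2D markers."""
    t = t0.copy()
    for _ in range(20):
        r = (project(t) - obs).ravel()
        J = np.empty((r.size, 3))
        for j in range(3):                     # central-difference Jacobian
            d = np.zeros(3); d[j] = 1e-6
            J[:, j] = (project(t + d) - project(t - d)).ravel() / 2e-6
        t = t - np.linalg.lstsq(J, r, rcond=None)[0]
    return t

true_t = np.array([5.0, -3.0, 10.0])
clean = project(true_t)
errors = []
for _ in range(300):
    obs = clean + rng.normal(0.0, 0.07, clean.shape)   # 2D localization noise
    errors.append(localize(obs, true_t + rng.normal(0, 1, 3)) - true_t)
std = np.array(errors).std(axis=0)             # per-axis 3D localization precision
```

The out-of-plane (depth) axis shows a much larger localization spread than the in-plane axes, reproducing the nonuniform error amplification the abstract describes for single-plane geometries.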
NASA Technical Reports Server (NTRS)
Smith, D. R.; Leslie, F. W.
1984-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model, and then analyze the independent and cumulative effects of the different errors through it. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10⁻⁵° for each error source when the error amplitude is 0.1°. Detailed analyses indicate that the different error sources affect the pointing accuracy to varying degrees, with the major source being the incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative-effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotation angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.
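The rotational-error behavior can be checked with a first-order (thin-prism, paraxial) model: each prism deflects the beam by δ = (n − 1)α in the direction of its rotation angle, and a small rotational error moves that deflection along the tangential direction. This is a simplification of the paper's exact refraction-matrix model; n and α are illustrative values.

```python
import numpy as np

n_glass, alpha = 1.5, np.radians(2.0)
delta = (n_glass - 1) * alpha            # single-prism deviation (rad)

def worst_case_pointing_err(th1, th2, eps):
    """Worst case over the signs of the two prism rotational errors."""
    worst = 0.0
    for s1 in (-eps, eps):
        for s2 in (-eps, eps):
            # First-order pointing perturbation: each error of size s_i moves
            # the prism's deflection along its tangential direction.
            e = delta * (s1 * np.array([-np.sin(th1), np.cos(th1)]) +
                         s2 * np.array([-np.sin(th2), np.cos(th2)]))
            worst = max(worst, float(np.hypot(*e)))
    return worst

eps = np.radians(0.1)
e_same = worst_case_pointing_err(0.0, 0.0, eps)        # angle difference 0
e_opp = worst_case_pointing_err(0.0, np.pi, eps)       # angle difference pi
e_orth = worst_case_pointing_err(0.0, np.pi / 2, eps)  # angle difference pi/2
```

Even this simplified model reproduces the abstract's finding: the worst-case cumulative rotational error is 2δε when the angle difference is 0 or π, but only √2 δε when the difference is π/2.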
Antenna motion errors in bistatic SAR imagery
NASA Astrophysics Data System (ADS)
Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.
2015-06-01
Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.
Execution-Error Modeling and Analysis of the GRAIL Spacecraft Pair
NASA Technical Reports Server (NTRS)
Goodson, Troy D.
2013-01-01
The GRAIL spacecraft, Ebb and Flow (aka GRAIL-A and GRAIL-B), completed their prime mission in June and extended mission in December 2012. The excellent performance of the propulsion and attitude control subsystems contributed significantly to the mission's success. In order to better understand this performance, the Navigation Team has analyzed and refined the execution-error models for delta-v maneuvers. There were enough maneuvers in the prime mission to form the basis of a model update that was used in the extended mission. This paper documents the evolution of the execution-error models along with the analysis and software used.
Error analysis of coefficient-based regularized algorithm for density-level detection.
Chen, Hong; Pan, Zhibin; Li, Luoqing; Tang, Yuanyan
2013-04-01
In this letter, we consider a density-level detection (DLD) problem by a coefficient-based classification framework with [Formula: see text]-regularizer and data-dependent hypothesis spaces. Although the data-dependent characteristic of the algorithm provides flexibility and adaptivity for DLD, it leads to difficulty in generalization error analysis. To overcome this difficulty, an error decomposition is introduced from an established classification framework. On the basis of this decomposition, the estimate of the learning rate is obtained by using Rademacher average and stepping-stone techniques. In particular, the estimate is independent of the capacity assumption used in the previous literature.
NASA Astrophysics Data System (ADS)
Montanari, A.; Grossi, G.
2007-12-01
It is well known that uncertainty assessment in hydrological forecasting is a topical issue. Already in 1905 W.E. Cooke, who was issuing daily weather forecasts in Australia, stated: "It seems to me that the condition of confidence or otherwise form a very important part of the prediction, and ought to find expression". Uncertainty assessment in hydrology involves the analysis of multiple sources of error. Their contributions to the global uncertainty cannot be quantified independently, unless (a) one is willing to introduce subjective assumptions about the nature of the individual error components, or (b) independent observations are available for estimating input error, model error, parameter error and state error. An alternative approach, applied in this study and still requiring the introduction of some assumptions, is to quantify the global hydrological uncertainty in an integrated way, without attempting to quantify each independent contribution. This methodology can be applied in situations characterized by limited data availability and is therefore gaining increasing attention from end users. This work proposes a statistically based approach for assessing the global uncertainty in hydrological forecasting, by building a statistical model for the forecast error x_{t,d}, where t is the forecast time and d is the lead time. Accordingly, the probability distribution of x_{t,d} is inferred through a nonlinear multiple regression, depending on an arbitrary number of selected conditioning variables. These include the current forecast issued by the hydrological model, the past forecast error and internal state variables of the model. The final goal is to indirectly relate the forecast error to the sources of uncertainty, through a probabilistic link with the conditioning variables. Any statistical model is based on assumptions whose fulfilment is to be checked in order to assure the validity of the underlying theory.
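The regression idea can be sketched on synthetic data: regress the forecast error on conditioning variables (here the current forecast and the previous error) and derive a conditional uncertainty band. The data-generating process, the coefficients, and the use of a linear rather than nonlinear regression are all assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic conditioning variables and a "true" error-generating process.
n = 2000
forecast = rng.gamma(2.0, 50.0, n)               # current model forecast
past_err = rng.normal(0.0, 5.0, n)               # forecast error at the previous step
err = 0.05 * forecast + 0.4 * past_err + rng.normal(0.0, 1.0, n)

# Fit the statistical model of the forecast error on the conditioning variables.
X = np.column_stack([np.ones(n), forecast, past_err])
beta, *_ = np.linalg.lstsq(X, err, rcond=None)
sigma = (err - X @ beta).std(ddof=X.shape[1])    # residual standard error

# Conditional ~90% uncertainty band for a new forecast situation.
x_new = np.array([1.0, 120.0, -3.0])
mean_new = float(x_new @ beta)
band = (mean_new - 1.645 * sigma, mean_new + 1.645 * sigma)
```

The fitted coefficients recover the generating process, and the band width quantifies the global predictive uncertainty without attributing it to individual input, model, parameter or state errors.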
NASA Astrophysics Data System (ADS)
Corzo P, Gerald A.; Solomatine, Dimitri
2014-05-01
In operational flow forecasting, conceptual or process-based hydrological models are typically used, increasingly in combination with precipitation forecasts and complemented by data assimilation or data-driven error-corrector models. Alternatively, predictive data-driven models, alone or in ensembles, have been employed in various studies, which claim that they ensure high accuracy of flow forecasting; among these, artificial neural networks (ANNs) appear to be the most developed. In this paper a comparative analysis of different error correctors and ANN models is made to support the selection of operational models. We explore the performance of various model combinations forecasting single and multiple time steps: the HBV hydrological model with and without error correction, data-driven models (ANNs), and hybrid committee models integrating conceptual models and ANNs. The capabilities of each model at a single time step (simulation) as well as at multiple forecast horizons are represented in comparative graphs. Limitations of the meteorological forecasts are not considered in the hydrological forecast scenarios, so precipitation hindcast information was used as input to all models. The single-time-step forecast simulation of HBV has a 30 percent higher error than a one-day-ahead ANN model. However, for forecast horizons beyond 3 days, high variability in model accuracy is found, and the HBV hydrological model with an ANN error corrector clearly performs best. For forecasts up to two days, the committee and error-corrected models were best, followed by the ANN and the conceptual model without error correction. The conceptual HBV model alone performs best on long-term sequential or iterative forecasts.
Guilera, Georgina; Gómez-Benito, Juana; Hidalgo, Maria Dolores; Sánchez-Meca, Julio
2013-12-01
This article presents a meta-analysis of studies investigating the effectiveness of the Mantel-Haenszel (MH) procedure when used to detect differential item functioning (DIF). Studies were located electronically in the main databases, representing the codification of 3,774 different simulation conditions, 1,865 related to Type I error and 1,909 to statistical power. The homogeneity of effect-size distributions was assessed by the Q statistic. The extremely high heterogeneity in both error rates (I² = 94.70) and power (I² = 99.29), due to the fact that numerous studies test the procedure in extreme conditions, means that the main interest of the results lies in explaining the variability in detection rates. One-way analysis of variance was used to determine the effects of each variable on detection rates, showing that the MH test was more effective when purification procedures were used, when the data fitted the Rasch model, when test contamination was below 20%, and with sample sizes above 500. The results imply a series of recommendations for practitioners who wish to study DIF with the MH test. A limitation, one inherent to all meta-analyses, is that not all the possible moderator variables, or the levels of variables, have been explored. This serves to remind us of certain gaps in the scientific literature (i.e., regarding the direction of DIF or variances in ability distribution) and is an aspect that methodologists should consider in future simulation studies.
Direct numerical simulation and analysis of shock turbulence interaction
NASA Technical Reports Server (NTRS)
Lee, Sangsan; Lele, Sanjiva K.; Moin, Parviz
1991-01-01
Two kinds of linear analysis, rapid distortion theory (RDT) and linear interaction analysis (LIA), were used to investigate the effects of a shock wave on turbulence. Direct numerical simulations of the interaction of two-dimensional isotropic turbulence with a normal shock were also performed. The results from RDT and LIA are in good agreement for weak shock waves, where the effects of shock front curvature and shock front unsteadiness are not significant in producing vorticity. The linear analyses predict wavenumber-dependent amplification of the upstream one-dimensional energy spectrum, leading to a decrease in turbulence length scale through the interaction. Instantaneous vorticity fields show that vortical structures are enhanced while they are compressed in the shock-normal direction. Enstrophy amplification through the shock wave compares favorably with the results of the linear analyses.
Analysis of the orbit errors in the CERN accelerators using model simulation
Lee, M.; Kleban, S.; Clearwater, S.; Scandale, W.; Pettersson, T.; Kugler, H.; Riche, A.; Chanel, M.; Martensson, E.; Lin, In-Ho
1987-09-01
This paper describes the use of the PLUS program to find various types of machine and beam errors, such as quadrupole strength, dipole strength, beam position monitor (BPM) errors, energy profile, and beam launch. We refer to this procedure as the GOLD (Generic Orbit and Lattice Debugger) method, a general technique that can be applied to the analysis of errors in storage rings and transport lines. One useful feature of the method is that it analyzes segments of a machine at a time, so that its application and efficiency are independent of the size of the overall machine. Because the techniques are the same for all the types of problems it solves, the user need learn only how to find one type of error in order to use the program.
Analysis of the screw compressor rotors’ non-uniform thermal field effect on transmission error
NASA Astrophysics Data System (ADS)
Mustafin, T. N.; Yakupov, R. R.; Burmistrov, A. V.; Khamidullin, M. S.; Khisameev, I. G.
2015-08-01
The vibrational state of the screw compressor is largely dependent on the gearing of the rotors and on the possibility of angular backlash in the gears. The presence of the latter leads to a transmission error and is caused by the need for a downward bias of the actual profile in relation to the theoretical one. The loss of contact between rotors and, as a consequence, the current value of the quantity characterizing the transmission error are affected by a large number of different factors. In particular, a major influence on the amount of possible movement in the gearing is exerted by thermal deformations of the rotor and the housing parts in the working mode of the machine. The present work is devoted to the analysis of the thermal state of the screw oil-flooded compressor in operation and its impact on the transmission error and the possibility of the rotors losing contact during the operating cycle.
Numerical analysis of modified Central Solenoid insert design
Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; Titus, Peter
2015-06-21
The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, currents and mechanical strain. The US IPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagnetic simulations supported structural calculations, providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current, (2) temperature 4K, no current, (3) temperature 4K, current 60 kA direct charge, and (4) temperature 4K, current 60 kA reverse charge. Fatigue life assessment analysis was performed for the alternating conditions of: temperature 4K, no current, and temperature 4K, current 45 kA direct charge. Results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.
Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo
2014-01-01
This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors of RADAR neighbor matching localization can be an effective tool to explore alternative deployments of fingerprint-based neighbor matching localization systems in the future.
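The log-distance RSS model and the neighbor matching step can be sketched as follows. Access-point positions, path-loss constants, the RP grid spacing and the noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Log-distance path-loss model (the "linear logarithmic strength varying"
# model); p0 and the decay exponent are illustrative values.
def rss(p, aps, p0=-40.0, n_exp=3.0):
    d = np.linalg.norm(aps - p, axis=1)
    return p0 - 10.0 * n_exp * np.log10(np.maximum(d, 0.1))

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

# Linearly calibrated reference points on a grid, with their fingerprints.
xs = np.linspace(1.0, 9.0, 9)
rps = np.array([[x, y] for x in xs for y in xs])
fingerprints = np.array([rss(p, aps) for p in rps])

# Locate a target from a noisy RSS reading by nearest-neighbor matching
# in signal space.
target = np.array([4.3, 6.1])
reading = rss(target, aps) + rng.normal(scale=0.5, size=len(aps))
idx = np.argmin(np.linalg.norm(fingerprints - reading, axis=1))
estimate = rps[idx]
error = np.linalg.norm(estimate - target)
print(estimate, error)
```

Repeating this over many noise draws and RP layouts is exactly the kind of experiment from which the paper's statistical error relations are derived.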
A neighbourhood analysis based technique for real-time error concealment in H.264 intra pictures
NASA Astrophysics Data System (ADS)
Beesley, Steven T. C.; Grecos, Christos; Edirisinghe, Eran
2007-02-01
H.264's extensive use of context-based adaptive binary arithmetic or variable-length coding makes streams highly susceptible to channel errors, a common occurrence over networks such as those used by mobile devices. Even a single bit error will cause a decoder to discard all stream data up to the next fixed-length resynchronisation point; in the worst case, an entire slice is lost. In cases where retransmission and forward error correction are not possible, a decoder should conceal any erroneous data in order to minimise the impact on the viewer. Stream errors can often be spotted early in the decode cycle of a macroblock; if decoding is aborted, the unused processor cycles can instead be used to conceal errors at minimal cost, even as part of a real-time system. This paper demonstrates a technique that utilises Sobel convolution kernels to quickly analyse the neighbourhood surrounding erroneous macroblocks before performing a weighted multi-directional interpolation. This generates significantly improved statistical (PSNR) and visual (IEEE structural similarity) results when compared to the commonly used weighted pixel value averaging. Furthermore it is also computationally scalable, both during analysis and concealment, achieving maximum performance from the spare processing power available.
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.
Error analysis of tomographic reconstructions in the absence of projection data.
Shakya, Snehlata; Munshi, Prabhat
2015-06-13
Error estimates for tomographic reconstructions (using a Fourier transform-based algorithm) are available for cases where projection data are available. These data are used for reconstructions with different filter functions, and the reliability of these reconstructions can be checked as per the guidelines of those error estimates. There are cases where projection data are large (gigabytes or terabytes), so storage becomes an issue and only the reconstructed images are stored. Error estimation for such cases is presented here. Second-level projection data are calculated from the given reconstructed images ('first-level' images). These 'second-level' data are then used to generate 'second-level' reconstructed images. Different filter functions are employed to check the fidelity of these 'second-level' images. This inference is extended to first-level images in view of the characteristics of the convolution operator. The approach is validated with experimental data obtained by the X-ray micro-CT scanner installed at IIT Kanpur. Five specimens (of the same material) were scanned. Projection data are available in this case, so we have performed a comparative error estimate analysis for the 'first-level' reconstructions (data obtained from the CT machine) and the second-level reconstructions (data generated from the first-level reconstructions). We observe that both approaches show similar outcomes, which indicates that the error estimates can also be applied to images when projection data are not available.
Error Analysis in a Device to Test Optical Systems by Using Ronchi Test and Phase Shifting
Cabrera-Perez, Brasilia; Castro-Ramos, Jorge; Gordiano-Alvarado, Gabriel; Vazquez y Montiel, Sergio
2008-04-15
In optical workshops, the Ronchi test is used to determine the optical quality of a concave surface while it is being polished. The Ronchi test is one of the simplest and most effective methods for evaluating and measuring aberrations. In this work, we describe a device to test converging mirrors and lenses with either small or large F/numbers, using an LED (light-emitting diode) adapted as the illumination source for the Ronchi test. The LED used has a larger radiation angle than a common LED. The device uses external power supplies to obtain a well-stabilized intensity and avoid errors during the phase shift. The setup also has the advantage of automatic data input and output; this is possible because phase-shifting interferometry and a square Ronchi ruling with a variable-intensity LED were used. An error analysis of the different parameters involved in the Ronchi test was made; for example, we analyze the error in the phase shifting, the error introduced by the movement of the motor, misalignments of the x-, y- and z-axes of the surface under test, and the error in the period of the grating used.
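As a hedged illustration of the phase-shifting step, a standard four-step algorithm recovers the phase from four intensity frames shifted by π/2 each; an error in the motor-driven shift enters directly through these frames. The wavefront below is synthetic, not the authors' measured data.

```python
import numpy as np

# Four-step phase shifting: intensities I_k = A + B*cos(phi + k*pi/2),
# k = 0..3.  Since I0 - I2 = 2B*cos(phi) and I3 - I1 = 2B*sin(phi),
# the phase follows from phi = atan2(I3 - I1, I0 - I2).
x = np.linspace(0.0, 1.0, 64)
phi_true = 2.0 * np.pi * (0.3 * x + 0.1 * x**2)  # arbitrary test wavefront
A, B = 1.0, 0.5                                  # illustrative bias/contrast

frames = [A + B * np.cos(phi_true + k * np.pi / 2.0) for k in range(4)]
phi = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])

print(np.max(np.abs(phi - phi_true)))  # recovery error at machine precision
```

Perturbing the k*pi/2 shifts (to model motor error) and repeating the recovery gives a direct numerical handle on the phase-shift error term analyzed in the paper.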
Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid
NASA Technical Reports Server (NTRS)
VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)
1997-01-01
The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
Error analysis of deep sequencing of phage libraries: peptides censored in sequencing.
Matochko, Wadim L; Derda, Ratmir
2013-01-01
Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low abundant clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by Illumina method. Low theoretical complexity of this phage library, as compared to complexity of long genetic reads and genomes, allowed us to describe this library using convenient linear vector and operator framework. We describe a phage library as N × 1 frequency vector n = ||ni||, where ni is the copy number of the ith sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation to the library is an operator acting on n. Selection, amplification, or sequencing could be described as a product of a N × N matrix and a stochastic sampling operator (Sa). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of Sa and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = Sa IN, where IN is a N × N unity matrix. Any bias in sequencing changes IN to a nonunity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination or statistically significant downsampling, of specific reads during the sequencing process.
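The vector-and-operator description can be reproduced numerically: the library is a frequency vector n, sampling is a random diagonal matrix Sa, and censorship replaces the unity matrix IN by a diagonal CEN with zeros. The library size, sampling depth and censored set below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy library: N possible sequences with copy-number (frequency) vector n.
N = 1000
n = rng.poisson(lam=50, size=N).astype(float)

# Stochastic sampling operator Sa: a random diagonal matrix whose entries
# are the fractions of each clone surviving a sub-sampling step.
depth = 0.2  # sequence 20% of the library
sampled = rng.binomial(n.astype(int), depth)
Sa = np.diag(np.divide(sampled, n, out=np.zeros_like(n), where=n > 0))

# Censorship matrix CEN: identity with zeros for reads the sequencer drops,
# replacing IN in Seq = Sa @ IN.
censored = rng.choice(N, size=50, replace=False)
CEN = np.eye(N)
CEN[censored, censored] = 0.0

# Biased sequencing: Seq = Sa @ CEN acting on the frequency vector.
reads = Sa @ CEN @ n

assert np.all(reads[censored] == 0)  # censored clones yield no reads
print(reads.sum() / n.sum())         # read fraction near the sampling depth
```

In practice one would use the diagonal entries directly rather than dense N x N matrices, but the dense form mirrors the paper's operator notation.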
Analysis and reduction of tropical systematic errors through a unified modelling strategy
NASA Astrophysics Data System (ADS)
Copsey, D.; Marshall, A.; Martin, G.; Milton, S.; Senior, C.; Sellar, A.; Shelly, A.
2009-04-01
Systematic errors in climate models are usually addressed in a number of ways, but current methods often make use of model climatological fields as a starting point for model modification. This approach has limitations due to non-linear feedback mechanisms which occur over longer timescales and make the source of the errors difficult to identify. In a unified modelling environment, short-range (1-5 day) weather forecasts are readily available from NWP models with very similar dynamical and physical formulations to the climate models, but often increased horizontal (and vertical) resolution. Where such forecasts exhibit similar systematic errors to their climate model counterparts, there is much to be gained from combined analysis and sensitivity testing. For example, the Met Office Hadley Centre climate model HadGEM1 (Johns et al 2007) exhibits precipitation errors in the Asian summer monsoon, with too little rainfall over the Indian peninsula and too much over the equatorial Indian Ocean to the southwest of the peninsula (Martin et al., 2004). Examination of the development of precipitation errors in the Asian summer monsoon region in Met Office NWP forecasts shows that different parts of the error pattern evolve on different timescales. Excessive rainfall over the equatorial Indian Ocean to the southwest of the Indian peninsula develops rapidly, over the first day or two of the forecast, while a dry bias over the Indian land area takes ~10 days to develop. Such information is invaluable for understanding the processes involved and how to tackle them. Other examples of the use of this approach will be discussed, including analysis of the sensitivity of the representation of the Madden-Julian Oscillation (MJO) to the convective parametrisation, and the reduction of systematic tropical temperature and moisture biases in both climate and NWP models through improved representation of convective detrainment.
Numerical analysis of sheet cavitation on marine propellers, considering the effect of cross flow
NASA Astrophysics Data System (ADS)
Yari, Ehsan; Ghassemi, Hassan
2013-12-01
The research presented in this paper investigates the numerical analysis of sheet cavitation on a marine propeller. The method used is the boundary element method (BEM). Using Green's theorem, the velocity potential is expressed as an integral equation on the surface of the propeller using hyperboloid-shaped elements. Employing the boundary conditions, the potential is determined by solving the resulting system of equations. For the case study, a DTMB4119 propeller is analyzed with and without cavitating conditions. The pressure distribution and hydrodynamic performance curves of the propeller, as well as the cavity thickness obtained by the numerical method, are calculated and compared with experimental results. Specifically, in this article cavitation changes are investigated in both the radial and chord directions; thus, the effect of cross-flow variation on the formation and growth of sheet cavitation has been studied. According to the data obtained, there is better agreement, and less error, between the numerical results of the present method and the Fluent results than with the Hong Sun method. This confirms the accurate estimation of the detachment point and of the cavity change in the radial direction.
Birge, Jonathan R.; Kaertner, Franz X.
2008-06-15
We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.
Evaluation of parametric models by the prediction error in colorectal cancer survival analysis
Baghestani, Ahmad Reza; Gohari, Mahmood Reza; Orooji, Arezoo; Pourhoseingholi, Mohamad Amin; Zali, Mohammad Reza
2015-01-01
Aim: The aim of this study is to determine the factors influencing predicted survival time for patients with colorectal cancer (CRC) using parametric models and to select the best model by the prediction error technique. Background: Survival models are statistical techniques to estimate or predict the overall time up to specific events. Prediction is important in medical science, and the accuracy of prediction is determined by a measurement, generally based on loss functions, called the prediction error. Patients and methods: A total of 600 colorectal cancer patients admitted to the Cancer Registry Center of the Gastroenterology and Liver Disease Research Center, Taleghani Hospital, Tehran, were followed for at least 5 years and had complete information on the variables selected for this study. Body mass index (BMI), sex, family history of CRC, tumor site, stage of disease and histology of the tumor were included in the analysis. Survival times were compared by the log-rank test, and multivariate analysis was carried out using parametric models including log-normal, Weibull and log-logistic regression. For selecting the best model, the prediction error by apparent loss was used. Results: The log-rank test showed better survival for females, BMI more than 25, patients with early stage at diagnosis and patients with colon tumor site. The prediction error by apparent loss was estimated and indicated that the Weibull model was the best one for multivariate analysis. BMI and stage were independent prognostic factors, according to the Weibull model. Conclusion: In this study, according to the prediction error, Weibull regression showed a better fit. The prediction error can serve as a criterion to select the best model, with the ability to make predictions of prognostic factors in survival analysis.
Numerical analysis of electrically tunable aspherical optofluidic lenses.
2016-06-27
In this work, we use the numerical simulation platform Zemax to investigate the optical properties of electrically tunable aspherical liquid lenses, as we recently reported in an experimental study [
Mars gravity field error analysis from simulated radio tracking of Mars Observer
NASA Technical Reports Server (NTRS)
Smith, D. E.; Lerch, F. J.; Chan, J. C.; Chinn, D. S.; Iz, H. B.
1990-01-01
Results are presented on the analysis of the recovery of the Martian gravity field from tracking data in the presence of unmodeled error effects associated with different orbit orientations. The analysis was based on the mission plan for the Mars Observer (MO) radio tracking data from the Deep Space Network. From the analysis, a conservative estimate of the gravitational accuracy for the entire mission could be obtained. The results suggest that, because atmospheric drag is the dominant error source, the spacecraft orbit could possibly be raised in altitude without a significant loss of gravitational signal. A change in altitude would also alleviate the large effects seen in the spectrum at the satellite resonant orders.
The SIMEX approach to measurement error correction in meta-analysis with baseline risk as covariate.
Guolo, A
2014-05-30
This paper investigates the use of SIMEX, a simulation-based measurement error correction technique, for meta-analysis of studies involving the baseline risk of subjects in the control group as explanatory variable. The approach accounts for the measurement error affecting the information about either the outcome in the treatment group or the baseline risk available from each study, while requiring no assumption about the distribution of the true unobserved baseline risk. This robustness property, together with the feasibility of computation, makes SIMEX very attractive. The approach is suggested as an alternative to the usual likelihood analysis, which can provide misleading inferential results when the commonly assumed normal distribution for the baseline risk is violated. The performance of SIMEX is compared to the likelihood method and to the moment-based correction through an extensive simulation study and the analysis of two datasets from the medical literature.
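The SIMEX idea is concrete enough to sketch: add extra simulated measurement error at increasing levels lambda, observe how the estimate degrades, then extrapolate back to lambda = -1 (no measurement error). The linear model, noise levels and quadratic extrapolant below are illustrative assumptions, simpler than the baseline-risk meta-analysis setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# True model y = 2*x + noise; x is observed with measurement error.
n, beta, sig_u = 5000, 2.0, 1.0
x = rng.normal(size=n)
y = beta * x + rng.normal(scale=0.5, size=n)
w = x + rng.normal(scale=sig_u, size=n)  # error-prone covariate

def slope(u, v):
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

naive = slope(w, y)  # attenuated: roughly beta / (1 + sig_u**2) = 1.0

# SIMEX: add *extra* error at levels lam, average over B replicates,
# then extrapolate the slope back to lam = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
est = []
for lam in lams:
    reps = [slope(w + rng.normal(scale=np.sqrt(lam) * sig_u, size=n), y)
            for _ in range(B)]
    est.append(np.mean(reps))
quad = np.polyfit(lams, est, 2)   # quadratic extrapolant
simex = np.polyval(quad, -1.0)

print(naive, simex)  # simex lies closer to beta = 2.0 than the naive slope
```

The quadratic extrapolant still undercorrects here; as in the paper, the choice of extrapolating function is part of the method's tuning, but no distribution for the true covariate is ever assumed.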
Mean-square convergence analysis of ADALINE training with minimum error entropy criterion.
Chen, Badong; Zhu, Yu; Hu, Jinchun
2010-07-01
Recently, the minimum error entropy (MEE) criterion has been used as an information theoretic alternative to traditional mean-square error criterion in supervised learning systems. MEE yields nonquadratic, nonconvex performance surface even for adaptive linear neuron (ADALINE) training, which complicates the theoretical analysis of the method. In this paper, we develop a unified approach for mean-square convergence analysis for ADALINE training under MEE criterion. The weight update equation is formulated in the form of block-data. Based on a block version of energy conservation relation, and under several assumptions, we carry out the mean-square convergence analysis of this class of adaptation algorithm, including mean-square stability, mean-square evolution (transient behavior) and the mean-square steady-state performance. Simulation experimental results agree with the theoretical predictions very well.
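A minimal block-data ADALINE update under the MEE criterion can be sketched as gradient ascent on the information potential, the sample mean of a Gaussian kernel over pairwise error differences (whose maximization minimizes Renyi's quadratic error entropy). Kernel width, learning rate and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# ADALINE y = w.x trained under MEE: maximize the information potential
# V(e) = (1/N^2) * sum_ij exp(-(e_i - e_j)^2 / (4*sigma^2)) over a block.
n, sigma, lr = 200, 1.0, 0.5
w_true = np.array([1.5, -0.7])
X = rng.normal(size=(n, 2))
d = X @ w_true + 0.05 * rng.normal(size=n)  # desired signal

w = np.zeros(2)
for _ in range(300):
    e = d - X @ w
    diff = e[:, None] - e[None, :]                 # pairwise error gaps
    g = np.exp(-diff**2 / (4 * sigma**2))          # Gaussian kernel values
    dx = X[:, None, :] - X[None, :, :]             # x_i - x_j
    # dV/dw: since d(e_i - e_j)/dw = -(x_i - x_j), the kernel derivative
    # gives grad = (1/(2*sigma^2*n^2)) * sum_ij g * diff * (x_i - x_j).
    grad = (g[:, :, None] * diff[:, :, None] * dx).sum(axis=(0, 1))
    grad /= 2 * sigma**2 * n**2
    w += lr * grad                                 # gradient ascent on V

print(w)  # approaches w_true; MEE shapes the error, it does not fix its mean
```

Because MEE is insensitive to the error mean, a bias term would need separate handling; the convergence behavior of exactly this block-data update is what the paper analyzes.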
On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator
Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B.; van Dieën, Jaap H.
2016-01-01
Gait analysis can provide valuable information on a person’s condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars, related to the user condition, and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.
West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael
2017-01-01
Preanalytical errors have previously been shown to contribute a significant proportion of errors in laboratory processes and to contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, the Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group reviews the various data collection methods available. Our recommendation is to use the laboratory information management system as the recording mechanism for preanalytical errors, as this provides the easiest and most standardized mechanism of data capture.
A Meta-Analysis for Association of Maternal Smoking with Childhood Refractive Error and Amblyopia.
Li, Li; Qi, Ya; Shi, Wei; Wang, Yuan; Liu, Wen; Hu, Man
2016-01-01
Background. We aimed to evaluate the association between maternal smoking and the occurrence of childhood refractive error and amblyopia. Methods. Relevant articles were identified from PubMed and EMBASE up to May 2015. Combined odds ratio (OR) corresponding with its 95% confidence interval (CI) was calculated to evaluate the influence of maternal smoking on childhood refractive error and amblyopia. The heterogeneity was evaluated with the Chi-square-based Q statistic and the I² test. Potential publication bias was finally examined by Egger's test. Results. A total of 9 articles were included in this meta-analysis. The pooled OR showed that there was no significant association between maternal smoking and childhood refractive error. However, children whose mother smoked during pregnancy were 1.47 (95% CI: 1.12-1.93) times and 1.43 (95% CI: 1.23-1.66) times more likely to suffer from amblyopia and hyperopia, respectively, compared with children whose mother did not smoke, and the difference was significant. Significant heterogeneity was only found among studies involving the influence of maternal smoking on children's refractive error (P < 0.05; I² = 69.9%). No potential publication bias was detected by Egger's test. Conclusion. The meta-analysis suggests that maternal smoking is a risk factor for childhood hyperopia and amblyopia.
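The pooled OR and I² machinery used above can be sketched in a few lines with the standard inverse-variance fixed-effect estimator. The per-study odds ratios and confidence intervals below are hypothetical placeholders, not the studies from this meta-analysis.

```python
import math

# Hypothetical per-study odds ratios with 95% CIs: (OR, lower, upper).
studies = [(1.6, 1.1, 2.3), (1.3, 0.9, 1.9), (1.5, 1.2, 1.9)]

logs, weights = [], []
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of log-OR from the CI width
    logs.append(math.log(or_))
    weights.append(1.0 / se**2)                        # inverse-variance weight

# Fixed-effect pooled estimate on the log scale
pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
se_pooled = 1.0 / math.sqrt(sum(weights))
ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))

# Cochran's Q and the I^2 heterogeneity statistic
Q = sum(w * (l - pooled)**2 for w, l in zip(weights, logs))
I2 = max(0.0, (Q - (len(studies) - 1)) / Q) * 100 if Q > 0 else 0.0

print(f"pooled OR = {math.exp(pooled):.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {I2:.1f}%")
```

I² near 70%, as reported for the refractive error studies, would indicate substantial heterogeneity and usually motivates a random-effects model instead.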
Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response
Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios
2016-01-01
In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal-to-noise ratio (SNR), suboptimal operational amplifier parameters, and analog-to-digital converter (ADC) quantization SNR (SNRQ). Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562
Numerical analysis of the V-Y shaped advancement flap.
Remache, D; Chambert, J; Pauchot, J; Jacquet, E
2015-10-01
The V-Y advancement flap is a usual technique for the closure of skin defects. A triangular flap is incised adjacent to a skin defect of rectangular shape. As the flap is advanced to close the initial defect, two smaller defects in the shape of a parallelogram are formed with respect to a reflection symmetry. The height of the defects depends on the apex angle of the flap and the closure efforts are related to the defects height. Andrades et al. 2005 have performed a geometrical analysis of the V-Y flap technique in order to reach a compromise between the flap size and the defects width. However, the geometrical approach does not consider the mechanical properties of the skin. The present analysis based on the finite element method is proposed as a complement to the geometrical one. This analysis aims to highlight the major role of the skin elasticity for a full analysis of the V-Y advancement flap. Furthermore, the study of this technique shows that closing at the flap apex seems mechanically the most interesting step. Thus different strategies of defect closure at the flap apex stemming from surgeon's know-how have been tested by numerical simulations.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
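The Sobol' analysis described above can be illustrated with a minimal numpy sketch of the Saltelli pick-freeze estimator for first-order indices. The toy linear surrogate, its weights, and the input distributions are assumptions for illustration only, not the Utah Energy Balance model or its forcing-error scenarios.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 100_000, 3

def model(X):
    # Toy surrogate for a snow model's response to forcing errors:
    # precipitation bias dominates, temperature bias matters less,
    # random radiation error least (assumed weights, for illustration).
    return 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2]

A = rng.normal(size=(N, k))         # two independent sample matrices
B = rng.normal(size=(N, k))
fA, fB = model(A), model(B)
varY = np.var(np.concatenate([fA, fB]))

S1 = []
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # "freeze" all inputs except the i-th
    S1.append(np.mean(fB * (model(ABi) - fA)) / varY)   # Saltelli estimator

print(np.round(S1, 3))  # analytic values: 9/10.25, 1/10.25, 0.25/10.25
```

For independent inputs and a linear model the first-order indices are just each term's variance share, which makes this a convenient self-check before applying the estimator to an expensive physical model.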
Zhu, Jin; Wang, Dayan; Xie, Wanqing
2015-02-20
Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems, which will decrease system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, the diversified wavefront deformation can be described by wavelet parameters: coefficient, dilation, and shift factors, where the coefficient factor represents the depth, dilation factor represents the area, and shift factor is for location. Based on this, the relationship between PER and wavelet parameters is analyzed from a theoretical viewpoint. Numerical results illustrate the validity of theoretical analysis: PER increases with the depth and area and decreases if location gets farther from the center of the optical antenna. In addition to describing diversified deformation, the advantage of the wavelet method over Zernike polynomials in computational complexity is shown via numerical example. This work provides a feasible method for the description along with influence analysis of diversified wavefront deformation from a practical viewpoint and will be helpful for designing optical systems.
1983-03-01
An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity
Szymczak, W. G.; Babuška, I.
Numerical Analysis of Film Cooling at High Blowing Ratio
NASA Technical Reports Server (NTRS)
El-Gabry, Lamyaa; Heidmann, James; Ameri, Ali
2009-01-01
Computational Fluid Dynamics is used in the analysis of a film cooling jet in crossflow. Predictions of film effectiveness are compared with experimental results for a circular jet at blowing ratios ranging from 0.5 to 2.0. Film effectiveness is a surface quantity which alone is insufficient in understanding the source and finding a remedy for shortcomings of the numerical model. Therefore, in addition, comparisons are made to flow field measurements of temperature along the jet centerline. These comparisons show that the CFD model is accurately predicting the extent and trajectory of the film cooling jet; however, there is a lack of agreement in the near-wall region downstream of the film hole. The effects of main stream turbulence conditions, boundary layer thickness, turbulence modeling, and numerical artificial dissipation are evaluated and found to have an insufficient impact in the wake region of separated films (i.e. cannot account for the discrepancy between measured and predicted centerline fluid temperatures). Analyses of low and moderate blowing ratio cases are carried out and results are in good agreement with data.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and the results of the parametric studies performed are presented.
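As a rough illustration of the kind of trade such a model explores, a single-node radiative sizing estimate fits in a few lines. Every number below (emissivity, temperatures, cycle efficiency) is an assumed placeholder for illustration, not a Space Station Freedom design value.

```python
# Single-node radiator sizing sketch: panel area needed to reject Q watts.
sigma = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
eps = 0.85            # assumed panel emissivity
T_panel = 350.0       # K, assumed mean radiating temperature
T_sink = 250.0        # K, assumed effective environmental sink temperature
Q = 25e3 * 0.7        # W: waste heat for a 25 kW module at an assumed ~30% efficiency

A = Q / (eps * sigma * (T_panel**4 - T_sink**4))
print(round(A, 1), "m^2 (one-sided; radiating from both faces would halve this)")
```

A full model like the one described adds flow-tube pressure drop, fluid temperature variation along the panels, and micrometeoroid survivability on top of this radiative balance.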
NASA Astrophysics Data System (ADS)
You, Jiong; Pei, Zhiyuan
2015-01-01
With the development of remote sensing technology, its applications in agricultural monitoring systems, crop mapping accuracy, and spatial distribution analysis are increasingly being explored by administrators and users. Uncertainty in crop mapping is profoundly affected by the spatial pattern of spectral reflectance values obtained from the applied remote sensing data. Errors in remotely sensed crop cover information and their propagation into derivative products need to be quantified and handled correctly. Therefore, this study discusses methods of error modeling for uncertainty characterization in crop mapping using GF-1 multispectral imagery. An error modeling framework based on geostatistics is proposed, which introduces the sequential Gaussian simulation algorithm to explore the relationship between classification errors and the spectral signature of the remote sensing data source. On this basis, a misclassification probability model is developed to produce a spatially explicit classification error probability surface for the map of a crop, which realizes the uncertainty characterization for crop mapping. In this process, trend surface analysis was carried out to generate a spatially varying mean response and the corresponding residual response with spatial variation for the spectral bands of the GF-1 multispectral imagery. Variogram models were employed to measure the spatial dependence in the spectral bands and the derived misclassification probability surfaces. Simulated spectral data and classification results were quantitatively analyzed. Through experiments using data sets from a region in the low rolling country located in the Yangtze River valley, it was found that GF-1 multispectral imagery can be used for crop mapping with good overall performance, the proposed error modeling framework can be used to quantify the uncertainty in crop mapping, and the misclassification probability model can summarize the spatial variation in map accuracy and is helpful for
Orbit Determination Error Analysis Results for the Triana Sun-Earth L2 Libration Point Mission
NASA Technical Reports Server (NTRS)
Marr, G.
2003-01-01
Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination error analysis results are presented for all phases of the Triana Sun-Earth L1 libration point mission and for the science data collection phase of a future Sun-Earth L2 libration point mission. The Triana spacecraft was nominally to be released by the Space Shuttle in a low Earth orbit, and this analysis focuses on that scenario. From the release orbit, a transfer trajectory insertion (TTI) maneuver performed using a solid stage would increase the velocity by approximately 3.1 km/sec, sending Triana on a direct trajectory to its mission orbit. The Triana mission orbit is a Sun-Earth L1 Lissajous orbit with a Sun-Earth-vehicle (SEV) angle between 4.0 and 15.0 degrees, which would be achieved after a Lissajous orbit insertion (LOI) maneuver at approximately launch plus 6 months. Because Triana was to be launched by the Space Shuttle, TTI could potentially occur over a 16-orbit range from low Earth orbit. This analysis was performed assuming TTI was performed from a low Earth orbit with an inclination of 28.5 degrees and assuming support from a combination of three Deep Space Network (DSN) stations, Goldstone, Canberra, and Madrid, and four commercial Universal Space Network (USN) stations, Alaska, Hawaii, Perth, and Santiago. These ground stations would provide coherent two-way range and range rate tracking data usable for orbit determination. Larger range and range rate errors were assumed for the USN stations. Nominally, DSN support would end at TTI+144 hours assuming there were no USN problems. Post-TTI coverage for a range of TTI longitudes for a given nominal trajectory case was analyzed. The orbit determination error analysis after the first correction maneuver would be generally applicable to any libration point mission utilizing a direct trajectory.
Asymptotic analysis of numerical wave propagation in finite difference equations
NASA Technical Reports Server (NTRS)
Giles, M.; Thompkins, W. T., Jr.
1983-01-01
An asymptotic technique is developed for analyzing the propagation and dissipation of wave-like solutions to finite difference equations. It is shown that for each fixed complex frequency there are usually several wave solutions with different wavenumbers and the slowly varying amplitude of each satisfies an asymptotic amplitude equation which includes the effects of smoothly varying coefficients in the finite difference equations. The local group velocity appears in this equation as the velocity of convection of the amplitude. Asymptotic boundary conditions coupling the amplitudes of the different wave solutions are also derived. A wavepacket theory is developed which predicts the motion, and interaction at boundaries, of wavepackets, wave-like disturbances of finite length. Comparison with numerical experiments demonstrates the success and limitations of the theory. Finally an asymptotic global stability analysis is developed.
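The role of the local group velocity can be seen in a small numpy sketch of the dispersion relation for a semi-discrete second-order central difference of the advection equation u_t + c u_x = 0. This textbook case is an illustration of the kind of wave analysis described, not one of the specific schemes treated in the paper.

```python
import numpy as np

# Central differencing of u_t + c u_x = 0 admits wave solutions
# exp(i(k x - omega t)) with the numerical dispersion relation
# omega(k) = c * sin(k h) / h.
c, h = 1.0, 0.1
k = np.linspace(0.01, np.pi / h, 200)      # resolvable wavenumbers

omega = c * np.sin(k * h) / h
phase_speed = omega / k                    # numerical phase velocity
group_speed = c * np.cos(k * h)            # d(omega)/dk: wavepacket velocity

# Well-resolved waves (kh << 1) propagate at ~c, but the shortest
# resolvable wave (kh = pi) has group velocity -c: it travels backwards.
print(phase_speed[0], group_speed[0], group_speed[-1])
```

The backward-propagating grid-scale packets are exactly the kind of spurious numerical waves whose motion and boundary interaction a wavepacket theory must track.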
A Cartesian parametrization for the numerical analysis of material instability
Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; Ostien, Jakob T.; Lai, Zhengshou
2016-02-25
We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie in a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
Preliminary Numerical and Experimental Analysis of the Spallation Phenomenon
NASA Technical Reports Server (NTRS)
Martin, Alexandre; Bailey, Sean C. C.; Panerai, Francesco; Davuluri, Raghava S. C.; Vazsonyi, Alexander R.; Zhang, Huaibao; Lippay, Zachary S.; Mansour, Nagi N.; Inman, Jennifer A.; Bathel, Brett F.; Splinter, Scott C.; Danehy, Paul M.
2015-01-01
The spallation phenomenon was studied through numerical analysis using a coupled Lagrangian particle tracking code and a hypersonic aerothermodynamics computational fluid dynamics solver. The results show that carbon emission from spalled particles results in a significant modification of the gas composition of the post-shock layer. Preliminary results from a test campaign at the NASA Langley HYMETS facility are presented. Using automated image processing of high-speed images, two-dimensional velocity vectors of the spalled particles were calculated. In a 30-second test at 100 W/cm2 of cold-wall heat flux, more than 1300 particles were detected, with an average velocity of 102 m/s and a most frequently observed velocity of 60 m/s.
Measurement and numerical analysis of flammability limits of halogenated hydrocarbons.
Kondo, Shigeo; Takizawa, Kenji; Takahashi, Akifumi; Tokuhashi, Kazuaki
2004-06-18
Flammability limits were measured for a number of halogenated compounds by the ASHRAE method. Most of the compounds measured are ones for which discrepancies were noted between literature values and predicted values of the flammability limits. As a result, it has been found that most of the newly obtained flammability limit values are not in accordance with the literature values. Numerical analysis was carried out for a set of flammability limits data, including the newly obtained values, using a modified analytical method based on the F-number scheme. In this method, the fitting procedure was applied directly to the flammability limits themselves instead of to the F-number. After the fitting process, the average relative deviation between the observed and calculated values is 9.3% for the lower limits and 14.6% for the upper limits.
Numerical analysis of the dynamics of distributed vortex configurations
NASA Astrophysics Data System (ADS)
Govorukhin, V. N.
2016-08-01
A numerical algorithm is proposed for analyzing the dynamics of distributed plane vortex configurations in an inviscid incompressible fluid. At every time step, the algorithm involves the computation of unsteady vortex flows, an analysis of the configuration structure with the help of heuristic criteria, the visualization of the distribution of marked particles and vorticity, the construction of streamlines of fluid particles, and the computation of the field of local Lyapunov exponents. The inviscid incompressible fluid dynamic equations are solved by applying a meshless vortex method. The algorithm is used to investigate the interaction of two and three identical distributed vortices with various initial positions in the flow region with and without the Coriolis force.
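As a toy analogue of a meshless vortex computation, the sketch below integrates two identical point vortices with RK4; two equal vortices a distance d apart co-rotate with period 2π²d²/Γ, which gives a convenient correctness check. This is a minimal illustration, not the distributed-vortex algorithm of the paper.

```python
import numpy as np

gamma = np.array([1.0, 1.0])               # circulations
z = np.array([[-0.5, 0.0], [0.5, 0.0]])    # initial vortex positions (d = 1)

def velocity(z):
    """Velocity induced on each point vortex by all the others (Biot-Savart)."""
    v = np.zeros_like(z)
    for i in range(len(z)):
        for j in range(len(z)):
            if i == j:
                continue
            dx, dy = z[i] - z[j]
            r2 = dx * dx + dy * dy
            v[i] += gamma[j] / (2 * np.pi * r2) * np.array([-dy, dx])
    return v

def rk4_step(z, dt):
    k1 = velocity(z)
    k2 = velocity(z + 0.5 * dt * k1)
    k3 = velocity(z + 0.5 * dt * k2)
    k4 = velocity(z + dt * k3)
    return z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

T = 2 * np.pi**2                           # one co-rotation period for Gamma = 1, d = 1
dt = T / 2000
for _ in range(2000):
    z = rk4_step(z, dt)

print(np.round(z, 4))                      # back near the starting positions
```

A distributed-vortex code replaces each point vortex with a cloud of marked particles, but the induced-velocity kernel and time stepping are of this same form.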
Experimental and Numerical Analysis of Notched Composites Under Tension Loading
NASA Astrophysics Data System (ADS)
Aidi, Bilel; Case, Scott W.
2015-12-01
Experimental quasi-static tests were performed on center-notched carbon fiber reinforced polymer (CFRP) composites having different stacking sequences made of G40-600/5245C prepreg. The three-dimensional Digital Image Correlation (DIC) technique was used during quasi-static tests conducted on quasi-isotropic notched samples to obtain the distribution of strains as a function of applied stress. A finite element model was built within Abaqus to predict the notched strength and the strain profiles for comparison with measured results. A user-material subroutine using the multi-continuum theory (MCT) as a failure initiation criterion and an energy-based damage evolution law as implemented by Autodesk Simulation Composite Analysis (ASCA) was used to conduct a quantitative comparison of strain components predicted by the analysis and obtained in the experiments. Good agreement between experimental data and numerical analysis results is observed. Modal analysis was carried out to investigate the effect of static damage on the dominant frequencies of the notched structure using the resulting degraded material elements. The first in-plane mode was found to be a good candidate for tracking the level of damage.
Comparison of analytical and numerical analysis of the reference region model for DCE-MRI.
Lee, Joonsang; Cárdenas-Rodríguez, Julio; Pagel, Mark D; Platt, Simon; Kent, Marc; Zhao, Qun
2014-09-01
This study compared three methods for analyzing DCE-MRI data with a reference region (RR) model: a linear least-square fitting with numerical analysis (LLSQ-N), a nonlinear least-square fitting with numerical analysis (NLSQ-N), and an analytical analysis (NLSQ-A). The accuracy and precision of estimating the pharmacokinetic parameter ratios KR and VR, where KR is defined as the ratio between the two volume transfer constants, K(trans,TOI) and K(trans,RR), and VR is the ratio between the two extracellular extravascular volumes, ve,TOI and ve,RR, were assessed using simulations under various signal-to-noise ratios (SNRs) and temporal resolutions (4, 6, 30, and 60 s). When no noise was added, the simulations showed that the mean percent error (MPE) for the estimated KR and VR using the LLSQ-N and NLSQ-N methods ranged from 1.2% to 31.6% with various temporal resolutions, while the NLSQ-A method maintained very high accuracy (<1.0×10⁻⁴%) regardless of the temporal resolution. The simulation also indicated that the LLSQ-N and NLSQ-N methods appear to underestimate the parameter ratios more than the NLSQ-A method. In addition, seven in vivo DCE-MRI datasets from spontaneously occurring canine brain tumors were analyzed with each method. Results for the in vivo study showed that KR (ranging from 0.63 to 3.11) and VR (ranging from 2.82 to 19.16) for the NLSQ-A method were both higher than results for the other two methods (KR ranging from 0.01 to 1.29 and VR ranging from 1.48 to 19.59). A temporal downsampling experiment showed that the averaged percent error for the NLSQ-A method (8.45%) was lower than that of the other two methods (22.97% for LLSQ-N and 65.02% for NLSQ-N) for KR, and the averaged percent error for the NLSQ-A method (6.33%) was lower than that of the other two methods (6.57% for LLSQ-N and 13.66% for NLSQ-N) for VR. Using simulations, we showed that the NLSQ-A method can estimate the ratios of pharmacokinetic parameters more accurately and precisely than the NLSQ-N and LLSQ-N methods.
Mars gravity field error analysis from simulated radio tracking of Mars Observer
Smith, D.E.; Lerch, F.J. ); Chan, J.C.; Chinn, D.S.; Iz, H.B.; Mallama, A.; Patel, G.B. )
1990-08-30
The Mars Observer (MO) Mission, in a near-polar orbit at 360-410 km altitude for nearly a 2-year observing period, will greatly improve our understanding of the geophysics of Mars, including its gravity field. To assess the expected improvement of the gravity field, the authors have conducted an error analysis based upon the mission plan for the Mars Observer radio tracking data from the Deep Space Network. Their results indicate that it should be possible to obtain a high-resolution model (spherical harmonics complete to degree and order 50, corresponding to a 200-km horizontal resolution) for the gravitational field of the planet. This model, in combination with topography from MO altimetry, should provide for an improved determination of the broad scale density structure and stress state of the Martian crust and upper mantle. The mathematical model for the error analysis is based on the representation of doppler tracking data as a function of the Martian gravity field in spherical harmonics, solar radiation pressure, atmospheric drag, angular momentum desaturation residual acceleration (AMDRA) effects, tracking station biases, and the MO orbit parameters. Two approaches are employed. In the first case, the error covariance matrix of the gravity model is estimated including the effects from all the nongravitational parameters (noise-only case). In the second case, the gravity recovery error is computed as above but includes unmodelled systematic effects from atmospheric drag, AMDRA, and solar radiation pressure (biased case). The error spectrum of gravity shows an order of magnitude of improvement over current knowledge based on doppler data precision from a single station of 0.3 mm/s noise for 1-min integration intervals during three 60-day periods.
NASA Astrophysics Data System (ADS)
Elmaroud, Brahim; Faqihi, Ahmed; Aboutajdine, Driss
2017-01-01
In this paper, we study the performance of asynchronous and nonlinear FBMC-based multi-cellular networks. The considered system includes a reference mobile perfectly synchronized with its reference base station (BS) and K interfering BSs. Both synchronization errors and high-power amplifier (HPA) distortions will be considered and a theoretical analysis of the interference signal will be conducted. On the basis of this analysis, we will derive an accurate expression of signal-to-noise-plus-interference ratio (SINR) and bit error rate (BER) in the presence of a frequency-selective channel. In order to reduce the computational complexity of the BER expression, we applied an interesting lemma based on the moment generating function of the interference power. Finally, the proposed model is evaluated through computer simulations which show a high sensitivity of the asynchronous FBMC-based multi-cellular network to HPA nonlinear distortions.
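The SINR-to-BER step in such analyses is often made concrete by treating the residual interference as Gaussian. The sketch below uses that common simplification with the BPSK relation BER = Q(sqrt(2·SINR)); it is a generic illustration, not the paper's FBMC interference model or its moment-generating-function lemma.

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Gaussian-interference approximation: BPSK bit error rate vs. SINR.
for sinr_db in (0, 5, 10):
    sinr = 10.0 ** (sinr_db / 10.0)
    print(f"SINR = {sinr_db:2d} dB -> BER = {qfunc(math.sqrt(2.0 * sinr)):.2e}")
```

In an actual multi-cellular FBMC analysis, the interference power is itself a random variable, which is why the paper averages the conditional BER via the moment generating function rather than plugging in a single SINR.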
An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR
NASA Astrophysics Data System (ADS)
Yu, Guanying; Liu, Xufeng; Liu, Songlin
2016-10-01
The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. An analysis of the ripple and error fields induced by the RAFM steel in the WCCB is performed using static magnetic analysis in the ANSYS code. A significant additional magnetic field is produced by the blanket, leading to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which exceeds the acceptable design value of 0.5%. Furthermore, when one blanket module is taken out for heating purposes, the resulting error field is calculated to violate the requirement severely. Supported by the National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004).
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Technical Reports Server (NTRS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-01-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE 2. The STOIC results and comparisons are broadly consistent with the formal analysis.
Development of an improved HRA method: A technique for human error analysis (ATHEANA)
Taylor, J.H.; Luckas, W.J.; Wreathall, J.
1996-03-01
Probabilistic risk assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA), while a critical element of PRA, has limitations in the analysis of human actions that have long been recognized as a constraint when using PRA. In fact, better integration of HRA into the PRA process has long been an NRC issue. Of particular concern has been the omission of errors of commission - those errors associated with inappropriate operator interventions in operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification.
NASA Technical Reports Server (NTRS)
Byrnes, D. V.; Carney, P. C.; Underwood, J. W.; Vogt, E. D.
1974-01-01
The six-month effort was responsible for the development, testing, conversion, and documentation of computer software for the mission analysis of missions to halo orbits about libration points in the earth-sun system. The software, consisting of two programs called NOMNAL and ERRAN, is part of the Space Trajectories Error Analysis Programs. The program NOMNAL targets a transfer trajectory from earth on a given launch date to a specified halo orbit on a required arrival date. Either impulsive or finite-thrust insertion maneuvers into the halo orbit are permitted by the program. The transfer trajectory is consistent with a realistic launch profile input by the user. The second program, ERRAN, conducts error analyses of the targeted transfer trajectory. Measurements including range, Doppler, star-planet angles, and apparent planet diameter are processed in a Kalman-Schmidt filter to determine the trajectory knowledge uncertainty.
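The measurement-processing step can be illustrated with a minimal Kalman measurement update, assuming a toy two-state (position, velocity) model and a single range-like observation; this is a generic sketch, not the actual ERRAN/Kalman-Schmidt implementation.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update: state x, covariance P, measurement z."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P  # reduced knowledge uncertainty
    return x, P

x = np.array([0.0, 1.0])              # prior state: position, velocity (toy values)
P = np.diag([100.0, 10.0])            # large prior uncertainty
H = np.array([[1.0, 0.0]])            # we observe position (range) only
R = np.array([[0.5]])                 # measurement noise variance

x, P = kalman_update(x, P, np.array([3.0]), H, R)
print(x, np.diag(P))                  # position pulled toward 3, its variance shrinks
```

Each processed measurement pulls the state toward the observation and shrinks the covariance, which is exactly the "trajectory knowledge uncertainty" the filter tracks.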
The use of failure mode effect and criticality analysis in a medication error subcommittee.
Williams, E; Talley, R
1994-04-01
Failure Mode Effect and Criticality Analysis (FMECA) is the systematic assessment of a process or product that enables one to determine the location and mechanism of potential failures. It has been used by engineers, particularly in the aerospace industry, to identify and prioritize potential failures during product development when there is a lack of data but an abundance of expertise. The Institute for Safe Medication Practices has recommended its use in analyzing the medication administration process in hospitals and in drug product development in the pharmaceutical industry. A medication error subcommittee adopted and modified FMECA to identify and prioritize significant failure modes in its specific medication administration process. Based on this analysis, the subcommittee implemented solutions to four of the five highest ranked failure modes. FMECA provided a method for a multidisciplinary group to address the most important medication error concerns based upon the expertise of the group members. It also facilitated consensus building in a group with varied perceptions.
Loran digital phase-locked loop and RF front-end system error analysis
NASA Technical Reports Server (NTRS)
Mccall, D. L.
1979-01-01
An analysis of the system performance of the digital phase-locked loops (DPLLs) and the RF front end implemented in the MINI-L4 Loran receiver is presented. Three of the four experiments deal with the performance of the digital phase-locked loops. The remaining experiment deals with the RF front end and the DPLL system errors that arise in the front end due to poor signal-to-noise ratios. The ability of the DPLLs to track the offsets is studied.
Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative
2005-05-01
…mixed methods approach to analyzing narrative error event reports. Mixed methods studies integrate one or more qualitative and quantitative techniques for… authors present a protocol for applying a mixed methods approach to the study of patient safety reporting data to inform the development of interventions… Using mixed methods to study patient safety is an effective and efficient approach to data analysis that provides both information and motivation for developing and implementing patient safety…
Principal components analysis of reward prediction errors in a reinforcement learning task.
Sambrook, Thomas D; Goslin, Jeremy
2016-01-01
Models of reinforcement learning represent reward and punishment in terms of reward prediction errors (RPEs), quantitative signed terms describing the degree to which outcomes are better than expected (positive RPEs) or worse (negative RPEs). An electrophysiological component known as the feedback-related negativity (FRN) occurs at frontocentral sites 240-340 ms after feedback on whether a reward or punishment is obtained, and has been claimed to neurally encode an RPE. An outstanding question, however, is whether the FRN is sensitive to the size of both positive RPEs and negative RPEs. Previous attempts to answer this question have examined the simple effects of RPE size for positive RPEs and negative RPEs separately. However, this methodology can be compromised by overlap from components coding for unsigned prediction error size, or "salience", which are sensitive to the absolute size of a prediction error but not its valence. In our study, positive and negative RPEs were parametrically modulated using both reward likelihood and magnitude, with principal components analysis used to separate out overlapping components. This revealed a single RPE-encoding component responsive to the size of positive RPEs, peaking at ~330 ms and occupying the delta frequency band. Other components responsive to unsigned prediction error size were shown, but no component sensitive to negative RPE size was found.
An analysis of the effects of initial velocity errors on geometric pairing
NASA Astrophysics Data System (ADS)
Schricker, Bradley C.; Ford, Louis
2007-04-01
For a number of decades, among the most prevalent training media in the military has been Tactical Engagement Simulation (TES) training. TES has allowed troops to train for practical missions in highly realistic combat environments without the risks associated with live weaponry and munitions. This has been possible because current TES has relied largely upon the Multiple Integrated Laser Engagement System (MILES) and similar systems for direct-fire weapons, using a laser to pair the shooter to the potential target(s). Emerging systems, on the other hand, will use a pairing method called geometric pairing (geo-pairing), which uses a set of data about both the shooter and target, such as locations, weapon orientations, velocities, weapon projectile velocities, and nearby terrain, to resolve an engagement. A previous paper [1] introduces various potential sources of error for a geo-pairing solution. This paper goes into greater depth regarding the impact of errors that originate in the initial velocity, beginning with a short introduction to the TES system (TESS). The next section explains the modeling characteristics of the projectile motion, followed by a mathematical analysis illustrating the impacts of errors related to those characteristics. A summary and conclusion containing recommendations close the paper.
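As a rough illustration of how an initial velocity error perturbs a geo-pairing solution, even a drag-free flat-fire model shows a centimeter-scale vertical miss at range; the muzzle velocity, range, and error magnitudes below are assumed for illustration and are not taken from the paper.

```python
import math

G = 9.81  # m/s^2

def drop_at_range(v0, rng_m):
    """Vertical drop of a flat-fired, drag-free projectile at the given range."""
    t = rng_m / v0           # time of flight (no drag assumed)
    return 0.5 * G * t * t   # gravity drop accumulated over that time

v0, rng_m = 900.0, 1000.0    # assumed muzzle velocity (m/s) and range (m)
nominal = drop_at_range(v0, rng_m)
for dv in (-10.0, +10.0):    # assumed +/- 10 m/s initial-velocity error
    miss = drop_at_range(v0 + dv, rng_m) - nominal
    print(f"dv = {dv:+5.1f} m/s  ->  vertical miss = {miss * 100:+.1f} cm")
```

A slower round spends longer in flight and drops further, so an uncorrected velocity error maps directly into a vertical pairing error at the target.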
NASA Astrophysics Data System (ADS)
Ballman, Katherine; Lee, Christopher; Dunn, Thomas; Bean, Alexander
2016-05-01
Due to the impact on image placement and overlay errors inherent in all reflective lithography systems, EUV reticles will need to adhere to flatness specifications below 10 nm for 2018 production. These single-value metrics are nearly impossible to meet using the current tooling infrastructure (current state-of-the-art reticles report P-V flatness of ~60 nm). In order to focus innovation on areas which lack capability for flatness compensation or correction, this paper redefines flatness metrics as "correctable" vs. "non-correctable" based on the surface topography's contribution to the final IP budget at wafer, as well as whether data-driven corrections (write compensation or at scanner) are available for the reticle's specific shape. To better understand and define the limitations of write compensation and scanner corrections, an error budget for the processes contributing to these two methods is presented. Photomask flatness measurement tools are now targeting 6σ reproducibility <1 nm (previous 3σ reproducibility ~3 nm) in order to drive down error contributions and provide more accurate data for correction techniques. Taking advantage of the high-order measurement capabilities of improved metrology tooling, as well as computational capabilities which enable fast measurements and analysis of sophisticated shapes, we propose a methodology for the industry to create functional tolerances focused on the flatness errors that are not correctable with compensation.
The linear Fresnel lens - Solar optical analysis of tracking error effects
NASA Technical Reports Server (NTRS)
Cosby, R. M.
1977-01-01
Real sun-tracking solar concentrators imperfectly follow the solar disk, operationally sustaining both transverse and axial misalignments. This paper describes an analysis of the solar concentration performance of a line-focusing flat-base Fresnel lens in the presence of small transverse tracking errors. Simple optics and ray-tracing techniques are used to evaluate the lens solar transmittance and focal-plane imaging characteristics. Computer-generated example data for an f/1.0 lens indicate that less than a 1% transmittance degradation occurs for transverse errors up to 2.5 deg. In this range, solar-image profiles shift laterally in the focal plane, the peak concentration ratio drops, and profile asymmetry increases with tracking error. With profile shift as the primary factor, the ninety-percent target-intercept width increases rapidly for small misalignments, e.g., almost threefold for a 1-deg error. The analytical model and computational results provide a design base for tracking and absorber systems for the linear-Fresnel-lens solar concentrator.
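The dominant role of the lateral profile shift can be checked with a back-of-envelope computation: for a small transverse tracking error theta, the focal-plane image shifts by roughly f*tan(theta), which for a 1-deg error is already about twice the width of the solar image. The 1 m focal length assumed below for an f/1.0 lens of 1 m aperture is an illustrative value, not one from the paper.

```python
import math

f_mm = 1000.0                                # assumed focal length, mm (f/1.0, 1 m aperture)
solar_image_mm = f_mm * math.radians(0.533)  # image width set by the solar disk's ~0.533 deg

for err_deg in (0.5, 1.0, 2.5):
    shift = f_mm * math.tan(math.radians(err_deg))   # lateral focal-plane shift
    print(f"tracking error {err_deg:3.1f} deg -> image shift "
          f"{shift:5.1f} mm ({shift / solar_image_mm:.1f}x solar image width)")
```

Since a 1-deg error displaces the profile by roughly two solar-image widths, a fixed target aperture must widen quickly to keep intercepting 90% of the flux, consistent with the near-threefold growth quoted in the abstract.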
Multitemporal Error Analysis of LiDAR Data for Geomorphological Feature Detection
NASA Astrophysics Data System (ADS)
Sailer, R.; Höfle, B.; Bollmann, E.; Vetter, M.; Stötter, J.; Pfeifer, N.; Rutzinger, M.; Geist, T.
2009-04-01
Since 2001, airborne LiDAR measurements have been carried out regularly at the Hintereisferner region (Ötztal, Tyrol, Austria). This has produced a worldwide unique data set, which is primarily used for multitemporal glacial and periglacial analyses. Several methods and tools have been developed to date: i) to delineate the glacier boundary, ii) to derive standard glaciological mass balance parameters (e.g. volume changes), iii) to extract crevasse zones, or iv) to classify glacier surface features (e.g. snow, firn, glacier ice, debris-covered glacier ice). Furthermore, the available multitemporal LiDAR data set offers the opportunity to identify surface changes occurring outside the glacier boundary which have not been recognized until now. The respective areas are characterized by small variations of the surface topography from year to year. These changes of the surface topography are primarily caused by periglacial processes, further initiating secondary gravitational mass movements. The present study aims at quantifying the error range of the LiDAR measurements. The error analysis, which is based on (at least) 66 cross-combinations of the single LiDAR measurement campaigns, excluding areas which are obviously related to glacial surface changes, results in statistically derived error margins. Hence, surface changes which exceed these error margins must be assigned to periglacial or gravitational process activities. The study further aims at identifying areas which are explicitly related to those periglacial and gravitational processes.
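The thresholding logic behind such error margins can be sketched as a DEM-differencing operation: subtract two measurement epochs and flag only cells whose elevation change exceeds the combined error margin, so that everything inside the band is treated as measurement noise. The toy grids, noise level, and the 0.30 m margin below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
dem_epoch1 = rng.normal(2500.0, 1.0, (100, 100))          # toy elevation grid, metres
dem_epoch2 = dem_epoch1 + rng.normal(0.0, 0.05, (100, 100))  # repeat survey + noise
dem_epoch2[40:45, 40:45] -= 1.5                           # a real surface lowering (5x5 cells)

error_margin = 0.30   # assumed statistically derived margin for the campaign pair, m

diff = dem_epoch2 - dem_epoch1
significant = np.abs(diff) > error_margin                 # change outside the error band
print("cells flagged as real surface change:", int(significant.sum()))
```

Only changes exceeding the margin survive the filter, so the flagged cells can be attributed to periglacial or gravitational processes rather than to measurement error.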