Numerical study of error propagation in Monte Carlo depletion simulations
Wyant, T.; Petrovic, B.
2012-07-01
Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. In this case, however, statistical uncertainty is combined with error propagated from previous steps of the calculation. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs were used to investigate the true and apparent variance in k_eff, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters. (authors)
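The replica-run idea above can be sketched in a few lines: rerun the same problem with different seeds and take the spread across replicas as the "true" uncertainty. This is a minimal illustration only; the depletion run here is a hypothetical stand-in (an assumed nominal k_eff with Gaussian noise), not the actual depletion code.

```python
import random
import statistics

def replica_statistics(run_depletion, n_replicas=19):
    """Estimate the 'true' variance of k_eff by rerunning the same
    depletion problem with different initial random number seeds."""
    keffs = [run_depletion(seed) for seed in range(n_replicas)]
    return statistics.mean(keffs), statistics.stdev(keffs)

# Hypothetical stand-in for one Monte Carlo depletion run: k_eff drawn
# around an assumed nominal value with statistical noise (illustration only).
def fake_depletion_run(seed, nominal=1.0025, sigma=0.0004):
    return random.Random(seed).gauss(nominal, sigma)

mean_keff, true_sigma = replica_statistics(fake_depletion_run)
```

The "apparent" variance would be the uncertainty the code itself reports for a single run; comparing it against `true_sigma` is what reveals under- or over-estimation.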
Simulation of radar rainfall errors and their propagation into rainfall-runoff processes
NASA Astrophysics Data System (ADS)
Aghakouchak, A.; Habib, E.
2008-05-01
Radar rainfall data compared with rain gauge measurements provide higher spatial and temporal resolution. However, radar data obtained from reflectivity patterns are subject to various errors such as errors in the Z-R relationship, vertical profile of reflectivity, spatial and temporal sampling, etc. Characterization of such uncertainties in radar data and their effects on hydrologic simulations (e.g., streamflow estimation) is a challenging issue. This study aims to analyze radar rainfall error characteristics empirically to gain information on the properties of random error representativeness and its temporal and spatial dependency. To empirically analyze error characteristics, high-resolution and accurate rain gauge measurements are required. The Goodwin Creek watershed, located in the northern part of Mississippi, is selected for this study due to the availability of a dense rain gauge network. A total of 30 rain gauge measurement stations within the Goodwin Creek watershed and the NWS Level II radar reflectivity data obtained from the WSR-88D Memphis radar station, with a temporal resolution of 5 min and a spatial resolution of 1 km², are used in this study. Comparisons of radar data and rain gauge measurements are used to estimate overall bias, and the statistical characteristics and spatio-temporal dependency of radar rainfall error fields. This information is then used to simulate realizations of radar error patterns with multiple correlated variables using the Monte Carlo method and the Cholesky decomposition. The generated error fields are then imposed on radar rainfall fields to obtain statistical realizations of input rainfall fields. Each simulated realization is then fed as input to a distributed physically based hydrological model, resulting in an ensemble of predicted runoff hydrographs. The study analyzes the propagation of radar errors into the simulation of different rainfall-runoff processes such as streamflow, soil moisture, infiltration, and overland flooding.
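The correlated-error generation step (Cholesky factor of a covariance matrix times standard normal draws) can be sketched as follows. The three-site geometry and the exponential correlation model are assumptions for illustration, not the paper's fitted error model.

```python
import math
import random

def cholesky(A):
    """Lower-triangular Cholesky factor of a symmetric positive-definite matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def correlated_errors(cov, rng):
    """Draw one realization of a correlated error vector: e = L z."""
    L = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in range(len(cov))]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(cov))]

# Assumed exponential spatial correlation between three gauge sites.
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
cov = [[math.exp(-d / 2.0) for d in row] for row in dist]
field = correlated_errors(cov, random.Random(42))
```

Repeating the draw gives an ensemble of error fields to impose on the radar rainfall before each hydrologic model run.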
Error Propagation in a System Model
NASA Technical Reports Server (NTRS)
Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)
2015-01-01
Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
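A toy version of propagating signal value errors through a chain of functional blocks is shown below. The block names, gains, and injected errors are assumptions for illustration; the patented system's actual block semantics are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    gain: float       # local sensitivity |df/dx| of output to input
    own_error: float  # error injected by the block itself

def propagate(blocks, input_error):
    """Worst-case absolute error after a chain of functional blocks."""
    err = input_error
    for b in blocks:
        err = abs(b.gain) * err + b.own_error
    return err

# Hypothetical chain: sensor -> filter -> gain stage.
chain = [Block("sensor", 1.0, 0.01),
         Block("filter", 0.5, 0.0),
         Block("gain", 2.0, 0.005)]
total = propagate(chain, 0.02)
```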
NLO error propagation exercise: statistical results
Pack, D.J.; Downing, D.J.
1985-09-01
Error propagation is the extrapolation and cumulation of uncertainty (variance) over total amounts of special nuclear material, for example, uranium or ²³⁵U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ²³⁵U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ²³⁵U inventory differences. Further, error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
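The Taylor-series (delta-method) variance approximation mentioned above reduces, for a product of independent measured factors, to summing relative variances. The item values and relative standard deviations below are assumptions for illustration, not data from the exercise.

```python
import math

def product_rel_variance(rel_sigmas):
    """First-order Taylor (delta-method) relative variance of a product
    of independent measured factors: (s_y/y)^2 ~= sum (s_xi/xi)^2."""
    return sum(r * r for r in rel_sigmas)

# Hypothetical item: net weight (kg), uranium concentration, 235U enrichment.
W, c, e = 100.0, 0.90, 0.04
rel = [0.001, 0.005, 0.002]   # assumed relative std devs of each measurement
u235 = W * c * e              # 235U mass in the item
sigma = u235 * math.sqrt(product_rel_variance(rel))
```

Summing such item variances over all transactions in a material balance area is what yields the LEID.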
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
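A minimal closed-form example of the built-in error propagation the abstract describes: fit a straight line by least squares and read the parameter standard errors off the diagonal of s²(XᵀX)⁻¹. The data values are invented for illustration.

```python
import math

def linfit_with_se(x, y):
    """Ordinary least-squares line y = a + b*x with standard errors of a
    and b taken from the diagonal of the covariance matrix s^2 (X^T X)^-1."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    d = n * sxx - sx * sx
    b = (n * sxy - sx * sy) / d
    a = (sy - b * sx) / n
    s2 = sum((v - a - b * u) ** 2 for u, v in zip(x, y)) / (n - 2)
    return a, b, math.sqrt(s2 * sxx / d), math.sqrt(n * s2 / d)

# Invented data; defining a derived quantity as a fit parameter makes
# the LS machinery deliver its propagated error automatically.
x = [0.0, 1.0, 2.0, 3.0]
y = [0.1, 1.9, 4.1, 5.9]
a, b, se_a, se_b = linfit_with_se(x, y)
```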
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consist of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in a Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in the Scout Trajectory Error Propagation (STEP) computer program, which is described herein. STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
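Propagating a covariance matrix in time, as STEP does, amounts to P' = F P Fᵀ for a linear state transition F. A two-state (position, velocity) constant-velocity sketch, with an assumed initial covariance:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def propagate_cov(P, dt):
    """Propagate a trajectory-error covariance through a linear state
    transition x' = F x (constant-velocity model for illustration)."""
    F = [[1.0, dt], [0.0, 1.0]]
    return mat_mul(mat_mul(F, P), transpose(F))

# Assumed example: position variance 4 m^2, velocity variance 1 (m/s)^2.
P0 = [[4.0, 0.0], [0.0, 1.0]]
P10 = propagate_cov(P0, 10.0)
```

Note how velocity uncertainty feeds the growing position variance (the 104 m² term after 10 s), the mechanism behind downrange error growth.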
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
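The 16-bit CRC recommended for error detection uses the CCITT polynomial; a bitwise sketch follows, using the commonly tabulated CRC-16/CCITT-FALSE parameters (polynomial 0x1021, initial value 0xFFFF). This illustrates the detection code only, not the full CCSDS frame format.

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1,
    as used for 16-bit frame error detection."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

The receiver recomputes the CRC over the received frame and compares it with the transmitted check bits; any mismatch flags a detected error.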
Observations concerning licensee practices in error propagation
Lumb, R.F.; Messinger, M.; Tingey, F.H.
1983-07-01
This paper describes some of NUSAC's observations concerning licensee error propagation practice. NUSAC's findings are based on the results of work performed for the NRC whereby NUSAC visited seven nuclear fuel fabrication facilities, four processing low enriched uranium (LEU) and three processing high enriched uranium (HEU), in order to develop a detailed evaluation of the processing of material accounting data by those facilities. Discussed is the diversity that was found to exist across the industry in material accounting data accumulation; in error propagation methodology, for both inventory difference (ID) and shipper/receiver difference (SRD); as well as in measurement error modeling and estimation. Problems that have been identified are, in general, common to the industry. The significance of nonmeasurement effects on the variance of ID is discussed. This paper will also outline a four-phase program that can be implemented to improve the existing situation.
NASA Astrophysics Data System (ADS)
Hasegawa, Kei; Geller, Robert J.; Hirabayashi, Nobuyasu
2016-02-01
We present a theoretical analysis of the error of synthetic seismograms computed by higher-order finite element methods (ho-FEMs). We show the existence of a previously unrecognized type of error due to degenerate coupling between waves with the same frequency but different wavenumbers. These results are confirmed by simple numerical experiments using the spectral element method (SEM) as an example of ho-FEMs. Errors of the type found by this study may occur generally in applications of ho-FEMs.
Observation error propagation on video meteor orbit determination
NASA Astrophysics Data System (ADS)
SonotaCo
2016-04-01
A new radiant direction error computation method for SonotaCo Network meteor observation data was tested. It uses the single-station observation error obtained by reference star measurement and trajectory linearity measurement on each video as its source error value, and propagates this to the radiant and orbit parameter errors via Monte Carlo simulation. The resulting error values on a sample data set showed a reasonable error distribution that makes accuracy-based selection feasible. A sample set of orbits selected by this method revealed a sharper concentration of shower meteor radiants than we have ever seen before. The simultaneously observed meteor data sets published by the SonotaCo Network will be revised to include this error value on each record and will be publicly available along with the computation program in the near future.
Error Propagation Analysis for Quantitative Intracellular Metabolomics
Tillack, Jana; Paczia, Nicole; Nöh, Katharina; Wiechert, Wolfgang; Noack, Stephan
2012-01-01
Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of intracellular metabolites, including error propagation during metabolome sample processing. Focusing on one specific protocol, we comprehensively investigate all currently known and accessible factors that ultimately impact the accuracy of intracellular metabolite concentration data. All intermediate steps are modeled, and their uncertainty with respect to the final concentration data is rigorously quantified. Finally, on the basis of a comprehensive metabolome dataset of Corynebacterium glutamicum, an integrated error propagation analysis for all parts of the model is conducted, and the most critical steps for intracellular metabolite quantification are detected. PMID:24957773
NLO error propagation exercise data collection system
Keisch, B.; Bieber, A.M. Jr.
1983-01-01
A combined automated and manual system for data collection is described. The system is suitable for collecting, storing, and retrieving data related to nuclear material control at a bulk processing facility. The system, which was applied to the NLO operated Feed Materials Production Center, was successfully demonstrated for a selected portion of the facility. The instrumentation consisted of off-the-shelf commercial equipment and provided timeliness, convenience, and efficiency in providing information for generating a material balance and performing error propagation on a sound statistical basis.
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
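The patch-level threshold adjustment can be contrasted with a landscape-mean adjustment in a few lines. All densities, areas, and the threshold below are invented for illustration, in the spirit of the paper's hypothetical example.

```python
def available_energy(patches, threshold, kcal_per_kg):
    """Food energy available to foragers with a patch-level foraging
    threshold: food below the threshold density in any patch is
    treated as unavailable."""
    return sum(max(0.0, density - threshold) * area * kcal_per_kg
               for density, area in patches)

# Two hypothetical patches: one rich, one below the giving-up density.
patches = [(120.0, 10.0), (40.0, 5.0)]   # (kg/ha, ha)
energy = available_energy(patches, threshold=50.0, kcal_per_kg=1.0)

# Past practice: subtract the threshold from the landscape-mean density.
total_area = sum(a for _, a in patches)
mean_density = sum(d * a for d, a in patches) / total_area
biased = max(0.0, mean_density - 50.0) * total_area * 1.0
```

The two estimates disagree whenever any patch sits below the threshold, which is the bias the paper describes.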
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
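The Monte Carlo route described above (sample each input, take the spread of the outputs) is easy to sketch without a spreadsheet. The example formula and uncertainties are assumptions for illustration, not the article's worksheet.

```python
import random
import statistics

def mc_propagate(f, means, sigmas, n=20000, seed=1):
    """Monte Carlo error propagation: sample each input from a normal
    distribution and take the std dev of the outputs as the
    propagated error."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        args = [rng.gauss(m, s) for m, s in zip(means, sigmas)]
        out.append(f(*args))
    return statistics.mean(out), statistics.stdev(out)

# Assumed example: density rho = m / V with 1% errors on each input.
mean_rho, sigma_rho = mc_propagate(lambda m, V: m / V, [10.0, 2.0], [0.1, 0.02])
```

For this product/quotient case the analytic answer is the quadrature sum of relative errors (about 1.4% of 5.0, i.e. ~0.07), which the simulation should reproduce.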
Error propagation in a digital avionic mini processor. M.S. Thesis
NASA Technical Reports Server (NTRS)
Lomelino, Dale L.
1987-01-01
A methodology is introduced and demonstrated for the study of error propagation from the gate to the chip level. The importance of understanding error propagation derives from its close tie with system activity. The target system is the BDX-930, a digital avionics multiprocessor. The simulator used was developed at NASA-Langley, and is a gate-level, event-driven, unit-delay, software logic simulator. The approach is highly structured and easily adapted to other systems. The analysis shows the nature and extent of the dependency of error propagation on microinstruction type, assembly-level instruction, and fault-free gate activity.
Techniques for containing error propagation in compression/decompression schemes
NASA Technical Reports Server (NTRS)
Kobler, Ben
1991-01-01
Data compression has the potential for increasing the risk of data loss. It can also cause bit error propagation, resulting in catastrophic failures. There are a number of approaches possible for containing error propagation due to data compression: (1) data retransmission; (2) data interpolation; (3) error containment; and (4) error correction. The most fruitful techniques will be ones where error containment and error correction are integrated with data compression to provide optimal performance for both. The error containment characteristics of existing compression schemes should be analyzed for their behavior under different data and error conditions. The error tolerance requirements of different data sets need to be understood, so guidelines can then be developed for matching error requirements to suitable compression algorithms.
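One concrete containment technique consistent with approach (3) above is to compress independently per chunk, so a bit error corrupts at most one chunk instead of cascading to the end of the stream. A sketch using Python's zlib (chunk size is an assumed tuning parameter):

```python
import zlib

def compress_chunked(data: bytes, chunk_size=1024):
    """Compress each chunk independently so an error is contained
    within a single chunk."""
    return [zlib.compress(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

def decompress_chunked(chunks):
    """Recover everything recoverable; a damaged chunk is dropped
    rather than corrupting its neighbors."""
    out = bytearray()
    for c in chunks:
        try:
            out += zlib.decompress(c)
        except zlib.error:
            pass  # damaged chunk: lost, but the rest of the stream survives
    return bytes(out)
```

The trade-off is a lower compression ratio, since the dictionary resets at each chunk boundary; pairing each chunk with a checksum or error-correcting code is the integrated approach the abstract favors.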
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each
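The idea behind Discrete Stochastic Arithmetic can be imitated crudely in pure Python: run the computation several times with a random perturbation at each "rounding", and estimate the number of decimal digits the runs share. This is an illustrative toy, not the CADNA implementation (CADNA perturbs the actual rounding mode and typically uses three runs); the perturbation size and example are assumptions.

```python
import math
import random
import statistics

def stochastic_runs(computation, n=5, seed=0):
    """Run a computation several times, each time perturbing every
    'rounded' result randomly (a crude imitation of Discrete
    Stochastic Arithmetic)."""
    rng = random.Random(seed)
    def make_round():
        # simulated rounding unit (assumed, coarser than double ulp)
        return lambda x: x * (1.0 + rng.uniform(-1.0, 1.0) * 2.0**-48)
    return [computation(make_round()) for _ in range(n)]

def common_digits(samples):
    """Estimated number of significant decimal digits shared by the runs."""
    mean = statistics.mean(samples)
    spread = statistics.stdev(samples)
    if spread == 0.0:
        return 15
    return max(0, int(math.log10(abs(mean) / spread)))

# Catastrophic cancellation: most digits of the result are unreliable,
# which the spread across perturbed runs makes visible.
def cancellation(r):
    return r(1.0 + 1e-12) - r(1.0)

digits = common_digits(stochastic_runs(cancellation))
```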
NASA Astrophysics Data System (ADS)
Noble, Viveca K.
1994-10-01
When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
Detecting and preventing error propagation via competitive learning.
Silva, Thiago Christiano; Zhao, Liang
2013-05-01
Semisupervised learning is a machine learning approach which is able to employ both labeled and unlabeled samples in the training process. It is an important mechanism for autonomous systems due to its ability to exploit already acquired information while exploring new knowledge in the learning space at the same time. In these cases, the reliability of the labels is a crucial factor, because mislabeled samples may propagate wrong labels to a portion of or even the entire data set. This paper addresses the error propagation problem originated by these mislabeled samples by presenting a mechanism embedded in a network-based (graph-based) semisupervised learning method. The procedure is based on a combined random-preferential walk of particles in a network constructed from the input data set. Particles of the same class cooperate among themselves, while particles of different classes compete with each other to propagate class labels to the whole network. Computer simulations conducted on synthetic and real-world data sets reveal the effectiveness of the model. PMID:23200192
Error analysis using organizational simulation.
Fridsma, D. B.
2000-01-01
Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885
Uncertainty and error in computational simulations
Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.
1997-10-01
The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
Position error propagation in the simplex strapdown navigation system
NASA Technical Reports Server (NTRS)
1976-01-01
The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed errors containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.
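The Schuler-periodic behavior mentioned above has a classic closed form for one source: a constant accelerometer bias in an undamped inertial navigator produces a bounded position error oscillating at the Schuler frequency ω = √(g/R) (84.4-minute period). The bias value below is an assumed illustration; the unbounded gyro-bias and azimuth-coupling terms are not modeled here.

```python
import math

G = 9.81       # m/s^2
R = 6.371e6    # m, Earth radius

def schuler_position_error(accel_bias, t):
    """Position error from a constant accelerometer bias in an undamped
    single-channel inertial navigator: (b / w^2) * (1 - cos(w t))."""
    w = math.sqrt(G / R)   # Schuler frequency, ~1.24e-3 rad/s
    return (accel_bias / w**2) * (1.0 - math.cos(w * t))

# Assumed 1e-4 m/s^2 (~10 micro-g) bias; peak error occurs at half period.
peak = schuler_position_error(1e-4, math.pi * math.sqrt(R / G))
```

The peak value 2bR/g (about 130 m for this bias) shows why even Schuler-bounded errors can be large enough to require external velocity or position damping.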
Error propagation in PIV-based Poisson pressure calculations
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2015-11-01
After more than 20 years of development, PIV has become a standard non-invasive velocity field measurement technique and promises to make PIV-based pressure calculations possible. However, the errors inherent in PIV velocity fields propagate through integration and contaminate the calculated pressure field. We propose an analysis that shows how the uncertainties in the velocity field propagate to the pressure field through the Poisson equation. First, we model the dynamics of error propagation using boundary value problems (BVPs). Next, the L2-norm and/or L∞-norm is utilized as the measure of error in the velocity and pressure fields. Finally, using analysis techniques including the maximum principle and the Poincaré inequality, the error in the pressure field can be bounded in terms of the error level of the data by considering the well-posedness of the BVPs. Specifically, we examine if and how the error in the pressure field depends continuously on the BVP data. Factors such as flow field geometry, boundary conditions, and velocity field noise levels will be discussed analytically.
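A 1-D analogue makes the well-posedness bound concrete: for −p″ = f with homogeneous Dirichlet data on (0, 1), the Poincaré-type estimate gives ‖δp‖₂ ≤ (1/π²)‖δf‖₂, so data noise enters the pressure with a bounded constant. The sketch below (finite differences plus the Thomas algorithm) is illustrative; the paper treats 2-D PIV fields, and the noise level here is assumed.

```python
import math
import random

def solve_poisson_1d(f, n):
    """Solve -p'' = f on (0,1), p(0)=p(1)=0, by second-order finite
    differences; tridiagonal system solved with the Thomas algorithm."""
    h = 1.0 / (n + 1)
    a, b, c = [-1.0] * n, [2.0] * n, [-1.0] * n
    d = [h * h * fi for fi in f]
    for i in range(1, n):                    # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    p = [0.0] * n
    p[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        p[i] = (d[i] - c[i] * p[i + 1]) / b[i]
    return p

def l2(v):
    h = 1.0 / (len(v) + 1)
    return math.sqrt(h * sum(x * x for x in v))

n = 99
rng = random.Random(0)
f_true = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
noise = [rng.gauss(0.0, 0.01) for _ in range(n)]        # assumed data error
p_true = solve_poisson_1d(f_true, n)
p_noisy = solve_poisson_1d([ft + dn for ft, dn in zip(f_true, noise)], n)
dp = [u - v for u, v in zip(p_noisy, p_true)]
```

The error `dp` in the computed pressure stays within (essentially) 1/π² times the norm of the injected data noise, the discrete counterpart of continuous dependence on the BVP data.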
Inductively Coupled Plasma Mass Spectrometry Uranium Error Propagation
Hickman, D P; Maclean, S; Shepley, D; Shaw, R K
2001-07-01
The Hazards Control Department at Lawrence Livermore National Laboratory (LLNL) uses Inductively Coupled Plasma Mass Spectrometer (ICP/MS) technology to analyze uranium in urine. The ICP/MS used by the Hazards Control Department is a Perkin-Elmer Elan 6000 ICP/MS. The Department of Energy Laboratory Accreditation Program requires that the total error be assessed for bioassay measurements. A previous evaluation of the errors associated with the ICP/MS measurement of uranium demonstrated a ±9.6% error in the range of 0.01 to 0.02 µg/l. However, the propagation of total error for concentrations above and below this level has heretofore been undetermined. This document is an evaluation of the errors associated with the current LLNL ICP/MS method for an expanded range of uranium concentrations.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation are therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation, by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Error and efficiency of simulated tempering simulations
Rosta, Edina; Hummer, Gerhard
2010-01-01
We derive simple analytical expressions for the error and computational efficiency of simulated tempering (ST) simulations. The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. An extension to the multistate case is described. We show that the relative gain in efficiency of ST simulations over regular molecular dynamics (MD) or Monte Carlo (MC) simulations is given by the ratio of their reactive fluxes, i.e., the number of transitions between the two states summed over all ST temperatures divided by the number of transitions at the single temperature of the MD or MC simulation. This relation for the efficiency is derived for the limit in which changes in the ST temperature are fast compared to the two-state transitions. In this limit, ST is most efficient. Our expression for the maximum efficiency gain of ST simulations is essentially identical to the corresponding expression derived by us for replica exchange MD and MC simulations [E. Rosta and G. Hummer, J. Chem. Phys. 131, 165102 (2009)] on a different route. We find quantitative agreement between predicted and observed efficiency gains in a test against ST and replica exchange MC simulations of a two-dimensional Ising model. Based on the efficiency formula, we provide recommendations for the optimal choice of ST simulation parameters, in particular, the range and number of temperatures, and the frequency of attempted temperature changes. PMID:20095723
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing only the global state and the fuzzy error.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
Cross Section Sensitivity and Propagated Errors in HZE Exposures
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.
2005-01-01
It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interaction with shielding material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross-sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, with units (cSv/yr), associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions, and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross-sections. We do this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross-sections. Numerical differentiation with respect to the cross-sections will be evaluated in a broad class of materials including polyethylene, aluminum, and copper. We will identify the most important cross-sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
Phase unwrapping algorithms in laser propagation simulation
NASA Astrophysics Data System (ADS)
Du, Rui; Yang, Lijia
2013-08-01
Simulations of laser propagation in the atmosphere usually must deal with beams in strong turbulence. Simulating the transmission via Fourier transform can lose part of the information, leaving the phase of the beam as a 2-D array wrapped by 2π. An effective unwrapping algorithm is needed for continuous results and faster calculation. The unwrapping algorithms used in atmospheric propagation are similar to those used in radar or 3-D surface reconstruction, but not identical. In this article, three classic unwrapping algorithms are tried in wave-front reconstruction simulation: block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity algorithm (FMD). Each algorithm is tested 100 times under six conditions: low (64x64), medium (128x128), and high (256x256) resolution phase arrays, with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable at low resolution without noise. MCUT is more accurate, though it slows as the array resolution increases, and it is sensitive to noise, resulting in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory in calculation. Finally, the article presents a new algorithm based on an Activity-on-Vertex (AOV) network, which builds a logical graph to cut the search space and then finds the minimal discontinuity solution. The AOV algorithm is faster than MCUT in dealing with high-resolution phase arrays, with accuracy as good as FMD in the tests.
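A minimal 1-D illustration of the wrapping problem the algorithms above address: a phase ramp wrapped into (-π, π] loses its absolute level, and a simple unwrap recovers it. The numbers are arbitrary; the 2-D algorithms compared in the article handle the much harder case of noisy 2-D arrays.

```python
import numpy as np

# A phase ramp exceeding 2*pi, wrapped and then unwrapped.
true_phase = np.linspace(0.0, 12.0, 100)        # exceeds 2*pi
wrapped = np.angle(np.exp(1j * true_phase))     # wrap into (-pi, pi]
recovered = np.unwrap(wrapped)                  # re-integrate 2*pi jumps

print(np.max(np.abs(recovered - true_phase)))   # near machine precision
```

This 1-D case is trivial because phase differences between neighbours stay below π; noise and 2-D residues are what make the BLS/MCUT/FMD comparison nontrivial.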
Relationships between GPS-signal propagation errors and EISCAT observations
NASA Astrophysics Data System (ADS)
Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.
1996-12-01
When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges -20°leq
Optimal control of quaternion propagation errors in spacecraft navigation
NASA Technical Reports Server (NTRS)
Vathsal, S.
1986-01-01
Optimal control techniques are used to drive the numerical error (truncation, roundoff, commutation) in computing the quaternion vector to zero. The normalization of the quaternion is carried out by appropriate choice of a performance index, which can be optimized. The error equations are derived from Friedland's (1978) theoretical development, and a matrix Riccati equation results for the computation of the gain matrix. Simulation results show that a high precision, of the order of 10^-12, can be obtained using this technique in meeting the q(T)q=1 constraint. The performance of the estimator in the presence of the feedback control that maintains the normalization is studied.
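As a hedged sketch of the constraint being enforced above: propagate an attitude quaternion under an assumed constant body rate and keep q(T)q=1 by renormalising each step. This simple projection is a stand-in for, not a reproduction of, the optimal-control correction of the abstract.

```python
import numpy as np

def omega_matrix(w):
    """4x4 skew matrix so that q_dot = 0.5 * Omega(w) @ q."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

q = np.array([1.0, 0.0, 0.0, 0.0])      # initial attitude quaternion
w = np.array([0.01, -0.02, 0.03])       # body rate, rad/s (assumed constant)
dt = 0.1
for _ in range(10000):
    q = q + 0.5 * omega_matrix(w) @ q * dt   # Euler step (drifts off |q| = 1)
    q /= np.linalg.norm(q)                   # constraint enforcement

print(abs(q @ q - 1.0))   # constraint met to near machine precision
```

The optimal-control formulation instead folds the normalization into a performance index and a Riccati gain, trading this hard projection for a feedback law.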
Using back error propagation networks for automatic document image classification
NASA Astrophysics Data System (ADS)
Hauser, Susan E.; Cookson, Timothy J.; Thoma, George R.
1993-09-01
The Lister Hill National Center for Biomedical Communications is a Research and Development Division of the National Library of Medicine. One of the Center's current research projects involves the conversion of entire journals to bitmapped binary page images. In an effort to reduce operator errors that sometimes occur during document capture, three back error propagation networks were designed to automatically identify the journal title based on features in the binary image of the journal's front cover page. For all three network designs, twenty-five journal titles were randomly selected from the stored database of image files. Seven cover page images from each title were selected as the training set. For each title, three other cover page images were selected as the test set. Each bitmapped image was initially processed by counting the total number of black pixels in 32-pixel wide rows and columns of the page image. For the first network, these counts were scaled to create 122-element count vectors as the input vectors to a back error propagation network. The network had one output node for each journal classification. Although the network was successful in correctly classifying the 25 journals, the large input vector resulted in a large network and, consequently, a long training period. In an alternative approach, the first thirty-five coefficients of the Fast Fourier Transform of the count vector were used as the input vector to a second network. A third approach was to train a separate network for each journal using the original count vectors as input and with only one output node. The output of the network could be 'yes' (it is this journal) or 'no' (it is not this journal). This final design promises to be most efficient for a system in which journal titles are added or removed, as it does not require retraining a large network for each change.
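The count-vector feature extraction described above can be sketched as follows. The page size here is a placeholder, so the vector length differs from the paper's 122 elements:

```python
import numpy as np

# Count black pixels in 32-pixel-wide bands of rows and columns of a
# binary page image and concatenate the band counts into one feature
# vector, scaled as before network input. The synthetic "page" stands in
# for a scanned journal cover.
rng = np.random.default_rng(1)
page = (rng.random((1024, 768)) < 0.1).astype(np.uint8)   # 1 = black pixel

band = 32
row_counts = page.reshape(-1, band, page.shape[1]).sum(axis=(1, 2))
col_counts = page.reshape(page.shape[0], -1, band).sum(axis=(0, 2))
features = np.concatenate([row_counts, col_counts]).astype(float)
features /= features.max()            # scale to [0, 1]

print(features.shape)                 # (1024/32 + 768/32,) = (56,)
```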
On the uncertainty of stream networks derived from elevation data: the error propagation approach
NASA Astrophysics Data System (ADS)
Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.
2010-07-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations), and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in the open source software for statistical computing R: package geoR is used to fit the variogram; package gstat is used to run sequential Gaussian simulation; streams are extracted using the open source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise - usually areas of low local relief and slightly convex terrain (0-10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show high error (H>0.5) of locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error propagation tool should become a standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small data sets with several hundred points. Scripts and data sets used in this article are available on-line via the
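The per-cell uncertainty measure used above can be sketched directly: given the fraction p of DEM realizations in which a stream crosses a grid cell, the information entropy of a Bernoulli trial flags cells where stream placement is unreliable (H > 0.5 in the paper). The toy stack of stream rasters below is invented for illustration.

```python
import numpy as np

def bernoulli_entropy(p):
    """Information entropy (bits) of a Bernoulli trial with probability p."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Toy stack: 100 realizations x 6 grid cells, True where a stream was extracted
rng = np.random.default_rng(2)
streams = rng.random((100, 6)) < np.array([0.0, 0.05, 0.5, 0.9, 1.0, 0.3])

p_stream = streams.mean(axis=0)        # per-cell stream probability
H = bernoulli_entropy(p_stream)
high_error = H > 0.5                   # cells where the network is imprecise
print(H.round(2))
```

Cells where the stream appears in all or none of the realizations get H near 0; cells where it appears in about half get H near 1.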
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies w_{r} in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = w_{r}/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
NASA Astrophysics Data System (ADS)
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
Molecular dynamics simulation of propagating cracks
NASA Technical Reports Server (NTRS)
Mullins, M.
1982-01-01
Steady state crack propagation is investigated numerically using a model consisting of 236 free atoms in two (010) planes of bcc alpha iron. The continuum region is modeled using the finite element method with 175 nodes and 288 elements. The model shows clear (010) plane fracture to the edge of the discrete region at moderate loads. Analysis of the results obtained indicates that models of this type can provide realistic simulation of steady state crack propagation.
Numerical Simulation of Coherent Error Correction
NASA Astrophysics Data System (ADS)
Crow, Daniel; Joynt, Robert; Saffman, Mark
A major goal in quantum computation is the implementation of error correction to produce a logical qubit with an error rate lower than that of the underlying physical qubits. Recent experimental progress demonstrates physical qubits can achieve error rates sufficiently low for error correction, particularly for codes with relatively high thresholds such as the surface code and color code. Motivated by experimental capabilities of neutral atom systems, we use numerical simulation to investigate whether coherent error correction can be effectively used with the 7-qubit color code. The results indicate that coherent error correction does not work at the 10-qubit level in neutral atom array quantum computers. By adding more qubits there is a possibility of making the encoding circuits fault-tolerant which could improve performance.
Error propagation in the computation of volumes in 3D city models with the Monte Carlo method
NASA Astrophysics Data System (ADS)
Biljecki, F.; Ledoux, H.; Stoter, J.
2014-11-01
This paper describes the analysis of the propagation of positional uncertainty in 3D city models to the uncertainty in the computation of their volumes. Current work related to error propagation in GIS is limited to 2D data and 2D GIS operations, especially of rasters. In this research we have (1) developed two engines, one that generates random 3D buildings in CityGML in multiple LODs, and one that simulates acquisition errors to the geometry; (2) performed an error propagation analysis on volume computation based on the Monte Carlo method; and (3) worked towards establishing a framework for investigating error propagation in 3D GIS. The results of the experiments show that a comparatively small error in the geometry of a 3D city model may cause significant discrepancies in the computation of its volume. This has consequences for several applications, such as in estimation of energy demand and property taxes. The contribution of this work is twofold: this is the first error propagation analysis in 3D city modelling, and the novel approach and the engines that we have created can be used for analysing most of 3D GIS operations, supporting related research efforts in the future.
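The Monte Carlo experiment described above can be sketched for the simplest possible "building": perturb the vertex coordinates of a box with zero-mean Gaussian acquisition error and observe the spread in the computed volume. Dimensions and error size are placeholders, not the paper's CityGML models.

```python
import numpy as np

rng = np.random.default_rng(3)
w, d, h = 10.0, 8.0, 6.0      # true building dimensions, metres
sigma = 0.2                   # per-coordinate positional error, metres

n = 10000
# Each dimension is the difference of two noisy coordinates, so its
# standard deviation is sigma * sqrt(2).
dims = np.array([w, d, h]) + sigma * (
    rng.standard_normal((n, 3)) - rng.standard_normal((n, 3)))
volumes = dims.prod(axis=1)

true_vol = w * d * h
print(volumes.mean(), volumes.std())   # mean near 480 m^3, nonzero spread
```

Even this toy case shows the paper's point: small positional errors translate into a visible spread of computed volumes, which matters for downstream uses such as energy-demand estimation.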
Propagation Of Error And The Reliability Of Global Air Temperature Projections
NASA Astrophysics Data System (ADS)
Frank, P.
2013-12-01
General circulation model (GCM) projections of the impact of rising greenhouse gases (GHGs) on globally averaged annual surface air temperatures are a simple linear extrapolation of GHG forcing, as indicated by their accurate simulation using the equation ΔT = a × 33 K × [(F0 + ∑i ΔFi)/F0], where F0 is the total GHG forcing of projection year zero, ΔFi is the increment of GHG forcing in the ith year, and a is a variable dimensionless fraction that follows GCM climate sensitivity. Linearity of GCM air temperature projections means that uncertainty propagates step-wise as the root-sum-square of error. The annual average error in total cloud fraction (TCF) resulting from CMIP5 model theory-bias is ±12%, equivalent to ±5 W m-2 uncertainty in the energy state of the projected atmosphere. Propagated uncertainty due to TCF error is always much larger than the projected globally averaged air temperature anomaly, and reaches ±20 C in a centennial projection. CMIP5 GCMs thus have no predictive value.
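The step-wise root-sum-square propagation asserted above reduces to a one-line rule: if each annual step contributes an uncertainty u, the uncertainty after n steps is u·√n. The per-step value below is an assumed placeholder, not the abstract's derived figure.

```python
import math

u_step = 0.2          # assumed per-step temperature uncertainty, C
n_years = 100
u_total = u_step * math.sqrt(n_years)   # root-sum-square of n equal terms
print(u_total)        # grows as sqrt(n): 2.0 C after a century here
```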
Simulation of guided wave propagation near numerical Brillouin zones
NASA Astrophysics Data System (ADS)
Kijanka, Piotr; Staszewski, Wieslaw J.; Packo, Pawel
2016-04-01
The attractive properties of guided waves provide unique potential for characterization of incipient damage, particularly in plate-like structures. Among other properties, guided waves can propagate over long distances and can be used to monitor hidden structural features and components. On the other hand, guided propagation brings substantial challenges for data analysis. Signal processing techniques are frequently supported by numerical simulations in order to facilitate problem solution. When employing numerical models, additional sources of errors are introduced. These can play a significant role in the design and development of a wave-based monitoring strategy. Hence, the paper presents an investigation of numerical models for guided wave generation, propagation and sensing. A numerical dispersion analysis for guided waves in plates, based on the LISA approach, is presented and discussed in the paper. Both dispersion and modal amplitude characteristics are analysed. It is shown that wave propagation in a numerical model resembles propagation in a periodic medium. Consequently, Lamb wave propagation close to a numerical Brillouin zone is investigated and characterized.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6°C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated
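The per-grid-cell Monte Carlo described above can be sketched for a single cell: draw correlated error terms for temperature, humidity, and wind from the reported kriging SDs and an assumed error correlation matrix, and summarise the spread of a toy PET function. The correlation values and the PET formula are placeholders; the study used a physical PET model.

```python
import numpy as np

rng = np.random.default_rng(4)
sd = np.array([2.6, 8.7, 0.38])       # kriging SDs: T (C), RH (%), wind (m/s)
corr = np.array([[1.0, -0.3, 0.1],    # assumed interpolation-error correlations
                 [-0.3, 1.0, 0.0],
                 [0.1,  0.0, 1.0]])
cov = corr * np.outer(sd, sd)
L = np.linalg.cholesky(cov)           # for drawing correlated errors

kriged = np.array([15.0, 60.0, 3.0])  # kriged T, RH, wind at one grid cell

def pet(t, rh, wind):                 # toy stand-in for the PET model
    return 0.2 * t * (1 - rh / 100.0) * (1 + 0.5 * wind)

samples = kriged + (L @ rng.standard_normal((3, 100))).T   # 100 MC draws
pets = pet(samples[:, 0], samples[:, 1], samples[:, 2])
cv = pets.std() / pets.mean()
print(round(100 * cv, 1), "% CV")
```

Repeating this at every grid cell, with the local kriging SDs, yields the CV maps the paper reports.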
Propagation of errors from the sensitivity image in list mode reconstruction
Qi, Jinyi; Huesman, Ronald H.
2003-11-15
List mode image reconstruction is attracting renewed attention. It eliminates the storage of empty sinogram bins. However, a single back projection of all LORs is still necessary for the pre-calculation of a sensitivity image. Since the detection sensitivity is dependent on the object attenuation and detector efficiency, it must be computed for each study. Exact computation of the sensitivity image can be a daunting task for modern scanners with huge numbers of LORs. Thus, some fast approximate calculation may be desirable. In this paper, we theoretically analyze the error propagation from the sensitivity image into the reconstructed image. The theoretical analysis is based on the fixed point condition of the list mode reconstruction. The non-negativity constraint is modeled using the Kuhn-Tucker condition. With certain assumptions and the first order Taylor series approximation, we derive a closed form expression for the error in the reconstructed image as a function of the error in the sensitivity image. The result provides insights on what kind of error might be allowable in the sensitivity image. Computer simulations show that the theoretical results are in good agreement with the measured results.
Error propagation and scaling for tropical forest biomass estimates.
Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando
2004-01-01
The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
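When the four error sources above are treated as independent, they combine in quadrature into a total relative standard error. The percentages below are placeholders for illustration, not the paper's fitted values:

```python
import math

# Assumed relative standard errors for the four sources (placeholders)
measurement, allometry, sampling, landscape = 0.05, 0.20, 0.10, 0.08
total = math.sqrt(measurement**2 + allometry**2 + sampling**2 + landscape**2)
print(round(total, 3))   # dominated by the allometric-model term
```

The quadrature sum makes the paper's conclusion easy to see: a single dominant term (here allometry) controls the total, so improving the allometric model pays off most.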
Hoogeveen, R. C.; Martens, E. P.; van der Stelt, P. F.; Berkhout, W. E. R.
2015-01-01
Objective. To investigate if software simulation is practical for quantifying random error (RE) in phantom dosimetry. Materials and Methods. We applied software error simulation to an existing dosimetry study. The specifications and the measurement values of this study were brought into the software (R version 3.0.2) together with the algorithm of the calculation of the effective dose (E). Four sources of RE were specified: (1) the calibration factor; (2) the background radiation correction; (3) the read-out process of the dosimeters; and (4) the fluctuation of the X-ray generator. Results. The amount of RE introduced by these sources was calculated on the basis of the experimental values and the mathematical rules of error propagation. The software repeated the calculations of E multiple times (n = 10,000) while attributing the applicable RE to the experimental values. A distribution of E emerged as a confidence interval around an expected value. Conclusions. Credible confidence intervals around E in phantom dose studies can be calculated by using software modelling of the experiment. With credible confidence intervals, the statistical significance of differences between protocols can be substantiated or rejected. This modelling software can also be used for a power analysis when planning phantom dose experiments. PMID:26881200
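The repeated-calculation procedure described above (implemented in R in the study) is straightforward to sketch. The version below is a hedged Python illustration: the dose formula, nominal values, and relative errors are invented placeholders, not the study's data, but the structure is the same: attribute each source of random error, recompute E many times, and read off a confidence interval.

```python
import random
import statistics

def simulate_dose(n=10_000, seed=42):
    """Monte Carlo propagation of four random-error sources into a toy
    effective-dose calculation E = cal * (reading - background)."""
    rng = random.Random(seed)
    cal_nom, cal_re = 1.05, 0.02    # (1) calibration factor
    bg_nom, bg_re = 0.8, 0.05      # (2) background radiation correction
    read_nom, read_re = 50.0, 0.03  # (3) dosimeter read-out
    gen_re = 0.01                   # (4) X-ray generator fluctuation
    samples = []
    for _ in range(n):
        cal = rng.gauss(cal_nom, cal_nom * cal_re)
        bg = rng.gauss(bg_nom, bg_nom * bg_re)
        rd = rng.gauss(read_nom, read_nom * read_re) * rng.gauss(1.0, gen_re)
        samples.append(cal * (rd - bg))
    samples.sort()
    mean = statistics.fmean(samples)
    ci95 = (samples[int(0.025 * n)], samples[int(0.975 * n)])  # empirical 95% CI
    return mean, ci95
```

The width of the resulting interval is what lets differences between protocols be judged significant or not.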
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP): DRGEP is first applied to efficiently remove many unimportant species, and sensitivity analysis then removes further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions, and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to reproduce well the results of the detailed mechanism in perfectly stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
NASA Astrophysics Data System (ADS)
Burnicki, Amy Colette
Improving our understanding of the uncertainty associated with a map of land-cover change is needed given the importance placed on modeling our changing landscape. My dissertation research addressed the challenges of estimating the accuracy of a map of change by improving our understanding of the spatio-temporal structure of error in multi-date classified imagery, investigating the relative strength and importance of a temporal dependence between classification errors in multi-date imagery, and exploring the interaction of classification errors within a simulated model of land-cover change. First, I quantified the spatial and temporal patterns of error in multi-date classified imagery acquired for Pittsfield Township, Michigan. Specifically, I examined the propagation of error in a post-classification change analysis. The spatial patterns of misclassification for each classified map, the temporal correlation between the errors in each classified map, and secondary variables that may have affected the pattern of error associated with the map of change were analyzed by addressing a series of research hypotheses. The results of all analyses provided a thorough description and understanding of the spatio-temporal error structure for this test township. Second, I developed a model of error propagation in land-cover change that simulated user-defined spatial and temporal patterns of error within a time-series of classified maps to assess the impact of the specified error patterns on the accuracy of the resulting map of change. Two models were developed. The first established the overall modeling framework using land-cover maps composed of two land-cover classes. The second extended the initial model by using three land-cover class maps to investigate model performance under increased landscape complexity. The results of the simulated model demonstrated that the presence of temporal interaction between the errors of individual classified maps affected the resulting
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (> 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
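The mechanism behind these O3MR errors is simple to illustrate: the mixing ratio is the ozone partial pressure divided by the ambient pressure, so a radiosonde pressure offset maps directly (with opposite sign, to first order) into a relative mixing-ratio error. A minimal sketch, assuming ~20 hPa ambient pressure at 26 km (consistent with the abstract's note that 1.0 hPa is ~5% of the pressure there); this is an illustration, not the authors' processing code.

```python
def o3mr_error_pct(p_true_hpa, offset_hpa):
    """Relative error (%) in ozone mixing ratio O3MR = pO3 / P caused by
    a pressure offset: the measured pressure is P + offset, so the
    relative O3MR error is P / (P + offset) - 1."""
    p_measured = p_true_hpa + offset_hpa
    return (p_true_hpa / p_measured - 1.0) * 100.0
```

At ~20 hPa a +1.0 hPa offset gives roughly a 5% mixing-ratio error, while at ~55 hPa (roughly 20 km) the same offset stays under 2%, matching the altitude dependence described above.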
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations
NASA Astrophysics Data System (ADS)
Cartwright, Keigh
2014-10-01
To have a high degree of confidence in simulations one needs code verification, validation, solution verification and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method and how it impacts code verification and validation. A technique is developed to determine the fully converged solution with error bounds from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g., Δt, Δx, and macro-particle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
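The core idea — fit an error-convergence model to solutions at several resolutions, then extrapolate to the converged answer with an error bound — reduces, in the simplest single-parameter, known-order case, to classical Richardson extrapolation. A minimal sketch (the talk's method additionally handles multiple convergence parameters, arbitrary rates, and bootstrapped stochastic uncertainty):

```python
def richardson(f_h, f_h2, p=2.0):
    """Richardson extrapolation for a quantity f computed at step sizes
    h and h/2, assuming f(h) = f0 + C*h**p (second order by default).
    Returns the extrapolated value f0 and the estimated discretization
    error of the finer solution f_h2."""
    r = 2.0 ** p
    f0 = (r * f_h2 - f_h) / (r - 1.0)
    return f0, f_h2 - f0
```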
Simulating tsunami propagation in fjords with long-wave models
NASA Astrophysics Data System (ADS)
Løvholt, F.; Glimsdal, S.; Lynett, P.; Pedersen, G.
2015-03-01
Tsunamis induced by rock slides constitute a severe hazard towards coastal fjord communities. Fjords are narrow and rugged with steep slopes, and modeling the short-period and high-amplitude tsunamis in this environment is demanding. In the present paper, our ability (and the lack thereof) to simulate tsunami propagation and run-up in fjords for typical wave characteristics of rock-slide-induced waves is demonstrated. The starting point is a 1:500 scale model of the topography and bathymetry of the southern part of the Storfjorden fjord system in western Norway. Using measured wave data from the scale model as input to numerical simulations, we find that the leading wave is moderately influenced by nonlinearity and dispersion. For the trailing waves, dispersion and dissipation from the alongshore inundation of the traveling wave become more important. The tsunami inundation was simulated at the two locations of Hellesylt and Geiranger, providing a good match with the measurements in the former location. In Geiranger, the most demanding case of the two, the discrepancies are larger. The discrepancies may be explained by a combination of factors, such as accumulated errors in the wave propagation along large stretches of the fjord, the coarse grid resolution needed to ensure model stability, and scale effects in the laboratory experiments.
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.
Simulation of MAD Cow Disease Propagation
NASA Astrophysics Data System (ADS)
Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.
Computer simulation of the dynamics of BSE disease is presented. Both vertical (to offspring) and horizontal (to neighbor) mechanisms of disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000 with the initial population randomly distributed on the lattice. The disease may be introduced either with the initial population or by spontaneous development of BSE in an individual, at a small frequency. The main results show a critical probability of BSE transmission above which the disease is present in the population. This value is sensitive to possible spatial clustering of the population, and it also depends on the mechanism responsible for the disease onset, evolution and propagation. A threshold birth rate below which the population goes extinct is also observed. Above this threshold the population remains disease free at equilibrium until another birth rate value is reached, beyond which the disease is present in the population. For the typical model parameters used in the simulation, which may correspond to mad cow disease, we are close to the BSE-free case.
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
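As an illustration of the Taylor-series model, the sketch below propagates measurement errors into the pressure coefficient Cp = (p - p_inf) / q_inf, using q_inf = 0.7 * p_inf * M^2 (gamma = 1.4). The partial derivatives play the role of the sensitivity coefficients described above; the particular formula and the numerical values used to exercise it are illustrative, not taken from the report.

```python
import math

def cp_uncertainty(p, p_inf, mach, dp, dp_inf, dmach):
    """First-order (Taylor-series) error propagation into the pressure
    coefficient Cp = (p - p_inf) / q_inf with q_inf = 0.7 * p_inf * M**2."""
    q = 0.7 * p_inf * mach ** 2
    cp = (p - p_inf) / q
    # Sensitivity coefficients (partial derivatives of Cp).
    dcp_dp = 1.0 / q
    dcp_dpinf = -1.0 / q - (p - p_inf) / (q * p_inf)
    dcp_dmach = -2.0 * (p - p_inf) / (q * mach)
    # Root-sum-square combination of the independent error sources.
    dcp = math.sqrt((dcp_dp * dp) ** 2 + (dcp_dpinf * dp_inf) ** 2
                    + (dcp_dmach * dmach) ** 2)
    return cp, dcp
```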
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
Spatio-temporal precipitation error propagation in runoff modelling: a case study in central Sweden
NASA Astrophysics Data System (ADS)
Olsson, J.
2006-07-01
The propagation of spatio-temporal errors in precipitation estimates to runoff errors in the output of the conceptual hydrological HBV model was investigated. The study region was the Gimån catchment in central Sweden, and the study period was the year 2002. Five precipitation sources were considered: an NWP model (H22), weather radar (RAD), precipitation gauges (PTH), and two versions of a mesoscale analysis system (M11, M22). The mesoscale climate analysis M11 was used to define the baseline estimates of precipitation and runoff, from which seasonal precipitation and runoff biases were computed. The main precipitation biases were a systematic overestimation of precipitation by H22, in particular during winter and early spring, and a pronounced local overestimation by RAD during autumn in the western part of the catchment. These overestimations in some cases exceeded 50% in terms of seasonal subcatchment relative accumulated volume bias, but generally the bias was within ±20%. The precipitation data from the different sources were used to drive the HBV model, set up and calibrated for two stations in Gimån, both for continuous simulation during 2002 and for forecasting of the spring flood peak. In summer, autumn and winter all sources agreed well. In spring, H22 overestimated the accumulated runoff volume by ~50% and the peak discharge by almost 100%, owing to both overestimated snow depth and overestimated precipitation during the spring flood. PTH overestimated spring runoff volumes by ~15% owing to overestimated winter precipitation. The results demonstrate how biases in precipitation estimates may exhibit substantial space-time variability, and may become either magnified or reduced when applied for hydrological purposes, depending on both temporal and spatial variations in the catchment. Thus, the uncertainty in precipitation estimates should preferably be specified as a function of both time and space.
Effects of Error Experience When Learning to Simulate Hypernasality
ERIC Educational Resources Information Center
Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.
2013-01-01
Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
Simulation of Systematic Errors in Phase-Referenced VLBI Astrometry
NASA Astrophysics Data System (ADS)
Pradel, N.; Charlot, P.; Lestrade, J.-F.
2005-12-01
The astrometric accuracy in the relative coordinates of two angularly-close radio sources observed with the phase-referencing VLBI technique is limited by systematic errors. These include geometric errors and atmospheric errors. Based on simulation with the SPRINT software, we evaluate the impact of these errors in the estimated relative source coordinates for standard VLBA observations. Such evaluations are useful to estimate the actual accuracy of phase-referenced VLBI astrometry.
Saarelma, Jukka; Botts, Jonathan; Hamilton, Brian; Savioja, Lauri
2016-04-01
Finite-difference time-domain (FDTD) simulation has been a popular area of research in room acoustics due to its capability to simulate wave phenomena over a wide bandwidth directly in the time domain. A downside of the method is that it introduces a direction- and frequency-dependent error into the simulated sound field due to the non-linear dispersion relation of the discrete system. In this study, the perceptual threshold of the dispersion error is measured in three-dimensional FDTD schemes as a function of simulation distance. Dispersion error is evaluated for three different explicit, non-staggered FDTD schemes using the numerical wavenumber in the direction of the worst-case error of each scheme. It is found that the thresholds for the different schemes do not vary significantly when the phase velocity error level is fixed. The thresholds are found to vary significantly between the different sound samples. The measured threshold for the audibility of dispersion error at the probability level of 82% correct discrimination for a three-alternative forced choice is found to be 9.1 m of propagation in a free field, which leads to a maximum group delay error of 1.8 ms at 20 kHz with the chosen phase velocity error level of 2%. PMID:27106330
Error propagation: a comparison of Shack-Hartmann and curvature sensors.
Kellerer, Aglaé N; Kellerer, Albrecht M
2011-05-01
Phase estimates in adaptive-optics systems are computed by use of wavefront sensors, such as Shack-Hartmann or curvature sensors. In either case, the standard error of the phase estimates is proportional to the standard error of the measurements, but the error-propagation factors are different. We calculate the ratio of these factors for curvature and Shack-Hartmann sensors as a function of the number of sensors, n, on a circular aperture. If the sensor spacing is kept constant and the pupil is enlarged, the ratio increases as n^0.4. When more sensing elements are accommodated on the same aperture, it increases even faster, namely, proportionally to n^0.8. With large numbers of sensing elements, this increase can limit the applicability of curvature sensors. PMID:21532691
Error-Based Simulation for Error-Awareness in Learning Mechanics: An Evaluation
ERIC Educational Resources Information Center
Horiguchi, Tomoya; Imai, Isao; Toumoto, Takahito; Hirashima, Tsukasa
2014-01-01
Error-based simulation (EBS) has been developed to generate phenomena by using students' erroneous ideas and also offers promise for promoting students' awareness of errors. In this paper, we report the evaluation of EBS used in learning "normal reaction" in a junior high school. An EBS class, where students learned the concept…
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
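A deliberately simple 1D analogue shows the mechanism: when the source term of a Poisson problem carries an error (as a PIV-derived source does), solving the equation spreads that error over the whole domain, with the magnitude set by the domain size and the boundary conditions. The solver and source below are invented for illustration and are not the paper's analysis.

```python
def solve_poisson_1d(f, n, left=0.0, right=0.0):
    """Solve u'' = f on [0, 1] with Dirichlet boundary values using
    second-order finite differences (Thomas algorithm for the
    tridiagonal system); f holds the source at the n interior nodes."""
    h = 1.0 / (n + 1)
    a = [1.0] * n                      # sub-diagonal
    b = [-2.0] * n                     # diagonal
    c = [1.0] * n                      # super-diagonal
    d = [h * h * f[i] for i in range(n)]
    d[0] -= left                       # fold boundary values into RHS
    d[-1] -= right
    for i in range(1, n):              # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n                      # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u
```

For a constant source f = -2 with zero boundary values the discrete solution reproduces u = x(1-x) exactly; perturbing the source to -1.9 (a uniform data error of 0.1) shifts the solution by up to 0.0125 at mid-domain, i.e. the data error reappears everywhere in the solution, shaped by the domain geometry.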
NASA Technical Reports Server (NTRS)
James, R.; Brownlow, J. D.
1985-01-01
A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.
Simulation of Radar Rainfall Fields: A Random Error Model
NASA Astrophysics Data System (ADS)
Aghakouchak, A.; Habib, E.; Bardossy, A.
2008-12-01
Precipitation is a major input to hydrological and meteorological models, and it is believed that uncertainties in the input data will propagate through the modeling of hydrologic processes. Stochastically generated rainfall data are used as input to hydrological and meteorological models to assess model uncertainties and climate variability in water resources systems. The superposition of random errors from different sources is one of the main factors in the uncertainty of radar estimates. One way to express these uncertainties is to stochastically generate random error fields and impose them on radar measurements in order to obtain an ensemble of radar rainfall estimates. In the method introduced here, the random error consists of two components: a purely random component and a component dependent on the indicator variable. The parameters of the error model are estimated using a heteroscedastic maximum likelihood approach in order to account for variance heterogeneity in radar rainfall error estimates. When reflectivity values are considered, the exponent and multiplicative factor of the Z-R relationship are estimated simultaneously with the model parameters. The presented model performs better than previous approaches, which generally result in unaccounted-for heteroscedasticity in the error fields and thus in the radar ensemble.
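A heavily simplified sketch of the two-component error structure described above: a purely random part plus a part switched on by an indicator variable (here, rain rate above a threshold), imposed multiplicatively on a radar field to produce an ensemble. All parameters are invented, and unlike the actual model no spatial correlation or heteroscedastic maximum likelihood estimation is included.

```python
import random

def radar_ensemble(radar_field, n_members=10, sigma0=0.1, sigma1=0.2,
                   threshold=5.0, seed=1):
    """Impose a two-component multiplicative random error on a radar
    rainfall field: sigma0 drives the purely random component, sigma1
    the component active only where the indicator (rain rate above
    `threshold`) holds.  Returns a list of perturbed field realizations."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_members):
        member = []
        for r in radar_field:
            indicator = 1.0 if r > threshold else 0.0
            err = rng.gauss(0.0, sigma0) + indicator * rng.gauss(0.0, sigma1)
            member.append(max(0.0, r * (1.0 + err)))  # keep rainfall non-negative
        ensemble.append(member)
    return ensemble
```

Each realization can then be fed to a hydrological model, yielding an ensemble of runoff responses as in the study described in the head of this listing.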
NASA Astrophysics Data System (ADS)
Addor, Nans; Fischer, Erich M.
2015-10-01
Climate model simulations are routinely compared to observational data sets for evaluation purposes. The resulting differences can be large and induce artifacts if propagated through impact models. They are usually termed "model biases," suggesting that they exclusively stem from systematic model errors. Here we explore for Switzerland the contribution of two other components of this mismatch, which are usually overlooked: interpolation errors and natural variability. Precipitation and temperature simulations from the RCM COSMO-Community Land Model were compared to two observational data sets, for which estimates of interpolation errors were derived. Natural variability on the multidecadal time scale was estimated using three approaches relying on homogenized time series, multiple runs of the same climate model, and bootstrapping of 30 year meteorological records. We find that although these methods yield different estimates, the contribution of the natural variability to RCM-observation differences in 30 year means is usually small. In contrast, uncertainties in observational data sets induced by interpolation errors can explain a substantial proportion of the mismatch of 30 year means. In those cases, we argue that the model biases can hardly be distinguished from interpolation errors, making the characterization and reduction of model biases particularly delicate. In other regions, RCM biases clearly exceed the estimated contribution of natural variability and interpolation errors, enabling bias characterization and robust model evaluation. Overall, we argue that bias correction of climate simulations needs to account for observational uncertainties and natural variability. We particularly stress the need for reliable error estimates to accompany observational data sets.
Programmable simulator for beam propagation in turbulent atmosphere.
Rickenstorff, Carolina; Rodrigo, José A; Alieva, Tatiana
2016-05-01
The study of light propagation through the atmosphere is crucial in different areas such as astronomy, free-space communications, remote sensing, etc. Since outdoor experiments are expensive and difficult to reproduce, it is important to develop realistic numerical and experimental simulations. It has been demonstrated that spatial light modulators (SLMs) are well-suited for simulating different turbulent conditions in the laboratory. Here, we present a programmable experimental setup based on liquid crystal SLMs for simulation and analysis of beam propagation through a weak turbulent atmosphere. The simulator allows changing the propagation distances and atmospheric conditions without the need for moving optical elements. Its performance is tested for Gaussian and vortex beams. PMID:27137610
Simulation of long distance optical propagation on a benchtop.
Fein, M E; Sheng, S C; Sobottke, M
1989-04-15
An optical instrument derived from two telescopes simulates long-distance propagation of optical wavefronts within short real distances. Both geometric and wave optical effects are correctly simulated. One 900:1 distance scaler is used routinely for benchtop testing and adjustment of laser leveling instruments that work at ranges of the order of a kilometer. PMID:20548700
Reducing the error growth in the numerical propagation of satellite orbits
NASA Astrophysics Data System (ADS)
Ferrandiz, Jose M.; Vigo, Jesus; Martin, P.
1991-12-01
An algorithm especially designed for the long-term numerical integration of perturbed oscillators, in one or several frequencies, is presented. The method is applied to the numerical propagation of satellite orbits, using focal variables, and results for highly eccentric and nearly circular cases are reported. The method performs particularly well at high eccentricity. For e = 0.99 and J2 + J3 perturbations it locates the last perigee after 1000 revolutions with an error of less than 1 cm, with only 80 derivative evaluations per revolution. In general the approach provides about a hundred times more accuracy than Bettis methods over one thousand revolutions.
Belief Propagation for Error Correcting Codes and Lossy Compression Using Multilayer Perceptrons
NASA Astrophysics Data System (ADS)
Mimura, Kazushi; Cousseau, Florent; Okada, Masato
2011-03-01
The belief propagation (BP) based algorithm is investigated as a potential decoder for both error correcting codes and lossy compression, which are based on non-monotonic tree-like multilayer perceptron encoders. We discuss whether the BP can give practical algorithms in these schemes. Although the theoretical results seem somewhat promising, the BP implementations in these kinds of fully connected networks unfortunately show strong limitations. Instead, the BP-based algorithms reveal that the solution space might have a rich and complex structure.
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
NASA Astrophysics Data System (ADS)
Sarkar, B.; Bhunia, C. T.; Maulik, U.
2012-06-01
The advanced encryption standard (AES) poses a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available. One is the redundancy-based technique and the other is the bit-based parity technique. The first has a significant advantage over the second in that it corrects any error in definite terms, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that would speed up the process of reliable encryption and hence secured communication.
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
NASA Astrophysics Data System (ADS)
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to incorporate experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the studied practical cases, the estimates for the likelihood weights are seen to converge impractically slowly with the sample size, compared to matrix inversion. In cases with more experimental points, the computational time is also estimated to be greater than for matrix inversion. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
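The equivalence the abstract describes can be sketched on a toy problem: a fully correlated systematic error plus independent random errors gives a multivariate Gaussian likelihood, which can be computed either via the covariance matrix or by averaging conditional likelihoods over sampled systematic errors. The residuals and uncertainties below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy residuals (experiment minus model) for 3 correlated data points,
# with assumed random and fully correlated systematic uncertainties.
r = np.array([0.2, -0.1, 0.3])
sig_rand, sig_sys = 0.2, 0.3
n = r.size

# Conventional route: multivariate Gaussian likelihood via the
# experimental covariance matrix (solved rather than explicitly inverted).
C = sig_rand**2 * np.eye(n) + sig_sys**2 * np.ones((n, n))
L_matrix = np.exp(-0.5 * r @ np.linalg.solve(C, r)) \
           / np.sqrt((2 * np.pi)**n * np.linalg.det(C))

# Sampling route: draw the shared systematic error, then average the
# conditional likelihoods, which factorize over the now-independent points.
K = 200_000
delta = rng.normal(0.0, sig_sys, K)[:, None]        # systematic samples
cond = np.exp(-0.5 * ((r - delta) / sig_rand)**2) \
       / (np.sqrt(2 * np.pi) * sig_rand)
L_sampled = cond.prod(axis=1).mean()
```

As in the study, the sampled estimate converges to the matrix result only as the number of systematic-error samples grows large, which is exactly the slow convergence the abstract criticizes.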
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d≤20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ∼ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
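Extracting a decay rate of the quoted form is a straight-line fit in log space; the (distance, probability) pairs below are generated from an assumed decay, not taken from the paper's surface-code data.

```python
import numpy as np

# Hypothetical (code distance, logical error probability) pairs that
# follow the expected decay P_L = A * exp(-alpha * d); these numbers
# are illustrative, not the paper's results.
alpha_true, A = 0.9, 0.5
d = np.arange(4, 21, 2)
P_L = A * np.exp(-alpha_true * d)

# Recover the decay rate alpha(p) by a straight-line fit in log space.
slope, intercept = np.polyfit(d, np.log(P_L), 1)
alpha_fit = -slope
```

With real splitting-method estimates of P_L, the same fit (weighted by the estimates' uncertainties) yields the α(p) curve the abstract reports.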
Wavefront error simulator for evaluating optical testing instrumentation
NASA Technical Reports Server (NTRS)
Golden, L. J.
1975-01-01
A wavefront error simulator has been designed and fabricated to evaluate experimentally test instrumentation for the Large Space Telescope (LST) program. The principal operating part of the simulator is an aberration generator that introduces low-order aberrations of several waves magnitude with an incremented adjustment capability of lambda/100. Each aberration type can be introduced independently with any desired spatial orientation.
The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre- and Post-Test Designs
ERIC Educational Resources Information Center
Gorard, Stephen
2013-01-01
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
Characteristics and dependencies of error in satellite-based flood event simulations
NASA Astrophysics Data System (ADS)
Mei, Yiwen; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Zoccatelli, Davide; Borga, Marco
2016-04-01
The error in satellite precipitation driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied to the matched event pairs and basin-scale event properties (i.e. rainfall and runoff cumulative depth and time series shape). Overall, the error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both the systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events, and for the rain flood events with high runoff coefficient. This event-based analysis of satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.
Abundance recovery error analysis using simulated AVIRIS data
NASA Technical Reports Server (NTRS)
Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.
1992-01-01
Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed-form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario which is modeled as being derived from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
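The contrast between the two approaches can be sketched with a linear mixing model: the covariance analysis gives the abundance covariance in closed form, while Monte Carlo needs thousands of unmixing trials to estimate the same quantity. Endmember spectra, abundances, and the noise level below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear mixing model: 5 spectral bands, 2 endmembers. The endmember
# spectra, abundances, and noise level are hypothetical, not AVIRIS data.
E = np.array([[0.2, 0.9],
              [0.4, 0.8],
              [0.6, 0.5],
              [0.8, 0.3],
              [0.9, 0.1]])
a_true = np.array([0.3, 0.7])
sigma = 0.01                      # per-band measurement noise std

# Covariance analysis: closed-form abundance covariance, no trials needed.
cov_closed = sigma**2 * np.linalg.inv(E.T @ E)

# Monte Carlo: thousands of noisy least-squares unmixing trials.
trials = 20_000
x = E @ a_true + rng.normal(0.0, sigma, (trials, E.shape[0]))
a_hat = x @ np.linalg.pinv(E).T   # least-squares abundances per trial
cov_mc = np.cov(a_hat, rowvar=False)
```

The two covariance estimates agree to within Monte Carlo noise, which is the point of the abstract: the closed-form result replaces the thousands of trials.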
Numerical error in groundwater flow and solute transport simulation
NASA Astrophysics Data System (ADS)
Woods, Juliette A.; Teubner, Michael D.; Simmons, Craig T.; Narayan, Kumar A.
2003-06-01
Models of groundwater flow and solute transport may be affected by numerical error, leading to quantitative and qualitative changes in behavior. In this paper we compare and combine three methods of assessing the extent of numerical error: grid refinement, mathematical analysis, and benchmark test problems. In particular, we assess the popular solute transport code SUTRA [Voss, 1984] as being a typical finite element code. Our numerical analysis suggests that SUTRA incorporates a numerical dispersion error and that its mass-lumped numerical scheme increases the numerical error. This is confirmed using a Gaussian test problem. A modified SUTRA code, in which the numerical dispersion is calculated and subtracted, produces better results. The much more challenging Elder problem [Elder, 1967; Voss and Souza, 1987] is then considered. Calculation of its numerical dispersion coefficients and numerical stability show that the Elder problem is prone to error. We confirm that Elder problem results are extremely sensitive to the simulation method used.
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
environmental control. We simulate the performance of the stitching algorithm dealing with surface error and misalignment of the ACF, and noise suppression, which provides guidelines to optomechanical design of the stitching test system.
Temperature measurement error simulation of the pure rotational Raman lidar
NASA Astrophysics Data System (ADS)
Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang
2015-11-01
Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes. Lidar has some advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we simulated the temperature measurement errors of the double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of the atmospheric transmittance and the range in the lidar equation, we simulated the temperature measurement errors influenced by the beam splitting system parameters, such as the center wavelength, the receiving bandwidth and the atmospheric temperature. We analyzed three types of temperature measurement errors in theory and propose several design methods for the beam splitting system to reduce these errors. Second, we simulated the temperature measurement error profiles using the lidar equation. As the lidar power-aperture product is fixed, the main target of our lidar system is to reduce the statistical and leakage errors.
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
A simulation of high energy cosmic ray propagation 2
NASA Technical Reports Server (NTRS)
Honda, M.; Kamata, K.; Kifune, T.; Matsubara, Y.; Mori, M.; Nishijima, K.
1985-01-01
The propagation of cosmic rays in the Galactic arm is simulated. The Galactic magnetic fields are known to follow the so-called Galactic arms as a main structure, with turbulence on a scale of about 30 pc. The distribution of cosmic rays in the Galactic arm is studied, and the escape time and the possible anisotropies caused by the arm structure are discussed.
Monte Carlo Simulations of Light Propagation in Apples
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.
Propagation of radar rainfall uncertainty in urban flood simulations
NASA Astrophysics Data System (ADS)
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of the uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure on purely stochastic fields. A
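The last step described, imposing a correlation structure on purely stochastic fields, is commonly done with a Cholesky factor of the error covariance; the sketch below illustrates this for a short 1-D transect with an assumed exponential correlation model (the field, correlation length, and error magnitude are invented, and the real system works on 2-D space-time fields).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1-D radar rainfall transect (mm/h) and an assumed
# exponential spatial correlation of the (log-space) radar error.
radar = np.array([1.0, 2.5, 4.0, 3.0, 1.5, 0.5])
n = radar.size
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
corr = np.exp(-lags / 2.0)        # assumed correlation length: 2 pixels
sigma = 0.3                       # assumed error std in log space

# Impose the correlation structure on purely stochastic (white) noise
# via the Cholesky factor of the error covariance.
Lch = np.linalg.cholesky(sigma**2 * corr)
perturbation = Lch @ rng.standard_normal(n)

member = radar * np.exp(perturbation)   # one ensemble member
```

Because Lch @ Lch.T reproduces the target covariance exactly, every ensemble member drawn this way carries the prescribed error correlation by construction.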
Simulation of the elastic wave propagation in anisotropic microstructures
NASA Astrophysics Data System (ADS)
Bryner, Juerg; Vollmann, Jacqueline; Profunser, Dieter M.; Dual, Jurg
2007-06-01
For the interpretation of optical pump-probe measurements on microstructures, the wave propagation in anisotropic 3-D structures with arbitrary geometries is numerically calculated. The laser acoustic pump-probe technique generates bulk waves in structures in a thermo-elastic way. This method is well established for non-destructive measurements of thin films with an in-depth resolution on the order of 10 nm. The pump-probe technique can also be used for measurements, e.g. for quality inspection, of three-dimensional structures with arbitrary geometries, like MEMS components. For the interpretation of the measurements it is necessary that the wave propagation in the specimen to be inspected can be calculated. Here, the wave propagation for various geometries and materials is investigated. In the first part, the wave propagation in isotropic axisymmetric structures is simulated with a 2-D finite difference formulation. The numerical results are verified with measurements of macroscopic specimens. In a second step, the simulations are extended to 3-D structures with orthotropic material properties. The implemented code allows the calculation of the wave propagation for different orientations of the material axes (orientation of the orthotropic axes relative to the geometry of the structure). Limits of the presented algorithm are discussed and future directions of the ongoing research project are presented.
Simulations of time spreading in shallow water propagation
NASA Astrophysics Data System (ADS)
Thorsos, Eric I.; Elam, W. T.; Tang, Dajun; Henyey, Frank S.; Williams, Kevin L.; Reynolds, Stephen A.
2002-11-01
Pulse propagation in a shallow water wave guide leads to time spreading due to multipath effects. Results of PE simulations will be described for pulse propagation in shallow water with a rough sea surface and a flat sandy sea floor. The simulations illustrate that such time spreading may be significantly less at longer ranges than for the flat surface case. Pressure fields are simulated in two space dimensions and have been obtained using a wide-angle PE code developed by Rosenberg [A. D. Rosenberg, J. Acoust. Soc. Am. 105, 144-153 (1999)]. The effect of rough surface scattering is to cause acoustic energy initially propagating at relatively high angles but still below the critical angle at the sea floor to be eventually shifted to grazing angles above the critical angle. This energy is then lost into the bottom, effectively stripping higher propagating modes. The surviving energy at longer ranges is concentrated in the lowest modes and shows little effect of time spreading. Thus, the effect of rough surface scattering is found to produce a simpler temporal field structure than if the surface were treated as flat. [Work supported by ONR.]
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Targeting Error Simulator for Image-guided Prostate Needle Placement
Lasso, Andras; Avni, Shachar; Fichtinger, Gabor
2010-01-01
Motivation: Needle-based biopsy and local therapy of prostate cancer depend on multimodal imaging for both target planning and needle guidance. The clinical process involves selecting target locations in a pre-operative image volume and registering these to an intra-operative volume. Registration inaccuracies inevitably lead to targeting error, a major clinical concern. The analysis of targeting error requires a large number of images with known ground truth, which has been infeasible even for the largest research centers. Methods: We propose to generate realistic prostate imaging data in a controllable way, with known ground truth, by simulation of the prostate size, shape, motion and deformation typically encountered in prostatic needle placement. These data are then used to evaluate a given registration algorithm by testing its ability to reproduce ground truth contours, motions and deformations. The method builds on a statistical shape atlas to generate a large number of realistic prostate shapes and on finite element modeling to generate high-fidelity deformations, while segmentation error is simulated by warping the ground truth data in specific prostate regions. The expected target registration error (TRE) is computed as a vector field. Results: The simulator was configured to evaluate the TRE when using a surface-based rigid registration algorithm in a typical prostate biopsy targeting scenario. Simulator parameters, such as segmentation error and deformation, were determined from measurements in clinical images. Turnaround time for the full simulation of one test case was below 3 minutes. The simulator is customizable for testing, comparing and optimizing segmentation and registration methods and is independent of the imaging modalities used. PMID:21096275
Statistical error in particle simulations of low Mach number flows
Hadjiconstantinou, N G; Garcia, A L
2000-11-13
We present predictions for the statistical error due to finite sampling in the presence of thermal fluctuations in molecular simulation algorithms. The expressions are derived using equilibrium statistical mechanics. The results show that the number of samples needed to adequately resolve the flowfield scales as the inverse square of the Mach number. Agreement of the theory with direct Monte Carlo simulations shows that the use of equilibrium theory is justified.
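The quoted scaling, sample count inversely proportional to the square of the Mach number, follows because the thermal fluctuations set a fixed noise floor while the mean flow signal shrinks with Ma. A minimal sketch, with an assumed prefactor standing in for the thermal-speed factor (not the paper's exact expression):

```python
# The relative statistical error of the sample-mean flow velocity is
# taken to behave as E = c0 / (sqrt(M) * Ma), so the number of samples M
# needed for a target error grows as 1/Ma^2. The prefactor c0 is an
# illustrative placeholder, not the value derived in the paper.
def samples_needed(Ma, rel_err, c0=1.0):
    return (c0 / (rel_err * Ma)) ** 2

n_fast = samples_needed(Ma=0.5, rel_err=0.01)    # ~4e4 samples
n_slow = samples_needed(Ma=0.05, rel_err=0.01)   # ~4e6 samples
# A tenfold smaller Mach number costs a hundredfold more samples.
```

This is why low-speed (nearly incompressible) flows are so expensive to resolve with particle methods such as DSMC or molecular dynamics.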
Discreteness noise versus force errors in N-body simulations
NASA Technical Reports Server (NTRS)
Hernquist, Lars; Hut, Piet; Makino, Jun
1993-01-01
A low accuracy in the force calculation per time step of a few percent for each particle pair is sufficient for collisionless N-body simulations. Higher accuracy is made meaningless by the dominant discreteness noise in the form of two-body relaxation, which can be reduced only by increasing the number of particles. Since an N-body simulation is a Monte Carlo procedure in which each particle-particle force is essentially random, i.e., carries an error of about 1000 percent, the only requirement is a systematic averaging-out of these intrinsic errors. We illustrate these assertions with two specific examples in which individual pairwise forces are deliberately allowed to carry significant errors: tree-codes on supercomputers and algorithms on special-purpose machines with low-precision hardware.
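The averaging-out argument can be checked numerically: if each pairwise force carries a large random error, the relative error of the summed net force shrinks as one over the square root of the number of pairs. The unit "true" pair forces and the 100% pair error below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Each pairwise force carries a huge random error; the net force is a
# sum over many pairs, so the errors average out as 1/sqrt(n_pairs).
# Unit "true" pair forces and a 100% per-pair error are illustrative.
def net_force_rel_error(n_pairs, pair_rel_error=1.0, trials=400):
    noise = pair_rel_error * rng.standard_normal((trials, n_pairs))
    net = (1.0 + noise).sum(axis=1)          # noisy net force per trial
    return np.mean(np.abs(net - n_pairs)) / n_pairs

few = net_force_rel_error(100)       # ~8% net error from 100% pair errors
many = net_force_rel_error(10_000)   # 100x the pairs, ~10x smaller error
```

This is the sense in which even ~1000 percent pairwise errors are tolerable: the systematic requirement is only that the individual errors average out rather than accumulate.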
Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances
NASA Astrophysics Data System (ADS)
Vermeesch, P.
2015-12-01
Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method: 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J [40Ar/39Ar − 298.56 × 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ²(t) = [J² σ²(R) + R² σ²(J)] / [λ² (1 + R J)²], which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single and multi collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking
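The covariance point can be made concrete with first-order propagation of the age equation; the R and J values, their uncertainties, and the 0.9 correlation below are invented for illustration, not measured data.

```python
import numpy as np

# 40K total decay constant, in 1/yr (a commonly used value).
lam = 5.543e-10

# Illustrative inputs, not measured data: R = radiogenic 40Ar/39Ar,
# J = irradiation parameter, each with 0.5% uncertainty.
R, sR = 20.0, 0.1
J, sJ = 5.0e-3, 2.5e-5

t = np.log(1.0 + J * R) / lam          # age equation: ~172 Myr here

# First-order error propagation with an explicit R-J covariance term;
# the conventional protocol implicitly sets cov_RJ = 0.
def sigma_t(R, sR, J, sJ, cov_RJ=0.0):
    dtdR = J / (lam * (1.0 + J * R))
    dtdJ = R / (lam * (1.0 + J * R))
    return np.sqrt(dtdR**2 * sR**2 + dtdJ**2 * sJ**2
                   + 2.0 * dtdR * dtdJ * cov_RJ)

s_zero = sigma_t(R, sR, J, sJ)                        # zero covariance
s_corr = sigma_t(R, sR, J, sJ, cov_RJ=0.9 * sR * sJ)  # r = 0.9
```

With both partial derivatives positive, ignoring a positive R-J correlation understates the age uncertainty, which is the kind of bias the logratio bookkeeping in Ar-Ar_Redux is designed to avoid.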
Communication Systems Simulator with Error Correcting Codes Using MATLAB
ERIC Educational Resources Information Center
Gomez, C.; Gonzalez, J. E.; Pardo, J. M.
2003-01-01
In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…
Error and efficiency of replica exchange molecular dynamics simulations
Rosta, Edina; Hummer, Gerhard
2009-01-01
We derive simple analytical expressions for the error and computational efficiency of replica exchange molecular dynamics (REMD) simulations (and by analogy replica exchange Monte Carlo simulations). The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. As a specific example, we consider the folding and unfolding of a protein. The efficiency is defined as the rate with which the error in an estimated equilibrium property, as measured by the variance of the estimator over repeated simulations, decreases with simulation time. For two-state systems, this rate is in general independent of the particular property. Our main result is that, with comparable computational resources used, the relative efficiency of REMD and molecular dynamics (MD) simulations is given by the ratio of the number of transitions between the two states averaged over all replicas at the different temperatures, and the number of transitions at the single temperature of the MD run. This formula applies if replica exchange is frequent, as compared to the transition times. High efficiency of REMD is thus achieved by including replica temperatures in which the frequency of transitions is higher than that at the temperature of interest. In tests of the expressions for the error in the estimator, computational efficiency, and the rate of equilibration we find quantitative agreement with the results both from kinetic models of REMD and from actual all-atom simulations of the folding of a peptide in water. PMID:19894977
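The main result quoted, efficiency as a ratio of transition counts, reduces to simple counting once trajectories are mapped to the two metastable states. The toy state sequences below are invented, not simulation output.

```python
import numpy as np

# Count interconversions in a two-state (0 = folded, 1 = unfolded) series.
def count_transitions(states):
    s = np.asarray(states)
    return int(np.sum(s[1:] != s[:-1]))

# Toy state sequences, not simulation data: one single-temperature MD
# trajectory and three REMD replica trajectories.
md_traj = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
remd_trajs = [[0, 0, 1, 1, 0, 1, 0, 0, 1, 0],
              [0, 1, 0, 1, 1, 0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1, 1, 0, 1, 0, 1]]

n_md = count_transitions(md_traj)                            # 3 here
n_remd = np.mean([count_transitions(t) for t in remd_trajs])  # replica average

# Relative efficiency of REMD over MD, per the paper's main result:
efficiency = n_remd / n_md
```

An efficiency above one indicates the extra replicas pay off, which per the abstract happens when the high-temperature replicas interconvert much faster than the temperature of interest.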
Nordgård, Oddmund; Kvaløy, Jan Terje; Farmen, Ragne Kristin; Heikkilä, Reino
2006-09-15
Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision. PMID:16899212
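A first-order error-propagation expression of the kind the study evaluates can be sketched as follows. This is an assumption-laden illustration, not the paper's exact formulas: it uses the common efficiency-calibrated ratio model R = E_tgt^(-dCt_tgt) / E_ref^(-dCt_ref), where dCt is the Ct difference between sample and calibrator:

```python
import math

# Hedged sketch (not the paper's exact expressions): first-order propagation
# of Ct-value variance into the variance of ln R for the ratio model
#   R = E_tgt ** (-dCt_tgt) / E_ref ** (-dCt_ref).
# Since ln R = -dCt_tgt * ln(E_tgt) + dCt_ref * ln(E_ref), the variances of
# the measured dCt values scale by the squared log-efficiencies.

def log_ratio_variance(E_tgt, var_dct_tgt, E_ref, var_dct_ref):
    return (math.log(E_tgt) ** 2) * var_dct_tgt + \
           (math.log(E_ref) ** 2) * var_dct_ref
```

For ideal efficiencies (E = 2) and a 0.2-cycle standard deviation on each dCt (variance 0.04), the log-ratio variance is 2 x (ln 2)^2 x 0.04, which shows directly how each additional normalization step adds its own variance term.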
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
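The core REPTool mechanism, Latin Hypercube Sampling of uncertain inputs pushed through a model to build an output distribution, can be sketched in a few lines. Function names here are illustrative, not REPTool's actual API:

```python
import random

# Hedged sketch of the REPTool idea (names are illustrative, not the tool's
# API): each uncertain input is sampled once per Latin Hypercube stratum,
# strata are shuffled independently per input, and the model is run on each
# joint sample to produce an output distribution.

def latin_hypercube(n, low, high, rng):
    width = (high - low) / n
    strata = [low + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(strata)
    return strata

def propagate(model, input_ranges, n, seed=0):
    rng = random.Random(seed)
    samples = [latin_hypercube(n, lo, hi, rng) for lo, hi in input_ranges]
    return [model(*vals) for vals in zip(*samples)]
```

For a toy two-input model such as `lambda a, b: a * b` with ranges (0.9, 1.1) and (4.5, 5.5), the output distribution clusters around 5, and its spread quantifies how input error propagates, which is the quantity REPTool reports per raster cell.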
Symbol error rate bound of DPSK modulation system in directional wave propagation
NASA Astrophysics Data System (ADS)
Hua, Jingyu; Zhuang, Changfei; Zhao, Xiaomin; Li, Gang; Meng, Qingmin
This paper presents a new approach to determine the symbol error rate (SER) bound of differential phase shift keying (DPSK) systems in a directional fading channel, where the von Mises distribution is used to illustrate the non-isotropic angle of arrival (AOA). Our approach relies on the closed-form expression of the phase difference probability density function (pdf) in coherent fading channels and leads to expressions of the DPSK SER bound involving a single finite-range integral which can be readily evaluated numerically. Moreover, the simulation yields results consistent with numerical computation.
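The numerical machinery the bound requires, a von Mises density (with the modified Bessel function I0 from its series) and a single finite-range integral, can be illustrated briefly. This sketch only checks that the density integrates to one; the actual SER integrand follows the paper:

```python
import math

# Hedged illustration: the von Mises density used for non-isotropic AOA,
#   p(theta) = exp(kappa * cos(theta - mu)) / (2 * pi * I0(kappa)),
# with I0 computed from its series expansion, and a finite-range integral
# evaluated by the trapezoidal rule -- the same kind of single finite-range
# integral the SER bound requires.

def bessel_i0(kappa, terms=30):
    return sum((kappa / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def von_mises_pdf(theta, mu, kappa):
    return math.exp(kappa * math.cos(theta - mu)) / (2 * math.pi * bessel_i0(kappa))

def trapezoid(f, a, b, n=2000):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))
```

Integrating the density over one full period [-pi, pi] returns 1 to high accuracy, confirming the quadrature is adequate before it is applied to the SER integrand.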
Propagation of radiation in fluctuating multiscale plasmas. II. Kinetic simulations
Pal Singh, Kunwar; Robinson, P. A.; Cairns, Iver H.; Tyshetskiy, Yu.
2012-11-15
A numerical algorithm is developed and tested that implements the kinetic treatment of electromagnetic radiation propagating through plasmas whose properties have small scale fluctuations, which was developed in a companion paper. This method incorporates the effects of refraction, damping, mode structure, and other aspects of large-scale propagation of electromagnetic waves on the distribution function of quanta in position and wave vector, with small-scale effects of nonuniformities, including scattering and mode conversion approximated as causing drift and diffusion in wave vector. Numerical solution of the kinetic equation yields the distribution function of radiation quanta in space, time, and wave vector. Simulations verify the convergence, accuracy, and speed of the methods used to treat each term in the equation. The simulations also illustrate the main physical effects and place the results in a form that can be used in future applications.
Myers, Casey A; Laz, Peter J; Shelburne, Kevin B; Davidson, Bradley S
2015-05-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5-95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
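The study's Monte Carlo bookkeeping, perturbing inputs, re-running a stage, and reporting 5-95% bounds on the outputs, follows a standard pattern that can be sketched generically. All names here are illustrative stand-ins for the OpenSim stages:

```python
import random

# Hedged sketch of Monte Carlo uncertainty bounds (illustrative names, not
# the study's code): perturb nominal inputs with Gaussian noise, run a
# model per trial, and report the 5th and 95th percentiles of the output.

def percentile(sorted_vals, p):
    idx = (len(sorted_vals) - 1) * p
    lo, hi = int(idx), min(int(idx) + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def monte_carlo_bounds(model, nominal_inputs, sd, trials=2000, seed=1):
    rng = random.Random(seed)
    outs = []
    for _ in range(trials):
        perturbed = [x + rng.gauss(0.0, sd) for x in nominal_inputs]
        outs.append(model(perturbed))
    outs.sort()
    return percentile(outs, 0.05), percentile(outs, 0.95)
```

Feeding the output distribution of one stage in as the input distribution of the next, as the study does, amounts to replacing the Gaussian perturbation with resampling from the previous stage's `outs`.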
Propagation of radar rainfall uncertainty in urban flood simulations
NASA Astrophysics Data System (ADS)
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated to radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated to radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3] and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of the uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern-Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure to purely stochastic fields. A
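The ensemble generator's core step, imposing a correlation structure on purely stochastic fields and adding the result to the unperturbed radar field, is commonly done with a Cholesky factor of the error covariance. This is a hedged sketch of that generic step, not the calibrated system described above:

```python
import random

# Hedged sketch of the perturbation step (details differ from the
# REAL-style system above): draw spatially correlated perturbations by
# applying a Cholesky factor of the error covariance to white noise, then
# add them to the unperturbed radar rainfall field.

def cholesky(cov):
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (cov[i][i] - s) ** 0.5
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def perturbed_field(radar_field, cov, rng):
    L = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in radar_field]
    noise = [sum(L[i][k] * z[k] for k in range(len(z))) for i in range(len(z))]
    return [r + e for r, e in zip(radar_field, noise)]
```

Each call to `perturbed_field` produces one ensemble member; repeating it with fresh noise yields the ensemble that is then fed through the sewer-flow model.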
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
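One of the listed building blocks, the convolutional encoder, is small enough to sketch. This is an illustrative rate-1/2, constraint-length-3 encoder with the classic generators 7 and 5 (octal), not necessarily the configuration CLEAN uses:

```python
# Hedged sketch of one CLEAN building block (not the simulator's actual
# code): a rate-1/2, constraint-length-3 convolutional encoder with the
# classic generator polynomials 7 (binary 111) and 5 (binary 101).

def conv_encode(bits):
    s1 = s2 = 0                    # two-stage shift-register state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)    # generator 111 (octal 7)
        out.append(b ^ s2)         # generator 101 (octal 5)
        s1, s2 = b, s1             # shift the register
    return out
```

The textbook example input 1 0 1 1 encodes to 11 10 00 01; a soft-decision Viterbi decoder, as listed in item (1), inverts this mapping by maximum-likelihood search over the register's trellis.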
A wideband propagation simulator for high speed mobile radio communications
NASA Astrophysics Data System (ADS)
Busson, P.; Lejannic, J. C.; Elzein, G.; Citerne, J.
1994-07-01
Multipath, jamming, interception, and detection are the main limitations for mobile radio communications. Spread-spectrum techniques, especially frequency hopping, can be used to mitigate these problems. A wideband simulation of multipath mobile channels therefore appeared to be the most appropriate evaluation technique; it also gives useful indications for improving system characteristics. This paper presents the design and realization of a new UHF-VHF propagation simulator, which can be considered an extended version of Bussgang's. This frequency-hopping simulator (up to 100,000 hops per second) is wideband and thus capable of handling spread-spectrum signals. Since it generates up to 16 paths, it can be used in almost all mobile radio propagation situations. Moreover, it can simulate high relative speeds of up to 2000 km/h, such as those encountered in air-to-air communication systems. The simulator can reproduce, in the laboratory, 16-ray Rician or Rayleigh fading channels with a maximum time delay of about 15 ms. At the highest frequency of 1200 MHz, Doppler rates up to 2 kHz can be generated, corresponding to vehicle speeds up to 2000 km/h. Note that the Bussgang simulator was defined for narrowband, fixed radio communications. In both designs, in-phase and quadrature signals are obtained using two digital transversal filters. Simulation results were derived in various situations, especially terrestrial urban and suburban environments, where they could be compared with measurements. The main advantage of the simulator lies in its capacity to simulate high-speed, wideband mobile radio communication channels.
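The underlying channel model, each path applying a delay and a complex gain to the baseband signal, is a tapped delay line. The following sketch uses illustrative parameters and scalar complex gains rather than the hardware's transversal-filter implementation:

```python
# Hedged sketch of the simulator's underlying channel model (parameters
# illustrative): a tapped delay line in which each of up to 16 paths applies
# an integer-sample delay and a complex (e.g. Rayleigh-faded) gain to the
# baseband samples, and the delayed copies are summed at the receiver.

def tapped_delay_line(samples, paths):
    """paths: list of (delay_in_samples, complex_gain) tuples."""
    out = [0j] * (len(samples) + max(d for d, _ in paths))
    for delay, gain in paths:
        for i, s in enumerate(samples):
            out[i + delay] += gain * s
    return out
```

A unit impulse through two paths, a direct path of gain 1 and a one-sample echo of gain 0.5j, comes out as [1, 0.5j, 0], which is exactly the channel's impulse response; time-varying (faded) gains per hop turn this into the Rician/Rayleigh behavior described above.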
An error model for GCM precipitation and temperature simulations
NASA Astrophysics Data System (ADS)
Sharma, A.; Woldemeskel, F.; Mehrotra, R.; Sivakumar, B.
2012-04-01
Water resources assessments for future climates require meaningful simulations of likely precipitation and evaporation for simulation of flow and derived quantities of interest. The current approach for making such assessments involve using simulations from one or a handful of General Circulation Models (GCMs), for usually one assumed future greenhouse gas emission scenario, deriving associated flows and the planning or design attributes required, and using these as the basis of any planning or design that is needed. An assumption that is implicit in this approach is that the single or multiple simulations being considered are representative of what is likely to occur in the future. Is this a reasonable assumption to make and use in designing future water resources infrastructure? Is the uncertainty in the simulations captured through this process a real reflection of the likely uncertainty, even though a handful of GCMs are considered? Can one, instead, develop a measure of this uncertainty for a given GCM simulation for all variables in space and time, and use this information as the basis of water resources planning (similar to using "input uncertainty" in rainfall-runoff modelling)? These are some of the questions we address in course of this presentation. We present here a new basis for assigning a measure of uncertainty to GCM simulations of precipitation and temperature. Unlike other alternatives which assess overall GCM uncertainty, our approach leads to a unique measure of uncertainty in the variable of interest for each simulated value in space and time. We refer to this as an error model of GCM precipitation and temperature simulations, to allow a complete assessment of the merits or demerits associated with future infrastructure options being considered, or mitigation plans being devised. The presented error model quantifies the error variance of GCM monthly precipitation and temperature, and reports it as the Square Root Error Variance (SREV
Energy Science and Technology Software Center (ESTSC)
1987-09-30
Version 00 The REFERDOU system can be used to calculate the response function of a NE-213 scintillation detector for energies up to 100 MeV, to interpolate and spread (Gaussian) the response function, and unfold the measured spectrum of neutrons while propagating errors from the response functions to the unfolded spectrum.
Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices
Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee
2015-01-01
In transmitting video data securely over Video Sensor Networks (VSNs), since mobile handheld devices have limited resources in terms of processor clock speed and battery size, it is necessary to develop an efficient method to encrypt video data to meet the increasing demand for secure connections. Selective encryption methods can reduce the amount of computation needed while satisfying high-level security requirements. This is achieved by selecting an important part of the video data and encrypting it. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed new selective encryption method exploits the error propagation property in an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame encryption, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068
Correction of Discretization Errors Simulated at Supply Wells.
MacMillan, Gordon J; Schumacher, Jens
2015-01-01
Many hydrogeology problems require predictions of hydraulic heads in a supply well. In most cases, the regional hydraulic response to groundwater withdrawal is best approximated using a numerical model; however, simulated hydraulic heads at supply wells are subject to errors associated with model discretization and well loss. An approach for correcting the simulated head at a pumping node is described here. The approach corrects for errors associated with model discretization and can incorporate the user's knowledge of well loss. The approach is model independent, can be applied to finite difference or finite element models, and allows the numerical model to remain somewhat coarsely discretized and therefore numerically efficient. Because the correction is implemented external to the numerical model, one important benefit of this approach is that a response matrix, reduced model approach can be supported even when nonlinear well loss is considered. PMID:25142180
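One standard way to make such a correction, not necessarily the paper's model-independent formulation, is the Peaceman equivalent-radius approach for a square finite-difference cell, with an optional user-supplied well-loss term appended:

```python
import math

# Hedged sketch: a Peaceman-style discretization correction for a pumping
# node in a square finite-difference cell. The paper's approach is model
# independent and may differ; this is the classic approximation.

def corrected_well_head(h_node, Q, T, dx, r_well, well_loss=0.0):
    # Peaceman equivalent well-block radius (~0.2 * dx for a square cell;
    # published values range roughly 0.198-0.208 dx depending on derivation).
    r_eq = 0.2 * dx
    # Thiem-type correction from the node (representing radius r_eq) down to
    # the actual well radius; Q > 0 for pumping, so head declines at the well.
    return h_node - Q / (2 * math.pi * T) * math.log(r_eq / r_well) - well_loss
```

With no pumping the correction vanishes, and for Q > 0 the corrected well head is below the simulated node head, as expected. Because the correction sits outside the numerical model, it preserves the response-matrix workflow the paper highlights.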
Starlight emergence angle error analysis of star simulator
NASA Astrophysics Data System (ADS)
Zhang, Jian; Zhang, Guo-yu
2015-10-01
As the key technologies of star sensors continue to develop, the precision of star simulators must be improved further, since it directly determines the accuracy of star-sensor laboratory calibration. To raise the accuracy of the star simulator, a theoretical accuracy-analysis model is needed, and one can be established from the simulator's ideal imaging model. Analysis of this model shows that the deviation of the starlight emergent angle is primarily affected by deviations in star position, principal-point position, focal length, distortion, and object-plane tilt. From these factors, a comprehensive deviation model is established, and formulas are derived both for each individual deviation model and for the comprehensive model. Analyzing the properties of the individual models and the comprehensive formula reveals the characteristics of each factor and the weight relationship among them. Based on these results, reasonable design indexes can be specified that account for the requirements of the star simulator's optical system and the achievable precision of machining and alignment. This error analysis of the starlight emergence angle thus guides the determination and demonstration of star-simulator indexes and the analysis and compensation of simulator errors, improving simulator accuracy and providing a theoretical basis for further increasing the starlight-angle precision of the star simulator.
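If the individual deviation contributions are treated as independent, a comprehensive deviation model of the kind described combines them in quadrature. This root-sum-square sketch is a common convention; the paper's exact weighting may differ:

```python
import math

# Hedged sketch of a comprehensive deviation model: independent error
# contributions (star position, principal point, focal length, distortion,
# object-plane tilt) combine in quadrature into an overall emergent-angle
# deviation. The paper's exact weighting may differ.

def combined_deviation(contributions):
    return math.sqrt(sum(c * c for c in contributions))
```

The quadrature sum makes the weight relationship explicit: a single dominant term, say a 4-unit star-position deviation against a 3-unit distortion deviation, sets most of the 5-unit total, so design indexes should be tightened on the dominant factor first.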
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran (a C++ version of this program is available in the Library as AEGQ_v1_0)
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round
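The probabilistic idea underlying CADNA (the CESTAC method) can be illustrated outside Fortran: run the same computation a few times with tiny random perturbations standing in for rounding, and estimate the number of significant digits from the spread. This sketch mimics the idea only and is not CADNA's stochastic-type API:

```python
import math
import random

# Hedged sketch of the CESTAC idea behind CADNA (not CADNA's API): the
# computation receives a "rounding" function r that multiplicatively
# perturbs each value at the level of machine epsilon; the spread of the
# results across runs estimates the significant decimal digits.

def significant_digits(computation, n_runs=3, eps=1e-15, seed=0):
    rng = random.Random(seed)
    results = [computation(lambda x: x * (1 + rng.uniform(-eps, eps)))
               for _ in range(n_runs)]
    mean = sum(results) / n_runs
    spread = max(results) - min(results)
    if spread == 0 or mean == 0:
        return 15                      # no observed loss at double precision
    return max(0, math.floor(math.log10(abs(mean) / spread)))
```

A numerically stable computation such as `lambda r: r(2.0) * r(3.0)` retains essentially full double precision, whereas a catastrophic cancellation would drive the estimate toward zero significant digits, which is the condition CADNA is designed to flag.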
Numerical simulation of shock wave propagation in flows
NASA Astrophysics Data System (ADS)
Rénier, Mathieu; Marchiano, Régis; Gaudard, Eric; Gallin, Louis-Jonardan; Coulouvrat, François
2012-09-01
Acoustical shock waves propagate through flows in many situations. The sonic boom produced by a supersonic aircraft influenced by winds, or the so-called Buzz-Saw-Noise produced by turbo-engine fan blades when rotating at supersonic speeds, are two examples of such a phenomenon. In this work, an original method called FLHOWARD, acronym for FLow and Heterogeneous One-Way Approximation for Resolution of Diffraction, is presented. It relies on a scalar nonlinear wave equation, which takes into account propagation in a privileged direction (one-way approach), with diffraction, flow, heterogeneous and nonlinear effects. Theoretical comparison of the dispersion relations between that equation and parabolic equations (standard or wide angle) shows that this approach is more precise than the parabolic approach because there are no restrictions about the angle of propagation. A numerical procedure based on the standard split-step technique is used. It consists in splitting the nonlinear wave equation into simpler equations. Each of these equations is solved thanks to an analytical solution when it is possible, and a finite differences scheme in other cases. The advancement along the propagation direction is done with an implicit scheme. The validity of that numerical procedure is assessed by comparisons with analytical solutions of the Lilley's equation in waveguides for uniform or shear flows in linear regime. Attention is paid to the advantages and drawbacks of that method. Finally, the numerical code is used to simulate the propagation of sonic boom through a piece of atmosphere with flows and heterogeneities. The effects of the various parameters are analysed.
Hybrid simulation of wave propagation in the Io plasma torus
NASA Astrophysics Data System (ADS)
Stauffer, B. H.; Delamere, P. A.; Damiano, P. A.
2015-12-01
The transmission of waves between Jupiter and Io is an excellent case study of magnetosphere/ionosphere (MI) coupling because the power generated by the interaction at Io and the auroral power emitted at Jupiter can be reasonably estimated. Wave formation begins with mass loading as Io passes through the plasma torus. A ring beam distribution of pickup ions and perturbation of the local flow by the conducting satellite generate electromagnetic ion cyclotron waves and Alfven waves. We investigate wave propagation through the torus and to higher latitudes using a hybrid plasma simulation with a physically realistic density gradient, assessing the transmission of Poynting flux and wave dispersion. We also analyze the propagation of kinetic Alfven waves through a density gradient in two dimensions.
Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation
NASA Astrophysics Data System (ADS)
Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla
2014-07-01
Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method that was programmed by means of a MATLAB code, called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society of Rock Mechanics (ISRM) scanline field mapping methodology. Then, it evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass using a complex recursive method for evaluating the transmission and reflection coefficient for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed characterizing the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in estimating rapidly the Young's modulus of equivalent medium for wave propagation analysis.
Time-dependent simulations of filament propagation in photoconducting switches
Rambo, P.W.; Lawson, W.S.; Capps, C.D.; Falk, R.A.
1994-05-01
The authors present a model for investigating filamentary structures observed in laser-triggered photoswitches. The model simulates electrons and holes in two-dimensional cylindrical (r-z) geometry, with realistic electron and hole mobilities and field-dependent impact ionization. Because of the large range of spatial and temporal scales to be resolved, an explicit approach with fast, direct solution of the field equation is used. A flux-limiting scheme avoids the time-step constraint imposed by the short resistive-relaxation time in the high-density filament. Self-consistent filament propagation at speeds greater than the carrier drift velocity is observed, in agreement with experiments.
Shock Propagation in Dusty Plasmas by MD Simulations
NASA Astrophysics Data System (ADS)
Marciante, Mathieu; Murillo, Michael
2014-10-01
The study of shock propagation has become a common way to obtain statistical information about a medium, since properties of the undisturbed medium can be related to the shock dynamics through the Rankine-Hugoniot (R-H) relations. However, theoretical investigations of shock dynamics are often carried out with idealized fluid models, which largely neglect kinetic properties of the medium's constituents. Motivated by recent experimental results, we use molecular dynamics simulations to study the propagation of shocks in 2D dusty plasmas, focusing on kinetic aspects of the plasma such as viscosity effects. The study proceeds along two lines. In the first, the shock wave is generated by an external electric field acting on the dust particles, producing a shock wave as obtained in a laboratory experiment. In the second, the shock wave is generated by displacing a two-dimensional piston at constant velocity, which yields a steady-state shock wave. Experiment-like shock waves propagate in a highly non-steady state, which calls for careful application of the R-H relations in the context of non-steady shocks. Steady-state shock waves show an oscillatory pattern attributed to the dominant dispersive effect of the dusty plasma.
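For reference, the R-H relations invoked above take a closed form for an ideal gas; the dusty-plasma corrections are exactly what the kinetic simulations probe. A minimal sketch of the ideal-gas jump conditions:

```python
# Hedged illustration of the Rankine-Hugoniot relations for an ideal gas
# (a dusty plasma needs kinetic corrections, which is the point of the
# study above): density and pressure jumps across a steady shock of Mach
# number M with adiabatic index gamma.

def rankine_hugoniot(M, gamma=5.0 / 3.0):
    density_ratio = (gamma + 1) * M * M / ((gamma - 1) * M * M + 2)
    pressure_ratio = (2 * gamma * M * M - (gamma - 1)) / (gamma + 1)
    return density_ratio, pressure_ratio
```

At M = 1 both ratios are 1 (no shock), and in the strong-shock limit the density ratio saturates at (gamma + 1)/(gamma - 1), i.e. 4 for a monatomic gas, which is the kind of fluid prediction the non-steady dusty-plasma shocks are tested against.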
Unraveling the uncertainty and error propagation in the vertical flux Martin curve
NASA Astrophysics Data System (ADS)
Olli, Kalle
2015-06-01
Analyzing the vertical particle flux and particle retention in the upper twilight zone has commonly been accomplished by fitting a power function to the data. Measuring the vertical particle flux in the upper twilight zone, where most of the re-mineralization occurs, is a complex endeavor. Here I use field data and simulations to show how uncertainty in the particle flux measurements propagates into the vertical flux attenuation model parameters. Further, I analyze how the number of sampling depths and variations in the vertical sampling locations influence the model performance and parameter stability. The arguments provide a simple framework for optimizing the sampling scheme when vertical flux attenuation profiles are measured in the field, either by using an array of sediment traps or 234Th methodology. A compromise between effort and quality of results is to sample at least six depths: the upper sampling depth as close to the base of the euphotic layer as feasible, the vertical sampling depths slightly aggregated toward the upper aphotic zone where most of the vertical flux attenuation takes place, and the lower end of the sampling range extended as deep as practicable in the twilight zone.
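The power function in question is the Martin curve, F(z) = F(z0) (z/z0)^(-b). A minimal Monte Carlo sketch of how measurement noise propagates into the fitted attenuation exponent b could look as follows; the depths, the canonical b ≈ 0.86, and the 20% lognormal error level are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.array([100.0, 150.0, 200.0, 300.0, 500.0, 800.0])  # sampling depths (m)
b_true, f100 = 0.86, 100.0  # canonical Martin exponent, reference flux at 100 m
flux = f100 * (z / 100.0) ** (-b_true)

# Monte Carlo: 20% lognormal measurement error propagated into the fitted exponent
b_fits = []
for _ in range(2000):
    noisy = flux * rng.lognormal(0.0, 0.2, size=z.size)
    slope, _ = np.polyfit(np.log(z / 100.0), np.log(noisy), 1)  # log-log fit
    b_fits.append(-slope)
b_fits = np.array(b_fits)
print(b_fits.mean(), b_fits.std())
```

With only six depths, a 20% flux error already spreads the fitted exponent by roughly ±0.1, which illustrates why the number and placement of sampling depths matter for parameter stability.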
A simulation of high energy cosmic ray propagation 1
NASA Technical Reports Server (NTRS)
Honda, M.; Kifune, T.; Matsubara, Y.; Mori, M.; Nishijima, K.; Teshima, M.
1985-01-01
High-energy cosmic ray propagation in the energy region 10^14.5-10^18 eV is simulated in the interstellar medium. In conclusion, the diffusion process caused by turbulent magnetic fields is classified into several regimes according to the ratio of the gyro-radius to the scale of the turbulence. When the ratio is larger than 10^-0.5, the analysis under the assumption of point scattering can be applied, with a mean free path proportional to E^2. However, when the ratio is smaller than 10^-0.5, a more complicated analysis or simulation is needed. Assuming the turbulence scale of the Galactic magnetic field is 10-30 pc and the mean magnetic field strength is 3 microgauss, the energy of a cosmic ray with that gyro-radius is about 10^16.5 eV.
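The quoted energy can be checked from the relativistic Larmor radius, r_g = E/(ZeB), which in mixed units reads E[eV] ≈ 300 Z B[gauss] r_g[cm]. A small sketch for a proton (Z = 1) with the field strength and turbulence scales quoted in the abstract:

```python
import numpy as np

# Relativistic Larmor radius r_g = E / (Z e B), i.e. E[eV] ~ 300 * Z * B[G] * r_g[cm]
PC_IN_CM = 3.086e18  # one parsec in centimeters

def energy_at_gyroradius(r_pc, b_gauss=3e-6, z_charge=1):
    """Energy (eV) of a particle whose gyro-radius equals r_pc parsecs."""
    return 300.0 * z_charge * b_gauss * r_pc * PC_IN_CM

for r in (10.0, 30.0):
    e = energy_at_gyroradius(r)
    print(f"r_g = {r:>4} pc  ->  E = 10^{np.log10(e):.1f} eV")
```

For r_g between 10 and 30 pc this gives E ≈ 10^16.4-10^16.9 eV, consistent with the abstract's estimate of about 10^16.5 eV for the transition between the diffusion regimes.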
Seismic Wave Propagation Simulation using Circular Hough Transform
NASA Astrophysics Data System (ADS)
Miah, K.; Potter, D. K.
2012-12-01
Synthetic data generation by numerically solving a two-way wave equation is an essential part of seismic tomography, especially in full-waveform inversion. Finite-difference and finite-element methods are the two common approaches to seismic wave propagation modeling in heterogeneous media. Either a time- or frequency-domain representation of the wave equation is used for these simulations. Hanahara and Hiyane [1] proposed and implemented a circle-detection algorithm based on the Circular Hough transform (CHT) to numerically solve a two-dimensional wave equation. The Hough transform is generally used in image processing applications to identify objects of various shapes in an image [2]. In this abstract, we use the Circular Hough transform to numerically solve an acoustic wave equation, with the purpose of identifying and locating primaries and multiples in the transform domain. Relationships between different seismic events and the CHT parameters are also investigated. [1] Hanahara, K. and Hiyane, M., A Circle-Detection Algorithm Simulating Wave Propagation, Machine Vision and Applications, vol. 3, pp. 97-111, 1990. [2] Petcher, P. A. and Dixon, S., A modified Hough transform for removal of direct and reflected surface waves from B-scans, NDT & E International, vol. 44, no. 2, pp. 139-144, 2011.
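For readers unfamiliar with the CHT, a minimal detection-mode example (not the wave-propagation solver of Hanahara and Hiyane) accumulates votes for circle centers at a known radius; the synthetic edge points and grid size below are illustrative:

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Vote for circle centers at a fixed radius (minimal numpy-only CHT)."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for (y, x) in edge_points:
        # every edge point votes along a circle of candidate centers
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# synthetic edge image: a circle of radius 10 centered at (25, 25)
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
pts = [(25 + 10 * np.sin(a), 25 + 10 * np.cos(a)) for a in t]
acc = hough_circle(pts, 10.0, (50, 50))
print(np.unravel_index(acc.argmax(), acc.shape))  # → (25, 25)
```

The voting circles drawn around the edge points all intersect at the true center, which therefore appears as the accumulator peak; this peak-in-transform-domain behavior is what makes the CHT attractive for isolating coherent events.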
Numerical simulation of premixed flame propagation in a closed tube
NASA Astrophysics Data System (ADS)
Kuzuu, Kazuto; Ishii, Katsuya; Kuwahara, Kunio
1996-08-01
Premixed flame propagation of a methane-air mixture in a closed tube is estimated through a direct numerical simulation of the three-dimensional unsteady Navier-Stokes equations coupled with chemical reaction. In order to deal with a combusting flow, an extended version of the MAC method, which can be applied to a compressible flow with strong density variation, is employed as the numerical method. The chemical reaction is assumed to be an irreversible single-step reaction between methane and oxygen. The chemical species are CH4, O2, N2, CO2, and H2O. In this simulation, we reproduce the formation of a tulip flame in a closed tube during the flame propagation. Furthermore, we estimate not only the two-dimensional shape but also the three-dimensional structure of the flame and flame-induced vortices, which cannot be observed in the experiments. The agreement between the calculated results and the experimental data is satisfactory, and we compare the phenomenon near the side wall with the one in the corner of the tube.
Simulation of 3D Seismic Wave Propagation with Volcano Topography
NASA Astrophysics Data System (ADS)
Ripperger, J.; Igel, H.; Wassermann, J.
2001-12-01
We investigate the possibilities of using three-dimensional finite difference (FD) methods for numerical simulation of the seismic wave field at active volcanoes. We put special emphasis on the implementation of the boundary conditions for free surface topography. We compare two different approaches to solving the free surface boundary conditions. The algorithms are implemented on parallel hardware and have been tested for correctness and stability. We apply them to smooth artificial topographies and to the real topography of Mount Merapi, Indonesia. We conclude that grid-stretching methods (e.g. Hestholm & Ruud, 1994) are not well suited for realistic volcano topography, as they tend to become unstable for large topographic gradients. The representation of topography through staircase-shaped grids (Ohminato & Chouet, 1997) results in stable calculations, while demanding very fine gridding. The simulations show the effects of a three-dimensional surface topography on elastic wave propagation. Ground motion at the surface is severely affected by topography. If neglected, this may jeopardize attempts to determine source location by analyzing particle motion. Numerical studies like this can help to understand wave propagation phenomena observed in field recordings in volcano seismology. Future studies will aim at separating the wave effects of internal scattering, topography and sources (tremors, tectonic events, pyroclastic flows).
NASA Astrophysics Data System (ADS)
Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose
2015-09-01
The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels, and the bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the characterized configuration differs from the instrument configuration in flight given the harsh space environment and harmful launch phase. The retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and incorrect characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and are propagated to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of the ISRF knowledge error and spectral calibration on Level-1 products and its propagation to Level-2 retrieved chlorophyll fluorescence has been analyzed. A spectral recalibration scheme has been implemented at Level-2, reducing the errors in Level-1 products so that the error in retrieved fluorescence within the oxygen absorption bands stays below 10%, enhancing the quality of the retrieved products. The work presented here shows how the minimization of spectral calibration errors requires effort both in the laboratory characterization and in the implementation of specific algorithms at Level-2.
Monte Carlo simulation of light propagation in the adult brain
NASA Astrophysics Data System (ADS)
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter
2004-06-01
When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) exhibiting a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the material light absorption and dispersion coefficient for each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extra-cerebral contamination are included.
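The underlying Monte Carlo machinery can be sketched in a few lines: photons take exponentially distributed steps, lose weight to absorption, and rescatter isotropically. This toy slab model (made-up optical coefficients, no anisotropy factor, no voxelized anatomy) only illustrates how path-length statistics such as the DPF arise; it is not the authors' MRI-based simulation:

```python
import numpy as np

def mc_photon_paths(n_photons=5000, mu_a=0.1, mu_s=10.0, slab=20.0, seed=0):
    """Isotropic-scattering Monte Carlo in a homogeneous slab (mm units);
    returns the mean total path length of photons re-emitted at the surface."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    paths = []
    for _ in range(n_photons):
        z, uz, path, w = 0.0, 1.0, 0.0, 1.0
        while True:
            step = -np.log(rng.random()) / mu_t  # exponential free path
            z += uz * step
            path += step
            if z < 0.0:                # escaped back through the surface
                paths.append(path)
                break
            if z > slab or w < 1e-4:   # transmitted or effectively absorbed
                break
            w *= albedo                # absorption handled as weight decay
            uz = 2.0 * rng.random() - 1.0  # isotropic rescattering (z-cosine)
    return float(np.mean(paths))

print(mc_photon_paths())
```

In the real simulation each MRI voxel carries its own absorption and dispersion coefficients, and the photon probability distribution is additionally weighted by the relative blood volume of the tissue it traverses.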
Supporting the Development of Soft-Error Resilient Message Passing Applications using Simulation
Engelmann, Christian; Naughton III, Thomas J
2016-01-01
Radiation-induced bit flip faults are of particular concern in extreme-scale high-performance computing systems. This paper presents a simulation-based tool that enables the development of soft-error resilient message passing applications by permitting the investigation of their correctness and performance under various fault conditions. The documented extensions to the Extreme-scale Simulator (xSim) enable the injection of bit flip faults at specific injection location(s) and fault activation time(s), while supporting a significant degree of configurability of the fault type. Experiments show that the simulation overhead with the new feature is ~2,325% for serial execution and ~1,730% at 128 MPI processes, both with very fine-grain fault injection. Fault injection experiments demonstrate the usefulness of the new feature by injecting bit flips in the input and output matrices of a matrix-matrix multiply application, revealing vulnerability of data structures, masking, and error propagation. xSim is the first simulation-based MPI performance tool that supports both the injection of process failures and bit flip faults.
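The matrix-multiply experiment can be mimicked in a few lines of numpy: flip one bit of one input element and observe how far the corruption propagates through the product. The 4 x 4 size and the choice of exponent bit are arbitrary illustrations, not xSim's injection mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
C_clean = A @ B

A_fault = A.copy()
bits = A_fault.view(np.uint64)            # reinterpret float64 storage as bits
bits[0, 0] ^= np.uint64(1) << np.uint64(52)  # flip the lowest exponent bit of A[0,0]
C_fault = A_fault @ B

# the corruption is confined to the row that used the flipped element
err_rows = np.any(C_clean != C_fault, axis=1)
print(err_rows)
```

A flip confined to one element of A corrupts exactly one row of the product (a flip in B would corrupt a column), while low-order mantissa flips perturb results only slightly and can be masked downstream; this is the kind of data-structure vulnerability and masking analysis the abstract describes.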
Ren, Shenghan; Chen, Xueli; Wang, Hailong; Qu, Xiaochao; Wang, Ge; Liang, Jimin; Tian, Jie
2013-01-01
The study of light propagation in turbid media has attracted extensive attention in the field of biomedical optical molecular imaging. In this paper, we present a software platform for the simulation of light propagation in turbid media named the “Molecular Optical Simulation Environment (MOSE)”. Based on the gold standard of the Monte Carlo method, MOSE simulates light propagation both in tissues with complicated structures and through free-space. In particular, MOSE synthesizes realistic data for bioluminescence tomography (BLT), fluorescence molecular tomography (FMT), and diffuse optical tomography (DOT). The user-friendly interface and powerful visualization tools facilitate data analysis and system evaluation. As a major measure for resource sharing and reproducible research, MOSE aims to provide freeware for research and educational institutions, which can be downloaded at http://www.mosetm.net. PMID:23577215
Simulation of intense microwave pulse propagation in air breakdown environment
NASA Technical Reports Server (NTRS)
Kuo, S. P.; Zhang, Y. S.
1991-01-01
An experiment is conducted to examine the tail erosion phenomenon which occurs to an intense microwave pulse propagating in an air breakdown environment. In the experiment, a 1 MW microwave pulse (1.1 microsec) is transmitted through a large plexiglas chamber filled with dry air at about 1-2 torr pressure. Two different degrees of tail erosion caused by two different mechanisms are identified. This experimental effort leads to an understanding of the fundamental behavior of tail erosion and provides a database for validating the theoretical model. A theoretical model based on two coupled partial differential equations is established to describe the propagation of an intense microwave pulse in an air breakdown environment. One is derived from the Poynting theorem, and the other is the rate equation of electron density. A semi-empirical formula for the ionization frequency is adopted for this model. A transformation of these two equations to the local time frame of reference is introduced so that they can be solved numerically with considerably reduced computation time. This model is tested by using it to perform a computer simulation of the experiment. The numerical results are shown to agree well with the experimental results.
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future, and some current, missions use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors, showing that the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would
Clark, E.L.
1993-08-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
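As an illustration of the Taylor-series model, consider the isentropic pressure ratio R = p/p0 = (1 + (γ-1)M²/2)^(-γ/(γ-1)), a standard ratio of the kind tabulated in such reports; here the sensitivity coefficient is evaluated by central differences rather than the report's closed-form derivatives:

```python
def pressure_ratio(mach, gamma=1.4):
    """Isentropic static-to-total pressure ratio p/p0 as a function of M."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (-gamma / (gamma - 1.0))

def sensitivity(f, m, h=1e-6):
    """Absolute sensitivity coefficient dR/dM by central differences."""
    return (f(m + h) - f(m - h)) / (2.0 * h)

m_inf, sigma_m = 2.0, 0.01          # free-stream Mach number and its uncertainty
r = pressure_ratio(m_inf)
dr_dm = sensitivity(pressure_ratio, m_inf)
rel_sens = (m_inf / r) * dr_dm      # relative sensitivity (M/R)(dR/dM)
sigma_r = abs(dr_dm) * sigma_m      # first-order (Taylor series) propagated error
print(r, rel_sens, sigma_r)
```

At M∞ = 2 the relative sensitivity is -γM²/(1 + (γ-1)M²/2) ≈ -3.11, so a 0.5% error in Mach number produces roughly a 1.6% error in the pressure ratio.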
Multiscale simulation of 2D elastic wave propagation
NASA Astrophysics Data System (ADS)
Zhang, Wensheng; Zheng, Hui
2016-06-01
In this paper, we develop a multiscale method for the simulation of elastic wave propagation. Based on the first-order velocity-stress hyperbolic form of the 2D elastic wave equation, the particle velocities are solved first on a coarse grid by the finite volume method. Then the stress tensor is solved by using multiscale basis functions which can represent the fine-scale variation of the wavefield on the coarse grid. The basis functions are computed by solving a local problem with the finite element method. The theoretical formulae and a description of the multiscale method for the elastic wave equation are given in detail. Numerical computations for an inhomogeneous model with random scatterers are completed. The results show the effectiveness of the multiscale method.
Simulation of Crack Propagation in Metal Powder Compaction
NASA Astrophysics Data System (ADS)
Tahir, S. M.; Ariffin, A. K.
2006-08-01
This paper presents a fracture criterion for metal powder compacts and a simulation of crack initiation and propagation during the cold compaction process. Based on the fracture criterion of rock in compression, a displacement-based finite element model has been developed to analyze fracture initiation and crack growth in iron powder compacts. An estimation of the fracture toughness variation with relative density is established in order to provide the fracture parameter as compaction proceeds. A finite element model with an adaptive remeshing technique is used to accommodate changes in geometry during the compaction and fracture process. Friction between crack faces is modelled using six-node isoparametric interface elements. The shear stress and relative density distributions of the iron compact with predicted crack growth are presented, and the effects of different loading conditions are shown for comparison purposes.
Simulations of ultra-high-energy cosmic rays propagation
Kalashev, O. E.; Kido, E.
2015-05-15
We compare two techniques for simulation of the propagation of ultra-high-energy cosmic rays (UHECR) in intergalactic space: the Monte Carlo approach and a method based on solving transport equations in one dimension. For the former, we adopt the publicly available tool CRPropa, and for the latter, we use the code TransportCR, which has been developed by the first author, used in a number of applications, and made available online with the publication of this paper. While the CRPropa code is more universal, the transport equation solver has the advantage of a roughly 100 times higher calculation speed. We conclude that the methods give practically identical results for proton or neutron primaries if some accuracy improvements are introduced to the CRPropa code.
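The kind of cross-check performed here can be illustrated with a toy attenuation problem in which the one-dimensional "transport" solution is analytic; the interaction length, depth, and sample size below are arbitrary and unrelated to the actual UHECR interaction physics:

```python
import numpy as np

# Toy cross-check: Monte Carlo survival through a medium with interaction
# length lam versus the deterministic (analytic exponential) solution.
rng = np.random.default_rng(2)
lam, depth, n = 50.0, 120.0, 200000   # arbitrary units and sample size

free_paths = rng.exponential(lam, size=n)   # distance to first interaction
mc_survival = np.mean(free_paths > depth)   # Monte Carlo estimate
transport_survival = np.exp(-depth / lam)   # deterministic solution

print(mc_survival, transport_survival)
```

Monte Carlo codes like CRPropa and deterministic solvers like TransportCR should agree in exactly this statistical sense, with the deterministic route avoiding the sampling noise entirely, which is one source of its large speed advantage.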
Simulation of seismic wave propagation for reconnaissance in machined tunnelling
NASA Astrophysics Data System (ADS)
Lambrecht, L.; Friederich, W.
2012-04-01
During machined tunnelling, there is a complex chain of interactions among the components involved. For example, on the one hand the machine influences the surrounding ground during excavation; on the other hand, supporting measures acting on the ground are needed. Furthermore, differing soil conditions influence the wear of tools, the speed of the excavation and the safety of the construction site. In order to get information about the ground along the tunnel track, one can use seismic imaging. To get a better understanding of seismic wave propagation in a tunnel environment, we perform numerical simulations. For that, we use the spectral element method (SEM) and the nodal discontinuous Galerkin method (NDG). In both methods, elements are the basis for discretizing the domain of interest to perform high-order elastodynamic simulations. The SEM is a fast and widely used method, but its biggest drawback is its limitation to hexahedral elements. For complex heterogeneous models with a tunnel included, it is a better choice to use the NDG, which needs more computation time but can be adapted to tetrahedral elements. Using this technique, we can perform high-resolution simulations of waves initialized by a single force acting either on the front face or the side face of the tunnel. The aim is to produce waves that travel mainly in the direction of the tunnel track and to get as much information as possible from the backscattered part of the wave field.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula provides a new protocol family, the private father protocol, under the resource inequality framework, which includes private classical communication without assisting secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel
NASA Astrophysics Data System (ADS)
Privé, N. C.; Errico, R. M.; Tai, K.-S.
2013-06-01
The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.
Evaluation of color error and noise on simulated images
NASA Astrophysics Data System (ADS)
Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle
2010-01-01
The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted Quantum Efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation and signal-to-noise ratio (SNR). After this color correction optimization step, a Graphical User Interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known, a multi-spectral image described by the reflectance spectrum of each pixel, or an image taken at high light level. A validation of the results has been performed with STMicroelectronics sensors under development. Finally, we present two applications: one based on the trade-off between color saturation and noise obtained by optimizing the CCM, and the other based on demosaicking SNR trade-offs.
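The CCM optimization step can be sketched as a least-squares fit mapping raw sensor responses to reference patch colors. The 3x3 matrix and patch values below are made up, and real pipelines add constraints this sketch omits, such as white-point preservation and noise-aware weighting:

```python
import numpy as np

# Fit a 3x3 color correction matrix by least squares: given raw sensor
# responses and target (reference) colors for a set of patches, solve
# raw @ CCM ~= target. The "true" matrix and patches are illustrative.
rng = np.random.default_rng(3)
true_ccm = np.array([[ 1.6, -0.4, -0.2],
                     [-0.3,  1.5, -0.2],
                     [-0.1, -0.5,  1.6]])
target = rng.random((24, 3))                    # 24 reference patches (RGB)
raw = target @ np.linalg.inv(true_ccm)          # simulated raw responses
raw += 0.001 * rng.standard_normal(raw.shape)   # small sensor noise

ccm, *_ = np.linalg.lstsq(raw, target, rcond=None)
print(np.round(ccm, 2))
```

The trade-off the paper studies appears here directly: a CCM with large off-diagonal terms improves color saturation but amplifies sensor noise, lowering SNR, so the optimization must balance the two.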
NASA Astrophysics Data System (ADS)
Pasternack, Gregory B.; Gilbert, Andrew T.; Wheaton, Joseph M.; Buckland, Evan M.
2006-08-01
Resource managers, scientists, government regulators, and stakeholders are considering sophisticated numerical models for managing complex environmental problems. In this study, observations from a river-rehabilitation experiment involving gravel augmentation and spawning habitat enhancement were used to assess sources and magnitudes of error in depth, velocity, and shear velocity predictions made at the 1-m scale with a commercial two-dimensional (depth-averaged) model. Error in 2D model depth prediction averaged 21%. This error was attributable to topographic survey resolution, which, at 1 point per 1.14 m2, was inadequate to resolve small humps and depressions influencing point measurements. Error in 2D model velocity prediction averaged 29%. More than half of this error was attributable to depth prediction error. Despite depth and velocity error, 56% of tested 2D model predictions of shear velocity were within the 95% confidence limit of the best field-based estimation method. Ninety percent of the error in shear velocity prediction was explained by velocity prediction error. Multiple field-based estimates of shear velocity differed by up to 160%, so the lower error for the 2D model's predictions suggests such models are at least as accurate as field measurement. 2D models enable detailed, spatially distributed estimates compared to the small number measurable in a field campaign of comparable cost. They also can be used for design evaluation. Although such numerical models are limited to channel types adhering to model assumptions and yield predictions only accurate to ~20-30%, they can provide a useful tool for river-rehabilitation design and assessment, including spatially diverse habitat heterogeneity, as well as for pre- and post-project appraisal.
Comparison of Tropospheric Signal Delay Models for GNSS Error Simulation
NASA Astrophysics Data System (ADS)
Kim, Hye-In; Ha, Jihyun; Park, Kwan-Dong; Lee, Sanguk; Kim, Jaehoon
2009-06-01
As one of the GNSS error simulation case studies, we computed tropospheric signal delays based on three well-known models (Hopfield, Modified Hopfield and Saastamoinen) and a simple model. In the computation, default meteorological values were used. The result was compared with the GIPSY result, which we assumed to be the truth. The RMS of the simple model with the Marini mapping function was the largest, 31.0 cm. For the other models, the average RMS was 5.2 cm. In addition, to quantify the influence of the accuracy of meteorological information on the signal delay, we performed a sensitivity analysis with respect to pressure and temperature. As a result, none of the models used in this study were very sensitive to pressure variations. Also, no model except the modified Hopfield model was sensitive to temperature variations.
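Of the models compared, the Saastamoinen zenith hydrostatic delay has a particularly compact closed form, ZHD = 0.0022768 P / (1 - 0.00266 cos 2φ - 0.28e-6 h) meters, with P in hPa and h in meters. A quick sketch with a default sea-level pressure (the station coordinates are arbitrary):

```python
import numpy as np

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay in meters."""
    denom = 1.0 - 0.00266 * np.cos(2.0 * np.radians(lat_deg)) - 2.8e-7 * height_m
    return 0.0022768 * pressure_hpa / denom

zhd = saastamoinen_zhd(1013.25, 37.5, 100.0)
print(round(zhd, 3))  # ≈ 2.31 m
```

A typical sea-level pressure yields a zenith delay near 2.3 m, two orders of magnitude larger than the ~5 cm model-to-model RMS differences reported above, which is why both the model choice and the meteorological inputs matter for error simulation.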
Simulation of 3D Global Wave Propagation Through Geodynamic Models
NASA Astrophysics Data System (ADS)
Schuberth, B.; Piazzoni, A.; Bunge, H.; Igel, H.; Steinle-Neumann, G.
2005-12-01
This project aims at a better understanding of the forward problem of global 3D wave propagation. We use the spectral element program "SPECFEM3D" (Komatitsch and Tromp, 2002a,b) with varying input models of seismic velocities derived from mantle convection simulations (Bunge et al., 2002). The purpose of this approach is to obtain seismic velocity models independently of seismological studies. In this way one can test the effects of varying the parameters of the mantle convection models on the seismic wave field. In order to obtain the seismic velocities from the temperature field of the geodynamical simulations we follow a mineral physics approach. Assuming a certain mantle composition (e.g. pyrolite with CMASF composition) we compute the stable phases for each depth (i.e. pressure) and temperature by Gibbs free energy minimization of the system. Elastic moduli and density are calculated from the equations of state of the stable mineral phases. For this we use a mineral physics database derived from calorimetric experiments (enthalpy and entropy of formation, heat capacity) and EOS parameters.
Numerical Simulation of Time-Dependent Wave Propagation Using Nonreflective Boundary Conditions
NASA Astrophysics Data System (ADS)
Ionescu, D.; Muehlhaus, H.
2003-12-01
Solving the wave equation numerically for modelling wave propagation on an unbounded domain with complex geometry requires a truncation of the domain, to fit the infinite region on a finite computer. Minimizing the amount of spurious reflection requires in many cases the introduction of an artificial boundary and of associated nonreflecting boundary conditions. The question then arises of which boundary condition guarantees that the solution of the time-dependent problem inside the artificial boundary coincides with the solution of the original problem in the infinite region. Recent investigations have shown that the accuracy and performance of numerical algorithms, and the interpretation of the results, critically depend on the proper treatment of external boundaries. Despite the computational speed of finite difference schemes and the robustness of finite elements in handling complex geometries, the resulting numerical error consists of two independent contributions: the discretization error of the numerical method used and the spurious reflection generated at the artificial boundary. This spurious contribution travels back and substantially degrades the accuracy of the solution everywhere in the computational domain. Unless both error components are reduced systematically, the numerical solution does not converge to the solution of the original problem in the infinite region. In the present study we discuss absorbing boundary condition techniques for the time-dependent scalar wave equation in three spatial dimensions. In particular, exact conditions that annihilate wave harmonics on a spherical artificial boundary up to a given order are obtained and subsequently applied in numerical simulations via a finite difference implementation.
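The effect such conditions are designed to achieve can be illustrated in one dimension, where the simplest member of this family (a first-order, Mur-type absorbing boundary) already lets an outgoing pulse leave the grid; this is only an illustrative sketch, not the spherical-harmonic construction of the study.

```python
import numpy as np

def wave1d(nx=200, nt=400, courant=1.0, absorbing=True):
    """Leapfrog FD for u_tt = c^2 u_xx with a first-order (Mur-type)
    absorbing boundary at both ends; returns the final max |u|."""
    x = np.linspace(0.0, 1.0, nx)
    u0 = np.exp(-((x - 0.5) / 0.05) ** 2)   # Gaussian pulse, zero velocity
    C2 = courant ** 2
    # special first step for the zero-initial-velocity start
    u1 = u0.copy()
    u1[1:-1] = u0[1:-1] + 0.5 * C2 * (u0[2:] - 2 * u0[1:-1] + u0[:-2])
    for _ in range(nt):
        u2 = np.zeros_like(u0)
        u2[1:-1] = 2 * u1[1:-1] - u0[1:-1] + C2 * (u1[2:] - 2 * u1[1:-1] + u1[:-2])
        if absorbing:
            k = (courant - 1.0) / (courant + 1.0)
            u2[0] = u1[1] + k * (u2[1] - u1[0])      # outgoing at left end
            u2[-1] = u1[-2] + k * (u2[-2] - u1[-1])  # outgoing at right end
        u0, u1 = u1, u2                               # fixed (u=0) ends otherwise
    return np.max(np.abs(u1))

residual_absorbing = wave1d(absorbing=True)    # pulse leaves the domain
residual_reflecting = wave1d(absorbing=False)  # pulse bounces off fixed ends
```

With absorbing ends the residual field is essentially zero, while the Dirichlet run keeps the full pulse energy inside the domain, which is exactly the spurious contribution described above.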
NASA Astrophysics Data System (ADS)
Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan
2015-10-01
The propagation simulation method and the choice of mesh grid are both very important for obtaining correct results in wave-optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but can be chosen freely. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh-choosing method based on wave characteristics is proposed for use with the introduced propagation method. With it, appropriate mesh grids on the target board can be calculated to obtain satisfactory results, and for a complex initial wave field, or for propagation through inhomogeneous media, the mesh grid can likewise be set rationally. Finally, comparison with theoretical results shows that simulations using the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel number conditions. That is, the method can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity, providing better support for wave propagation applications such as atmospheric optics and laser propagation.
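The traditional angular spectrum method that the proposed technique generalizes can be sketched in a few lines (1-D, fixed grid; the beam parameters are illustrative):

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a 1-D complex field u0 a distance z via the angular spectrum:
    FFT to plane waves, multiply by exp(i*kz*z), inverse FFT."""
    n = u0.size
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)                              # spatial frequencies
    kz = np.sqrt((k**2 - (2 * np.pi * fx)**2).astype(complex))  # evanescent -> imaginary
    return np.fft.ifft(np.fft.fft(u0) * np.exp(1j * kz * z))

wavelength = 633e-9
dx = 5e-6
x = (np.arange(1024) - 512) * dx
u0 = np.exp(-(x / 100e-6) ** 2)              # Gaussian beam, 100 um waist
u1 = angular_spectrum(u0, wavelength, dx, z=0.05)
```

Note that the output samples land on the same grid as the input; the method in the abstract removes exactly this restriction by allowing the target-board mesh to differ from the source mesh.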
NASA Astrophysics Data System (ADS)
Straatsma, Menno
2010-05-01
Accurate water level prediction for the design discharge of large rivers is of main importance for the flood safety of large embanked areas in The Netherlands. Within a larger framework of uncertainty assessment, this report focusses on the effect of uncertainty in roughness parameterization in a 2D hydrodynamic model. Two key elements are considered in this roughness parameterization. Firstly the manually classified ecotope map that provides base data for roughness classes, and secondly the lookup table that translates roughness classes to vegetation structural characteristics. The aim is to quantify the effects of these two error sources on the following hydrodynamic aspects: 1. the discharge distribution at the bifurcation points within the river Rhine 2. peak water levels at a stationary discharge of 16000 m3/s. To assess the effect of the first error source, new realisations of ecotope maps were made based on the current ecotope map and an error matrix of the classification. Using these realisations of the ecotope maps, twelve succesfull model runs were carried out of the Rhine distributaries at design discharge. The classification error leads to a standard deviation of the water levels per river kilometer of 0.08, 0.05 and 0.10 m for Upper Rhine- Waal, Pannerdensch Kanaal-Nederrijn-Lek and the IJssel river respectively. The range is maximum range in water levels is 0.40, 0.40 and 0.57 m for these river sections respectively. Largest effects are found in the IJssel river and the Pannerdensch Kanaal. For the second error source, the accuracy of the values in the lookup table, a compilation was made of 445 field measurements of vegetation structure was carried out. For each of the vegetation types, the minimum, 25-percentile, median, 75-percentile and maximum for vegetation height and density were computed. These five values were subsequently put in the lookup table that was used for the hydrodynamic model. The interquartile range in vegetation height and
NASA Technical Reports Server (NTRS)
Snow, L. S.; Kuhn, A. E.
1975-01-01
Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.
How to measure propagation velocity in cardiac tissue: a simulation study
Linnenbank, Andre C.; de Bakker, Jacques M. T.; Coronel, Ruben
2014-01-01
To estimate conduction velocities from activation times in myocardial tissue, the “average vector” method computes all the local activation directions and velocities from local activation times and estimates the fastest and slowest propagation speeds from these local values. The “single vector” method uses areas of apparently uniform elliptical spread of activation and chooses a single vector for the estimated longitudinal velocity and one for the transversal velocity. A simulation study was performed to estimate the influence of grid size, anisotropy, and vector angle bin size. The results indicate that the “average vector” method is best used if the grid or bin size is large, although systematic errors occur. The “single vector” method performs better, but requires human intervention for the definition of fiber direction; the average vector method can be automated. PMID:25101004
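A minimal sketch of the local-vector idea underlying the “average vector” method, assuming velocity is recovered as the inverted gradient of activation time, v = ∇t/|∇t|²; the plane-wave test data are synthetic and illustrative:

```python
import numpy as np

def local_velocities(act_times, dx=1.0):
    """Local conduction velocity vectors from an activation-time map,
    using v = grad(t) / |grad(t)|^2 (speed = 1/|grad(t)|)."""
    gy, gx = np.gradient(act_times, dx)   # gradients along rows (y) and columns (x)
    g2 = gx**2 + gy**2
    return gx / g2, gy / g2               # velocity components

# synthetic plane wave travelling along x at 0.5 mm/ms on a 1 mm grid
ny, nx = 40, 40
x = np.arange(nx) * 1.0
t = np.tile(x / 0.5, (ny, 1))             # activation time = x / v
vx, vy = local_velocities(t, dx=1.0)
```

Averaging such local vectors over the map (or over angle bins) gives the method's fastest/slowest speed estimates; for this noiseless plane wave every local vector recovers the true velocity exactly.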
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles, but few quantitative studies evaluating the accuracy of these simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy respectively, are first identified. Two quantitative indices derived from cross correlation analysis between a simulated signal and a reference waveform, the GVE (group velocity error) and the MACCC (maximum absolute value of the cross correlation coefficient), are then proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. To apply the proposed method to the selection of an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method was developed, and proper element sizes for different element types and time steps for different time integration schemes were selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
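A sketch of a MACCC-style index, assuming it is computed from the normalized cross correlation between a simulated and a reference signal; the lag of the correlation peak serves here as a stand-in for the position error (the wave packets are synthetic, not FEM output):

```python
import numpy as np

def maccc_and_lag(sim, ref):
    """Maximum absolute normalized cross-correlation coefficient (shape
    similarity) and the lag at which it occurs (position-error proxy)."""
    sim = (sim - sim.mean()) / np.linalg.norm(sim - sim.mean())
    ref = (ref - ref.mean()) / np.linalg.norm(ref - ref.mean())
    cc = np.correlate(sim, ref, mode="full")
    i = int(np.argmax(np.abs(cc)))
    lag = i - (len(ref) - 1)          # samples by which sim trails ref
    return float(np.abs(cc[i])), lag

t = np.linspace(0, 1, 500)
ref = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)  # reference packet
sim = np.roll(ref, 20)                # "simulated" signal: delayed copy
score, lag = maccc_and_lag(sim, ref)
```

A score near 1 indicates the simulated waveform preserves the reference shape; a nonzero lag would, with the sampling interval and propagation distance, translate into a group velocity error.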
Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation
NASA Astrophysics Data System (ADS)
Schiavazzi, Daniele; Marsden, Alison
2015-11-01
Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
Petrov, Nikolay V; Pavlov, Pavel V; Malov, A N
2013-06-30
Using the equations of scalar diffraction theory we consider the formation of an optical vortex on a diffractive optical element. The algorithms are proposed for simulating the processes of propagation of spiral wavefronts in free space and their reflections from surfaces with different roughness parameters. The given approach is illustrated by the results of numerical simulations. (propagation of wave fronts)
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
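One standard way to make such determinations precise is an exact (Clopper-Pearson style) one-sided upper confidence bound obtained by inverting the binomial tail; this is a generic illustration of the few-observed-errors situation, not Massey's specific construction:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p); cheap for small k."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_bound(k, n, conf=0.95):
    """One-sided upper confidence bound on the error probability given
    k observed errors in n trials, found by bisection on the binomial tail."""
    lo, hi = k / n, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if binom_cdf(k, n, mid) > 1 - conf:   # mid is still plausible
            lo = mid
        else:
            hi = mid
    return hi

ub0 = upper_bound(0, 1000)   # zero observed errors: the "rule of three", ~3/n
ub2 = upper_bound(2, 1000)   # two observed errors
```

Even with zero observed errors the data only support a bound of about 3/n, and two observed errors roughly double that again, which is the "surprisingly great significance" of a handful of decoding errors.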
Statistical error propagation in ab initio no-core full configuration calculations of light nuclei
NASA Astrophysics Data System (ADS)
Navarro Pérez, R.; Amaro, J. E.; Ruiz Arriola, E.; Maris, P.; Vary, J. P.
2015-12-01
We propagate the statistical uncertainty of experimental N N scattering data into the binding energy of 3H and 4He. We also study the sensitivity of the magnetic moment and proton radius of the 3H to changes in the N N interaction. The calculations are made with the no-core full configuration method in a sufficiently large harmonic oscillator basis. For those light nuclei we obtain Δ Estat(3H) =0.015 MeV and Δ Estat(4He) =0.055 MeV .
Error Characteristics of Two Grid Refinement Approaches in Aquaplanet Simulations: MPAS-A and WRF
Hagos, Samson M.; Leung, Lai-Yung R.; Rauscher, Sara; Ringler, Todd
2013-09-01
This study compares the error characteristics associated with two grid refinement approaches, global variable resolution and nesting, for high resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales-Atmosphere (MPAS-A), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context. For MPAS-A, simulations have been performed with a quasi-uniform resolution global domain at coarse (1°) and high (0.25°) resolution, and with a variable resolution domain in which a high resolution region at 0.25° is configured inside a coarse resolution global domain at 1° resolution. Similarly, WRF has been configured to run on a coarse (1°) and high (0.25°) tropical channel domain as well as on a nested domain with a high resolution region at 0.25° nested two-way inside the coarse resolution (1°) tropical channel. The variable resolution and nested simulations are compared against the high resolution simulations. Both models respond to increased resolution with enhanced precipitation and a limited but significant reduction in the ratio of convective to non-convective precipitation. The limited area grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. Within the high resolution limited area, the zonal distribution of precipitation is affected by advection in MPAS-A and by the nesting strategy in WRF. In both models, 20 day Kelvin waves propagate through the high-resolution domains fairly unaffected by the change in resolution (and the presence of a boundary in WRF), but increased resolution strengthens eastward propagating inertio-gravity waves.
NASA Astrophysics Data System (ADS)
Yang, Lei; Guo, Caifa; Dai, Zhengxu; Li, Xiaoyong; Wang, Shaolin
2016-02-01
The space tracking ship is a moving platform in the TT&C network, and the orbit determination precision of the ship plays a key role in TT&C missions. Based on measurement data obtained by shipborne equipment, this paper presents mathematical models of the complicated error from the space tracking ship, which can separate the random error and the low-frequency correction residual error from the complicated error. An error simulation algorithm is proposed to analyze the orbit determination precision based on two sets of different equipment. With this algorithm, a group of complicated errors can be simulated from a single measured sample. The simulated error groups can meet the demand for sufficient complicated-error data in equipment tests before mission execution, which is helpful in practical application.
Error propagation in the numerical solutions of the differential equations of orbital mechanics
NASA Technical Reports Server (NTRS)
Bond, V. R.
1982-01-01
The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell, Encke, and Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast an element formulation has zero eigenvalues and is numerically stable.
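The amplification mechanism can be seen on the scalar model equation y' = λy: a numerical method tracks the true solution without drift for λ = 0 but accumulates error growing with exp(λt) for λ > 0. Forward Euler is used here purely for illustration; it is not one of the formulations compared in the abstract.

```python
import math

def euler_error_growth(lam, t_end=5.0, n=5000):
    """Global error of forward Euler on y' = lam*y, y(0) = 1,
    measured against the exact solution exp(lam * t_end)."""
    h = t_end / n
    y = 1.0
    for _ in range(n):
        y = y + h * lam * y       # one Euler step
    return abs(y - math.exp(lam * t_end))

err_pos  = euler_error_growth(1.0)   # positive eigenvalue: error amplified
err_zero = euler_error_growth(0.0)   # zero eigenvalue: no amplification
```

This is why a formulation whose linearization has zero eigenvalues (the element formulation above) is numerically stable, while Cowell- or Encke-type formulations with a real positive eigenvalue amplify integration error.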
Shi, Xianbo; Reininger, Ruben; Sanchez del Rio, Manuel; Assoufid, Lahsen
2014-01-01
A new method for beamline simulation combining ray-tracing and wavefront propagation is described. The ‘Hybrid Method’ computes diffraction effects when the beam is clipped by an aperture or mirror length and can also simulate the effect of figure errors in the optical elements when diffraction is present. The effect of different spatial frequencies of figure errors on the image is compared with SHADOW results pointing to the limitations of the latter. The code has been benchmarked against the multi-electron version of SRW in one dimension to show its validity in the case of fully, partially and non-coherent beams. The results demonstrate that the code is considerably faster than the multi-electron version of SRW and is therefore a useful tool for beamline design and optimization. PMID:24971960
PLASIM: A computer code for simulating charge exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.
1982-01-01
The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Modeling and Simulation for Realistic Propagation Environments of Communications Signals at SHF Band
NASA Technical Reports Server (NTRS)
Ho, Christian
2005-01-01
In this article, most of the widely accepted radio wave propagation models that have proven to be accurate in practice as well as numerically efficient at SHF band will be reviewed. Weather and terrain data along the signal's path can be input in order to more accurately simulate the propagation environment under particular weather and terrain conditions. Radio signal degradation and communications impairment severity will be investigated through the realistic radio propagation channel simulator. Three types of simulation approaches to predicting signal behavior are classified: deterministic, stochastic and attenuation map. The performance of the simulation can be evaluated under operating conditions for the test ranges of interest. Demonstration tests of a real-time propagation channel simulator will show the capabilities and limitations of the simulation tool and the underlying models.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Revised error propagation of 40Ar/39Ar data, including covariances
NASA Astrophysics Data System (ADS)
Vermeesch, Pieter
2015-12-01
The main advantage of the 40Ar/39Ar method over conventional K-Ar dating is that it does not depend on any absolute abundance or concentration measurements, but only uses the relative ratios between five isotopes of the same element -argon- which can be measured with great precision on a noble gas mass spectrometer. The relative abundances of the argon isotopes are subject to a constant sum constraint, which imposes a covariant structure on the data: the relative amount of any of the five isotopes can always be obtained from that of the other four. Thus, the 40Ar/39Ar method is a classic example of a 'compositional data problem'. In addition to the constant sum constraint, covariances are introduced by a host of other processes, including data acquisition, blank correction, detector calibration, mass fractionation, decay correction, interference correction, atmospheric argon correction, interpolation of the irradiation parameter, and age calculation. The myriad of correlated errors arising during the data reduction are best handled by casting the 40Ar/39Ar data reduction protocol in a matrix form. The completely revised workflow presented in this paper is implemented in a new software platform, Ar-Ar_Redux, which takes raw mass spectrometer data as input and generates accurate 40Ar/39Ar ages and their (co-)variances as output. Ar-Ar_Redux accounts for all sources of analytical uncertainty, including those associated with decay constants and the air ratio. Knowing the covariance matrix of the ages removes the need to consider 'internal' and 'external' uncertainties separately when calculating (weighted) mean ages. Ar-Ar_Redux is built on the same principles as its sibling program in the U-Pb community (U-Pb_Redux), thus improving the intercomparability of the two methods with tangible benefits to the accuracy of the geologic time scale. The program can be downloaded free of charge from
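The matrix form of such error propagation is the standard first-order (delta method) formula, sigma_f^2 = J Sigma J^T, applied to the full covariance matrix rather than to independent variances. The sketch below uses illustrative numbers, not real argon data, to show how an off-diagonal covariance (e.g. from a shared blank or calibration correction) changes the variance of an isotope ratio:

```python
import numpy as np

def propagate(grad, cov):
    """First-order (delta method) variance of f(x): sigma_f^2 = J @ Sigma @ J^T."""
    J = np.asarray(grad, dtype=float)
    return float(J @ np.asarray(cov, dtype=float) @ J)

a, b = 100.0, 20.0              # hypothetical isotope signals
cov = np.array([[1.0, 0.4],     # positive-definite covariance matrix with
                [0.4, 0.25]])   # correlated uncertainties on a and b
grad = np.array([1.0 / b, -a / b**2])   # d(a/b)/da, d(a/b)/db
var_with_cov = propagate(grad, cov)
var_no_cov   = propagate(grad, np.diag(np.diag(cov)))  # covariance ignored
```

Because the ratio's partial derivatives have opposite signs, the positive covariance here partially cancels, so ignoring the off-diagonal term overstates the uncertainty; with other sign combinations it can just as easily understate it.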
Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2011-01-01
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
ITER Test Blanket Module Error Field Simulation Experiments
NASA Astrophysics Data System (ADS)
Schaffer, M. J.
2010-11-01
Recent experiments at DIII-D used an active-coil mock-up to investigate effects of magnetic error fields similar to those expected from two ferromagnetic Test Blanket Modules (TBMs) in one ITER equatorial port. The largest and most prevalent observed effect was plasma toroidal rotation slowing across the entire radial profile, up to 60% in H-mode when the mock-up local ripple at the plasma was ˜4 times the local ripple expected in front of ITER TBMs. Analysis showed the slowing to be consistent with non-resonant braking by the mock-up field. There was no evidence of strong electromagnetic braking by resonant harmonics. These results are consistent with the near absence of resonant helical harmonics in the TBM field. Global particle and energy confinement in H-mode decreased by <20% for the maximum mock-up ripple, but <5% at the local ripple expected in ITER. These confinement reductions may be linked with the large velocity reductions. TBM field effects were small in L-mode but increased with plasma beta. The L-H power threshold was unaffected within error bars. The mock-up field increased plasma sensitivity to mode locking by a known n=1 test field (n = toroidal harmonic number). In H-mode the increased locking sensitivity was from TBM torque slowing plasma rotation. At low beta, locked mode tolerance was fully recovered by re-optimizing the conventional DIII-D ``I-coils'' empirical compensation of n=1 errors in the presence of the TBM mock-up field. Empirical error compensation in H-mode should be addressed in future experiments. Global loss of injected neutral beam fast ions was within error bars, but 1 MeV fusion triton loss may have increased. The many DIII-D mock-up results provide important benchmarks for models needed to predict effects of TBMs in ITER.
SimProp: a simulation code for ultra high energy cosmic ray propagation
Aloisio, R.; Grillo, A.F.; Boncioli, D.; Petrera, S.; Salamida, F.
2012-10-01
A new Monte Carlo simulation code for the propagation of Ultra High Energy Cosmic Rays is presented. The results of this simulation scheme are tested by comparison with results of another Monte Carlo computation as well as with the results obtained by directly solving the kinetic equation for the propagation of Ultra High Energy Cosmic Rays. A short comparison with the latest flux published by the Pierre Auger collaboration is also presented.
NASA Astrophysics Data System (ADS)
Nguyen-Dinh, Maxime; Gainville, Olaf; Lardjane, Nicolas
2015-10-01
We present new results for blast wave propagation from the strong shock regime to the weak shock limit. For this purpose, we analyse the blast wave propagation using both Direct Numerical Simulation and an acoustic asymptotic model. This approach allows a full numerical study of a realistic pyrotechnic site, taking into account the main physical effects. We also compare simulation results with first measurements. This study is a part of the French ANR-Prolonge project (ANR-12-ASTR-0026).
Accumulation of errors in numerical simulations of chemically reacting gas dynamics
NASA Astrophysics Data System (ADS)
Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.
2015-12-01
The aim of the present study is to investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation and deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, as was the accumulation of errors in simulations employing different computation strategies.
Numerical simulation of impurity propagation in sea channels
NASA Astrophysics Data System (ADS)
Cherniy, Dmitro; Dovgiy, Stanislav; Gourjii, Alexandre
2009-11-01
The building of the dike (2003) in the Kerch channel (between the Black and Azov seas) from the Taman peninsula is an example of technological influence on the fluid flow and hydrological conditions in the channel. The two-fold increase of the flow velocity in the fairway region results in the appearance of dangerous tendencies in the hydrology of the Kerch channel. The flow near the coastal edges generates large-scale vortices, which move along the channel. The shipwreck (November 11, 2007) of the tanker ``Volganeft-139'' in the Kerch channel resulted in an ecological catastrophe in the region: more than 1300 tons of petroleum appeared on the sea surface. The intensive vortices formed here involve part of the impurity region in their own motion; the boundary of the impurity region is deformed, stretched, and comes to cover the central part of the channel. The vortex singularity method adapted to impurity propagation in the Kerch channel, and an analysis of the pollution propagation, are the main subjects of the report.
Abe, H.; Okuda, H.
1993-08-01
In this Letter, we first present a new computer simulation model developed to study the propagation of electromagnetic waves in a dielectric medium in the linear and nonlinear regimes. The model is constructed by combining a microscopic model used in the semi-classical approximation for the dielectric media and the particle model developed for the plasma simulations. The model was then used for studying linear and nonlinear wave propagation in the dielectric medium such as an optical fiber. It is shown that the model may be useful for studying nonlinear wave propagation and harmonics generation in the nonlinear dielectric media.
Evaluating the Effect of Global Positioning System (GPS) Satellite Clock Error via GPS Simulation
NASA Astrophysics Data System (ADS)
Sathyamoorthy, Dinesh; Shafii, Shalini; Amin, Zainal Fitry M.; Jusoh, Asmariah; Zainun Ali, Siti
2016-06-01
This study is aimed at evaluating the effect of Global Positioning System (GPS) satellite clock error using GPS simulation. Two test conditions are used. Case 1: all the GPS satellites have clock errors within the normal range of 0 to 7 ns, corresponding to a pseudorange error range of 0 to 2.1 m; Case 2: one GPS satellite suffers from critical failure, resulting in a clock error in the pseudorange of up to 1 km. It is found that an increase of GPS satellite clock error causes an increase of average positional error, because the larger pseudorange error in the GPS satellite signals produces increasing error in the coordinates computed by the GPS receiver. Varying average positional error patterns are observed for each of the readings. Because the GPS satellite constellation is dynamic, satellite geometry varies over location and time, making GPS accuracy location and time dependent. For Case 1, in general, the highest average positional error values are observed for readings with the highest PDOP values, while the lowest average positional error values are observed for readings with the lowest PDOP values. For Case 2, no correlation is observed between the average positional error values and PDOP, indicating that the error generated is random.
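The clock-to-pseudorange conversion underlying both cases is simply multiplication by the speed of light; the fault magnitude below is an assumed value chosen to match the scale described, not a figure from the study.

```python
C = 299_792_458.0  # speed of light, m/s

def clock_error_to_pseudorange(dt_seconds):
    """Pseudorange error caused by an uncorrected satellite clock offset."""
    return C * dt_seconds

err_normal = clock_error_to_pseudorange(7e-9)     # 7 ns -> ~2.1 m (Case 1 bound)
err_fault  = clock_error_to_pseudorange(3.34e-6)  # ~3.3 us -> ~1 km (Case 2 scale)
```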
Investigation of Radar Propagation in Buildings: A 10 Billion Element Cartesian-Mesh FETD Simulation
Stowell, M L; Fasenfest, B J; White, D A
2008-01-14
In this paper large scale full-wave simulations are performed to investigate radar wave propagation inside buildings. In principle, a radar system combined with sophisticated numerical methods for inverse problems can be used to determine the internal structure of a building. The composition of the walls (cinder block, re-bar) may affect the propagation of the radar waves in a complicated manner. In order to provide a benchmark solution of radar propagation in buildings, including the effects of typical cinder block and re-bar, we performed large scale full wave simulations using a Finite Element Time Domain (FETD) method. This particular FETD implementation is tuned for the special case of an orthogonal Cartesian mesh and hence resembles FDTD in accuracy and efficiency. The method was implemented on a general-purpose massively parallel computer. In this paper we briefly describe the radar propagation problem, the FETD implementation, and we present results of simulations that used over 10 billion elements.
End-to-End Network Simulation Using a Site-Specific Radio Wave Propagation Model
Djouadi, Seddik M; Kuruganti, Phani Teja; Nutaro, James J
2013-01-01
The performance of systems that rely on a wireless network depends on the propagation environment in which that network operates. To predict how these systems and their supporting networks will perform, simulations must take into consideration the propagation environment and how it affects the performance of the wireless network. Network simulators typically use empirical models of the propagation environment. However, these models are not intended for, and cannot be used for, predicting how a wireless system will perform in a specific location, e.g., in the center of a particular city or the interior of a specific manufacturing facility. In this paper, we demonstrate how a site-specific propagation model and the NS3 simulator can be used to predict the end-to-end performance of a wireless network.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
Coherent-wave Monte Carlo method for simulating light propagation in tissue
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissue, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require large amounts of computer memory and long computation times, which makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, can simulate only the propagation of light averaged over the ensemble of turbid medium realizations, which makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. It is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the memory and computation time required for simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.
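For context, the radiative-transfer-based Monte Carlo methods the authors contrast against trace incoherent photon random walks. A minimal sketch of one such walk in a homogeneous turbid medium follows; the function names and coefficient values are illustrative, not from the paper. The mean total path before absorption should approach 1/mu_a.

```python
import math
import random

def photon_path_to_absorption(mu_a, mu_s, rng, max_steps=100_000):
    """Random-walk one photon through a homogeneous turbid medium and
    return the total path length travelled before absorption.
    mu_a, mu_s: absorption and scattering coefficients [1/mm]."""
    mu_t = mu_a + mu_s
    path = 0.0
    for _ in range(max_steps):
        path += -math.log(rng.random() or 1e-12)/mu_t  # Beer-Lambert free path
        if rng.random() < mu_a/mu_t:                   # interaction is absorption
            break
    return path

def mean_path(mu_a, mu_s, n=20_000, seed=1):
    """Monte Carlo estimate of the mean path to absorption."""
    rng = random.Random(seed)
    return sum(photon_path_to_absorption(mu_a, mu_s, rng) for _ in range(n))/n
```

For mu_a = 0.1/mm and mu_s = 1.0/mm the estimate should be close to 1/mu_a = 10 mm.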
Whistler propagation in ionospheric density ducts: Simulations and DEMETER observations
NASA Astrophysics Data System (ADS)
Woodroffe, J. R.; Streltsov, A. V.; Vartanyan, A.; Milikh, G. M.
2013-11-01
On 16 October 2009, the Detection of Electromagnetic Emissions Transmitted from Earthquake Regions (DEMETER) satellite observed VLF whistler wave activity coincident with an ionospheric heating experiment conducted at HAARP. At the same time, density measurements by DEMETER indicate the presence of multiple field-aligned enhancements. Using an electron MHD model, we show that the distribution of VLF power observed by DEMETER is consistent with the propagation of whistlers from the heating region inside the observed density enhancements. We also discuss other interesting features of this event, including coupling of the lower hybrid and whistler modes, whistler trapping in artificial density ducts, and the interference of whistler waves from two adjacent ducts.
LOCA simulation: analysis of rarefaction waves propagating through geometric singularities
Crouzet, Fabien; Faucher, Vincent; Galon, Pascal; Piteau, Philippe; Izquierdo, Patrick
2012-07-01
The propagation of a transient wave through an orifice is investigated for applications to Loss Of Coolant Accident in nuclear plants. An analytical model is proposed for the response of an orifice plate and implemented in the EUROPLEXUS fast transient dynamics software. It includes an acoustic inertial effect in addition to a quasi-steady dissipation term. The model is experimentally validated on a test rig consisting of a single pipe filled with pressurized water. The test rig is designed to generate a rapid depressurization of the pipe by means of a bursting disk. The proposed model gives results which compare favourably with experimental data. (authors)
NASA Astrophysics Data System (ADS)
Couairon, A.; Brambilla, E.; Corti, T.; Majus, D.; de J. Ramírez-Góngora, O.; Kolesik, M.
2011-11-01
The purpose of this article is to provide a practical introduction to numerical modeling of ultrashort optical pulses in extreme nonlinear regimes. The theoretical background section covers the derivation of modern pulse propagation models starting from Maxwell's equations, and includes both envelope-based models and carrier-resolving propagation equations. We then continue with a detailed description of a software implementation of the Nonlinear Envelope Equation as an example of a mixed approach which combines finite-difference and spectral techniques. Fully spectral numerical solution methods for the Unidirectional Pulse Propagation Equation are discussed next. The modeling part of this guide concludes with a brief introduction to efficient implementations of nonlinear medium responses. Finally, we include several worked-out simulation examples. These are mini-projects designed to highlight numerical and modeling issues, and to teach numerical-experiment practices. They are also meant to illustrate, first and foremost for a non-specialist, how the tools discussed in this guide can be applied in practical numerical modeling.
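The mixed finite-difference/spectral approach described can be illustrated, under strong simplifications, by a symmetric (Strang) split-step Fourier solver for the basic nonlinear Schrödinger equation. This is a sketch only: the function name, sign convention, and parameters below are assumptions, not the article's actual equations.

```python
import numpy as np

def split_step_nlse(u0, t, z, nsteps, beta2=-1.0, gamma=1.0):
    """Strang split-step Fourier integration of the NLSE
    i*u_z = (beta2/2)*u_tt - gamma*|u|**2*u on a periodic time grid."""
    u = np.asarray(u0, dtype=complex).copy()
    dz = z/nsteps
    dt = t[1] - t[0]
    w = 2.0*np.pi*np.fft.fftfreq(t.size, d=dt)    # angular frequency grid
    half_lin = np.exp(0.5j*(beta2/2.0)*w**2*dz)   # half-step of dispersion
    for _ in range(nsteps):
        u = np.fft.ifft(half_lin*np.fft.fft(u))
        u = u*np.exp(1j*gamma*np.abs(u)**2*dz)    # full nonlinear step
        u = np.fft.ifft(half_lin*np.fft.fft(u))
    return u
```

With beta2 = -1 and gamma = 1, the fundamental soliton sech(t) should propagate with its amplitude profile preserved, which is a standard sanity check for this class of solver.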
NASA Astrophysics Data System (ADS)
Martowicz, A.; Ruzzene, M.; Staszewski, W. J.; Rimoli, J. J.; Uhl, T.
2014-03-01
The work deals with the reduction of numerical dispersion in simulations of wave propagation in solids. The phenomenon of numerical dispersion naturally results from the time and spatial discretization present in a numerical model of a mechanical continuum. Although discretization itself makes it possible to model wave propagation in structures with complicated geometries and made of different materials, it inevitably causes simulation errors when improper time and length scales are chosen for the simulation domains. Therefore, by definition, any characteristic parameter of spatial and temporal resolution imposes limitations on the maximal wavenumber and frequency that a numerical model can support. It should be noted, however, that the expected increase in model quality and functionality, in terms of affordable wavenumbers, frequencies and speeds, should not be achieved merely by a denser mesh and a reduced time integration step: the computational cost would simply be unacceptable. The authors present a nonlocal finite difference scheme whose coefficients are calculated by applying a Fourier series, which allows for considerable reduction of numerical dispersion. Results of analyses are presented for 2D models, with isotropic and anisotropic materials, under the plane stress condition. Reduced numerical dispersion is shown in the dispersion surfaces for longitudinal and shear waves propagating in different directions with respect to the mesh orientation, without a dramatic increase in the required number of nonlocal interactions. A case of longitudinal wave propagation in a composite material is studied, with a reference solution of the initial value problem given for verification of the time-domain outcomes. The work opens a perspective on modeling any type of real material dispersion according to measurements and with assumed accuracy.
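The underlying phenomenon can be quantified for the simplest case: the standard second-order leapfrog scheme for the 1-D wave equation u_tt = c²u_xx has the discrete dispersion relation sin(ωΔt/2) = C sin(kh/2) with Courant number C = cΔt/h, so its phase velocity depends on wavenumber. The sketch below evaluates that relation; it illustrates the classical local scheme, not the authors' nonlocal one.

```python
import numpy as np

def numerical_phase_velocity(k, h, dt, c=1.0):
    """Phase velocity of the 2nd-order leapfrog scheme for u_tt = c^2*u_xx,
    from the discrete dispersion relation sin(w*dt/2) = C*sin(k*h/2)."""
    C = c*dt/h                                     # Courant number
    w = (2.0/dt)*np.arcsin(C*np.sin(k*h/2.0))      # numerical frequency
    return w/k
```

At the "magic" time step C = 1 the scheme is dispersion-free; for C &lt; 1 short waves propagate slower than c, which is exactly the error the nonlocal coefficients aim to suppress.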
Digital simulation error curves for a spring-mass-damper system
NASA Technical Reports Server (NTRS)
Knox, L. A.
1971-01-01
Plotting digital simulation errors for a spring-mass-damper system and using these error curves to select the type of integration, the feedback update method, and the number of samples per cycle at resonance reduces an excessive number of samples per cycle and unnecessary iterations.
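The kind of error curve described can be reproduced in a few lines. In the undamped limit of the spring-mass-damper system, explicit Euler integration multiplies the oscillator energy E = (v² + ω²x²)/2 by exactly (1 + ω²Δt²) every step, so too few samples per cycle makes the simulated response grow instead of decay. A sketch with assumed parameter names:

```python
import math

def euler_energy_growth(omega=1.0, samples_per_cycle=20, cycles=5):
    """Integrate x'' + omega**2*x = 0 with explicit Euler from x=1, v=0
    and return the final/initial energy ratio. Each step multiplies the
    energy E = (v**2 + omega**2*x**2)/2 by exactly (1 + omega**2*dt**2)."""
    dt = 2.0*math.pi/omega/samples_per_cycle
    x, v = 1.0, 0.0
    for _ in range(samples_per_cycle*cycles):
        x, v = x + dt*v, v - dt*omega**2*x   # simultaneous Euler update
    return (v**2 + omega**2*x**2)/omega**2   # initial energy is omega**2/2
```

Plotting this ratio against samples per cycle for several integrators is precisely the error-curve exercise the abstract describes.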
Ghanem, Roger G. . E-mail: ghanem@usc.edu; Doostan, Alireza . E-mail: doostan@jhu.edu
2006-09-01
This paper investigates the predictive accuracy of stochastic models. In particular, a formulation is presented for the impact of data limitations associated with the calibration of parameters for these models, on their overall predictive accuracy. In the course of this development, a new method for the characterization of stochastic processes from corresponding experimental observations is obtained. Specifically, polynomial chaos representations of these processes are estimated that are consistent, in some useful sense, with the data. The estimated polynomial chaos coefficients are themselves characterized as random variables with known probability density function, thus permitting the analysis of the dependence of their values on further experimental evidence. Moreover, the error in these coefficients, associated with limited data, is propagated through a physical system characterized by a stochastic partial differential equation (SPDE). This formalism permits the rational allocation of resources in view of studying the possibility of validating a particular predictive model. A Bayesian inference scheme is relied upon as the logic for parameter estimation, with its computational engine provided by a Metropolis-Hastings Markov chain Monte Carlo procedure.
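The computational engine mentioned, a Metropolis-Hastings Markov chain Monte Carlo procedure, has a compact generic core. The following 1-D random-walk sketch is illustrative only; the names, defaults, and target density are assumptions, not the paper's formulation.

```python
import math
import random

def metropolis_hastings(logpost, x0, n, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior density."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)                    # symmetric proposal
        lpp = logpost(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):  # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return chain
```

Sampling the standard normal log-density -x²/2 should give a chain whose mean and variance converge to 0 and 1 after burn-in.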
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular-bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples were utilized as trabecular-bone-mimicking phantoms for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for implementation in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy-dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular-bone-mimicking structures. PMID:26894840
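Attenuation comparisons of this kind are commonly made with the substitution method: the frequency-dependent attenuation follows from the spectral ratio of a reference (water-only) signal and the through-sample signal. A hedged sketch, with assumed function and variable names (the paper does not specify its post-processing in this abstract):

```python
import numpy as np

def attenuation_spectrum(ref, sample, dt, thickness_mm):
    """Substitution-method attenuation coefficient [dB/mm] from a
    water-only reference signal and a through-sample signal of equal
    length, both sampled at interval dt [s]."""
    freqs = np.fft.rfftfreq(len(ref), d=dt)
    a_ref = np.abs(np.fft.rfft(ref))
    a_sam = np.abs(np.fft.rfft(sample))
    alpha = (20.0/thickness_mm)*np.log10(a_ref/np.maximum(a_sam, 1e-30))
    return freqs, alpha
```

A signal attenuated by a constant factor should yield a flat attenuation spectrum wherever the reference spectrum has usable energy.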
Gray, E.R.; Nath, S.; Wangler, T.P.
1997-08-01
The current design for the production of tritium uses both normal-conducting (NC) and superconducting (SC) structures. To evaluate the performance of the superconducting part of the linac which constitutes more than 80% of the accelerator, studies have been made to include the effects of various error and fault conditions. Here, the authors present the simulation results of studies such as effects of rf phase and amplitude errors, cavity/klystron failure, quadrupole misalignment errors, quadrupole gradient error, and beam-input mismatches.
Simulation-based reasoning about the physical propagation of fault effects
NASA Technical Reports Server (NTRS)
Feyock, Stefan; Li, Dalu
1990-01-01
The research described deals with the effects of faults on complex physical systems, with particular emphasis on aircraft and spacecraft systems. Given that a malfunction has occurred and been diagnosed, the goal is to determine how that fault will propagate to other subsystems, and what the effects will be on vehicle functionality. In particular, the use of qualitative spatial simulation to determine the physical propagation of fault effects in 3-D space is described.
Theory and simulations of electrostatic field error transport
Dubin, Daniel H. E.
2008-07-15
Asymmetries in applied electromagnetic fields cause plasma loss (or compression) in stellarators, tokamaks, and non-neutral plasmas. Here, this transport is studied using idealized simulations that follow guiding centers in given fields, neglecting collective effects on the plasma evolution, but including collisions at rate ν. For simplicity the magnetic field is assumed to be uniform; transport is due to asymmetries in applied electrostatic fields. Also, the Fokker-Planck equation describing the particle distribution is solved, and the predicted transport is found to agree with the simulations. Banana, plateau, and fluid regimes are identified and observed in the simulations. When separate trapped-particle populations are created by application of an axisymmetric squeeze potential, enhanced transport regimes are observed, scaling as √ν when ν < ω_0 < ω_B and as 1/ν when ω_0 < ν < ω_B (where ω_0 and ω_B are the rotation and axial bounce frequencies, respectively). These regimes are similar to those predicted for neoclassical transport in stellarators.
Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations
NASA Astrophysics Data System (ADS)
Toosi, Siavash; Larsson, Johan
2015-11-01
Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.
Modeling decenter, wedge, and tilt errors in optical tolerance analysis and simulation
NASA Astrophysics Data System (ADS)
Youngworth, Richard N.; Herman, Eric
2014-09-01
Many optical designs have lenses with circular outer profiles that are mounted in cylindrical barrels. This geometry leads to errors in mounting parameters such as decenter and tilt, and component errors such as wedge, which are best modeled with a cylindrical or spherical coordinate system. In the absence of clocking registration, this class of errors is effectively reduced to an error magnitude with a random clocking azimuth. Optical engineers consequently must fully understand how cylindrical or spherical basis geometry relates to a Cartesian representation. Understanding these factors, as well as how optical design codes can differ in error application for Monte Carlo simulations, produces the most effective statistical simulations for tolerance assignment, analysis, and verification. This paper covers these topics to aid practicing optical engineers and designers.
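The magnitude-plus-random-azimuth model described converts directly into Cartesian perturbations for a Monte Carlo tolerance run. A minimal sketch follows; the half-normal magnitude distribution is an assumed example, not one prescribed by the paper.

```python
import math
import random

def sample_decenter(magnitude_sigma, rng):
    """Draw one decenter error: a half-normal magnitude (an assumed
    distribution) with a uniform random clocking azimuth, returned as
    Cartesian components (dx, dy)."""
    r = abs(rng.gauss(0.0, magnitude_sigma))   # error magnitude
    phi = rng.uniform(0.0, 2.0*math.pi)        # random clocking azimuth
    return r*math.cos(phi), r*math.sin(phi)
```

Because the azimuth is uniform, the resulting Cartesian components are zero-mean and isotropic, with Var(dx) = Var(dy) = E[r²]/2.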
Modelling laser light propagation in thermoplastics using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Parkinson, Alexander
Laser welding has great potential as a fast, non-contact joining method for thermoplastic parts. In the laser transmission welding of thermoplastics, light passes through a semi-transparent part to reach the weld interface. There, it is absorbed as heat, which causes melting and subsequent welding. The distribution and quantity of light reaching the interface are important for predicting the quality of a weld, but are experimentally difficult to estimate. A model for simulating the path of this laser light through these light-scattering plastic parts has been developed. The technique uses a Monte Carlo approach to generate photon paths through the material, accounting for absorption, scattering and reflection between boundaries in the transparent polymer. It was assumed that any light escaping the bottom surface contributed to welding. The photon paths are then scaled according to the input beam profile in order to simulate non-Gaussian beam profiles. A method for determining the 3 independent optical parameters to accurately predict transmission and beam power distribution at the interface was established using experimental data for polycarbonate at 4 different glass fibre concentrations and polyamide-6 reinforced with 20% long glass fibres. Exit beam profiles and transmissions predicted by the simulation were found to be in generally good agreement (R2>0.90) with experimental measurements. The simulations allowed the prediction of transmission and power distributions at other thicknesses, as well as information on reflection and energy absorption, for these materials.
FDTD Simulation on Terahertz Waves Propagation Through a Dusty Plasma
NASA Astrophysics Data System (ADS)
Wang, Maoyan; Zhang, Meng; Li, Guiping; Jiang, Baojun; Zhang, Xiaochuan; Xu, Jun
2016-08-01
The frequency-dependent permittivity for dusty plasmas is provided by introducing the charging response factor and charge relaxation rate of airborne particles. The field equations that describe the characteristics of Terahertz (THz) wave propagation in a dusty plasma sheath are derived and discretized on the basis of the auxiliary differential equation (ADE) in the finite difference time domain (FDTD) method. Compared with numerical solutions in the reference, the accuracy of the ADE FDTD method is validated. The reflection property of the metal aluminum interlayer of the sheath at THz frequencies is discussed. The effects of the thickness, effective collision frequency, airborne particle density, and charge relaxation rate of airborne particles on the electromagnetic properties of Terahertz waves passing through a dusty plasma slab are investigated. Finally, some potential applications of Terahertz waves in information and communication are analyzed. Supported by the National Natural Science Foundation of China (Nos. 41104097, 11504252, 61201007, 41304119), the Fundamental Research Funds for the Central Universities (Nos. ZYGX2015J039, ZYGX2015J041), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20120185120012)
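Stripped of the dusty-plasma ADE terms, the FDTD core referenced above is a two-line leapfrog update of interleaved E and H fields. A normalized 1-D vacuum sketch is given below; grid sizes, the soft Gaussian source, and all names are illustrative, not the paper's configuration.

```python
import numpy as np

def fdtd_1d(nsteps=600, nz=400, courant=0.5):
    """Normalized 1-D vacuum FDTD (Yee leapfrog, c=1, dz=1, dt=courant)
    with a soft Gaussian source injected at the grid center."""
    ex = np.zeros(nz)
    hy = np.zeros(nz)
    for n in range(nsteps):
        hy[:-1] += courant*(ex[1:] - ex[:-1])   # H update (half time step)
        ex[1:] += courant*(hy[1:] - hy[:-1])    # E update (half time step)
        ex[nz//2] += np.exp(-0.5*((n - 40.0)/10.0)**2)  # soft source
    return ex
```

Because the stencil widens by at most one cell per step, the field far ahead of the numerical wavefront remains exactly zero, a simple causality check on the update.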
A Compact Code for Simulations of Quantum Error Correction in Classical Computers
Nyman, Peter
2009-03-10
This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We will give some examples of implementations of error correction codes. These implementations are made in a general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer will provide a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
HERMES: Simulating the propagation of ultra-high energy cosmic rays
NASA Astrophysics Data System (ADS)
De Domenico, Manlio
2013-08-01
The study of ultra-high energy cosmic rays (UHECR) at Earth cannot be separated from the study of their propagation in the Universe. In this paper, we present HERMES, the ad hoc Monte Carlo code we have developed for the realistic simulation of UHECR propagation. We discuss the modeling adopted to simulate the cosmology, the magnetic fields, the interactions with relic photons and the production of secondary particles. In order to show the potential applications of HERMES for astroparticle studies, we provide an estimation of the surviving probability of UHE protons, the GZK horizons of nuclei and the all-particle spectrum observed at Earth in different astrophysical scenarios. Finally, we show the expected arrival direction distribution of UHECR produced from nearby candidate sources. A stable version of HERMES will be released in the near future for public use, together with libraries of already-propagated nuclei, to allow the community to perform mass composition and energy spectrum analyses with our simulator.
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
SIMULATION OF SHOCK WAVE PROPAGATION AND DAMAGE IN GEOLOGIC MATERIALS
Lomov, I; Vorobiev, O; Antoun, T H
2004-09-17
A new thermodynamically consistent material model for large deformation has been developed. It describes quasistatic loading of limestone as well as high-rate phenomena. This constitutive model has been implemented into an Eulerian shock wave code with adaptive mesh refinement. This approach was successfully used to reproduce static triaxial compression tests and to simulate experiments of blast loading and damage of limestone. Results compare favorably with experimentally available wave profiles from spherically-symmetric explosion in rock samples.
CFD simulation of vented explosion and turbulent flame propagation
NASA Astrophysics Data System (ADS)
Tulach, Aleš; Mynarz, Miroslav; Kozubková, Milada
2015-05-01
Very rapid physical and chemical processes during an explosion require both quality and quantity of detection devices. CFD numerical simulations are suitable instruments for more detailed determination of explosion parameters. The paper deals with mathematical modelling of vented explosion and turbulent flame spread using ANSYS Fluent software. It focuses on verification of the preciseness of the calculations by comparing calculated data with results obtained from experiments realised in the explosion chamber.
Monte Carlo simulations of intensity profiles for energetic particle propagation
NASA Astrophysics Data System (ADS)
Tautz, R. C.; Bolte, J.; Shalchi, A.
2016-02-01
Aims: Numerical test-particle simulations are a reliable and frequently used tool for testing analytical transport theories and predicting mean-free paths. The comparison between solutions of the diffusion equation and the particle flux is used to critically judge the applicability of diffusion to the stochastic transport of energetic particles in magnetized turbulence. Methods: A Monte Carlo simulation code is extended to allow for the generation of intensity profiles and anisotropy-time profiles. Because of the relatively low number density of computational particles, a kernel function has to be used to describe the spatial extent of each particle. Results: The obtained intensity profiles are interpreted as solutions of the diffusion equation by inserting the diffusion coefficients that have been directly determined from the mean-square displacements. The comparison shows that the time dependence of the diffusion coefficients needs to be considered, in particular the initial ballistic phase and the often subdiffusive perpendicular coefficient. Conclusions: It is argued that the perpendicular component of the distribution function is essential if agreement between the diffusion solution and the simulated flux is to be obtained. In addition, time-dependent diffusion can provide a better description than the classic diffusion equation only after the initial ballistic phase.
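The kernel-function device mentioned in the Methods (spreading each computational particle over a finite width before binning) is essentially a kernel density estimate. A minimal sketch with an assumed Gaussian kernel follows; names and the specific kernel are illustrative, not taken from the paper.

```python
import numpy as np

def intensity_profile(positions, grid, width):
    """Kernel-smoothed intensity profile: each computational particle is
    spread with a normalized Gaussian kernel of the given width, so a
    modest particle count still yields a smooth intensity curve."""
    z = np.asarray(positions, dtype=float)[:, None]   # particles (column)
    g = np.asarray(grid, dtype=float)[None, :]        # evaluation grid (row)
    k = np.exp(-0.5*((g - z)/width)**2)/(width*np.sqrt(2.0*np.pi))
    return k.sum(axis=0)/len(positions)
```

Since each kernel is normalized, the profile integrates to one over a grid that covers the particle distribution, which preserves the total particle flux.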
Simulations of Wave Propagation in the Jovian Atmosphere after SL9 Impact Events
NASA Astrophysics Data System (ADS)
Pond, Jarrad W.; Palotai, C.; Korycansky, D.; Harrington, J.
2013-10-01
Our previous numerical investigations into Jovian impacts, including the Shoemaker-Levy 9 (SL9) event (Korycansky et al. 2006, ApJ 646, 642; Palotai et al. 2011, ApJ 731, 3), the 2009 bolide (Pond et al. 2012, ApJ 745, 113), and the ephemeral flashes caused by smaller impactors in 2010 and 2012 (Hueso et al. 2013; submitted to A&A), have covered only up to approximately 3 to 30 seconds after impact. Here, we present further SL9 impact simulations extending to minutes after collision with Jupiter’s atmosphere, with a focus on the propagation of shock waves generated as a result of the impact events. Using a similar yet more efficient remapping method than previously presented (Pond et al. 2012; DPS 2012), we move our simulation results onto a larger computational grid, conserving quantities with minimal error. The Jovian atmosphere is extended as needed to accommodate the evolution of the features of the impact event. We restart the simulation, allowing the impact event to continue to progress to greater spatial extents and for longer times, but at lower resolutions. This remap-restart process can be implemented multiple times to achieve the spatial and temporal scales needed to investigate the observable effects of waves generated by the deposition of energy and momentum into the Jovian atmosphere by an SL9-like impactor. As before, we use the three-dimensional, parallel hydrodynamics code ZEUS-MP 2 (Hayes et al. 2006, ApJS 165, 188) to conduct our simulations. Wave characteristics are tracked throughout these simulations. Of particular interest are the wave speeds and wave positions in the atmosphere as a function of time. These properties are compared to the characteristics of the HST rings to see if shock wave behavior within one hour of impact is consistent with waves observed at one hour post-impact and beyond (Hammel et al. 1995, Science 267, 1288). This research was supported by National Science Foundation Grant AST-1109729 and NASA Planetary Atmospheres Program Grant
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
Simulation of ultrasonic wave propagation in welds using ray-based methods
NASA Astrophysics Data System (ADS)
Gardahaut, A.; Jezzine, K.; Cassereau, D.; Leymarie, N.
2014-04-01
Austenitic or bimetallic welds are particularly difficult to inspect due to their anisotropic and inhomogeneous properties. In this paper, we present a ray-based method to simulate the propagation of ultrasonic waves in such structures, taking into account their internal properties. This method is applied to a smooth representation of the grain orientation in the weld. The propagation model consists of solving the eikonal and transport equations in an inhomogeneous anisotropic medium. Simulation results are presented and compared to finite elements for a distribution of grain orientation expressed in closed form.
Simulations of elastic wave propagation through Voronoi polycrystals
NASA Astrophysics Data System (ADS)
Turner, Joseph A.; Ghoshal, Goutam
2002-11-01
The scattering of elastic waves in polycrystalline media is relevant for ultrasonic materials characterization and nondestructive evaluation. Ultrasonic attenuation and backscatter are routinely used for extracting microstructural parameters such as grain size and grain texture. The inversion of experimental data requires robust ultrasonic scattering models. Such models are often idealizations of real media through assumptions such as constant density, single grain size, and randomness hypotheses. The accuracy and limits of applicability of these models cannot be fully tested due to practical limits of real materials processing. Here, this problem is examined in terms of numerical simulations of elastic waves through two-dimensional polycrystals. The numerical models are based on the Voronoi polycrystal. Voronoi tessellations have been shown to model accurately the microstructure of polycrystalline metals and ceramics. The Voronoi cells are discretized using finite elements and integrated directly in time. The material properties of the individual Voronoi cells are chosen according to appropriate distributions; here, cubic crystals that are statistically isotropic are used. Results are presented and compared with scattering theories. Issues relevant to spatial/ensemble averaging are also discussed. These simulations will provide insight into the attenuation models relevant for polycrystalline materials. [Work supported by DOE.]
Hybrid simulations of rotational discontinuities. [Alfven wave propagation in astrophysics
NASA Technical Reports Server (NTRS)
Goodrich, C. C.; Cargill, P. J.
1991-01-01
1D hybrid simulations of rotational discontinuities (RDs) are presented. When the angle between the discontinuity normal and the magnetic field (theta-BN) is 30 deg, the RD broadens into a quasi-steady state of width 60-80 c/omega-i. The hodogram has a characteristic S-shape. When theta-BN = 60 deg, the RD is much narrower (10 c/omega-i). For right handed rotations, the results are similar to theta-BN = 30 deg. For left handed rotations, the RD does not evolve much from its initial conditions and the S-shape in the hodogram is much less visible. The results can be understood in terms of matching a fast mode wavelike structure upstream of the RD with an intermediate mode one downstream.
Design of a predictive targeting error simulator for MRI-guided prostate biopsy
NASA Astrophysics Data System (ADS)
Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor
2010-02-01
Multi-parametric MRI is a new imaging modality superior in quality to Ultrasound (US) which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the resulting targeting errors of simulated registrations between prostate volumes given user-provided parameters for organ deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and four Intel Core 2 Quad CPUs each with a speed of 2.40 GHz. Preliminary results of our simulation suggest the maximum tolerable segmentation error given the presence of a 5.0 mm wide small tumor is between 4-5 mm. We intend to validate these results via clinical trials as part of our ongoing work.
SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors
Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I
2014-06-01
Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per 56 image pairs. Also, 560 positive tests (with error) were performed, with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some combination of κ and τ. Results: 20mm{sup 2} errors with intensity altered by ≥20% could be reliably detected, as could 10mm{sup 2} errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
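The threshold-sweep logic behind this ROC analysis can be sketched as follows. This is a hedged toy model, not the study's actual VPD pipeline: the flat predicted image, noise level, error sizes, and a dose-difference-only pass test (standing in for full 3%/3mm gamma analysis with distance-to-agreement) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pass_fraction(predicted, measured, dose_tol=0.03):
    # Dose-difference-only surrogate for gamma analysis (no distance-to-agreement):
    # a pixel "passes" if it agrees with the prediction within dose_tol of max dose.
    return np.mean(np.abs(measured - predicted) <= dose_tol * predicted.max())

def measured_image(predicted, with_error, side=10, amp=0.2):
    m = predicted + 0.005 * rng.standard_normal(predicted.shape)  # detector noise
    if with_error:
        r, c = rng.integers(0, predicted.shape[0] - side, size=2)
        m[r:r + side, c:c + side] *= 1.0 + amp  # simulated rectangular fluence error
    return m

predicted = np.ones((64, 64))  # idealized flat predicted image
neg = [pass_fraction(predicted, measured_image(predicted, False)) for _ in range(56)]
pos = [pass_fraction(predicted, measured_image(predicted, True)) for _ in range(560)]

# Sweeping the pass-rate threshold tau traces out the ROC curve.
for tau in (0.999, 0.99, 0.90):
    tpr = np.mean([p < tau for p in pos])  # errored image correctly flagged
    fpr = np.mean([n < tau for n in neg])  # error-free image falsely flagged
    print(f"tau={tau:.3f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```

As in the abstract, whether an error is "detectable" depends jointly on the inserted error's size/amplitude and on the chosen threshold pair.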
Simulating underwater plasma sound sources to evaluate focusing performance and analyze errors
NASA Astrophysics Data System (ADS)
Ma, Tian; Huang, Jian-Guo; Lei, Kai-Zhuo; Chen, Jian-Feng; Zhang, Qun-Fei
2010-03-01
Focused underwater plasma sound sources are being applied in more and more fields. Focusing performance is one of the most important factors determining transmission distance and peak values of the pulsed sound waves. The sound source's components and focusing mechanism were all analyzed. A model was built in 3D Max and wave strength was measured on the simulation platform. Error analysis was fully integrated into the model so that the effects of processing errors and installation errors on sound focusing performance could be studied. Based on practical considerations, ways to limit the errors were proposed. The results of the error analysis should guide the design, machining, placement, debugging and application of underwater plasma sound sources.
Time-Sliced Thawed Gaussian Propagation Method for Simulations of Quantum Dynamics.
Kong, Xiangmeng; Markmann, Andreas; Batista, Victor S
2016-05-19
A rigorous method for simulations of quantum dynamics is introduced on the basis of concatenation of semiclassical thawed Gaussian propagation steps. The time-evolving state is represented as a linear superposition of closely overlapping Gaussians that evolve in time according to their characteristic equations of motion, integrated by fourth-order Runge-Kutta or velocity Verlet. The expansion coefficients of the initial superposition are updated after each semiclassical propagation period by implementing the Husimi Transform analytically in the basis of closely overlapping Gaussians. An advantage of the resulting time-sliced thawed Gaussian (TSTG) method is that it allows for full-quantum dynamics propagation without any kind of multidimensional integral calculation, or inversion of overlap matrices. The accuracy of the TSTG method is demonstrated as applied to simulations of quantum tunneling, showing quantitative agreement with benchmark calculations based on the split-operator Fourier transform method. PMID:26845486
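The split-operator Fourier transform method used as the benchmark above can be sketched in one dimension. This is a minimal illustration, not the TSTG method itself: it assumes ħ = m = ω = 1, a harmonic potential, and a coherent-state initial condition, with illustrative grid and step sizes.

```python
import numpy as np

# 1D split-operator (Strang) propagation on a periodic grid, hbar = m = 1.
n, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
dt, steps = 0.01, 500

V = 0.5 * x**2                                      # harmonic potential, omega = 1
psi = np.exp(-0.5 * (x - 2.0)**2).astype(complex)   # displaced ground state (coherent state)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

expV = np.exp(-0.5j * dt * V)       # half-step potential propagator
expT = np.exp(-0.5j * dt * k**2)    # full-step kinetic propagator (applied in k-space)

for _ in range(steps):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

norm = np.sum(np.abs(psi)**2) * dx
mean_x = np.sum(x * np.abs(psi)**2) * dx    # coherent state: <x>(t) = 2 cos(t)
print(f"norm={norm:.6f}  <x>={mean_x:.4f}  exact={2 * np.cos(dt * steps):.4f}")
```

For the harmonic oscillator the coherent state's center follows the classical trajectory, which gives a convenient accuracy check on the propagation.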
Numerical simulation of fracture rocks and wave propagation by means of fractal theory
Valle G., R. del
1994-12-31
A numerical approach was developed for the dynamic simulation of fractured rocks and wave propagation. Based on some ideas of percolation theory and fractal growth, a network of particles and springs represents the rock model. To simulate an inhomogeneous medium, the particles and springs have randomly distributed elastic parameters and are implemented in the dynamic Navier equation. Some of the springs snap according to criteria based on the applied confining stress, thereby creating a fractured rock consistent with the physical environment. The basic purpose of this research was to provide a method to construct a fractured rock under confining stress conditions together with the wave propagation imposed on the model. Such models provide a better understanding of the behavior of wave propagation in fractured media. The synthetic seismic data thus obtained can be used as a tool to develop methods for characterizing fractured rocks by means of geophysical inference.
Geant4 Simulations of SuperCDMS iZip Detector Charge Carrier Propagation
NASA Astrophysics Data System (ADS)
Agnese, Robert; Brandt, Daniel; Redl, Peter; Asai, Makoto; Faiez, Dana; Kelsey, Mike; Bagli, Enrico; Anderson, Adam; Schlupf, Chandler
2014-03-01
The SuperCDMS experiment uses germanium crystal detectors instrumented with ionization and phonon readout circuits to search for dark matter. In order to simulate the response of the detectors to particle interactions, the SuperCDMS Detector Monte Carlo (DMC) group has been implementing the processes governing electrons and phonons at low temperatures in Geant4. The charge portion of the DMC simulates oblique propagation of the electrons through the L-valleys, propagation of holes through the Γ-valleys, inter-valley scattering, and emission of Neganov-Luke phonons in a complex applied electric field. The field is calculated by applying a directed walk search on a tetrahedral mesh of known potentials and then interpolating the value. This talk will present an overview of the DMC status and a comparison of the charge portion of the DMC to experimental data on electron-hole pair propagation in germanium.
On the propagation of blobs in the magnetotail: MHD simulations
NASA Astrophysics Data System (ADS)
Birn, J.; Nakamura, R.; Hesse, M.
2013-09-01
Using three-dimensional magnetohydrodynamic (MHD) simulations of the magnetotail, we investigate the fate of entropy-enhanced localized magnetic flux tubes ("blobs"). Such flux tubes may be the result of a slippage process that also generates entropy-depleted flux tubes ("bubbles") or of a rapid localized energy increase, for instance, from wave absorption. We confirm the expectation that the entropy enhancement leads to a tailward motion and that the speed and distance traveled into the tail increase with the entropy enhancement, even though the blobs tend to break up into pieces. The vorticity on the outside of the blobs twists the magnetic field and generates field-aligned currents predominantly of region-2 sense (earthward on the dusk side and tailward on the dawn side), which might provide a possibility for remote identification from the ground. The breakup, however, leads to more turbulent flow patterns, associated with opposite vorticity and the generation of region-1 sense field-aligned currents of lower intensity but approximately equal integrated magnitude.
Analysis of transmission error effects on the transfer of real-time simulation data
NASA Technical Reports Server (NTRS)
Credeur, L.
1977-01-01
An analysis was made to determine the effect of transmission errors on the quality of data transferred from the Terminal Area Air Traffic Model to a remote site. Data formatting schemes feasible within the operational constraints of the data link were proposed, and their susceptibility to both random bit errors and noise bursts was investigated. It was shown that satisfactory reliability is achieved by a scheme that formats the simulation output into three data blocks, with the priority data triply redundant in the first block and a retransmission priority on that first block when it is received in error.
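The protective effect of triple redundancy can be illustrated with a bitwise two-of-three majority vote. This is a hedged sketch; the block size, bit-error rate, and byte-level layout are hypothetical, not the Terminal Area Air Traffic Model's actual format.

```python
import numpy as np

rng = np.random.default_rng(7)

def flip_bits(block, ber):
    # Independently flip each bit with probability ber (random bit errors).
    mask_bits = rng.random((block.size, 8)) < ber
    mask = np.packbits(mask_bits, axis=1).reshape(block.shape)
    return block ^ mask

def majority(a, b, c):
    # Bitwise two-of-three vote across the three received copies.
    return (a & b) | (a & c) | (b & c)

data = rng.integers(0, 256, size=10_000, dtype=np.uint8)  # priority data block
ber = 1e-3                                                # assumed bit-error rate
copies = [flip_bits(data, ber) for _ in range(3)]         # triply redundant transmission
voted = majority(*copies)

single_bad = int(np.count_nonzero(copies[0] != data))     # byte errors in one copy
voted_bad = int(np.count_nonzero(voted != data))          # byte errors after voting
print(single_bad, voted_bad)
```

A residual bit error after voting requires the same bit to be corrupted in at least two of the three copies, so for independent errors the residual rate falls roughly from ber to 3·ber².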
NASA Astrophysics Data System (ADS)
Gonçalves, L. D.; Rocco, E. M.; de Moraes, R. V.; Kuga, H. K.
2015-10-01
This paper aims to simulate part of the orbital trajectory of the Lunar Prospector mission in order to analyze the relevance of using a Kalman filter to estimate the trajectory. For this study the disturbance due to the lunar gravitational potential is considered using one of the most recent models, the LP100K model, which is based on spherical harmonics and considers degree and order up to the value 100. In order to simplify the expression of the gravitational potential and, consequently, to reduce the computational effort required in the simulation, lower values of degree and order are used in some cases. Following this aim, an analysis is made of the error introduced into the simulations when such values of degree and order are used to propagate the spacecraft trajectory and control. This analysis was done using the standard deviation that characterizes the uncertainty for each of the values of degree and order used in the LP100K model for the satellite orbit. With the uncertainty of the adopted gravity model known, lunar orbital trajectory simulations may be accomplished taking these uncertainty values into account. Furthermore, a Kalman filter was also used, which considers the sensor uncertainty that defines the satellite position at each step of the simulation and the model uncertainty, by means of the characteristic variance of the truncated gravity model. Thus, this procedure represents an effort to bring the results obtained using lower values of degree and order of the spherical harmonics closer to the results that would be attained if the maximum accuracy of the LP100K model were adopted. A comparison is also made between the error in the satellite position in the situation in which the Kalman filter is used and the situation in which it is not. The data for the comparison were obtained from the standard deviation in the velocity increment of the space vehicle.
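The role of the filter can be illustrated with a minimal linear Kalman filter. This is a toy one-dimensional constant-velocity tracker, not the paper's lunar dynamics; the covariances Q and R merely stand in for the truncated-gravity-model variance and the position-sensor uncertainty.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1D constant-velocity tracker: state = [position, velocity].
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-4 * np.eye(2)                    # process noise (model truncation uncertainty)
R = np.array([[0.5**2]])                # sensor variance

x_true = np.array([0.0, 1.0])
x_est = np.array([0.0, 0.0])
P = np.eye(2)

raw_err, filt_err = [], []
for _ in range(200):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, 0.5, size=1)   # noisy position measurement
    # Predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    raw_err.append(abs(z[0] - x_true[0]))
    filt_err.append(abs(x_est[0] - x_true[0]))

print(np.mean(raw_err), np.mean(filt_err))
```

The filtered position error is substantially smaller than the raw measurement error, which is the kind of comparison the abstract makes between the filtered and unfiltered cases.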
NASA Astrophysics Data System (ADS)
Taozheng
2015-08-01
In recent years, owing to the high stability and privacy of vortex beams, the optical vortex has become a hot spot in research on atmospheric optical transmission. We numerically investigate the propagation of vector elliptical vortex beams in a turbulent atmosphere. Numerical simulations are realized with random phase screens to model the vortex beam transport process in atmospheric turbulence, and are used to study the transmission characteristics of the vortex beam (light intensity, phase, polarization, etc.). Our simulation results show that the distortion of the vortex beam during atmospheric transmission is small, which makes the elliptical vortex beam a promising strategy for space communications.
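A common way to realize such random phase screens is FFT-based filtering of white noise with a Kolmogorov spectrum. The sketch below uses assumed parameters (grid size, spacing, Fried parameter r0) and omits the subharmonic correction for the lowest spatial frequencies.

```python
import numpy as np

rng = np.random.default_rng(5)

def kolmogorov_phase_screen(n=256, dx=0.01, r0=0.1):
    """FFT-based random phase screen with a Kolmogorov spectrum.

    n: grid points per side; dx: grid spacing [m]; r0: Fried parameter [m].
    The low-frequency (subharmonic) correction is omitted for brevity.
    """
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = np.inf                                # suppress the undefined DC term
    psd = 0.023 * r0 ** (-5 / 3) * f ** (-11 / 3)   # Kolmogorov phase PSD
    df = 1.0 / (n * dx)
    spec = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    spec *= np.sqrt(psd) * df
    return np.real(np.fft.ifft2(spec)) * n ** 2     # phase in radians

screen = kolmogorov_phase_screen()
# A beam field u sampled on the same grid acquires the screen as u * np.exp(1j * screen);
# alternating free-space propagation and phase screens gives the usual split-step model.
print(screen.shape, float(screen.std()))
```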
Simulation study of wakefield generation by two color laser pulses propagating in homogeneous plasma
Kumar Mishra, Rohit; Saroch, Akanksha; Jha, Pallavi
2013-09-15
This paper deals with a two-dimensional simulation of electric wakefields generated by two color laser pulses propagating in homogeneous plasma, using the VORPAL simulation code. The laser pulses are assumed to have a frequency difference equal to the plasma frequency. Simulation studies are performed for two similarly as well as oppositely polarized laser pulses, and the respective amplitudes of the generated longitudinal wakefields for the two cases are compared. Enhancement of the wake amplitude for the latter case is reported. This simulation study validates the analytical results presented by Jha et al. [Phys. Plasmas 20, 053102 (2013)].
GPU simulation of nonlinear propagation of dual band ultrasound pulse complexes
NASA Astrophysics Data System (ADS)
Kvam, Johannes; Angelsen, Bjørn A. J.; Elster, Anne C.
2015-10-01
In a new method of ultrasound imaging, called SURF imaging, dual band pulse complexes composed of overlapping low frequency (LF) and high frequency (HF) pulses are transmitted, where the frequency ratio LF:HF ˜ 1 : 20, and the relative bandwidths of both pulses are ˜ 50 - 70%. The LF pulse length is hence ˜ 20 times the HF pulse length. The LF pulse is used to nonlinearly manipulate the material elasticity observed by the co-propagating HF pulse. This produces nonlinear interaction effects that give more information on the propagation of the pulse complex. Due to the large difference in frequency and pulse length between the LF and the HF pulses, we have developed a dual level simulation where the LF pulse propagation is first simulated independent of the HF pulse, using a temporal sampling frequency matched to the LF pulse. A separate equation for the HF pulse is developed, where the presimulated LF pulse modifies the propagation velocity. The equations are adapted to parallel processing in a GPU, where nonlinear simulations of a typical HF beam of 10 MHz down to 40 mm are done in ˜ 2 s on a standard GPU. This simulation is hence very useful for studying the manipulation effect of the LF pulse on the HF pulse.
Lill, J V; Broughton, J Q
2000-06-19
The method of Parrinello and Rahman is generalized to include slip in addition to deformation of the simulation cell. Equations of motion are derived, and a microscopic expression for traction is introduced. Lagrangian constraints are imposed so that the combination of deformation and slip conform to the invariant plane shear characteristic of martensites. Simulation of a model transformation demonstrates the nucleation and propagation of a glissile dislocation interface. PMID:10991054
Characterizing the propagation of gravity waves in 3D nonlinear simulations of solar-like stars
NASA Astrophysics Data System (ADS)
Alvan, L.; Strugarek, A.; Brun, A. S.; Mathis, S.; Garcia, R. A.
2015-09-01
Context. The revolution of helio- and asteroseismology provides access to the detailed properties of stellar interiors by studying the star's oscillation modes. Among them, gravity (g) modes are formed by constructive interferences between progressive internal gravity waves (IGWs), propagating in stellar radiative zones. Our new 3D nonlinear simulations of the interior of a solar-like star allow us to study the excitation, propagation, and dissipation of these waves. Aims: The aim of this article is to clarify our understanding of the behavior of IGWs in a 3D radiative zone and to provide a clear overview of their properties. Methods: We use a method of frequency filtering that reveals the path of individual gravity waves of different frequencies in the radiative zone. Results: We are able to identify the region of propagation of different waves in 2D and 3D, to compare them to the linear raytracing theory, and to distinguish between propagative and standing waves (g-modes). We also show that the energy carried by waves is distributed in different planes in the sphere, depending on their azimuthal wave number. Conclusions: We are able to isolate individual IGWs from a complex spectrum and to study their propagation in space and time. In particular, we highlight in this paper the necessity of studying the propagation of waves in 3D spherical geometry, since the distribution of their energy is not equipartitioned in the sphere.
Simulation of Ocean-Generated Microseismic Noise Propagation in the North-East Atlantic Ocean
NASA Astrophysics Data System (ADS)
Ying, Y.; Bean, C. J.; Lokmer, I.; Faure, T.
2013-12-01
Ocean-generated microseisms are small ground oscillations associated with interactions between the solid Earth and ocean water waves. The microseismic noise field is mostly composed of surface waves, whose energy propagates along the ocean floor predominantly in the form of Rayleigh waves, though some Love waves are also present. Microseisms pick up information about the medium along their propagation paths through the interaction between the seismic waves and the structure. Recently, seismologists have become increasingly interested in using cross-correlations of continuously recorded microseismic noise to retrieve information about the Earth's structure. In order to use this information well, it is important to identify the rich noise-source regions in the ocean and to quantify the propagation of microseisms from their origins to land-based seismic stations. In this work, we characterize how a microseism propagates along a fluid-solid interface through numerical simulations, in which a North-East Atlantic Ocean model is adopted and a microseism is generated on the bottom of the deep ocean with the expected source mechanism. The spectral element method is used to simulate coupled acoustic/elastic wave propagation on an unstructured mesh, and the coupling between fluid and solid regions is accommodated by a domain decomposition method. The effects of crustal structure, sediment layer, bathymetry, and ocean load on microseismic wave propagation are examined, with special attention paid to the fluid-solid coupling. We find that microseismic waves are highly dispersive when propagating in the ocean environment.
Sampling errors in free energy simulations of small molecules in lipid bilayers.
Neale, Chris; Pomès, Régis
2016-10-01
Free energy simulations are a powerful tool for evaluating the interactions of molecular solutes with lipid bilayers as mimetics of cellular membranes. However, these simulations are frequently hindered by systematic sampling errors. This review highlights recent progress in computing free energy profiles for inserting molecular solutes into lipid bilayers. Particular emphasis is placed on a systematic analysis of the free energy profiles, identifying the sources of sampling errors that reduce computational efficiency, and highlighting methodological advances that may alleviate sampling deficiencies. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg. PMID:26952019
Impacts of Characteristics of Errors in Radar Rainfall Estimates for Rainfall-Runoff Simulation
NASA Astrophysics Data System (ADS)
KO, D.; PARK, T.; Lee, T. S.; Shin, J. Y.; Lee, D.
2015-12-01
For flood prediction, weather radar has been commonly employed to measure the amount of precipitation and its spatial distribution. However, rainfall estimated from radar contains uncertainty caused by errors such as beam blockage and ground clutter. Although previous studies have focused on removing errors from radar data, it is crucial to evaluate the runoff volumes, which are influenced primarily by the radar errors. Furthermore, the resolution of rainfall modeled in previous studies for rainfall uncertainty analysis or distributed hydrological simulation is too coarse for real applications. Therefore, in the current study, we tested the effects of radar rainfall errors on rainfall-runoff with a high-resolution approach, called the spatial error model (SEM). Synthetic generation of random and cross-correlated radar errors was employed as the SEM. A number of events for the Nam River dam region were tested to investigate the peak discharge from a basin according to error variance. The results indicate that the dependent error brings much higher variations in peak discharge than the independent random error. To further investigate the effect of the magnitude of cross-correlation between radar errors, different magnitudes of spatial cross-correlation were employed for the rainfall-runoff simulation. The results demonstrate that stronger correlation leads to higher variation of peak discharge, and vice versa. We conclude that the error structure in radar rainfall estimates significantly affects prediction of the runoff peak. Therefore, efforts must be made not only to remove the radar rainfall error itself but also to weaken the cross-correlation structure of radar errors in order to forecast flood events more accurately. Acknowledgements This research was supported by a grant from a Strategic Research Project (Development of Flood Warning and Snowfall Estimation Platform Using Hydrological Radars), which
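Generating spatially cross-correlated error fields via Cholesky decomposition (the approach also used in the radar-error studies above) can be sketched as follows. The station layout, exponential correlation model, correlation length, and error magnitude are illustrative assumptions, not values from the Nam River dam dataset.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical station locations on a small grid [km].
n = 25
xy = rng.uniform(0.0, 20.0, size=(n, 2))
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

corr_len = 5.0                                   # assumed correlation length [km]
C = np.exp(-d / corr_len)                        # exponential spatial correlation model
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # jitter for numerical stability

# One realization of a spatially correlated multiplicative error field,
# imposed on a uniform 10 mm/h radar rainfall field.
z = Lc @ rng.standard_normal(n)                  # correlated standard normals
sigma = 0.3                                      # assumed error standard deviation
rain_true = np.full(n, 10.0)
rain_perturbed = rain_true * np.exp(sigma * z - 0.5 * sigma**2)  # mean-preserving lognormal

print(rain_perturbed.round(2))
```

Because `Lc @ z` has covariance `Lc @ Lc.T = C`, an ensemble of such realizations reproduces the prescribed spatial correlation; weakening the correlation corresponds to shortening `corr_len`.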
Fast error simulation of optical 3D measurements at translucent objects
NASA Astrophysics Data System (ADS)
Lutzke, P.; Kühmstedt, P.; Notni, G.
2012-09-01
The scan results of optical 3D measurements at translucent objects deviate from the real object's surface. This error is caused by the fact that light is scattered in the object's volume and is not exclusively reflected at its surface. A few approaches have been made to separate the surface-reflected light from the volume-scattered light. For smooth objects the surface-reflected light is dominantly concentrated in the specular direction and can only be observed from a point in that direction. Thus the separation either leads to measurement results that only create data for near-specular directions or provides data from poorly separated areas. To ensure the flexibility and precision of optical 3D measurement systems for translucent materials it is necessary to enhance the understanding of the error-forming process. For this purpose a technique for simulating the 3D measurement at translucent objects is presented. A simple error model is briefly outlined and extended to an efficient simulation environment based upon ordinary raytracing methods. For comparison, the results of a Monte Carlo simulation are presented. Only a few material and object parameters are needed for the raytracing simulation approach. The attempt at in-system collection of these material- and object-specific parameters is illustrated. The main concept of developing an error-compensation method based on the simulation environment and the collected parameters is described. The complete procedure uses both the surface-reflected and the volume-scattered light for further processing.
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
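The step-doubling idea behind such adaptive timestep selection can be sketched on a deterministic reaction-diffusion surrogate. This is a mean-field toy, not the stochastic RDME or the DFSP solver: the grid, rates, and tolerance are assumptions, and the local error of the first-order Lie splitting is estimated by comparing one step of size dt against two steps of size dt/2.

```python
import numpy as np

n, D, k = 64, 1e-3, 8.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
kx2 = (2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)) ** 2

def diffuse(u, dt):
    # exact diffusion substep via Fourier transform (periodic domain)
    return np.real(np.fft.ifft(np.exp(-D * kx2 * dt) * np.fft.fft(u)))

def react(u, dt):
    # exact logistic reaction substep: du/dt = k u (1 - u)
    return u / (u + (1.0 - u) * np.exp(-k * dt))

def lie_step(u, dt):
    # first-order (Lie) operator splitting: diffusion, then reaction
    return react(diffuse(u, dt), dt)

u = 0.5 + 0.4 * np.sin(2 * np.pi * x)
t, T, dt, tol = 0.0, 0.5, 0.05, 1e-4
accepted = 0
while t < T - 1e-12:
    dt = min(dt, T - t)
    full = lie_step(u, dt)                        # one step of size dt
    half = lie_step(lie_step(u, dt / 2), dt / 2)  # two steps of size dt/2
    err = np.max(np.abs(full - half))             # local splitting-error estimate
    if err <= tol:
        u, t = half, t + dt                       # accept the finer solution
        accepted += 1
    # first-order method: local error ~ dt^2, hence the exponent 1/2
    dt *= 0.9 * min(2.0, max(0.2, np.sqrt(tol / (err + 1e-16))))

print(accepted, float(u.min()), float(u.max()))
```

Steps whose estimated splitting error exceeds the tolerance are rejected and retried with a smaller dt, so the timestep adapts automatically instead of being guessed by the user.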
Baker, A.L.
1985-01-01
The use of a computer program EPIC (Error Propagation/Inquiry Code) will be discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in the Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple straightforward calculations assuming a process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background. However, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated using a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs.
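The final algebraic sum of variances can be illustrated as follows. This is a hedged sketch with hypothetical measurement points and error magnitudes; EPIC's six specific variance equations are not reproduced, and each point's variance is simply assumed to combine a relative random and a relative systematic component.

```python
import numpy as np

# Hypothetical measurement points around one materials balance area (MBA).
# name: (amount [kg], relative random std, relative systematic std)
measurements = {
    "receipts":  (100.0, 0.002, 0.005),
    "shipments": ( 95.0, 0.002, 0.005),
    "begin_inv": ( 40.0, 0.010, 0.003),
    "end_inv":   ( 42.0, 0.010, 0.003),
}

total_var = 0.0
for name, (amount, r, s) in measurements.items():
    var = (amount * r) ** 2 + (amount * s) ** 2   # variance at this measurement point
    total_var += var                              # simple algebraic sum of variances

sigma = np.sqrt(total_var)
balance = 40.0 + 100.0 - 95.0 - 42.0              # inventory difference (ID) [kg]
print(f"ID = {balance} kg, sigma(ID) = {sigma:.3f} kg")
```

An inventory difference is then judged against this propagated sigma, e.g. flagged if it exceeds a few multiples of sigma(ID).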
Hashemiyan, Z; Packo, P; Staszewski, W J; Uhl, T
2016-01-01
Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808
NASA Astrophysics Data System (ADS)
Ishmuratov, I. K.; Baibekov, E. I.
2015-12-01
We investigate the possibility of restoring transient nutations of electron spin centers embedded in a solid using specific composite pulse sequences developed previously for use in nuclear magnetic resonance spectroscopy. We treat two types of systematic errors simultaneously: (i) rotation angle errors related to the spatial distribution of microwave field amplitude in the sample volume, and (ii) off-resonance errors related to the spectral distribution of Larmor precession frequencies of the electron spin centers. Our direct simulations of the transient signal in erbium- and chromium-doped CaWO4 crystal samples with and without error corrections show that the application of the selected composite pulse sequences can substantially increase the lifetime of Rabi oscillations. Finally, we discuss the limits of applicability of the studied pulse sequences in solid-state electron paramagnetic resonance spectroscopy.
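A standard example of a composite pulse robust against rotation-angle errors is the Wimperis BB1 sequence, sketched here for a single spin 1/2. The sequence choice and error model are illustrative: the paper's actual sequences and the off-resonance error channel are not reproduced.

```python
import numpy as np

# Spin-1/2 rotation by theta about an axis at angle phi in the xy-plane.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi):
    axis = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def naive(theta, eps):
    # single pulse with fractional rotation-angle (amplitude) error eps
    return rot(theta * (1 + eps), 0.0)

def bb1(theta, eps):
    # Wimperis BB1: theta pulse followed by pi, 2pi, pi correction pulses
    # at phases phi1, 3*phi1, phi1 with phi1 = arccos(-theta / (4 pi)).
    phi1 = np.arccos(-theta / (4 * np.pi))
    W = rot(np.pi * (1 + eps), phi1) @ rot(2 * np.pi * (1 + eps), 3 * phi1) \
        @ rot(np.pi * (1 + eps), phi1)
    return W @ rot(theta * (1 + eps), 0.0)

def fidelity(U, V):
    return abs(np.trace(U.conj().T @ V)) / 2

theta, eps = np.pi / 2, 0.05
target = rot(theta, 0.0)
inf_naive = 1 - fidelity(target, naive(theta, eps))
inf_bb1 = 1 - fidelity(target, bb1(theta, eps))
print(f"infidelity: naive={inf_naive:.2e}  BB1={inf_bb1:.2e}")
```

Because the same fractional error eps multiplies every pulse, the correction pulses cancel the error of the main pulse to leading order, which is the mechanism that extends Rabi-oscillation lifetimes in the presence of microwave-field inhomogeneity.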
Watanabe, Y. Abe, S.
2014-06-15
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation
NASA Astrophysics Data System (ADS)
Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei
2015-10-01
In order to precisely measure dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optics axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the timing of the control data, an error control method was devised that lowered the synchronization delay to 21μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting. CCD measurement error and drift were controlled below 0.26mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065mrad. Finally, the optics axis drift of the laser was analyzed and measured and did not exceed 0.06mrad. The measurement results indicate that the maximum of the errors and drifts of the measurement methodology is less than 0.275mrad. The methodology can satisfy measurement of dynamic angle of sight with higher precision and at larger scale.
Using Simulation to Address Hierarchy-Related Errors in Medical Practice
Calhoun, Aaron William; Boone, Megan C; Porter, Melissa B; Miller, Karen H
2014-01-01
Objective: Hierarchy, the unavoidable authority gradients that exist within and between clinical disciplines, can lead to significant patient harm in high-risk situations if not mitigated. High-fidelity simulation is a powerful means of addressing this issue in a reproducible manner, but participant psychological safety must be assured. Our institution experienced a hierarchy-related medication error that we subsequently addressed using simulation. The purpose of this article is to discuss the implementation and outcome of these simulations. Methods: Script and simulation flowcharts were developed to replicate the case. Each session included the use of faculty misdirection to precipitate the error. Care was taken to assure psychological safety via carefully conducted briefing and debriefing periods. Case outcomes were assessed using the validated Team Performance During Simulated Crises Instrument. Gap analysis was used to quantify team self-insight. Session content was analyzed via video review. Results: Five sessions were conducted (3 in the pediatric intensive care unit and 2 in the Pediatric Emergency Department). The team was unsuccessful at addressing the error in 4 (80%) of 5 cases. Trends toward lower communication scores (3.4/5 vs 2.3/5), as well as poor team self-assessment of communicative ability, were noted in unsuccessful sessions. Learners had a positive impression of the case. Conclusions: Simulation is a useful means to replicate hierarchy error in an educational environment. This methodology was viewed positively by learner teams, suggesting that psychological safety was maintained. Teams that did not address the error successfully may have impaired self-assessment ability in the communication skill domain. PMID:24867545
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2014-12-01
Physically based models provide insights into key hydrologic processes, but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology. Here we employ global sensitivity analysis to explore how different error types (i.e., bias, random errors), different error distributions, and different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use Sobol' global sensitivity analysis, which is typically used for model parameters, but adapted here for testing model sensitivity to co-existing errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 520 000 Monte Carlo simulations across four sites and four different scenarios. Model outputs were generally (1) more sensitive to forcing biases than random errors, (2) less sensitive to forcing error distributions, and (3) sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a significant impact depending on forcing error magnitudes. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
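First-order Sobol' indices of the kind used above can be estimated non-intrusively with the Saltelli sampling scheme. A minimal stdlib-Python sketch on a toy additive model (the model and its coefficients are illustrative assumptions, not the Utah Energy Balance model):

```python
import random

random.seed(0)

def model(x):
    # Toy additive model: coefficients stand in for the differing
    # influence of, e.g., precipitation, temperature, and radiation errors.
    a = (4.0, 2.0, 1.0)
    return sum(ai * xi for ai, xi in zip(a, x))

def sobol_first_order(f, d, n):
    """First-order Sobol' indices via the Saltelli estimator."""
    A = [[random.random() for _ in range(d)] for _ in range(n)]
    B = [[random.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(d):
        # AB_i: matrix A with column i taken from B.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        Si = sum(fb * (fab - fa)
                 for fb, fab, fa in zip(fB, fABi, fA)) / n / var
        S.append(Si)
    return S

S = sobol_first_order(model, d=3, n=4096)
print([round(s, 2) for s in S])
```

For this additive model the exact indices are a_i^2 / sum(a^2) = 16/21, 4/21, 1/21, so the estimates should rank the inputs accordingly.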
NASA Technical Reports Server (NTRS)
Matda, Y.; Crawford, F. W.
1974-01-01
An economical low noise plasma simulation model is applied to a series of problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of magnetic field. The model is described and tested, first in the absence of an applied signal, and then with a small amplitude perturbation, to establish the low noise features and to verify the theoretical linear dispersion relation at wave energy levels as low as 10⁻⁶ of the plasma thermal energy. The method is then used to study propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories. The additional phenomena of sideband instability and satellite growth, stimulated by large amplitude wave propagation and the resulting particle trapping, are described.
Gordon, J. J.; Crimaldi, A. J.; Hagan, M.; Moore, J.; Siebers, J. V.
2007-01-15
This work evaluates: (i) the size of random and systematic setup errors that can be absorbed by 5 mm clinical target volume (CTV) to planning target volume (PTV) margins in prostate intensity modulated radiation therapy (IMRT); (ii) agreement between simulation results and published margin recipes; and (iii) whether shifting contours with respect to a static dose distribution accurately predicts dose coverage due to setup errors. In 27 IMRT treatment plans created with 5 mm CTV-to-PTV margins, random setup errors with standard deviations (SDs) of 1.5, 3, 5 and 10 mm were simulated by fluence convolution. Systematic errors with identical SDs were simulated using two methods: (a) shifting the isocenter and recomputing dose (isocenter shift), and (b) shifting patient contours with respect to the static dose distribution (contour shift). Maximum tolerated setup errors were evaluated such that 90% of plans had target coverage equal to the planned PTV coverage. For coverage criteria consistent with published margin formulas, plans with 5 mm margins were found to absorb combined random and systematic SDs ≈ 3 mm. Published recipes require margins of 8-10 mm for 3 mm SDs. For the prostate IMRT cases presented here a 5 mm margin would suffice, indicating that published recipes may be pessimistic. We found significant errors in individual plan doses given by the contour shift method. However, dose population plots (DPPs) given by the contour shift method agreed with the isocenter shift method for all structures except the nodal CTV and small bowel. For the nodal CTV, contour shift DPP differences were due to the structure moving outside the patient. Small bowel DPP errors were an artifact of large relative differences at low doses. Estimating individual plan doses by shifting contours with respect to a static dose distribution is not recommended. However, approximating DPPs is acceptable, provided care is taken with structures such as the nodal CTV which lie close
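Published margin recipes of the kind referred to here are typified by the van Herk formula M = 2.5Σ + 0.7σ (Σ the systematic SD, σ the random SD). Whether this is among the specific recipes the authors compared is an assumption, but it reproduces the quoted 8-10 mm range for 3 mm SDs:

```python
def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
    # van Herk CTV-to-PTV margin recipe: M = 2.5*Sigma + 0.7*sigma,
    # derived so that 90% of patients receive a minimum CTV dose
    # of 95% of the prescription.
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

m = van_herk_margin(3.0, 3.0)
print(f"recipe margin for 3 mm SDs: {m:.1f} mm")  # -> recipe margin for 3 mm SDs: 9.6 mm
```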
Measurement and simulation of clock errors from resource-constrained embedded systems
NASA Astrophysics Data System (ADS)
Collett, M. A.; Matthews, C. E.; Esward, T. J.; Whibberley, P. B.
2010-07-01
Resource-constrained embedded systems such as wireless sensor networks are becoming increasingly sought-after in a range of critical sensing applications. Hardware for such systems is typically developed as a general tool, intended for research and flexibility. These systems often have unexpected limitations and sources of error when being implemented for specific applications. We investigate via measurement and simulation the output of the onboard clock of a Crossbow MICAz testbed, comprising a quartz oscillator accessed via a combination of hardware and software. We show that the clock output available to the user suffers a number of instabilities and errors. Using a simple software simulation of the system based on a series of nested loops, we identify the source of each component of the error, finding that there is a 7.5 × 10⁻⁶ probability that a given oscillation from the governing crystal will be miscounted, resulting in frequency jitter over a 60 µHz range.
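The quoted miscount probability translates directly into a mean fractional frequency error of the same order, which a short Monte Carlo confirms. The nominal crystal frequency below is an illustrative assumption (the paper's hardware value is not given here), and the per-cycle Bernoulli miscounts are approximated by Poisson counts:

```python
import math
import random

random.seed(1)

P_MISCOUNT = 7.5e-6   # per-oscillation miscount probability (from the abstract)
F_NOMINAL = 32768.0   # assumed crystal frequency, Hz (illustrative, not from the paper)
INTERVAL_S = 1.0      # counting interval

def poisson(lam, rng):
    # Knuth's method; adequate for the small means used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

lam = P_MISCOUNT * F_NOMINAL * INTERVAL_S  # expected miscounts per interval
trials = 20000
frac_errors = []
for _ in range(trials):
    miscounts = poisson(lam, random)
    # Each miscount is assumed here to drop one counted cycle.
    measured = F_NOMINAL - miscounts / INTERVAL_S
    frac_errors.append((F_NOMINAL - measured) / F_NOMINAL)

mean_frac = sum(frac_errors) / trials
print(f"mean fractional frequency error ~ {mean_frac:.2e}")
```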
Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2010-01-01
The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.
Effects of Crew Resource Management Training on Medical Errors in a Simulated Prehospital Setting
ERIC Educational Resources Information Center
Carhart, Elliot D.
2012-01-01
This applied dissertation investigated the effect of crew resource management (CRM) training on medical errors in a simulated prehospital setting. Specific areas addressed by this program included situational awareness, decision making, task management, teamwork, and communication. This study is believed to be the first investigation of CRM…
Xiao, Xifeng; Voelz, David G; Toselli, Italo; Korotkova, Olga
2016-05-20
Experimental and theoretical work has shown that atmospheric turbulence can exhibit "non-Kolmogorov" behavior including anisotropy and modifications of the classically accepted spatial power spectral slope, -11/3. In typical horizontal scenarios, atmospheric anisotropy implies that the variations in the refractive index are more spatially correlated in both horizontal directions than in the vertical. In this work, we extend Gaussian beam theory for propagation through Kolmogorov turbulence to the case of anisotropic turbulence along the horizontal direction. We also study the effects of different spatial power spectral slopes on the beam propagation. A description is developed for the average beam intensity profile, and the results for a range of scenarios are demonstrated for the first time with a wave optics simulation and a spatial light modulator-based laboratory benchtop counterpart. The theoretical, simulation, and benchtop intensity profiles show good agreement and illustrate that an elliptically shaped beam profile can develop upon propagation. For stronger turbulent fluctuation regimes and larger anisotropies, the theory predicts a slightly more elliptical form of the beam than is generated by the simulation or benchtop setup. The theory also predicts that without an outer scale limit, the beam width becomes unbounded as the power spectral slope index α approaches a maximum value of 4. This behavior is not seen in the simulation or benchtop results because the numerical phase screens used for these studies do not model the unbounded wavefront tilt component implied in the analytic theory. PMID:27411135
PUQ: A code for non-intrusive uncertainty propagation in computer simulations
NASA Astrophysics Data System (ADS)
Hunt, Martin; Haley, Benjamin; McLennan, Michael; Koslowski, Marisol; Murthy, Jayathi; Strachan, Alejandro
2015-09-01
We present a software package for the non-intrusive propagation of uncertainties in input parameters through computer simulation codes or mathematical models and associated analysis; we demonstrate its use to drive micromechanical simulations using a phase field approach to dislocation dynamics. The PRISM uncertainty quantification framework (PUQ) offers several methods to sample the distribution of input variables and to obtain surrogate models (or response functions) that relate the uncertain inputs with the quantities of interest (QoIs); the surrogate models are ultimately used to propagate uncertainties. PUQ requires minimal changes in the simulation code, just those required to annotate the QoI(s) for its analysis. Collocation methods include Monte Carlo, Latin Hypercube and Smolyak sparse grids and surrogate models can be obtained in terms of radial basis functions and via generalized polynomial chaos. PUQ uses the method of elementary effects for sensitivity analysis in Smolyak runs. The code is available for download and also available for cloud computing in nanoHUB. PUQ orchestrates runs of the nanoPLASTICITY tool at nanoHUB where users can propagate uncertainties in dislocation dynamics simulations using simply a web browser, without downloading or installing any software.
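The non-intrusive workflow that PUQ automates (sample the uncertain inputs, run the unmodified simulation, collect the QoIs) can be sketched as below. This illustrates the pattern with a toy model and a hand-rolled Latin hypercube sampler; it is not PUQ's actual API:

```python
import random
import statistics

random.seed(42)

def latin_hypercube(n, bounds):
    """One LHS design of size n: exactly one point per stratum per input."""
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n)]
    for d, (lo, hi) in enumerate(bounds):
        # One uniform draw inside each of n equal strata, then shuffle.
        strata = [(i + random.random()) / n for i in range(n)]
        random.shuffle(strata)
        for i in range(n):
            samples[i][d] = lo + (hi - lo) * strata[i]
    return samples

def simulation(params):
    # Stand-in for an external simulation code run unmodified
    # (non-intrusive): a toy model returning one quantity of interest.
    e_modulus, load = params
    return load / e_modulus  # "strain"-like QoI

bounds = [(90.0, 110.0), (0.8, 1.2)]  # uncertain inputs (illustrative)
runs = latin_hypercube(200, bounds)
qois = [simulation(p) for p in runs]
print(f"QoI mean={statistics.mean(qois):.4f}  sd={statistics.stdev(qois):.4f}")
```

The QoI statistics (or a surrogate fitted to the (input, QoI) pairs) then carry the propagated uncertainty.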
Sophocleous, M.A.
1991-01-01
The hypothesis is explored that groundwater-level rises in the Great Bend Prairie aquifer of Kansas are caused not only by water percolating downward through the soil but also by pressure pulses from stream flooding that propagate in a translatory motion through numerous high hydraulic diffusivity buried channels crossing the Great Bend Prairie aquifer in an approximately west to east direction. To validate this hypothesis, two transects of wells in a north-south and east-west orientation crossing and alongside some paleochannels in the area were instrumented with water-level-recording devices; streamflow data from all area streams were obtained from available stream-gaging stations. A theoretical approach was also developed to conceptualize numerically the stream-aquifer processes. The field data and numerical simulations provided support for the hypothesis. Thus, observation wells located along the shoulders or in between the inferred paleochannels show little or no fluctuations and no correlations with streamflow, whereas wells located along paleochannels show high water-level fluctuations and good correlation with the streamflows of the stream connected to the observation site by means of the paleochannels. The stream-aquifer numerical simulation results demonstrate that the larger the hydraulic diffusivity of the aquifer, the larger the extent of pressure pulse propagation and the faster the propagation speed. The conceptual simulation results indicate that long-distance propagation of stream floodwaves (of the order of tens of kilometers) through the Great Bend aquifer is indeed feasible with plausible stream and aquifer parameters. The sensitivity analysis results indicate that the extent and speed of pulse propagation is more sensitive to variations of stream roughness (Manning's coefficient) and stream channel slope than to any aquifer parameter. © 1991.
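The scaling behind "larger diffusivity, faster and farther propagation" is the diffusive time scale t ~ L²/D. A hedged sketch using the textbook solution for a sudden stage change at the boundary of a semi-infinite confined aquifer (all numbers illustrative, not the Great Bend parameters):

```python
import math

def response_fraction(x_m, t_days, D):
    """Head response at distance x to a sudden stream-stage step in a
    semi-infinite confined aquifer: h/h0 = erfc(x / (2*sqrt(D*t)))."""
    return math.erfc(x_m / (2.0 * math.sqrt(D * t_days)))

def time_to_fraction(x_m, D, frac=0.10):
    # Bisection for the time at which the head rise at x reaches
    # `frac` of the boundary step (response grows monotonically with t).
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if response_fraction(x_m, mid, D) < frac:
            lo = mid
        else:
            hi = mid
    return hi

L = 10_000.0  # observation distance along a paleochannel, m (illustrative)
for D in (1e5, 1e7):  # hydraulic diffusivity T/S, m^2/day (illustrative)
    t = time_to_fraction(L, D)
    print(f"D={D:.0e} m^2/day -> 10% head response after {t:.1f} days")
```

Because t scales as 1/D, a hundredfold increase in diffusivity shortens the arrival time a hundredfold, which is the qualitative result the simulations report.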
Simulating Reflective Propagating Slow-wave/flow in a Flaring Loop
NASA Astrophysics Data System (ADS)
Fang, X.
2015-12-01
Quasi-periodic propagating intensity disturbances have been observed in large coronal loops in EUV images for over a decade, and are widely accepted to be slow magnetosonic waves. However, spectroscopic observations from Hinode/EIS revealed their association with persistent coronal upflows, making this interpretation debatable. We perform a 2.5D magnetohydrodynamic simulation to imitate the chromospheric evaporation and the following reflected patterns in a post-flare loop. Our model encompasses the corona, transition region, and chromosphere. We demonstrate that the quasi-periodic propagating intensity variations captured by our synthesized AIA 131 and 94 Å emission images match the previous observations well. With particle tracers in the simulation, we confirm that these quasi-periodic propagating intensity variations consist of reflected slow mode waves and mass flows with an average speed of 310 km/s in an 80 Mm length loop with an average temperature of 9 MK. With the synthesized Doppler shift velocity and intensity maps in SUMER Fe XIX line emission, we confirm that these reflected slow mode waves are propagating waves.
NASA Astrophysics Data System (ADS)
Carré, M.; Sachs, J. P.; Wallace, J. M.; Favier, C.
2012-03-01
Quantitative reconstructions of the past climate statistics from geochemical coral or mollusk records require quantified error bars in order to properly interpret the amplitude of the climate change and to perform meaningful comparisons with climate model outputs. We introduce here a more precise categorization of reconstruction errors, differentiating the error bar due to the proxy calibration uncertainty from the standard error due to sampling and variability in the proxy formation process. Then, we propose a numerical approach based on Monte Carlo simulations with surrogate proxy-derived climate records. These are produced by perturbing a known time series in a way that mimics the uncertainty sources in the proxy climate reconstruction. A freely available algorithm, MoCo, was designed to be parameterized by the user and to calculate realistic systematic and standard errors of the mean and the variance of the annual temperature, and of the mean and the variance of the temperature seasonality reconstructed from marine accretionary archive geochemistry. In this study, the algorithm is used for sensitivity experiments in a case study to characterize and quantitatively evaluate the sensitivity of systematic and standard errors to sampling size, stochastic uncertainty sources, archive-specific biological limitations, and climate non-stationarity. The results of the experiments yield an illustrative example of the range of variations of the standard error and the systematic error in the reconstruction of climate statistics in the Eastern Tropical Pacific. Thus, we show that the sample size and the climate variability are the main sources of the standard error. The experiments allowed the identification and estimation of systematic bias that would not otherwise be detected because of limited modern datasets. Our study demonstrates that numerical simulations based on Monte Carlo analyses are a simple and powerful approach to improve the understanding of the proxy records
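The Monte Carlo logic behind MoCo-style error estimation can be sketched in a few lines: perturb a known "true" series the way proxy sampling and calibration would, reconstruct the target statistic many times, and read the systematic error (bias) and standard error off the ensemble. All numbers below are illustrative assumptions, not MoCo parameters:

```python
import math
import random
import statistics

random.seed(7)

# Known "true" monthly SST series (°C): mean 25, seasonal amplitude 3, 30 years.
true_series = [25.0 + 3.0 * math.sin(2 * math.pi * m / 12) for m in range(12 * 30)]
true_mean = statistics.mean(true_series)

def surrogate_reconstruction(n_shells, noise_sd=0.5, calib_bias_sd=0.3):
    """One Monte Carlo realization: sample n_shells individuals at random
    months, add analytical noise and a shared calibration offset."""
    calib = random.gauss(0.0, calib_bias_sd)  # same offset for the whole record
    samples = [random.choice(true_series) + random.gauss(0.0, noise_sd) + calib
               for _ in range(n_shells)]
    return statistics.mean(samples)

def errors(n_shells, n_mc=3000):
    recon = [surrogate_reconstruction(n_shells) for _ in range(n_mc)]
    systematic = statistics.mean(recon) - true_mean  # bias
    standard = statistics.stdev(recon)               # standard error
    return systematic, standard

for n in (5, 20):
    bias, se = errors(n)
    print(f"n={n:2d} shells: bias={bias:+.3f} °C, standard error={se:.3f} °C")
```

As in the study, the standard error shrinks with sample size, while the unresolved seasonal variability keeps it well above the pure-noise floor.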
Accelerating spectral-element simulations of seismic wave propagation using local time stepping
NASA Astrophysics Data System (ADS)
Peter, D. B.; Rietmann, M.; Galvez, P.; Nissen-Meyer, T.; Grote, M.; Schenk, O.
2013-12-01
Seismic tomography using full-waveform inversion requires accurate simulations of seismic wave propagation in complex 3D media. However, finite element meshing in complex media often leads to areas of local refinement, generating small elements that accurately capture e.g. strong topography and/or low-velocity sediment basins. For explicit time schemes, this dramatically reduces the global time-step for wave-propagation problems due to numerical stability conditions, ultimately making seismic inversions prohibitively expensive. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. Numerical simulations are thus freed from global time-step constraints, potentially speeding up simulation runtimes significantly. We present here a new, efficient multi-level LTS-Newmark scheme for general use with spectral-element methods (SEM) with applications in seismic wave propagation. We implement our scheme in the package SPECFEM3D_Cartesian, a widely used community code for simulating seismic and acoustic wave propagation in earth-science applications. Our new LTS scheme extends the 2nd-order accurate Newmark time-stepping scheme, and leads to an efficient implementation, producing real-world speedup of multi-resolution seismic applications. Furthermore, we generalize the method to utilize many refinement levels with a design specifically for continuous finite elements. We demonstrate performance speedup using a state-of-the-art dynamic earthquake rupture model for the Tohoku-Oki event, which is currently limited by small elements along the rupture fault. Utilizing our new algorithmic LTS implementation together with advances in exploiting graphic processing units (GPUs), numerical seismic wave propagation simulations in complex media will dramatically reduce computation times, empowering high
A study of differentiation errors in large-eddy simulations based on the EDQNM theory
Berland, J.; Bogey, C.; Bailly, C.
2008-09-10
This paper is concerned with the investigation of numerical errors in large-eddy simulations by means of two-point turbulence modeling. Based on the eddy-damped quasi-normal Markovian (EDQNM) theory, a stochastic model is developed in order to predict the time evolution of the kinetic energy spectrum obtained by a large-eddy simulation (LES), including the effects of the numerics. Using this framework, the influence of the accuracy of the approximate space differencing schemes on LES quality is studied, for decaying homogeneous isotropic incompressible turbulence, with Reynolds numbers Re_λ based on the transverse Taylor scale equal to 780, 2500 and 8000. The results show that the discretization of the filtered Navier-Stokes equations leads to differentiation and aliasing errors. Error spectra are also presented, and indicate that the numerical errors are mainly originating from the approximate differentiation. In addition, increasing the order of accuracy of the differencing schemes or using algorithms optimized in the Fourier space is found to widen the range of well-resolved scales. Unfortunately, for all the schemes, the smaller scales, with wavenumbers close to the grid cut-off wavenumber, are badly calculated and generate differentiation errors over the whole energy spectrum. The eventual use of explicit filtering to remove spurious motions with short wavelength is finally shown to significantly improve LES accuracy.
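The dominance of differentiation error near the grid cutoff can be illustrated with the standard modified-wavenumber analysis of central difference schemes (a textbook result, not the paper's EDQNM model): a scheme differentiates exp(ikx) with an effective wavenumber k' ≠ k, and k' falls to zero at the cutoff kh = π, which is why the smallest resolved scales are badly calculated.

```python
import math

def modified_wavenumber_2nd(kh):
    # 2nd-order central difference: k'h = sin(kh)
    return math.sin(kh)

def modified_wavenumber_4th(kh):
    # 4th-order central difference: k'h = (8*sin(kh) - sin(2*kh)) / 6
    return (8.0 * math.sin(kh) - math.sin(2.0 * kh)) / 6.0

for frac in (0.25, 0.5, 0.75, 1.0):  # fraction of the grid cutoff kh = pi
    kh = frac * math.pi
    e2 = abs(modified_wavenumber_2nd(kh) - kh) / kh
    e4 = abs(modified_wavenumber_4th(kh) - kh) / kh
    print(f"kh={kh:.2f}: rel. error 2nd={e2:.3f}, 4th={e4:.3f}")
```

Raising the order shrinks the error at resolved scales but cannot help at the cutoff itself, where both schemes return k' = 0, consistent with the paper's finding.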
NASA Astrophysics Data System (ADS)
Danzer, J.; Healy, S. B.; Culverwell, I. D.
2015-08-01
In this study, a new model was explored which corrects for higher order ionospheric residuals in Global Positioning System (GPS) radio occultation (RO) data. Recently, the theoretical basis of this new "residual ionospheric error model" has been outlined (Healy and Culverwell, 2015). The method was tested in simulations with a one-dimensional model ionosphere. The proposed new model for computing the residual ionospheric error is the product of two factors, one of which expresses its variation from profile to profile and from time to time in terms of measurable quantities (the L1 and L2 bending angles), while the other describes the weak variation with altitude. A simple integral expression for the residual error (Vorob'ev and Krasil'nikova, 1994) has been shown to be in excellent numerical agreement with the exact value, for a simple Chapman layer ionosphere. In this case, the "altitudinal" element of the residual error varies (decreases) by no more than about 25 % between ~10 and ~100 km for physically reasonable Chapman layer parameters. For other simple model ionospheres the integral can be evaluated exactly, and results are in reasonable agreement with those of an equivalent Chapman layer. In this follow-up study the overall objective was to explore the validity of the new residual ionospheric error model for more detailed simulations, based on modeling through a complex three-dimensional ionosphere. The simulation study was set up, simulating day and night GPS RO profiles for the period of a solar cycle with and without an ionosphere. The residual ionospheric error was studied, the new error model was tested, and temporal and spatial variations of the model were investigated. The model performed well in the simulation study, capturing the temporal variability of the ionospheric residual. Although it was not possible, due to high noise of the simulated bending-angle profiles at mid- to high latitudes, to perform a thorough latitudinal investigation of the
Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors
NASA Astrophysics Data System (ADS)
Yan, Feifei; Chang, Wenge; Li, Xiangyang
2015-12-01
Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is a key technique for bistatic SAR (BiSAR) systems, and raw data simulation is an effective tool for verifying time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in the 2-D frequency domain and phase compensation in the range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results for point targets and an extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.
Tupper, Judith B; Pearson, Karen B; Meinersmann, Krista M; Dvorak, Jean
2013-06-01
Continuing education for health care workers is an important mechanism for maintaining patient safety and high-quality health care. Interdisciplinary continuing education that incorporates simulation can be an effective teaching strategy for improving patient safety. Health care professionals who attended a recent Patient Safety Academy had the opportunity to experience firsthand a simulated situation that included many potential patient safety errors. This high-fidelity activity combined the best practice components of a simulation and a collaborative experience that promoted interdisciplinary communication and learning. Participants were challenged to see, learn, and experience "ah-ha" moments of insight as a basis for error reduction and quality improvement. This innovative interdisciplinary educational training method can be offered in place of traditional lecture or online instruction in any facility, hospital, nursing home, or community care setting. PMID:23654294
Erik P. Gilson; Ronald C. Davidson; Philip C. Efthimion; Richard Majeski
2004-01-29
The results presented here demonstrate that the Paul Trap Simulator Experiment (PTSX) simulates the propagation of intense charged particle beams over distances of many kilometers through magnetic alternating-gradient (AG) transport systems by making use of the similarity between the transverse dynamics of particles in the two systems. Plasmas have been trapped that correspond to normalized intensity parameters s = ωp²(0)/2ωq² ≲ 0.8, where ωp(r) is the plasma frequency and ωq is the average transverse focusing frequency in the smooth-focusing approximation. The measured root-mean-squared (RMS) radius of the beam is consistent with a model, equally applicable to both PTSX and AG systems, that balances the average inward confining force against the outward pressure-gradient and space-charge forces. The PTSX device confines one-component cesium ion plasmas for hundreds of milliseconds, which is equivalent to over 10 km of beam propagation.
A Large Scale Simulation of Ultrasonic Wave Propagation in Concrete Using Parallelized EFIT
NASA Astrophysics Data System (ADS)
Nakahata, Kazuyuki; Tokunaga, Jyunichi; Kimoto, Kazushi; Hirose, Sohichi
A time-domain simulation tool for ultrasonic propagation in concrete is developed using the elastodynamic finite integration technique (EFIT) and image-based modeling. The EFIT is a grid-based time-domain differential technique that easily treats the different boundary conditions in an inhomogeneous material such as concrete. Here, the geometry of the concrete is determined from a scanned image, and the processed color bitmap image is fed into the EFIT. Although ultrasonic wave simulation in such a complex material requires substantial computation time, we execute the EFIT with a parallel computing technique on a shared-memory computer system. In this study, the formulation of the EFIT and the treatment of the different boundary conditions are briefly described, and examples of shear-horizontal wave propagation in reinforced concrete are demonstrated. The methodology and performance of the parallelization of the EFIT are also discussed.
Attributing uncertainties in simulated biospheric carbon fluxes to different error sources
NASA Astrophysics Data System (ADS)
Lin, J. C.; Pejam, M. R.; Chan, E.; Wofsy, S. C.; Gottlieb, E. W.; Margolis, H. A.; McCaughey, J. H.
2011-06-01
Estimating the current sources and sinks of carbon and projecting future levels of CO2 and climate require biospheric carbon models that cover the landscape. Such models inevitably suffer from deficiencies and uncertainties. This paper addresses how to quantify errors in modeled carbon fluxes and then trace them to specific input variables. To date, few studies have examined uncertainties in biospheric models in a quantitative fashion that are relevant to landscape-scale simulations. In this paper, we introduce a general framework to quantify errors in biospheric carbon models that "unmixes" the contributions to the total uncertainty in simulated carbon fluxes and attributes the error to different variables. To illustrate this framework, we apply a simple biospheric model, the Vegetation Photosynthesis and Respiration Model (VPRM), in boreal forests of central Canada, using eddy covariance flux measurement data from two main sites of the Canadian Carbon Program (CCP). We explicitly distinguish between systematic errors ("biases") and random errors and focus on the impact of errors present in biospheric parameters as well as driver data sets (satellite indices, temperature, solar radiation, and land cover). Among the driver data sets, biases in downward shortwave radiation accumulated to the largest error, accounting for a significant percentage of the annually summed carbon uptake. However, the largest cumulative errors were shown to stem from biospheric parameters controlling the light-use efficiency and respiration-temperature relationships. This work represents a step toward a carbon model-data fusion system because in such systems the outcome is determined as much by uncertainties as by the measurements themselves.
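The finding that biases dominate the annually summed uptake follows from how errors accumulate in a cumulative sum: a systematic offset grows linearly with the number of samples, while zero-mean random errors grow only as the square root. A toy illustration (all magnitudes are arbitrary, not VPRM values):

```python
import math
import random

random.seed(3)

N = 8760        # hourly flux values over one year
flux_sd = 1.0   # SD of the random error on each hourly flux (arbitrary units)
bias = 0.05     # small systematic offset on each hourly flux

# Accumulate the errors only (the true flux is subtracted out).
random_error_sum = sum(random.gauss(0.0, flux_sd) for _ in range(N))
bias_error_sum = bias * N

print(f"systematic error in annual sum: {bias_error_sum:.1f}")
print(f"random error in annual sum:     {random_error_sum:+.1f}"
      f" (expected magnitude ~ {flux_sd * math.sqrt(N):.1f})")
```

Even though the per-sample bias is twenty times smaller than the random-error SD, it dominates the annual total, mirroring the paper's distinction between biases and random errors.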