Error propagation in first-principles kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Matera, Sebastian
2017-04-01
First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculations. Using the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis with which we address error propagation in these multiscale models. We find that we can still derive the most important atomistic factors for reactivity, even though the errors in the simulation results are sizable. The presented approach might also be applied to hierarchical model construction or computational catalyst screening.
Error propagation in a digital avionic processor: A simulation-based study
NASA Technical Reports Server (NTRS)
Lomelino, D.; Iyer, R. K.
1986-01-01
An experimental analysis to study error propagation from the gate to the chip level is described. The target system is the CPU in the Bendix BDX-930, an avionic miniprocessor. Error activity data for the study was collected via a gate-level simulation. A family of distributions to characterize the error propagation, both within the chip and at the pins, was then generated. Based on these distributions, measures of error propagation and severity were defined. The analysis quantifies the dependency of the measured error propagation on the location of the fault and the type of instruction/microinstruction executed.
Simulation of radar rainfall errors and their propagation into rainfall-runoff processes
NASA Astrophysics Data System (ADS)
Aghakouchak, A.; Habib, E.
2008-05-01
Radar rainfall data provide higher spatial and temporal resolution than rain gauge measurements. However, radar data obtained from reflectivity patterns are subject to various errors, such as errors in the Z-R relationship, the vertical profile of reflectivity, and spatial and temporal sampling. Characterizing such uncertainties in radar data and their effects on hydrologic simulations (e.g., streamflow estimation) is a challenging issue. This study aims to analyze radar rainfall error characteristics empirically to gain information on the properties of the random error, its representativeness, and its temporal and spatial dependency. To empirically analyze error characteristics, high-resolution and accurate rain gauge measurements are required. The Goodwin Creek watershed, located in the northern part of Mississippi, was selected for this study because of the availability of a dense rain gauge network. A total of 30 rain gauge stations within the Goodwin Creek watershed and NWS Level II radar reflectivity data from the WSR-88D Memphis radar station, with a temporal resolution of 5 min and a spatial resolution of 1 km2, are used in this study. Comparisons of radar data and rain gauge measurements are used to estimate the overall bias and the statistical characteristics and spatio-temporal dependency of radar rainfall error fields. This information is then used to simulate realizations of radar error patterns with multiple correlated variables using the Monte Carlo method and the Cholesky decomposition. The generated error fields are then imposed on radar rainfall fields to obtain statistical realizations of input rainfall fields. Each simulated realization is then fed as input to a distributed, physically based hydrological model, resulting in an ensemble of predicted runoff hydrographs. The study analyzes the propagation of radar errors into the simulation of different rainfall-runoff processes such as streamflow, soil moisture, infiltration, and overland flooding.
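The correlated-error generation step (Monte Carlo sampling through a Cholesky factor of the error covariance) can be sketched generically; the covariance matrix below is illustrative, not the one estimated in the study:

```python
import numpy as np

def simulate_error_fields(cov, n_realizations, seed=None):
    """Draw correlated error samples via the Cholesky decomposition of an
    assumed covariance matrix (a generic sketch of the step described above)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)              # cov = L @ L.T
    z = rng.standard_normal((n_realizations, cov.shape[0]))
    return z @ L.T                           # each row ~ N(0, cov)
```

Each simulated row can then be imposed on a radar rainfall field to produce one input realization for the hydrological model.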
Error propagation in calculated ratios.
Holmes, Daniel T; Buhr, Kevin A
2007-06-01
Calculated quantities that combine results of multiple laboratory tests have become popular for screening, risk evaluation, and ongoing care in medicine. Many of these are ratios. In this paper, we address the specific issue of propagated random analytical error in calculated ratios. Standard error propagation theory is applied to develop an approximate formula for the mean, standard deviation (SD), and coefficient of variation (CV) of the ratio of two independent, normally distributed random variables. A method of mathematically modeling the problem by random simulations to validate these formulas is proposed and applied. Comparisons are made with the commonly quoted formula for the CV of a ratio. The approximation formula for the CV of a ratio R=X/Y of independent Gaussian random variables developed herein has an absolute percentage error less than 4% for CVs of less than 20% in Y. In contrast, the commonly quoted formula has a percentage error of up to 16% for CVs of less than 20% in Y. The usual formula for the CV of a ratio functions well when the CV of the denominator is less than 10%, but for larger CVs the formula proposed here is more accurate. Random analytical error in calculated ratios may be larger than clinicians and laboratorians are aware. The magnitude of the propagated error needs to be considered when interpreting calculated ratios in the clinical laboratory, especially near medical decision limits, where its effect may lead to erroneous conclusions.
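The validation-by-simulation approach can be sketched as follows; the means and CVs below are illustrative values, and the comparison uses the commonly quoted approximation CV_R ≈ sqrt(CV_X² + CV_Y²), not the paper's improved formula:

```python
import numpy as np

def ratio_cv_mc(mu_x, cv_x, mu_y, cv_y, n=1_000_000, seed=0):
    """Monte Carlo estimate of the CV of R = X/Y for independent Gaussians,
    usable to check ratio-CV approximation formulas."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu_x, cv_x * mu_x, n)
    y = rng.normal(mu_y, cv_y * mu_y, n)
    r = x / y
    return r.std() / r.mean()
```

For small denominator CVs the simple root-sum-of-squares formula agrees closely with the simulation; the disagreement grows as the CV of Y increases, which is the regime the paper addresses.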
Error Propagation in a System Model
NASA Technical Reports Server (NTRS)
Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)
2015-01-01
Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
NLO error propagation exercise: statistical results
Pack, D.J.; Downing, D.J.
1985-09-01
Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or ^235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ^235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio, from April 1 to July 1, 1983, in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ^235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
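The Taylor-series variance approximation step can be illustrated with a minimal sketch for a product measurement, e.g. uranium mass = net weight × concentration; the numbers and the two-factor model are illustrative, not taken from the exercise:

```python
import math

def pov_product_sd(w, sd_w, c, sd_c):
    """First-order Taylor (delta-method) SD for m = w * c with uncorrelated
    inputs: Var(m) ~ (dm/dw)^2 sd_w^2 + (dm/dc)^2 sd_c^2
                   = (c * sd_w)^2 + (w * sd_c)^2."""
    return math.sqrt((c * sd_w) ** 2 + (w * sd_c) ** 2)
```

Cumulating such per-measurement variances over all transactions in the balance area, assuming uncorrelated primary error sources, yields the variance used to set the LEIDs.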
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
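A minimal sketch of this idea, using a straight-line fit (the data here are synthetic and illustrative):

```python
import numpy as np

def line_fit_with_ses(x, y):
    """Least-squares straight-line fit; the parameter standard errors are the
    square roots of the diagonal elements of the covariance matrix."""
    coeffs, cov = np.polyfit(x, y, 1, cov=True)
    return coeffs, np.sqrt(np.diag(cov))
```

Defining a target quantity as a fit parameter in this way lets the LS machinery deliver its propagated error directly, without hand-deriving a propagation formula.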
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
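The covariance-forming and propagation steps can be sketched generically; this is not the STEP program itself, and the linear state-transition propagation P' = Φ P Φᵀ is an assumption about how such a matrix is typically advanced in time:

```python
import numpy as np

def covariance_from_errors(samples):
    """Covariance matrix of observed trajectory-error vectors, one row per
    flight (a generic sketch of the Monte Carlo statistics step)."""
    return np.cov(samples, rowvar=False)

def propagate_covariance(P, phi):
    """Propagate covariance P through a linear state-transition matrix phi:
    P' = phi @ P @ phi.T (standard linear error propagation, assumed here)."""
    return phi @ P @ phi.T
```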
Variance propagation by simulation (VPSim)
Burr, T.L.; Coulter, C.A.; Prommel, J.M.
1997-07-01
The application of propagation of variance (POV) for estimating the variance of a material balance is straightforward but tedious. Several computer codes exist today to help perform POV. Examples include MAWST ("materials accounting with sequential testing," used by some Department of Energy sites) and VP ("variance propagation," used for training). Also, some sites have such simple error models that custom "spreadsheet-like" calculations are adequate. Any software to perform POV will have its strengths and weaknesses. A main disadvantage of MAWST is probably its limited form of error models. This limited form forces the user to use cryptic pseudo measurements to effectively extend the allowed error models. A common example is to include sampling error in the total random error by dividing the actual measurement into two pseudo measurements. Because POV can be tedious and input files can be presented in multiple ways to MAWST, it is valuable to have an alternative method to compare results. This paper describes a new code, VPSim, that uses Monte Carlo simulation to do POV. VPSim does not need to rely on pseudo measurements. It is written in C++, runs under Windows NT, and has a user-friendly interface. VPSim has been tested on several example problems, and in this paper we compare its results to results from MAWST. We also describe its error models and indicate the structure of its input files. A main disadvantage of VPSim is its long run times. If many simulations are required (20,000 or more, repeated two or more times) and if each balance period has many (10,000 or more) measurements, then run times can be one-half hour or more. For small and modest sized problems, run times are a few minutes. The main advantage of VPSim is that its input files are simple to construct, and therefore also are relatively easy to inspect.
Truncation and Accumulated Errors in Wave Propagation
NASA Astrophysics Data System (ADS)
Chiang, Yi-Ling F.
1988-12-01
The approximation of the truncation and accumulated errors in the numerical solution of a linear initial-value partial differential equation problem can be established by using a semidiscretized scheme. This error approximation is observed as a lower bound to the errors of a finite difference scheme. By introducing a modified von Neumann solution, this error approximation is applicable to problems with variable coefficients. To seek an in-depth understanding of this newly established error approximation, numerical experiments were performed to solve the hyperbolic equation ∂U/∂t = -C1(x)C2(t) ∂U/∂x, with both continuous and discontinuous initial conditions. We studied three cases: (1) C1(x) = C0 and C2(t) = 1; (2) C1(x) = C0 and C2(t) = t; and (3) C1(x) = 1 + (x/a)^2 and C2(t) = C0. Our results show that the errors are problem dependent and are functions of the propagating wave speed. This suggests a need to derive problem-oriented schemes rather than the equation-oriented schemes as is commonly done. Furthermore, in a wave-propagation problem, measurement of the error by the maximum norm is not particularly informative when the wave speed is incorrect.
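A minimal sketch of a finite difference scheme for the constant-coefficient case (1), using first-order upwinding with periodic boundaries (the scheme choice here is illustrative, not the one studied in the paper):

```python
import numpy as np

def upwind_advect(u0, c, dx, dt, steps):
    """First-order upwind scheme for dU/dt = -c dU/dx with c > 0 and periodic
    boundaries; useful for observing how truncation error accumulates."""
    u = u0.copy()
    lam = c * dt / dx          # Courant number; stable for lam <= 1
    for _ in range(steps):
        u = u - lam * (u - np.roll(u, 1))
    return u
```

At lam = 1 the scheme translates the profile exactly, so the max-norm error vanishes; for lam < 1 numerical diffusion accumulates with the number of steps, which is the kind of error growth the study quantifies.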
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
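A minimal sketch of a bitwise 16-bit CRC with the CCITT polynomial x^16 + x^12 + x^5 + 1; the CCSDS-recommended 16-bit CRC is of this family, but the initial value and reflection conventions shown here are one common variant, used for illustration:

```python
def crc16_ccitt(data: bytes, init=0xFFFF, poly=0x1021):
    """Bitwise CRC-16 (CCITT polynomial). The receiver recomputes the CRC
    over the received data and compares it with the transmitted checksum;
    a mismatch flags a detected error."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc
```

Any single-bit corruption of the data changes the computed CRC, which is what makes the code useful for error detection on the downlink.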
Observation error propagation on video meteor orbit determination
NASA Astrophysics Data System (ADS)
SonotaCo
2016-04-01
A new radiant direction error computation method on SonotaCo Network meteor observation data was tested. It uses single-station observation error obtained by reference star measurement and trajectory linearity measurement on each video as its source error value, and propagates this to the radiant and orbit parameter errors via the Monte Carlo simulation method. The resulting error values on a sample data set showed a reasonable error distribution that makes accuracy-based selection feasible. A sample set of selected orbits obtained by this method revealed a sharper concentration of shower meteor radiants than we have ever seen before. The simultaneously observed meteor data sets published by the SonotaCo Network will be revised to include this error value on each record and will be publicly available, along with the computation program, in the near future.
Learning Internal Representations by Error Propagation
1985-09-01
...of the work is the procedure called error propagation, whereby the gradient can be determined by individual units of the network based only on locally available information ... internal representation adequate for performing the task at hand. One such development is presented in the discussion of Boltzmann machines in ...
Propagation error minimization method for multiple structural displacement monitoring system
NASA Astrophysics Data System (ADS)
Jeon, Haemin; Shin, Jae-Uk; Myung, Hyun
2013-04-01
In the previous study, a visually servoed paired structured light system (ViSP) which is composed of two sides facing each other, each with one or two lasers, a 2-DOF manipulator, a camera, and a screen has been proposed. The lasers project their parallel beams to the screen on the opposite side and 6-DOF relative displacement between two sides is estimated by calculating positions of the projected laser beams and rotation angles of the manipulators. To apply the system to massive civil structures such as long-span bridges or high-rise buildings, the whole area should be divided into multiple partitions and each ViSP module is placed in each partition in a cascaded manner. In other words, the movement of the entire structure can be monitored by multiplying the estimated displacements from multiple ViSP modules. In the multiplication, however, there is a major problem that the displacement estimation error is propagated throughout the multiple modules. To solve the problem, propagation error minimization method (PEMM) which uses Newton-Raphson formulation inspired by the error back-propagation algorithm is proposed. In this method, a propagation error at the last module is calculated and then the estimated displacement from ViSP at each partition is updated in reverse order by using the proposed PEMM that minimizes the propagation error. To verify the performance of the proposed method, various simulations and experimental tests have been performed. The results show that the propagation error is significantly reduced after applying PEMM.
NLO error propagation exercise data collection system
Keisch, B.; Bieber, A.M. Jr.
1983-01-01
A combined automated and manual system for data collection is described. The system is suitable for collecting, storing, and retrieving data related to nuclear material control at a bulk processing facility. The system, which was applied to the NLO operated Feed Materials Production Center, was successfully demonstrated for a selected portion of the facility. The instrumentation consisted of off-the-shelf commercial equipment and provided timeliness, convenience, and efficiency in providing information for generating a material balance and performing error propagation on a sound statistical basis.
VPSim: Variance propagation by simulation
Burr, T.; Coulter, C.A.; Prommel, J.
1997-12-01
One of the fundamental concepts in a materials control and accountability system for nuclear safeguards is the materials balance (MB). All transfers into and out of a material balance area are measured, as are the beginning and ending inventories. The resulting MB measures the material loss, MB = T_in + I_B - T_out - I_E. To interpret the MB, the authors must estimate its measurement error standard deviation, σ_MB. When feasible, they use a method usually known as propagation of variance (POV) to estimate σ_MB. The application of POV for estimating the measurement error variance of an MB is straightforward but tedious. By applying POV to individual measurement error standard deviations they can estimate σ_MB (or, more generally, the variance-covariance matrix, Σ, of a sequence of MBs). This report describes a new computer program (VPSim) that uses simulation to estimate the Σ matrix of a sequence of MBs. Given the proper input data, VPSim calculates the MB and σ_MB, or calculates a sequence of n MBs and the associated n-by-n covariance matrix, Σ. The covariance matrix, Σ, contains the variance of each MB in the diagonal entries and the covariance between pairs of MBs in the off-diagonal entries.
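The simulation approach to estimating σ_MB can be sketched minimally; independent additive Gaussian measurement errors are assumed here, which is far simpler than VPSim's actual error models:

```python
import numpy as np

def mb_sigma_by_simulation(values, sds, n=200_000, seed=0):
    """Estimate sigma_MB for MB = T_in + I_B - T_out - I_E by simulating
    measurement errors (independent additive Gaussian errors assumed).
    values, sds: true values and error SDs for (T_in, I_B, T_out, I_E)."""
    rng = np.random.default_rng(seed)
    signs = np.array([1.0, 1.0, -1.0, -1.0])
    meas = values + rng.normal(0.0, sds, size=(n, 4))
    mb = meas @ signs
    return mb.mean(), mb.std()
```

For this simple model the simulated σ_MB matches the analytic POV result sqrt(Σ sd_i²); the value of simulation is that it extends directly to error models where the analytic propagation becomes tedious.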
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggests variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
Error propagation in a digital avionic mini processor. M.S. Thesis
NASA Technical Reports Server (NTRS)
Lomelino, Dale L.
1987-01-01
A methodology is introduced and demonstrated for the study of error propagation from the gate to the chip level. The importance of understanding error propagation derives from its close tie with system activity. The target system is the BDX-930, a digital avionic multiprocessor. The simulator used was developed at NASA Langley and is a gate-level, event-driven, unit-delay software logic simulator. The approach is highly structured and easily adapted to other systems. The analysis shows the nature and extent of the dependency of error propagation on microinstruction type, assembly-level instruction, and fault-free gate activity.
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code. Program summary: Program title: CADNA. Catalogue identifier: AEAT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 53 420. No. of bytes in distributed program, including test data, etc.: 566 495. Distribution format: tar.gz. Programming language: Fortran. Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM. Operating system: LINUX, UNIX. Classification: 4.14, 6.5, 20. Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each ...
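The random-rounding idea can be sketched crudely in plain Python; perturbing the last bit after each operation is only a stand-in for CADNA's actual Discrete Stochastic Arithmetic, and the helper names here are hypothetical:

```python
import random

def perturbed_sum(values, eps=2 ** -52, seed=1):
    """Sum with a random relative perturbation of about one ulp after every
    operation: a crude stand-in for a random rounding mode (not the real
    CADNA/DSA implementation)."""
    rng = random.Random(seed)
    s = 0.0
    for v in values:
        s = (s + v) * (1.0 + rng.choice((-1.0, 1.0)) * eps)
    return s

def spread(values, trials=3):
    """Mean and spread over several randomly rounded runs; a spread that is
    large relative to the mean signals round-off-dominated digits."""
    results = [perturbed_sum(values, seed=t) for t in range(trials)]
    mean = sum(results) / trials
    return mean, max(results) - min(results)
```

Comparing the runs digit by digit, as DSA does more rigorously, indicates how many digits of the result are numerically significant.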
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS-recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
Detecting and preventing error propagation via competitive learning.
Silva, Thiago Christiano; Zhao, Liang
2013-05-01
Semisupervised learning is a machine learning approach which is able to employ both labeled and unlabeled samples in the training process. It is an important mechanism for autonomous systems due to the ability of exploiting the already acquired information and for exploring the new knowledge in the learning space at the same time. In these cases, the reliability of the labels is a crucial factor, because mislabeled samples may propagate wrong labels to a portion of or even the entire data set. This paper has the objective of addressing the error propagation problem originated by these mislabeled samples by presenting a mechanism embedded in a network-based (graph-based) semisupervised learning method. Such a procedure is based on a combined random-preferential walk of particles in a network constructed from the input data set. The particles of the same class cooperate among them, while the particles of different classes compete with each other to propagate class labels to the whole network. Computer simulations conducted on synthetic and real-world data sets reveal the effectiveness of the model.
Experiments in Error Propagation within Hierarchal Combat Models
2015-09-01
...mean outcome and such a relatively large standard deviation, the best estimate one can give on these results is "better than half." The other measure ... Master's Thesis by Russell G. Pav, September 2015. Thesis Advisor: Thomas W. Lucas; Second Reader: Jeffrey ...
Learning representations by back-propagating errors
NASA Astrophysics Data System (ADS)
Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J.
1986-10-01
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
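The procedure can be sketched with a minimal two-layer sigmoid network trained on XOR; the architecture, learning rate, and squared-error loss are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def train_xor(epochs=2000, lr=0.5, seed=0):
    """Minimal back-propagation sketch: a 2-4-1 sigmoid network on XOR.
    Returns (initial_mse, final_mse) to show the error decreasing."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])
    W1, W2 = rng.normal(0, 1, (2, 4)), rng.normal(0, 1, (4, 1))
    b1, b2 = np.zeros(4), np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    mses = []
    for _ in range(epochs):
        H = sig(X @ W1 + b1)                 # forward pass
        Y = sig(H @ W2 + b2)
        mses.append(float(np.mean((Y - T) ** 2)))
        dY = (Y - T) * Y * (1 - Y)           # output-layer error signal
        dH = (dY @ W2.T) * H * (1 - H)       # back-propagated hidden error
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return mses[0], mses[-1]
```

The hidden units, never told what to represent, acquire internal features (here, something like AND/OR detectors) that make the linearly inseparable XOR mapping learnable.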
Error analysis using organizational simulation.
Fridsma, D. B.
2000-01-01
Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885
Uncertainty and error in computational simulations
Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.
1997-10-01
The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
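The along-track/radial/cross-track decomposition used in this analysis can be reproduced by projecting the prediction error onto the orbit-defined RSW triad. A minimal sketch (the satellite state and error below are hypothetical, not from the 29-satellite data set):

```python
import numpy as np

def rsw_frame(r, v):
    """Unit vectors of the radial / along-track / cross-track (RSW) frame
    built from an inertial position r and velocity v."""
    r_hat = r / np.linalg.norm(r)              # radial
    w = np.cross(r, v)
    w_hat = w / np.linalg.norm(w)              # cross-track (orbit normal)
    s_hat = np.cross(w_hat, r_hat)             # along-track (completes the triad)
    return r_hat, s_hat, w_hat

def error_components(r_pred, r_true, v_true):
    """Project a predicted-minus-true position error onto the RSW frame."""
    r_hat, s_hat, w_hat = rsw_frame(r_true, v_true)
    err = r_pred - r_true
    return err @ r_hat, err @ s_hat, err @ w_hat

# Hypothetical circular orbit (km, km/s) and a predicted position:
r_true = np.array([7000.0, 0.0, 0.0])
v_true = np.array([0.0, 7.5, 0.0])
r_pred = r_true + np.array([0.1, 2.0, -0.3])

radial, along, cross = error_components(r_pred, r_true, v_true)
print(radial, along, cross)  # 0.1 2.0 -0.3
```

For a circular orbit aligned with the axes, the RSW frame coincides with the coordinate axes, so the injected error components are recovered directly.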
Characterizing error propagation in quantum circuits: the Isotropic Index
NASA Astrophysics Data System (ADS)
Fonseca de Oliveira, André L.; Buksman, Efrain; Cohn, Ilan; García López de Lacalle, Jesús
2017-02-01
This paper presents a novel index to characterize error propagation in quantum circuits by separating the resultant mixed error state into two components: an isotropic component that quantifies the lack of information, and a disalignment component that represents the shift between the current state and the original pure quantum state. The Isotropic Triangle, a graphical representation that fits naturally with the proposed index, is also introduced. Finally, examples analyzing the degradation of well-known quantum algorithms are given.
Position error propagation in the simplex strapdown navigation system
NASA Technical Reports Server (NTRS)
1976-01-01
The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system are documented. Improving the long-term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed errors containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require an update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.
Error Analysis and Propagation in Metabolomics Data Analysis.
Moseley, Hunter N B
2013-01-01
Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
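The Monte Carlo error analysis mentioned above can be illustrated with a minimal sketch: propagate normally distributed measurement noise through a nonlinear derived quantity (here a hypothetical ratio of two metabolite intensities, with made-up means and standard deviations) and compare against the first-order analytical formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical metabolite intensities with measurement noise (mean, std).
a_mean, a_std = 100.0, 5.0     # 5% relative error
b_mean, b_std = 40.0, 4.0      # 10% relative error

n = 200_000
a = rng.normal(a_mean, a_std, n)
b = rng.normal(b_mean, b_std, n)
ratio = a / b                   # derived quantity of interest

mc_rel_err = ratio.std() / ratio.mean()

# First-order analytical propagation: relative errors add in quadrature.
analytic = np.hypot(a_std / a_mean, b_std / b_mean)
print(f"Monte Carlo: {mc_rel_err:.4f}, analytic: {analytic:.4f}")
```

The two estimates agree closely here; they diverge as the relative errors grow and the first-order approximation breaks down, which is one motivation for the Monte Carlo approach.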
Inductively Coupled Plasma Mass Spectrometry Uranium Error Propagation
Hickman, D P; Maclean, S; Shepley, D; Shaw, R K
2001-07-01
The Hazards Control Department at Lawrence Livermore National Laboratory (LLNL) uses Inductively Coupled Plasma Mass Spectrometer (ICP/MS) technology to analyze uranium in urine. The ICP/MS used by the Hazards Control Department is a Perkin-Elmer Elan 6000 ICP/MS. The Department of Energy Laboratory Accreditation Program requires that the total error be assessed for bioassay measurements. A previous evaluation of the errors associated with the ICP/MS measurement of uranium demonstrated a ±9.6% error in the range of 0.01 to 0.02 µg/L. However, the propagation of total error for concentrations above and below this range has heretofore been undetermined. This document is an evaluation of the errors associated with the current LLNL ICP/MS method for an expanded range of uranium concentrations.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for the simulation of wave propagation in three-dimensional random media, assuming narrow-angle scattering, is presented for the plane-wave and spherical-wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
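The multi-screen geometry analyzed here can be sketched as a split-step (angular-spectrum) propagation between phase screens. Below is a minimal vacuum step in the narrow-angle (Fresnel) approximation, with hypothetical grid and beam parameters; the unitary transfer function should conserve energy, which gives a quick sanity check on grid setup.

```python
import numpy as np

def fresnel_step(u, dx, dz, wavelength):
    """One vacuum propagation step of the split-step (angular-spectrum)
    method in the narrow-angle (Fresnel) approximation."""
    n = u.shape[0]
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx2 = kx[:, None] ** 2 + kx[None, :] ** 2
    transfer = np.exp(-1j * dz * kx2 / (2 * k))    # paraxial propagator
    return np.fft.ifft2(np.fft.fft2(u) * transfer)

# Gaussian beam on a 256x256 grid: 1 µm wavelength, 1 mm window, 10 cm step.
n, width = 256, 1e-3
x = (np.arange(n) - n / 2) * (width / n)
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (2 * (1e-4) ** 2))
u1 = fresnel_step(u0, width / n, 0.1, 1e-6)

# |transfer| = 1, so total energy is conserved to round-off; a violation
# would signal an implementation or sampling error.
print(abs(np.sum(np.abs(u1)**2) - np.sum(np.abs(u0)**2)))
```

In an actual turbulence simulation, a random phase screen multiplies the field between such steps; the finite grid and screen-separation errors studied in the abstract arise from exactly this discretization.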
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
NASA Astrophysics Data System (ADS)
Jeon, H.; Shin, J. U.; Myung, H.
2013-04-01
The visually servoed paired structured light system (ViSP) has been found to be useful in estimating 6-DOF relative displacement. The system is composed of two screens facing each other, each with one or two lasers, a 2-DOF manipulator, and a camera. The displacement between the two sides is estimated by observing the positions of the projected laser beams and the rotation angles of the manipulators. To apply the system to massive structures, the whole area is partitioned and a ViSP module is placed in each partition in a cascaded manner. The estimated displacement between adjoining ViSPs is combined with the next partition so that the entire movement of the structure can be estimated. The multiple ViSPs, however, suffer from a major problem: error propagates through the partitions. Therefore, a displacement estimation error back-propagation (DEEP) method is proposed, which uses a Newton-Raphson or gradient-descent formulation inspired by the error back-propagation algorithm. In this method, the estimated displacement from the ViSP is updated using the error back-propagated from a fixed position. To validate the performance of the proposed method, various simulations and experiments have been performed. The results show that the proposed method significantly reduces the propagated error throughout the multiple modules.
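The idea of back-propagating the error from a fixed position can be sketched with a one-dimensional toy chain. The estimates and reference value below are hypothetical, and the update is a plain gradient descent on the squared residual, not the paper's full 6-DOF formulation.

```python
import numpy as np

# Hypothetical per-module displacement estimates (1-D toy); their sum should
# match the known displacement of a fixed reference at the end of the chain.
estimates = np.array([1.02, 0.97, 1.05, 0.99])   # noisy module outputs
reference_total = 4.00                           # known, from the fixed point

# Gradient descent on E = 0.5 * (sum(estimates) - reference_total)^2:
# dE/d(estimate_i) = (sum - total), identical for every module.
lr = 0.1
for _ in range(200):
    residual = estimates.sum() - reference_total
    estimates -= lr * residual                    # back-propagate the error

print(estimates.sum())   # ≈ 4.00
```

Each iteration shrinks the chain's end-point residual by a constant factor, distributing the correction evenly over the cascaded modules.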
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles, an extension of our earlier work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task using only knowledge of the global state and the fuzzy error.
High-order Taylor series expansion methods for error propagation in geographic information systems
NASA Astrophysics Data System (ADS)
Xue, Jie; Leung, Yee; Ma, Jiang-Hong
2015-04-01
The quality of modeling results in GIS operations depends on how well we can track error propagating from inputs to outputs. Monte Carlo simulation, moment design, and Taylor series expansion have been employed to study error propagation over the years. Among them, first-order Taylor series expansion is popular because error propagation can be studied analytically. Because most operations in GIS are nonlinear, however, first-order Taylor series expansion generally cannot meet practical needs, and higher-order approximation is thus necessary. In this paper, we employ Taylor series expansion methods of different orders to investigate error propagation when the random error vectors are normally and independently or dependently distributed. We also extend these methods to situations involving multi-dimensional output vectors. We apply these methods to the length measurement of linear segments, the perimeter of polygons, and the intersection of two line segments, all basic GIS operations. Simulation experiments indicate that the fifth-order Taylor series expansion method is the most accurate, compared with the first-order and third-order methods. Compared with the third-order expansion, however, it improves accuracy only slightly, at the expense of a substantial increase in the number of partial derivatives that need to be calculated. Striking a balance between accuracy and complexity, the third-order Taylor series expansion method appears to be the more appropriate choice for practical applications.
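For the segment-length example, the first-order (delta-method) propagation can be sketched and checked against Monte Carlo; the coordinates and noise level below are hypothetical.

```python
import numpy as np

def length_variance_first_order(p1, p2, sigma):
    """First-order Taylor (delta-method) variance of the length of a segment
    whose endpoint coordinates carry i.i.d. noise of std deviation sigma."""
    d = p2 - p1
    L = np.linalg.norm(d)
    # Gradient of L with respect to (x1, y1, x2, y2):
    g = np.concatenate([-d / L, d / L])
    return g @ g * sigma**2          # g' Sigma g with Sigma = sigma^2 * I

rng = np.random.default_rng(1)
p1, p2, sigma = np.array([0.0, 0.0]), np.array([3.0, 4.0]), 0.1
fo_var = length_variance_first_order(p1, p2, sigma)

# Monte Carlo check of the first-order approximation.
n = 100_000
noisy1 = p1 + rng.normal(0, sigma, (n, 2))
noisy2 = p2 + rng.normal(0, sigma, (n, 2))
mc_var = np.linalg.norm(noisy2 - noisy1, axis=1).var()

print(fo_var, mc_var)  # both ≈ 0.02
```

Because the gradient of the length has unit norm at each endpoint, the first-order variance is exactly 2σ² here; the small Monte Carlo discrepancy is the higher-order contribution that the paper's third- and fifth-order expansions capture.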
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to be the baseline error propagation analysis for which Earth-based and Lunar-based radiometric data are added to compare these different architecture schemes, and quantify the benefits of an integrated approach, in how they can handle lunar surface mobility applications when near the Lunar South pole or on the Lunar Farside.
Mert, Mehmet Can; Filzmoser, Peter; Hron, Karel
2016-01-01
Compositional data, as they typically appear in geochemistry in terms of concentrations of chemical elements in soil samples, need to be expressed in log-ratio coordinates before applying traditional statistical tools if the relative structure of the data is of primary interest. There are different possibilities for this purpose, like centered log-ratio coefficients or isometric log-ratio coordinates. In both approaches, geometric means of the compositional parts are involved, and it is unclear how measurement errors or detection-limit problems affect their representation in coordinates. This problem is investigated theoretically by making use of the theory of error propagation. Due to certain limitations of this approach, the effect of error propagation is also studied by means of simulations. This allows us to provide recommendations for practitioners on the amount of error and on the expected distortion of the results, depending on the purpose of the analysis.
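The role of the geometric mean can be made concrete with a small sketch: perturbing a single compositional part (e.g. by a detection-limit artifact) shifts every centered log-ratio coordinate, because the geometric mean involves all parts. The concentrations below are hypothetical.

```python
import numpy as np

def clr(x):
    """Centered log-ratio coefficients of a composition x (positive parts):
    log of each part relative to the geometric mean of all parts."""
    logx = np.log(x)
    return logx - logx.mean()

# Hypothetical concentrations of four elements.
x = np.array([40.0, 30.0, 20.0, 10.0])

# Perturb only the last part (e.g. a measurement error near a detection limit).
x_err = x.copy()
x_err[-1] *= 1.5

delta = clr(x_err) - clr(x)
print(delta)   # every coordinate shifts, not just the perturbed one
```

Only one concentration changed, yet all four clr coordinates move (and their shifts sum to zero), which is exactly the error-propagation effect the paper studies.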
Phase unwrapping algorithms in laser propagation simulation
NASA Astrophysics Data System (ADS)
Du, Rui; Yang, Lijia
2013-08-01
Simulations of laser propagation through the atmosphere must often handle beams in strong turbulence. Simulating the transmission via the Fourier transform can lose part of the information, leaving the phase of the beam, stored as a 2-D array, wrapped modulo 2π. An effective unwrapping algorithm is needed for continuous results and faster calculation. The unwrapping algorithms used in atmospheric propagation are similar to those used in radar or 3-D surface reconstruction, but not identical. In this article, three classic unwrapping algorithms, block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity algorithm (FMD), are tried in a wave-front reconstruction simulation. Each algorithm is tested 100 times under six conditions: low- (64×64), medium- (128×128), and high-resolution (256×256) phase arrays, each with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable for low-resolution arrays without noise. The MCUT algorithm is more accurate, though it slows as the array resolution increases, and it is sensitive to noise, resulting in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory during calculation. Finally, the article presents a new algorithm, based on an activity-on-vertex (AOV) network, that builds a logical graph to cut the search space and then finds a minimal-discontinuity solution. The AOV algorithm is faster than MCUT in dealing with high-resolution phase arrays, with accuracy as good as FMD in the tests.
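The algorithms compared in the article are 2-D; the underlying operation they all generalize is the classic 1-D Itoh unwrapper, sketched below. It recovers a smooth phase exactly as long as neighboring samples differ by less than π (the hard cases, noise and residues, are what the 2-D algorithms exist to handle).

```python
import numpy as np

def unwrap_1d(phase):
    """Itoh's 1-D phase unwrapping: wrap successive differences into
    [-pi, pi) and re-integrate them from the first sample."""
    d = np.diff(phase)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi
    return phase[0] + np.concatenate(([0.0], np.cumsum(d_wrapped)))

# A smooth phase ramp, wrapped into (-pi, pi], is recovered exactly when
# adjacent samples differ by less than pi (no noise, no residues).
t = np.linspace(0, 20, 400)
true_phase = 0.8 * t
wrapped = np.angle(np.exp(1j * true_phase))
residual = np.max(np.abs(unwrap_1d(wrapped) - true_phase))
print(residual)   # ≈ 0
```

Applying this row by row and then column by column is the simplest 2-D unwrapper; it fails at residues, which is precisely where mask-cut, Flynn's, and graph-based methods like the AOV approach differ.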
Dose calibration optimization and error propagation in polymer gel dosimetry
NASA Astrophysics Data System (ADS)
Jirasek, A.; Hilts, M.
2014-02-01
This study reports on the relative precision, relative error, and dose differences observed when using a new full-image calibration technique in NIPAM-based x-ray CT polymer gel dosimetry. The effects of calibration parameters (e.g. gradient thresholding, dose bin size, calibration fit function, and spatial remeshing) on subsequent errors in calibrated gel images are reported. It is found that gradient thresholding, dose bin size, and fit function all play a primary role in affecting errors in calibrated images. Spatial remeshing induces minimal reductions or increases in errors in calibrated images. This study also reports on a full error propagation throughout the CT gel image pre-processing and calibration procedure thus giving, for the first time, a realistic view of the errors incurred in calibrated CT polymer gel dosimetry. While the work is based on CT polymer gel dosimetry, the formalism is valid for and easily extended to MRI or optical CT dosimetry protocols. Hence, the procedures developed within the work are generally applicable to calibration of polymer gel dosimeters.
On the error propagation of semi-Lagrange and Fourier methods for advection problems.
Einkemmer, Lukas; Ostermann, Alexander
2015-02-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley-Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme.
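The characteristic linear-in-steps error growth of interpolation-based semi-Lagrangian methods can be observed with a minimal 1-D advection sketch (linear interpolation on a periodic grid; the parameters are illustrative, not from the paper's experiments).

```python
import numpy as np

def semi_lagrangian_step(u, x, a, dt):
    """One semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid:
    trace the characteristic back and interpolate linearly."""
    L = x[-1] + (x[1] - x[0])             # domain length (grid omits endpoint)
    x_dep = (x - a * dt) % L              # departure points
    return np.interp(x_dep, x, u, period=L)

n, a, dt = 128, 1.0, 0.05
x = np.linspace(0, 2 * np.pi, n, endpoint=False)

def error_after(steps):
    u = np.sin(x)
    for _ in range(steps):
        u = semi_lagrangian_step(u, x, a, dt)
    return np.max(np.abs(u - np.sin(x - a * dt * steps)))

print(error_after(100), error_after(200))  # error grows with the step count
```

Each step applies a slightly dissipative interpolation, so the error accumulates with the number of steps, the behavior the worst-case semi-Lagrangian estimates describe and the discontinuous Galerkin variant avoids.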
Cross Section Sensitivity and Propagated Errors in HZE Exposures
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.
2005-01-01
It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interactions with shielding material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, with units (cSv/yr), associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions, and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross sections. We perform this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross sections. Numerical differentiation with respect to the cross sections is evaluated in a broad class of materials including polyethylene, aluminum, and copper. We identify the most important cross sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
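The numerical differentiation with respect to cross sections can be sketched on a toy attenuation model; the weights, "cross sections", and depth below are hypothetical stand-ins for the full Boltzmann transport calculation, not values from the study.

```python
import numpy as np

# Toy stand-in for the transport calculation: dose behind a slab of depth x
# as a weighted sum of exponentially attenuated particle species.
weights = np.array([0.5, 0.3, 0.2])
sigmas = np.array([0.10, 0.25, 0.60])      # per-unit-depth "cross sections"
depth = 5.0

def dose(sig):
    return np.sum(weights * np.exp(-sig * depth))

# Central-difference sensitivity of the dose to each cross section.
h = 1e-6
sens = np.empty_like(sigmas)
for i in range(len(sigmas)):
    up, dn = sigmas.copy(), sigmas.copy()
    up[i] += h
    dn[i] -= h
    sens[i] = (dose(up) - dose(dn)) / (2 * h)

analytic = -weights * depth * np.exp(-sigmas * depth)
print(sens, analytic)   # numerical and analytic derivatives agree
```

Ranking the channels by |sensitivity| identifies which cross sections dominate the propagated error, which is the strategy the abstract describes for prioritizing experimental validation.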
On the uncertainty of stream networks derived from elevation data: the error propagation approach
NASA Astrophysics Data System (ADS)
Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.
2010-01-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16,512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15,000 pixels; 2051 sampled elevations). All computations are run in the open-source software for statistical computing R: package geoR is used to fit the variogram; package gstat is used to run the sequential Gaussian simulation; streams are extracted using the open-source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise, usually areas of low local relief that are slightly concave. In both cases, significant parts of the study area (17.3% for Baranja hill; 6.2% for Zlatibor) show a high error (H>0.5) of locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy a required accuracy level. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small to moderate data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the http://www.geomorphometry.org/ website and can be easily adopted/adjusted to any similar
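The stream error map, the information entropy of a Bernoulli trial, can be computed per grid cell from the stack of extracted stream rasters. In the sketch below the binary maps are random stand-ins for the 100 DEM-realization extractions, and the grid is kept tiny for clarity.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stack of binary stream-occurrence maps, one per DEM realization
# (random stand-ins for the maps extracted from the simulated DEMs).
n_real, rows, cols = 100, 4, 4
streams = rng.random((n_real, rows, cols)) < rng.random((rows, cols))

p = streams.mean(axis=0)                       # per-cell stream probability

def bernoulli_entropy(p):
    """Information entropy of a Bernoulli trial, in bits; 0*log(0) := 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return np.nan_to_num(h)                    # cells with p in {0, 1} -> H = 0

H = bernoulli_entropy(p)
uncertain = H > 0.5                            # the paper's high-error threshold
print(H.round(2))
```

Entropy peaks (H = 1) where a stream appears in exactly half of the realizations, i.e. where the extracted network is least reliable, and vanishes where all realizations agree.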
Relationships between GPS-signal propagation errors and EISCAT observations
NASA Astrophysics Data System (ADS)
Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.
1996-12-01
When travelling through the ionosphere, the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase, and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic range -20° ≤
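The first-order dual-frequency correction described here can be sketched as follows. The pseudoranges are synthetic, and hardware biases, multipath, and higher-order ionospheric terms are ignored.

```python
# Hypothetical dual-frequency pseudoranges illustrating the first-order
# ionospheric correction; all numbers are synthetic.
F1, F2 = 1575.42e6, 1227.60e6        # GPS L1/L2 carrier frequencies, Hz
K = 40.3                             # m * Hz^2 per (electron / m^2)

def tec_from_pseudoranges(p1, p2):
    """Slant TEC (electrons/m^2) from the L1/L2 pseudorange difference,
    first order in 1/f^2, ignoring hardware biases and multipath."""
    return (p2 - p1) * F1**2 * F2**2 / (K * (F1**2 - F2**2))

# Synthesize observations for a known slant TEC of 100 TECU.
tec_true = 100 * 1e16                # 1 TECU = 1e16 electrons/m^2
geometric_range = 22_000e3           # m, hypothetical
p1 = geometric_range + K * tec_true / F1**2   # ~16 m ionospheric delay on L1
p2 = geometric_range + K * tec_true / F2**2   # larger delay on L2

print(tec_from_pseudoranges(p1, p2) / 1e16)   # ≈ 100 TECU recovered
```

The same delay model shows why single-frequency users see tens of meters of range error: at 100 TECU the L1 delay alone, K·TEC/F1², is about 16 m.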
On the uncertainty of stream networks derived from elevation data: the error propagation approach
NASA Astrophysics Data System (ADS)
Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.
2010-07-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16,512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15,000 pixels; 2051 sampled elevations). All computations are run in the open-source software for statistical computing R: package geoR is used to fit the variogram; package gstat is used to run the sequential Gaussian simulation; streams are extracted using the open-source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise, usually areas of low local relief and slightly convex (0-10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja hill; 6.2% for Zlatibor) show a high error (H>0.5) of locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error propagation tool should become standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the
Molecular dynamics simulation of propagating cracks
NASA Technical Reports Server (NTRS)
Mullins, M.
1982-01-01
Steady state crack propagation is investigated numerically using a model consisting of 236 free atoms in two (010) planes of bcc alpha iron. The continuum region is modeled using the finite element method with 175 nodes and 288 elements. The model shows clear (010) plane fracture to the edge of the discrete region at moderate loads. Analysis of the results obtained indicates that models of this type can provide realistic simulation of steady state crack propagation.
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
Optimal condition for measurement observable via error-propagation
NASA Astrophysics Data System (ADS)
Zhong, Wei; Lu, Xiao Ming; Jing, Xiao Xing; Wang, Xiaoguang
2014-09-01
Propagation of error is a widely used estimation tool in experiments where the estimation precision of a parameter depends on the fluctuation of a physical observable; the observable that is chosen therefore greatly affects the estimation sensitivity. Here we study the optimal observable for the ultimate sensitivity bounded by the quantum Cramér-Rao theorem in parameter estimation. By invoking the Schrödinger-Robertson uncertainty relation, we derive the necessary and sufficient condition for the optimal observables saturating the ultimate sensitivity for single-parameter estimation. By applying this condition to Greenberger-Horne-Zeilinger states, we obtain the general expression of the optimal observable for separable measurements to achieve Heisenberg-limit precision and show that it is closely related to the parity measurement. However, Jose et al (2013 Phys. Rev. A 87 022330) have claimed that the Heisenberg limit may not be obtained via separable measurements. We show this claim is incorrect.
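The error-propagation formula at issue, Δθ = ΔA/|∂⟨A⟩/∂θ|, can be checked on the simplest case: a single qubit acquiring a phase θ, measured in X. This is an illustration of the formula, not the paper's GHZ/parity calculation.

```python
import numpy as np

# Single-qubit phase estimation: state (|0> + e^{i*theta}|1>)/sqrt(2),
# observable X. Error propagation gives
#     delta_theta = Delta_X / |d<X>/d_theta|,
# which here saturates the quantum Cramer-Rao bound (QFI = 1 for this state).
theta = 0.7
psi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

mean_x = np.real(psi.conj() @ X @ psi)                    # cos(theta)
var_x = np.real(psi.conj() @ (X @ X) @ psi) - mean_x**2   # sin^2(theta)
dmean_dtheta = -np.sin(theta)                             # analytic d<X>/d_theta

delta_theta = np.sqrt(var_x) / abs(dmean_dtheta)
print(delta_theta)   # 1.0, the single-probe QCRB
```

Here ΔX = |sin θ| and |d⟨X⟩/dθ| = |sin θ|, so the ratio is exactly 1, matching 1/√QFI: for this state X is an optimal observable in the sense the paper formalizes.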
NASA Astrophysics Data System (ADS)
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
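The round-off accumulation described for the FFT-based method is easy to observe with a short numpy sketch. This is illustrative only: it implements the plain step-by-step spectral scheme, not the modified Cooley–Tukey algorithm discussed in the paper.

```python
import numpy as np

# Spectral solution of u_t + v u_x = 0 on a periodic grid: every step is an
# exact phase shift in Fourier space, so the only error is round-off
# accumulated over the repeated FFT/IFFT pairs.
n, vel, dt, steps = 256, 1.0, 1e-3, 2000
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)  # integer wavenumbers
u0 = np.exp(np.sin(x))

u = u0.copy()
phase = np.exp(-1j * k * vel * dt)
for _ in range(steps):                  # march step by step
    u = np.fft.ifft(np.fft.fft(u) * phase).real

# one exact shift over the whole interval for reference
u_ref = np.fft.ifft(np.fft.fft(u0) * np.exp(-1j * k * vel * dt * steps)).real
err = np.max(np.abs(u - u_ref))
print(f"round-off error after {steps} steps: {err:.2e}")
```

The error is tiny but nonzero and grows with the number of steps, which is the regime the paper's modified algorithm targets.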
Errors in radiance simulation and scene discrimination.
Scholl, M S
1982-05-15
Radiance simulation can be achieved by selecting materials with the required infrared characteristics. The relationship between the corresponding quantities on the simulator target and the real target is established to assist in the design analysis. From it, the design parameters that critically affect the performance of the simulator, and its sensitivity to errors in the simulated temperature, are obtained.
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate how the individual variation declines in importance as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
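The standard-deviation-versus-standard-error argument can be sketched numerically. The magnitudes below are invented for illustration, not the sugar maple data: `sd_indiv` stands for the residual (individual-tree) variation and `se_model` for the uncertainty in the fitted mean of the allometric model.

```python
import numpy as np

sd_indiv = 0.20   # prediction-interval (individual) component, made up
se_model = 0.05   # confidence-interval (model mean) component, made up

def plot_mean_uncertainty(n_trees):
    # individual variation averages out over n trees; model error does not
    return np.sqrt(se_model**2 + sd_indiv**2 / n_trees)

for n in (1, 5, 30, 100):
    total = plot_mean_uncertainty(n)
    share = (sd_indiv**2 / n) / total**2
    print(f"n = {n:3d}: total = {total:.3f}, individual share = {share:.0%}")
```

With these numbers the individual component dominates for a single tree but drops below the model-mean component by about n = 30, matching the qualitative behavior reported in the abstract.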
Simulation of guided wave propagation near numerical Brillouin zones
NASA Astrophysics Data System (ADS)
Kijanka, Piotr; Staszewski, Wieslaw J.; Packo, Pawel
2016-04-01
The attractive properties of guided waves provide unique potential for the characterization of incipient damage, particularly in plate-like structures. Among other properties, guided waves can propagate over long distances and can be used to monitor hidden structural features and components. On the other hand, guided propagation brings substantial challenges for data analysis. Signal processing techniques are frequently supported by numerical simulations in order to facilitate problem solution. When employing numerical models, additional sources of error are introduced. These can play a significant role in the design and development of a wave-based monitoring strategy. Hence, the paper presents an investigation of numerical models for guided wave generation, propagation and sensing. Numerical dispersion analysis for guided waves in plates, based on the LISA approach, is presented and discussed. Both dispersion and modal amplitude characteristics are analysed. It is shown that wave propagation in a numerical model resembles propagation in a periodic medium. Consequently, Lamb wave propagation close to a numerical Brillouin zone is investigated and characterized.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2017-01-01
This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O[10^-2] to O[10^-7].
Simulation of action potential propagation in plants.
Sukhov, Vladimir; Nerush, Vladimir; Orlova, Lyubov; Vodeneev, Vladimir
2011-12-21
Action potential is considered to be one of the primary responses of a plant to the action of various environmental factors. Understanding plant action potential propagation mechanisms requires both experimental investigation and simulation; however, a detailed mathematical model of plant electrical signal transmission has been absent. Here, a mathematical model of action potential propagation in plants has been worked out. The model is a two-dimensional system of excitable cells, each of which is electrically coupled with the four neighboring ones. Ion diffusion between excitable cell apoplast areas is also taken into account. The action potential generation in a single cell is described on the basis of our previous model. The model simulates both active and passive signal transmission well. It has been used to analyze theoretically the influence of cell-to-cell electrical conductivity and H(+)-ATPase activity on signal transmission in plants. An increase in cell-to-cell electrical conductivity has been shown to increase the length constant, the action potential propagation velocity and the temperature threshold, while the membrane potential threshold changes only weakly. Growth of H(+)-ATPase activity has been found to increase the temperature and membrane potential thresholds and to reduce the length constant and the action potential propagation velocity. Copyright © 2011 Elsevier Ltd. All rights reserved.
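The kind of simulation described, a 2D grid of excitable cells each coupled to its four neighbors, can be sketched with generic FitzHugh–Nagumo kinetics. This is NOT the authors' plant-cell model (their ion-level and H(+)-ATPase details are not reproduced); all parameters here are standard textbook values.

```python
import numpy as np

# Generic excitable-medium sketch: 2D grid, four-neighbor electrical
# coupling (periodic boundaries via np.roll), FitzHugh-Nagumo kinetics.
n, steps, dt = 20, 4000, 0.05
D = 0.2                     # cell-to-cell coupling strength (illustrative)
a, b, eps = 0.7, 0.8, 0.08  # standard FitzHugh-Nagumo parameters

v = -1.2 * np.ones((n, n))    # fast ("membrane potential") variable at rest
w = -0.625 * np.ones((n, n))  # slow recovery variable at rest
v[0, :] = 2.0                 # stimulate the first row of cells

first_fired = np.full((n, n), -1.0)  # time each cell first crosses threshold
for step in range(steps):
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
    v = v + dt * (v - v**3 / 3.0 - w + D * lap)
    w = w + dt * eps * (v + a - b * w)
    newly = (v > 0.5) & (first_fired < 0)
    first_fired[newly] = step * dt

print("stimulated row fired at t =", first_fired[0, 0])
print("middle row fired at t =", first_fired[n // 2, 0])
```

The excitation front reaches distant rows with a delay that grows with distance; raising `D` (the analogue of cell-to-cell conductivity) speeds the front up, qualitatively matching the abstract's conclusion.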
Propagation of Uncertainties in Radiation Belt Simulations
NASA Astrophysics Data System (ADS)
Camporeale, E.; Shprits, Y. Y.; Chandorkar, M.; Drozdov, A.; Wing, S.
2016-12-01
We present the first study of the uncertainties associated with radiation belt simulations, performed in the standard quasi-linear diffusion framework. In particular, we estimate how uncertainties in some input parameters propagate through the nonlinear simulation, producing a distribution of outputs that can be quite broad. Here, we restrict our focus to two-dimensional simulations (in energy and pitch angle space) and to parallel chorus waves only, and we study as stochastic input parameters the geomagnetic index Kp (which characterizes the time dependency of an idealized storm), the latitudinal extent of waves, and the average electron density. We employ a collocation method, thus performing an ensemble of simulations. The results of this work point to the necessity of shifting to a probabilistic interpretation of radiation belt simulation results, and suggest as an important research goal a less uncertain estimation of the electron density in the belts.
Simulations of Seismic Wave Propagation on Mars
NASA Astrophysics Data System (ADS)
Bozdağ, Ebru; Ruan, Youyi; Metthez, Nathan; Khan, Amir; Leng, Kuangdai; van Driel, Martin; Wieczorek, Mark; Rivoldini, Attilio; Larmat, Carène S.; Giardini, Domenico; Tromp, Jeroen; Lognonné, Philippe; Banerdt, Bruce W.
2017-03-01
We present global and regional synthetic seismograms computed for 1D and 3D Mars models based on the spectral-element method. For global simulations, we implemented a radially-symmetric Mars model with a 110 km thick crust (Sohl and Spohn in J. Geophys. Res., Planets 102(E1):1613-1635, 1997). For this 1D model, we successfully benchmarked the 3D seismic wave propagation solver SPECFEM3D_GLOBE (Komatitsch and Tromp in Geophys. J. Int. 149(2):390-412, 2002a; 150(1):303-318, 2002b) against the 2D axisymmetric wave propagation solver AxiSEM (Nissen-Meyer et al. in Solid Earth 5(1):425-445, 2014) at periods down to 10 s. We also present higher-resolution body-wave simulations with AxiSEM down to 1 s in a model with a more complex 1D crust, revealing wave propagation effects that would have been difficult to interpret based on ray theory. For 3D global simulations based on SPECFEM3D_GLOBE, we superimposed 3D crustal thickness variations capturing the distinct crustal dichotomy between Mars' northern and southern hemispheres, as well as topography, ellipticity, gravity, and rotation. The global simulations clearly indicate that the 3D crust speeds up body waves compared to the reference 1D model, whereas it significantly changes surface waveforms and their dispersive character depending on its thickness. We also perform regional simulations with the solver SES3D (Fichtner et al. Geophys. J. Int. 179:1703-1725, 2009) based on 3D crustal models derived from surface composition, thereby addressing the effects of various distinct crustal features down to 2 s. The regional simulations confirm the strong effects of crustal variations on waveforms. We conclude that the numerical tools are ready for examining more scenarios, including various other seismic models and sources.
Spacecraft orbit propagator integration with GNSS in a simulated scenario
NASA Astrophysics Data System (ADS)
Jing, Shuai; Zhan, Xingqun; Zhu, Zhenghong
2017-09-01
When space vehicles operate above the Global Navigation Satellite System (GNSS) constellation, or even above geosynchronous orbit, the traditional GNSS single-epoch solution commonly cannot meet the orbit determination (OD) accuracy requirement. To provide the required OD accuracy continuously, a newly designed spacecraft orbit propagator (OP) is combined with the GNSS observations in a deep integration mode. Taking both computational complexity and positioning accuracy into consideration, the orbit propagator is optimized based on a simplified fourth-order Runge-Kutta integral aided with an empirical acceleration model. A simulation scenario containing a typical Highly-inclined Elliptical Orbit (HEO) user and the GPS constellation is established on a HwaCreat™ GNSS signal simulator to test the performance of the design. The numerical test results show that the maximum propagation error of the optimized orbit propagator does not exceed 1000 m within a day, which is superior to conventional OPs. If the new OP is deeply integrated with GNSS in our proposed scheme, the 95% SEP for the OD accuracy is 10.0005 m, and the time to first fix (TTFF) values under cold and warm start conditions are reduced by at least 7 s and 2 s respectively, which proves its advantage over loose integration and tight integration.
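A minimal sketch of the propagator component described above: a fixed-step fourth-order Runge-Kutta integrator for plain two-body motion. The paper's empirical acceleration terms are omitted; they would be added inside `deriv()`. All values are illustrative.

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def deriv(state):
    # state = [x, y, z, vx, vy, vz] in km and km/s; two-body force only
    r, v = state[:3], state[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r)**3])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# sanity check: a circular orbit's radius should stay nearly constant
r0 = 7000.0
state = np.array([r0, 0.0, 0.0, 0.0, np.sqrt(MU / r0), 0.0])
period = 2 * np.pi * np.sqrt(r0**3 / MU)
dt = 10.0
for _ in range(int(period / dt)):
    state = rk4_step(state, dt)

drift = abs(np.linalg.norm(state[:3]) - r0)
print(f"radius drift after one orbit: {drift:.3e} km")
```

A fixed-step RK4 of this kind trades some accuracy for predictable, low onboard cost, which is the design consideration the abstract raises.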
Hybrid Computer Errors in Engineering Flight Simulation.
1979-08-01
digital computation is a direct function of the problem complexity, which is not the case with analog computation. This is due to the serial processing... hybrid loop, a full scale analysis of the effects of hybrid computer errors on the accuracy of a typical flight simulation mathematical model was... technique directly influences the stability of the simulated system. Since the complexity of aerospace simulation problems makes pure analog computation
Propagation of atmospheric density errors to satellite orbits
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Warren, H. P.; Segerman, A. M.; Byers, J. M.; Picone, J. M.
2017-01-01
We develop and test approximate analytic expressions relating time-dependent atmospheric density errors to errors in the mean motion and mean anomaly orbital elements. The mean motion and mean anomaly errors are proportional to the first and second integrals, respectively, of the density error. This means that the mean anomaly (and hence the in-track position) error variance grows with time as t^3 for a white noise density error process and as t^5 for a Brownian motion density error process. Our approximate expressions are accurate over a wide range of orbital configurations, provided the perigee altitude change is less than ∼0.2 atmospheric scale heights. For orbit prediction, density forecasts are driven in large part by forecasts of solar extreme ultraviolet (EUV) irradiance; we show that errors in EUV ten-day forecasts (and consequently in the density forecasts) approximately follow a Brownian motion process.
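The quoted t^3 and t^5 variance growth can be checked with a short Monte Carlo sketch (discrete time, unit step and unit noise; illustrative, not the authors' code): the mean anomaly error is modeled as the second integral of the density error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_steps = 2000, 1024
t = np.arange(1, n_steps + 1)

eps = rng.standard_normal((n_runs, n_steps))
density = {"white": eps, "brownian": np.cumsum(eps, axis=1)}

slopes = {}
for name, d in density.items():
    # mean anomaly error ~ second integral of the density error
    second_integral = np.cumsum(np.cumsum(d, axis=1), axis=1)
    var = second_integral.var(axis=0)
    # fit the log-log slope over the later half of the record
    half = n_steps // 2
    slopes[name] = np.polyfit(np.log(t[half:]), np.log(var[half:]), 1)[0]
    print(f"{name}: variance grows ~ t^{slopes[name]:.2f}")
```

The fitted exponents come out near 3 and 5, matching the scaling stated in the abstract.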
Simulating Bosonic Baths with Error Bars
NASA Astrophysics Data System (ADS)
Woods, M. P.; Cramer, M.; Plenio, M. B.
2015-09-01
We derive rigorous truncation-error bounds for the spin-boson model and its generalizations to arbitrary quantum systems interacting with bosonic baths. For the numerical simulation of such baths, the truncation of both the number of modes and the local Hilbert-space dimensions is necessary. We derive superexponential Lieb-Robinson-type bounds on the error when restricting the bath to finitely many modes and show how the error introduced by truncating the local Hilbert spaces may be efficiently monitored numerically. In this way we give error bounds for approximating the infinite system by a finite-dimensional one. As a consequence, numerical simulations such as the time-evolving density with orthogonal polynomials algorithm (TEDOPA) now allow for the fully certified treatment of the system-environment interaction.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s^-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated
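The core machinery, correlated interpolation errors drawn via a Cholesky factor and pushed through a model by Monte Carlo, can be sketched as follows. Only the kriging SDs (2.6 °C, 8.7%, 0.38 m/s) come from the abstract; the correlation matrix, the kriged point values, and the PET-like surrogate function are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

sd = np.array([2.6, 8.7, 0.38])        # kriging SDs: temp, RH, wind
corr = np.array([[ 1.0, -0.5,  0.2],   # assumed error correlations
                 [-0.5,  1.0, -0.1],
                 [ 0.2, -0.1,  1.0]])
L = np.linalg.cholesky(corr)

def pet(temp, rh, wind):
    # toy stand-in for a PET model, NOT the model used in the paper
    return 0.05 * np.maximum(temp, 0.0) * (1.0 - rh / 100.0) * (1.0 + 0.3 * wind)

temp0, rh0, wind0 = 15.0, 60.0, 3.0    # hypothetical kriged values at one cell
z = rng.standard_normal((10_000, 3)) @ L.T * sd   # correlated error draws
samples = pet(temp0 + z[:, 0], rh0 + z[:, 1], wind0 + z[:, 2])
cv = samples.std() / samples.mean()
print(f"PET coefficient of variation from interpolation uncertainty: {cv:.1%}")
```

Multiplying standard normal draws by the transposed Cholesky factor gives samples with the target correlation structure, which is exactly the role the kriging-SD-plus-correlation step plays in the abstract.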
Error propagation in the procedure of pressure reconstruction based on PIV data
NASA Astrophysics Data System (ADS)
Wang, Zhongyi; Gao, Qi; Wei, Runjie; Wang, Jinjun
2017-04-01
Pressure reconstruction based on particle image velocimetry (PIV) has become a popular technique in experimental fluid mechanics. Errors in the raw velocity field significantly affect the accuracy of the pressure gradient field and further reduce the quality of the reconstructed pressure; thus, error propagation deserves serious attention. The form and magnitude of the errors are investigated using a probability density function (PDF) based method. Theoretical derivation and numerical validation are carried out in both Eulerian and Lagrangian descriptions. The influence of spatial and temporal resolutions on error propagation is discussed, and a criterion for parameter selection for error suppression in the Lagrangian method is proposed. The results show that error propagation in the Lagrangian method has a definite form and magnitude, which makes this method more suitable for error control through specific treatment. The time interval during pressure reconstruction should be carefully determined, since a large uncertainty appears when the time interval is chosen improperly.
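One reason velocity errors degrade the pressure gradient so strongly is that differentiation amplifies noise by roughly 1/dx: for a central difference the output noise standard deviation is sigma/(sqrt(2)·dx). A quick numpy check with illustrative numbers (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx, sigma = 10_000, 0.01, 1e-3

x = np.arange(n) * dx
u_true = np.sin(x)                                  # clean "velocity" field
u_noisy = u_true + sigma * rng.standard_normal(n)   # add measurement noise

# np.gradient is linear, so the difference below is the differentiated noise
noise_out = (np.gradient(u_noisy, dx) - np.gradient(u_true, dx)).std()
amplification = noise_out / sigma
print(f"noise amplification: {amplification:.1f}x "
      f"(predicted 1/(sqrt(2)*dx) = {1 / (np.sqrt(2) * dx):.1f}x)")
```

With dx = 0.01, millimetre-scale velocity noise becomes roughly 70 times larger in the gradient, which is why the subsequent pressure integration step has to be designed with error propagation in mind.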
NASA Technical Reports Server (NTRS)
Weir, Kent A.; Wells, Eugene M.
1990-01-01
The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
Vasquez, Victor R; Whiting, Wallace B
2005-12-01
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that deal mainly with experimental data. In these types of models (linear and nonlinear regression, and nonregression computer models) involving experimental measurements, it is commonly assumed that the error sources are mainly random and independent, with no constant background (systematic) errors. However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions for the random and systematic errors. The main objectives are to detect the error source with stochastic dominance on the uncertainty propagation and the combined effect on output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has a more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
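A toy version of the comparison (an invented quadratic model with made-up error magnitudes, not the paper's case studies) shows how a constant bias shifts the output distribution relative to purely random errors:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
x_true, sigma, bias = 10.0, 0.2, 0.5   # illustrative magnitudes

def model(x):           # stand-in nonlinear model
    return x**2

# (a) random errors only vs (b) random errors plus a constant bias
out_random = model(x_true + sigma * rng.standard_normal(n))
out_biased = model(x_true + bias + sigma * rng.standard_normal(n))

shift = out_biased.mean() - out_random.mean()
print(f"mean output shift from the bias: {shift:.2f}")
print(f"spread from random errors alone: {out_random.std():.2f}")
```

Here the shift induced by the bias (about 2·x·b + b² ≈ 10.25) dominates the spread from the random errors alone (about 2·x·σ ≈ 4), the kind of stochastic dominance by the systematic component that the abstract warns cannot be neglected.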
Propagation of errors from the sensitivity image in list mode reconstruction
Qi, Jinyi; Huesman, Ronald H.
2003-11-15
List mode image reconstruction is attracting renewed attention. It eliminates the storage of empty sinogram bins. However, a single back projection of all lines of response (LORs) is still necessary for the pre-calculation of a sensitivity image. Since the detection sensitivity depends on the object attenuation and detector efficiency, it must be computed for each study. Exact computation of the sensitivity image can be a daunting task for modern scanners with huge numbers of LORs, so some fast approximate calculation may be desirable. In this paper, we theoretically analyze the error propagation from the sensitivity image into the reconstructed image. The theoretical analysis is based on the fixed point condition of the list mode reconstruction. The non-negativity constraint is modeled using the Kuhn-Tucker condition. With certain assumptions and a first order Taylor series approximation, we derive a closed form expression for the error in the reconstructed image as a function of the error in the sensitivity image. The result provides insight into what kind of error might be allowable in the sensitivity image. Computer simulations show that the theoretical results are in good agreement with the measured results.
Investigation of Propagation in Foliage Using Simulation Techniques
2011-12-01
simulation models provide a rough approximation to radiowave propagation in an actual rainforest environment. Based on the simulated results, the path...
Error propagation and scaling for tropical forest biomass estimates.
Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando
2004-01-01
The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to reproduce well the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
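The DRGEP step can be sketched as a max-product graph search: the overall error-propagation coefficient of a species is the maximum, over all graph paths from the target, of the product of direct interaction coefficients. This is a generic implementation of the published idea with an invented toy mechanism graph, not the authors' code.

```python
import heapq

def drgep_coefficients(direct, target):
    """direct[a][b]: direct interaction coefficient of species b for a.
    Returns the overall coefficient R for every reachable species,
    via a Dijkstra-style search on the max-product path metric."""
    best = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg_r, a = heapq.heappop(heap)
        r = -neg_r
        if r < best.get(a, 0.0):
            continue  # stale heap entry
        for b, r_ab in direct.get(a, {}).items():
            cand = r * r_ab
            if cand > best.get(b, 0.0):
                best[b] = cand
                heapq.heappush(heap, (-cand, b))
    return best

# Tiny hypothetical graph: fuel F, intermediates I1/I2, minor species M.
direct = {
    "F":  {"I1": 0.9, "M": 0.01},
    "I1": {"I2": 0.5, "M": 0.2},
    "I2": {"M": 0.3},
}
R = drgep_coefficients(direct, "F")
print(R)  # species with R below a chosen threshold are removal candidates
```

Note that M's coefficient is 0.18 (via F→I1→M), not its weak direct value 0.01: error propagates along the strongest path, which is the point of the EP extension.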
Error analysis of mixed finite element methods for wave propagation in double negative metamaterials
NASA Astrophysics Data System (ADS)
Li, Jichun
2007-12-01
In this paper, we develop both semi-discrete and fully discrete mixed finite element methods for modeling wave propagation in three-dimensional double negative metamaterials. Optimal error estimates are proved for Nedelec spaces under the assumption of smooth solutions. To the best of our knowledge, this is the first error analysis obtained for Maxwell's equations when metamaterials are involved.
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Truscott, Tadd
2016-11-01
Little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure calculation. Rather than measure experimental error, we analytically investigate error propagation by examining the properties of the Poisson equation directly. Our results provide two contributions to the PIV community. First, we quantify the error bound in the pressure field by illustrating the mathematical roots of why and how errors in PIV-based pressure calculations propagate. Second, we design the "worst case error" for a pressure Poisson solver. In other words, we provide a systematic example where relatively small errors in the experimental data lead to the maximum error in the corresponding pressure calculation. The 2D calculation of the worst case error surprisingly leads to the classic Kirchhoff plates problem, connecting the PIV-based pressure calculation, a typical fluid problem, to elastic dynamics. The results can be used to minimize experimental error by avoiding worst-case scenarios. More importantly, they can be used to design synthetic velocity errors for future PIV-pressure challenges, providing the hardest test cases for such examinations.
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent in both absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (> 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with the addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
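The leading-order effect of a pressure offset on the mixing ratio follows directly from O3MR = pO3/P: a sensor that reads high by dP scales the retrieved mixing ratio by P/(P + dP). A back-of-envelope sketch (altitude-registration effects are ignored; the pressure levels are illustrative):

```python
def o3mr_relative_error(p_true_hpa, dp_hpa):
    # Reported O3MR = pO3 / (P + dP); true O3MR = pO3 / P.
    # Relative error is therefore P/(P + dP) - 1 ~= -dP/P for small dP.
    p_sonde = p_true_hpa + dp_hpa
    return p_true_hpa / p_sonde - 1.0

for p in (500.0, 100.0, 20.0, 10.0):  # roughly 5.5, 16, 26, 31 km
    print(f"P = {p:5.0f} hPa, 1 hPa offset: "
          f"O3MR error = {o3mr_relative_error(p, 1.0):+.1%}")
```

A fixed 1 hPa offset is negligible at 500 hPa but approaches 5% near 20 hPa (~26 km), consistent with the altitude dependence described in the abstract.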
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa.
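Since O3MR is obtained by dividing the ozonesonde-measured O3 partial pressure by the radiosonde ambient pressure, a pressure-sensor offset maps directly into a relative mixing-ratio error. A minimal Python sketch of this relationship (the function names and sample values are illustrative, not the study's data):

```python
def o3_mixing_ratio(p_o3_mpa, p_air_hpa):
    """O3 mixing ratio in ppmv from the O3 partial pressure (mPa)
    and ambient pressure (hPa); 1 hPa = 1e5 mPa."""
    return p_o3_mpa / (p_air_hpa * 1e5) * 1e6

def o3mr_relative_error(p_air_hpa, offset_hpa):
    """Relative O3MR error caused by a pressure-sensor offset: since
    O3MR scales as 1/p, the biased-to-true ratio is p/(p + offset)."""
    return p_air_hpa / (p_air_hpa + offset_hpa) - 1.0

# Illustrative values: at ~26 km (about 20 hPa ambient) a +1.0 hPa
# offset biases O3MR by roughly -5%, consistent with the abstract;
# in the free troposphere (~500 hPa) a 0.6 hPa offset is negligible.
err_26km = o3mr_relative_error(20.0, 1.0)
err_tropo = o3mr_relative_error(500.0, 0.6)
```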
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006 and 2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesondes (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
Simulation of MAD Cow Disease Propagation
NASA Astrophysics Data System (ADS)
Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.
A computer simulation of the dynamics of BSE disease spread is presented. Both vertical (to offspring) and horizontal (to neighbor) mechanisms of disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000 with the initial population randomly distributed on the net. The disease may be introduced either with the initial population or by spontaneous development of BSE in an animal, at a small frequency. The main results show a critical probability of BSE transmission above which the disease persists in the population. This value is sensitive to possible spatial clustering of the population, and it also depends on the mechanisms responsible for disease onset, evolution and propagation. A threshold birth rate below which the population goes extinct is seen. Above this threshold the population is disease-free at equilibrium until another birth rate value is reached, above which the disease is present in the population. For the typical model parameters used in the simulation, which may correspond to mad cow disease, we are close to the BSE-free case.
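The horizontal-spread mechanism described above can be sketched as a synchronous lattice update. This toy version keeps only neighbor infection and spontaneous onset, omitting vertical transmission and birth/death dynamics; all probabilities are invented:

```python
import random

def sweep(grid, p_spread, p_spont, rng):
    """One synchronous update of a toy lattice epidemic: a susceptible
    cell becomes infected if an infected von Neumann neighbour exists
    (with probability p_spread, horizontal spread) or spontaneously
    (with probability p_spont). Periodic boundary conditions."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 1:
                continue  # already infected
            has_sick_nb = any(grid[(i + di) % n][(j + dj) % n] == 1
                              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            if (has_sick_nb and rng.random() < p_spread) or rng.random() < p_spont:
                new[i][j] = 1
    return new
```

Iterating `sweep` while scanning `p_spread` would locate the critical transmission probability the abstract refers to.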
Hoogeveen, R C; Martens, E P; van der Stelt, P F; Berkhout, W E R
2015-01-01
To investigate if software simulation is practical for quantifying random error (RE) in phantom dosimetry. We applied software error simulation to an existing dosimetry study. The specifications and the measurement values of this study were brought into the software (R version 3.0.2) together with the algorithm of the calculation of the effective dose (E). Four sources of RE were specified: (1) the calibration factor; (2) the background radiation correction; (3) the read-out process of the dosimeters; and (4) the fluctuation of the X-ray generator. The amount of RE introduced by these sources was calculated on the basis of the experimental values and the mathematical rules of error propagation. The software repeated the calculations of E multiple times (n = 10,000) while attributing the applicable RE to the experimental values. A distribution of E emerged as a confidence interval around an expected value. Credible confidence intervals around E in phantom dose studies can be calculated by using software modelling of the experiment. With credible confidence intervals, the statistical significance of differences between protocols can be substantiated or rejected. This modelling software can also be used for a power analysis when planning phantom dose experiments.
Hoogeveen, R. C.; Martens, E. P.; van der Stelt, P. F.; Berkhout, W. E. R.
2015-01-01
Objective. To investigate if software simulation is practical for quantifying random error (RE) in phantom dosimetry. Materials and Methods. We applied software error simulation to an existing dosimetry study. The specifications and the measurement values of this study were brought into the software (R version 3.0.2) together with the algorithm of the calculation of the effective dose (E). Four sources of RE were specified: (1) the calibration factor; (2) the background radiation correction; (3) the read-out process of the dosimeters; and (4) the fluctuation of the X-ray generator. Results. The amount of RE introduced by these sources was calculated on the basis of the experimental values and the mathematical rules of error propagation. The software repeated the calculations of E multiple times (n = 10,000) while attributing the applicable RE to the experimental values. A distribution of E emerged as a confidence interval around an expected value. Conclusions. Credible confidence intervals around E in phantom dose studies can be calculated by using software modelling of the experiment. With credible confidence intervals, the statistical significance of differences between protocols can be substantiated or rejected. This modelling software can also be used for a power analysis when planning phantom dose experiments. PMID:26881200
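The Monte Carlo procedure described above (repeating the dose calculation n = 10,000 times while perturbing the inputs by their random errors) can be sketched as follows. The nominal values, error magnitudes, and the dose formula are invented stand-ins, since the study's algorithm for E is not given in the abstract:

```python
import random

def simulate_effective_dose(n=10_000, seed=42):
    """Monte Carlo propagation of the four random-error (RE) sources
    into an effective-dose calculation. Returns the expected value
    and an ~95% interval around it. All numbers are illustrative."""
    rng = random.Random(seed)
    doses = []
    for _ in range(n):
        cal  = rng.gauss(1.00, 0.02)    # (1) calibration factor RE
        bg   = rng.gauss(2.00, 0.10)    # (2) background correction RE
        read = rng.gauss(100.0, 1.0)    # (3) dosimeter read-out RE
        gen  = rng.gauss(1.00, 0.03)    # (4) X-ray generator fluctuation
        doses.append((read - bg) * cal * gen)
    doses.sort()
    mean = sum(doses) / n
    ci = (doses[round(0.025 * n)], doses[round(0.975 * n)])
    return mean, ci
```

Comparing the intervals of two protocols simulated this way is how the statistical significance of their difference would be judged.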
Nonparametric Second-Order Theory of Error Propagation on Motion Groups.
Wang, Yunfeng; Chirikjian, Gregory S
2008-01-01
Error propagation on the Euclidean motion group arises in a number of areas such as in dead reckoning errors in mobile robot navigation and joint errors that accumulate from the base to the distal end of kinematic chains such as manipulators and biological macromolecules. We address error propagation in rigid-body poses in a coordinate-free way. In this paper we show how errors propagated by convolution on the Euclidean motion group, SE(3), can be approximated to second order using the theory of Lie algebras and Lie groups. We then show how errors that are small (but not so small that linearization is valid) can be propagated by a recursive formula derived here. This formula takes into account errors to second-order, whereas prior efforts only considered the first-order case. Our formulation is nonparametric in the sense that it will work for probability density functions of any form (not only Gaussians). Numerical tests demonstrate the accuracy of this second-order theory in the context of a manipulator arm and a flexible needle with bevel tip.
Nonparametric Second-Order Theory of Error Propagation on Motion Groups
Wang, Yunfeng; Chirikjian, Gregory S.
2010-01-01
Error propagation on the Euclidean motion group arises in a number of areas such as in dead reckoning errors in mobile robot navigation and joint errors that accumulate from the base to the distal end of kinematic chains such as manipulators and biological macromolecules. We address error propagation in rigid-body poses in a coordinate-free way. In this paper we show how errors propagated by convolution on the Euclidean motion group, SE(3), can be approximated to second order using the theory of Lie algebras and Lie groups. We then show how errors that are small (but not so small that linearization is valid) can be propagated by a recursive formula derived here. This formula takes into account errors to second-order, whereas prior efforts only considered the first-order case. Our formulation is nonparametric in the sense that it will work for probability density functions of any form (not only Gaussians). Numerical tests demonstrate the accuracy of this second-order theory in the context of a manipulator arm and a flexible needle with bevel tip. PMID:20333324
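A nonparametric (sample-based) flavor of error propagation on a motion group can be illustrated on SE(2), which is easier to write down than SE(3): samples are pushed through repeated noisy pose compositions, the Monte Carlo counterpart of convolution on the group. Noise levels and step counts here are invented:

```python
import math, random

def se2_compose(a, b):
    """Compose two planar poses (x, y, theta): pose a followed by b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def dead_reckoning_cloud(n_steps=10, n_samples=5000, seed=0):
    """Propagate dead-reckoning error nonparametrically: each sample
    is a pose history with small Gaussian step noise in translation
    and heading. Works for any noise density, not only Gaussians."""
    rng = random.Random(seed)
    poses = [(0.0, 0.0, 0.0)] * n_samples
    for _ in range(n_steps):
        poses = [se2_compose(p, (1.0 + rng.gauss(0, 0.01),
                                 rng.gauss(0, 0.01),
                                 rng.gauss(0, 0.02))) for p in poses]
    mean_x = sum(p[0] for p in poses) / n_samples
    mean_y = sum(p[1] for p in poses) / n_samples
    return mean_x, mean_y
```

Note that the mean end position falls slightly short of 10 units: heading noise bends the sample cloud, the curvature effect that second-order (but not first-order) propagation captures analytically.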
Hierarchical Boltzmann simulations and model error estimation
NASA Astrophysics Data System (ADS)
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement successively improves the result toward the full Boltzmann solution. We use a Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept of such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of the stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be obtained by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
Numerical simulation of wave propagation in cancellous bone.
Padilla, F; Bossy, E; Haiat, G; Jenson, F; Laugier, P
2006-12-22
Numerical simulation of wave propagation is performed through 31 3D volumes of trabecular bone. These volumes were reconstructed from high-resolution synchrotron microtomography experiments and are used as the input geometry in simulation software developed in our laboratory. The simulation algorithm accounts for propagation in both the saturating fluid and the bone, but absorption is not taken into account. We show that 3D simulation predicts phenomena observed experimentally in trabecular bones: linear frequency dependence of attenuation, increase of attenuation and speed of sound with the bone volume fraction, negative phase velocity dispersion in most of the specimens, and propagation of fast and slow waves depending on the orientation of the trabecular network relative to the direction of propagation of the ultrasound. Moreover, the predicted attenuation is in very close agreement with the experimental one measured on the same specimens. Coupling numerical simulation with real bone architecture therefore provides a powerful tool to investigate the physics of ultrasound propagation in trabecular structures.
Zandbergen, P A; Hart, T C; Lenzer, K E; Camponovo, M E
2012-04-01
The quality of geocoding has received substantial attention in recent years. A synthesis of published studies shows that the positional errors of street geocoding are somewhat unique relative to those of other types of spatial data: (1) the magnitude of error varies strongly across urban-rural gradients; (2) the direction of error is not uniform, but strongly associated with the properties of local street segments; (3) the distribution of errors does not follow a normal distribution, but is highly skewed and characterized by a substantial number of very large error values; and (4) the magnitude of error is spatially autocorrelated and is related to properties of the reference data. This makes it difficult to employ analytic approaches or Monte Carlo simulations for error propagation modeling because these rely on generalized statistical characteristics. The current paper describes an alternative empirical approach to error propagation modeling for geocoded data and illustrates its implementation using three different case-studies of geocoded individual-level datasets. The first case-study consists of determining the land cover categories associated with geocoded addresses using a point-in-raster overlay. The second case-study consists of a local hotspot characterization using kernel density analysis of geocoded addresses. The third case-study consists of a spatial data aggregation using enumeration areas of varying spatial resolution. For each case-study a high quality reference scenario based on address points forms the basis for the analysis, which is then compared to the result of various street geocoding techniques. Results show that the unique nature of the positional error of street geocoding introduces substantial noise in the result of spatial analysis, including a substantial amount of bias for some analysis scenarios. This confirms findings from earlier studies, but expands these to a wider range of analytical techniques. Copyright © 2012 Elsevier Ltd
Zandbergen, P.A.; Hart, T.C.; Lenzer, K.E.; Camponovo, M.E.
2012-01-01
The quality of geocoding has received substantial attention in recent years. A synthesis of published studies shows that the positional errors of street geocoding are somewhat unique relative to those of other types of spatial data: 1) the magnitude of error varies strongly across urban-rural gradients; 2) the direction of error is not uniform, but strongly associated with the properties of local street segments; 3) the distribution of errors does not follow a normal distribution, but is highly skewed and characterized by a substantial number of very large error values; and 4) the magnitude of error is spatially autocorrelated and is related to properties of the reference data. This makes it difficult to employ analytic approaches or Monte Carlo simulations for error propagation modeling because these rely on generalized statistical characteristics. The current paper describes an alternative empirical approach to error propagation modeling for geocoded data and illustrates its implementation using three different case-studies of geocoded individual-level datasets. The first case-study consists of determining the land cover categories associated with geocoded addresses using a point-in-raster overlay. The second case-study consists of a local hotspot characterization using kernel density analysis of geocoded addresses. The third case-study consists of a spatial data aggregation using enumeration areas of varying spatial resolution. For each case-study a high quality reference scenario based on address points forms the basis for the analysis, which is then compared to the result of various street geocoding techniques. Results show that the unique nature of the positional error of street geocoding introduces substantial noise in the result of spatial analysis, including a substantial amount of bias for some analysis scenarios. This confirms findings from earlier studies, but expands these to a wider range of analytical techniques. PMID:22469492
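The empirical approach described above, drawing positional errors from the observed skewed distribution instead of a fitted normal, can be sketched as a resampling step. The error magnitudes below are hypothetical stand-ins for street-geocoding versus address-point comparisons:

```python
import math, random

def perturb_points(points, observed_errors, seed=1):
    """Empirical error propagation for geocoded points: displace each
    point by a magnitude resampled from the observed (skewed,
    heavy-tailed) error distribution, in a uniformly random direction.
    A fuller model would also condition the direction on local street
    geometry, as the studies above note."""
    rng = random.Random(seed)
    out = []
    for x, y in points:
        d = rng.choice(observed_errors)       # empirical resampling
        ang = rng.uniform(0.0, 2.0 * math.pi)
        out.append((x + d * math.cos(ang), y + d * math.sin(ang)))
    return out

# A skewed empirical distribution: mostly small errors, one long tail
errors_m = [8.0, 12.0, 15.0, 22.0, 30.0, 45.0, 60.0, 400.0]
realization = perturb_points([(0.0, 0.0)] * 200, errors_m)
```

Re-running the spatial analysis (overlay, kernel density, aggregation) on many such realizations yields the error distribution of the analysis output.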
Simulation of sound propagation over porous barriers of arbitrary shapes.
Ke, Guoyi; Zheng, Z C
2015-01-01
A time-domain solver using an immersed boundary method is investigated for simulating sound propagation over porous and rigid barriers of arbitrary shapes. In this study, acoustic propagation in the air from an impulse source over the ground is considered as a model problem. The linearized Euler equations are solved for sound propagation in the air and the Zwikker-Kosten equations for propagation in barriers as well as in the ground. In comparison to the analytical solutions, the numerical scheme is validated for the cases of a single rigid barrier with different shapes and for two rigid triangular barriers. Sound propagations around barriers with different porous materials are then simulated and discussed. The results show that the simulation is able to capture the sound propagation behaviors accurately around both rigid and porous barriers.
1978-12-01
as fluctuations in the burn of the rocket motor during orbital insertion or small errors caused by a slightly inaccurate guidance package. All of these factors contribute to position and velocity errors which occur during the powered phase of the mission, up to booster burnout. Hence, the...satellite. Computing the Launch Site Coordinates in Inertial Space: This initial phase of the program was designed to allow the earth location of any
Prediction and simulation errors in parameter estimation for nonlinear systems
NASA Astrophysics Data System (ADS)
Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.
2010-11-01
This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results, obtained using different model classes, show that, in general, the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of errors-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
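The distinction between the two cost functions is easy to make concrete for an AR(1) model: the prediction error feeds each prediction with the measured output, while the free-run simulation error iterates the model on its own output, so model deficiencies accumulate. A minimal sketch (model class and data are illustrative):

```python
def prediction_error(theta, y):
    """Sum of squared one-step-ahead prediction errors for the AR(1)
    model y[k] = theta * y[k-1]; each prediction uses the measured
    previous output y[k-1]."""
    return sum((y[k] - theta * y[k - 1]) ** 2 for k in range(1, len(y)))

def simulation_error(theta, y):
    """Sum of squared free-run simulation errors: the model is
    iterated on its own previous output, as in simulation."""
    sim, total = y[0], 0.0
    for k in range(1, len(y)):
        sim = theta * sim
        total += (y[k] - sim) ** 2
    return total

# Noise-free data generated with theta = 0.8: both costs vanish at the
# true parameter, but the free-run cost penalizes a wrong theta harder.
y = [0.8 ** k for k in range(30)]
```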
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.
Error propagation and metamodeling for a fidelity tradeoff capability in complex systems design
NASA Astrophysics Data System (ADS)
McDonald, Robert A.
Complex man-made systems are ubiquitous in modern technological society. The national air transportation infrastructure and the aircraft that operate within it, the highways stretching coast-to-coast and the vehicles that travel on them, and global communications networks and the computers that make them possible are all complex systems. It is impossible to fully validate a systems analysis or a design process. Systems are too large, complex, and expensive for test and validation articles to be built. Furthermore, the operating conditions throughout the life cycle of a system are impossible to predict and control for a validation experiment. Error is introduced at every point in a complex systems design process. Every error source propagates through the complex system just as information does: feedforward, feedback, and coupling are all present with error. As with error propagation through a single analysis, error sources grow and decay when propagated through a complex system. These behaviors are made more complex by the interactions of a complete system. This complication, and the loss of intuition that accompanies it, makes proper error propagation calculations even more important as an aid to the decision maker. Error allocation and fidelity trade decisions answer questions like: Is the fidelity of a complex systems analysis adequate, or is an improvement needed? If an improvement is needed, how is that improvement best achieved? Where should limited resources be invested for the improvement of fidelity? How does knowledge of the imperfection of a model impact design decisions based on the model and the certainty of the performance of a particular design? In this research, a fidelity trade environment was conceived, formulated, developed, and demonstrated. This development relied on the advancement of enabling techniques including error propagation, metamodeling, and information management. A notional transport aircraft is modeled in the fidelity trade
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
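The Taylor-series model underlying such equations combines sensitivity coefficients (partial derivatives) with the input standard uncertainties in root-sum-square form. A generic numerical sketch, applied here to a pressure coefficient with invented values and uncertainties:

```python
import math

def propagated_uncertainty(f, x, sx, h=1e-6):
    """First-order Taylor-series error propagation: estimate each
    sensitivity coefficient df/dx_i by central differences and
    combine the input standard uncertainties sx in root-sum-square."""
    var = 0.0
    for i in range(len(x)):
        step = h * max(abs(x[i]), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        dfdx = (f(xp) - f(xm)) / (2.0 * step)   # sensitivity coefficient
        var += (dfdx * sx[i]) ** 2
    return math.sqrt(var)

# Pressure coefficient Cp = (p - p_inf) / q_inf, q_inf = 0.7 * p_inf * M**2
# (gamma = 1.4). The values and uncertainties below are invented.
cp = lambda v: (v[0] - v[1]) / (0.7 * v[1] * v[2] ** 2)
s_cp = propagated_uncertainty(cp, [120.0, 100.0, 2.0], [0.5, 0.5, 0.01])
```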
Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements
NASA Astrophysics Data System (ADS)
Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.
2012-12-01
This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.
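The core of the MCB idea, adjusting the bed so the flux implied by surface velocity matches the balance flux, can be sketched in one dimension. The shape factor gamma (ratio of depth-averaged to surface velocity) and all numbers are hypothetical:

```python
def mass_conserving_bed(surface, balance_flux, surf_velocity, gamma=0.9):
    """1-D sketch of a mass-conserving bed: choose ice thickness
    H = q / (gamma * u_s) so that the flux implied by the measured
    surface velocity u_s matches the balance flux q, then subtract
    H from the surface elevation to get the bed elevation."""
    return [s - q / (gamma * u)
            for s, q, u in zip(surface, balance_flux, surf_velocity)]

# Invented numbers: surface 1500 m, balance flux 45000 m^2/a,
# surface speed 100 m/a  ->  thickness 500 m, bed at 1000 m.
bed = mass_conserving_bed([1500.0], [45000.0], [100.0])
```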
NASA Astrophysics Data System (ADS)
Hanayama, Hiroki; Nakamura, Takuya; Takagi, Ryo; Yoshizawa, Shin; Umemura, Shin-ichiro
2017-07-01
A shadowgraph method has been proposed for fast, noninterfering measurement of ultrasonic pressure fields. However, special care has been needed in choosing an appropriate optical propagation length to obtain a satisfactory signal-to-noise ratio while avoiding error from the geometrical optics approximation. In this study, we propose a new numerical method, replacing the geometrical optics approximation, to retrieve the optical phase for the measurement. The optical intensity distribution obtained from numerical simulation of optical propagation based on the Huygens-Fresnel principle agreed well with the measurement at relatively large optical propagation lengths, at which the geometrical optics approximation failed. Optical phase retrieval from the simulated optical intensity distribution by the proposed method was then tested. The range of optical propagation lengths allowing successful phase retrieval was extended a few times by the proposed method compared with the geometrical optics approximation.
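Numerical propagation based on the Huygens-Fresnel principle can be sketched with the angular-spectrum method, its standard discrete counterpart for scalar fields; wavelength, sampling, and distance below are illustrative, not the study's parameters:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled 1-D complex optical field a distance z
    using the angular-spectrum method: multiply each spatial-frequency
    component by its plane-wave phase factor exp(i*kz*z). Evanescent
    components (fx > 1/wavelength) are simply dropped here."""
    fx = np.fft.fftfreq(field.size, d=dx)              # spatial frequencies
    arg = np.maximum(0.0, 1.0 / wavelength**2 - fx**2)
    kz = 2.0 * np.pi * np.sqrt(arg)                    # axial wavenumber
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * z))

# A uniform plane wave only acquires phase, so its magnitude is unchanged.
f0 = np.ones(256, dtype=complex)
f1 = angular_spectrum_propagate(f0, wavelength=633e-9, dx=5e-6, z=1e-3)
```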
Kiguchi, M
1999-09-20
The intrinsic error propagation in a technique that uses total reflection geometry for the measurement of chi(3) is calculated. The results show how accurately the parameters should be measured to obtain the chi(3) value with the required precision. The film thickness should be slightly less than the fundamental wavelength to reduce the chi(3) error that propagates from other parameters.
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
2014-09-01
hour; for our initial computation, we used a solar EUV irradiance uncertainty of 7% at a forecast time of 7 days, so that the forecast error at the...We developed approximate expressions for how solar irradiance forecast errors propagate to atmospheric density forecasts and then to in-track...trajectories of most objects in low-Earth orbit, and solar variability is the largest source of error in upper atmospheric density forecasts. There is
Effects of Error Experience When Learning to Simulate Hypernasality
ERIC Educational Resources Information Center
Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.
2013-01-01
Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…
Sarfati, L; Ranchon, F; Vantard, N; Schwiertz, V; Gauthier, N; He, S; Kiouris, E; Gourc-Berthod, C; Guédat, M G; Alloux, C; Gustin, M-P; You, B; Trillet-Lenoir, V; Freyer, G; Rioufol, C
2015-02-01
Medication errors (ME) in oncology are known to cause serious iatrogenic complications. However, MEs still occur at each step in the anticancer chemotherapy process, particularly when injections are prepared in the hospital pharmacy. This study assessed whether an ME simulation program would help prevent ME-associated iatrogenic complications. The 5-month prospective study, consisting of three phases, was undertaken in the centralized pharmaceutical unit of a university hospital in Lyon, France. During the first simulation phase, 25 instruction sheets, each containing one simulated error, were inserted among the various instruction sheets issued to blinded technicians. The second phase consisted of activity aimed at raising pharmacy technicians' awareness of the risk of medication errors associated with antineoplastic drugs. The third phase consisted of re-enacting the error simulation process 3 months after the awareness campaign. The rate and severity of undetected medication errors were measured during the two simulation (first and third) phases. The potential seriousness of the ME was assessed using the NCC MERP(®) index. The rate of undetected medication errors decreased from 12 in the first simulation phase (48%) to five in the second simulation phase (20%, P = 0.04). The number of potential deaths due to administration of a faulty preparation decreased from three to zero. Awareness of iatrogenic risk through error simulation allowed pharmacy technicians to improve their ability to identify errors. This study is the first demonstration of the successful application of a simulation-based learning tool for reducing errors in the preparation of injectable anticancer drugs. Such a program should form part of the continuous quality improvement of risk management strategies for cancer patients. © 2014 John Wiley & Sons Ltd.
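The quoted improvement (12/25 = 48% undetected errors before versus 5/25 = 20% after, P = 0.04) can be reproduced approximately with a two-proportion z-test. The abstract does not state which test the authors used, so this is one plausible choice:

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided two-proportion z-test p-value (equivalent to a
    chi-square test without continuity correction): compare the
    proportions x1/n1 and x2/n2 under a pooled-variance null."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2.0))   # two-sided tail probability

# Undetected simulated errors: 12/25 (48%) before vs 5/25 (20%) after
p = two_proportion_p(12, 25, 5, 25)   # close to the reported P = 0.04
```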
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas
2016-12-01
There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. …
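The perturbation experiment at the heart of this study is easy to emulate: shift each landslide point a fixed distance in a random direction, resample the predictor at the shifted location, and refit the model. The sketch below is a self-contained toy version (synthetic slope grid, a plain numpy gradient-descent logistic fit, and a 120 m displacement, all invented for illustration), not the authors' actual workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1 km x 1 km predictor grid (e.g. slope angle), 1 m cells;
# slope increases eastwards so susceptibility has a clear spatial signal.
x = np.linspace(0, 1, 1000)
slope = np.tile(60 * x, (1000, 1))

def sample(points):
    """Read the predictor at (row, col) point coordinates."""
    r = np.clip(points[:, 0].astype(int), 0, 999)
    c = np.clip(points[:, 1].astype(int), 0, 999)
    return slope[r, c]

# "True" landslides preferentially occur on steep cells.
pts = rng.uniform(0, 1000, size=(500, 2))
p_true = 1 / (1 + np.exp(-(0.15 * sample(pts) - 5)))
u = rng.uniform(size=500)
slides = pts[u < p_true]
stable = pts[u >= p_true]

def fit_logreg(x1, x0, iters=5000, lr=0.05):
    """Plain gradient-descent logistic regression on one predictor."""
    X = np.concatenate([x1, x0])
    X = (X - X.mean()) / X.std()
    y = np.concatenate([np.ones(len(x1)), np.zeros(len(x0))])
    b0 = b1 = 0.0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(b0 + b1 * X)))
        b0 -= lr * np.mean(p - y)
        b1 -= lr * np.mean((p - y) * X)
    return b1

# Artificial positional error: displace each landslide 120 m (grid units)
# in a random direction, as in the largest scenario of the study.
theta = rng.uniform(0, 2 * np.pi, size=len(slides))
shifted = slides + 120 * np.column_stack([np.cos(theta), np.sin(theta)])

beta_clean = fit_logreg(sample(slides), sample(stable))
beta_noisy = fit_logreg(sample(shifted), sample(stable))
print(beta_clean, beta_noisy)
```

Displacement adds noise to the predictor values at landslide locations, which tends to attenuate the fitted coefficient, the same distortion mechanism the study quantifies with odds ratios.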
On the propagation of uncertainties in radiation belt simulations
NASA Astrophysics Data System (ADS)
Camporeale, Enrico; Shprits, Yuri; Chandorkar, Mandar; Drozdov, Alexander; Wing, Simon
2016-11-01
We present the first study of the uncertainties associated with radiation belt simulations, performed in the standard quasi-linear diffusion framework. In particular, we estimate how uncertainties of some input parameters propagate through the nonlinear simulation, producing a distribution of outputs that can be quite broad. Here we restrict our focus on two-dimensional simulations (in energy and pitch angle space) of parallel-propagating chorus waves only, and we study as stochastic input parameters the geomagnetic index Kp (that characterizes the time dependency of an idealized storm), the latitudinal extent of waves, and the average electron density. We employ a collocation method, thus performing an ensemble of simulations. The results of this work point to the necessity of shifting to a probabilistic interpretation of radiation belt simulation results and suggest that an accurate specification of a time-dependent density model is crucial for modeling the radiation environment.
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-01-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. PMID:27499587
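The error-bound result can be illustrated numerically. The sketch below (my own construction, not the paper's analysis) solves a discrete pressure-Poisson problem twice, once with clean data and once with noise added to the source term, and checks the perturbation against the classical maximum-principle bound for the unit square with Dirichlet boundary conditions, ||dp||_inf <= ||df||_inf / 8.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32                      # interior grid points per side, unit square
h = 1.0 / (n + 1)

def solve_poisson(f, iters=20000):
    """Jacobi iteration for -lap(p) = f with homogeneous Dirichlet BCs."""
    p = np.zeros((n + 2, n + 2))
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                                + p[1:-1, :-2] + p[1:-1, 2:]
                                + h * h * f)
    return p

# Smooth "true" source term and a noisy version of it, mimicking
# PIV-derived data contaminated by measurement error.
xx, yy = np.meshgrid(np.linspace(h, 1 - h, n), np.linspace(h, 1 - h, n))
f = np.sin(np.pi * xx) * np.sin(np.pi * yy)
eps = 0.05
f_noisy = f + eps * rng.standard_normal((n, n))

p_true = solve_poisson(f)
p_noisy = solve_poisson(f_noisy)
err = np.abs(p_noisy - p_true).max()

# The discrete 5-point Laplacian also obeys the maximum-principle bound
# |dp| <= |df|_inf / 8 on the unit square, so err should sit below it.
print(err, np.abs(f_noisy - f).max() / 8)
```

The paper's broader point, that the amplification constant depends on the boundary-condition type and domain dimensions, corresponds to how this bound changes when the Dirichlet conditions are replaced by Neumann ones or the domain is stretched.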
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
NASA Technical Reports Server (NTRS)
James, R.; Brownlow, J. D.
1985-01-01
A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.
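As an illustration of the kind of optimal-estimation filtering evaluated in the report, here is a minimal one-dimensional Kalman filter applied to synthetic noisy range data (constant-velocity model; all numbers are invented for the sketch and are not taken from the AN/FPS-16 study).

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0
t = np.arange(100.0)
truth = 5000.0 + 40.0 * t                            # target receding at 40 m/s
meas = truth + 25.0 * rng.standard_normal(t.size)    # noisy radar ranges

# Constant-velocity Kalman filter: state [range, range-rate].
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # we observe range only
Q = 0.01 * np.eye(2)                    # small process noise
R = np.array([[25.0 ** 2]])             # measurement noise variance

x = np.array([meas[0], 0.0])
P = np.diag([25.0 ** 2, 100.0 ** 2])
est = []
for z in meas:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])
est = np.array(est)

raw_rmse = np.sqrt(np.mean((meas - truth) ** 2))
kf_rmse = np.sqrt(np.mean((est[20:] - truth[20:]) ** 2))  # after settling
print(raw_rmse, kf_rmse)
```

A real evaluation, as in the report, would additionally model bias errors (e.g. the altitude bias defined from survey flights) since a Kalman filter only averages down zero-mean noise.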
NASA Astrophysics Data System (ADS)
Bedrossian, Manuel; Nadeau, Jay; Serabyn, Eugene; Lindensmith, Chris
2017-02-01
Quantitative phase imaging (QPI) has many applications in a broad range of disciplines from astronomy to microbiology. QPI is often performed by optical interferometry, where two coherent beams of light are used to produce interference patterns at a detector plane. Many algorithms exist to calculate the phase of the incident light from these recorded interference patterns as well as enhance their quality by various de-noising methods. Many of these de-noising algorithms, however, corrupt the quantitative aspect of the measurement, resulting in phase contrast images. Among these phase calculation techniques and de-noising algorithms, none approach the optimization of phase measurements by theoretically addressing the various sources of error in its measurement, as well as how these errors propagate to the phase calculations. In this work, we investigate the various sources of error in the measurements required for QPI, as well as theoretically derive the influence of each source of error on the overall phase calculation for three common phase calculation techniques: the four bucket/step method, three bucket/step method, and the Carré method. The noise characteristics of each of these techniques are discussed and compared using error parameters of a readily available CCD sensor array. Additionally, experimental analysis is conducted on interferograms to investigate the influence of speckle noise on the phase measurements of the three algorithms discussed.
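Of the three algorithms compared, the four bucket/step method is the simplest: with phase steps of pi/2, tan(phi) = (I3 - I1) / (I0 - I2). A quick self-contained check on synthetic fringes (invented intensity parameters, with additive Gaussian noise standing in for the CCD error sources the paper analyzes):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
phi_true = 2 * np.pi * np.linspace(0, 1, n)   # true phase ramp
A, B = 100.0, 50.0                            # background and modulation

# Four intensity frames with pi/2 phase steps (four bucket/step method),
# plus a little additive detector noise.
I0, I1, I2, I3 = [A + B * np.cos(phi_true + k * np.pi / 2)
                  + 0.5 * rng.standard_normal(n) for k in range(4)]

# Four-bucket phase estimate: tan(phi) = (I3 - I1) / (I0 - I2),
# since I3 - I1 = 2B sin(phi) and I0 - I2 = 2B cos(phi).
phi_est = np.arctan2(I3 - I1, I0 - I2)

# Compare modulo 2*pi.
wrap = lambda a: np.angle(np.exp(1j * a))
err = np.abs(wrap(phi_est - phi_true))
print(err.max())
```

The paper's error analysis amounts to tracing how the detector noise in each I_k propagates through this arctangent (and its three-bucket and Carré counterparts) into phi_est.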
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error-estimation in database quality.
INTERVAL SAMPLING METHODS AND MEASUREMENT ERROR: A COMPUTER SIMULATION
Wirth, Oliver; Slaven, James; Taylor, Matthew A.
2015-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method’s inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. PMID:24127380
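The simulation logic is easy to reproduce. Here is a minimal version (one random binary event stream at 1 s resolution, 10 s intervals; all parameters invented for the sketch) comparing the three methods' duration estimates against the true value:

```python
import random

random.seed(4)
OBS = 3600            # observation period, s
INTERVAL = 10         # interval duration, s

# Build a random binary event stream: alternating off/on runs of random length.
stream, state, t = [], False, 0
while t < OBS:
    run = random.randint(5, 40)
    stream += [state] * run
    state = not state
    t += run
stream = stream[:OBS]
true_prop = sum(stream) / OBS          # true proportion of time "on"

intervals = [stream[i:i + INTERVAL] for i in range(0, OBS, INTERVAL)]
# Momentary time sampling: score the last instant of each interval.
mts = sum(iv[-1] for iv in intervals) / len(intervals)
# Partial-interval recording: score if the event occurred at all.
pir = sum(any(iv) for iv in intervals) / len(intervals)
# Whole-interval recording: score only if the event filled the interval.
wir = sum(all(iv) for iv in intervals) / len(intervals)

print(true_prop, mts, pir, wir)
```

This reproduces the methods' well-known inherent biases: partial-interval recording can only overestimate total duration, whole-interval recording can only underestimate it, and momentary time sampling is unbiased but noisier.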
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
Electromagnetic simulations for salinity index error estimation
NASA Astrophysics Data System (ADS)
Wilczek, Andrzej; Szypłowska, Agnieszka; Kafarski, Marcin; Nakonieczna, Anna; Skierucha, Wojciech
2017-01-01
Soil salinity index (SI) is a measure of salt concentration in soil water. The salinity index is calculated as a partial derivative of the soil bulk electrical conductivity (EC) with respect to the bulk dielectric permittivity (DP). The paper focused on the impact of the different sensitivity zones of the measured EC and DP on the accuracy of the salinity index determination. For this purpose, a set of finite difference time domain (FDTD) simulations was prepared. The simulations were carried out on the model of a reflectometric probe consisting of three parallel rods inserted into a modelled material of simulated DP and EC. The combinations of stratified distributions of DP and EC were tested. An experimental verification of the simulation results on selected cases was performed. The results showed that the electromagnetic simulations can provide useful data to improve the accuracy of the determination of soil SI.
Stress Wave Propagation in Larch Plantation Trees-Numerical Simulation
Fenglu Liu; Fang Jiang; Xiping Wang; Houjiang Zhang; Wenhua Yu
2015-01-01
In this paper, we attempted to simulate stress wave propagation in virtual tree trunks and construct two dimensional (2D) wave-front maps in the longitudinal-radial section of the trunk. A tree trunk was modeled as an orthotropic cylinder in which wood properties along the fiber and in each of the two perpendicular directions were different. We used the COMSOL...
Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.
Pinton, Gianmarco F
2017-03-01
Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking. This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or …
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
NASA Astrophysics Data System (ADS)
Sarkar, B.; Bhunia, C. T.; Maulik, U.
2012-06-01
Advanced encryption standard (AES) is a great research challenge. It has been developed to replace the data encryption standard (DES). AES suffers from a major limitation of the error propagation effect. To tackle this limitation, two methods are available. One is the redundancy-based technique and the other is the bit-based parity technique. The first has a significant advantage over the second in that it can correct any error on a definite term, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that would certainly speed up the process of reliable encryption and hence secured communication.
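The error propagation effect itself is easy to demonstrate. The sketch below uses a deliberately toy 16-byte block "cipher" (an invertible add/xor mixer, emphatically not AES) in CBC mode: flipping a single ciphertext bit garbles the whole corresponding plaintext block and exactly one bit of the next block, which is the behavior that motivates redundancy- or parity-based protection.

```python
# Toy block cipher: an invertible add/xor byte mixer with full diffusion
# after several rounds. It stands in for AES only to show the CBC-mode
# error propagation effect; it offers no real security.
BS, ROUNDS = 16, 16
KEY = bytes(range(16))

def enc_block(b, key=KEY):
    b = list(b)
    for _ in range(ROUNDS):
        for i in range(BS):
            b[i] = ((b[i] ^ b[(i + 1) % BS]) + key[i]) % 256
    return bytes(b)

def dec_block(b, key=KEY):
    b = list(b)
    for _ in range(ROUNDS):
        for i in reversed(range(BS)):
            b[i] = ((b[i] - key[i]) % 256) ^ b[(i + 1) % BS]
    return bytes(b)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(pt, iv):
    out, prev = [], iv
    for i in range(0, len(pt), BS):
        prev = enc_block(xor(pt[i:i + BS], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(ct, iv):
    out, prev = [], iv
    for i in range(0, len(ct), BS):
        blk = ct[i:i + BS]
        out.append(xor(dec_block(blk), prev))
        prev = blk
    return b"".join(out)

iv = bytes(16)
pt = bytes(64)                          # four all-zero plaintext blocks
ct = bytearray(cbc_encrypt(pt, iv))
ct[16] ^= 0x01                          # flip one bit in ciphertext block 1
garbled = cbc_decrypt(bytes(ct), iv)

# Nonzero bytes per decrypted block = corruption footprint of the bit flip.
blocks = [garbled[i:i + BS] for i in range(0, 64, BS)]
print([sum(b != 0 for b in blk) for blk in blocks])
```

Block 1 comes out almost entirely garbled, block 2 differs in exactly the flipped bit, and blocks 0 and 3 are untouched: a one-bit channel error costs a full block of plaintext, which is the propagation effect the redundancy-based technique is designed to repair.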
Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C
2014-03-01
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors: discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward …
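The computational account sketched in this abstract, a value estimate updated by reward prediction errors that shrink as predictions improve, fits in a few lines. This is a generic Rescorla-Wagner-style learner with invented parameters, not the authors' actual model:

```python
# Minimal prediction-error learner: V tracks the expected reward of a
# choice, and delta = r - V is the prediction error that drives learning.
import random

random.seed(5)
alpha = 0.2          # learning rate
p_win = 0.8          # the "good" option pays 1 with probability 0.8
V = 0.0
deltas = []
for trial in range(200):
    r = 1.0 if random.random() < p_win else 0.0
    delta = r - V                 # reward prediction error
    V += alpha * delta
    deltas.append(abs(delta))

early = sum(deltas[:10]) / 10     # mean |error| on the first trials
late = sum(deltas[-50:]) / 50     # mean |error| once learning has settled
print(V, early, late)
```

As in the ERP data, the prediction error at feedback diminishes with learning while the learned value V (the quantity available at choice presentation) grows toward the true reward rate.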
Propagation of radar rainfall uncertainty in urban flood simulations
NASA Astrophysics Data System (ADS)
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3] and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of the uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern-Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure to purely stochastic fields. …
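The core of such an ensemble generator, imposing a prescribed spatial correlation on purely stochastic fields via a Cholesky factor, can be sketched compactly. The exponential correlation model, the 10 x 10 pixel grid, and the 5 km correlation range below are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Grid of 10 x 10 radar pixels, 1 km spacing.
n = 10
xy = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

# Exponential spatial correlation with a 5 km range.
corr = np.exp(-dist / 5.0)
L = np.linalg.cholesky(corr + 1e-10 * np.eye(n * n))  # jitter for stability

# Each ensemble member: a correlated Gaussian perturbation field to be
# summed with the unperturbed radar rainfall field.
n_members = 2000
white = rng.standard_normal((n * n, n_members))
fields = (L @ white).reshape(n, n, n_members)

# Check: sample correlation between two pixels 3 km apart ~ exp(-3/5).
r = np.corrcoef(fields[0, 0, :], fields[0, 3, :])[0, 1]
print(r, np.exp(-3 / 5))
```

A full generator, like the REAL-style system described here, would additionally impose the temporal correlation structure and a marginal error distribution fitted to the gauge-radar residuals.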
Propagating errors in decay equations: Examples from the Re-Os isotopic system
NASA Astrophysics Data System (ADS)
Sambridge, Malcolm; Lambert, David D.
1997-07-01
Statistical evaluation of radiogenic isotope data commonly makes use of the isochron method to determine closure age and initial isotopic composition which can be related to the source region from which the rocks or minerals were derived. Isochron regression algorithms also yield estimates of uncertainties in age and initial isotopic composition. However, geochemists frequently require an estimate of uncertainties associated with the calculation of initial isotopic composition and model age for single samples. This is often the case with Re-Os isotopic data for small sample suites that may not be isochronous. Here we describe two methods of propagating errors associated with Re-Os isotopic measurements in order to estimate uncertainties associated with both of these geologically important parameters; however, these methods are equally applicable to other isotopic systems. The first result is a set of analytical formulae that provide error estimates on both variables, even for the most general case where all dependent variables contain error, and all pairs of variables are correlated. This numerical approach leads to equations that can be easily and efficiently evaluated. A second Monte Carlo procedure was initially implemented to check the accuracy of the analytical formulae, although in the cases tested here it has also proved to be efficient and may even be practical for routine use. The advantage of error analysis of this type is that we can assign a level of confidence and thus significance to calculated initial isotopic compositions and model ages, especially for Archean rocks.
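The Monte Carlo check of the analytical formulae can be reproduced in miniature. For a single-sample Re-Os model age t = ln(1 + (Os_m - Os_i)/Re_m) / lambda, first-order propagation and Monte Carlo sampling should agree closely when uncertainties are small. The isotopic ratios and uncertainties below are invented for illustration and the inputs are taken as uncorrelated (the paper's formulae also handle the correlated case); lambda = 1.666e-11 per year is the commonly used 187Re decay constant.

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 1.666e-11          # 187Re decay constant, 1/yr

# Assumed example data: measured 187Re/188Os, measured 187Os/188Os, and an
# assumed initial 187Os/188Os, each with a 1-sigma uncertainty.
Re, sRe = 400.0, 4.0
Os, sOs = 1.20, 0.012
Osi, sOsi = 0.40, 0.02

def model_age(re, os_m, os_i):
    return np.log(1.0 + (os_m - os_i) / re) / lam

t0 = model_age(Re, Os, Osi)

# Analytical first-order propagation (uncorrelated inputs).
g = (Os - Osi) / Re
dRe = -g / (lam * (1 + g) * Re)
dOs = 1.0 / (lam * (1 + g) * Re)
dOsi = -dOs
s_analytic = np.sqrt((dRe * sRe) ** 2 + (dOs * sOs) ** 2
                     + (dOsi * sOsi) ** 2)

# Monte Carlo propagation.
N = 200000
t_mc = model_age(rng.normal(Re, sRe, N), rng.normal(Os, sOs, N),
                 rng.normal(Osi, sOsi, N))
s_mc = t_mc.std()
print(t0 / 1e6, s_analytic / 1e6, s_mc / 1e6)   # age and 1-sigma in Myr
```

Because the model-age equation is nearly linear over these small uncertainties, the two error estimates agree to within a few percent, which is exactly the kind of cross-check the authors used to validate their analytical formulae.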
Computer simulation of action potential propagation in septated nerve fibers.
Barach, J P; Wikswo, J P
1987-02-01
The nonlinear, core-conductor model of action potential propagation down axisymmetric nerve fibers is adapted for an implicit, numerical simulation by computer solution of the differential equations. The calculation allows a septum to be inserted in the model fiber; the thin, passive septum is characterized by series resistance Rsz and shunt resistance Rss to the grounded bath. If Rsz is too large or Rss too small, the signal fails to propagate through the septum. Plots of the action potential profiles for various axial positions are obtained and show distortions due to the presence of the septum. A simple linear model, developed from these simulations, relates propagation delay through the septum and the preseptal risetime to Rsz and Rss. This model agrees with the simulations for a wide range of parameters and allows estimation of Rsz and Rss from measured propagation delays at the septum. Plots of the axial current as a function of both time and position demonstrate how the presence of the septum can cause prominent local reversals of the current. This result, not previously described, suggests that extracellular magnetic measurements of cellular action currents could be useful in the biophysical study of septated fibers.
The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre-and Post-Test Designs
ERIC Educational Resources Information Center
Gorard, Stephen
2013-01-01
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre-and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
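Error theory's central point in this context is that independent measurement errors add in quadrature, so a gain score (post-test minus pre-test) is noisier than either measurement alone: Var(post - pre) = Var(post) + Var(pre). A quick simulated check with invented numbers:

```python
import random

random.seed(8)
N = 100000
sigma = 5.0                      # measurement error SD of each test

# True score 50, true gain 3; each administration adds independent error,
# so the gain score should have variance sigma^2 + sigma^2.
gains = []
for _ in range(N):
    pre = 50 + random.gauss(0, sigma)
    post = 53 + random.gauss(0, sigma)
    gains.append(post - pre)

mean_gain = sum(gains) / N
var_gain = sum((g - mean_gain) ** 2 for g in gains) / (N - 1)
print(mean_gain, var_gain ** 0.5, (2 * sigma ** 2) ** 0.5)
```

The gain score is unbiased but its SD is sqrt(2) times the single-test error SD, which is one reason the paper contrasts pre-and-post designs with post-test-only designs.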
Simulation of laser beam propagation through the troposphere
NASA Astrophysics Data System (ADS)
Wang, Bao-feng; Luo, Xiu-juan; Zhang, Yu; Zeng, Zhi-hong; Wang, Feng
2013-09-01
Understanding and predicting laser beam propagation effects in the atmosphere is important for laser applications. Turbulence effects cause beam wander, beam broadening, and intensity scintillations, which reduce the power in the bucket, the tracking accuracy, etc. In this work, phase screens are used to model atmospheric turbulence in the model of laser propagation through the troposphere, and a layered model is used in accordance with the characteristics of the troposphere. Between phase screens, laser propagation follows the Huygens-Fresnel principle. Simulations with different numbers of grid points were constructed, and numerical experiments were conducted. According to the simulated results, including Strehl ratio, sharpness, and amplitude distribution, the preceding phase screens affect the total energy at the receiving surface but have little impact on the amplitude distribution, whereas the phase screens close to the receiving surface have a significant impact on both the amplitude distribution and the total received energy. The results suggest that in simulation one should use as many grid points as possible and pay particular attention to the parameters of the phase screens near the receiving surface.
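The split-step scheme described here, angular-spectrum (Huygens-Fresnel) propagation between thin random phase screens, can be sketched compactly. The screens below are toy smoothed white noise rather than proper Kolmogorov screens, and the grid and layer parameters are invented; a useful sanity check is that unit-modulus screens and transfer functions conserve total power exactly.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 256                  # grid points per side
dx = 1e-3                # 1 mm sampling
wl = 1.064e-6            # wavelength, m
dz = 200.0               # distance between phase screens, m

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
u = np.exp(-(X ** 2 + Y ** 2) / (2 * (20 * dx) ** 2))   # Gaussian beam

fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wl * dz * (FX ** 2 + FY ** 2))  # Fresnel transfer fn

def screen():
    """Toy random phase screen: spectrally smoothed noise (not Kolmogorov)."""
    spec = np.fft.fft2(rng.standard_normal((n, n)))
    spec *= np.exp(-(FX ** 2 + FY ** 2) / (2 * 200.0 ** 2))
    return np.exp(1j * np.fft.ifft2(spec).real)

p0 = (np.abs(u) ** 2).sum()
for _ in range(5):                        # five tropospheric layers
    u = np.fft.ifft2(np.fft.fft2(u) * H)  # vacuum step between screens
    u = u * screen()                      # thin-screen turbulence kick
p1 = (np.abs(u) ** 2).sum()
print(p0, p1)
```

Because |H| = 1 and each screen has unit modulus, the total power at the receiving surface matches the input to machine precision; in a realistic simulation the screens' spatial statistics (and the grid resolution near the receiver, as the paper stresses) determine the scintillation and beam-spreading behavior.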
Characteristics and dependencies of error in satellite-based flood event simulations
NASA Astrophysics Data System (ADS)
Mei, Yiwen; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Zoccatelli, Davide; Borga, Marco
2016-04-01
The error in satellite precipitation driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied on the matched event pairs and basin scale event properties (i.e. rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, timing of the event precipitation mass center and dispersion of the time series derived from satellite-precipitation exhibits good agreement with reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in shape of time series shows significant dampening effect. The random error dampening effect is less pronounced for the flash flood events, and the rain flood events with high runoff coefficient. This event-based analysis of the satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.
Simulation of error in optical radar range measurements.
Der, S; Redman, B; Chellappa, R
1997-09-20
We describe a computer simulation of atmospheric and target effects on the accuracy of range measurements using pulsed laser radars with p-i-n or avalanche photodiodes for direct detection. The computer simulation produces simulated images as a function of a wide variety of atmospheric, target, and sensor parameters for laser radars with range accuracies smaller than the pulse width. The simulation allows arbitrary target geometries and simulates speckle, turbulence, and near-field and far-field effects. We compare simulation results with actual range error data collected in field tests.
Wavefront error simulator for evaluating optical testing instrumentation
NASA Technical Reports Server (NTRS)
Golden, L. J.
1975-01-01
A wavefront error simulator has been designed and fabricated to evaluate experimentally test instrumentation for the Large Space Telescope (LST) program. The principal operating part of the simulator is an aberration generator that introduces low-order aberrations of several waves magnitude with an incremented adjustment capability of lambda/100. Each aberration type can be introduced independently with any desired spatial orientation.
Propagation of radiation in fluctuating multiscale plasmas. II. Kinetic simulations
Pal Singh, Kunwar; Robinson, P. A.; Cairns, Iver H.; Tyshetskiy, Yu.
2012-11-15
A numerical algorithm is developed and tested that implements the kinetic treatment of electromagnetic radiation propagating through plasmas whose properties have small scale fluctuations, which was developed in a companion paper. This method incorporates the effects of refraction, damping, mode structure, and other aspects of large-scale propagation of electromagnetic waves on the distribution function of quanta in position and wave vector, with small-scale effects of nonuniformities, including scattering and mode conversion approximated as causing drift and diffusion in wave vector. Numerical solution of the kinetic equation yields the distribution function of radiation quanta in space, time, and wave vector. Simulations verify the convergence, accuracy, and speed of the methods used to treat each term in the equation. The simulations also illustrate the main physical effects and place the results in a form that can be used in future applications.
Generalized phase-shifting algorithms: error analysis and minimization of noise propagation.
Ayubi, Gastón A; Perciante, César D; Di Martino, J Matías; Flores, Jorge L; Ferrari, José A
2016-02-20
Phase shifting is a technique for phase retrieval that requires a series of intensity measurements with certain phase steps. The purpose of the present work is threefold: first we present a new method for generating general phase-shifting algorithms with arbitrarily spaced phase steps. Second, we study the conditions for which the phase-retrieval error due to phase-shift miscalibration can be minimized. Third, we study the phase extraction from interferograms with additive random noise, and deduce the conditions to be satisfied for minimizing the phase-retrieval error. Algorithms with unevenly spaced phase steps are discussed under linear phase-shift errors and additive Gaussian noise, and simulations are presented.
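A generalized phase-shifting algorithm with arbitrarily spaced steps can be posed as a linear least-squares problem. The sketch below is not the authors' algorithm; it is a minimal illustration (with made-up step values and intensities) of recovering the phase from samples of I_k = A + B·cos(φ + δ_k):

```python
import math

def extract_phase(intensities, steps):
    """Least-squares phase retrieval from I_k = A + B*cos(phi + delta_k)
    with arbitrary (known) phase steps delta_k. The model is rewritten as
    I_k = x0 + x1*cos(delta_k) - x2*sin(delta_k), where x1 = B*cos(phi)
    and x2 = B*sin(phi), and the 3x3 normal equations are solved."""
    rows = [[1.0, math.cos(d), -math.sin(d)] for d in steps]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * I for r, I in zip(rows, intensities)) for i in range(3)]
    x = solve3(ata, atb)
    return math.atan2(x[2], x[1])

def solve3(a, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

# Unevenly spaced phase steps (radians) -- illustrative values only
steps = [0.0, 0.9, 2.1, 3.3, 4.8]
phi_true = 1.2
data = [10.0 + 4.0 * math.cos(phi_true + d) for d in steps]
print(extract_phase(data, steps))  # close to 1.2
```

With noise-free data the least-squares solution recovers the phase exactly; adding Gaussian noise to `data` would let one study the noise-propagation behavior the abstract analyzes.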
Mitigating Particle Integration Error in Relativistic Laser-Plasma Simulations
NASA Astrophysics Data System (ADS)
Higuera, Adam; Weichmann, Kathleen; Cowan, Benjamin; Cary, John
2016-10-01
In particle-in-cell simulations of laser wakefield accelerators with a0 greater than unity, errors in particle trajectories produce incorrect beam charges and energies, predicting performance not realized in experiments such as the Texas Petawatt Laser. In order to avoid these errors, the simulation time step must resolve a time scale smaller than the laser period by a factor of a0. If the Yee scheme advances the fields with this time step, the laser wavelength must be over-resolved by a factor of a0 to avoid dispersion errors. Here we present, and demonstrate with Vorpal simulations, a new electromagnetic algorithm, building on previous work, that corrects Yee dispersion for arbitrary sub-CFL time steps and thereby reduces simulation times by a factor of a0.
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and improves both throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated above by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to those of the conventional advanced encryption standard (AES).
Propagating and incorporating the error in anisotropy-based inclination corrections
NASA Astrophysics Data System (ADS)
Bilardello, Dario; Jezek, Josef; Kodama, Kenneth P.
2011-10-01
Sedimentary rock palaeomagnetic inclinations that are too shallow with respect to the ambient field inclination may be restored using anisotropy-based inclination corrections or techniques that rely on models of the past geomagnetic field. One advantage of the anisotropy technique is that it relies on measured parameters (declinations, inclinations, bulk rock magnetic fabrics and particle magnetic anisotropy) that have measurement errors associated with them, rather than relying on a geomagnetic field model and statistical treatment of the data. So far, however, the error associated with the measurements has not been propagated through the corrections and the reported uncertainties are simply the α95 (95 per cent confidence) circles of the corrected directions. In this paper we outline different methodologies of propagating the error using bootstrap statistics and analytic approximations, using the case example of the Shepody Formation inclination correction. Both techniques are in good agreement and indicate a moderate, ˜15 per cent, uncertainty in the determination of the flattening factor (f) used in the correction. Such uncertainty corresponds to an ˜0.31° increase of the confidence cone and a bias that steepens the mean inclination by 0.32°. For other haematite-bearing formations, realistic uncertainties for f ranging from 0 to 30 per cent were used (together with an intermediate value of 15 per cent), yielding a maximum expected increase in the confidence cones and steepening of the inclinations of ˜1°. Such results indicate that for moderate errors of f the inclination correction itself does not substantially alter the uncertainty of a typical palaeomagnetic study. We also compare the uncertainties resulting from anisotropy-based corrections to those resulting from the elongation/inclination (E/I) technique. Uncertainties are comparable for studies with a large sample number (>100); otherwise, the anisotropy-based technique gives smaller uncertainties.
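The propagation of flattening-factor uncertainty can be illustrated with a small Monte Carlo sketch based on the standard tan(I_obs) = f·tan(I_field) flattening model. The observed inclination, the value of f and its 15 per cent uncertainty below are illustrative assumptions, not the Shepody Formation data:

```python
import math, random, statistics

def correct_inclination(i_obs_deg, f):
    """Unflatten a shallow inclination: tan(I_obs) = f * tan(I_field)."""
    return math.degrees(math.atan(math.tan(math.radians(i_obs_deg)) / f))

def mc_corrected(i_obs_deg, f_mean, f_rel_err, n=20000, seed=1):
    """Propagate a relative uncertainty in the flattening factor f into the
    corrected inclination by simple Monte Carlo sampling."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        f = rng.gauss(f_mean, f_rel_err * f_mean)
        if 0.05 < f <= 1.0:        # f is physically confined to (0, 1]
            out.append(correct_inclination(i_obs_deg, f))
    return statistics.mean(out), statistics.stdev(out)

# Illustrative numbers: I_obs = 30 deg, f = 0.6 with 15% relative uncertainty
mean_i, sd_i = mc_corrected(30.0, 0.6, 0.15)
print(mean_i, sd_i)
```

Because the correction is nonlinear in f, the Monte Carlo mean sits slightly steeper than the deterministic correction atan(tan 30°/0.6) ≈ 43.9°, echoing the kind of steepening bias the abstract reports.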
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project, which quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios, to assess the major factors governing uncertainty in global and regional trace gas distributions.
Efficient Variational Quantum Simulator Incorporating Active Error Minimization
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2017-04-01
One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
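The error-boosting idea can be sketched in a few lines: measure the same expectation value at artificially boosted error rates and extrapolate linearly to the zero-error limit. The toy bias model below is an assumption for illustration, not the authors' quantum simulation:

```python
def noisy_expectation(c, a_true=0.75, eps=0.08):
    """Toy model: the measured expectation value is biased by an amount that
    grows linearly with the artificially boosted error rate c*eps."""
    return a_true * (1.0 - c * eps)

# Measure at two boost factors and extrapolate linearly to c = 0
c1, c2 = 1.0, 2.0
e1, e2 = noisy_expectation(c1), noisy_expectation(c2)
a_extrapolated = e1 + (e1 - e2) * c1 / (c2 - c1)   # Richardson extrapolation
print(e1, a_extrapolated)  # raw biased value vs. mitigated value
```

When the bias really is linear in the error rate, the extrapolation recovers the noiseless value exactly; in practice higher-order terms and shot noise limit the mitigation, which is why the boost factors must be chosen with care.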
Abundance recovery error analysis using simulated AVIRIS data
NASA Technical Reports Server (NTRS)
Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.
1992-01-01
Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed-form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario which is modeled as being derived from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data.
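The contrast between the two approaches can be sketched for a toy linear mixing model r = E·a + n: covariance analysis gives cov(â) = σ²(EᵀE)⁻¹ in closed form, which a Monte Carlo loop should reproduce at far greater cost. The two-endmember spectra and noise level below are invented for illustration, not AVIRIS values:

```python
import random, statistics

# Two-endmember linear mixing over 4 bands (toy spectra)
E = [[0.2, 0.8], [0.4, 0.6], [0.7, 0.3], [0.9, 0.1]]   # bands x endmembers
a_true = [0.35, 0.65]
sigma = 0.01                                            # measurement noise std

def lstsq2(E, r):
    """Unweighted least squares for 2 unknowns via the 2x2 normal equations;
    returns the estimate and (E^T E)^(-1)."""
    g = [[sum(E[k][i] * E[k][j] for k in range(len(E))) for j in range(2)]
         for i in range(2)]
    h = [sum(E[k][i] * r[k] for k in range(len(E))) for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    inv = [[g[1][1] / det, -g[0][1] / det], [-g[1][0] / det, g[0][0] / det]]
    return [inv[i][0] * h[0] + inv[i][1] * h[1] for i in range(2)], inv

# Closed form (covariance analysis): cov(a_hat) = sigma^2 (E^T E)^(-1)
_, inv = lstsq2(E, [0.0] * 4)
closed_form_std = (sigma * sigma * inv[0][0]) ** 0.5

# Monte Carlo: repeat noisy retrievals and take the sample std
rng = random.Random(0)
clean = [sum(E[k][j] * a_true[j] for j in range(2)) for k in range(4)]
draws = [lstsq2(E, [c + rng.gauss(0, sigma) for c in clean])[0][0]
         for _ in range(5000)]
mc_std = statistics.stdev(draws)
print(closed_form_std, mc_std)  # the two estimates agree closely
```

The closed-form number comes from a single matrix inversion, while the Monte Carlo estimate needs thousands of retrievals, which is exactly the computational trade-off the abstract describes.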
Propagation and interaction of interplanetary transient disturbances. Numerical simulations
NASA Astrophysics Data System (ADS)
González-Esparza, J. Américo; Jeyakumar, S.
We study the heliocentric evolution of ICME-like disturbances and their associated transient forward shocks (TFSs) propagating in the interplanetary (IP) medium, comparing the solutions of hydrodynamic (HD) and magnetohydrodynamic (MHD) models using the ZEUS-3D code [Stone, J.M., Norman, M.L., 1992. Zeus-2d: a radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - the hydrodynamic algorithms and tests. Astrophysical Journal Supplement Series 80, 753-790]. The simulations show that when a fast ICME and its associated IP shock propagate in the inner heliosphere, they have an initial phase of quasi-constant propagation speed (small deceleration) followed, after a critical distance (deflection point), by an exponential deceleration. By combining white-light coronagraph and interplanetary scintillation (IPS) measurements of ICMEs propagating within 1 AU [Manoharan, P.K., 2005. Evolution of coronal mass ejections in the inner heliosphere: a study using white-light and scintillation images. Solar Physics 235 (1-2), 345-368], such a critical distance and deceleration have already been inferred observationally. In addition, we also address the interaction between two ICME-like disturbances: a fast ICME 2 overtaking a previously launched slower ICME 1. After the interaction, the leading ICME 1 accelerates and the trailing ICME 2 decelerates, and both ICMEs tend to arrive at 1 AU with similar speeds. The 2-D HD and MHD models show similar qualitative results for the evolution and interaction of these disturbances in the IP medium.
Disentangling timing and amplitude errors in streamflow simulations
NASA Astrophysics Data System (ADS)
Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin
2016-09-01
This article introduces an improvement in the Series Distance (SD) approach for the improved discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs, a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart, and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude of errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to
Temperature measurement error simulation of the pure rotational Raman lidar
NASA Astrophysics Data System (ADS)
Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang
2015-11-01
Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes. Lidar has some advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we have simulated the temperature measurement errors of a double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of the atmospheric transmittance and the range in the lidar equation, we simulated the temperature measurement errors that are influenced by the beam-splitting system parameters, such as the center wavelength, the receiving bandwidth and the atmospheric temperature. We analyzed three types of temperature measurement errors in theory and proposed several design methods for the beam-splitting system to reduce the temperature measurement errors. Secondly, we simulated the temperature measurement error profiles using the lidar equation. Since the lidar power-aperture product is fixed, the main target of our lidar system is to reduce the statistical and leakage errors.
Numerical Simulation of the Detonation Propagation in Silicon Carbide Shell
NASA Astrophysics Data System (ADS)
Balagansky, Igor; Terechov, Anton
2013-06-01
In recent years it has been shown experimentally that in condensed high explosive (HE) charges placed in a silicon carbide shell whose sound velocity is greater than the detonation velocity in the HE, interesting phenomena may be observed. Depending on the conditions, either an increase or a decrease of the detonation velocity and pressure at the detonation front can be observed, as well as distortion of the detonation front up to the formation of a concave front. For a detailed explanation of the physical nature of the phenomenon we performed numerical simulation of detonation wave propagation in a Composition B HE charge placed in a silicon carbide shell. Modeling was performed with Ansys Autodyn in 2D axial symmetry on an Eulerian mesh. Special attention was paid to the selection of parameter values in the Lee-Tarver kinetic equation for the HE and to the choice of constants describing the behavior of the ceramic. For comparison, we also modeled the propagation of detonation in a completely similar assembly with a brass shell. The simulation results agree well with the experimental data; in particular, distortion of the detonation front was observed in the silicon carbide shell. A characteristic feature of the process is the pressure waves propagating in the direction of the axis of symmetry on the back surface of the detonation front.
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
environmental control. We simulate the performance of the stitching algorithm dealing with surface error and misalignment of the ACF, and noise suppression, which provides guidelines for the optomechanical design of the stitching test system.
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Measurements and large eddy simulation of propagating premixed flames
Masri, A.R.; Cadwallader, B.J.; Ibrahim, S.S.
2006-07-15
This paper presents an experimental and numerical study of unsteady turbulent premixed flames igniting in an initially stagnant mixture and propagating past solid obstacles. The objective here is to study the outstanding issue of flow-flame interactions in transient premixed combustion environments. Particular emphasis is placed on the burning rate and the structure of the flame front. The experimental configuration consists of a chamber with a square cross-section filled with a combustible mixture of propane-air ignited from rest. An array of baffle plates, as well as geometrical obstructions of varying shapes and blockage ratios, is placed in the path of the flame as it propagates from the ignition source to the vented end of the enclosure. A range of flame propagation conditions is studied experimentally. Measurements are presented for pressure-time traces, high-speed images of the flame front, mean velocities obtained from particle imaging velocimetry, and laser-induced fluorescence images of the hydroxyl radical OH. Three-dimensional large eddy simulations (LES) are also made for a case where a square obstacle and an array of baffle plates are placed in the chamber. The dynamic Germano model and a simple flamelet combustion model are used at the sub-grid scale. The effects of grid size and sub-grid filter width are also discussed. Calculations and measurements are found to be in good agreement with respect to flame structure and peak overpressure. Turbulence levels increase significantly at the leading edge of the flame as it propagates past the array of baffle plates and the obstacle. With reference to the regime diagrams for turbulent premixed combustion, it is noted that the flame continues to lie in the zones of thin reactions or corrugated flamelets regardless of the stage of propagation along the chamber. (author)
Hybrid simulation of wave propagation in the Io plasma torus
NASA Astrophysics Data System (ADS)
Stauffer, B. H.; Delamere, P. A.; Damiano, P. A.
2015-12-01
The transmission of waves between Jupiter and Io is an excellent case study of magnetosphere/ionosphere (MI) coupling because the power generated by the interaction at Io and the auroral power emitted at Jupiter can be reasonably estimated. Wave formation begins with mass loading as Io passes through the plasma torus. A ring beam distribution of pickup ions and perturbation of the local flow by the conducting satellite generate electromagnetic ion cyclotron waves and Alfven waves. We investigate wave propagation through the torus and to higher latitudes using a hybrid plasma simulation with a physically realistic density gradient, assessing the transmission of Poynting flux and wave dispersion. We also analyze the propagation of kinetic Alfven waves through a density gradient in two dimensions.
Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances
NASA Astrophysics Data System (ADS)
Vermeesch, P.
2015-12-01
Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method: 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J [40Ar/39Ar - 298.56 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ2(t) = [J2 σ2(R) + R2 σ2(J)] / [λ2 (1 + R J)2], which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single and multi collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r2 of up to 0.9) between age measurements within a single irradiation batch. Properly taking
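The first-order propagation formula for t = log(1 + JR)/λ, extended with the R-J covariance term the abstract argues is usually ignored, can be cross-checked against a Monte Carlo draw. The R, J and uncertainty values below are illustrative, not data from a real irradiation (the covariance is set to zero here so the analytic and sampled results can be compared directly):

```python
import math, random, statistics

LAMBDA = 5.543e-10   # total 40K decay constant in 1/yr (a commonly used value)

def age(R, J):
    """t = ln(1 + J*R) / lambda"""
    return math.log(1.0 + J * R) / LAMBDA

def sigma_age(R, J, sR, sJ, cov_RJ=0.0):
    """First-order propagation for t = ln(1 + J*R)/lambda, including the R-J
    covariance term that conventional protocols set to zero."""
    dtdR = J / (LAMBDA * (1.0 + J * R))
    dtdJ = R / (LAMBDA * (1.0 + J * R))
    return math.sqrt(dtdR**2 * sR**2 + dtdJ**2 * sJ**2
                     + 2.0 * dtdR * dtdJ * cov_RJ)

# Illustrative inputs: R = 20 +/- 0.1, J = 0.01 +/- 1e-4, uncorrelated
R, J, sR, sJ = 20.0, 0.01, 0.1, 1e-4
analytic = sigma_age(R, J, sR, sJ)

# Monte Carlo cross-check with uncorrelated R and J (cov = 0)
rng = random.Random(0)
mc_std_age = statistics.stdev(age(rng.gauss(R, sR), rng.gauss(J, sJ))
                              for _ in range(20000))
print(analytic, mc_std_age)  # should agree to within a few per cent
```

Passing a nonzero `cov_RJ` shows how a positive R-J correlation inflates (and a negative one deflates) the age uncertainty relative to the conventional zero-covariance formula.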
Goudar, Chetan T; Biener, Richard; Konstantinov, Konstantin B; Piret, James M
2009-01-01
Error propagation from prime variables into specific rates and metabolic fluxes was quantified for high-concentration CHO cell perfusion cultivation. Prime variable errors were first determined from repeated measurements and ranged from 4.8 to 12.2%. Errors in nutrient uptake and metabolite/product formation rates for 5-15% error in prime variables ranged from 8-22%. The specific growth rate, however, was characterized by higher uncertainty as 15% errors in the bioreactor and harvest cell concentration resulted in 37.8% error. Metabolic fluxes were estimated for 12 experimental conditions, each of 10 day duration, during 120-day perfusion cultivation and were used to determine error propagation from specific rates into metabolic fluxes. Errors of the greater metabolic fluxes (those related to glycolysis, lactate production, TCA cycle and oxidative phosphorylation) were similar in magnitude to those of the related greater specific rates (glucose, lactate, oxygen and CO(2) rates) and were insensitive to errors of the lesser specific rates (amino acid catabolism and biosynthesis rates). Errors of the lesser metabolic fluxes (those related to amino acid metabolism), however, were extremely sensitive to errors of the greater specific rates to the extent that they were no longer representative of cellular metabolism and were much less affected by errors in the lesser specific rates. We show that the relationship between specific rate and metabolic flux error could be accurately described by normalized sensitivity coefficients, which were readily calculated once metabolic fluxes were estimated. Their ease of calculation, along with their ability to accurately describe the specific rate-metabolic flux error relationship, makes them a necessary component of metabolic flux analysis. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
Visual field test simulation and error in threshold estimation.
Spenceley, S E; Henson, D B
1996-01-01
AIM: To establish, via computer simulation, the effects of patient response variability and staircase starting level upon the accuracy and repeatability of static full threshold visual field tests. METHOD: Patient response variability, defined by the standard deviation of the frequency of seeing versus stimulus intensity curve, is varied from 0.5 to 20 dB (in steps of 0.5 dB) with staircase starting levels ranging from 30 dB below to 30 dB above the patient's threshold (in steps of 10 dB). Fifty two threshold estimates are derived for each condition and the error of each estimate calculated (difference between the true threshold and the threshold estimate derived from the staircase procedure). The mean and standard deviation of the errors are then determined for each condition. The results from a simulated quadrantic defect (response variability set to typical values for a patient with glaucoma) are presented using two different algorithms. The first corresponds with that normally used when performing a full threshold examination while the second uses results from an earlier simulated full threshold examination for the staircase starting values. RESULTS: The mean error in threshold estimates was found to be biased towards the staircase starting level. The extent of the bias was dependent upon patient response variability. The standard deviation of the error increased both with response variability and staircase starting level. With the routinely used full threshold strategy the quadrantic defect was found to have a large mean error in estimated threshold values and an increase in the standard deviation of the error along the edge of the defect. When results from an earlier full threshold test are used as staircase starting values this error and increased standard deviation largely disappeared. CONCLUSION: The staircase procedure widely used in threshold perimetry increased the error and the variability of threshold estimates along the edges of defects. Using
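The starting-level bias described above is easy to reproduce in simulation. The sketch below assumes a cumulative-Gaussian frequency-of-seeing curve and a simplified 4-2 dB full-threshold staircase with a "last seen" estimator; the step rules and termination criterion are simplifications, not the exact perimetric algorithm used in the study:

```python
import math, random, statistics

def p_seen(level_db, threshold_db, variability_db):
    """Frequency-of-seeing curve: a cumulative Gaussian centred on the true
    threshold, with spread set by the patient's response variability (dB)."""
    z = (threshold_db - level_db) / (variability_db * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

def staircase(threshold, variability, start, rng):
    """Simplified 4-2 dB staircase: 4 dB steps until the first reversal,
    2 dB afterwards; the estimate is the last stimulus level that was seen."""
    level, step, reversals, prev, last_seen = start, 4.0, 0, None, start
    while reversals < 2:
        seen = rng.random() < p_seen(level, threshold, variability)
        if seen:
            last_seen = level
        if prev is not None and seen != prev:
            reversals += 1
            step = 2.0
        prev = seen
        level += step if seen else -step   # seen -> dimmer, missed -> brighter
    return last_seen

rng = random.Random(42)
true_threshold, variability = 25.0, 2.0
biases = {}
for start in (5.0, 25.0, 45.0):   # starting levels below, at and above threshold
    errs = [staircase(true_threshold, variability, start, rng) - true_threshold
            for _ in range(2000)]
    biases[start] = statistics.mean(errs)
    print(start, biases[start])
```

Running the loop shows the mean threshold error shifting with the staircase starting level, the bias the simulation study quantifies.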
Li, Hui; Fu, Zhida; Liu, Liying; Lin, Zhili; Deng, Wei; Feng, Lishuang
2017-01-03
An improved temperature-insensitive optical voltage sensor (OVS) with a reciprocal dual-crystal sensing method is proposed. The inducing principle of OVS reciprocity degradation is expounded by taking the different temperature fields of two crystals and the axis-errors of optical components into consideration. The key parameters pertaining to the system reciprocity degeneration in the dual-crystal sensing unit are investigated in order to optimize the optical sensing model based on the Maxwell's electromagnetic theory. The influencing principle of axis-angle errors on the system nonlinearity in the Pockels phase transfer unit is analyzed. Moreover, a novel axis-angle compensation method is proposed to improve the OVS measurement precision according to the simulation results. The experiment results show that the measurement precision of OVS is superior to ±0.2% in the temperature range from -40 °C to +60 °C, which demonstrates the excellent temperature stability of the designed voltage sensing system.
Simulation-Based Learning Environment for Assisting Error-Correction
NASA Astrophysics Data System (ADS)
Horiguchi, Tomoya; Hirashima, Tsukasa
In simulation-based learning environments, 'unexpected' phenomena often work as counterexamples which prompt a learner to reconsider the problem. It is important that counterexamples contain sufficient information which leads a learner to a correct understanding. This paper proposes a method for creating such counterexamples. Error-Based Simulation (EBS) is used for this purpose, which simulates the erroneous motion in mechanics based on a learner's erroneous equation. Our framework is as follows: (1) to identify the cause of errors by comparing a learner's answer with the problem-solver's correct one; (2) to visualize the cause of errors by the unnatural motions in EBS. To perform (1), misconceptions are classified based on a problem-solving model and related to their appearance in a learner's answers (error-identification rules). To perform (2), objects' motions in EBS are classified and related to the misconceptions they suggest (error-visualization rules). A prototype system is implemented and evaluated through a preliminary test to confirm the usefulness of the framework.
Statistical error in particle simulations of low Mach number flows
Hadjiconstantinou, N G; Garcia, A L
2000-11-13
We present predictions for the statistical error due to finite sampling in the presence of thermal fluctuations in molecular simulation algorithms. The expressions are derived using equilibrium statistical mechanics. The results show that the number of samples needed to adequately resolve the flowfield scales as the inverse square of the Mach number. Agreement of the theory with direct Monte Carlo simulations shows that the use of equilibrium theory is justified.
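The inverse-square Mach number scaling follows from the fact that thermal fluctuations of magnitude ~c (the sound speed) must be averaged down to a fixed fraction of the mean flow speed u = Ma·c. A minimal sketch with illustrative numbers, not the paper's derivation:

```python
import random, statistics

def relative_error_of_mean(mach, n_samples, rng, c=340.0):
    """Estimate the mean flow speed u = mach*c from n_samples particle
    velocities carrying thermal fluctuations of magnitude ~c, and return
    the relative error of that estimate."""
    u = mach * c
    est = statistics.fmean(rng.gauss(u, c) for _ in range(n_samples))
    return abs(est - u) / u

# The std of the sample mean is c/sqrt(N), so a fixed relative error eps
# requires N = (c/(eps*u))^2 = (1/(eps*mach))^2, i.e. N ~ 1/mach^2.
eps = 0.05
results = {}
for mach in (0.2, 0.1):
    results[mach] = round((1.0 / (eps * mach)) ** 2)
    print(mach, results[mach])
# halving the Mach number quadruples the required sample count

rng = random.Random(3)
rel = relative_error_of_mean(0.1, results[0.1], rng)
print(rel)  # typically of order eps
```

This is why low Mach number (nearly incompressible) flows are so expensive to resolve with particle methods: the signal of interest shrinks relative to the thermal noise as Ma decreases.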
Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation
NASA Astrophysics Data System (ADS)
Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla
2014-07-01
Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method that was programmed by means of a MATLAB code, called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society of Rock Mechanics (ISRM) scanline field mapping methodology. Then, it evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass using a complex recursive method for evaluating the transmission and reflection coefficient for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed characterizing the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in estimating rapidly the Young's modulus of equivalent medium for wave propagation analysis.
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
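The Latin Hypercube approach to propagating input error can be sketched in pure Python using the standard library's NormalDist.inv_cdf. The "raster", coefficient and error magnitudes below are invented toy values, not REPTool's implementation:

```python
import random
from statistics import NormalDist, mean, stdev

def latin_hypercube_normal(n, mu, sigma, rng):
    """Draw n values from N(mu, sigma) by Latin Hypercube Sampling: one
    uniform draw per equal-probability stratum, shuffled, then mapped
    through the inverse normal CDF."""
    u = [(k + rng.random()) / n for k in range(n)]
    rng.shuffle(u)
    return [NormalDist(mu, sigma).inv_cdf(p) for p in u]

# Toy "raster" model: recharge = coef * precip, with spatially invariant
# errors on both the input raster and the model coefficient (made-up numbers)
rng = random.Random(7)
n = 200
precip = [900.0, 650.0, 1200.0]                          # three cells (mm/yr)
coef_samples = latin_hypercube_normal(n, 0.15, 0.02, rng)
err_samples = latin_hypercube_normal(n, 0.0, 25.0, rng)  # additive precip error

cell_stats = []
for cell, p in enumerate(precip):
    out = [c * (p + e) for c, e in zip(coef_samples, err_samples)]
    cell_stats.append((mean(out), stdev(out)))
    print(cell, round(cell_stats[-1][0], 1), round(cell_stats[-1][1], 1))
```

Because each stratum of the input distribution is sampled exactly once, Latin Hypercube runs converge on the output uncertainty with far fewer model evaluations than naive Monte Carlo, which matters when each "evaluation" is a full raster geoprocessing operation.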
A framework for simulating map error in ecosystem models
Sean P. Healey; Shawn P. Urbanski; Paul L. Patterson; Chris Garrard
2014-01-01
The temporal depth and spatial breadth of observations from platforms such as Landsat provide unique perspective on ecosystem dynamics, but the integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential map errors in broader...
Communication Systems Simulator with Error Correcting Codes Using MATLAB
ERIC Educational Resources Information Center
Gomez, C.; Gonzalez, J. E.; Pardo, J. M.
2003-01-01
In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…
On the accurate simulation of tsunami wave propagation
NASA Astrophysics Data System (ADS)
Castro, C. E.; Käser, M.; Toro, E. F.
2009-04-01
A very important part of any tsunami early warning system is the numerical simulation of the wave propagation in the open sea and close to geometrically complex coastlines, respecting bathymetric variations. Here we are interested in improving the numerical tools available to accurately simulate tsunami wave propagation on a Mediterranean basin scale. To this end, we need to meet several targets: high-order numerical simulation in space and time, preservation of steady-state conditions to avoid spurious oscillations, and the description of complex geometries due to bathymetry and coastlines. We use the Arbitrary accuracy DERivatives Riemann problem method together with the Finite Volume method (ADER-FV) over non-structured triangular meshes. The novelty of this work is the improvement of the ADER-FV scheme by introducing the well-balanced property when geometrical source terms are considered, for unstructured meshes and arbitrarily high order of accuracy. In a previous work by Castro and Toro [1], the authors mention that ADER-FV schemes approach the well-balanced condition asymptotically, which was true for the test case considered in [1]. However, new evidence [2] shows that for real-scale problems such as the Mediterranean basin, with realistic bathymetry such as ETOPO-2 [3], this asymptotic behavior is not enough. Under these realistic conditions the standard ADER-FV scheme fails to accurately describe the propagation of gravity waves without being contaminated by spurious oscillations, also known as numerical waves. The main problem is that at the discrete level, i.e. from a numerical point of view, the scheme does not correctly balance the influence of the fluxes and the sources. Numerical schemes that retain this balance are said to satisfy the well-balanced property or the exact C-property. This imbalance diminishes as we refine the spatial discretization or increase the order of the numerical method; however, the computational cost increases considerably this way.
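The exact C-property can be illustrated on a centred one-dimensional discretization of the steady "lake at rest" momentum balance. This is only a sketch of the flux-source balance idea, not the ADER-FV operator; bathymetry values are illustrative.

```python
import math

def momentum_residual(b, H=2.0, dx=0.1, g=9.81, balanced=True):
    """Lake at rest: depth h = H - b, zero velocity. Returns the maximum
    interior-node residual |d/dx(g h^2/2) + g h db/dx| for two centred
    discretizations of the flux term. The 'balanced' form rewrites
    g/2 d(h^2)/dx as g*h_i*dh/dx so it cancels the source term exactly;
    the naive form leaves an O(dx^2) imbalance that acts as a spurious
    numerical wave source."""
    h = [H - bi for bi in b]
    res = []
    for i in range(1, len(b) - 1):
        src = g * h[i] * (b[i + 1] - b[i - 1]) / (2 * dx)
        if balanced:
            flux = g * h[i] * (h[i + 1] - h[i - 1]) / (2 * dx)
        else:
            flux = 0.5 * g * (h[i + 1] ** 2 - h[i - 1] ** 2) / (2 * dx)
        res.append(abs(flux + src))
    return max(res)

bathy = [0.5 + 0.3 * math.sin(0.4 * i) for i in range(50)]
r_bal = momentum_residual(bathy, balanced=True)
r_naive = momentum_residual(bathy, balanced=False)
```

The balanced residual is at round-off level, so the flat free surface stays exactly flat; the naive one does not vanish and would excite the spurious oscillations described above.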
Numerical Simulation of Shock Propagation in Dilute Monodisperse Bubbly Liquids
NASA Astrophysics Data System (ADS)
Cartmell, J. J.; Nadim, A.; Barbone, P. E.
1997-11-01
The MacCormack finite-difference method is used to simulate the propagation and evolution of shock waves in a bubbly liquid. The bubbly liquid is modeled as a continuum which is described by the continuity and Euler equations, but with a non-equilibrium equation of state (EOS) which relates the mixture pressure to the mixture density and its first two material time derivatives. This nonlinear EOS can be derived by assuming the liquid phase to be incompressible and the gas bubbles to be identical and non-interacting. The bubbles are further assumed to translate with the mixture velocity, and their spherical oscillations are taken to be described by the Rayleigh-Plesset equation. In 1-D, the evolution of an initial step function in pressure is followed in time. This produces a shock which propagates towards the low pressure side and a rarefaction front which moves in the opposite direction. The shock forms a steady traveling wave with the oscillatory tail characteristic of bubbly liquids. In 2-D, the focusing of an initially small amplitude wave into a strong shock is simulated.
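For a linear model problem, the MacCormack predictor-corrector used above reduces to a compact two-stage update. The sketch below applies it to 1-D linear advection with periodic boundaries, not to the bubbly-liquid Euler system itself; in this linear case the scheme is equivalent to Lax-Wendroff.

```python
import math

def maccormack_advect(u, c, dt, dx, steps):
    """MacCormack scheme for u_t + c u_x = 0 on a periodic grid:
    forward-difference predictor, backward-difference corrector,
    averaged. Second-order accurate in space and time."""
    n = len(u)
    lam = c * dt / dx
    for _ in range(steps):
        # predictor: forward difference
        up = [u[i] - lam * (u[(i + 1) % n] - u[i]) for i in range(n)]
        # corrector: average with a backward difference of the predicted field
        u = [0.5 * (u[i] + up[i] - lam * (up[i] - up[i - 1])) for i in range(n)]
    return u

# advect a smooth profile once around the unit periodic domain (CFL = 0.5)
n, c = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c
u0 = [math.sin(2.0 * math.pi * i * dx) for i in range(n)]
u1 = maccormack_advect(u0, c, dt, dx, steps=200)  # t = 1: one full period
```

After one full revolution the profile returns close to its initial state, with only small dispersive phase error.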
Computer simulation of short shock pulses propagation in ceramic materials
NASA Astrophysics Data System (ADS)
Skripnyak, Vladimir A.; Skripnyak, Evgenia G.; Zhukova, Tat'yana V.
2001-06-01
The propagation of shock pulses with durations from a microsecond down to several tens of nanoseconds, and the attenuation of their amplitude, in single-phase polycrystalline ceramics, sapphire and ruby single crystals, and nanocrystalline ceramic composites are investigated by numerical simulation. The propagation of shock and unloading waves is determined by the mechanical behavior of the ceramics, which depends on the evolution of their structure. The relaxation of shear stress in structural ceramics can be caused by a set of physical mechanisms at the meso- and micro-scale levels. The model used takes into account the kinetics of inelastic deformation caused by martensitic phase transformation, nucleation and motion of dislocations, nucleation of shear microcracks, etc. The simulation results indicate that inelastic deformation can be negligible in structural elements of polycrystalline ceramics even when the shock pulse amplitude is higher than the Hugoniot Elastic Limit (HEL), if the pulse duration is comparable with the relaxation time of the dominant physical mechanisms. Under these conditions the actual spall strength of polycrystalline ceramics is comparable to the theoretical tensile strength of the single crystals. Al2O3 ceramics can deform almost purely elastically, with shear stresses comparable to the theoretical shear strength, if the duration of the pulse loading is no more than some tens of nanoseconds.
Numerical simulation of premixed flame propagation in a closed tube
NASA Astrophysics Data System (ADS)
Kuzuu, Kazuto; Ishii, Katsuya; Kuwahara, Kunio
1996-08-01
Premixed flame propagation of a methane-air mixture in a closed tube is estimated through a direct numerical simulation of the three-dimensional unsteady Navier-Stokes equations coupled with chemical reaction. In order to deal with a combusting flow, an extended version of the MAC method, which can be applied to a compressible flow with strong density variation, is employed as the numerical method. The chemical reaction is assumed to be an irreversible single-step reaction between methane and oxygen. The chemical species are CH4, O2, N2, CO2, and H2O. In this simulation, we reproduce the formation of a tulip flame in a closed tube during the flame propagation. Furthermore, we estimate not only the two-dimensional shape but also the three-dimensional structure of the flame and the flame-induced vortices, which cannot be observed in the experiments. The agreement between the calculated results and the experimental data is satisfactory, and we compare the phenomenon near the side wall with the one in the corner of the tube.
Monte Carlo simulation of light propagation in the adult brain
NASA Astrophysics Data System (ADS)
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter
2004-06-01
When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) exhibiting a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the material light absorption and dispersion coefficient for each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extra-cerebral contamination are included.
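The core of such a simulation is a photon random walk. A deliberately minimal homogeneous-medium version (isotropic scattering, no refractive-index mismatch, survival sampling instead of photon weights, illustrative coefficients rather than MRI-derived voxel values) looks like:

```python
import math
import random

def mc_pathlength(mua, mus, n_photons=2000, seed=7):
    """Monte Carlo photon walk in a semi-infinite homogeneous medium.
    mua, mus: absorption and scattering coefficients in mm^-1 (illustrative).
    Returns the fraction of photons re-emitted through the surface and their
    mean total path length in mm (the quantity behind path-length factors
    such as the DPF)."""
    rng = random.Random(seed)
    mut = mua + mus
    albedo = mus / mut
    escaped, total_path = 0, 0.0
    for _ in range(n_photons):
        z, uz, path = 0.0, 1.0, 0.0              # launch at surface, heading in
        while True:
            s = -math.log(1.0 - rng.random()) / mut  # free path ~ Exp(mut)
            z += uz * s
            path += s
            if z <= 0.0:                         # re-crossed the surface
                escaped += 1
                total_path += path
                break
            if rng.random() > albedo:            # absorbed
                break
            uz = 2.0 * rng.random() - 1.0        # isotropic rescattering
    return escaped / n_photons, total_path / max(escaped, 1)

frac, mean_path = mc_pathlength(mua=0.01, mus=1.0)
```

Replacing the single (mua, mus) pair with per-voxel values from a segmented MRI volume is what turns this toy walk into the head model described above.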
Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices
Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee
2015-01-01
In transmitting video data securely over Video Sensor Networks (VSNs), since mobile handheld devices have limited resources in terms of processor clock speed and battery size, it is necessary to develop an efficient method to encrypt video data to meet the increasing demand for secure connections. Selective encryption methods can reduce the amount of computation needed while satisfying high-level security requirements. This is achieved by selecting an important part of the video data and encrypting it. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed new selective encryption method exploits the error propagation property in an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame encryption, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068
NASA Astrophysics Data System (ADS)
Béchet, C.; Tallon, M.; Thiébaut, E.
2008-07-01
The turbulent wavefront reconstruction step in an adaptive optics system is an inverse problem. The Mean-Square Error (MSE) assessing the reconstruction quality is made of two terms, often called bias and variance. The latter is also commonly referred to as the noise propagation. The aim of this paper is to investigate the evolution of these two error contributions when the number of parameters to be estimated becomes of the order of 10^4. Such dimensions are expected for the adaptive optics systems on the Extremely Large Telescopes. We provide an algebraic formalism to compare the MSE of Maximum Likelihood and Maximum A Posteriori linear reconstructors. A Generalized Singular Value Decomposition applied to the reconstructors theoretically highlights the differences between zonal and modal approaches, and demonstrates the gain in using the Maximum A Posteriori method. Using numerical simulations, we quantitatively study the evolution of the MSE contributions with respect to the pupil shape, the outer scale of the turbulence, the number of actuators and the signal-to-noise ratio. Simulation results are consistent with previous noise propagation studies and with our algebraic analysis. Finally, using the Fractal Iterative Method as a Maximum A Posteriori reconstruction algorithm in our simulations, we demonstrate a possible reduction of the MSE by a factor of 2 in large adaptive optics systems at low signal-to-noise ratio.
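The bias/variance trade-off behind the Maximum A Posteriori gain can already be seen in a scalar toy problem. This is not the 10^4-parameter reconstructor: the scalar Wiener gain below merely stands in for the MAP operator, and all variances are illustrative.

```python
import random
from statistics import mean

def mse_ml_vs_map(sig_x=1.0, sig_n=1.0, trials=20000, seed=3):
    """Scalar toy model: y = x + n, with signal x ~ N(0, sig_x^2) playing
    the role of the turbulence prior and noise n ~ N(0, sig_n^2). The ML
    estimate is y itself (unbiased, full noise propagation); the MAP/Wiener
    estimate shrinks by g = sig_x^2 / (sig_x^2 + sig_n^2), trading bias
    for a lower total MSE at low signal-to-noise ratio."""
    rng = random.Random(seed)
    g = sig_x ** 2 / (sig_x ** 2 + sig_n ** 2)
    se_ml, se_map = [], []
    for _ in range(trials):
        x = rng.gauss(0.0, sig_x)
        y = x + rng.gauss(0.0, sig_n)
        se_ml.append((y - x) ** 2)
        se_map.append((g * y - x) ** 2)
    return mean(se_ml), mean(se_map)

mse_ml, mse_map = mse_ml_vs_map()
```

With unit signal and noise variances the theoretical MSEs are 1.0 (ML) and 0.5 (MAP), the same factor-of-2 regime the paper reports at low signal-to-noise ratio.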
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran (NOTE: a C++ version of this program is available in the Library as AEGQ_v1_0)
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round
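The probabilistic (CESTAC-style) idea behind CADNA can be mimicked crudely in Python: run a computation several times with each operation's result randomly perturbed in its last bits, and count the digits the runs share. This is only a sketch: the perturbation model and the 20-run sample size are simplifications of CADNA's true random rounding, which uses three runs on stochastic types.

```python
import math
import random

def rnd(x, rng, eps=2.0 ** -24):
    """Perturb x in its last bits: a crude stand-in for random rounding,
    here emulating single-precision granularity."""
    return x * (1.0 + eps * (2.0 * rng.random() - 1.0))

def unstable(rng):
    """Catastrophic cancellation: (big + small) - big computed with
    perturbed roundings loses most significant digits of 'small'."""
    big, small = rnd(1.0e7, rng), rnd(1.234567, rng)
    return rnd(rnd(big + small, rng) - big, rng)

def significant_digits(samples):
    """Digits common to the perturbed runs, estimated as log10(|mean|/std)."""
    m = sum(samples) / len(samples)
    s = math.sqrt(sum((x - m) ** 2 for x in samples) / (len(samples) - 1))
    return 15.0 if s == 0.0 else max(0.0, math.log10(abs(m) / s))

rng = random.Random(0)
d_unstable = significant_digits([unstable(rng) for _ in range(20)])
d_stable = significant_digits([rnd(1.234567, rng) for _ in range(20)])
```

The cancelling computation retains almost no significant digits while the direct one keeps most of them, which is exactly the kind of diagnosis stochastic types deliver inside a running program.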
Robust Simulator for Error-Visualization in Assisting Learning Science
NASA Astrophysics Data System (ADS)
Horiguchi, Tomoya; Hirashima, Tsukasa
Error-based Simulation (EBS) is a framework for helping a learner become aware of his errors. It runs a simulation based on his erroneous hypothesis to show what unreasonable phenomena would occur if the hypothesis were correct, which has been proved effective in causing cognitive conflict. In making an EBS, it is necessary (1) to perform simulation with a set of inconsistent constraints, because erroneous hypotheses often contradict the correct knowledge, and (2) to estimate the 'unreasonableness' of the phenomena in the simulation, because it must be recognized as 'unreasonable' by the learner. Since the method used in previous EBS systems was highly domain-dependent, this paper describes a method for making an EBS from any inconsistent set of simultaneous equations/inequalities by using a TMS (the method is called 'Partial Constraint Analysis (PCA)'). It also describes a set of general heuristics for estimating the 'unreasonableness' of physical phenomena. Using PCA and the heuristics, a prototype EBS system for elementary mechanics and electric circuit problems was implemented, in which a learner is asked to set up the equations of the systems. A preliminary test supported the usefulness of our method: most of the subjects agreed that the EBSs and explanations produced by the prototype were effective in making a learner aware of his errors.
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
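The Markov Chain channel programs mentioned above model channels with memory. A standard two-state Gilbert-Elliott sketch (all parameter values illustrative, not taken from the report) shows the burstiness such models capture:

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2, e_good=1e-4, e_bad=0.2, seed=11):
    """Two-state Markov burst-error channel: a 'good' state with rare bit
    errors and a 'bad' state with frequent ones. p_gb / p_bg are the
    good-to-bad and bad-to-good transition probabilities per bit; the state
    transitions give the channel its memory. Returns a per-bit error flag."""
    rng = random.Random(seed)
    bad = False
    errors = []
    for _ in range(n_bits):
        errors.append(rng.random() < (e_bad if bad else e_good))
        bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
    return errors

errs = gilbert_elliott(200000)
ber = sum(errs) / len(errs)
```

The average bit error rate hides the structure: the probability of an error immediately following another error is far higher than the marginal rate, which is what defeats memoryless error-correction analyses and motivates interleaving studies.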
NASA Astrophysics Data System (ADS)
Mulyana, Ade Komara
The stereo SAR (Synthetic Aperture Radar) technique computes three-dimensional coordinates of ground objects by making use of two quantities that can be derived from a SAR image: range and Doppler angle. Due to the flexibility in the image collection requirements, this is an attractive alternative to interferometric SAR. Applying stereo SAR to stripmap SAR images, however, is known to produce coordinates with only modest accuracy. Another mode of SAR, the spotlight mode, produces SAR images with a superior resolution (at the expense of smaller coverage). Together with the more advanced navigation systems available today, this makes applying the stereo technique to airborne spotlight SAR images an interesting topic of study. An error model for stereo spotlight SAR in the form of the precision for the observations is developed. The precision of the navigation data is derived directly from the performance description of available navigation systems. The error in the range and Doppler angle is derived from the analysis of the image formation process applied to real spotlight SAR data, including the autofocus process. An error analysis for stereo SAR is performed based on a covariance analysis study. The impact of navigation data quality, different flight trajectories, and different distances to the scene on the precision of the computed ground coordinates is evaluated. The analysis is done using simulated spotlight SAR images of discrete point objects. Ground coordinates with Circular (Horizontal) Error at 0.9 probability (CE90) on the order of 1-2 m and Linear (Vertical) Error at 0.9 probability (LE90) of 1 m are possible to achieve from a medium distance of about 6 km. From a longer distance of about 50 km, CE90 of about 6 m is obtained from a reasonable flight configuration. The results also reveal the necessity to determine the direction of the velocity vector precisely. Each component of the velocity vector must be determined to better than 10 cm/sec. These
Correction of Discretization Errors Simulated at Supply Wells.
MacMillan, Gordon J; Schumacher, Jens
2015-01-01
Many hydrogeology problems require predictions of hydraulic heads in a supply well. In most cases, the regional hydraulic response to groundwater withdrawal is best approximated using a numerical model; however, simulated hydraulic heads at supply wells are subject to errors associated with model discretization and well loss. An approach for correcting the simulated head at a pumping node is described here. The approach corrects for errors associated with model discretization and can incorporate the user's knowledge of well loss. The approach is model independent, can be applied to finite difference or finite element models, and allows the numerical model to remain somewhat coarsely discretized and therefore numerically efficient. Because the correction is implemented external to the numerical model, one important benefit of this approach is that a response-matrix, reduced-model approach can be supported even when nonlinear well loss is considered.
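One classical correction of this kind can be sketched under the assumption of a square, isotropic finite-difference grid (the paper's approach is more general and model independent): Peaceman's equivalent block radius combined with the Thiem equation. All heads, rates, and radii below are illustrative.

```python
import math

def corrected_well_head(h_node, q, transmissivity, dx, r_well):
    """Correct a simulated head at a pumping node for grid-block averaging.
    Peaceman's result gives an equivalent block radius r0 ~= 0.198*dx at
    which the node head applies; the Thiem equation then carries the
    drawdown in to the well radius. q > 0 for withdrawal [m^3/s],
    transmissivity [m^2/s]. Well loss would enter as an additional,
    site-specific term, omitted here."""
    r0 = 0.198 * dx
    return h_node - (q / (2.0 * math.pi * transmissivity)) * math.log(r0 / r_well)

# for the same nominal node head, a coarser grid implies a larger correction
h_coarse = corrected_well_head(h_node=95.0, q=0.02, transmissivity=5e-3,
                               dx=100.0, r_well=0.15)
h_fine = corrected_well_head(h_node=95.0, q=0.02, transmissivity=5e-3,
                             dx=10.0, r_well=0.15)
```

Because the correction is a post-processing step on the node head, it is compatible with keeping the regional model coarsely discretized, as the abstract argues.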
NASA Astrophysics Data System (ADS)
Liu, C. L.; Kirchengast, G.; Zhang, K. F.; Norman, R.; Li, Y.; Zhang, S. C.; Carter, B.; Fritzer, J.; Schwaerz, M.; Choy, S. L.; Wu, S. Q.; Tan, Z. X.
2013-09-01
Global Navigation Satellite System (GNSS) radio occultation (RO) is an innovative meteorological remote sensing technique for measuring atmospheric parameters such as refractivity, temperature, water vapour and pressure for the improvement of numerical weather prediction (NWP) and global climate monitoring (GCM). GNSS RO has many unique characteristics including global coverage, long-term stability of observations, as well as high accuracy and high vertical resolution of the derived atmospheric profiles. One of the main error sources in GNSS RO observations that significantly affect the accuracy of the derived atmospheric parameters in the stratosphere is the ionospheric error. In order to mitigate the effect of this error, the linear ionospheric correction approach for dual-frequency GNSS RO observations is commonly used. However, the residual ionospheric errors (RIEs) can still be significant, especially when large ionospheric disturbances occur and prevail, such as during periods of active space weather. In this study, the RIEs were investigated under different local-time, propagation-direction and solar-activity conditions, and their effects on RO bending angles were characterised using end-to-end simulations. A three-step simulation study was designed to investigate the characteristics of the RIEs by comparing the bending angles with and without the effects of the RIEs. This research forms an important step forward in improving the accuracy of the atmospheric profiles derived from the GNSS RO technique.
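The linear dual-frequency correction that leaves these residual errors behind combines the two bending-angle profiles so that the first-order (1/f^2) ionospheric terms cancel. The sketch below uses the GPS L1/L2 frequencies; the ionospheric constant in the toy check is hypothetical.

```python
def iono_corrected_bending(alpha_l1, alpha_l2, f1=1.57542e9, f2=1.22760e9):
    """Standard linear ionospheric correction for RO bending angles,
    evaluated at a common impact parameter:
    alpha_LC = (f1^2 * alpha_1 - f2^2 * alpha_2) / (f1^2 - f2^2)."""
    return (f1 ** 2 * alpha_l1 - f2 ** 2 * alpha_l2) / (f1 ** 2 - f2 ** 2)

# toy check: a 'true' neutral bending of 0.02 rad plus a purely first-order
# dispersive term k/f^2 (k hypothetical) is recovered exactly
k = 4.0e16
a1 = 0.02 + k / 1.57542e9 ** 2
a2 = 0.02 + k / 1.22760e9 ** 2
alpha_c = iono_corrected_bending(a1, a2)
```

Real ionospheres add higher-order and path-separation terms that this combination does not remove; those are exactly the residual ionospheric errors the study characterises.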
Li, Hui; Fu, Zhida; Liu, Liying; Lin, Zhili; Deng, Wei; Feng, Lishuang
2017-01-01
An improved temperature-insensitive optical voltage sensor (OVS) with a reciprocal dual-crystal sensing method is proposed. The mechanism of OVS reciprocity degradation is explained by taking into consideration the different temperature fields of the two crystals and the axis errors of the optical components. The key parameters pertaining to the system reciprocity degradation in the dual-crystal sensing unit are investigated in order to optimize the optical sensing model based on Maxwell's electromagnetic theory. The influence of axis-angle errors on the system nonlinearity in the Pockels phase-transfer unit is analyzed. Moreover, a novel axis-angle compensation method is proposed to improve the OVS measurement precision according to the simulation results. The experimental results show that the measurement precision of the OVS is better than ±0.2% in the temperature range from −40 °C to +60 °C, which demonstrates the excellent temperature stability of the designed voltage sensing system. PMID:28054951
[Monte Carlo simulation of the divergent beam propagation in a semi-infinite bio-tissue].
Zhang, Lin; Qi, Shengwen
2013-12-01
In order to study light propagation in biological tissue, we analyzed divergent beam propagation in a turbid medium. We set up a Monte Carlo model for simulating the propagation of a divergent beam in a semi-infinite bio-tissue. Using this model, we studied the absorbed photon density for different tissue parameters in the case of a divergent beam entering the tissue. The simulation results revealed the rules of optical propagation in the tissue and suggest that optical diagnosis and treatment can be guided by these rules.
Jia, Hao; Chen, Bin; Li, Dong; Zhang, Yong
2015-02-01
To adapt the complex tissue structure, laser propagation in a two-layered skin model is simulated to compare voxel-based Monte Carlo (VMC) and tetrahedron-based MC (TMC) methods with a geometry-based MC (GMC) method. In GMC, the interface is mathematically defined without any discretization. GMC is the most accurate but is not applicable to complicated domains. The implementation of VMC is simple because of its structured voxels. However, unavoidable errors are expected because of the zigzag polygonal interface. Compared with GMC and VMC, TMC provides a balance between accuracy and flexibility by the tetrahedron cells. In the present TMC, the body-fitted tetrahedra are generated in different tissues. No interface tetrahedral cells exist, thereby avoiding the photon reflection error in the interface cells in VMC. By introducing a distance threshold, the error caused by confused optical parameters between neighboring cells when photons are incident along the cell boundary can be avoided. The results show that the energy deposition error by TMC in the interfacial region is one-tenth to one-fourth of that by VMC, yielding more accurate computations of photon reflection, refraction, and energy deposition. The results of multilayered and n-shaped vessels indicate that a laser with a 1064-nm wavelength should be introduced to clean deep-buried vessels. © 2015 Society of Photo-Optical Instrumentation Engineers (SPIE)
Simulations of ultra-high-energy cosmic rays propagation
Kalashev, O. E.; Kido, E.
2015-05-15
We compare two techniques for simulating the propagation of ultra-high-energy cosmic rays (UHECR) in intergalactic space: the Monte Carlo approach and a method based on solving transport equations in one dimension. For the former, we adopt the publicly available tool CRPropa, and for the latter we use the code TransportCR, which has been developed by the first author, used in a number of applications, and is made available online with the publication of this paper. While the CRPropa code is more universal, the transport equation solver has the advantage of a roughly 100 times higher calculation speed. We conclude that the methods give practically identical results for proton or neutron primaries if some accuracy improvements are introduced to the CRPropa code.
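The equivalence of the two approaches can be illustrated on a toy catastrophic-loss process (a 1-D sketch with an illustrative mean free path; CRPropa and TransportCR of course treat full interaction chains and energy spectra):

```python
import math
import random

def mc_survival(dist, mfp, n=50000, seed=5):
    """Monte Carlo: draw each particle's free path from Exp(mfp) [Mpc] and
    count how many survive propagation over dist [Mpc] without undergoing
    the (toy) catastrophic interaction."""
    rng = random.Random(seed)
    alive = sum(1 for _ in range(n)
                if -math.log(1.0 - rng.random()) * mfp > dist)
    return alive / n

def transport_survival(dist, mfp, steps=2000):
    """Deterministic 1-D transport: integrate dN/dx = -N/mfp with explicit
    Euler steps; up to MC noise and discretization error, the two methods
    agree."""
    n, dx = 1.0, dist / steps
    for _ in range(steps):
        n -= n * dx / mfp
    return n

s_mc = mc_survival(100.0, 50.0)
s_tr = transport_survival(100.0, 50.0)
```

The transport solve needs one sweep regardless of particle count, which is the source of the large speed advantage reported above; the Monte Carlo run, by contrast, carries statistical noise that shrinks only as the square root of the sample size.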
Simulation of seismic wave propagation for reconnaissance in machined tunnelling
NASA Astrophysics Data System (ADS)
Lambrecht, L.; Friederich, W.
2012-04-01
During machined tunnelling, there is a complex interaction chain among the components involved. For example, on the one hand, the machine influences the surrounding ground during excavation; on the other hand, supporting measures acting on the ground are needed. Furthermore, the varying soil conditions influence the wear of tools, the speed of the excavation and the safety of the construction site. In order to get information about the ground along the tunnel track, one can use seismic imaging. To get a better understanding of seismic wave propagation in a tunnel environment, we perform numerical simulations. For that, we use the spectral element method (SEM) and the nodal discontinuous Galerkin method (NDG). In both methods, elements are the basis for discretizing the domain of interest in order to perform high-order elastodynamic simulations. The SEM is a fast and widely used method, but its biggest drawback is its limitation to hexahedral elements. For complex heterogeneous models with a tunnel included, it is a better choice to use the NDG, which needs more computation time but can be adapted to tetrahedral elements. Using this technique, we can perform high-resolution simulations of waves initialized by a single force acting either on the front face or the side face of the tunnel. The aim is to produce waves that travel mainly in the direction of the tunnel track and to get as much information as possible from the backscattered part of the wave field.
Simulating vestibular compensation using recurrent back-propagation.
Anastasio, T J
1992-01-01
Vestibular compensation is simulated as learning in a dynamic neural network model of the horizontal vestibulo-ocular reflex (VOR). The bilateral, three-layered VOR model consists of nonlinear units representing horizontal canal afferents, vestibular nuclei (VN) neurons and eye muscle motoneurons. Dynamic processing takes place via commissural connections that link the VN bilaterally. The intact network is trained, using recurrent back-propagation, to produce the VOR with velocity storage integration. Compensation is simulated by removing vestibular afferent input from one side and retraining the network. The time course of simulated compensation matches that observed experimentally. The behavior of model VN neurons in the compensated network also matches real data, but only if connections at the motoneurons, as well as at the VN, are allowed to be plastic. The dynamic properties of real VN neurons in compensated and normal animals are found to differ when tested with sinusoidal but not with step stimuli. The model reproduces these conflicting data, and suggests that the disagreement may be due to VN neuron nonlinearity.
Computer simulation of microwave propagation in heterogeneous and fractal media
NASA Astrophysics Data System (ADS)
Korvin, Gabor; Khachaturov, Ruben V.; Oleschko, Klaudia; Ronquillo, Gerardo; Correa López, María de jesús; García, Juan-josé
2017-03-01
Maxwell's equations (MEs) are the starting point for all calculations involving surface or borehole electromagnetic (EM) methods in the petroleum industry. In well-log analysis, numerical modeling of resistivity and induction tool responses has become an indispensable step of interpretation. We developed a new method to numerically simulate electromagnetic wave propagation through heterogeneous and fractal slabs, taking into account multiple scattering in the direction of normal incidence. In the simulation, the gray-scale image of the porous medium is explored by monochromatic waves. The gray tone of each pixel can be related to the dielectric permittivity of the medium at that point by two different equations (linear dependence, and fractal or power-law dependence). The wave equation is solved in a second-order difference approximation, using a modified sweep technique. Examples are shown for simulated EM waves in carbonate rocks imaged at different scales by electron microscopy and optical photography. The method has wide-ranging applications in remote sensing, borehole scanning and Ground Penetrating Radar (GPR) exploration.
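The slab problem described above can be reduced to one dimension for illustration: a monochromatic wave at normal incidence on a permittivity profile ε(x), with the Helmholtz equation discretized by second-order differences and solved by a tridiagonal sweep. This is a minimal sketch, not the authors' code; the grid size, wavenumber, slab geometry and radiation boundary treatment are illustrative assumptions:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Tridiagonal 'sweep' (Thomas) solver: a sub-, b main-, c super-diagonal."""
    n = len(b)
    cp = np.zeros(n, dtype=complex)
    dp = np.zeros(n, dtype=complex)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n, dtype=complex)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def transmit_1d(eps, length, k0):
    """Field of a unit-amplitude monochromatic wave at normal incidence on eps(x)."""
    n = len(eps)
    h = length / (n - 1)
    a = np.full(n, 1.0 / h**2, dtype=complex)
    b = -2.0 / h**2 + k0**2 * np.asarray(eps, dtype=complex)
    c = np.full(n, 1.0 / h**2, dtype=complex)
    d = np.zeros(n, dtype=complex)
    # Radiation boundary conditions via ghost-point elimination (2nd-order accurate):
    b[0] = -2.0 / h**2 + 2j * k0 / h + k0**2 * eps[0]    # incident + reflected wave at x=0
    c[0] = 2.0 / h**2
    d[0] = 4j * k0 / h
    kr = k0 * np.sqrt(eps[-1])
    a[-1] = 2.0 / h**2
    b[-1] = -2.0 / h**2 + 2j * kr / h + k0**2 * eps[-1]  # outgoing wave only at x=L
    return thomas_solve(a, b, c, d)

eps = np.ones(801)
eps[320:480] = 2.0                       # a dielectric slab in the middle of the domain
u = transmit_1d(eps, 1.0, 2 * np.pi)
print("transmission |T| =", abs(u[-1]))
```

In a gray-scale-image version, `eps` would be filled from the pixel values via the linear or power-law mapping mentioned in the abstract.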
Computational fluid dynamics simulation of sound propagation through a blade row.
Zhao, Lei; Qiao, Weiyang; Ji, Liang
2012-10-01
The propagation of sound waves through a blade row is investigated numerically. A wave-splitting method in a two-dimensional duct with arbitrary mean flow is presented, based on which the pressure amplitude of each wave mode can be extracted at an axial plane. The propagation of sound waves through a flat-plate blade row has been simulated by solving the unsteady Reynolds-averaged Navier-Stokes equations (URANS). The transmission and reflection coefficients obtained by Computational Fluid Dynamics (CFD) are compared with semi-analytical results. The comparison indicates that the low-order URANS scheme causes large errors if the sound pressure level is lower than -100 dB (with the product of density, mean flow velocity, and speed of sound as reference pressure). The CFD code has sufficient precision when solving the interaction of sound waves and the blade row, provided the boundary reflections have no substantial influence. Finally, the effects of flow Mach number, blade thickness, and blade turning angle on sound propagation are studied.
Unraveling the uncertainty and error propagation in the vertical flux Martin curve
NASA Astrophysics Data System (ADS)
Olli, Kalle
2015-06-01
Analyzing the vertical particle flux and particle retention in the upper twilight zone has commonly been accomplished by fitting a power function to the data. Measuring the vertical particle flux in the upper twilight zone, where most of the re-mineralization occurs, is a complex endeavor. Here I use field data and simulations to show how uncertainty in the particle flux measurements propagates into the vertical flux attenuation model parameters. Further, I analyze how the number of sampling depths and variations in the vertical sampling locations influence the model performance and parameter stability. The arguments provide a simple framework for optimizing the sampling scheme when vertical flux attenuation profiles are measured in the field, either with an array of sediment traps or with the 234Th methodology. A reasonable compromise between effort and quality of results is to sample at least six depths: the upper sampling depth as close to the base of the euphotic layer as feasible, the vertical sampling depths slightly aggregated toward the upper aphotic zone where most of the vertical flux attenuation takes place, and the lower end of the sampling range extended as deep as practicable into the twilight zone.
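The propagation of flux-measurement uncertainty into the power-law (Martin curve) exponent can be sketched with a small Monte Carlo experiment: perturb the fluxes at six depths with lognormal noise and refit the attenuation exponent b each time. This is a hedged sketch; the depths, the ~20% error level and the reference exponent b = 0.86 are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(42)
z = np.array([100.0, 150.0, 200.0, 300.0, 500.0, 800.0])   # six sampling depths (m)
b_true, f_ref = 0.86, 100.0
flux_true = f_ref * (z / z[0]) ** (-b_true)                # Martin curve F(z) = F(z0)*(z/z0)^-b

def fit_b(flux):
    # the power law is linear in log-log space: ln F = ln F(z0) - b * ln(z/z0)
    slope, _ = np.polyfit(np.log(z / z[0]), np.log(flux), 1)
    return -slope

b_samples = np.array([
    fit_b(flux_true * rng.lognormal(0.0, 0.20, z.size))    # ~20% multiplicative error
    for _ in range(2000)
])
print(f"b = {b_samples.mean():.3f} +/- {b_samples.std():.3f}")
```

The spread of the fitted exponents quantifies how measurement error alone limits the precision of b; re-running with fewer depths or a narrower depth range widens the spread, which is the sampling-scheme argument made in the abstract.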
NASA Astrophysics Data System (ADS)
Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose
2015-09-01
The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels, and the bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the in-flight instrument configuration can differ from the laboratory one given the harsh space environment and the stresses of launch. Retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and to incorrect characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and propagate to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of ISRF knowledge error and spectral calibration error on Level-1 products, and its propagation to Level-2 retrieved chlorophyll fluorescence, has been analyzed. A spectral recalibration scheme implemented at Level-2 reduces the Level-1 errors such that the error in retrieved fluorescence within the oxygen absorption bands stays below 10%, enhancing the quality of the retrieved products. The work presented here shows that minimizing spectral calibration errors requires effort both in laboratory characterization and in the implementation of specific algorithms at Level-2.
Cereatti, Andrea; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio
2007-01-01
To estimate hip joint angles during selected motor tasks using stereophotogrammetric data, it is necessary to determine the hip joint centre position. The question is whether the errors affecting that determination propagate less to the angle estimates when a three-degrees-of-freedom (DOF) constraint (spherical hinge) is imposed between femur and pelvis than when the two bones are assumed to be unconstrained (six DOFs). An analytical relationship between the hip joint centre location error and the joint angle error was obtained for the planar case. In the 3-D case, a similar relationship was obtained using a simulation approach based on experimental data. The joint angle patterns showed larger distortion with the constrained approach, especially when wider rotations occur. The range of motion of hip flexion-extension, obtained by simulating different location errors and without taking soft tissue artefacts into account, varied by approximately 7 deg with the constrained approach and by up to 1 deg with the unconstrained approach. Thus, the unconstrained approach should be preferred, even though its three estimated linear DOFs are unlikely to carry meaningful information.
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method of parameter estimation for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method has negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
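EVM as a figure of merit can be computed directly from received and reference constellation points. A minimal sketch for a 16-QAM signal (the additive noise level is an illustrative assumption, and no SOA nonlinearity model is included):

```python
import numpy as np

def evm_percent(rx, ref):
    """RMS error vector magnitude as a percentage of the reference RMS power."""
    return 100.0 * np.sqrt(np.mean(np.abs(rx - ref) ** 2) / np.mean(np.abs(ref) ** 2))

# 16-QAM constellation on a unit-spaced grid
levels = np.array([-3.0, -1.0, 1.0, 3.0])
constellation = np.array([i + 1j * q for i in levels for q in levels])

rng = np.random.default_rng(0)
tx = rng.choice(constellation, size=10_000)
noise = 0.1 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))
rx = tx + noise
print(f"EVM = {evm_percent(rx, tx):.2f}%")
```

In a coarse-parameter search of the kind described, one would sweep a candidate DFBP parameter, recompute the compensated constellation, and keep the value that minimizes this EVM.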
Numerical Simulation of Acoustic Propagation in a Lined Duct
NASA Astrophysics Data System (ADS)
Biringen, S.; Reichert, R. S.; Yu, J.; Zorumski, W. E.
1996-11-01
An inviscid, spatial time-domain numerical simulation is employed to compute acoustic wave propagation in a duct treated with an acoustic liner. The motivation is to assess the effects on sound attenuation of bias flow passed through the liner for application to noise suppression in jet engine nacelles. Physically, the liner is composed of porous sheets with backing air cavities. The mathematical model lumps the liner presence into a continuous empirical source term which modifies the right-hand side of the momentum equations. Thus, liner effects are felt interior to the domain rather than through boundary conditions. This source term determines the time-domain effects of the frequency-domain resistance and reactance of the liner's component sheets. The source term constants are matched to frequency-domain impedance data via a one-dimensional numerical impedance tube simulation. Nonlinear behavior of the liner at high sound pressure levels is included in the form of the source term. Sound pressure levels and axially transmitted power are computed to assess the effect of various magnitudes of bias flow on attenuation.
Numerical simulation of broadband vortex terahertz beams propagation
NASA Astrophysics Data System (ADS)
Semenova, V. A.; Kulya, M. S.; Bespalov, V. G.
2016-08-01
Orbital angular momentum (OAM) represents a new informational degree of freedom for data encoding and multiplexing in fiber and free-space communications. OAM-carrying beams (also called vortex beams) have been used successfully to increase the capacity of optical, millimetre-wave and radio-frequency communication systems. The investigation of the OAM potential for new-generation high-speed terahertz communications is also of interest, given the ever-growing demand for higher capacity in telecommunications. Here we present a simulation-based study of broadband terahertz vortex beams generated by a spiral phase plate (SPP) and propagating in a non-dispersive medium. An algorithm based on scalar diffraction theory was used to obtain the spatial amplitude and phase distributions of the vortex beam in the frequency range from 0.1 to 3 THz at distances of 20-80 mm from the SPP. The simulation results show that amplitude and phase distributions free of unwanted modulation appear in wavelength ranges centred on wavelengths that are multiples of the SPP optical thickness. This fact may allow the creation of a high-capacity near-field communication link that combines OAM and wavelength-division multiplexing.
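The phase structure imprinted by an ideal SPP can be checked numerically by counting the accumulated phase winding around the beam axis, which should equal the topological charge. A minimal scalar sketch (grid size and sampling radius are illustrative assumptions; no propagation or dispersion is modeled):

```python
import numpy as np

def spp_field(n, charge=1):
    """Transverse phase profile exp(i*l*phi) just behind an ideal spiral phase plate."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    return np.exp(1j * charge * np.arctan2(y, x))

def winding_number(field, r_frac=0.5, n_samples=1000):
    """Accumulated phase around the axis, in units of 2*pi (the OAM charge)."""
    n = field.shape[0]
    t = np.linspace(0.0, 2.0 * np.pi, n_samples + 1)
    ix = np.clip(np.rint((1 + r_frac * np.cos(t)) / 2 * (n - 1)).astype(int), 0, n - 1)
    iy = np.clip(np.rint((1 + r_frac * np.sin(t)) / 2 * (n - 1)).astype(int), 0, n - 1)
    phase = np.angle(field[iy, ix])
    increments = np.angle(np.exp(1j * np.diff(phase)))   # wrap each step into (-pi, pi]
    return int(round(increments.sum() / (2.0 * np.pi)))

for l in (1, 2, 3):
    print("charge", l, "-> winding", winding_number(spp_field(256, charge=l)))
```

For a broadband beam, the effective charge varies with wavelength because the fixed SPP thickness imprints a wavelength-dependent phase step, which is the origin of the favourable wavelength ranges noted in the abstract.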
Simulation of 3D Global Wave Propagation Through Geodynamic Models
NASA Astrophysics Data System (ADS)
Schuberth, B.; Piazzoni, A.; Bunge, H.; Igel, H.; Steinle-Neumann, G.
2005-12-01
This project aims at a better understanding of the forward problem of global 3D wave propagation. We use the spectral element program "SPECFEM3D" (Komatitsch and Tromp, 2002a,b) with varying input models of seismic velocities derived from mantle convection simulations (Bunge et al., 2002). The purpose of this approach is to obtain seismic velocity models independently of seismological studies. In this way one can test the effects of varying parameters of the mantle convection models on the seismic wave field. In order to obtain the seismic velocities from the temperature field of the geodynamical simulations, we follow a mineral physics approach. Assuming a certain mantle composition (e.g., pyrolite with CMASF composition), we compute the stable phases for each depth (i.e., pressure) and temperature by minimizing the Gibbs free energy of the system. Elastic moduli and density are calculated from the equations of state of the stable mineral phases. For this we use a mineral physics database derived from calorimetric experiments (enthalpy and entropy of formation, heat capacity) and EOS parameters.
Supporting the Development of Soft-Error Resilient Message Passing Applications using Simulation
Engelmann, Christian; Naughton III, Thomas J
2016-01-01
Radiation-induced bit flip faults are of particular concern in extreme-scale high-performance computing systems. This paper presents a simulation-based tool that enables the development of soft-error resilient message passing applications by permitting the investigation of their correctness and performance under various fault conditions. The documented extensions to the Extreme-scale Simulator (xSim) enable the injection of bit flip faults at specific injection location(s) and fault activation time(s), while supporting a significant degree of configurability of the fault type. Experiments show that the simulation overhead with the new feature is ~2,325% for serial execution and ~1,730% at 128 MPI processes, both with very fine-grain fault injection. Fault injection experiments demonstrate the usefulness of the new feature by injecting bit flips in the input and output matrices of a matrix-matrix multiply application, revealing the vulnerability of data structures, masking, and error propagation. xSim is the first simulation-based MPI performance tool that supports both the injection of process failures and bit flip faults.
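The kind of experiment described, injecting a bit flip into one input element of a matrix-matrix multiply and observing how far it propagates, can be sketched without xSim (whose API is not shown here); the matrix size, injected element and bit position are illustrative assumptions:

```python
import struct
import numpy as np

def flip_bit(x, bit):
    """Flip a single bit in the IEEE-754 double representation of x."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return flipped

rng = np.random.default_rng(1)
a = rng.random((8, 8))
b = rng.random((8, 8))
clean = a @ b

faulty = a.copy()
faulty[3, 5] = flip_bit(faulty[3, 5], 62)   # flip a high exponent bit of one input element
corrupt = faulty @ b

affected = np.abs(corrupt - clean) > 0.0
print("corrupted output entries:", int(affected.sum()))   # the fault spreads across row 3
```

A flip in row i of the left operand contaminates the entire row i of the product, while flips in low-order mantissa bits may be masked below the comparison threshold, illustrating the masking and propagation effects the abstract mentions.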
Using cellular automata to simulate forest fire propagation in Portugal
NASA Astrophysics Data System (ADS)
Freire, Joana; daCamara, Carlos
2017-04-01
Wildfires in the Mediterranean region have severe damaging effects, mainly due to large fire events [1, 2]. In Portugal alone, wildfires have burned over 1.4 million ha in the last decade. Considering the increasing tendency in the extent and severity of wildfires [1, 2], the availability of modeling tools for fire episodes is of crucial importance. Two main types of mathematical models are generally available, namely deterministic and stochastic models. Deterministic models attempt a description of fires, fuel and atmosphere as multiphase continua prescribing mass, momentum and energy conservation, which typically leads to systems of coupled PDEs to be solved numerically on a grid. Simpler descriptions, such as FARSITE, neglect the interaction with the atmosphere and propagate the fire front using wave techniques. Among the most important stochastic models are Cellular Automata (CA), in which space is discretized into cells and physical quantities take on a finite set of values at each cell. The cells evolve in discrete time according to a set of transition rules and the states of the neighboring cells. In the present work, we implement and then improve a simple and fast CA model designed to operationally simulate wildfires in Portugal. The reference CA model chosen [3] has the advantage of having been applied successfully in other Mediterranean ecosystems, namely to historical fires in Greece. The model is defined on a square grid with propagation to the 8 nearest and next-nearest neighbors, where each cell is characterized by 4 possible discrete states, corresponding to burning, not-yet burned, fuel-free and completely burned cells, with 4 possible rules of evolution which take into account fuel properties, meteorological conditions, and topography. As a CA model, it offers the possibility to run a very high number of simulations in order to verify and apply the model, and is easily modified by implementing additional variables and different rules for the
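The CA scheme described above can be stripped down to its essentials: a square grid, probabilistic spread to the 8 neighbours, and burning cells that burn out after one step. This is a minimal sketch, not the calibrated model of [3]; a single ignition probability `p_burn` stands in for the fuel-, weather- and slope-dependent rules:

```python
import numpy as np

FUEL_FREE, UNBURNED, BURNING, BURNED = 0, 1, 2, 3

def step(grid, p_burn, rng):
    """One CA update: each burning cell ignites flammable 8-neighbours with prob. p_burn."""
    new = grid.copy()
    for i, j in np.argwhere(grid == BURNING):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    if grid[ni, nj] == UNBURNED and rng.random() < p_burn:
                        new[ni, nj] = BURNING
        new[i, j] = BURNED          # a burning cell burns out after one step
    return new

rng = np.random.default_rng(7)
n = 51
grid = np.full((n, n), UNBURNED)
grid[n // 2, n // 2] = BURNING      # single ignition at the centre
for _ in range(40):
    grid = step(grid, p_burn=0.6, rng=rng)
print("burned cells:", int((grid == BURNED).sum()))
```

The operational model would replace the constant `p_burn` with a product of factors for fuel type, wind and slope, and would mark non-flammable cells as `FUEL_FREE`; the update loop is otherwise the same.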
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future missions, and some current ones, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher-order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis using EUVE state vectors showed that the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km, depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed to optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
Lamb wave techniques have been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles, yet few quantitative studies on evaluating the accuracy of these simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracy respectively, are first identified. Two quantitative indices, the GVE (group velocity error) and the MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are then proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. To apply the proposed method to the selection of an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method was developed. The proper element size, considering different element types, and the proper time step, considering different time integration schemes, were then selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
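The two indices can be computed directly: MACCC from the normalized cross-correlation between simulated and reference signals, and GVE from the arrival times over a fixed propagation distance. A minimal sketch with an illustrative Hann-windowed toneburst (not an actual Lamb-wave simulation; the signal parameters and arrival times are assumptions):

```python
import numpy as np

def maccc(sim, ref):
    """Maximum absolute value of the normalized cross-correlation coefficient."""
    s = (sim - sim.mean()) / (sim.std() * sim.size)
    r = (ref - ref.mean()) / ref.std()
    return float(np.abs(np.correlate(s, r, mode="full")).max())

def gve_percent(t_sim, t_ref):
    """Relative group-velocity error (%) from arrival times over the same distance."""
    return 100.0 * abs(t_ref / t_sim - 1.0)

t = np.linspace(0.0, 1e-4, 2000)
ref = np.sin(2 * np.pi * 100e3 * t) * np.hanning(t.size)   # 100 kHz toneburst reference
rng = np.random.default_rng(5)
sim = np.roll(ref, 40) + 0.05 * rng.normal(size=t.size)    # shifted + noisy "simulation"
print(f"MACCC = {maccc(sim, ref):.3f}, GVE = {gve_percent(3.3e-5, 3.2e-5):.2f}%")
```

MACCC is insensitive to a pure time shift (the maximum is taken over all lags), so it isolates shape error, while the GVE captures the position error from the arrival-time mismatch, matching the division of labour described in the abstract.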
Hoinaski, Leonardo; Franco, Davide; de Melo Lisboa, Henrique
2017-03-01
Dispersion modelling was proved by researchers that most part of the models, including the regulatory models recommended by the Environmental Protection Agency of the United States (AERMOD and CALPUFF), do not have the ability to predict under complex situations. This article presents a novel evaluation of the propagation of errors in lateral dispersion coefficient of AERMOD with emphasis on estimate of average times under 10 min. The sources of uncertainty evaluated were parameterizations of lateral dispersion ([Formula: see text]), standard deviation of lateral wind speed ([Formula: see text]) and processing of obstacle effect. The model's performance was tested in two field tracer experiments: Round Hill II and Uttenweiller. The results show that error propagation from the estimate of [Formula: see text] directly affects the determination of [Formula: see text], especially in Round Hill II experiment conditions. After average times are reduced, errors arise in the parameterization of [Formula: see text], even after observation assimilations of [Formula: see text], exposing errors on Lagrangian Time Scale parameterization. The assessment of the model in the presence of obstacles shows that the implementation of a plume rise model enhancement algorithm can improve the performance of the AERMOD model. However, these improvements are small when the obstacles have a complex geometry, such as Uttenweiller.
Clark, E.L.
1993-08-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
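Taylor-series error propagation with sensitivity coefficients can be illustrated on one such fundamental ratio, the isentropic static-to-total pressure ratio R = p/p0 = (1 + (γ-1)/2 · M²)^(-γ/(γ-1)). This particular ratio is chosen here only as an example; the report's nine ratios and tables are not reproduced:

```python
GAMMA = 1.4  # ratio of specific heats for air

def pressure_ratio(m):
    """Isentropic static-to-total pressure ratio R = p/p0 at Mach number m."""
    return (1.0 + 0.5 * (GAMMA - 1.0) * m**2) ** (-GAMMA / (GAMMA - 1.0))

def abs_sensitivity(m):
    """Absolute sensitivity coefficient dR/dM, obtained by differentiating analytically."""
    return -GAMMA * m * (1.0 + 0.5 * (GAMMA - 1.0) * m**2) ** (-GAMMA / (GAMMA - 1.0) - 1.0)

def rel_sensitivity(m):
    """Relative sensitivity coefficient (M/R)(dR/dM)."""
    return m / pressure_ratio(m) * abs_sensitivity(m)

# first-order (Taylor-series) propagation of a Mach-number error dM into the ratio
m, dm = 2.0, 0.01
print(f"R = {pressure_ratio(m):.5f}")
print(f"dR/dM = {abs_sensitivity(m):.5f}, propagated dR = {abs_sensitivity(m) * dm:.2e}")
print(f"(M/R) dR/dM = {rel_sensitivity(m):.3f}")
```

The relative sensitivity coefficient directly converts a fractional Mach-number error into a fractional error in the ratio, which is why the report tabulates both forms.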
Photon propagation correction in 3D photoacoustic image reconstruction using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Cheong, Yaw Jye; Stantz, Keith M.
2010-02-01
Purpose: The purpose of this study is to develop a new 3-D iterative Monte Carlo algorithm to recover the heterogeneous distribution of molecular absorbers within a solid tumor. Introduction: Spectroscopic imaging (PCT-S) has the potential to identify a molecular species and quantify its concentration with high spatial fidelity. To accomplish this task, accounting for tissue attenuation losses during photon propagation in heterogeneous 3D objects is necessary. An iterative recovery algorithm has been developed to extract 3D heterogeneous parametric maps of absorption coefficients, implementing a MC algorithm based on a single-source photoacoustic scanner, and to determine the influence of the reduced scattering coefficient on the uncertainty of the recovered absorption coefficient. Material and Methods: The algorithm is tested for spheres and ellipsoids embedded in a simulated mouse torso with optical absorption values ranging from 0.01 to 0.5/cm, for the same objects where the optical scattering is unknown (μs' = 7-13/cm), and for a heterogeneous distribution of absorbers. Results: Systematic and statistical errors in μa with a priori knowledge of μs' and g are <2% (sphere) and <4% (ellipsoid) for all μa, and without a priori knowledge of μs' are <3% and <6%. For heterogeneous distributions of μa, errors are <4% and <5.5% for each object with a priori knowledge of μs' and g, and rise to 7% and 14% when μs' varied from 7 to 13/cm. Conclusions: A Monte Carlo code has been successfully developed and used to correct for photon propagation effects in simulated objects consistent with tumors.
Handling error propagation in sequential data assimilation using an evolutionary strategy
NASA Astrophysics Data System (ADS)
Bai, Yulong; Li, Xin; Huang, Chunlin
2013-07-01
An evolutionary strategy-based error parameterization method that searches for the most ideal error adjustment factors was developed to obtain better assimilation results. Numerical experiments were designed using some classical nonlinear models (i.e., the Lorenz-63 model and the Lorenz-96 model). Crossover and mutation error adjustment factors of evolutionary strategy were investigated in four aspects: the initial conditions of the Lorenz model, ensemble sizes, observation covariance, and the observation intervals. The search for error adjustment factors is usually performed using trial-and-error methods. To solve this difficult problem, a new data assimilation system coupled with genetic algorithms was developed. The method was tested in some simplified model frameworks, and the results are encouraging. The evolutionary strategy-based error handling methods performed robustly under both perfect and imperfect model scenarios in the Lorenz-96 model. However, the application of the methodology to more complex atmospheric or land surface models remains to be tested.
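The evolutionary search for error adjustment factors can be sketched with a small (1+λ) evolution strategy using Gaussian mutation and elitist selection. Here a quadratic surrogate stands in for the assimilation RMSE, since a full ensemble filter on the Lorenz models would be far longer; the optimum location and all tuning constants are illustrative assumptions:

```python
import numpy as np

def evolve(objective, x0, sigma=0.3, lam=20, gens=60, seed=0):
    """(1+lambda) evolution strategy: Gaussian mutation, elitist selection, sigma decay."""
    rng = np.random.default_rng(seed)
    parent = np.asarray(x0, dtype=float)
    best = objective(parent)
    for _ in range(gens):
        offspring = parent + sigma * rng.normal(size=(lam, parent.size))
        scores = np.array([objective(child) for child in offspring])
        if scores.min() < best:                 # keep the parent unless a child improves
            best, parent = scores.min(), offspring[scores.argmin()]
        sigma *= 0.95                           # simple annealing of the mutation step
    return parent, best

# surrogate objective standing in for assimilation RMSE, with optimum at (1.0, 0.5)
def rmse_surrogate(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] - 0.5) ** 2

factors, score = evolve(rmse_surrogate, x0=[2.0, 2.0])
print("adjustment factors:", factors, "score:", score)
```

In the assimilation setting described, `rmse_surrogate` would be replaced by a function that runs the ensemble filter on the Lorenz model with the candidate adjustment factors and returns the analysis RMSE, replacing the trial-and-error search the abstract mentions.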
Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation
NASA Astrophysics Data System (ADS)
Schiavazzi, Daniele; Marsden, Alison
2015-11-01
Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs, and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow-rate relationship at the outlets. Finally, we consider the problem of non-intrusively propagating the uncertainty in model parameters to the resulting hemodynamics, and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
Numerical simulation of propagation of the MHD waves in sunspots
NASA Astrophysics Data System (ADS)
Parchevsky, K.; Kosovichev, A.; Khomenko, E.; Olshevsky, V.; Collados, M.
2010-11-01
We present results of numerical 3D simulations of the propagation of MHD waves in sunspots. We used two self-consistent magnetohydrostatic background models of sunspots. There are two main differences between these models: (i) the topology of the magnetic field and (ii) the dependence of the horizontal profile of the sound speed on depth. The model with convex magnetic field lines near the photosphere has non-zero horizontal perturbations of the sound speed down to a depth of 7.5 Mm (deep model). In the model with concave magnetic field lines near the photosphere, Δc/c is close to zero everywhere below 2 Mm (shallow model). A strong Alfvén wave is generated at the wave source location in the deep model; this wave is almost unnoticeable in the shallow model. Using a filtering technique, we separated magnetoacoustic and magnetogravity waves. It is shown that inside the sunspot, magnetoacoustic and magnetogravity waves are not spatially separated, unlike the case of the horizontally uniform background model. The sunspot causes anisotropy of the amplitude distribution along the wavefront and changes the shape of the wavefront. The amplitude of the waves is reduced inside the sunspot. This effect is stronger for the magnetogravity waves than for the magnetoacoustic waves, and the shape of the wavefront of the magnetogravity waves is also distorted more strongly. The deep model causes larger anisotropy for both magnetoacoustic and magnetogravity waves than the shallow model.
Simulation of Magnetic Cloud Erosion and Deformation During Propagation
NASA Astrophysics Data System (ADS)
Manchester, W.; Kozyra, J. U.; Lepri, S. T.; Lavraud, B.; Jackson, B. V.
2013-12-01
We examine a three-dimensional (3-D) numerical magnetohydrodynamic (MHD) simulation describing a very fast interplanetary coronal mass ejection (ICME) propagating from the solar corona to 1 AU. In conjunction with its high speed, the ICME evolves in ways that give it a unique appearance at 1 AU that does not resemble a typical ICME. First, as the ICME decelerates in the solar wind, filament material at the back of the flux rope pushes its way forward through the flux rope. Second, diverging nonradial flows in front of the filament transport azimuthal flux of the rope to the sides of the ICME. Third, the magnetic flux rope reconnects with the interplanetary magnetic field (IMF). As a consequence of these processes, the flux rope partially unravels and appears to evolve to an entirely open configuration near its nose. At the same time, filament material at the base of the flux rope moves forward and comes into direct contact with the shocked plasma in the CME sheath. We find evidence that such remarkable behavior occurred in a very fast CME that erupted from the Sun on 2005 January 20. In situ observations of this event near 1 AU show very dense, cold material impacting the Earth immediately behind the CME sheath. Charge state analysis shows this dense plasma is filament material, and analysis of SMEI data provides the trajectory of this dense plasma from the Sun. Consistent with the simulation, we find the azimuthal flux (Bz) to be entirely unbalanced, giving the appearance that the flux rope has completely eroded on the anti-sunward side.
Petrov, Nikolay V; Pavlov, Pavel V; Malov, A N
2013-06-30
Using the equations of scalar diffraction theory, we consider the formation of an optical vortex on a diffractive optical element. Algorithms are proposed for simulating the propagation of spiral wavefronts in free space and their reflection from surfaces with different roughness parameters. The approach is illustrated by the results of numerical simulations.
Propagation of calibration errors in prospective motion correction using external tracking.
Zahneisen, Benjamin; Keating, Brian; Ernst, Thomas
2014-08-01
Prospective motion correction of MRI scans using an external tracking device (such as a camera) is becoming increasingly popular, especially for imaging of the head. In order for external tracking data to be transformed into the MR scanner reference frame, the pose (i.e., position and orientation) of the camera relative to the scanner, known as the cross-calibration, must be accurate. In this study, we investigated how errors in cross-calibration affect the accuracy of motion correction feedback in MRI. An operator equation is derived describing how calibration errors relate to errors in the applied motion compensation. By taking advantage of spherical symmetry and performing a Taylor approximation for small rotation angles, a closed-form expression and upper limit for the residual tracking error are provided. Experiments confirmed the theoretical predictions of a bilinear dependence of the residual rotational component on the calibration error and the motion performed, modulated by a sinusoidal dependence on the angle between the calibration error axis and the motion axis. The residual translation error is bounded by the sum of the rotation angle multiplied by the translational calibration error plus the linear head displacement multiplied by the calibration error angle. The results make it possible to calculate the required cross-calibration accuracy for external tracking devices for a range of motions. Scans with smaller expected movements require less accuracy in cross-calibration than scans involving larger movements. Typical clinical applications require that the calibration accuracy be substantially below 1 mm and 1°. Copyright © 2013 Wiley Periodicals, Inc.
Analysis of errors occurring in large eddy simulation.
Geurts, Bernard J
2009-07-28
We analyse the effect of second- and fourth-order accurate central finite-volume discretizations on the outcome of large eddy simulations of homogeneous, isotropic, decaying turbulence at an initial Taylor-Reynolds number Re(lambda)=100. We determine the implicit filter that is induced by the spatial discretization and show that a higher order discretization also induces a higher order filter, i.e. a low-pass filter that keeps a wider range of flow scales virtually unchanged. The effectiveness of the implicit filtering is correlated with the optimal refinement strategy as observed in an error-landscape analysis based on Smagorinsky's subfilter model. As a point of reference, a finite-volume method that is second-order accurate for both the convective and the viscous fluxes in the Navier-Stokes equations is used. We observe that changing to a fourth-order accurate convective discretization leads to a higher value of the Smagorinsky coefficient C(S) required to achieve minimal total error at given resolution. Conversely, changing only the viscous flux discretization to fourth-order accuracy implies that optimal simulation results are obtained at lower values of C(S). Finally, a fully fourth-order discretization yields an optimal C(S) that is slightly lower than the reference fully second-order method.
Van Niel, Kimberly P; Austin, Mike P
2007-01-01
The effect of digital elevation model (DEM) error on environmental variables, and subsequently on predictive habitat models, has not been explored. Based on an error analysis of a DEM, multiple error realizations of the DEM were created and used to develop both direct and indirect environmental variables for input to predictive habitat models. The study explores the effects of DEM error and the resultant uncertainty of results on typical steps in the modeling procedure for prediction of vegetation species presence/absence. Results indicate that all of these steps and results, including the statistical significance of environmental variables, shapes of species response curves in generalized additive models (GAMs), stepwise model selection, coefficients and standard errors for generalized linear models (GLMs), prediction accuracy (Cohen's kappa and AUC), and spatial extent of predictions, were greatly affected by this type of error. Error in the DEM can affect the reliability of interpretations of model results and level of accuracy in predictions, as well as the spatial extent of the predictions. We suggest that the sensitivity of DEM-derived environmental variables to error in the DEM should be considered before including them in the modeling processes.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But an explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula gives rise to a new family of protocols, the private father protocol, under the resource-inequality framework, which includes private classical communication without assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potential δa = e δA_w/(m_e c²), it then follows that the correlation of the normalized vector-potential errors is given by ⟨δa_x(z₁) δa_x(z₂)⟩ = (a_w² k_w²/B_w²) ∫ dz′ ∫ dz″ ⟨δB_w(z′) δB_w(z″)⟩. Throughout the following, terms of order O(z_c/z) will be neglected. Similarly for the y-component of the normalized vector-potential errors.
Evaluation of color error and noise on simulated images
NASA Astrophysics Data System (ADS)
Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle
2010-01-01
The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted Quantum Efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation and signal-to-noise ratio (SNR). After this color correction optimization step, a Graphical User Interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known, a multi-spectral image, described by the reflectance spectrum of each pixel, or an image taken at high light level. A validation of the results has been performed with ST sensors under development. Finally, we present two applications: one based on the trade-off between color saturation and noise obtained by optimizing the CCM, and the other based on demosaicking SNR trade-offs.
NASA Astrophysics Data System (ADS)
Arahata, E.; Nikuni, T.
2013-05-01
We study sound propagation in a Bose-condensed gas confined in a highly elongated harmonic trap at finite temperatures. Our analysis is based on the Zaremba-Nikuni-Griffin (ZNG) formalism, which consists of a Gross-Pitaevskii equation for the condensate and a kinetic equation for the thermal cloud. We extend the ZNG formalism to deal with a highly anisotropic trap potential and use it to simulate sound propagation in the two-fluid hydrodynamic regime. We use the trap parameters of the experiment that reported second-sound propagation. Our simulation results show that the propagation of two sound pulses, corresponding to first and second sound, can be observed at intermediate temperatures.
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
The numerical robustness of four generally applicable, recursive least-squares estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practical, interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.
Assesment of SIRGAS Ionospheric Maps errors based on a numerical simulation
NASA Astrophysics Data System (ADS)
Brunini, Claudio; Emilio, Camilion; Francisco, Azpilicueta
2010-05-01
SIRGAS (Sistema de Referencia Geocéntrico para las Américas) is responsible for the densification of the International Terrestrial Reference Frame in Latin America and the Caribbean, which is realized and maintained by means of a continuously operating GNSS network with more than 200 receivers. Besides, SIRGAS uses this network for computing regional maps of the vertical Total Electron Content (TEC), which are released to the community through the SIRGAS web page (www.sirgas.org). As other similar products (e.g., the Global Ionospheric Maps (GIM) computed by the International GNSS Service), SIRGAS Ionospheric Maps (SIM) are based on a thin-layer ionospheric model, in which the whole ionosphere is represented by one spherical layer of infinitesimal thickness and equivalent vertical TEC, located at a fixed height above the Earth's surface (typically between 350 and 450 km). This contribution aims to characterize the errors introduced in the thin-layer ionospheric model by the use of a fixed and, sometimes, inappropriate ionospheric layer height. Particular attention is paid to the propagation of these errors to the estimation of the vertical TEC and to the estimation of the GNSS satellite and receiver Inter-Frequency Biases (IFB). The work relies upon a numerical simulation performed with an empirical model of the Earth's ionosphere, which allows creating a realistic but controlled ionospheric scenario, and then evaluates the errors that are produced when the thin-layer model is used to reproduce those ionospheric scenarios. The error assessment is performed for Central America and the northern part of the South American continent, where the largest errors are expected because of the combined actions of the Appleton anomaly of the ionosphere and the South Atlantic anomaly of the geomagnetic field.
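The height sensitivity described above can be illustrated with the standard single-layer mapping function used by thin-shell models; a small sketch assuming the usual thin-shell geometry (the TEC value and angles are illustrative, not SIRGAS data):

```python
import math

R_E = 6371.0  # mean Earth radius, km

def slant_to_vertical(stec, zenith_deg, layer_height_km):
    """Standard single-layer mapping: vTEC = sTEC * cos(z'), where z' is the
    zenith angle at the ionospheric pierce point for the assumed shell height."""
    z = math.radians(zenith_deg)
    sin_zp = R_E / (R_E + layer_height_km) * math.sin(z)
    return stec * math.sqrt(1.0 - sin_zp**2)

# Error induced by an inappropriate fixed shell height: convert the same
# 100 TECU slant observation at 70 deg zenith angle with a 350 km shell
# versus a "true" 450 km shell.
v350 = slant_to_vertical(100.0, 70.0, 350.0)
v450 = slant_to_vertical(100.0, 70.0, 450.0)
height_error_tecu = v450 - v350
```

At low elevations the 100 km height mismatch shifts the inferred vertical TEC by a few TECU, which is the kind of error the simulation in the abstract quantifies for the study region.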
Numerical Simulation of Ion Rings and Ion Beam Propagation.
NASA Astrophysics Data System (ADS)
Mankofsky, Alan
processes are included. CIDER has been used to simulate the propagation of an intense ion beam through a z-pinch plasma channel. We find that transport efficiencies in the 75% range are possible using 5-7 MeV, 1-2 MA proton beams with initial divergences in the range 1.5°-7° in a 4 m hydrogen channel. Current neutralization in excess of 99% is found, and no gross axisymmetric instabilities are observed.
Modeling and Simulation for Realistic Propagation Environments of Communications Signals at SHF Band
NASA Technical Reports Server (NTRS)
Ho, Christian
2005-01-01
In this article, most of the widely accepted radio wave propagation models that have proven to be accurate in practice, as well as numerically efficient, at SHF band will be reviewed. Weather and terrain data along the signal paths can be input in order to more accurately simulate the propagation environments under particular weather and terrain conditions. Radio signal degradation and communications impairment severity will be investigated through a realistic radio propagation channel simulator. Three types of simulation approaches for predicting signal behavior are classified: deterministic, stochastic and attenuation map. The performance of the simulation can be evaluated under operating conditions for the test ranges of interest. Demonstration tests of a real-time propagation channel simulator will show the capabilities and limitations of the simulation tool and underlying models.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.
NASA Astrophysics Data System (ADS)
Privé, N. C.; Errico, R. M.; Tai, K.-S.
2013-06-01
The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
PLASIM: A computer code for simulating charge exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.
1982-01-01
The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Simulation of reactive nanolaminates using reduced models: II. Normal propagation
Salloum, Maher; Knio, Omar M.
2010-03-15
Transient normal flame propagation in reactive Ni/Al multilayers is analyzed computationally. Two approaches are implemented, based on generalization of earlier methodology developed for axial propagation, and on extension of the model reduction formalism introduced in Part I. In both cases, the formulation accommodates non-uniform layering as well as the presence of inert layers. The equations of motion for the reactive system are integrated using a specially tailored integration scheme that combines extended-stability Runge-Kutta-Chebyshev (RKC) integration of diffusion terms with exact treatment of the chemical source term. The detailed and reduced models are first applied to the analysis of self-propagating fronts in uniformly layered materials. Results indicate that both the front velocities and the ignition threshold are comparable for normal and axial propagation. Attention is then focused on analyzing the effect of a gap composed of inert material on reaction propagation. In particular, the impacts of gap width and thermal conductivity are briefly addressed. Finally, an example is considered illustrating reaction propagation in reactive composites combining regions corresponding to two bilayer widths. This setup is used to analyze the effect of the layering frequency on the velocity of the corresponding reaction fronts. In all cases considered, good agreement is observed between the predictions of the detailed model and the reduced model, which provides further support for adoption of the latter.
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2014-05-01
Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g., no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise the CPD-computed probabilities will decrease. We therefore advise against using CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95% confidence intervals of the estimates still overlap the true death time interval.
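The CPD idea of conditioning on the externally known interval can be sketched with a Gaussian death-time estimate truncated to the true death time interval; the Gaussian model and all numbers below are our assumptions for illustration, not the authors' exact formulation:

```python
import math

def norm_cdf(x, mu, sigma):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cpd_probability(a, b, t_lo, t_hi, t_est, sigma):
    """Conditional probability that death occurred in [a, b], given that it
    lies in the externally known interval [t_lo, t_hi] (last seen alive /
    found dead), under an assumed Gaussian death-time estimate t_est +/- sigma."""
    denom = norm_cdf(t_hi, t_est, sigma) - norm_cdf(t_lo, t_est, sigma)
    num = norm_cdf(min(b, t_hi), t_est, sigma) - norm_cdf(max(a, t_lo), t_est, sigma)
    return num / denom

# Hours relative to the time the body was found: true interval [-20, 0],
# suspect's no-alibi interval [-6, -4], temperature-based estimate -10 h
# with a 2.8 h standard deviation (illustrative values).
p = cpd_probability(-6.0, -4.0, -20.0, 0.0, -10.0, 2.8)
```

Shifting `t_est` (the "input deviation" of the abstract) and recomputing `p` makes the sensitivity of the conditional probability to estimation error directly visible.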
Molecular-level Simulations of Shock Generation and Propagation in Polyurea
2011-01-26
M. Grujicic, B. Pandurangan. Keywords: polyurea; shock-wave generation and propagation; molecular-level calculations. A non-equilibrium molecular dynamics method is employed in order to study various phenomena accompanying the generation and propagation of shock waves in polyurea (a micro-phase segregated elastomer). Several
Mekid, Samir; Vacharanukul, Ketsaya
2006-01-01
To achieve dynamic error compensation in CNC machine tools, a non-contact laser probe capable of dimensional measurement of a workpiece while it is being machined has been developed and is presented in this paper. The measurements are automatically fed back to the machine controller for intelligent error compensation. Based on a well-resolved laser Doppler technique and real-time data acquisition, the probe delivers a very promising dimensional accuracy of a few microns over a range of 100 mm. The developed optical measuring apparatus employs a differential laser Doppler arrangement allowing acquisition of information from the workpiece surface. In addition, the measurements are traceable to standards of frequency, allowing higher precision.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Molecular dynamics simulation of the burning front propagation in PETN
NASA Astrophysics Data System (ADS)
Yanilkin, A. V.; Sergeev, O. V.
2014-05-01
One of the models of detonation development in condensed explosives under shock loading is the concept of "hot spots." According to this model, the reaction initially starts at various defects and inhomogeneities, where energy is localized during shock wave propagation. In such a region the reaction may start and a heat flux sufficient for the ignition of the adjacent layers of matter may be formed. If the reaction propagates fast enough, the merging of the burning fronts from several hot spots may lead to detonation. There is therefore interest in determining the burning propagation rate from a hot spot under various conditions. In this work, we investigate the propagation of a plane burning front from an initially heated layer in a PETN single crystal using the molecular dynamics method with the reactive force field ReaxFF. The burning rate depends on the direction in the crystal. The kinetics of the chemical transformations is considered. The dependence of the burning front propagation rate along the [100] direction on the external pressure is calculated in the range from normal pressure to 30 GPa; it is shown to grow linearly over this range, from 50 m/s to 320 m/s. The results are compared with data from experiments and quantum chemical calculations.
Control and alignment of segmented-mirror telescopes: matrices, modes, and error propagation.
Chanan, Gary; MacMartin, Douglas G; Nelson, Jerry; Mast, Terry
2004-02-20
Starting from the successful Keck telescope design, we construct and analyze the control matrix for the active control system of the primary mirror of a generalized segmented-mirror telescope, with up to 1000 segments and including an alternative sensor geometry to the one used at Keck. In particular we examine the noise propagation of the matrix and its consequences for both seeing-limited and diffraction-limited observations. The associated problem of optical alignment of such a primary mirror is also analyzed in terms of the distinct but related matrices that govern this latter problem.
function. Key Words and Phrases: parametric estimation, exponential families, nonlinear models, nonlinear least squares, neural networks, Monte Carlo simulation, computer-intensive statistical methods.
NASA Astrophysics Data System (ADS)
Prive, N.; Errico, R. M.; Tai, K.
2012-12-01
A global observing system simulation experiment (OSSE) has been developed at the NASA Global Modeling and Assimilation Office using the Global Earth Observing System (GEOS-5) forecast model and Gridpoint Statistical Interpolation data assimilation. A 13-month integration of the European Centre for Medium-Range Weather Forecasts operational forecast model is used as the Nature Run. Synthetic observations for conventional and radiance data types are interpolated from the Nature Run, with calibrated observation errors added to reproduce realistic statistics of analysis increment and observation innovation. It is found that correlated observation errors are necessary in order to replicate the statistics of analysis increment and observation innovation found with real data. The impact of these observation errors is explored in a series of OSSE experiments in which the magnitude of the applied observation error is varied from zero to double the calibrated values while the observation error covariances of the GSI are held fixed. Increased observation error has a strong effect on the variance of the analysis increment and observation innovation fields, but a much weaker impact on the root mean square (RMS) analysis error. For the 120 hour forecast, only slight degradation of forecast skill in terms of anomaly correlation and RMS forecast error is observed in the midlatitudes, and there is no appreciable impact of observation error on forecast skill in the tropics.
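The experiment design, applied observation error scaled from zero to double the calibrated value while the assimilation's assumed error statistics stay fixed, can be mimicked in a one-variable toy (all values are illustrative, not GEOS-5/GSI statistics):

```python
import numpy as np

rng = np.random.default_rng(7)

# Scalar sketch of the OSSE experiment design: synthetic observations get an
# applied error scaled from 0x to 2x the calibrated value, while the analysis
# keeps the gain implied by the calibrated (unscaled) error, mimicking a fixed
# assimilation error covariance.
n = 200_000
sd_b = 1.0                              # background error standard deviation
sd_o = 1.0                              # calibrated observation error standard deviation
gain = sd_b**2 / (sd_b**2 + sd_o**2)    # fixed Kalman-style weight

truth = np.zeros(n)
background = truth + sd_b * rng.standard_normal(n)

results = {}
for scale in (0.0, 1.0, 2.0):
    obs = truth + scale * sd_o * rng.standard_normal(n)
    increment = gain * (obs - background)
    analysis = background + increment
    results[scale] = (increment.var(), np.sqrt(np.mean((analysis - truth) ** 2)))
```

Even this toy reproduces the strong growth of the increment variance with applied error; in the full multivariate system the analysis and forecast error response is further damped by the many other observations, consistent with the abstract's findings.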
Statistical error in simulations of Poisson processes: Example of diffusion in solids
NASA Astrophysics Data System (ADS)
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in the ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
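The 1/√N scaling that underlies such error expressions can be checked numerically by simulating the Poisson counting process directly (a generic sketch of the statistics, not the paper's conductivity-specific formula):

```python
import random
import statistics

random.seed(42)

def relative_error(n_events_mean, n_runs=2000):
    """Simulate n_runs independent Poisson counters by summing exponential
    waiting times up to a fixed observation time, and return the observed
    relative standard deviation of the event count."""
    counts = []
    for _ in range(n_runs):
        t, n = 0.0, 0
        while True:
            t += random.expovariate(1.0)   # unit-rate waiting times
            if t > n_events_mean:
                break
            n += 1
        counts.append(n)
    return statistics.stdev(counts) / statistics.mean(counts)

err_100 = relative_error(100.0)   # expect roughly 1/sqrt(100) = 0.10
err_400 = relative_error(400.0)   # expect roughly 1/sqrt(400) = 0.05
```

Quadrupling the mean number of recorded events halves the relative statistical error, which is why simulations with few diffusion events carry large uncertainties.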
Error propagation in the numerical solutions of the differential equations of orbital mechanics
NASA Technical Reports Server (NTRS)
Bond, V. R.
1982-01-01
The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell formulation, the Encke formulation, and the Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast, an element formulation has zero eigenvalues and is numerically stable.
Numerical simulation of impurity propagation in sea channels
NASA Astrophysics Data System (ADS)
Cherniy, Dmitro; Dovgiy, Stanislav; Gourjii, Alexandre
2009-11-01
The building of a dike (2003) in the Kerch channel (between the Black and Azov seas) from the Taman peninsula is an example of technological influence on the fluid flow and hydrological conditions in the channel. A twofold increase of the flow velocity in the fairway region results in the appearance of dangerous tendencies in the hydrology of the Kerch channel. The flow near the coastal edges generates large-scale vortices, which move along the channel. The shipwreck (November 11, 2007) of the tanker ``Volganeft-139'' in the Kerch channel resulted in an ecological catastrophe in the region: more than 1300 tons of petroleum appeared on the sea surface. The intensive vortices formed here involve part of the impurity region in their own motion. The boundary of the impurity region is deformed, stretched, and covers the central part of the channel. Adapting the vortex singularity method to impurity propagation in the Kerch channel and analyzing the pollution propagation are the main goals of the report.
Development of a web-based simulator for estimating motion errors in linear motion stages
NASA Astrophysics Data System (ADS)
Khim, G.; Oh, J.-S.; Park, C.-H.
2017-08-01
This paper presents a web-based simulator for estimating 5-DOF motion errors in linear motion stages. The main calculation modules of the simulator are stored on the server computer. Clients use the client software to send input parameters to the server and receive the computed results from the server. By using the simulator, we can predict performance figures such as the 5-DOF motion errors and the bearing and table stiffness by entering the design parameters at the design step, before fabricating the stages. Motion errors are calculated using the transfer function method from the rail form errors, which are the most dominant factor in the motion errors. To verify the simulator, the predicted motion errors are compared to the actually measured motion errors of a linear motion stage.
NASA Astrophysics Data System (ADS)
Messerly, Richard A.; Knotts, Thomas A.; Wilding, W. Vincent
2017-05-01
Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty to quantify the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation.
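The parameter-to-property propagation described above can be sketched by Monte Carlo sampling of the LJ parameters; the linear ε-to-Tc relation and all numbers below are toy stand-ins for the Gibbs Ensemble simulations, chosen only to illustrate the propagation step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample the LJ epsilon/sigma for a united-atom site from (assumed) Gaussian
# uncertainties and propagate through a simple model linking the parameters
# to critical constants. The relations Tc ~ c*eps and rho_c ~ sigma^-3 are
# illustrative stand-ins, and the means/uncertainties are made-up values.
eps_mean, eps_sd = 98.0, 2.0        # epsilon/kB in K for a CH2-like site
sigma_mean, sigma_sd = 3.95, 0.02   # sigma in Angstrom

n = 100_000
eps = rng.normal(eps_mean, eps_sd, n)
sig = rng.normal(sigma_mean, sigma_sd, n)

c = 1.31                    # toy proportionality constant
tc = c * eps                # propagated critical-temperature samples, K
rho_c = 0.31 / sig**3       # toy critical-density samples (reduced units)

tc_mean, tc_sd = tc.mean(), tc.std()
```

In this linear toy the relative uncertainty in ε carries over directly to Tc; in the actual workflow the same sampling is fed through the Gibbs Ensemble simulations to obtain the property uncertainties.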
Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments
Kuruganti, Phani Teja
2007-01-01
As the network-centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site-specific predictions of radio channel behavior. In this paper, an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.
Investigation of Radar Propagation in Buildings: A 10 Billion Element Cartesian-Mesh FETD Simulation
Stowell, M L; Fasenfest, B J; White, D A
2008-01-14
In this paper, large-scale full-wave simulations are performed to investigate radar wave propagation inside buildings. In principle, a radar system combined with sophisticated numerical methods for inverse problems can be used to determine the internal structure of a building. The composition of the walls (cinder block, re-bar) may affect the propagation of the radar waves in a complicated manner. In order to provide a benchmark solution of radar propagation in buildings, including the effects of typical cinder block and re-bar, we performed large-scale full-wave simulations using a Finite Element Time Domain (FETD) method. This particular FETD implementation is tuned for the special case of an orthogonal Cartesian mesh and hence resembles FDTD in accuracy and efficiency. The method was implemented on a general-purpose massively parallel computer. In this paper we briefly describe the radar propagation problem and the FETD implementation, and we present results of simulations that used over 10 billion elements.
End-to-End Network Simulation Using a Site-Specific Radio Wave Propagation Model
Djouadi, Seddik M; Kuruganti, Phani Teja; Nutaro, James J
2013-01-01
The performance of systems that rely on a wireless network depends on the propagation environment in which that network operates. To predict how these systems and their supporting networks will perform, simulations must take into consideration the propagation environment and how it affects the performance of the wireless network. Network simulators typically use empirical models of the propagation environment. However, these models are not intended for, and cannot be used for, predicting how a wireless system will perform in a specific location, e.g., in the center of a particular city or the interior of a specific manufacturing facility. In this paper, we demonstrate how a site-specific propagation model and the NS3 simulator can be used to predict the end-to-end performance of a wireless network.
Coherent-wave Monte Carlo method for simulating light propagation in tissue
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g., finite-difference or finite-element methods) require a large amount of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows simulation only of light propagation averaged over the ensemble of turbid medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g., a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Sun, Tong; Fan, Junyi; Goodchild, Michael F.; Shi, Wenzhong
2013-04-01
This paper presents a new error band model, the statistical simulation error model, for describing the positional error of line features by incorporating both analytical and simulation methods. In this study, line features include line segments, polylines, and polygons. In existing error models, an infinite number of points on the line segment are considered as the stochastic variables and the error band of a line segment is obtained from the union of all intermediate points on the line segment, while that of a polyline/polygon is obtained from the union of all error bands of the composite line segments. Our proposed error band model, however, regards the entire line feature (line segment/polyline/polygon) as the stochastic variable, instead of the infinite number of points on the line segment. Based solely on the statistical characteristics of the endpoints of the line feature and the predefined confidence level, our proposed error model is created by a simulation method that integrates a population of line segments/polylines/polygons computed from the entire solution set of the error model's defining equation. A comprehensive comparison of the proposed and existing error band models is carried out through both simulated and practical experiments. The experimental results show the following: (1) For line segments, the proposed standard statistically simulated error band matches that of existing error models (for example, the G-band). Further, it is found that a scaled G-band with a specific scale factor (e.g., √(χ₄²(α))) matches the proposed statistically simulated error band with probability (1 - α) × 100%. (2) For polylines and polygons, if we correlate the errors of all the endpoints of the polyline/polygon, there is a marked difference between the proposed statistically simulated error band and existing error bands. The reason for the difference is explained as follows. The existing error model defines the error band of a polyline/polygon as the union of
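The simulation idea of treating the whole segment as the stochastic variable can be illustrated numerically: sample both endpoints jointly from their error distributions, then measure the spread of the sampled segments around the nominal one. The endpoint errors here are assumed independent and isotropic, and the coordinates are hypothetical:

```python
# Sketch of a simulated error band for one line segment: the segment (not its
# individual points) is the random object. Endpoint errors are assumed
# independent, isotropic Gaussians; the coordinates are made up.
import numpy as np

rng = np.random.default_rng(0)

p1 = np.array([0.0, 0.0])       # nominal endpoints (hypothetical)
p2 = np.array([10.0, 0.0])
sigma = 0.5                      # endpoint standard error

n_sim, n_u = 2000, 21
u = np.linspace(0.0, 1.0, n_u)   # parameter along the segment

# Each realization perturbs both endpoints of the whole segment.
e1 = rng.normal(0.0, sigma, size=(n_sim, 2))
e2 = rng.normal(0.0, sigma, size=(n_sim, 2))
pts = ((p1 + e1)[:, None, :] * (1 - u)[None, :, None]
       + (p2 + e2)[:, None, :] * u[None, :, None])   # (n_sim, n_u, 2)

nominal = p1 * (1 - u)[:, None] + p2 * u[:, None]
dist = np.linalg.norm(pts - nominal[None, :, :], axis=2)

# 95% envelope half-width at each position along the segment
half_width = np.quantile(dist, 0.95, axis=0)
print(half_width[0], half_width[n_u // 2], half_width[-1])
```

Because the variance of an interpolated point scales as (1-u)² + u², the simulated band is widest at the endpoints and narrowest at the midpoint, the familiar "bone" shape of segment error bands.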
Whistler propagation in ionospheric density ducts: Simulations and DEMETER observations
NASA Astrophysics Data System (ADS)
Woodroffe, J. R.; Streltsov, A. V.; Vartanyan, A.; Milikh, G. M.
2013-11-01
On 16 October 2009, the Detection of Electromagnetic Emissions Transmitted from Earthquake Regions (DEMETER) satellite observed VLF whistler wave activity coincident with an ionospheric heating experiment conducted at HAARP. At the same time, density measurements by DEMETER indicated the presence of multiple field-aligned enhancements. Using an electron MHD model, we show that the distribution of VLF power observed by DEMETER is consistent with the propagation of whistlers from the heating region inside the observed density enhancements. We also discuss other interesting features of this event, including coupling of the lower hybrid and whistler modes, whistler trapping in artificial density ducts, and the interference of whistler waves from two adjacent ducts.
Simulation of Ductile Crack Propagation for Pipe Structures Using X-FEM
NASA Astrophysics Data System (ADS)
Miura, Naoki; Nagashima, Toshio
The conventional finite element method is commonly used for the flaw evaluation of pipe structures to investigate the fitness-for-service of power plant components; however, it is generally time consuming to build a model of a specific crack configuration. The problem is further accentuated when considering a propagating surface crack, since the crack propagation behavior along the crack front is implicitly affected by the distribution of the crack driving force along the crack front. The authors developed a system to conduct crack propagation analysis by means of the three-dimensional elastic-plastic extended finite element method. It was applied to simulate ductile crack propagation of circumferential surface cracks in pipe structures and realized the simultaneous calculation of the J-integral and the consequent ductile crack propagation. Both the crack extension and the possible change of crack shape were evaluated by the developed system.
Revised error propagation of 40Ar/39Ar data, including covariances
NASA Astrophysics Data System (ADS)
Vermeesch, Pieter
2015-12-01
The main advantage of the 40Ar/39Ar method over conventional K-Ar dating is that it does not depend on any absolute abundance or concentration measurements, but only uses the relative ratios between five isotopes of the same element (argon), which can be measured with great precision on a noble gas mass spectrometer. The relative abundances of the argon isotopes are subject to a constant sum constraint, which imposes a covariant structure on the data: the relative amount of any of the five isotopes can always be obtained from that of the other four. Thus, the 40Ar/39Ar method is a classic example of a 'compositional data problem'. In addition to the constant sum constraint, covariances are introduced by a host of other processes, including data acquisition, blank correction, detector calibration, mass fractionation, decay correction, interference correction, atmospheric argon correction, interpolation of the irradiation parameter, and age calculation. The myriad correlated errors arising during data reduction are best handled by casting the 40Ar/39Ar data reduction protocol in matrix form. The completely revised workflow presented in this paper is implemented in a new software platform, Ar-Ar_Redux, which takes raw mass spectrometer data as input and generates accurate 40Ar/39Ar ages and their (co-)variances as output. Ar-Ar_Redux accounts for all sources of analytical uncertainty, including those associated with decay constants and the air ratio. Knowing the covariance matrix of the ages removes the need to consider 'internal' and 'external' uncertainties separately when calculating (weighted) mean ages. Ar-Ar_Redux is built on the same principles as its sibling program in the U-Pb community (U-Pb_Redux), thus improving the intercomparability of the two methods with tangible benefits to the accuracy of the geologic time scale. The program can be downloaded free of charge from
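The matrix approach to correlated errors amounts to first-order covariance propagation, cov(y) ≈ J Σ Jᵀ for y = f(x), where J is the Jacobian of f and Σ the input covariance matrix. A generic numerical sketch, with made-up numbers rather than real argon data:

```python
# Hedged sketch of first-order (linear) error propagation with covariances:
# cov(y) ≈ J Σ Jᵀ. The ratio below is a generic illustration, not Ar-Ar_Redux.
import numpy as np

def propagate(f, x, cov, eps=1e-6):
    """Numerically propagate input covariance `cov` through `f` via a
    central-difference Jacobian."""
    x = np.asarray(x, dtype=float)
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps * max(1.0, abs(x[j]))
        J[:, j] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * dx[j])
    return y0, J @ np.asarray(cov) @ J.T

# Ratio of two correlated signals (hypothetical numerator/denominator means):
x = np.array([120.0, 40.0])
cov = np.array([[4.0, 1.5],      # variances with a positive covariance term
                [1.5, 1.0]])
y, covy = propagate(lambda v: v[0] / v[1], x, cov)
print(y[0], covy[0, 0])
```

For this ratio the analytic first-order variance is (1/b)²σₐ² + (a/b²)²σᵦ² − 2(a/b³)σₐᵦ; the positive covariance cancels much of the uncorrelated error, which is exactly the effect the abstract argues must not be ignored.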
NASA Astrophysics Data System (ADS)
Mizokami, Naoya; Nakahata, Kazuyuki; Ogi, Keiji; Yamawaki, Hisashi; Shiwa, Mitsuharu
2017-02-01
The use of fiber reinforced plastics (FRPs) as structural components has significantly increased in recent years. FRPs are made of stacks of plies, each of which is reinforced by fibers. When modeling ultrasonic wave propagation in FRPs, it is important to introduce three-dimensional mesoscopic and microscopic structures to account for the anisotropy and heterogeneity caused by fiber orientation and the lay-up of laminates. In this study, a finite element method using image-based modeling is applied to the simulation of ultrasonic wave propagation in a carbon FRP (CFRP). Here, the elastic stiffness of a single ply is determined using a homogenization method, where the CFRP microstructure is incorporated on the basis of a two-scale asymptotic expansion. The wave propagation in a CFRP specimen composed of unidirectionally aligned fibers is calculated, and the simulation results are compared to visualization results obtained for ultrasonic wave propagation using a laser scanning device.
The Simulation of Off-Axis Laser Propagation Using Heleeos
2006-03-01
Lasers have many different uses and can be found in much of today's new technology. They are used in DVD players, CD players, builder's leveling
Fencil, L E; Metz, C E
1990-01-01
We are developing a technique for determination of the three-dimensional (3-D) structure of vascular objects from two radiographic projection images acquired at arbitrary and unknown relative orientations. No separate calibration steps are required with this method, which exploits an inherent redundancy of biplane imaging to extract the imaging geometry as well as the 3-D locations of eight or more object points. The theoretical basis of this technique has been described previously. In this paper, we review the method from the perspective of linear algebra and describe an improvement, not heretofore reported, that reduces the method's sensitivity to experimental error. We then examine the feasibility and inherent accuracy of this approach by computer simulation of biplane imaging experiments. The precision with which 3-D object structure may be retrieved, together with the dependence of precision on the actual imaging geometry and errors in various measured quantities, is studied in detail. Our simulation studies show that the method is not only feasible but potentially accurate, typically determining object-point configurations with root-mean-square (RMS) error on the order of 1 to 2 mm. The method is also quite fast, requiring approximately one second of CPU time on a VAX 11/750 computer (0.6 MIPS).
Simulation-based reasoning about the physical propagation of fault effects
NASA Technical Reports Server (NTRS)
Feyock, Stefan; Li, Dalu
1990-01-01
The research described deals with the effects of faults on complex physical systems, with particular emphasis on aircraft and spacecraft systems. Given that a malfunction has occurred and been diagnosed, the goal is to determine how that fault will propagate to other subsystems, and what the effects will be on vehicle functionality. In particular, the use of qualitative spatial simulation to determine the physical propagation of fault effects in 3-D space is described.
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6% respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.
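Attenuation in such through-transmission experiments is conventionally estimated from the amplitude spectral ratio of a reference signal and a signal that has passed through the sample. A sketch with synthetic waveforms standing in for measurements (the sampling rate, thickness, and attenuation value are assumed, and the sample signal is idealized as a pure amplitude loss):

```python
# Spectral-ratio estimate of attenuation from through-transmission signals.
# Synthetic waveforms stand in for measured data; all parameters are assumed.
import numpy as np

fs = 50e6                          # sampling rate [Hz]
t = np.arange(2048) / fs
f0 = 1.0e6                         # 1 MHz tone burst

ref = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 10e-6) / 4e-6) ** 2)
d = 0.01                           # sample thickness [m]
alpha_true = 100.0                 # attenuation [Np/m], made up
sample = ref * np.exp(-alpha_true * d)   # idealized: frequency-flat loss

spec_ref = np.abs(np.fft.rfft(ref))
spec_sam = np.abs(np.fft.rfft(sample))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Attenuation coefficient [Np/m] from the amplitude spectral ratio,
# averaged over the band around the burst's center frequency.
band = (freqs > 0.8e6) & (freqs < 1.2e6)
alpha = np.log(spec_ref[band] / spec_sam[band]) / d
print(alpha.mean())
```

In a real experiment the reference is a water-path-only measurement and the ratio is frequency dependent, which is how the 0.6-1.4 MHz attenuation curves above are obtained.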
Batista, R. Alves; Vliet, A. van; Boncioli, D.; Di Matteo, A.; Walz, D.
2015-10-01
The results of simulations of extragalactic propagation of ultra-high energy cosmic rays (UHECRs) have intrinsic uncertainties due to poorly known physical quantities and approximations used in the codes. We quantify the uncertainties in the simulated UHECR spectrum and composition due to different models of extragalactic background light (EBL), different photodisintegration setups, approximations concerning photopion production and the use of different simulation codes. We discuss the results for several representative source scenarios with proton, nitrogen or iron at injection. For this purpose we used SimProp and CRPropa, two publicly available codes for Monte Carlo simulations of UHECR propagation. CRPropa is a detailed and extensive simulation code, while SimProp aims to achieve acceptable results using a simpler code. We show that especially the choices for the EBL model and the photodisintegration setup can have a considerable impact on the simulated UHECR spectrum and composition.
FDTD Simulation on Terahertz Waves Propagation Through a Dusty Plasma
NASA Astrophysics Data System (ADS)
Wang, Maoyan; Zhang, Meng; Li, Guiping; Jiang, Baojun; Zhang, Xiaochuan; Xu, Jun
2016-08-01
The frequency-dependent permittivity for dusty plasmas is obtained by introducing the charging response factor and charge relaxation rate of airborne particles. The field equations that describe the characteristics of Terahertz (THz) wave propagation in a dusty plasma sheath are derived and discretized on the basis of the auxiliary differential equation (ADE) in the finite difference time domain (FDTD) method. Compared with numerical solutions in the reference, the accuracy of the ADE FDTD method is validated. The reflection property of the metallic aluminum interlayer of the sheath at THz frequencies is discussed. The effects of the thickness, effective collision frequency, airborne particle density, and charge relaxation rate of airborne particles on the electromagnetic properties of THz waves passing through a dusty plasma slab are investigated. Finally, some potential applications of THz waves in information and communication are analyzed. Supported by the National Natural Science Foundation of China (Nos. 41104097, 11504252, 61201007, 41304119), the Fundamental Research Funds for the Central Universities (Nos. ZYGX2015J039, ZYGX2015J041), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20120185120012)
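The ADE approach couples an auxiliary time-domain equation for the polarization current to the standard Yee updates. A one-dimensional sketch for a plain Drude slab, dJ/dt + νJ = ε₀ωₚ²E (parameter values are illustrative and omit the dust charging terms of the paper's model):

```python
# 1D ADE-FDTD sketch: Drude plasma slab via an auxiliary current equation.
# Illustrative parameters only; not the paper's dusty-plasma permittivity.
import numpy as np

c0, eps0, mu0 = 299792458.0, 8.8541878128e-12, 4e-7 * np.pi
nx, dx = 800, 10e-6               # 10 um cells, ~30 cells per 1 THz wavelength
dt = 0.5 * dx / c0                # Courant number 0.5, stable in 1D

wp = 2 * np.pi * 2.0e12           # plasma angular frequency, f_p = 2 THz (assumed)
nu = 1.0e11                       # collision frequency [1/s] (assumed)
plasma = np.zeros(nx, dtype=bool)
plasma[400:500] = True            # 1 mm plasma slab

Ez, Hy, Jz = (np.zeros(nx) for _ in range(3))
ka = (1 - nu * dt / 2) / (1 + nu * dt / 2)       # semi-implicit ADE factors
kb = eps0 * wp**2 * dt / (1 + nu * dt / 2)

inc_max = trans_max = 0.0
for n in range(1200):
    Hy[:-1] += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])
    Ez[1:] += dt / (eps0 * dx) * (Hy[1:] - Hy[:-1]) - dt / eps0 * Jz[1:]
    Jz[plasma] = ka * Jz[plasma] + kb * Ez[plasma]
    tn = n * dt                   # soft source: 1 THz pulse, below the 2 THz cutoff
    Ez[100] += np.exp(-(((tn - 4e-12) / 1e-12) ** 2)) * np.sin(2 * np.pi * 1.0e12 * tn)
    inc_max = max(inc_max, np.abs(Ez[150:380]).max())
    trans_max = max(trans_max, np.abs(Ez[520:780]).max())

print(inc_max, trans_max)         # below-cutoff wave is blocked by the slab
```

Because the 1 THz carrier lies below the 2 THz plasma frequency, the field behind the slab stays orders of magnitude below the incident field, the cutoff behavior that makes sheath transmission at THz frequencies interesting.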
Computer Simulation of Electrical Propagation in Cardiac Tissue
2001-10-25
model, i.e., cardiac action potential models with a Hodgkin-Huxley-like representation [2][3]. Each simulation model is a two-dimensional electric circuit network model in which many electric circuit models of the cardiac membrane action potential are connected by electric resistances. ... controlled the sodium current, I_Na, and the potassium current, I_K1, of the cardiac action potential models and investigated the obtained
Realization of State-Space Models for Wave Propagation Simulations
2012-01-01
turquoise). Verification and Performance of the Superstable Model: the second FDTD analysis was to verify that a simulation using the ... another, so that the turquoise and red lines are all that are visible. The implication is that model-order reduction can serve a useful purpose when
Accumulation of errors in numerical simulations of chemically reacting gas dynamics
NASA Astrophysics Data System (ADS)
Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.
2015-12-01
The aim of the present study is to investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, as was the accumulation of errors in simulations employing different strategies of computation.
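The random-walk character of round-off accumulation can be seen in a toy experiment: sequentially summing many values in single precision drifts from the double-precision result roughly like the square root of the number of operations, while a pairwise (tree) summation accumulates far less error. This is only an illustration of the scaling idea, not the paper's error estimate for gas-dynamic solvers:

```python
# Toy demonstration of stochastic round-off accumulation: naive sequential
# float32 summation vs numpy's pairwise float32 summation, both compared
# against a float64 reference.
import numpy as np

rng = np.random.default_rng(42)
x = rng.random(1_000_000)

naive32 = np.float32(0.0)
for v in x.astype(np.float32):     # sequential single-precision accumulation
    naive32 += v
exact = x.sum(dtype=np.float64)    # double-precision reference

err_naive = abs(float(naive32) - exact)
err_pairwise = abs(float(x.astype(np.float32).sum()) - exact)
print(err_naive, err_pairwise)
```

The same random-walk scaling is what makes long time-marching runs accumulate stochastic error with the number of time steps, which is why grid size and computation strategy both enter the precision analysis above.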
Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2011-01-01
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
A simulation study of the propagation of whistler-mode chorus in the Earth's inner magnetosphere
NASA Astrophysics Data System (ADS)
Katoh, Yuto
2014-12-01
We study the propagation of whistler-mode chorus in the magnetosphere using a spatially two-dimensional simulation code in dipole coordinates. We set up the simulation system to represent the region outside the plasmapause, corresponding to radial distances from 3.9 to 4.1 R E in the equatorial plane and latitudes from -15° to +15°, where R E is the Earth's radius. We assume a model chorus element propagating northward from the magnetic equator of the field line at L=4 with a rising tone from 0.2 to 0.7 Ω e0 on a time scale of 5,000 Ω e0⁻¹, where Ω e0 is the electron gyrofrequency at the magnetic equator. For the initial density distribution of cold electrons, we assume three types of initial conditions outside the plasmapause: without a duct (run 1), with a density enhancement duct (run 2), and with a density decrease duct (run 3). In run 1, the simulation result reveals that whistler-mode waves of different frequencies propagate along different ray paths in the region away from the magnetic equator. In runs 2 and 3, the model chorus element propagates inside the assumed duct with a changing wave normal angle. The simulation results show different propagation properties of the chorus element in runs 2 and 3 and reveal that the resultant wave spectra observed along the field line differ between the density enhancement and density decrease duct cases. The spectral modification of chorus by the propagation effect should play a significant role in the interactions between chorus and energetic electrons in the magnetosphere, particularly in the region away from the equator. The present study clarifies that the variation of propagation properties of chorus should be taken into account for a thorough understanding of resonant interactions of chorus with energetic electrons in the inner magnetosphere.
ITER Test Blanket Module Error Field Simulation Experiments
NASA Astrophysics Data System (ADS)
Schaffer, M. J.
2010-11-01
Recent experiments at DIII-D used an active-coil mock-up to investigate effects of magnetic error fields similar to those expected from two ferromagnetic Test Blanket Modules (TBMs) in one ITER equatorial port. The largest and most prevalent observed effect was plasma toroidal rotation slowing across the entire radial profile, up to 60% in H-mode when the mock-up local ripple at the plasma was ~4 times the local ripple expected in front of ITER TBMs. Analysis showed the slowing to be consistent with non-resonant braking by the mock-up field. There was no evidence of strong electromagnetic braking by resonant harmonics. These results are consistent with the near absence of resonant helical harmonics in the TBM field. Global particle and energy confinement in H-mode decreased by <20% for the maximum mock-up ripple, but <5% at the local ripple expected in ITER. These confinement reductions may be linked with the large velocity reductions. TBM field effects were small in L-mode but increased with plasma beta. The L-H power threshold was unaffected within error bars. The mock-up field increased plasma sensitivity to mode locking by a known n=1 test field (n = toroidal harmonic number). In H-mode the increased locking sensitivity was from TBM torque slowing plasma rotation. At low beta, locked mode tolerance was fully recovered by re-optimizing the conventional DIII-D "I-coils" empirical compensation of n=1 errors in the presence of the TBM mock-up field. Empirical error compensation in H-mode should be addressed in future experiments. Global loss of injected neutral beam fast ions was within error bars, but 1 MeV fusion triton loss may have increased. The many DIII-D mock-up results provide important benchmarks for models needed to predict effects of TBMs in ITER.
Monte Carlo simulations of intensity profiles for energetic particle propagation
NASA Astrophysics Data System (ADS)
Tautz, R. C.; Bolte, J.; Shalchi, A.
2016-02-01
Aims: Numerical test-particle simulations are a reliable and frequently used tool for testing analytical transport theories and predicting mean-free paths. The comparison between solutions of the diffusion equation and the particle flux is used to critically judge the applicability of diffusion to the stochastic transport of energetic particles in magnetized turbulence. Methods: A Monte Carlo simulation code is extended to allow for the generation of intensity profiles and anisotropy-time profiles. Because of the relatively low number density of computational particles, a kernel function has to be used to describe the spatial extent of each particle. Results: The obtained intensity profiles are interpreted as solutions of the diffusion equation by inserting the diffusion coefficients that have been directly determined from the mean-square displacements. The comparison shows that the time dependence of the diffusion coefficients needs to be considered, in particular the initial ballistic phase and the often subdiffusive perpendicular coefficient. Conclusions: It is argued that the perpendicular component of the distribution function is essential if agreement between the diffusion solution and the simulated flux is to be obtained. In addition, time-dependent diffusion can provide a better description than the classic diffusion equation only after the initial ballistic phase.
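The kernel idea can be sketched as follows: each computational particle is given a finite spatial extent, and the smoothed intensity profile is compared against the corresponding diffusion solution. The sketch below uses a 1D unit-step random walk with hypothetical parameters, not the full magnetized-turbulence setup of the paper:

```python
# Kernel-smoothed intensity profile from few particles vs the diffusion
# solution. 1D unit-step random walk; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_part, n_steps = 2000, 400
# unit-step 1D random walk; its continuum limit is diffusion with kappa = 1/2
pos = rng.choice([-1.0, 1.0], size=(n_part, n_steps)).sum(axis=1)

x = np.linspace(-120.0, 120.0, 241)
h = 6.0                                   # kernel bandwidth (a tuning choice)
U = np.exp(-0.5 * ((x[:, None] - pos[None, :]) / h) ** 2).sum(axis=1)
U /= n_part * h * np.sqrt(2.0 * np.pi)    # kernel-smoothed intensity estimate

kappa, t = 0.5, n_steps                   # diffusion solution for comparison
U_diff = np.exp(-x**2 / (4 * kappa * t)) / np.sqrt(4 * np.pi * kappa * t)
err = np.abs(U - U_diff).max()
print(err, U_diff.max())
```

With only a few thousand particles, a raw histogram would be noisy; the kernel trades a small smoothing bias (the effective variance grows by h²) for a much lower-variance profile, which is the point the abstract makes about low computational particle densities.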
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
Inland and Offshore Propagation Speeds of a Sea Breeze from Simulations and Measurements
NASA Astrophysics Data System (ADS)
Finkele, Klara
The inland and offshore propagation speeds of a sea breeze circulation cell are simulated using a three-dimensional hydrostatic model within a terrain-following coordinate system. The model includes a third-order semi-Lagrangian advection scheme, which compares well in a one-dimensional stand-alone test with the more complex Bott and Smolarkiewicz advection schemes. Two turbulence schemes are available: a local scheme by Louis (1979) and a modified non-local scheme based on Zhang and Anthes (1982). Both compare well with higher-order closure schemes using the Wangara data set for Day 33-34 (Clark et al., 1971).Two-dimensional cross-sections derived from airborne sea breeze measurements (Finkele et al. 1995) constitute the basis for comparison with two-dimensional numerical model results. The offshore sea breeze propagation speed is defined as the speed at which the seaward extent of the sea breeze grows offshore. On a study day, the offshore sea breeze propagation speed, from both measurements and model, is -3.4 m s-1. The measured inland propagation speed of the sea breeze decreased somewhat during the day. The model results show a fairly uniform inland propagation speed of 1.6 m s-1 which corresponds to the average measured value. The offshore sea breeze propagation speed is about twice the inland propagation speed for this particular case study, from both the model and measurements.
Simulations of Wave Propagation in the Jovian Atmosphere after SL9 Impact Events
NASA Astrophysics Data System (ADS)
Pond, Jarrad W.; Palotai, C.; Korycansky, D.; Harrington, J.
2013-10-01
Our previous numerical investigations into Jovian impacts, including the Shoemaker-Levy 9 (SL9) event (Korycansky et al. 2006 ApJ 646. 642; Palotai et al. 2011 ApJ 731. 3), the 2009 bolide (Pond et al. 2012 ApJ 745. 113), and the ephemeral flashes caused by smaller impactors in 2010 and 2012 (Hueso et al. 2013; Submitted to A&A), have covered only up to approximately 3 to 30 seconds after impact. Here, we present further SL9 impacts extending to minutes after collision with Jupiter's atmosphere, with a focus on the propagation of shock waves generated as a result of the impact events. Using a similar yet more efficient remapping method than previously presented (Pond et al. 2012; DPS 2012), we move our simulation results onto a larger computational grid, conserving quantities with minimal error. The Jovian atmosphere is extended as needed to accommodate the evolution of the features of the impact event. We restart the simulation, allowing the impact event to continue to progress to greater spatial extents and for longer times, but at lower resolutions. This remap-restart process can be implemented multiple times to achieve the spatial and temporal scales needed to investigate the observable effects of waves generated by the deposition of energy and momentum into the Jovian atmosphere by an SL9-like impactor. As before, we use the three-dimensional, parallel hydrodynamics code ZEUS-MP 2 (Hayes et al. 2006 ApJ.SS. 165. 188) to conduct our simulations. Wave characteristics are tracked throughout these simulations. Of particular interest are the wave speeds and wave positions in the atmosphere as a function of time. These properties are compared to the characteristics of the HST rings to see if shock wave behavior within one hour of impact is consistent with waves observed at one hour post-impact and beyond (Hammel et al. 1995 Science 267. 1288). This research was supported by National Science Foundation Grant AST-1109729 and NASA Planetary Atmospheres Program Grant
NASA Astrophysics Data System (ADS)
Jiang, Xianan
2017-01-01
As a prominent climate variability mode with widespread influences on global weather extremes, the Madden-Julian Oscillation (MJO) remains poorly represented in the latest generation of general circulation models (GCMs), with a particular challenge in simulating its eastward propagating convective signals. In this study, by analyzing multimodel simulations from a recent global MJO model evaluation project, an effort is made to identify key processes for the eastward propagation of the MJO through analyses of moisture entropy (ME) processes under a "moisture mode" framework for the MJO. The column-integrated horizontal ME advection is found to play a critical role for the eastward propagation of the MJO in both observations and good MJO models, with a primary contribution through advection of the lower tropospheric seasonal mean ME by the MJO anomalous circulations. By contrast, the horizontal ME advection effect for the eastward propagation is greatly underestimated in poor MJO GCMs, due to model deficiencies in simulating both the seasonal mean ME pattern and MJO circulations, leading to a largely stationary MJO mode in these GCMs. These results thus pinpoint an important guidance toward improved representation of the MJO in climate and weather forecast models. While this study mainly focuses on fundamental physics for the MJO propagation over the Indian Ocean, complex influences by the Maritime Continent on the MJO and also ME processes associated with the MJO over the western Pacific warrant further investigations.
Analysis of the statistical error in umbrella sampling simulations by umbrella integration
NASA Astrophysics Data System (ADS)
Kästner, Johannes; Thiel, Walter
2006-06-01
Umbrella sampling simulations, or biased molecular dynamics, can be used to calculate the free-energy change of a chemical reaction. We investigate the sources of different sampling errors and derive approximate expressions for the statistical errors when using harmonic restraints and umbrella integration analysis. This leads to generally applicable rules for the choice of the bias potential and the sampling parameters. Numerical results for simulations on an analytical model potential are presented for validation. While the derivations are based on umbrella integration analysis, the final error estimate is evaluated from the raw simulation data, and it may therefore be generally applicable as indicated by tests using the weighted histogram analysis method.
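For reference, the central relation of umbrella integration can be stated compactly (a summary in standard notation, not a reproduction of the paper's full error analysis). Approximating the biased distribution in window i as a Gaussian with mean ξ̄ᵢ and variance σᵢ², the unbiased mean force in that window is

```latex
\frac{\partial A_i}{\partial \xi}
  = \frac{1}{\beta}\,\frac{\xi - \bar{\xi}_i}{\sigma_i^{2}}
  - K\left(\xi - \xi_i^{\mathrm{ref}}\right),
```

where the harmonic restraint is w_i(ξ) = (K/2)(ξ − ξ_i^ref)². The statistical error of this estimate is therefore driven mainly by the sampling errors of ξ̄ᵢ and σᵢ², which shrink with the number of effectively independent samples per window; these are the quantities whose approximate error expressions yield the rules for choosing K and the sampling parameters mentioned above.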
NASA Astrophysics Data System (ADS)
Ferreira, F.; Gendron, E.; Rousset, G.; Gratadour, D.
2016-07-01
The future European Extremely Large Telescope (E-ELT) adaptive optics (AO) systems will aim at wide-field correction and large sky coverage. Their performance will be improved by using post-processing techniques, such as point spread function (PSF) deconvolution. The PSF estimation involves characterization of the different error sources in the AO system. Such error contributors are difficult to estimate, and simulation tools are a good way to do so. Within COMPASS (COMputing Platform for Adaptive opticS Systems), an end-to-end simulation tool using GPU (Graphics Processing Unit) acceleration, we have developed an estimation tool that provides a comprehensive error budget from the outputs of a single simulation run.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
ASTRA Simulation Results of RF Propagation in Plasma Medium
NASA Astrophysics Data System (ADS)
Goodwin, Joshua; Oneal, Brandon; Smith, Aaron; Sen, Sudip
2015-04-01
Transport barriers in toroidal plasmas play a major role in achieving the required confinement for reactor grade plasmas. They are formed by different mechanisms, but most of them are associated with a zonal flow which suppresses turbulence. A different way of producing a barrier has been recently proposed which uses the ponderomotive force of RF waves to reduce the fluctuations due to drift waves, but without inducing any plasma rotation. Using this mechanism, a transport coefficient is derived which is a function of RF power, and it is incorporated in transport simulations performed for the Brazilian tokamak TCABR, as a possible test bed for the theoretical model. The formation of a transport barrier is demonstrated at the position of the RF wave resonant absorption surface, having the typical pedestal-like temperature profile.
Numerical simulation of ion rings and ion beam propagation
NASA Astrophysics Data System (ADS)
Manofsky, A.
The development of numerical simulation techniques for studying the physics of ion beams and rings in a background plasma, as applicable to certain problems in magnetic and inertial confinement fusion, is presented. Two codes were developed for these purposes: RINGA and CIDER. The 2-1/2-dimensional particle code RINGA follows the trajectories of ions in their self-consistent magnetic field. The code assumes strict charge neutrality and admits currents only in the azimuthal direction. The injection and resistive trapping of ion rings were studied with RINGA. Modifications to RINGA to include the finite pressure of the confined plasma and slowing-down collisions of beam ions on electrons are discussed. In the CIDER hybrid code, ions are represented by particles and electrons by an inertialess thermal fluid which obeys a generalized Ohm's law. Fields are solved in the quasineutral Darwin approximation. Several collisional and atomic processes are included.
Hybrid simulations of rotational discontinuities. [Alfven wave propagation in astrophysics]
NASA Technical Reports Server (NTRS)
Goodrich, C. C.; Cargill, P. J.
1991-01-01
1D hybrid simulations of rotational discontinuities (RDs) are presented. When the angle between the discontinuity normal and the magnetic field (theta-BN) is 30 deg, the RD broadens into a quasi-steady state of width 60-80 c/omega-i. The hodogram has a characteristic S-shape. When theta-BN = 60 deg, the RD is much narrower (10 c/omega-i). For right handed rotations, the results are similar to theta-BN = 30 deg. For left handed rotations, the RD does not evolve much from its initial conditions and the S-shape in the hodogram is much less visible. The results can be understood in terms of matching a fast mode wavelike structure upstream of the RD with an intermediate mode one downstream.
Time-Sliced Thawed Gaussian Propagation Method for Simulations of Quantum Dynamics.
Kong, Xiangmeng; Markmann, Andreas; Batista, Victor S
2016-05-19
A rigorous method for simulations of quantum dynamics is introduced on the basis of concatenation of semiclassical thawed Gaussian propagation steps. The time-evolving state is represented as a linear superposition of closely overlapping Gaussians that evolve in time according to their characteristic equations of motion, integrated by fourth-order Runge-Kutta or velocity Verlet. The expansion coefficients of the initial superposition are updated after each semiclassical propagation period by implementing the Husimi Transform analytically in the basis of closely overlapping Gaussians. An advantage of the resulting time-sliced thawed Gaussian (TSTG) method is that it allows for full-quantum dynamics propagation without any kind of multidimensional integral calculation, or inversion of overlap matrices. The accuracy of the TSTG method is demonstrated as applied to simulations of quantum tunneling, showing quantitative agreement with benchmark calculations based on the split-operator Fourier transform method.
Computer simulation of crack propagation in ductile materials under biaxial dynamic loads
Chen, Y.M.
1980-07-29
The finite-difference computer program HEMP is used to simulate the crack-propagation phenomenon in two-dimensional ductile materials under truly dynamic biaxial loads. A cumulative strain-damage criterion for the initiation of ductile fracture is used. To simulate crack propagation numerically, the method of equivalent free-surface boundary conditions and the method of artificial velocity are used in the computation. Centrally cracked rectangular aluminum bars subjected to constant-velocity biaxial loads at the edges are considered. Tensile and compressive loads in the direction of crack length are found, respectively, to increase and decrease directional instability in crack propagation, where the directional instability is characterized by branching or bifurcation.
Fully kinetic particle simulations of high pressure streamer propagation
NASA Astrophysics Data System (ADS)
Rose, David; Welch, Dale; Thoma, Carsten; Clark, Robert
2012-10-01
Streamer and leader formation in high pressure devices is a dynamic process involving a hierarchy of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. We have developed 2D and 3D fully electromagnetic implicit particle-in-cell simulation models of gas breakdown leading to streamer formation under DC and RF fields. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm [D. R. Welch, et al., J. Comp. Phys. 227, 143 (2007)] that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge. These models are being applied to the analysis of high-pressure gas switches [D. V. Rose, et al., Phys. Plasmas 18, 093501 (2011)] and gas-filled RF accelerator cavities [D. V. Rose, et al., Proc. IPAC12, to appear].
GPU simulation of nonlinear propagation of dual band ultrasound pulse complexes
Kvam, Johannes; Angelsen, Bjørn A. J.; Elster, Anne C.
2015-10-28
In a new method of ultrasound imaging, called SURF imaging, dual-band pulse complexes composed of overlapping low frequency (LF) and high frequency (HF) pulses are transmitted, where the frequency ratio LF:HF is ∼ 1 : 20 and the relative bandwidth of both pulses is ∼ 50 − 70%. The LF pulse length is hence ∼ 20 times the HF pulse length. The LF pulse is used to nonlinearly manipulate the material elasticity observed by the co-propagating HF pulse. This produces nonlinear interaction effects that give more information on the propagation of the pulse complex. Due to the large difference in frequency and pulse length between the LF and the HF pulses, we have developed a dual-level simulation where the LF pulse propagation is first simulated independent of the HF pulse, using a temporal sampling frequency matched to the LF pulse. A separate equation for the HF pulse is developed, where the presimulated LF pulse modifies the propagation velocity. The equations are adapted to parallel processing on a GPU, where nonlinear simulation of a typical 10 MHz HF beam down to 40 mm is done in ∼ 2 s on a standard GPU. This simulation is hence very useful for studying the manipulation effect of the LF pulse on the HF pulse.
1991-11-20
M. Spivack, "Accuracy of the Moments from Simulation of Waves in Random Media," J. Opt. Soc. Am. A 7, 790-793 (1990). D. Rouseff and R. P. Porter... "Anomalous Microwave Propagation through Atmospheric Ducts," Johns Hopkins APL Tech. Dig. 4, 12-26 (1983).
NASA Astrophysics Data System (ADS)
Taozheng
2015-08-01
In recent years, owing to the high stability and privacy of vortex beams, the optical vortex has become a hot spot in research on atmospheric optical transmission. We numerically investigate the propagation of vector elliptical vortex beams in a turbulent atmosphere. Numerical simulations are realized with random phase screens, which model the transport of the vortex beam through atmospheric turbulence and allow us to study its transmission characteristics (light intensity, phase, polarization, etc.). Our simulation results show that the distortion of the vortex beam during atmospheric transmission is small, making the elliptical vortex beam a promising strategy for space communications.
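Random phase screens of the kind used in such turbulence simulations are commonly generated by FFT filtering of white Gaussian noise with a Kolmogorov power spectrum. The following is a minimal sketch of that standard technique, not the authors' code; the grid size, pixel scale, and Fried parameter r0 are illustrative values.

```python
import numpy as np

def kolmogorov_phase_screen(n, delta, r0, seed=0):
    """Generate an n x n random phase screen (radians) with Kolmogorov
    statistics by spectrally filtering complex Gaussian noise.
    n: grid points, delta: pixel size [m], r0: Fried parameter [m]."""
    rng = np.random.default_rng(seed)
    # Spatial-frequency grid [cycles/m]
    fx = np.fft.fftfreq(n, d=delta)
    fx, fy = np.meshgrid(fx, fx)
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-12                      # avoid division by zero at DC
    # Kolmogorov phase PSD: 0.023 r0^(-5/3) f^(-11/3)
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                      # drop the (unphysical) piston term
    # Filter white complex Gaussian noise by sqrt(PSD)
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    df = 1.0 / (n * delta)               # frequency-grid spacing
    screen = np.fft.ifft2(cn * np.sqrt(psd) * df) * n * n
    return screen.real

phi = kolmogorov_phase_screen(256, 0.01, 0.1)
```

A beam is then propagated between successive screens with an angular-spectrum step, applying each screen as a multiplicative phase factor exp(i*phi).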
Simulation study of wakefield generation by two color laser pulses propagating in homogeneous plasma
Kumar Mishra, Rohit; Saroch, Akanksha; Jha, Pallavi
2013-09-15
This paper deals with a two-dimensional simulation of electric wakefields generated by two color laser pulses propagating in homogeneous plasma, using the VORPAL simulation code. The laser pulses are assumed to have a frequency difference equal to the plasma frequency. Simulation studies are performed for two similarly polarized as well as two oppositely polarized laser pulses, and the respective amplitudes of the generated longitudinal wakefields for the two cases are compared. Enhancement of the wake amplitude for the latter case is reported. This simulation study validates the analytical results presented by Jha et al. [Phys. Plasmas 20, 053102 (2013)].
Lill, J V; Broughton, J Q
2000-06-19
The method of Parrinello and Rahman is generalized to include slip in addition to deformation of the simulation cell. Equations of motion are derived, and a microscopic expression for traction is introduced. Lagrangian constraints are imposed so that the combination of deformation and slip conform to the invariant plane shear characteristic of martensites. Simulation of a model transformation demonstrates the nucleation and propagation of a glissile dislocation interface.
Simulation techniques for estimating error in the classification of normal patterns
NASA Technical Reports Server (NTRS)
Whitsitt, S. J.; Landgrebe, D. A.
1974-01-01
Methods of efficiently generating and classifying samples with specified multivariate normal distributions are discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing the error and separability measures for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
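The core idea, estimating classification error between two normal patterns by Monte Carlo sampling and comparing it against a distributional bound, can be sketched as follows. This is an illustrative reconstruction, not the paper's procedure; for equal covariances the Bayes rule reduces to a linear discriminant, and the Bhattacharyya bound (a special case of the Chernoff bound) is used for comparison.

```python
import numpy as np

def mc_error_two_gaussians(mu0, mu1, cov, n=20000, seed=1):
    """Monte Carlo estimate of the Bayes error for two equiprobable
    multivariate normals with common covariance (likelihood-ratio rule,
    which reduces here to a linear discriminant)."""
    rng = np.random.default_rng(seed)
    icov = np.linalg.inv(cov)
    w = icov @ (mu1 - mu0)                     # discriminant direction
    b = -0.5 * (mu0 + mu1) @ w                 # decision threshold
    x0 = rng.multivariate_normal(mu0, cov, n)  # class-0 samples
    x1 = rng.multivariate_normal(mu1, cov, n)  # class-1 samples
    err0 = np.mean(x0 @ w + b > 0)             # class 0 misclassified
    err1 = np.mean(x1 @ w + b < 0)             # class 1 misclassified
    return 0.5 * (err0 + err1)

mu0, mu1, cov = np.zeros(3), np.ones(3), np.eye(3)
err = mc_error_two_gaussians(mu0, mu1, cov)
# Bhattacharyya bound for equal covariances: 0.5*exp(-d2/8),
# where d2 is the squared Mahalanobis distance between the means.
d2 = (mu1 - mu0) @ np.linalg.inv(cov) @ (mu1 - mu0)
bound = 0.5 * np.exp(-d2 / 8.0)
```

The simulated error should always fall below the bound, which is the relationship the paper's tables display.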
Runyon, Matthew K; Kastrup, Christian J; Johnson-Kerner, Bethany L; Ha, Thuong G Van; Ismagilov, Rustem F
2008-03-19
This paper describes microfluidic experiments with human blood plasma and numerical simulations to determine the role of fluid flow in the regulation of propagation of blood clotting. We demonstrate that propagation of clotting can be regulated by different mechanisms depending on the volume-to-surface ratio of a channel. In small channels, propagation of clotting can be prevented by surface-bound inhibitors of clotting present on vessel walls. In large channels, where surface-bound inhibitors are ineffective, propagation of clotting can be prevented by a shear rate above a threshold value, in agreement with predictions of a simple reaction-diffusion mechanism. We also demonstrate that propagation of clotting in a channel with a large volume-to-surface ratio and a shear rate below a threshold shear rate can be slowed by decreasing the production of thrombin, an activator of clotting. These in vitro results make two predictions, which should be experimentally tested in vivo. First, propagation of clotting from superficial veins to deep veins may be regulated by shear rate, which might explain the correlation between superficial thrombosis and the development of deep vein thrombosis (DVT). Second, nontoxic thrombin inhibitors with high binding affinities could be locally administered to prevent recurrent thrombosis after a clot has been removed. In addition, these results demonstrate the utility of simplified mechanisms and microfluidics for generating and testing predictions about the dynamics of complex biochemical networks.
NASA Astrophysics Data System (ADS)
Zeng, Qinglei; Liu, Zhanli; Wang, Tao; Gao, Yue; Zhuang, Zhuo
2017-05-01
In the hydraulic fracturing process in shale rock, multiple fractures perpendicular to a horizontal wellbore are usually driven to propagate simultaneously by the pumping operation. In this paper, a numerical method is developed for the propagation of multiple hydraulic fractures (HFs) by fully coupling the deformation and fracturing of the solid formation, fluid flow in the fractures, fluid partitioning through a horizontal wellbore, and the perforation entry loss effect. The extended finite element method (XFEM) is adopted to model arbitrary growth of the fractures. Newton's iteration is proposed to solve these fully coupled nonlinear equations, which is more efficient than the widely adopted fixed-point iteration in the literature and avoids the need to impose a fluid pressure boundary condition when solving the flow equations. A secant iterative method based on the stress intensity factor (SIF) is proposed to capture the different propagation velocities of multiple fractures. The numerical results are compared with theoretical solutions in the literature to verify the accuracy of the method. The simultaneous propagation of multiple HFs is simulated by the newly proposed algorithm. The coupled influences of propagation regime, stress interaction, wellbore pressure loss and perforation entry loss on simultaneous propagation of multiple HFs are investigated.
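The efficiency argument for Newton's iteration over fixed-point iteration rests on solving the full coupled system with its Jacobian at each step. A generic sketch of that pattern on a toy two-unknown nonlinear system (the system and functions here are illustrative stand-ins, not the paper's coupled flow-deformation equations):

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for a coupled nonlinear system F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        # Solve with the full Jacobian, coupling all unknowns at once
        x = x - np.linalg.solve(J(x), fx)
    return x

# Toy coupled system:  x0^2 + x1 - 3 = 0,  x0 + x1^2 - 5 = 0
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
root = newton(F, J, [1.0, 1.0])
```

Near the solution, Newton's method converges quadratically, whereas a fixed-point (staggered) scheme alternating between the two equations typically converges only linearly; this is the efficiency gap the abstract refers to.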
Quantification of uncertainties in OCO-2 measurements of XCO2: simulations and linear error analysis
NASA Astrophysics Data System (ADS)
Connor, Brian; Bösch, Hartmut; McDuffie, James; Taylor, Tommy; Fu, Dejian; Frankenberg, Christian; O'Dell, Chris; Payne, Vivienne H.; Gunson, Michael; Pollock, Randy; Hobbs, Jonathan; Oyafuso, Fabiano; Jiang, Yibo
2016-10-01
We present an analysis of uncertainties in global measurements of the column averaged dry-air mole fraction of CO2 (XCO2) by the NASA Orbiting Carbon Observatory-2 (OCO-2). The analysis is based on our best estimates for uncertainties in the OCO-2 operational algorithm and its inputs, and uses simulated spectra calculated for the actual flight and sounding geometry, with measured atmospheric analyses. The simulations are calculated for land nadir and ocean glint observations. We include errors in measurement, smoothing, interference, and forward model parameters. All types of error are combined to estimate the uncertainty in XCO2 from single soundings, before any attempt at bias correction has been made. From these results we also estimate the "variable error" which differs between soundings, to infer the error in the difference of XCO2 between any two soundings. The most important error sources are aerosol interference, spectroscopy, and instrument calibration. Aerosol is the largest source of variable error. Spectroscopy and calibration, although they are themselves fixed error sources, also produce important variable errors in XCO2. Net variable errors are usually < 1 ppm over ocean and ˜ 0.5-2.0 ppm over land. The total error due to all sources is ˜ 1.5-3.5 ppm over land and ˜ 1.5-2.5 ppm over ocean.
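When the error sources are treated as independent, per-sounding components combine in quadrature to give the total uncertainty. A minimal sketch of that root-sum-square combination; the component names and ppm values below are illustrative placeholders, not the paper's estimates.

```python
import math

def total_error(components):
    """Root-sum-square of independent 1-sigma error components."""
    return math.sqrt(sum(e * e for e in components))

# Hypothetical per-sounding error components in ppm (illustrative only)
errors_ppm = {"aerosol": 1.2, "spectroscopy": 0.8, "calibration": 0.6}
total = total_error(errors_ppm.values())
```

The difference of XCO2 between two soundings is affected only by the "variable" components, so the same combination applied to that subset gives the sounding-to-sounding error.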
Packo, P.; Staszewski, W. J.; Uhl, T.
2016-01-01
Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808
Molecular dynamics simulation of effect of hydrogen atoms on crack propagation behavior of α-Fe
NASA Astrophysics Data System (ADS)
Song, H. Y.; Zhang, L.; Xiao, M. X.
2016-12-01
The effect of the hydrogen concentration and hydrogen distribution on the mechanical properties of α-Fe with a pre-existing unilateral crack under tensile loading is investigated by molecular dynamics simulation. The results reveal that the models present good ductility when the front region of crack tip has high local hydrogen concentration. The peak stress of α-Fe decreases with increasing hydrogen concentration. The studies also indicate that for the samples with hydrogen atoms, the crack propagation behavior is independent of the model size and boundaries. In addition, the crack propagation behavior is significantly influenced by the distribution of hydrogen atoms.
NASA Astrophysics Data System (ADS)
Winey, J. M.; Gupta, Y. M.
2010-05-01
An anisotropic continuum material model was developed to describe the thermomechanical response of unreacted pentaerythritol tetranitrate (PETN) single crystals to shock wave loading. Using this model, which incorporates nonlinear elasticity and crystal plasticity in a thermodynamically consistent tensor formulation, wave propagation simulations were performed to compare to experimental wave profiles [J. J. Dick and J. P. Ritchie, J. Appl. Phys. 76, 2726 (1994)] for PETN crystals under plate impact loading to 1.2 GPa. Our simulations show that for shock propagation along the [100] orientation where deformation across shear planes is sterically unhindered, a dislocation-based model provides a good match to the wave profile data. For shock propagation along the [110] direction, where deformation across shear planes is sterically hindered, a dislocation-based model cannot account for the observed strain-softening behavior. Instead, a shear cracking model was developed, providing good agreement with the data for [110] and [001] shock orientations. These results show that inelastic deformation due to hindered and unhindered shear in PETN occurs through mechanisms that are physically different. In addition, results for shock propagation normal to the (101) crystal plane suggest that the primary slip system identified from quasistatic indentation tests is not activated under shock wave loading. Overall, results from our continuum simulations are consistent with a previously proposed molecular mechanism for shock-induced chemical reaction in PETN in which the formation of polar conformers, due to hindered shear, facilitates the development of ionic reaction pathways.
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri
2017-07-01
We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different, and thus pass through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF to produce anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced
NASA Astrophysics Data System (ADS)
Na, D. H.; Lee, Y.
Three-dimensional finite element simulation has been carried out to better understand the crack initiation and growth at the edge side of silicon steel sheet during cold rolling, which is attributable to elastic deformation of the work roll, i.e., roll bending. A strain-controlled failure model was coupled with the finite element method, and a series of FE simulations was carried out while three different roll bending modes were considered. FE simulation shows that the negative roll bending mode during rolling significantly affects the crack initiation behavior. When the strain for failure was reduced by 20%, the number of elements removed increased by about 305%. If an initial crack 2.5 mm in length was assumed on the strip, the initial edge crack propagated toward the inner region of the strip and the propagated length was about 10 times the initial edge crack length.
Freudenthal, Daniel; Pine, Julian M; Jones, Gary; Gobet, Fernand
2015-10-01
One of the most striking features of children's early multi-word speech is their tendency to produce non-finite verb forms in contexts in which a finite verb form is required (Optional Infinitive [OI] errors, Wexler, 1994). MOSAIC is a computational model of language learning that simulates developmental changes in the rate of OI errors across several different languages by learning compound finite constructions from the right edge of the utterance (Freudenthal, Pine, Aguado-Orea, & Gobet, 2007; Freudenthal, Pine, & Gobet, 2006a, 2009). However, MOSAIC currently only simulates the pattern of OI errors in declaratives, and there are important differences in the cross-linguistic patterning of OI errors in declaratives and Wh- questions. In the present study, we describe a new version of MOSAIC that learns from both the right and left edges of the utterance. Our simulations demonstrate that this new version of the model is able to capture the cross-linguistic patterning of OI errors in declaratives in English, Dutch, German and Spanish by learning from declarative input, and the cross-linguistic patterning of OI errors in Wh- questions in English, German and Spanish by learning from interrogative input. These results show that MOSAIC is able to provide an integrated account of the cross-linguistic patterning of OI errors in declaratives and Wh- questions, and provide further support for the view, instantiated in MOSAIC, that OI errors are compound-finite utterances with missing modals or auxiliaries.
Error and Uncertainty Analysis for Ecological Modeling and Simulation
2001-12-01
data and indicator semi-variograms. The indicator semi-variograms imply spatial similarity of indicator variables depending on the separation vector of... sample data is done, the number of cutoff values, equal to the number of indicator semi-variograms used in simulation, will affect structure and... resolution, the larger the predicted slope values, their variance and semi-variogram given a separation distance of data. The reason may be that
Theory and simulations of electrostatic field error transport
Dubin, Daniel H. E.
2008-07-15
Asymmetries in applied electromagnetic fields cause plasma loss (or compression) in stellarators, tokamaks, and non-neutral plasmas. Here, this transport is studied using idealized simulations that follow guiding centers in given fields, neglecting collective effects on the plasma evolution, but including collisions at rate ν. For simplicity the magnetic field is assumed to be uniform; transport is due to asymmetries in applied electrostatic fields. Also, the Fokker-Planck equation describing the particle distribution is solved, and the predicted transport is found to agree with the simulations. Banana, plateau, and fluid regimes are identified and observed in the simulations. When separate trapped-particle populations are created by application of an axisymmetric squeeze potential, enhanced transport regimes are observed, scaling as √ν when ν < ω0 < ωB and as 1/ν when ω0 < ν < ωB (where ω0 and ωB are the rotation and axial bounce frequencies, respectively). These regimes are similar to those predicted for neoclassical transport in stellarators.
Pointing-error simulations of the DSS-13 antenna due to wind disturbances
NASA Technical Reports Server (NTRS)
Gawronski, W.; Bienkiewicz, B.; Hill, R. E.
1992-01-01
Accurate spacecraft tracking by the NASA Deep Space Network (DSN) antennas must be assured during changing weather conditions. Wind disturbances are the main source of tracking errors. The development of a wind-force model and simulations of wind-induced pointing errors of DSN antennas are presented. The antenna model includes the antenna structure, the elevation and azimuth servos, and the tracking controller. Simulation results show that pointing errors due to wind gusts are of the same order as errors due to static wind pressure and that these errors (similar to those of static wind pressure) satisfy the velocity quadratic law. The presented methodology is used for wind-disturbance estimation and for the design of an antenna controller with wind-disturbance rejection properties.
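The quadratic velocity law mentioned above follows from wind load scaling with dynamic pressure, q = 0.5*rho*v^2, so the induced pointing error grows with the square of wind speed. A minimal illustrative sketch (the compliance constant k is a hypothetical placeholder, not a DSS-13 parameter):

```python
def wind_pointing_error(v, k=1.0):
    """Pointing error under the quadratic velocity law.
    Wind load scales with dynamic pressure q = 0.5*rho*v^2, so the
    error scales as v^2. k is an illustrative structural compliance
    constant in mdeg per (m/s)^2."""
    return k * v * v

# Doubling the wind speed quadruples the predicted pointing error
e10 = wind_pointing_error(10.0)
e20 = wind_pointing_error(20.0)
```

The same scaling applies to both the gust-induced and static-pressure error components, which is why the two are of comparable size in the simulations.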
NASA Astrophysics Data System (ADS)
Blanco, Joaquín. E.; Nolan, David S.; Tulich, Stefan N.
2016-10-01
Convectively coupled Kelvin waves (CCKWs) represent a significant contribution to the total variability of the Intertropical Convergence Zone (ITCZ). This study analyzes the structure and propagation of CCKWs simulated by the Weather Research and Forecasting (WRF) model using two types of idealized domains. These are the "aquachannel," a flat rectangle on a beta plane with zonally periodic boundary conditions and length equal to the Earth's circumference at the equator, and the "aquapatch," a square domain with zonal extent equal to one third of the aquachannel's length. A series of simulations are performed, including a doubly nested aquapatch, in which convection is solved explicitly along the equator. The model intercomparison is carried out through the use of several techniques such as power spectra, filtering, wave tracking, and compositing, and it is extended to some simulations from the Aquaplanet Experiment (APE). Results show that despite the equatorial superrotation bias produced by the WRF simulations, the CCKWs simulated with this model propagate with similar phase speeds (relative to the low-level mean flow) as the corresponding waves from the APE simulations. Horizontal and vertical structures of the CCKWs simulated with aquachannels are also in overall good agreement with those from aquaplanet simulations and observations, although there is a distortion of the zonal extent of anomalies when the shorter aquapatch is used.
Nonlinear Simulation of Plasma Response to the NSTX Error Field
NASA Astrophysics Data System (ADS)
Breslau, J. A.; Park, J. K.; Boozer, A. H.; Park, W.
2008-11-01
In order to better understand the effects of the time-varying error field in NSTX on rotation braking, which impedes RWM stabilization, we model the plasma response to an applied low-n external field perturbation using the resistive MHD model in the M3D code. As an initial benchmark, we apply an m=2, n=1 perturbation to the flux at the boundary of a non-rotating model equilibrium and compare the resulting steady-state island sizes with those predicted by the ideal linear code IPEC. For sufficiently small perturbations, the codes agree; for larger perturbations, the nonlinear correction yields an upper limit on the island width beyond which stochasticity sets in. We also present results of scaling studies showing the effects of finite resistivity on island size in NSTX, and of time-dependent studies of the interaction between these islands and plasma rotation. The M3D-C1 code is also being evaluated as a tool for this analysis; first results will be shown. J.E. Menard, et al., Nucl. Fus. 47, S645 (2007). W. Park, et al., Phys. Plasmas 6, 1796 (1999). J.K. Park, et al., Phys. Plasmas 14, 052110 (2007). S.C. Jardin, et al., J. Comp. Phys. 226, 2146 (2007).
Experimental and Computational Models for Simulating Sound Propagation Within the Lungs
Acikgoz, S.; Ozer, M. B.; Mansy, H. A.; Sandler, R. H.
2008-01-01
An acoustic boundary element model is used to simulate sound propagation in the lung parenchyma and surrounding chest wall. It is validated theoretically and numerically and then compared with experimental studies on lung-chest phantom models that simulate the lung pathology of pneumothorax. Studies quantify the effect of the simulated lung pathology on the resulting acoustic field measured at the phantom chest surface. This work is relevant to the development of advanced auscultatory techniques for lung, vascular, and cardiac sounds within the torso that utilize multiple noninvasive sensors to create acoustic images of the sound generation and transmission to identify certain pathologies. PMID:18568101
Simulation of picosecond pulse propagation in fibre-based radiation shaping units
NASA Astrophysics Data System (ADS)
Kuptsov, G. V.; Petrov, V. V.; Laptev, A. V.; Petrov, V. A.; Pestryakov, E. V.
2016-09-01
We have performed a numerical simulation of picosecond pulse propagation in a combined stretcher consisting of a segment of a telecommunication fibre and diffraction holographic gratings. The process of supercontinuum generation in a nonlinear photonic-crystal fibre pumped by picosecond pulses is simulated by solving numerically the generalised nonlinear Schrödinger equation; spectral and temporal pulse parameters are determined. Experimental data are in good agreement with simulation results. The obtained results are used to design a high-power femtosecond laser system with a pulse repetition rate of 1 kHz.
Classical simulation of quantum error correction in a Fibonacci anyon code
NASA Astrophysics Data System (ADS)
Burton, Simon; Brell, Courtney G.; Flammia, Steven T.
2017-02-01
Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 ×128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.
A Compact Code for Simulations of Quantum Error Correction in Classical Computers
Nyman, Peter
2009-03-10
This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give some examples of implementations of error-correction codes. These implementations are made in a more general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
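The same kind of classical simulation can be sketched compactly with state vectors. The example below simulates the 3-qubit bit-flip code (encode, inject a single X error, recover by majority vote); it is an illustrative Python sketch rather than the paper's Mathematica framework, and the recovery step is a simplified non-unitary majority-vote map rather than a syndrome-measurement circuit.

```python
import numpy as np

def encode(alpha, beta):
    """|psi> = alpha|0> + beta|1>  ->  alpha|000> + beta|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def apply_x(state, qubit):
    """Bit-flip (Pauli-X) error on one qubit of a 3-qubit state vector."""
    out = np.empty_like(state)
    for i in range(8):
        out[i ^ (1 << qubit)] = state[i]
    return out

def correct(state):
    """Simplified majority-vote recovery: send each basis index to the
    codeword |000> or |111> of its majority bit value."""
    out = np.zeros_like(state)
    for i in range(8):
        bits = [(i >> q) & 1 for q in range(3)]
        maj = int(sum(bits) >= 2)
        out[7 * maj] += state[i]   # 7*0 -> |000>,  7*1 -> |111>
    return out

alpha, beta = 0.6, 0.8
psi = encode(alpha, beta)
corrupted = apply_x(psi, qubit=1)   # single bit-flip error
recovered = correct(corrupted)
fidelity = abs(np.vdot(encode(alpha, beta), recovered)) ** 2
```

Any single bit-flip error is corrected exactly, so the fidelity with the original encoded state returns to 1.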
NASA Technical Reports Server (NTRS)
Matda, Y.; Crawford, F. W.
1974-01-01
An economical low-noise plasma simulation model is applied to a series of problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma in the absence of a magnetic field. The model is described and tested, first in the absence of an applied signal and then with a small-amplitude perturbation, to establish its low-noise features and to verify the theoretical linear dispersion relation at wave energy levels as low as 10⁻⁶ of the plasma thermal energy. The method is then used to study the propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories. The additional phenomena of sideband instability and satellite growth, stimulated by large-amplitude wave propagation and the resulting particle trapping, are described.
NASA Astrophysics Data System (ADS)
Rakesh, V.; Kantharao, B.
2017-03-01
Data assimilation is considered one of the most effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account when designing a data assimilation methodology. One of the critical components that determines the amount of observation information propagated into the analysis is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation affect the simulation of heavy rainfall events over Karnataka, a southern state in India. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation served as the baseline for assessing the impact of data assimilation. The simulated rainfall was verified against high-resolution rain-gauge observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one using global BES. Critical thermodynamic variables conducive to heavy rainfall, such as convective available potential energy, were simulated more realistically using regional BES than global BES. These results have important practical implications for the design of forecast platforms and for decision-making during extreme weather events.
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Goddard Earth Observing System Model, Version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; self-analysis verification significantly underestimates forecast errors for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
Mézière, Fabien; Muller, Marie; Dobigny, Blandine; Bossy, Emmanuel; Derode, Arnaud
2013-02-01
Ultrasound propagation in clusters of elliptic (two-dimensional) or ellipsoidal (three-dimensional) scatterers randomly distributed in a fluid is investigated numerically. The essential motivation for the present work is to gain a better understanding of ultrasound propagation in trabecular bone. Bone microstructure exhibits structural anisotropy and multiple wave scattering. Some phenomena remain partially unexplained, such as the propagation of two longitudinal waves. The objective of this study was to shed more light on the occurrence of these two waves, using finite-difference simulations on a model medium simpler than bone. Slabs of anisotropic, scattering media were randomly generated. The coherent wave was obtained through spatial and ensemble-averaging of the transmitted wavefields. When varying relevant medium parameters, four of them appeared to play a significant role for the observation of two waves: (i) the solid fraction, (ii) the direction of propagation relatively to the scatterers orientation, (iii) the ability of scatterers to support shear waves, and (iv) a continuity of the solid matrix along the propagation. These observations are consistent with the hypothesis that fast waves are guided by the locally plate/bar-like solid matrix. If confirmed, this interpretation could significantly help developing approaches for a better understanding of trabecular bone micro-architecture using ultrasound.
Xiao, Xifeng; Voelz, David G; Toselli, Italo; Korotkova, Olga
2016-05-20
Experimental and theoretical work has shown that atmospheric turbulence can exhibit "non-Kolmogorov" behavior including anisotropy and modifications of the classically accepted spatial power spectral slope, -11/3. In typical horizontal scenarios, atmospheric anisotropy implies that the variations in the refractive index are more spatially correlated in both horizontal directions than in the vertical. In this work, we extend Gaussian beam theory for propagation through Kolmogorov turbulence to the case of anisotropic turbulence along the horizontal direction. We also study the effects of different spatial power spectral slopes on the beam propagation. A description is developed for the average beam intensity profile, and the results for a range of scenarios are demonstrated for the first time with a wave optics simulation and a spatial light modulator-based laboratory benchtop counterpart. The theoretical, simulation, and benchtop intensity profiles show good agreement and illustrate that an elliptically shaped beam profile can develop upon propagation. For stronger turbulent fluctuation regimes and larger anisotropies, the theory predicts a slightly more elliptical form of the beam than is generated by the simulation or benchtop setup. The theory also predicts that without an outer scale limit, the beam width becomes unbounded as the power spectral slope index α approaches a maximum value of 4. This behavior is not seen in the simulation or benchtop results because the numerical phase screens used for these studies do not model the unbounded wavefront tilt component implied in the analytic theory.
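The wave-optics simulations referenced here rely on numerical phase screens; below is a minimal FFT-based sketch of an anisotropic power-law screen. The grid size, anisotropy factor mu, and slope alpha are illustrative assumptions, and this is not the authors' simulation or benchtop code:

```python
import numpy as np

rng = np.random.default_rng(3)
N, delta = 256, 0.01            # grid size and sample spacing (m), assumed
alpha = 11.0 / 3.0              # classical Kolmogorov power spectral slope
mu = 2.0                        # anisotropy factor (assumed): smoother along x

fx = np.fft.fftfreq(N, delta)
FX, FY = np.meshgrid(fx, fx)    # FX varies along axis=1 (x), FY along axis=0
f2 = (mu * FX) ** 2 + FY ** 2   # stretched isocontours -> anisotropic screen
f2[0, 0] = np.inf               # suppress the undefined DC component
psd = f2 ** (-alpha / 2.0)

spec = (rng.standard_normal((N, N))
        + 1j * rng.standard_normal((N, N))) * np.sqrt(psd)
screen = np.real(np.fft.ifft2(spec))

# phase differences are smaller along x: longer horizontal correlation
var_x = np.var(np.diff(screen, axis=1))
var_y = np.var(np.diff(screen, axis=0))
```

Stretching the horizontal frequency axis concentrates power at low horizontal frequencies, which is one common way to mimic refractive-index variations that are more correlated horizontally than vertically.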
Dobie, Gordon; Spencer, Andrew; Burnham, Kenneth; Pierce, S Gareth; Worden, Keith; Galbraith, Walter; Hayward, Gordon
2011-04-01
A computer simulator, to facilitate the design and assessment of a reconfigurable, air-coupled ultrasonic scanner is described and evaluated. The specific scanning system comprises a team of remote sensing agents, in the form of miniature robotic platforms that can reposition non-contact Lamb wave transducers over a plate type of structure, for the purpose of non-destructive evaluation (NDE). The overall objective is to implement reconfigurable array scanning, where transmission and reception are facilitated by different sensing agents which can be organised in a variety of pulse-echo and pitch-catch configurations, with guided waves used to generate data in the form of 2-D and 3-D images. The ability to reconfigure the scanner adaptively requires an understanding of the ultrasonic wave generation, its propagation and interaction with potential defects and boundaries. Transducer behaviour has been simulated using a linear systems approximation, with wave propagation in the structure modelled using the local interaction simulation approach (LISA). Integration of the linear systems and LISA approaches are validated for use in Lamb wave scanning by comparison with both analytic techniques and more computationally intensive commercial finite element/difference codes. Starting with fundamental dispersion data, the paper goes on to describe the simulation of wave propagation and the subsequent interaction with artificial defects and plate boundaries, before presenting a theoretical image obtained from a team of sensing agents based on the current generation of sensors and instrumentation.
PUQ: A code for non-intrusive uncertainty propagation in computer simulations
NASA Astrophysics Data System (ADS)
Hunt, Martin; Haley, Benjamin; McLennan, Michael; Koslowski, Marisol; Murthy, Jayathi; Strachan, Alejandro
2015-09-01
We present a software package for the non-intrusive propagation of uncertainties in input parameters through computer simulation codes or mathematical models, together with the associated analysis; we demonstrate its use to drive micromechanical simulations using a phase field approach to dislocation dynamics. The PRISM uncertainty quantification framework (PUQ) offers several methods to sample the distribution of input variables and to obtain surrogate models (or response functions) that relate the uncertain inputs to the quantities of interest (QoIs); the surrogate models are ultimately used to propagate uncertainties. PUQ requires minimal changes in the simulation code, only those required to annotate the QoI(s) for analysis. Collocation methods include Monte Carlo, Latin hypercube, and Smolyak sparse grids, and surrogate models can be obtained in terms of radial basis functions or via generalized polynomial chaos. PUQ uses the method of elementary effects for sensitivity analysis in Smolyak runs. The code is available for download and is also available for cloud computing in nanoHUB. PUQ orchestrates runs of the nanoPLASTICITY tool at nanoHUB, where users can propagate uncertainties in dislocation dynamics simulations using only a web browser, without downloading or installing any software.
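The non-intrusive idea (sample the uncertain inputs, run the unmodified code, collect the QoI) can be sketched in a few lines; the `simulation` function and the input distributions here are hypothetical stand-ins, not part of PUQ:

```python
import numpy as np

def simulation(modulus, load):
    """Hypothetical black-box simulation code, left unmodified (non-intrusive)."""
    return load / modulus  # a strain-like quantity of interest (QoI)

rng = np.random.default_rng(42)
n = 10_000
modulus = rng.normal(200.0, 10.0, n)   # uncertain input 1 (assumed Gaussian)
load = rng.uniform(95.0, 105.0, n)     # uncertain input 2 (assumed uniform)

qoi = simulation(modulus, load)        # propagate samples through the code
mean, std = qoi.mean(), qoi.std()      # QoI statistics from the ensemble
```

When each code run is expensive, a surrogate model (e.g. radial basis functions or polynomial chaos, as in PUQ) would be fitted to a small set of such input-output pairs and sampled in place of the code.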
NASA Astrophysics Data System (ADS)
Vannoni, Maurizio; Yang, Fan; Sinn, Harald
2015-01-01
An algorithm to solve the inverse problem of synchrotron radiation adaptive mirrors' tuning is presented. The influence functions are modeled and calculated for a generic bimorph mirror. An error function minimization method is used to simulate the correction of the surface figure of the mirror in some particular conditions. Possible applications to free-electron-laser mirror simulations are pointed out.
Sophocleous, M.A.
1991-01-01
The hypothesis is explored that groundwater-level rises in the Great Bend Prairie aquifer of Kansas are caused not only by water percolating downward through the soil but also by pressure pulses from stream flooding that propagate in a translatory motion through numerous high-hydraulic-diffusivity buried channels crossing the Great Bend Prairie aquifer in an approximately west-to-east direction. To validate this hypothesis, two transects of wells in north-south and east-west orientations, crossing and alongside some paleochannels in the area, were instrumented with water-level-recording devices; streamflow data from all area streams were obtained from available stream-gaging stations. A theoretical approach was also developed to conceptualize the stream-aquifer processes numerically. The field data and numerical simulations provided support for the hypothesis. Thus, observation wells located along the shoulders of or between the inferred paleochannels show little or no fluctuation and no correlation with streamflow, whereas wells located along paleochannels show high water-level fluctuations and good correlation with the streamflow of the stream connected to the observation site by means of the paleochannels. The stream-aquifer numerical simulation results demonstrate that the larger the hydraulic diffusivity of the aquifer, the larger the extent of pressure-pulse propagation and the faster the propagation speed. The conceptual simulation results indicate that long-distance propagation of stream floodwaves (of the order of tens of kilometers) through the Great Bend aquifer is indeed feasible with plausible stream and aquifer parameters. The sensitivity analysis results indicate that the extent and speed of pulse propagation are more sensitive to variations of stream roughness (Manning's coefficient) and stream channel slope than to any aquifer parameter. © 1991.
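The role of hydraulic diffusivity in the extent and speed of pressure-pulse propagation can be sketched with the classic 1-D linearized diffusion solution for a step rise in stream stage; the distances, times, and diffusivity values below are illustrative assumptions, not the study's calibrated parameters:

```python
from math import erfc, sqrt

def head_response(x_km, t_days, diffusivity):
    """Normalized water-level response at distance x to a unit step rise in
    stream stage at x = 0, from the 1-D linearized groundwater diffusion
    equation (diffusivity in km^2/day)."""
    return erfc(x_km / (2.0 * sqrt(diffusivity * t_days)))

# response 10 km from the stream after 5 days, low vs. high diffusivity
low = head_response(10.0, 5.0, 1.0)    # pulse barely felt
high = head_response(10.0, 5.0, 50.0)  # strong, fast pulse propagation
```

The erfc argument shows directly why larger diffusivity yields both a larger extent and a faster speed of pulse propagation, consistent with the simulation results described above.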
NASA Astrophysics Data System (ADS)
Jiang, Shan; Sewell, Thomas D.; Thompson, Donald L.
2015-06-01
We are interested in understanding the fundamental processes that occur during propagation of shock waves across the crystal-melt interface in molecular substances. We have carried out molecular dynamics simulations of shock passage from the nitromethane (100)-oriented crystal into the melt and vice versa using the fully flexible, non-reactive Sorescu, Rice, and Thompson force field. A stable interface was established for a temperature near the melting point by using a combination of isobaric-isothermal (NPT) and isochoric-isothermal (NVT) simulations. The equilibrium bulk and interfacial regions were characterized using spatial-temporal distributions of molecular number density, kinetic and potential energy, and C-N bond orientations. Those same properties were calculated as functions of time during shock propagation. As expected, the local temperatures (intermolecular, intramolecular, and total) and stress states differed significantly between the liquid and crystal regions and with the direction of shock propagation. Substantial differences in the spatial distribution of shock-induced defect structures in the crystalline region were observed depending on the direction of shock propagation. Research supported by the U.S. Army Research Office.
Simulation of near-surface seismic wave propagation in porous media
NASA Astrophysics Data System (ADS)
Sidler, Rolf; Carcione, José M.; Holliger, Klaus
2010-05-01
We present a novel numerical algorithm for the simulation of poro-elastic seismic wave propagation in general, and for the accurate and realistic modeling of Scholte, Stoneley, and Rayleigh waves in porous media in particular. The differential equations of motion are based on Biot's theory of poro-elasticity and are solved with a pseudo-spectral approach using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. We stretch the mesh in the vertical direction to decrease the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach, in which the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different mesh and the two meshes are combined through a domain-decomposition method. We simulate seismic wave propagation with open and sealed boundary conditions and compare the numerical solution to an analytical solution obtained from the 2-D Green's function. This algorithm represents a versatile and powerful basis for the poro-elastic analysis and interpretation of near-surface seismic wave propagation phenomena in general and of seismic surface-wave-type data in particular.
Wendelberger, James G.
2016-10-31
These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
Simulation of Inertial Navigation System Errors at Aerial Photography from UAV
NASA Astrophysics Data System (ADS)
Shults, R.
2017-05-01
The problem of determining the accuracy of the UAV position using INS during aerial photography can be resolved in two different ways: modelling of measurement errors or in-field calibration of the INS. The paper presents the results of INS error research by mathematical modelling. The following steps were considered: developing an INS computer model; carrying out the INS simulation using reference data without errors; and estimating the errors and their influence on the accuracy of map creation from UAV data. It must be remembered that the values of the orientation angles and the coordinates of the projection centre may change abruptly due to the influence of the atmosphere (varying air density, wind, etc.). Therefore, the mathematical model of the INS was constructed taking into account different models of wind gusts. Typical characteristics of micro-electromechanical (MEMS) INS and the parameters of the standard atmosphere were used for the simulation. The simulation established the dominance of INS systematic errors, which accumulate during photography and require a compensation mechanism, especially for the orientation angles. MEMS INS have a high level of noise at the system input. Thanks to the developed model, we are able to investigate separately the impact of noise in the absence of systematic errors. The research found that over an observation interval of 5 seconds the impacts of the random and systematic components are almost the same. The developed model for studying INS errors was implemented in the Matlab environment and can readily be improved and extended with new blocks.
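A minimal sketch of the random-versus-systematic comparison described above, assuming an invented constant gyro bias and angle-random-walk coefficient; the values are chosen so that the two error components are comparable near 5 s, as the paper reports, and are not the paper's MEMS parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 60.0                        # 100 Hz sampling over 60 s
n = int(T / dt)

bias = 0.045                              # deg/s constant gyro bias (assumed)
arw = 0.1                                 # deg/sqrt(s) angle random walk (assumed)

# attitude error = integral of (systematic bias + white rate noise)
rate_noise = arw / np.sqrt(dt) * rng.standard_normal(n)
angle_err = np.cumsum((bias + rate_noise) * dt)

sys_5s, rnd_5s = bias * 5.0, arw * np.sqrt(5.0)   # comparable at 5 s
sys_60s, rnd_60s = bias * T, arw * np.sqrt(T)     # bias dominates later
```

The bias term grows linearly with time while the random-walk term grows only as the square root of time, which is why the systematic component dominates over a full photographic mission and requires a compensation mechanism.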
SIMulation of Medication Error induced by Clinical Trial drug labeling: the SIMME-CT study.
Dollinger, Cecile; Schwiertz, Vérane; Sarfati, Laura; Gourc-Berthod, Chloé; Guédat, Marie-Gabrielle; Alloux, Céline; Vantard, Nicolas; Gauthier, Noémie; He, Sophie; Kiouris, Elena; Caffin, Anne-Gaelle; Bernard, Delphine; Ranchon, Florence; Rioufol, Catherine
2016-06-01
To assess the impact of investigational drug labels on the risk of medication error in drug dispensing, a simulation-based learning program focusing on investigational drug dispensing was conducted. The study was undertaken in the Investigational Drugs Dispensing Unit of a University Hospital of Lyon, France. Sixty-three pharmacy workers (pharmacists, residents, technicians, or students) were enrolled. Ten risk factors were selected concerning label information or the risk of confusion with another clinical trial. Each risk factor was scored independently out of 5: the higher the score, the greater the risk of error. From 400 labels analyzed, two groups were selected for the dispensing simulation: 27 labels with high risk (score ≥3) and 27 with low risk (score ≤2). Each question in the learning program was displayed as a simulated clinical trial prescription. Medication error was defined as at least one erroneous answer (i.e. an error in drug dispensing). For each question, response times were collected. High-risk investigational drug labels correlated with medication error and slower response time. Error rates were significantly higher (5.5-fold) for the high-risk series. Error frequency was not significantly affected by occupational category or experience in clinical trials. SIMME-CT is the first simulation-based learning tool to focus on investigational drug labels as a risk factor for medication error. SIMME-CT was also used as a training tool for staff involved in clinical research, to develop awareness of medication error risk and to validate competence in continuing medical education. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Accelerating spectral-element simulations of seismic wave propagation using local time stepping
NASA Astrophysics Data System (ADS)
Peter, D. B.; Rietmann, M.; Galvez, P.; Nissen-Meyer, T.; Grote, M.; Schenk, O.
2013-12-01
Seismic tomography using full-waveform inversion requires accurate simulations of seismic wave propagation in complex 3D media. However, finite-element meshing in complex media often leads to areas of local refinement, generating small elements that accurately capture e.g. strong topography and/or low-velocity sediment basins. For explicit time schemes, this dramatically reduces the global time step for wave-propagation problems due to numerical stability conditions, ultimately making seismic inversions prohibitively expensive. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. Numerical simulations are thus liberated from global time-step constraints, potentially speeding up simulation runtimes significantly. We present here a new, efficient multi-level LTS-Newmark scheme for general use with spectral-element methods (SEM), with applications in seismic wave propagation. We fit the implementation of our scheme onto the package SPECFEM3D_Cartesian, a widely used community code that simulates seismic and acoustic wave propagation in earth-science applications. Our new LTS scheme extends the 2nd-order accurate Newmark time-stepping scheme and leads to an efficient implementation, producing real-world speedup of multi-resolution seismic applications. Furthermore, we generalize the method to utilize many refinement levels, with a design specifically for continuous finite elements. We demonstrate performance speedup using a state-of-the-art dynamic earthquake rupture model for the Tohoku-Oki event, which is currently limited by small elements along the rupture fault. Utilizing our new algorithmic LTS implementation together with advances in exploiting graphics processing units (GPUs), numerical seismic wave propagation simulations in complex media will dramatically reduce computation times, empowering high
NASA Astrophysics Data System (ADS)
Gu, Boliang; Nihei, Kurt T.; Myer, Larry R.
1996-07-01
This paper describes a boundary integral equation method for simulating two-dimensional elastic wave propagation in a rock mass with nonwelded discontinuities such as fractures, joints, and faults. The numerical formulation is based on the three-dimensional boundary integral equations, which are reduced to two dimensions by numerical integration along the axis orthogonal to the plane of interest. The numerical technique requires the assembly and solution of the coefficient matrix only for the first time step, resulting in a significant reduction in computational time. Nonwelded discontinuities are each treated as an elastic contact between blocks of a fractured rock mass. Across such an elastic contact, seismic stresses are continuous and particle displacements are discontinuous by an amount proportional to the stress on the discontinuity and inversely proportional to its specific stiffness. Simulations demonstrate that the boundary element method formulated in this way successfully models elastic wave propagation, generated by a line source, along and across a single fracture.
Erik P. Gilson; Ronald C. Davidson; Philip C. Efthimion; Richard Majeski
2004-01-29
The results presented here demonstrate that the Paul Trap Simulator Experiment (PTSX) simulates the propagation of intense charged particle beams over distances of many kilometers through magnetic alternating-gradient (AG) transport systems by making use of the similarity between the transverse dynamics of particles in the two systems. Plasmas have been trapped that correspond to normalized intensity parameters s = ωp²(0)/2ωq² ≈ 0.8, where ωp(r) is the plasma frequency and ωq is the average transverse focusing frequency in the smooth-focusing approximation. The measured root-mean-squared (RMS) radius of the beam is consistent with a model, equally applicable to both PTSX and AG systems, that balances the average inward confining force against the outward pressure-gradient and space-charge forces. The PTSX device confines one-component cesium ion plasmas for hundreds of milliseconds, which is equivalent to over 10 km of beam propagation.
NASA Astrophysics Data System (ADS)
Intriligator, D. S.; Sun, W.; Detman, T. R.; Dryer, M.; Intriligator, J.; Deehr, C. S.; Webber, W. R.; Gloeckler, G.; Miller, W. D.
2015-12-01
Large solar events can have severe adverse global impacts at Earth. These solar events can also propagate throughout the heliosphere and into the interstellar medium. We focus on the July 2012 and Halloween 2003 solar events. We simulate these events starting from the vicinity of the Sun at 2.5 Rs. We compare our three-dimensional (3D), time-dependent simulations to available spacecraft (s/c) observations at 1 AU and beyond. Based on comparisons of the predictions from our simulations with in-situ measurements, we find that the effects of these large solar events can be observed in the outer heliosphere, the heliosheath, and even into the interstellar medium. We use two simulation models. The HAFSS (HAF Source Surface) model is a kinematic model. HHMS-PI (Hybrid Heliospheric Modeling System with Pickup protons) is a numerical magnetohydrodynamic solar wind (SW) simulation model. Both HHMS-PI and HAFSS are ideally suited for these analyses since, starting at 2.5 Rs from the Sun, they model the slowly evolving background SW and the impulsive, time-dependent events associated with solar activity. Our models naturally reproduce dynamic 3D spatially asymmetric effects observed throughout the heliosphere. Pre-existing SW background conditions have a strong influence on the propagation of shock waves from solar events. Time-dependence is a crucial aspect of interpreting s/c data. We show comparisons of our simulation results with STEREO A, ACE, Ulysses, and Voyager s/c observations.
NASA Astrophysics Data System (ADS)
Ohori, Tomohiro; Yoshida, Shuhei; Yamamoto, Manabu
2010-05-01
The rapid progress in computer performance and the widespread use of broadband networks have facilitated the transmission of huge quantities of digital information, increasing the need for high-speed, large-capacity storage devices and leading to studies of holographic data storage (HDS). Compared with laser disks, where the recording density is limited by optical diffraction, HDS provides ultrahigh capacity with multiplex recording and high-speed transfer greater than 1 Gbps; it has excellent potential for the optical memory systems of the future [1]. To develop HDS, a design theory for element technologies such as signal processing, recording materials, and optical systems is required. Therefore, this study examines techniques for simulating recording and reproduction in HDS. In simulations thus far, the medium for the recording process has usually been approximated as laminated layers of holographic thin films. This method is suitable for systematic evaluation because its computational cost is low and it allows simulation in the true form of the data, that is, as two-dimensional digital data patterns. However, it is difficult to accurately examine the influence of film thickness with a two-dimensional lamination simulation. Therefore, in this study, a technique for analyzing thick-film holograms is examined using the beam propagation method. The results of a two-dimensional simulation assuming laminated holographic thin films and a three-dimensional simulation using the beam propagation method are compared for cases where the medium need not be treated as a thick film.
Wang, Fei; Toselli, Italo; Korotkova, Olga
2016-02-10
An optical system consisting of a laser source and two independent consecutive phase-only spatial light modulators (SLMs) is shown to accurately simulate a generated random beam (first SLM) after interaction with a stationary random medium (second SLM). To illustrate the range of possibilities, a recently introduced class of random optical frames is examined on propagation in free space and several weak turbulent channels with Kolmogorov and non-Kolmogorov statistics.
Design of a predictive targeting error simulator for MRI-guided prostate biopsy.
Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor
2010-02-23
Multi-parametric MRI is a new imaging modality superior in quality to ultrasound (US), which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative that side-steps the challenges involved in deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the resulting targeting errors of simulated registrations between prostate volumes, given user-provided parameters for organ deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and four Intel Core 2 Quad CPUs, each with a speed of 2.40 GHz. Preliminary results of our simulation suggest the maximum tolerable segmentation error, given the presence of a 5.0 mm wide small tumor, is between 4 and 5 mm. We intend to validate these results via clinical trials as part of our ongoing work.
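The polar-curve idea for introducing smooth, statistically realistic contouring errors can be sketched as follows, using SciPy's periodic `CubicSpline` as a stand-in for the natural cubic spline interpolation named in the abstract; the idealized circular contour, error magnitude, and knot count are invented for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(7)

# idealized circular contour in polar form, radius 20 mm (assumed)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
r_true = np.full_like(theta, 20.0)

# draw radial errors at a few control angles, close the curve periodically
knots = np.linspace(0.0, 2.0 * np.pi, 9)
dr = rng.normal(0.0, 1.5, knots.size)   # mm-scale contouring error (assumed)
dr[-1] = dr[0]                          # periodic boundary condition
spline = CubicSpline(knots, dr, bc_type="periodic")

r_seg = r_true + spline(theta)          # smooth, perturbed "segmented" contour
max_err = np.max(np.abs(r_seg - r_true))
```

Interpolating a handful of random radial offsets with a smooth periodic spline yields a contour that deviates realistically from the truth, rather than with point-wise jitter.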
Numerical simulation of seismic wave propagation produced by earthquake by using a particle method
NASA Astrophysics Data System (ADS)
Takekawa, Junichi; Madariaga, Raul; Mikada, Hitoshi; Goto, Tada-nori
2012-12-01
We propose a forward wavefield simulation based on a particle continuum model to simulate seismic waves travelling through a complex subsurface structure with arbitrary topography. The inclusion of arbitrary topography in the numerical simulation is a key issue not only for scientific interest but also for disaster prediction and mitigation purposes. In this study, a Hamiltonian particle method (HPM) is employed. It is easy to introduce traction-free boundary conditions in HPM and to refine the particle density in space. Any model with complex geometry and velocity structure can be simulated by HPM because the connectivity between particles is easily calculated from their relative positions and free surfaces are introduced automatically. In addition, the spatial resolution of the simulation can be refined in a simple manner even in a relatively complex velocity structure with arbitrary surface topography. For these reasons, the present method has great potential for the simulation of strong ground motions. In this paper, we first investigate the dispersion properties of HPM through a plane-wave analysis. Next, we simulate surface wave propagation in an elastic half space and compare the numerical results with analytical solutions. HPM is more dispersive than FDM; however, our local refinement technique improves accuracy in a simple and effective manner. We then introduce an earthquake double-couple source in HPM and compare a simulated seismic waveform obtained with HPM with one computed with FDM to demonstrate the performance of the method. Furthermore, we simulate surface wave propagation in a model with arbitrary surface topography and compare the results with those computed with FEM. In each simulation, HPM shows good agreement with the reference solutions. Finally, we discuss the computational cost of HPM in relation to its accuracy.
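The plane-wave dispersion analysis mentioned above can be illustrated on a simpler reference scheme. The sketch below is our own (it evaluates the standard 1-D second-order leapfrog finite-difference scheme, not HPM): from the dispersion relation sin(ωΔt/2) = C sin(kΔx/2), the ratio of numerical to true phase velocity follows directly.

```python
import math

def numerical_phase_velocity_ratio(ppw, courant=0.5):
    """Ratio v_num/c for the 1-D second-order leapfrog FD scheme.

    Dispersion relation: sin(w*dt/2) = C * sin(k*dx/2), with C = c*dt/dx.
    ppw is the number of grid points per wavelength, so k*dx = 2*pi/ppw.
    """
    kdx2 = math.pi / ppw                      # k*dx/2
    return math.asin(courant * math.sin(kdx2)) / (courant * kdx2)
```

As the sampling is refined (larger `ppw`) the ratio approaches 1, which is exactly the kind of convergence a local refinement technique exploits.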
Kim, Sang-Hyuk; Suh, Hyun Sang; Cho, Min Hyoung; Lee, Soo Yeol; Kim, Tae-Seong
2009-01-01
Osteoporosis is a serious bone disease which leads to an increased risk of bone fractures. For prevention and therapy, early detection of osteoporosis is critical. In general, dual-energy X-ray absorptiometry (DXA), or densitometry, is most commonly used for the diagnosis of osteoporosis. However, DXA has some disadvantages, such as ionizing radiation, relatively high cost, and limited information on the mineralization and geometry of the bone. As an alternative to DXA, quantitative ultrasound (QUS) is being investigated. In contrast to DXA, QUS is non-ionizing and relatively inexpensive. It can also provide bone-related parameters (e.g., quantitative measurements including speed of sound and frequency-dependent attenuation). However, the estimation of these parameters is difficult, and few analytical solutions exist due to the complex behavior of ultrasound propagation in bone. As an alternative to analytical methods, most attempts use the finite difference time domain (FDTD) method to simulate ultrasound propagation in bone, with a limited capability of modeling the complex geometry of the bone. The finite element method (FEM) is a better solution since it can handle complex geometry, but it has rarely been applied due to its computational complexity. In this work, we propose an approach to FEM-based simulation of ultrasound propagation in bone. To validate our approach, we have tested simulated and real bone models from micro-CT using speed of sound as the index. Our results achieve an average computational accuracy of 97.54%.
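A speed-of-sound index like the one used for validation can be sketched with a toy through-transmission setup: simulate a loss-free received pulse, recover the propagation delay by cross-correlation, and convert it to a speed. All parameter values below are illustrative assumptions of ours, not the authors' setup, and dispersion and attenuation are ignored.

```python
import math

def gaussian_pulse(n, fs, f0=1.0e6, t0=3.0e-6, width=1.0e-6):
    """Gaussian-modulated sinusoid sampled at fs Hz (a generic toy pulse)."""
    return [math.exp(-((i / fs - t0) / width) ** 2) * math.sin(2 * math.pi * f0 * i / fs)
            for i in range(n)]

def delay_samples(tx, rx):
    """Lag (in samples) maximizing the cross-correlation of rx against tx."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx)):
        v = sum(tx[i] * rx[i + lag] for i in range(len(tx) - lag))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

fs = 20.0e6                      # 20 MHz sampling rate
L = 0.01                         # 1 cm propagation path (m)
c_true = 1800.0                  # assumed bone-like speed of sound (m/s)
tx = gaussian_pulse(400, fs)
shift = round(L / c_true * fs)   # ideal, loss-free propagation delay in samples
rx = [0.0] * shift + tx[:400 - shift]
c_est = L * fs / delay_samples(tx, rx)
```

The estimate is quantized to the sample period, so finer sampling (or sub-sample interpolation of the correlation peak) tightens the speed-of-sound estimate.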
NASA Astrophysics Data System (ADS)
Rauter, N.; Lammering, R.
2015-04-01
In order to detect micro-structural damage accurately, new methods are currently being developed. A promising tool is the generation of higher harmonic wave modes caused by nonlinear Lamb wave propagation in plate-like structures. Because the amplitudes are very small, a cumulative effect is exploited. Numerical simulations are essential for a better understanding of this inspection method. Previous studies have developed an analytical description of this phenomenon based on the five-constant nonlinear elastic theory, and the analytical solution has been confirmed by numerical simulations. In this work, the nonlinear cumulative wave propagation is first simulated and analyzed considering micro-structural cracks in thin, linear elastic, isotropic plates. It is shown that there is a cumulative effect for the S1-S2 mode pair, and the sensitivity of the relative acoustic nonlinearity parameter to such damage is validated. An influence of crack size and orientation on the nonlinear wave propagation behavior is also observed. In a second step, the micro-structural cracks are replaced by a nonlinear material model. Instead of the five-constant nonlinear elastic theory, hyperelastic material models implemented in commonly used FEM software are used to simulate the cumulative effect of higher harmonic Lamb wave generation. The cumulative effect as well as the different nonlinear behavior of the S1-S2 and S2-S4 mode pairs are reproduced by these hyperelastic material models. It is shown that both numerical simulations, accounting for micro-structural cracks on the one hand and nonlinear material on the other, lead to comparable results. Furthermore, in comparison to the five-constant nonlinear elastic theory, well-established hyperelastic material models such as Neo-Hooke and Mooney-Rivlin are a suitable alternative for simulating cumulative higher harmonic generation.
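The relative acoustic nonlinearity parameter discussed above is conventionally estimated as β' ∝ A₂/A₁², where A₁ and A₂ are the fundamental and second-harmonic amplitudes of the received signal. A minimal sketch of that extraction (our own, on synthetic data; exact because the record spans an integer number of periods):

```python
import math

def tone_amplitude(signal, fs, f):
    """Amplitude of the component at frequency f via discrete Fourier projection."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

def relative_nonlinearity(signal, fs, f0):
    """beta' ~ A2 / A1**2, the usual relative acoustic nonlinearity measure."""
    a1 = tone_amplitude(signal, fs, f0)
    a2 = tone_amplitude(signal, fs, 2 * f0)
    return a2 / a1 ** 2

# Synthetic record: fundamental of amplitude 1.0 plus a weak second harmonic.
fs, f0, n = 100.0, 5.0, 200      # 10 full periods of f0 in the record
y = [1.0 * math.sin(2 * math.pi * f0 * i / fs)
     + 0.02 * math.sin(2 * math.pi * 2 * f0 * i / fs) for i in range(n)]
```

In a cumulative-effect study, β' would be tracked against propagation distance; a linear growth is the signature reported for the S1-S2 mode pair.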
Experimental study on propagation of fault slip along a simulated rock fault
NASA Astrophysics Data System (ADS)
Mizoguchi, K.
2015-12-01
Around pre-existing geological faults in the crust, off-fault damage zones are often observed, containing fractures on scales from ~mm to ~m whose density typically increases with proximity to the fault. One of the fracture formation processes is considered to be dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks as slip propagates. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surface simulates a fault 35 cm in length and 1 cm in width. The experiments were done in a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied to the fault plane at an angle of 60° to the loading direction. When σ = 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded at a 1 MHz sampling rate with 16-bit resolution. When σ = 4.8 kN is applied, we observed several fault slip events in which slip nucleates spontaneously in a subsection of the fault and propagates across the whole fault. However, the propagation speed is about 1.2 km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic rupture events. More effort is needed to reproduce earthquake-like slip events in the experiments. This work is supported by JSPS KAKENHI (26870912).
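A propagation speed like the ~1.2 km/s quoted above follows from the arrival times of the slip front at successive gauges. A minimal sketch of that estimate, with hypothetical gauge spacing and noise-free arrivals, is a least-squares fit of position against arrival time:

```python
def rupture_speed(positions_m, arrivals_s):
    """Least-squares slope of position vs. arrival time = propagation speed (m/s)."""
    n = len(positions_m)
    tm = sum(arrivals_s) / n
    xm = sum(positions_m) / n
    num = sum((t - tm) * (x - xm) for t, x in zip(arrivals_s, positions_m))
    den = sum((t - tm) ** 2 for t in arrivals_s)
    return num / den

# Hypothetical gauge array: one gauge every 5 cm, slip front moving at 1.2 km/s.
gauges = [0.05 * k for k in range(7)]        # positions along the fault (m)
arrivals = [x / 1200.0 for x in gauges]      # first-arrival times (s)
```

With real gauge data the same fit absorbs timing jitter, and the residuals indicate whether the front propagated at a constant speed.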
Simulating underwater plasma sound sources to evaluate focusing performance and analyze errors
NASA Astrophysics Data System (ADS)
Ma, Tian; Huang, Jian-Guo; Lei, Kai-Zhuo; Chen, Jian-Feng; Zhang, Qun-Fei
2010-03-01
Focused underwater plasma sound sources are being applied in more and more fields. Focusing performance is one of the most important factors determining the transmission distance and peak values of the pulsed sound waves. The sound source’s components and focusing mechanism were analyzed. A model was built in 3D Max, and wave strength was measured on the simulation platform. Error analysis was fully integrated into the model so that the effects of processing and installation errors on sound focusing performance could be studied. Practical ways to limit these errors were proposed. The results of the error analysis should guide the design, machining, placement, debugging, and application of underwater plasma sound sources.
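One generic way to quantify how machining and installation errors degrade focusing, sketched here under our own simplifying assumption that each error contributes only a random phase at the focus, is a Monte Carlo estimate of the coherent-summation loss; small-error theory predicts a Strehl-like factor of about exp(-σ²/2).

```python
import cmath
import math
import random

def focal_amplitude_loss(sigma_rad, n_elements=2000, n_trials=50, seed=0):
    """Mean on-axis amplitude, relative to the error-free case, when each
    contribution to the focus carries an independent Gaussian phase error."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        s = sum(cmath.exp(1j * rng.gauss(0.0, sigma_rad)) for _ in range(n_elements))
        total += abs(s) / n_elements     # perfect focusing would give exactly 1
    return total / n_trials

sigma = 0.5                              # rms phase error (rad), illustrative
predicted = math.exp(-sigma ** 2 / 2)    # small-error theoretical loss factor
```

Mapping a geometric tolerance (e.g., a reflector surface deviation) to σ then gives a direct error budget for the focal peak pressure.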
SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors
Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I
2014-06-01
Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6 MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into MATLAB, optionally inserted a simulated error (a rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair. In addition, 560 positive tests (with error) were performed with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors of smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis.
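The ROC construction described above (sweeping τ over the percent-of-pixels-passing scores of errored and error-free images) can be sketched generically. The code below is our own illustration, not the study's analysis scripts: an image is flagged as errored when its gamma pass rate falls below the threshold τ.

```python
def roc_curve(neg_scores, pos_scores):
    """(FPR, TPR) points obtained by sweeping a decision threshold tau.

    A 'score' plays the role of the percent of pixels with gamma < kappa;
    an image is flagged as errored when its score falls below tau.
    """
    thresholds = sorted(set(neg_scores) | set(pos_scores))
    points = []
    for tau in thresholds + [float("inf")]:
        tpr = sum(s < tau for s in pos_scores) / len(pos_scores)
        fpr = sum(s < tau for s in neg_scores) / len(neg_scores)
        points.append((fpr, tpr))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

An AUC near 1 corresponds to an error size/intensity that is reliably detectable; as the inserted error shrinks, the score distributions overlap and the AUC falls toward 0.5.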
Wiegart, L.; Fluerasu, A.; Chubar, O.; Bruhwiler, D.
2016-07-27
We have applied fully and partially coherent synchrotron radiation wavefront propagation simulations, implemented in the “Synchrotron Radiation Workshop” (SRW) computer code, to analyse the effects of imperfect mirrors and a monochromator at the Coherent Hard X-ray beamline. This beamline is designed for X-ray Photon Correlation Spectroscopy, a technique that relies heavily on the partial coherence of the X-ray beam and benefits from careful preservation of the X-ray wavefront. We present simulations and a comparison with the measured beam profile at the sample position, which show the impact of imperfect optics on the wavefront.
Simulation of Transrib HIFU Propagation and the Strategy of Phased-array Activation
NASA Astrophysics Data System (ADS)
Zhou, Yufeng; Wang, Mingjun
Liver ablation is challenging in high-intensity focused ultrasound (HIFU) because of the presence of ribs and the great inhomogeneity of multi-layer tissue. In this study, the angular spectrum approach (ASA) is used to compute wave propagation from a phased-array HIFU transducer, and diffraction, attenuation, and nonlinearity are accounted for by means of a second-order operator-splitting method. The bioheat equation is used to simulate the subsequent temperature elevation and lesion formation for a shifted focus and multiple foci. In summary, our approach can simulate the performance of phased-array HIFU in the clinic and thereby support the development of an appropriate treatment plan.
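The bioheat stage of such a pipeline can be sketched with a 1-D explicit finite-difference step of a Pennes-type equation. All coefficients below are illustrative assumptions of ours, not values from the study.

```python
def bioheat_step(T, dt, dx, alpha=1.4e-7, w=0.004, T_a=37.0, q=None):
    """One explicit FD step of a 1-D Pennes-type bioheat equation:

        dT/dt = alpha * d2T/dx2 - w * (T - T_a) + q

    alpha: thermal diffusivity (m^2/s), w: perfusion rate (1/s),
    q: absorbed heating rate (K/s). Boundary temperatures are held fixed.
    """
    if q is None:
        q = [0.0] * len(T)
    new = T[:]
    for i in range(1, len(T) - 1):
        lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
        new[i] = T[i] + dt * (alpha * lap - w * (T[i] - T_a) + q[i])
    return new

# A focal heating spot in the middle of a 1-cm tissue segment.
n, dx, dt = 101, 1.0e-4, 0.02        # dt < dx**2 / (2*alpha) for stability
T = [37.0] * n
q = [0.0] * n
q[n // 2] = 20.0                     # 20 K/s deposition at the focus (toy value)
for _ in range(200):                 # 4 s of simulated sonication
    T = bioheat_step(T, dt, dx, q=q)
```

Thresholding the resulting thermal history (e.g., via thermal dose) is what turns such temperature fields into predicted lesion maps for shifted or multiple foci.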
NASA Astrophysics Data System (ADS)
Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros
2012-10-01
We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
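As a minimal stand-in for the transitional Markov chain Monte Carlo described above (which uses tempered intermediate distributions and many parallel chains), a plain random-walk Metropolis sampler shows the basic step of populating a posterior over a force-field parameter. The toy Gaussian posterior is our own illustration.

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis sampler: a minimal sketch, not transitional MCMC."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    out = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:   # accept/reject
            x, lp = cand, lp_cand
        out.append(x)
    return out

# Toy 'posterior' over a single force-field parameter: Gaussian around 2.0.
samples = metropolis(lambda x: -0.5 * ((x - 2.0) / 0.3) ** 2, 2.0, 20000)
mean = sum(samples) / len(samples)
```

In the paper's setting, each `log_post` evaluation would require an MD run, which is why surrogate models and parallel scheduling matter so much.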
NASA Technical Reports Server (NTRS)
Goldberg, Louis F.
1992-01-01
Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.
Bossy, Emmanuel; Padilla, Frédéric; Peyrin, Françoise; Laugier, Pascal
2005-12-07
Three-dimensional numerical simulations of ultrasound transmission were performed through 31 trabecular bone samples measured by synchrotron microtomography. The synchrotron microtomography provided high resolution 3D mappings of bone structures, which were used as the input geometry in the simulation software developed in our laboratory. While absorption (i.e. the absorption of ultrasound through dissipative mechanisms) was not taken into account in the algorithm, the simulations reproduced major phenomena observed in real through-transmission experiments in trabecular bone. The simulated attenuation (i.e. the decrease of the transmitted ultrasonic energy) varies linearly with frequency in the MHz frequency range. Both the speed of sound (SOS) and the slope of the normalized frequency-dependent attenuation (nBUA) increase with the bone volume fraction. Twenty-five out of the thirty-one samples exhibited negative velocity dispersion. One sample was rotated to align the main orientation of the trabecular structure with the direction of ultrasonic propagation, leading to the observation of a fast and a slow wave. Coupling numerical simulation with real bone architecture therefore provides a powerful tool to investigate the physics of ultrasound propagation in trabecular structures. As an illustration, comparison between results obtained on bone modelled either as a fluid or a solid structure suggested the major role of mode conversion of the incident acoustic wave to shear waves in bone to explain the large contribution of scattering to the overall attenuation.
Sampling data for OSSEs. [simulating errors for WINDSAT Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Hoffman, Ross
1988-01-01
For the sake of realism, an OSSE should incorporate at least some of the high-frequency, small-scale phenomena that are suppressed by atmospheric models; these phenomena should be present in the realistic atmosphere sampled by all observing sensor systems whose data are being used. Errors are presently generated for an OSSE in a way that encompasses representational errors, sampling, geophysical local bias, random error, and sensor filtering.
Testing the Propagating Fluctuations Model with a Long, Global Accretion Disk Simulation
NASA Astrophysics Data System (ADS)
Hogg, J. Drew; Reynolds, Christopher S.
2016-07-01
The broadband variability of many accreting systems displays characteristic structures: log-normal flux distributions, root-mean-square (rms)-flux relations, and long inter-band lags. These characteristics are usually interpreted as inward-propagating fluctuations of the mass accretion rate in an accretion disk, driven by stochasticity of the angular momentum transport mechanism. We present the first analysis of propagating fluctuations in a long-duration, high-resolution, global three-dimensional magnetohydrodynamic (MHD) simulation of a geometrically thin (h/r ≈ 0.1) accretion disk around a black hole. While the dynamical-timescale turbulent fluctuations in the Maxwell stresses are too rapid to drive radially coherent fluctuations in the accretion rate, we find that the low-frequency quasi-periodic dynamo action introduces low-frequency fluctuations in the Maxwell stresses, which then drive the propagating fluctuations. Examining both the mass accretion rate and emission proxies, we recover log-normality, linear rms-flux relations, and radial coherence that would produce inter-band lags. Hence, we successfully connect the phenomenology of propagating fluctuations to modern MHD accretion disk theory.
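The linear rms-flux relation recovered in the simulation can be illustrated with a toy log-normal light curve. The construction below (exponentiated Gaussian AR(1) noise) is a standard textbook model of ours, not the MHD data itself: per-segment rms tracks per-segment mean flux.

```python
import math
import random

def rms_flux_points(n_segments=200, seg_len=100, phi=0.9, s=0.3, seed=0):
    """Per-segment (mean flux, rms) pairs for a log-normal toy light curve
    built by exponentiating an AR(1) Gaussian process."""
    rng = random.Random(seed)
    g = 0.0
    points = []
    for _ in range(n_segments):
        seg = []
        for _ in range(seg_len):
            g = phi * g + rng.gauss(0.0, s)    # correlated Gaussian driver
            seg.append(math.exp(g))            # multiplicative (log-normal) flux
        m = sum(seg) / seg_len
        rms = math.sqrt(sum((f - m) ** 2 for f in seg) / seg_len)
        points.append((m, rms))
    return points

def pearson(points):
    xs, ys = zip(*points)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

For an additive (Gaussian) light curve the correlation between segment mean and segment rms vanishes; its presence is the multiplicative signature the abstract refers to.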
NASA Astrophysics Data System (ADS)
Ruan, H.; Wang, W.; Dou, X.; Yue, J.; Du, J.; Lei, J.
2016-12-01
The migrating terdiurnal tide in the mesosphere and lower thermosphere (MLT) is suggested to contribute to the formation of the well-known Midnight Temperature/Density Maximum (MTM/MDM) in the upper thermosphere, with a significant seasonal variation. In this study, the TIEGCM and eCMAM models are used to investigate the seasonal variations of the upward propagation of the migrating terdiurnal tide from the MLT. Three main conclusions are drawn from a series of controlled simulations: 1) Both the background zonal and meridional winds in the thermosphere can significantly affect the upward propagation of the terdiurnal tide. 2) The terdiurnal tide in the MLT affects not only the latitudinal distributions and magnitudes of the terdiurnal component in the upper thermosphere but also the influence of the background winds on the upward propagation of the tide. 3) The hemispheric asymmetry in the background mean and vertical gradient of the thermospheric temperature also contributes to the seasonal variation of the terdiurnal tide's upward propagation.
One-way approximation for the simulation of weak shock wave propagation in atmospheric flows.
Gallin, Louis-Jonardan; Rénier, Mathieu; Gaudard, Eric; Farges, Thomas; Marchiano, Régis; Coulouvrat, François
2014-05-01
A numerical scheme is developed to simulate the propagation of weak acoustic shock waves in the atmosphere with no absorption. It generalizes the method previously developed for a heterogeneous medium [Dagrau, Rénier, Marchiano, and Coulouvrat, J. Acoust. Soc. Am. 130, 20-32 (2011)] to the case of a moving medium. It is based on an approximate scalar wave equation for potential, rewritten in a moving time frame, and separated into three parts: (i) the linear wave equation in a homogeneous and quiescent medium, (ii) the effects of atmospheric winds and of density and speed of sound heterogeneities, and (iii) nonlinearities. Each effect is then solved separately by an adapted method: angular spectrum for the wave equation, finite differences for the flow and heterogeneity corrections, and analytical method in time domain for nonlinearities. To keep a one-way formulation, only forward propagating waves are kept in the angular spectrum part, while a wide-angle parabolic approximation is performed on the correction terms. The numerical process is validated in the case of guided modal propagation with a shear flow. It is then applied to the case of blast wave propagation within a boundary layer flow over a flat and rigid ground.
NASA Astrophysics Data System (ADS)
Shay, M. A.; Drake, J. F.
2009-12-01
In a recent substorm case study using THEMIS data [1], it was inferred that auroral intensification occurred 96 seconds after reconnection onset initiated a substorm in the magnetotail. These conclusions have been the subject of some controversy [2,3]. The time delay between reconnection and auroral intensification requires a propagation speed significantly faster than can be explained by Alfvén waves. Kinetic Alfvén waves, however, can be much faster and could possibly explain the time lag. To test this possibility, we simulate large-scale reconnection events with the kinetic PIC code P3D and examine the disturbances on a magnetic field line as it propagates through a reconnection region. In the regions near the separatrices but relatively far from the x-line, the propagation physics is expected to be governed by the physics of kinetic Alfvén waves. Indeed, we find that the propagation speed of the magnetic disturbance roughly scales with kinetic Alfvén speeds. We also examine the energization of electrons due to this disturbance. Consequences for our understanding of substorms will be discussed. [1] Angelopoulos, V. et al., Science, 321, 931, 2008. [2] Lui, A. T. Y., Science, 324, 1391-b, 2009. [3] Angelopoulos, V. et al., Science, 324, 1391-c, 2009.
Simulation of Propagation and Transformation of THz Bessel Beams with Orbital Angular Momentum
NASA Astrophysics Data System (ADS)
Choporova, Yulia; Knyazev, Boris; Mitkov, Mikhail; Osintseva, Natalya; Pavelyev, Vladimir
Recently, terahertz Bessel beams with orbital angular momentum ("vortex beams") with topological charges l = ±1 and l = ±2 were generated for the first time using radiation of the Novosibirsk free electron laser (NovoFEL) and silicon binary phase axicons (Knyazev et al., Phys. Rev. Letters, vol. 115, Art. 163901, 2015). Such beams are promising for applications in wireless communication and remote sensing. In the present paper, numerical modelling of the generation and transformation of vortex beams based on scalar diffraction theory has been performed. It was shown that Bessel beams with first-ring diameters of 1.7 and 3.2 mm for topological charges ±1 and ±2, respectively, propagate over a distance of up to 160 mm without dispersion. Calculations showed that the propagation distance can be increased by reducing the radiation wavelength or by using a telescopic system. In the first case, the propagation distance grows in inverse proportion to the wavelength, whereas in the latter case it increases as the square of the ratio of the telescope lens foci. Modelling of the passage of vortex Bessel beams through a random phase screen and amplitude obstacles demonstrated the self-healing ability of the beams. Even if an obstacle with a diameter of 10 mm blocks several central rings of the Bessel beam, the beam reconstructs itself after propagating a length of about 100 mm. The results of the simulations are in good agreement with the experimental data, where the latter exist.
Simulations of laser propagation and ionization in l'OASIS experiments
Dimitrov, D.A.; Bruhwiler, D.L.; Leemans, W.; Esarey, E.; Catravas, P.; Toth, C.; Shadwick, B.; Cary, J.R.; Giacone, R.
2002-06-30
We have conducted particle-in-cell simulations of laser pulse propagation through neutral He, including the effects of tunneling ionization, within the parameter regime of the l'OASIS experiments [1,2] at the Lawrence Berkeley National Laboratory (LBNL). The simulations show the theoretically predicted [3] blue shifting of the laser frequency at the leading edge of the pulse. The observed blue shifting is in good agreement with the experimental data. These results indicate that such computations can be used to accurately simulate a number of important effects related to tunneling ionization for laser-plasma accelerator concepts, such as steepening due to ionization-induced pump depletion, which can seed and enhance instabilities. Our simulations show self-modulation occurring earlier when tunneling ionization is included than for a pre-ionized plasma.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy, and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards, the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
Accelerating Simulation of Seismic Wave Propagation by Multi-GPUs (Invited)
NASA Astrophysics Data System (ADS)
Okamoto, T.; Takenaka, H.; Nakamura, T.; Aoki, T.
2010-12-01
Simulation of seismic wave propagation is essential in modern seismology: the effects of irregular surface topography, internal discontinuities, and heterogeneity on seismic waveforms must be precisely modeled in order to probe the interiors of the Earth and other planets, to study earthquake sources, and to evaluate the strong ground motions due to earthquakes. Devices with high computing performance are necessary because large-scale simulations require more than one billion grid points. The GPU (Graphics Processing Unit) is a remarkable device for its many-core architecture with more than one hundred processing units and its high memory bandwidth. GPUs now deliver extremely high computing performance (more than one teraflops in single-precision arithmetic) at reduced power and cost compared to conventional CPUs. The simulation of seismic wave propagation is a memory-intensive problem which involves a large amount of data transfer between memory and the arithmetic units, while the number of arithmetic calculations is relatively small. Therefore the simulation should benefit from the high memory bandwidth of the GPU, and several approaches to adapting GPUs to the simulation of seismic wave propagation have been emerging (e.g., Komatitsch et al., 2009; Micikevicius, 2009; Michea and Komatitsch, 2010; Aoi et al., SSJ 2009, JPGU 2010; Okamoto et al., SSJ 2009, SACSIS 2010). In this paper we describe our approach to accelerating the simulation of seismic wave propagation based on the finite-difference method (FDM) by adopting multi-GPU computing. The finite-difference scheme we use is the three-dimensional, velocity-stress staggered-grid scheme (e.g., Graves 1996; Moczo et al., 2007) for heterogeneous media with perfect elasticity (incorporation of anelasticity is underway). We use the GPUs (NVIDIA S1070, 1.44 GHz) installed in the TSUBAME grid cluster in the Global Scientific Information and Computing Center, Tokyo Institute of Technology and NVIDIA
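The velocity-stress staggered-grid scheme cited above can be shown in its simplest 1-D form. This toy version is ours (density set to 1, so μ = c²); an initial velocity pulse splits into two half-amplitude waves travelling at ±c, as d'Alembert's solution predicts.

```python
import math

def simulate_1d(nx=301, nt=100, c=1.0, dx=1.0, courant=0.5):
    """1-D velocity-stress staggered-grid FD scheme (toy analogue of the 3-D
    scheme). Velocity lives on integer grid points, stress on half-integers."""
    dt = courant * dx / c
    mu = c ** 2                                           # rho = 1
    v = [math.exp(-((i - nx // 2) / 10.0) ** 2) for i in range(nx)]
    s = [0.0] * (nx - 1)
    for _ in range(nt):
        for i in range(nx - 1):          # stress update from velocity gradient
            s[i] += dt * mu * (v[i + 1] - v[i]) / dx
        for i in range(1, nx - 1):       # velocity update from stress gradient
            v[i] += dt * (s[i] - s[i - 1]) / dx
    return v

v = simulate_1d()
peak = max(range(len(v)), key=lambda i: abs(v[i]))
# After nt*dt = 50 time units, the two half-amplitude pulses sit near 150 +/- 50.
```

The inner loops over grid points are exactly the memory-bound stencil updates that the abstract argues map well onto the GPU's high memory bandwidth.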
Sampling errors in free energy simulations of small molecules in lipid bilayers.
Neale, Chris; Pomès, Régis
2016-10-01
Free energy simulations are a powerful tool for evaluating the interactions of molecular solutes with lipid bilayers as mimetics of cellular membranes. However, these simulations are frequently hindered by systematic sampling errors. This review highlights recent progress in computing free energy profiles for inserting molecular solutes into lipid bilayers. Particular emphasis is placed on a systematic analysis of the free energy profiles, identifying the sources of sampling errors that reduce computational efficiency, and highlighting methodological advances that may alleviate sampling deficiencies. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg.
Shinozaki, Takashi; Naruse, Yasushi; Câteau, Hideyuki
2013-10-01
This study investigates the effect of gap junctions on firing propagation in a feedforward neural network by numerical simulation with biologically plausible parameters. Gap junctions are electrical couplings between two cells connected by a binding protein, connexin. Recent electrophysiological studies have reported that a large number of inhibitory neurons in the mammalian cortex are mutually connected by gap junctions, and that gap-junction-mediated synchronization spreads over several hundred microns, suggesting a strong effect on the dynamics of the cortical network. However, the effect of gap junctions on firing propagation in cortical circuits has not been examined systematically. In this study, we perform numerical simulations using biologically plausible parameters to clarify this effect on population firing in a feedforward neural network. The results suggest that gap junctions switch the temporally uniform firing in a layer to temporally clustered firing in subsequent layers, resulting in an enhancement of the propagation of population firing in the feedforward network. Because gap junctions are often modulated under physiological conditions, we speculate that they could be related to a gating function for population firing in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2008-01-01
An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis results and the benchmark results were compared, and good agreement was achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front, as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging, but further assessment on a structural level is required.
A Discussion on the Errors in the Surface Heat Fluxes Simulated by a Coupled GCM.
NASA Astrophysics Data System (ADS)
Yu, Jin-Yi; Mechoso, Carlos R.
1999-02-01
This paper contrasts the sea surface temperature (SST) and surface heat flux errors in the Tropical Pacific simulated by the University of California, Los Angeles, coupled atmosphere-ocean general circulation model (CGCM) and by its atmospheric component (AGCM) using prescribed SSTs. The usefulness of such a comparison is discussed in view of the sensitivities of the coupled system. Off the equator, the CGCM simulates more realistic surface heat fluxes than the AGCM, except in the eastern Pacific south of the equator where the coupled model produces a spurious intertropical convergence zone. The AGCM errors are dominated by excessive latent heat flux, except in the stratus regions along the coasts of California and Peru where errors are dominated by excessive shortwave flux. The CGCM tends to balance the AGCM errors by either correctly decreasing the evaporation at the expense of cold SST biases or erroneously increasing the evaporation at the expense of warm SST biases. At the equator, errors in simulated SSTs are amplified by the feedbacks of the coupled system. Over the western equatorial Pacific, the CGCM produces a cold SST bias that is a manifestation of a spuriously elongated cold tongue. The AGCM produces realistic values of surface heat flux. Over the cold tongue in the eastern equatorial Pacific, the CGCM simulates realistic annual variations in SST. In the simulation, however, the relationship between variations in SST and surface latent heat flux corresponds to a negative feedback, while in the observation it corresponds to a positive feedback. Such an erroneous feature of the CGCM is linked to deficiencies in the simulation of the cross-equatorial component of the surface wind. The reasons for the success in the simulation of SST in the equatorial cold tongue despite the erroneous surface heat flux are examined.
Quantitative analyses of spectral measurement error based on Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin
2015-03-01
The spectral measurement error is controlled by the resolution and sensitivity of the spectroscopic instrument and by the instability of the measurement environment. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating reference point measurement as an example, there is unavoidably a deviation between the measuring position and the theoretical position due to various influencing factors. To determine the error caused by the positioning accuracy of the measuring device, a Monte Carlo simulation was carried out at a wavelength of 1310 nm for a 2% Intralipid solution. The MC simulation was performed with 10^10 photons and a detector-ring sampling interval of 1 μm. The data from the MC simulation are analyzed on the basis of the thinning and calculating method (TCM) proposed in this talk. The results indicate that TCM can be used to quantitatively analyze the spectral measurement error caused by positioning inaccuracy.
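The positioning-error analysis described above can be caricatured with a tiny Monte Carlo sketch (an assumption-laden toy, not the paper's photon-transport model): jitter the detector's radial position around its nominal value and measure the resulting spread of a simple exponentially attenuated signal. The signal model, attenuation coefficient, and jitter magnitude are all invented for illustration.

```python
import math, random, statistics

def mc_position_error(r_nominal_mm, sigma_mm, mu_per_mm, n=20000, seed=1):
    """Propagate Gaussian positioning jitter into a detected-intensity spread."""
    random.seed(seed)
    samples = []
    for _ in range(n):
        r = r_nominal_mm + random.gauss(0.0, sigma_mm)  # jittered radius
        samples.append(math.exp(-mu_per_mm * r))        # attenuated intensity
    mean = statistics.fmean(samples)
    rel_err = statistics.stdev(samples) / mean          # relative spread
    return mean, rel_err

# nominal source-detector distance 2 mm, 10 um positioning jitter,
# effective attenuation 1 /mm -- all invented numbers
mean_i, rel_err = mc_position_error(2.0, 0.01, 1.0)
```

For this exponential model the relative error is approximately mu times the positioning sigma, which the MC estimate recovers.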
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2014-06-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.
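The accept/reject-and-adapt loop behind such timestep control can be sketched generically with step doubling, one standard way to estimate a local error (the paper derives its own estimates for the split RDME; the scalar operators below are toy stand-ins, not the reaction and diffusion operators of the method):

```python
def lie_step(y, dt, react, diffuse):
    """First-order Lie splitting: apply the reaction update, then diffusion."""
    return diffuse(react(y, dt), dt)

def adaptive_split(y, t_end, dt, tol, react, diffuse):
    t = 0.0
    while t_end - t > 1e-12:
        dt = min(dt, t_end - t)
        full = lie_step(y, dt, react, diffuse)
        half = lie_step(lie_step(y, dt / 2, react, diffuse), dt / 2,
                        react, diffuse)
        err = abs(full - half)               # step-doubling local error estimate
        if err <= tol:
            y, t = half, t + dt              # accept the more accurate value
            dt *= 1.5                        # cautiously enlarge the step
        else:
            dt *= 0.5                        # reject, retry with a smaller step
    return y

# toy scalar "reaction" and "diffusion" updates (explicit Euler)
react = lambda y, dt: y + dt * (-2.0 * y)    # dy/dt = -2y
diffuse = lambda y, dt: y + dt * (-1.0 * y)  # dy/dt = -y
y_end = adaptive_split(1.0, 1.0, dt=0.1, tol=1e-5, react=react, diffuse=diffuse)
# exact solution of dy/dt = -3y at t = 1 is exp(-3)
```

The point is the control structure: the solver, not the practitioner, picks the timestep, shrinking it where the splitting error is large and growing it where it is small.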
Simulation of Lamb wave propagation for the characterization of complex structures.
Agostini, Valentina; Delsanto, Pier Paolo; Genesio, Ivan; Olivero, Dimitri
2003-04-01
Reliable numerical simulation techniques represent a very valuable tool for analysis. For this purpose we investigated the applicability of the local interaction simulation approach (LISA) to the study of the propagation of Lamb waves in complex structures. The LISA allows very fast and flexible simulations, especially in conjunction with parallel processing, and it is particularly useful for complex (heterogeneous, anisotropic, attenuative, and/or nonlinear) media. We present simulations performed on a glass fiber reinforced plate, initially undamaged and then with a hole passing through its thickness (passing-by hole). In order to give a validation of the method, the results are compared with experimental data. Then we analyze the interaction of Lamb waves with notches, delaminations, and complex structures. In the first case the discontinuity due to a notch generates mode conversion, which may be used to predict the defect shape and size. In the case of a single delamination, the most striking "signature" is a time-shift delay, which may be observed in the temporal evolution of the signal recorded by a receiver. We also present some results obtained on a geometrically complex structure. Due to the inherent discontinuities, a wealth of propagation mechanisms are observed, which can be exploited for the purpose of quantitative nondestructive evaluation (NDE).
Impacts of Characteristics of Errors in Radar Rainfall Estimates for Rainfall-Runoff Simulation
NASA Astrophysics Data System (ADS)
KO, D.; PARK, T.; Lee, T. S.; Shin, J. Y.; Lee, D.
2015-12-01
For flood prediction, weather radar has been commonly employed to measure the amount of precipitation and its spatial distribution. However, rainfall estimated from radar contains uncertainty caused by errors such as beam blockage and ground clutter. Although previous studies have focused on removing errors from radar data, it is crucial to evaluate the runoff volumes that are influenced primarily by these radar errors. Furthermore, the resolution of the rainfall modeled in previous studies of rainfall uncertainty analysis or distributed hydrological simulation is too coarse for real applications. Therefore, in the current study, we tested the effects of radar rainfall errors on rainfall-runoff with a high-resolution approach, called the spatial error model (SEM). The SEM synthetically generates random, spatially cross-correlated radar errors. A number of events for the Nam River dam region were tested to investigate the peak discharge from a basin as a function of error variance. The results indicate that dependent errors produce much higher variations in peak discharge than independent random errors. To further investigate the effect of the magnitude of cross-correlation between radar errors, different magnitudes of spatial cross-correlation were employed in the rainfall-runoff simulation. The results demonstrate that stronger correlation leads to higher variation of peak discharge, and vice versa. We conclude that the error structure in radar rainfall estimates significantly affects prediction of the runoff peak. Therefore, efforts must consider not only removing the radar rainfall error itself but also weakening the cross-correlation structure of radar errors in order to forecast flood events more accurately. Acknowledgements This research was supported by a grant from a Strategic Research Project (Development of Flood Warning and Snowfall Estimation Platform Using Hydrological Radars), which
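A common way to realize such a spatial error model, and presumably the spirit of the SEM above, is to draw spatially correlated Gaussian errors via Cholesky decomposition of an assumed covariance. The sketch below is a hedged illustration with invented pixel locations, error variance, and correlation length, not the study's calibrated model.

```python
import math, random

def cholesky(a):
    """Lower-triangular L with L L^T = a, for symmetric positive-definite a."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def correlated_errors(xy, sigma, corr_len, seed=0):
    """Draw one realization of spatially cross-correlated Gaussian errors."""
    random.seed(seed)
    n = len(xy)
    # exponential covariance model between pixel locations
    cov = [[sigma ** 2 * math.exp(-math.dist(xy[i], xy[j]) / corr_len)
            for j in range(n)] for i in range(n)]
    L = cholesky(cov)                       # cov = L L^T
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

pixels = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]       # radar pixels (km)
errs = correlated_errors(pixels, sigma=0.3, corr_len=5.0)
```

Each realization of `errs` can be imposed on a radar rainfall field; an ensemble of such fields then drives the rainfall-runoff model.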
NASA Astrophysics Data System (ADS)
Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian
2016-08-01
Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modelling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5 per cent and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10 per cent in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1 per cent at periods greater than 30 s in most oceanic regions, but the error is up to 2 per cent for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.
NASA Astrophysics Data System (ADS)
Lu, Q.; Lu, S.; Lin, Y.; Wang, X.
2016-12-01
Dipolarization fronts (DFs) as earthward propagating flux ropes (FRs) in the Earth's magnetotail are presented and investigated with a three-dimensional (3-D) global hybrid simulation for the first time. In the simulation, several small-scale earthward propagating FRs are found to be formed by multiple X line reconnection in the near tail. During their earthward propagation, the magnetic field Bz of the FRs becomes highly asymmetric due to the imbalance of the reconnection rates between the multiple X lines. At the later stage, when the FRs approach the near-Earth dipole-like region, the antireconnection between the southward/negative Bz of the FRs and the northward geomagnetic field leads to the erosion of the southward magnetic flux of the FRs, which further aggravates the Bz asymmetry. Eventually, the FRs merge into the near-Earth region through the antireconnection. These earthward propagating FRs can fully reproduce the observational features of the DFs, e.g., a sharp enhancement of Bz preceded by a smaller amplitude Bz dip, an earthward flow enhancement, the presence of the electric field components in the normal and dawn-dusk directions, and ion energization. Our results show that the earthward propagating FRs can be used to explain the DFs observed in the magnetotail. The thickness of the DFs is on the order of several ion inertial lengths, and the electric field normal to the front is found to be dominated by the Hall physics. During the earthward propagation from the near-tail to the near-Earth region, the speed of the FR/DFs increases from 150 km/s to 1000 km/s. The FR/DFs can be tilted in the GSM (x, y) plane with respect to the y (dawn-dusk) axis and only extend several Earth radii in this direction. Moreover, the structure and evolution of the FRs/DFs are nonuniform in the dawn-dusk direction, which indicates that the DFs are essentially 3-D.
Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading
NASA Technical Reports Server (NTRS)
Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.
1998-01-01
The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
Simulation of Crack Propagation in Engine Rotating Components Under Variable Amplitude Loading
NASA Technical Reports Server (NTRS)
Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.
1999-01-01
The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
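The mission-mix methodology described above (random mission selection, Poisson-distributed repeat counts) can be sketched in a few lines; the mission names, selection weights, and mean occurrence rates below are invented for illustration, not values from the report:

```python
import math, random

def poisson(lam, rng):
    """Knuth's method: draw a Poisson(lam) variate from uniform samples."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mission_history(n_blocks, missions, weights, mean_repeats, seed=7):
    """One random usage history: pick mission types, repeat each a Poisson
    number of times, and concatenate."""
    rng = random.Random(seed)
    history = []
    for _ in range(n_blocks):
        m = rng.choices(missions, weights=weights)[0]  # select mission type
        history.extend([m] * poisson(mean_repeats[m], rng))
    return history

missions = ["training", "transport", "combat"]         # hypothetical types
weights = [0.5, 0.3, 0.2]                              # proportion flown
mean_repeats = {"training": 4.0, "transport": 2.0, "combat": 1.0}
hist = mission_history(20, missions, weights, mean_repeats)
```

Generating many such histories and running a fracture-mechanics calculation on each yields the cumulative distribution of crack propagation lives mentioned in the abstract.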
An Atomistic Simulation of Crack Propagation in a Nickel Single Crystal
NASA Technical Reports Server (NTRS)
Karimi, Majid
2002-01-01
The main objective of this paper is to determine the mechanisms of crack propagation in a nickel single crystal. The motivation for selecting nickel as a case study is that its physical properties are very close to those of nickel-base superalloys. We aim to identify some generic trends that lead a single-crystalline material to failure. We believe that the results obtained here will be of interest to experimentalists in guiding them toward a more optimized experimental strategy. Dynamic crack propagation experiments are very difficult to perform. We are partially motivated to fill the gap by generating simulation results in lieu of experimental ones for cases where the experiment cannot be done or where data are not available.
NASA Astrophysics Data System (ADS)
Gai, F. F.; Pang, B. J.; Guan, G. S.
2009-03-01
In this paper the SPH method in AUTODYN-2D is used to investigate the propagation of debris clouds inside gas-filled pressure vessels under hypervelocity impact. The effect of the equation of state on the debris cloud has been investigated. Numerical simulations were performed to analyze the effects of the gas pressure and the impact conditions on the propagation of the debris clouds. The results show that, within a certain pressure range, increasing the gas pressure can reduce the damage caused by the debris cloud impacting the back wall of the vessel. A smaller projectile leads to stronger deceleration of the axial velocity of the debris cloud, and the deceleration increases with increasing impact velocity. The time at which venting begins is related to the "vacuum column" along the impact axis. The paper also studies the effect of impact velocity on the gas shock wave.
3D dynamic simulation of crack propagation in extracorporeal shock wave lithotripsy
NASA Astrophysics Data System (ADS)
Wijerathne, M. L. L.; Hori, Muneo; Sakaguchi, Hide; Oguni, Kenji
2010-06-01
Some experimental observations of shock wave lithotripsy (SWL), including 3D dynamic crack propagation, are simulated with the aim of reproducing the fragmentation of kidney stones by SWL. Extracorporeal shock wave lithotripsy (ESWL) is the fragmentation of kidney stones by focusing an ultrasonic pressure pulse onto the stones. 3D models with fine discretization are used to accurately capture the high-amplitude shear shock waves. For solving the resulting large-scale dynamic crack propagation problem, PDS-FEM is used; it provides numerically efficient failure treatments. With a distributed-memory parallel code of PDS-FEM, experimentally observed 3D photoelastic images of transient stress waves and crack patterns in cylindrical samples are successfully reproduced. The numerical crack patterns are in good quantitative agreement with the experimental ones. The results show that the high-amplitude shear waves induced in the solid by the lithotriptor-generated shock wave play a dominant role in stone fragmentation.
Simulation of the trans-oceanic tsunami propagation due to the 1883 Krakatau volcanic eruption
NASA Astrophysics Data System (ADS)
Choi, B. H.; Pelinovsky, E.; Kim, K. O.; Lee, J. S.
The 1883 Krakatau volcanic eruption generated a destructive tsunami higher than 40 m on the Indonesian coast, where more than 36 000 lives were lost. Sea level oscillations related to this event have been reported at significant distances from the source in the Indian, Atlantic and Pacific Oceans. The many reported manifestations of the Krakatau tsunami were a subject of intense discussion, and it was suggested that some of them were not related to the direct propagation of the tsunami waves from the Krakatau volcanic eruption. The present paper analyzes the hydrodynamic part of the Krakatau event in detail. The worldwide propagation of the tsunami waves generated by the Krakatau volcanic eruption is studied numerically using two conventional models: the ray tracing method and a two-dimensional linear shallow-water model. The results of the numerical simulations are compared with available tsunami registration data.
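The second of the two models named above, the linear shallow-water model, reduces in 1-D to a pair of equations that a few lines of code can march forward in time. The ocean depth, grid, and initial sea-surface hump below are illustrative, not the Krakatau source:

```python
g = 9.81                                     # gravitational acceleration

def sw_step(eta, u, h, dt, dx):
    """One staggered-grid step of the 1-D linear shallow-water equations:
    du/dt = -g d(eta)/dx, d(eta)/dt = -h du/dx."""
    n = len(eta)
    for i in range(n - 1):                   # momentum: u sits at cell faces
        u[i] -= g * dt * (eta[i + 1] - eta[i]) / dx
    for i in range(1, n - 1):                # continuity: eta at cell centers
        eta[i] -= h * dt * (u[i] - u[i - 1]) / dx

h, dx = 4000.0, 10000.0                      # 4 km deep ocean, 10 km cells
c = (g * h) ** 0.5                           # long-wave speed, ~198 m/s
dt = 0.5 * dx / c                            # CFL-stable step (~25 s)
eta = [0.0] * 300                            # sea-surface displacement (m)
u = [0.0] * 299                              # depth-averaged velocity (m/s)
for i in range(145, 156):
    eta[i] = 1.0                             # initial 1 m hump, ~100 km wide
for _ in range(100):
    sw_step(eta, u, h, dt, dx)
```

The initial hump splits into two pulses traveling at the long-wave speed sqrt(g h), the same dispersionless propagation the 2-D trans-oceanic model resolves over real bathymetry.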
Numerical simulations of large earthquakes: Dynamic rupture propagation on heterogeneous faults
Harris, R.A.
2004-01-01
Our current conceptions of earthquake rupture dynamics, especially for large earthquakes, require knowledge of the geometry of the faults involved in the rupture, the material properties of the rocks surrounding the faults, the initial state of stress on the faults, and a constitutive formulation that determines when the faults can slip. In numerical simulations each of these factors appears to play a significant role in rupture propagation, at the kilometer length scale. Observational evidence of the earth indicates that at least the first three of the elements, geometry, material, and stress, can vary over many scale dimensions. Future research on earthquake rupture dynamics needs to consider at which length scales these features are significant in affecting rupture propagation. © Birkhäuser Verlag, Basel, 2004.
Parametric decay of a parallel propagating monochromatic whistler wave: Particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Ke, Yangguang; Gao, Xinliang; Lu, Quanming; Wang, Shui
2017-01-01
In this paper, by using one-dimensional (1-D) particle-in-cell simulations, we investigate the parametric decay of a parallel propagating monochromatic whistler wave with various wave frequencies and amplitudes. The pump whistler wave can decay into a backscattered daughter whistler wave and an ion acoustic wave, and the decay instability grows more rapidly with the increase of the frequency or amplitude. When the frequency or amplitude is sufficiently large, a multiple decay process may occur, where the daughter whistler wave undergoes a secondary decay into an ion acoustic wave and a forward propagating whistler wave. We also find that during the parametric decay a considerable part of protons can be accelerated along the background magnetic field by the enhanced ion acoustic wave through the Landau resonance. The implication of the parametric decay to the evolution of whistler waves in Earth's magnetosphere is also discussed in the paper.
NASA Astrophysics Data System (ADS)
Zhang, Z.; Tanaka, T.; Matsuyama, K.
2017-05-01
The feasibility of two-dimensional propagation of the domain wall (DW) was investigated by micromagnetic simulations. Successful bit-by-bit propagation of the DW was demonstrated in a designed meandering magnetic strip with periodic material-parameter modulation used as DW pinning sites (PSs). The DW was successively shifted along the straight part and around the corner with spin-polarized current pulses of 1 ns width, 3 ns interval, and equal amplitude. A practical current-amplitude margin (30% of the mid value) was achieved by analyzing the energy landscape around the meandering corner and optimizing the locations of the PSs, whose energy barrier height assures a thermal stability criterion (>60 kBT).
Numerical Simulation of Propagation and Transformation of the MHD Waves in Sunspots
NASA Astrophysics Data System (ADS)
Parchevsky, Konstantin; Zhao, J.; Kosovichev, A.
2010-05-01
Direct numerical simulation of the propagation of MHD waves in a stratified medium with a non-uniform magnetic field is very important for understanding the scattering and transformation of waves by sunspots. We present 3D numerical simulations of wave propagation through a sunspot. We compare results for propagation in two different magnetostatic sunspot models, referred to as the "deep" and "shallow" models. The "deep" model has a convex shape of the magnetic field lines near the photosphere and non-zero horizontal perturbations of the sound speed down to the bottom of the model. The "shallow" model has a concave shape of the magnetic field lines near the photosphere and a horizontally uniform sound speed below 2 Mm. Waves reduce their amplitude when they reach the center of the sunspot and restore it after passing the center; for the "deep" model this effect is larger than for the "shallow" model. The wave amplitude depends on the distance of the source from the sunspot center. For the "shallow" model and a source distance of 9 Mm from the sunspot center, the wave amplitude at some moment (when the wavefront passes the sunspot center) becomes larger inside the sunspot than outside. For a source distance of 12 Mm, the wave amplitude remains smaller inside the sunspot than outside at all times. Using a filtering technique we separated magnetoacoustic and magnetogravity waves. The simulations show that the sunspot changes the shape of the wavefront and the amplitude of the f-modes significantly more strongly than those of the p-modes. It is shown that inside the sunspot, magnetoacoustic and magnetogravity waves are not spatially separated, unlike the case of the horizontally uniform background model. We compared the simulation results with the wave signals (Green's functions) extracted from SOHO/MDI data for AR9787.
Simulation of quasi-static hydraulic fracture propagation in porous media with XFEM
NASA Astrophysics Data System (ADS)
Juan-Lien Ramirez, Alina; Neuweiler, Insa; Löhnert, Stefan
2015-04-01
Hydraulic fracturing is the injection of a fracking fluid at high pressure into the underground. Its goal is to create and expand fracture networks to increase the rock permeability. It is a technique used, for example, in oil and gas recovery and in geothermal energy extraction, since higher rock permeability improves production. Many physical processes take place in fracking: rock deformation and fluid flow within the fractures, as well as into and through the porous rock. All these processes are strongly coupled, which makes numerical simulation rather challenging. We present a 2D numerical model that simulates the hydraulic propagation of an embedded fracture quasi-statically in a poroelastic, fully saturated material. Fluid flow within the porous rock is described by Darcy's law, and the flow within the fracture is approximated by a parallel-plate model. Additionally, the effect of leak-off is taken into consideration. The solid component of the porous medium is assumed to be linear elastic, and the propagation criteria are given by the energy release rate and the stress intensity factors [1]. The numerical method used for the spatial discretization is the eXtended Finite Element Method (XFEM) [2]. It is based on the standard Finite Element Method but introduces additional degrees of freedom and enrichment functions to describe discontinuities locally in a system. Through them, the geometry of the discontinuity (e.g. a fracture) becomes independent of the mesh, allowing it to move freely through the domain without a mesh-adaptation step. With this numerical model we are able to simulate hydraulic fracture propagation with different initial fracture geometries and material parameters. Results from these simulations will also be presented. References [1] D. Gross and T. Seelig. Fracture Mechanics with an Introduction to Micromechanics. Springer, 2nd edition, (2011) [2] T. Belytschko and T. Black. Elastic crack growth in finite elements with minimal
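The XFEM enrichment idea described above can be illustrated in 1-D: standard linear shape functions are supplemented with a (shifted) Heaviside enrichment so a displacement jump can sit anywhere inside an element, independent of the mesh. The element geometry, crack position, and degree-of-freedom values below are invented for illustration:

```python
def enriched_u(x, x1, x2, xc, u1, u2, a1, a2):
    """Displacement in one element [x1, x2] cut by a crack at xc, using
    standard DOFs (u1, u2) plus shifted Heaviside-enriched DOFs (a1, a2)."""
    N1 = (x2 - x) / (x2 - x1)                # standard linear shape functions
    N2 = (x - x1) / (x2 - x1)
    H = 1.0 if x > xc else -1.0              # Heaviside enrichment
    # shift by the nodal values so enriched terms vanish at the nodes
    H1 = 1.0 if x1 > xc else -1.0
    H2 = 1.0 if x2 > xc else -1.0
    return N1 * u1 + N2 * u2 + N1 * (H - H1) * a1 + N2 * (H - H2) * a2

# evaluate just left and right of a crack at xc = 0.5 inside element [0, 1]
left = enriched_u(0.499, 0.0, 1.0, 0.5, 0.0, 1.0, 0.1, 0.1)
right = enriched_u(0.501, 0.0, 1.0, 0.5, 0.0, 1.0, 0.1, 0.1)
jump = right - left                          # discontinuity carried by a1, a2
```

The jump across the crack is controlled entirely by the enriched DOFs, which is what lets the fracture geometry move freely through the domain without remeshing.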
NASA Astrophysics Data System (ADS)
Lu, San; Lu, Quanming; Lin, Yu; Wang, Xueyi; Ge, Yasong; Wang, Rongsheng; Zhou, Meng; Fu, Huishan; Huang, Can; Wu, Mingyu; Wang, Shui
2015-08-01
Dipolarization fronts (DFs) as earthward propagating flux ropes (FRs) in the Earth's magnetotail are presented and investigated with a three-dimensional (3-D) global hybrid simulation for the first time. In the simulation, several small-scale earthward propagating FRs are found to be formed by multiple X line reconnection in the near tail. During their earthward propagation, the magnetic field Bz of the FRs becomes highly asymmetric due to the imbalance of the reconnection rates between the multiple X lines. At the later stage, when the FRs approach the near-Earth dipole-like region, the antireconnection between the southward/negative Bz of the FRs and the northward geomagnetic field leads to the erosion of the southward magnetic flux of the FRs, which further aggravates the Bz asymmetry. Eventually, the FRs merge into the near-Earth region through the antireconnection. These earthward propagating FRs can fully reproduce the observational features of the DFs, e.g., a sharp enhancement of Bz preceded by a smaller amplitude Bz dip, an earthward flow enhancement, the presence of the electric field components in the normal and dawn-dusk directions, and ion energization. Our results show that the earthward propagating FRs can be used to explain the DFs observed in the magnetotail. The thickness of the DFs is on the order of several ion inertial lengths, and the electric field normal to the front is found to be dominated by the Hall physics. During the earthward propagation from the near-tail to the near-Earth region, the speed of the FR/DFs increases from ~150 km/s to ~1000 km/s. The FR/DFs can be tilted in the GSM (x, y) plane with respect to the y (dawn-dusk) axis and only extend several Earth radii in this direction. Moreover, the structure and evolution of the FRs/DFs are nonuniform in the dawn-dusk direction, which indicates that the DFs are essentially 3-D.
Using simulation to address hierarchy-related errors in medical practice.
Calhoun, Aaron William; Boone, Megan C; Porter, Melissa B; Miller, Karen H
2014-01-01
Hierarchy, the unavoidable authority gradients that exist within and between clinical disciplines, can lead to significant patient harm in high-risk situations if not mitigated. High-fidelity simulation is a powerful means of addressing this issue in a reproducible manner, but participant psychological safety must be assured. Our institution experienced a hierarchy-related medication error that we subsequently addressed using simulation. The purpose of this article is to discuss the implementation and outcome of these simulations. Script and simulation flowcharts were developed to replicate the case. Each session included the use of faculty misdirection to precipitate the error. Care was taken to assure psychological safety via carefully conducted briefing and debriefing periods. Case outcomes were assessed using the validated Team Performance During Simulated Crises Instrument. Gap analysis was used to quantify team self-insight. Session content was analyzed via video review. Five sessions were conducted (3 in the pediatric intensive care unit and 2 in the Pediatric Emergency Department). The team was unsuccessful at addressing the error in 4 (80%) of 5 cases. Trends toward lower communication scores (3.4/5 vs 2.3/5), as well as poor team self-assessment of communicative ability, were noted in unsuccessful sessions. Learners had a positive impression of the case. Simulation is a useful means to replicate hierarchy error in an educational environment. This methodology was viewed positively by learner teams, suggesting that psychological safety was maintained. Teams that did not address the error successfully may have impaired self-assessment ability in the communication skill domain.
NASA Astrophysics Data System (ADS)
Ishmuratov, I. K.; Baibekov, E. I.
2016-12-01
We investigate the possibility of restoring the transient nutations of electron spin centers embedded in a solid using specific composite pulse sequences developed previously for use in nuclear magnetic resonance spectroscopy. We treat two types of systematic errors simultaneously: (i) rotation angle errors related to the spatial distribution of the microwave field amplitude in the sample volume, and (ii) off-resonance errors related to the spectral distribution of Larmor precession frequencies of the electron spin centers. Our direct simulations of the transient signal in erbium- and chromium-doped CaWO4 crystal samples with and without error corrections show that the application of the selected composite pulse sequences can substantially increase the lifetime of Rabi oscillations. Finally, we discuss the limitations on the applicability of the studied pulse sequences for use in solid-state electron paramagnetic resonance spectroscopy.
Evaluation of Interprofessional Team Disclosure of a Medical Error to a Simulated Patient
Kern, Donna H.; Shrader, Sarah P.
2016-01-01
Objective. To evaluate the impact of an Interprofessional Communication Skills Workshop on pharmacy student confidence and proficiency in disclosing medical errors to patients. Pharmacy student behavior was also compared to that of other health professions’ students on the team. Design. Students from up to four different health professions participated in a simulation as part of an interprofessional team. Teams were evaluated with a validated rubric postsimulation on how well they handled the disclosure of an error to the patient. Individually, each student provided anonymous feedback and self-reflected on their abilities via a Likert-scale evaluation tool. A comparison of pharmacy students who completed the workshop (active group) vs all others who did not (control group) was completed and analyzed. Assessment. The majority of students felt they had adequate training related to communication issues that cause medication errors. However, fewer students believed that they knew how to report such an error to a patient or within a health system. Pharmacy students who completed the workshop were significantly more comfortable explicitly stating the error disclosure to a patient and/or caregiver and were more likely to apologize and respond to questions forthrightly (p<0.05). Conclusions. These data affirm the need to devote more time to training students on communicating with patients about the occurrence of medical errors and how to report these errors. Educators should be encouraged to incorporate such training within interprofessional education curricula. PMID:27899834
NASA Astrophysics Data System (ADS)
Lamb, Masen; Correia, Carlos; Sauvage, Jean-François; Véran, Jean-Pierre; Andersen, David; Vigan, Arthur; Wizinowich, Peter; van Dam, Marcos; Mugnier, Laurent; Bond, Charlotte
2016-07-01
We propose and apply two methods for estimating phase discontinuities in two realistic scenarios on VLT and Keck. The methods use both phase diversity and a form of image sharpening. For the case of VLT, we simulate the `low wind effect' (LWE), which is responsible for focal plane errors in low wind and good seeing conditions. We successfully estimate the LWE with both methods, and show that applying them independently and in combination yields promising results. We also show that single image phase diversity yields promising results in the LWE estimation. Finally, we simulate segmented piston effects on Keck/NIRC2 images and successfully recover the induced phase errors using single image phase diversity. We also show that on Keck we can estimate both the segmented piston errors and any Zernike modes affiliated with the non-common path.
Global particle simulation of lower hybrid wave propagation and mode conversion in tokamaks
Bao, J.; Lin, Z.; Kuley, A.
2015-12-10
Particle-in-cell simulation of lower hybrid (LH) waves in core plasmas is presented with a realistic electron-to-ion mass ratio in toroidal geometry. Because LH waves interact mainly with electrons to drive the current, ion dynamics are described by cold fluid equations for simplicity, while electron dynamics are described by drift kinetic equations. This model can be considered a new method for studying LH waves in tokamak plasmas, with advantages in nonlinear simulations. The mode conversion between slow and fast waves is observed in the simulation when the accessibility condition is not satisfied, which is consistent with the theory. The poloidal spectrum upshift and broadening effects are observed during LH wave propagation in the toroidal geometry.
NASA Astrophysics Data System (ADS)
Sonnad, Kiran G.; Hammond, Kenneth C.; Schwartz, Robert M.; Veitzer, Seth A.
2014-08-01
The use of transverse electric (TE) waves has proved to be a powerful, noninvasive method for estimating the densities of electron clouds formed in particle accelerators. Results from the plasma simulation program VSim have served as a useful guide for experimental studies related to this method, which have been performed at various accelerator facilities. This paper provides results of the simulation and modeling work done in conjunction with experimental efforts carried out at the Cornell electron storage ring “Test Accelerator” (CESRTA). This paper begins with a discussion of the phase shift induced by electron clouds in the transmission of RF waves, followed by the effect of reflections along the beam pipe, simulation of the resonant standing wave frequency shifts and finally the effects of external magnetic fields, namely dipoles and wigglers. A derivation of the dispersion relationship of wave propagation for arbitrary geometries in field free regions with a cold, uniform cloud density is also provided.
Reducing errors in simulated satellite views of clouds from large-scale models
NASA Astrophysics Data System (ADS)
Hillman, Benjamin R.
A fundamental test of the representation of clouds in models is evaluating the simulation of present-day climate against available observations. Satellite retrievals of cloud properties provide an attractive baseline for this evaluation because they can provide near global coverage and long records. However, comparisons of modeled and satellite-retrieved cloud properties are difficult because the quantities that can be represented by a model and those that can be observed from space are fundamentally different. Satellite simulators have emerged in recent decades as a means to account for these differences by producing pseudo-retrievals of cloud properties from model diagnosed descriptions of the atmosphere, but these simulators are subject to uncertainties of their own that have not been well quantified in the existing literature. In addition to uncertainties regarding the simulation of satellite retrievals themselves, a more fundamental source of uncertainty exists in connecting the different spatial scales between satellite retrievals and large-scale models. Systematic errors arising from assumptions about the unresolved cloud and precipitation condensate distributions are identified here. Simulated satellite retrievals are shown in this study to be particularly sensitive to the treatment of cloud and precipitation occurrence overlap as well as to unresolved condensate variability. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
Mao, J.; Robock, A.
1998-07-01
Thirty surface air temperature simulations for 1979–88 by 29 atmospheric general circulation models are analyzed and compared with the observations over land. These models were run as part of the Atmospheric Model Intercomparison Project (AMIP). Several simulations showed serious systematic errors, up to 4–5 °C, in globally averaged land air temperature. The 16 best simulations gave rather realistic reproductions of the mean climate and seasonal cycle of global land air temperature, with an average error of −0.9 °C for the 10-yr period. The general coldness of the model simulations is consistent with previous intercomparison studies. The regional systematic errors showed very large cold biases in areas with topography and permanent ice, which implies a common deficiency in the representation of snow-ice albedo in the diverse models. The SST and sea ice specification of climatology rather than observations at high latitudes for the first three years (1979–81) caused a noticeable drift in the neighboring land air temperature simulations, compared to the rest of the years (1982–88). Unsuccessful simulation of the extreme warm (1981) and cold (1984–85) periods implies that some variations are chaotic or unpredictable, produced by internal atmospheric dynamics and not forced by global SST patterns.
Effects of Crew Resource Management Training on Medical Errors in a Simulated Prehospital Setting
ERIC Educational Resources Information Center
Carhart, Elliot D.
2012-01-01
This applied dissertation investigated the effect of crew resource management (CRM) training on medical errors in a simulated prehospital setting. Specific areas addressed by this program included situational awareness, decision making, task management, teamwork, and communication. This study is believed to be the first investigation of CRM…
A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments
S. Healey; P. Patterson; S. Urbanski
2014-01-01
Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...
Canestrari, Niccolo; Chubar, Oleg; Reininger, Ruben
2014-09-01
X-ray beamlines in modern synchrotron radiation sources make extensive use of grazing-incidence reflective optics, in particular Kirkpatrick-Baez elliptical mirror systems. These systems can focus the incoming X-rays down to nanometer-scale spot sizes while maintaining relatively large acceptance apertures and high flux in the focused radiation spots. In low-emittance storage rings and in free-electron lasers such systems are used with partially or even nearly fully coherent X-ray beams and often target diffraction-limited resolution. Therefore, their accurate simulation and modeling has to be performed within the framework of wave optics. Here the implementation and benchmarking of a wave-optics method for the simulation of grazing-incidence mirrors based on the local stationary-phase approximation or, in other words, the local propagation of the radiation electric field along geometrical rays, are described. The proposed method is CPU-efficient and fully compatible with the numerical methods of Fourier optics. It has been implemented in the Synchrotron Radiation Workshop (SRW) computer code and extensively tested against the geometrical ray-tracing code SHADOW. The test simulations have been performed for cases without and with diffraction at mirror apertures, including cases where the grazing-incidence mirrors can hardly be approximated by ideal lenses. Good agreement between the SRW and SHADOW simulation results is observed in the cases without diffraction. The differences between the simulation results obtained by the two codes in diffraction-dominated cases for illumination with fully or partially coherent radiation are analyzed and interpreted. The application of the new method for the simulation of wavefront propagation through a high-resolution X-ray microspectroscopy beamline at the National Synchrotron Light Source II (Brookhaven National Laboratory, USA) is demonstrated.
Reduction of very large reaction mechanisms using methods based on simulation error minimization
Nagy, Tibor; Turanyi, Tamas
2009-02-15
A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms, and the mechanism causing the smallest error (i.e., deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps, other strongly connected sets of species are added, the size of the mechanism is gradually increased, and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, called the Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds. The reduced mechanism having the least CPU time requirement among the ones having almost the smallest error is selected. Application of SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim is to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species and its simulation is 116 times faster than using the full mechanism. The SEM-CM was found to be more effective than the classic Connectivity Method, as well as the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods.
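The mechanism-growth loop described above can be sketched as a greedy search. This is a minimal illustration with hypothetical interfaces: `sim_error` stands in for running the combustion model with a trial mechanism and measuring the deviation of the important-species profiles from the full mechanism, and the candidate sets stand in for the Jacobian-derived strongly connected species sets.

```python
def sem_cm(important_species, candidate_sets, sim_error, threshold):
    """Greedy SEM-CM-style growth (illustrative sketch): repeatedly add the
    strongly connected species set whose inclusion gives the smallest
    simulation error, until the error drops below the required threshold."""
    mechanism = set(important_species)
    remaining = [set(s) for s in candidate_sets]
    err = sim_error(mechanism)
    while err >= threshold and remaining:
        # Simulate each candidate extension; keep the one with smallest error.
        best = min(remaining, key=lambda s: sim_error(mechanism | s))
        mechanism |= best
        remaining.remove(best)
        err = sim_error(mechanism)
    return mechanism, err
```

The real algorithm additionally enforces mechanism consistency and eliminates redundant reactions afterwards (SEM-PCAF); this sketch only shows the error-driven species selection.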
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Marchand, R.; Ackerman, T. P.
2016-12-01
Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity.
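The maximum-random overlap assumption discussed above can be made concrete with a rank-based subcolumn generator. This is a minimal sketch in the spirit of standard generators, assuming binary cloud occurrence and homogeneous condensate; the function and argument names are illustrative, not the authors' code. Adjacent cloudy layers reuse the same rank (maximum overlap), while layers separated by clear sky draw fresh ranks (random overlap).

```python
import numpy as np

def subcolumn_cloud_mask(cf, n_sub, seed=0):
    """Generate binary cloud-occurrence subcolumns from layer cloud fractions
    under maximum-random overlap. cf: cloud fraction per layer (top-down).
    Returns an (n_sub, nlev) boolean array; a subcolumn is cloudy in layer k
    where its rank exceeds 1 - cf[k]."""
    rng = np.random.default_rng(seed)
    cf = np.asarray(cf, float)
    nlev = cf.size
    rank = np.empty((n_sub, nlev))
    rank[:, 0] = rng.random(n_sub)
    for k in range(1, nlev):
        cloudy_above = rank[:, k - 1] > 1.0 - cf[k - 1]
        # Cloudy above: reuse the rank (maximum overlap with layer above).
        # Clear above: draw a fresh rank confined to the clear portion,
        # which decorrelates layers separated by clear sky (random overlap).
        fresh = rng.random(n_sub) * (1.0 - cf[k - 1])
        rank[:, k] = np.where(cloudy_above, rank[:, k - 1], fresh)
    return rank > 1.0 - cf
```

By construction, each layer's subcolumn-mean cloud cover converges to its input cloud fraction, and a clear layer (cf = 0) resets the vertical correlation.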
Hoffelner, J; Landes, H; Kaltenbacher, M; Lerch, R
2001-05-01
A recently developed finite element method (FEM) for the numerical simulation of nonlinear sound wave propagation in thermoviscous fluids is presented. Based on the nonlinear wave equation as derived by Kuznetsov, typical effects associated with nonlinear acoustics, such as generation of higher harmonics and dissipation resulting from the propagation of a finite amplitude wave through a thermoviscous medium, are covered. An efficient time-stepping algorithm based on a modification of the standard Newmark method is used for solving the nonlinear semidiscrete equation system. The method is verified by comparison with the well-known Fubini and Fay solutions for plane wave problems, where good agreement is found. As a practical application, a high intensity focused ultrasound (HIFU) source is considered. Impedance simulations of the piezoelectric transducer and the complete HIFU source loaded with air and water are performed and compared with measured data. Measurements of radiated low and high amplitude pressure pulses are compared with corresponding simulation results. The obtained good agreement demonstrates validity and applicability of the nonlinear FEM.
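For reference, the standard Newmark scheme that the cited work modifies can be sketched for a single-degree-of-freedom linear system m·u″ + c·u′ + k·u = f(t). This is an illustrative baseline only, not the paper's nonlinear FEM implementation; with β = 1/4, γ = 1/2 (average acceleration) the scheme is unconditionally stable and non-dissipative.

```python
import numpy as np

def newmark_sdof(m, c, k, f, u0, v0, dt, beta=0.25, gamma=0.5):
    """Newmark-beta time stepping for m*u'' + c*u' + k*u = f(t).
    f is the force sampled at each time step; returns displacement,
    velocity, and acceleration histories."""
    n = len(f)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v0 - k * u0) / m          # consistent initial accel.
    denom = m + gamma * dt * c + beta * dt**2 * k
    for i in range(n - 1):
        # predictors from the Newmark displacement/velocity expansions
        u_pred = u[i] + dt * v[i] + dt**2 * (0.5 - beta) * a[i]
        v_pred = v[i] + dt * (1.0 - gamma) * a[i]
        # enforce equilibrium at step i+1 to get the new acceleration
        a[i + 1] = (f[i + 1] - c * v_pred - k * u_pred) / denom
        u[i + 1] = u_pred + beta * dt**2 * a[i + 1]
        v[i + 1] = v_pred + gamma * dt * a[i + 1]
    return u, v, a
```

For the nonlinear semidiscrete system of the paper, the equilibrium solve at each step becomes a nonlinear equation handled iteratively, which is where the modification enters.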
NASA Astrophysics Data System (ADS)
Sarris, T.; Li, X.
Energetic electron and ion injections are a common characteristic of substorms and are often observed near or inside geosynchronous orbit. Depending on the local time of measurement, these injections can appear to be dispersionless. We performed a simulation of an electron dispersionless injection by considering the interaction of an Earthward propagating electromagnetic pulse with the preexisting electron population. Such simulations have been performed previously [Li et al., 1993, 1998], and the dispersionless nature of injections measured at geostationary orbit has been reproduced. These simulations assumed a constant propagation speed for the field configuration that produced the dispersionless injections. In our simulation we vary the pulse speed with the radial distance from the Earth to match the surprisingly low propagation velocities that have been measured inside geostationary orbit. We show that a decelerating electromagnetic field configuration is able to produce dispersionless injections inside of geostationary orbit. We have reproduced a particular event (February 12, 1991) as seen by two spacecraft (CRRES and LANL 1990-095) when they were around local midnight and at different radial distances. We explain the energization of electrons during this interaction by means of betatron acceleration, and we show that under our model electrons are transported inside geosynchronous orbit from more than a few RE tailward.
Fast acceleration of 2D wave propagation simulations using modern computational accelerators.
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x faster than the sequential implementation and 30x faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of
Boyd, J C; Bruns, D E
2001-02-01
Proposed quality specifications for glucose meters allow results to be in error by 5-10% or more of the "true" concentration. Because meters are used as aids in the adjustment of insulin doses, we aimed to characterize the quantitative effect of meter error on the ability to identify the insulin dose appropriate for the true glucose concentration. Using Monte Carlo simulation, we generated random "true" glucose values within defined intervals. These values were converted to "measured" glucose values using mathematical models of glucose meters having defined imprecision (CV) and bias. For each combination of bias and imprecision, 10,000-20,000 true and measured glucose concentrations were matched with the corresponding insulin doses specified by selected insulin-dosing regimens. Discrepancies in prescribed doses were counted and their frequencies plotted in relation to bias and imprecision. For meters with a total analytical error of 5%, dosage errors occurred in approximately 8-23% of insulin doses. At 10% total error, 16-45% of doses were in error. Large errors of insulin dose (two-step or greater) occurred >5% of the time when the CV and/or bias exceeded 10-15%. Total dosage error rates were affected only slightly by choices of sliding scale among insulin dosage rules or by the range of blood glucose. To provide the intended insulin dosage 95% of the time required that both the bias and the CV of the glucose meter be <1% or <2%, depending on mean glucose concentrations and the rules for insulin dosing. Glucose meters that meet current quality specifications allow a large fraction of administered insulin doses to differ from the intended doses. The effects of such dosage errors on blood glucose and on patient outcomes require study.
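The Monte Carlo procedure described above can be sketched as follows. This is an illustrative reconstruction: the sliding scale below is made up for the example, and the paper's dosing rules, glucose ranges, and error-model details differ.

```python
import numpy as np

def dose_error_rate(n, bias_pct, cv_pct, dose_rule,
                    glucose_range=(60.0, 300.0), seed=0):
    """Monte Carlo estimate of the fraction of insulin doses changed by
    meter error. True glucose values are drawn uniformly, converted to
    'measured' values with a fixed percent bias and a Gaussian CV, and
    both are mapped to doses; mismatches are counted."""
    rng = np.random.default_rng(seed)
    true = rng.uniform(*glucose_range, n)
    measured = true * (1 + bias_pct / 100.0) \
                    * (1 + rng.normal(0.0, cv_pct / 100.0, n))
    return np.mean(dose_rule(true) != dose_rule(measured))

def sliding_scale(glucose):
    """Hypothetical sliding scale: one extra dose step per 50 mg/dL
    above 150 mg/dL (illustrative only)."""
    return np.clip((np.asarray(glucose) - 150.0) // 50.0, 0, None)
```

With zero bias and zero CV the measured values equal the true values and no doses change; any nonzero imprecision flips doses for glucose values near the scale's breakpoints, which is the mechanism behind the 8-23% figure quoted above.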
Simulation of charge exchange plasma propagation near an ion thruster propelled spacecraft
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Kaufman, H. R.; Winder, D. R.
1981-01-01
A model describing the charge exchange plasma and its propagation is discussed, along with a computer code based on the model. The geometry of an idealized spacecraft having an ion thruster is outlined, with attention given to the assumptions used in modeling the ion beam. Also presented is the distribution function describing charge exchange production. The barometric equation is used in relating the variation in plasma potential to the variation in plasma density. The numerical methods and approximations employed in the calculations are discussed, and comparisons are made between the computer simulation and experimental data. An analytical solution of a simple configuration is also used in verifying the model.
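The barometric relation mentioned above, n = n0·exp(e(V − V0)/(k·Te)), can be inverted to recover the plasma potential variation from a simulated density field. A one-line sketch (names are illustrative; expressing Te in eV lets the e/k factor collapse):

```python
import numpy as np

def barometric_potential(n, n0, Te_eV, V0=0.0):
    """Plasma potential from the barometric (Boltzmann) relation
    n = n0 * exp(e*(V - V0)/(k*Te)), inverted as V = V0 + Te*ln(n/n0)
    with Te in eV. A density drop of one e-fold lowers V by Te volts."""
    return V0 + Te_eV * np.log(np.asarray(n) / n0)
```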
NASA Astrophysics Data System (ADS)
Ishii, Katsuhiro; Nishidate, Izumi; Iwai, Toshiaki
2014-05-01
Optical propagation in highly scattering media is analyzed numerically for the case in which light is normally incident on the surface and re-emerges backward from the same point. This situation corresponds to practical light scattering setups, such as in optical coherence tomography. The simulation uses the path-length-assigned Monte Carlo method based on an ellipsoidal algorithm. The spatial distribution of the scattered light is determined, and the dependence of its width and penetration depth on the path length is found. The backscattered light is classified into three types, in which ballistic, snake, and diffuse photons are dominant.
Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors
NASA Astrophysics Data System (ADS)
Yan, Feifei; Chang, Wenge; Li, Xiangyang
2015-12-01
Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is a key technique for bistatic SAR (BiSAR) systems, and raw data simulation is an effective tool for verifying time and frequency synchronization techniques. Based on the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in the 2-D frequency domain and phase compensation in the range-Doppler frequency domain, this method significantly improves the efficiency of scene raw data simulation. Simulation results for point targets and an extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.
Particle-in-cell simulations of electron beam propagation in the magnetospheric plasma environment
NASA Astrophysics Data System (ADS)
Powis, A. T.; Kaganovich, I.; Johnson, J.; Sanchez, E. R.
2016-12-01
New accelerator technologies have made it possible to produce a lightweight, compact electron beam accelerator that can be installed on a small to medium sized satellite for applications such as mapping the magnetosphere. We present a particle-in-cell (PIC) study of electron beam propagation in the magnetospheric environment. Two-stream and filamentation instabilities, as well as the generation of whistler waves, can potentially disrupt beam propagation in the plasma environment [1,2]. We compare results of the PIC simulations with previous analytical estimates for the thresholds of these instabilities. [1] "Whistler Wave Excitation and Effects of Self-Focusing on Ion Beam Propagation through a Background Plasma along a Solenoidal Magnetic Field", M. Dorf, I. Kaganovich, E. Startsev, and R. C. Davidson, Physics of Plasmas 17, 023103 (2010). [2] "Survey of Collective Instabilities and Beam-Plasma Interactions in Intense Heavy Ion Beams", R. C. Davidson, M. A. Dorf, I. D. Kaganovich, H. Qin, A. B. Sefkow, E. A. Startsev, D. R. Welch, D. V. Rose, and S. M. Lund, Nuclear Instruments and Methods in Physics Research A 606, 11 (2009).
NASA Astrophysics Data System (ADS)
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude, or laser beam incidence angle increases. An experimental survey over a flat, paved runway site, performed with an Optech ALTM 3100 sensor, showed modeled vertical errors increasing from 5 cm at a nadir scan orientation to 8 cm at the scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future
Simulating propagation of coherent light in random media using the Fredholm type integral equation
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2017-06-01
Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if such simulations must account for the coherence properties of light, they may become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g., radiative transfer theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variations of these methods allow predicting the behavior of coherent light, but only for an averaged realization of the scattering medium. This limits their application in studying physical phenomena connected to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm-type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and with finite-difference-type simulations. We also present an extension of the method to problems of multiple scattering of polarized light on large spherical particles, which joins the presented mathematical formalism with Mie theory.
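The discretize-and-solve idea can be illustrated on a generic Fredholm equation of the second kind, u(x) = f(x) + ∫K(x,y)u(y)dy, with a toy separable kernel rather than the paper's scattering kernel (the Nyström discretization and grid size below are illustrative):

```python
import numpy as np

def solve_fredholm(f, K, a=0.0, b=1.0, n=200):
    """Nystrom discretization: (I - h K) u = f, solved as a linear system."""
    x, h = np.linspace(a, b, n, retstep=True)
    A = np.eye(n) - h * K(x[:, None], x[None, :])  # kernel matrix on the grid
    return x, np.linalg.solve(A, f(x))

# Toy kernel K(x, y) = x*y on [0, 1] with f(x) = x; the exact solution
# of u(x) = x + x * integral(y u(y) dy) is u(x) = 1.5 x.
x, u = solve_fredholm(lambda x: x, lambda x, y: x * y)
```

The same discretized operator could instead be handed to an iterative or Monte Carlo solver, as the abstract notes, which matters once the grid is too large for a dense direct solve.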
Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators
NASA Technical Reports Server (NTRS)
Curtis, H. B.
1976-01-01
Errors in short-circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short-arc xenon lamp AM0 sunlight simulator, an ordinary quartz-halogen lamp, and an ELH-type quartz-halogen lamp. The three types of solar cells studied were a silicon cell, a cadmium sulfide cell, and a gallium arsenide cell.
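The error described here is conventionally expressed through the spectral mismatch factor M, the ratio of test-to-reference cell currents under the simulator versus under sunlight. A minimal sketch, using made-up Gaussian spectra and responses rather than the measured cells and lamps of the study:

```python
import numpy as np

wl = np.linspace(300.0, 1100.0, 801)  # wavelength grid, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Illustrative stand-ins (NOT measured data):
E_sun = gaussian(550, 250)    # sunlight-like spectral irradiance
E_sim = gaussian(800, 200)    # red-shifted simulator spectrum (ELH-like)
S_ref = gaussian(900, 150)    # reference-cell spectral response
S_test = gaussian(600, 150)   # test-cell spectral response

def integral(E, S):
    """Short-circuit-current integrand, integrated over wavelength."""
    return float(np.sum(E * S) * (wl[1] - wl[0]))

# Mismatch factor: M = 1 means no spectral-mismatch error in I_sc.
M = (integral(E_sim, S_test) * integral(E_sun, S_ref)) / \
    (integral(E_sim, S_ref) * integral(E_sun, S_test))
error_percent = 100.0 * (M - 1.0)
```

With these deliberately mismatched toy spectra the simulator undermeasures the test cell (M < 1); a well-matched reference cell drives M toward 1.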
Preventing technology-induced errors in healthcare: the role of simulation.
Kushniruk, Andre W; Borycki, Elizabeth M; Anderson, James G; Anderson, Marilyn M
2009-01-01
We describe a novel approach to the study and prediction of technology-induced error in healthcare. The objective of our approach is to identify and reduce the potential for error so that the benefits of introducing information technology, such as Computerized Physician Order Entry (CPOE) or Electronic Health Records (EHRs), are maximized. The approach involves four phases. In Phase 1, we typically conduct small scale clinical simulations to assess whether or not the use of a new information technology can introduce error. (Human subjects are involved and user-system interactions are recorded.) In Phase 2, we analyze the results from Phase 1 to identify statistically significant relationships between usability issues and the occurrence of error (e.g., medication error). In Phase 3, we enter the results from Phase 2 into computer-based simulation models to explore the potential impact of the technology over time and across user populations. In Phase 4, we conduct naturalistic studies to examine whether or not the predictions made in Phases 2 and 3 apply to the real world. In closing, we discuss how the approach can be used to increase the safety of health information systems.
Fabrication and simulation of random and periodic composites for reduced stress wave propagation
NASA Astrophysics Data System (ADS)
McCuiston, Ryan Charles
During a ballistic impact event between a monolithic ceramic target and a projectile, a shock wave precedes the projectile penetration and propagates through the target. Shock-wave-induced damage, fundamentally caused by the creation of tensile stress, can reduce the expected performance of the target material. If the shock wave could be prevented from propagating, it would be possible to improve the ballistic performance of the target material. Recent research on phononic band gap structures has shown that it is possible to design and fabricate biphasic structures that forbid the propagation of low-amplitude acoustic waves. The goal of this dissertation was to determine the feasibility of creating a structure capable of limiting or defeating large-amplitude shock wave propagation by applying the concepts of phononic band gap research. A model system of Al2O3 and WC-Co was selected based on processing, acoustic, and ballistic criteria. Al2O3/WC-Co composites were fabricated by die pressing and vacuum sintering. The WC-Co was added as discrete inclusions 0.5 to 1.5 mm in diameter, up to 50 vol. %. The interfacial bonding between Al2O3 and WC-Co was characterized by indentation and microscopy to determine optimal sintering conditions. A tape casting and lamination technique was developed to fabricate large-dimension Al2O3 samples with periodically placed WC-Co inclusions. Through-transmission acoustic characterization of green tape-cast and laminated samples showed that the acoustic velocity could be reduced significantly by proper WC-Co inclusion arrangement. Two-dimensional finite element simulations were performed on a series of designed Al2O3 structures containing both random and periodically arrayed WC-Co inclusions. For a fixed loading scheme, the effects of WC-Co inclusion diameter, area fraction, and stacking arrangement were studied. Structures were found to respond either homogeneously, heterogeneously, or in a mixed-mode fashion to the propagating stress wave. The
Chemical basis of Trotter-Suzuki errors in quantum chemistry simulation
NASA Astrophysics Data System (ADS)
Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan
2015-02-01
Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to discretization of the time evolution (known as "Trotterization") in terms of the norm of the error operator and analyzed scaling with respect to the number of spin orbitals. However, we find that these error bounds can be loose by up to 16 orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground-state error and number of spin orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.
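The Trotterization error discussed here can be demonstrated numerically on a generic two-term Hamiltonian (Pauli matrices below, not a molecular Hamiltonian): compare the exact propagator exp(-i(A+B)t) with the first-order product (exp(-iAt/n)exp(-iBt/n))^n and watch the error shrink as the step count n grows.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z

def expm_herm(H, t):
    """exp(-i H t) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def trotter_error(A, B, t=1.0, n=10):
    """Spectral-norm distance between the exact and Trotterized propagators."""
    exact = expm_herm(A + B, t)
    step = expm_herm(A, t / n) @ expm_herm(B, t / n)
    return np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2)

# First-order Trotter error scales roughly like ||[A, B]|| t^2 / (2n):
e10, e20 = trotter_error(X, Z, n=10), trotter_error(X, Z, n=20)
```

The abstract's point is that for molecules the commutator-norm bound entering this scaling can be enormously pessimistic, which is why chemically informed estimates of the required n are valuable.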
On the Chemical Basis of Trotter-Suzuki Errors in Quantum Chemistry Simulation
NASA Astrophysics Data System (ADS)
Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan
2015-03-01
Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to Trotterization in terms of the norm of the error operator and analyzed scaling with respect to the number of spin-orbitals. However, we find that these error bounds can be loose by up to sixteen orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground state error and number of spin-orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and to estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.
Confirmation of standard error analysis techniques applied to EXAFS using simulations
Booth, Corwin H; Hu, Yung-Jin
2009-12-14
Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance-matrix representation of real parameter errors in typical fitting routines. Despite major advances in EXAFS analysis and in understanding all potential uncertainties, these methods are not routinely applied by all EXAFS users. Consequently, reported parameter errors are not reliable in many EXAFS studies in the literature. This situation has made many EXAFS practitioners leery of conventional error analysis applied to EXAFS data. However, conventional error analysis, if properly applied, can teach us more about our data, and even about the power and limitations of the EXAFS technique. Here, we describe the proper application of conventional error analysis to r-space fitting of EXAFS data. Using simulations, we demonstrate the veracity of this analysis by, for instance, showing that the number of independent data points from Stern's rule is balanced by the degrees of freedom obtained from a χ2 statistical analysis. By applying such analysis to real data, we determine the quantitative effect of systematic errors. In short, this study is intended to remind the EXAFS community of the role of fundamental noise distributions in interpreting our final results.
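The conventional χ2 bookkeeping the abstract defends can be shown on a toy fit (a linear model with known Gaussian noise, not an EXAFS r-space fit): when the noise model is right, χ2 divided by the degrees of freedom (N_data - N_params) comes out near 1.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1                                    # known per-point noise level
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, x.size)

coef, cov = np.polyfit(x, y, 1, cov=True)      # 2-parameter linear fit
resid = y - np.polyval(coef, x)
chi2 = np.sum((resid / sigma) ** 2)
dof = x.size - coef.size                       # degrees of freedom: 50 - 2
reduced_chi2 = chi2 / dof                      # ~1 if errors are well modeled
```

A reduced χ2 far from 1 signals either a wrong model or a mis-estimated noise distribution, which is exactly the diagnostic role the abstract assigns to conventional error analysis.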
Absolute Time Error Calibration of GPS Receivers Using Advanced GPS Simulators
1997-12-01
29th Annual Precise Time and Time Interval (PTTI) Meeting. ABSOLUTE TIME ERROR CALIBRATION OF GPS RECEIVERS USING ADVANCED GPS SIMULATORS. E.D... DC 20375 USA. Abstract: Precise time transfer experiments using GPS with time stabilities under ten nanoseconds are commonly being reported within the time transfer community. Relative calibrations are done by measuring the time error of one GPS receiver versus a "known master reference receiver."
Time-domain study on reproducibility of laser-based soft-error simulation
NASA Astrophysics Data System (ADS)
Itsuji, Hiroaki; Kobayashi, Daisuke; Lourenco, Nelson E.; Hirose, Kazuyuki
2017-04-01
We study the soft-error issue, a circuit malfunction caused by ion-radiation-induced noise currents. We have developed a laser-based soft-error simulation system to emulate this noise and have evaluated its reproducibility in the time domain. It is found that this system, which utilizes a two-photon absorption process, can reproduce the shape of ion-induced transient currents such as those induced by neutrons at ground level. A technique used to extract the initial carrier structure inside the device is also presented.
Nagatani, Yoshiki; Mizuno, Katsunori; Saeki, Takashi; Matsukawa, Mami; Sakaguchi, Takefumi; Hosoi, Hiroshi
2008-11-01
In cancellous bone, longitudinal waves often separate into fast and slow waves depending on the alignment of bone trabeculae in the propagation path. This interesting phenomenon becomes an effective tool for the diagnosis of osteoporosis because wave propagation behavior depends on the bone structure. Since the fast wave mainly propagates in trabeculae, this wave is considered to reflect the structure of the trabeculae. For a new diagnosis method using the information in this fast wave, therefore, it is necessary to understand its generation mechanism and propagation behavior precisely. In this study, the generation process of the fast wave was examined by numerical simulations using the elastic finite-difference time-domain (FDTD) method and by experimental measurements. As simulation models, three-dimensional X-ray computed tomography (CT) data of actual bone samples were used. Simulation and experimental results showed that the attenuation of the fast wave was always higher in the early stage of propagation and gradually decreased as the wave propagated in the bone. This phenomenon is thought to arise from the complicated propagation paths of fast waves in cancellous bone.
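The core of an FDTD wave solver is a leapfrog staggered-grid update. A 1-D acoustic sketch of that update (the study uses the elastic 3-D form on CT-derived bone geometry; the medium parameters, source shape, and grid sizes below are illustrative):

```python
import numpy as np

nx, nt = 400, 300
c, dx = 1500.0, 1e-3            # sound speed (m/s) and grid step (m)
dt = 0.5 * dx / c               # CFL-stable time step (Courant number 0.5)
rho = 1000.0                    # density (kg/m^3)
kappa = rho * c * c             # bulk modulus

p = np.zeros(nx)                # pressure at integer grid points
v = np.zeros(nx - 1)            # particle velocity at half-integer points

for it in range(nt):
    p[200] += np.exp(-((it - 40) / 10.0) ** 2)   # soft Gaussian source pulse
    v -= dt / (rho * dx) * np.diff(p)            # momentum equation update
    p[1:-1] -= dt * kappa / dx * np.diff(v)      # continuity equation update
```

In the bone simulations the scalar (rho, kappa) pair becomes a spatially varying elastic stiffness taken from the CT voxels, which is what lets the fast and slow waves separate along the trabecular paths.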
Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation
NASA Astrophysics Data System (ADS)
Li, C.
2012-07-01
POS, integrating GPS and INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of the two vanishing points (VX, VY). How to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. The Monte Carlo methods utilized for this random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
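A Monte Carlo propagation of this kind can be sketched using the orthocenter property of orthogonal vanishing points: the principal point is the orthocenter of the triangle VX-VY-VZ, so VZ is the intersection of two altitude lines. The coordinates, covariances, and sample count below are illustrative, not the paper's data:

```python
import numpy as np

def third_vanishing_point(vx, vy, pp):
    """VZ from the orthocenter property (pp = principal point)."""
    def perp(d):
        return np.array([-d[1], d[0]])
    def intersect(p1, d1, p2, d2):
        t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
        return p1 + t[0] * d1
    # VZ lies on: the line through pp perpendicular to VX-VY (altitude from VZ),
    # and the line through VY perpendicular to pp-VX (altitude from VY).
    return intersect(pp, perp(vy - vx), vy, perp(pp - vx))

rng = np.random.default_rng(1)
pp = np.array([0.0, 0.0])                        # principal point (image center)
cov = np.diag([4.0, 4.0])                        # per-point error ellipse, px^2
vx0, vy0 = np.array([1000.0, 0.0]), np.array([-800.0, 100.0])
samples = np.array([
    third_vanishing_point(rng.multivariate_normal(vx0, cov),
                          rng.multivariate_normal(vy0, cov), pp)
    for _ in range(2000)
])
vz_mean, vz_cov = samples.mean(axis=0), np.cov(samples.T)  # VZ error distribution
```

The sample covariance `vz_cov` plays the role of the evaluated error distribution of VZ; an error ellipse can be read off its eigendecomposition.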
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization methods, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
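The RSM-plus-MCS forward step can be sketched with a one-parameter toy model (a stand-in function, not the paper's mass-spring or GARTEUR models; the design points and parameter distribution are illustrative): fit a cheap polynomial surrogate to a few expensive model runs, then propagate parameter uncertainty through the surrogate by rapid sampling.

```python
import numpy as np

def expensive_model(k):
    """Stand-in for an expensive simulation, e.g. natural frequency vs stiffness."""
    return np.sqrt(k)

k_design = np.linspace(0.5, 2.0, 9)                        # design-of-experiments points
coef = np.polyfit(k_design, expensive_model(k_design), 4)  # 4th-order response surface

rng = np.random.default_rng(0)
k_samples = rng.normal(1.0, 0.05, 100000)    # uncertain input parameter
f_samples = np.polyval(coef, k_samples)      # cheap Monte Carlo via the surrogate
mean, std = f_samples.mean(), f_samples.std()
```

In the inverse direction, an optimizer would adjust the assumed parameter mean and covariance until these surrogate-predicted output statistics match the test statistics.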
A combined ADER-DG and PML approach for simulating wave propagation in unbounded domains
NASA Astrophysics Data System (ADS)
Amler, Thomas G.; Hoteit, Ibrahim; Alkhalifah, Tariq A.
2012-09-01
In this work, we present a numerical approach for simulating wave propagation in unbounded domains which combines discontinuous Galerkin methods with arbitrary high-order time integration (ADER-DG) and a stabilized modification of perfectly matched layers (PML). Here, the ADER-DG method is applied to Bérenger's formulation of PML. The instabilities caused by the original PML formulation are treated by a fractional step method that allows monitoring whether waves are damped in the PML region. In grid cells where waves are amplified by the PML, the contribution of the damping terms is neglected and the auxiliary variables are reset. Results of 2D simulations in acoustic media with constant and discontinuous material parameters are presented to illustrate the performance of the method.
Titze, Ingo R.; Palaparthi, Anil; Smith, Simeon L.
2014-01-01
Time-domain computer simulation of sound production in airways is a widely used tool, both for research and synthetic speech production technology. Speed of computation is generally the rationale for one-dimensional approaches to sound propagation and radiation. Transmission line and wave-reflection (scattering) algorithms are used to produce formant frequencies and bandwidths for arbitrarily shaped airways. Some benchmark graphs and tables are provided for formant frequencies and bandwidth calculations based on specific mathematical terms in the one-dimensional Navier–Stokes equation. Some rules are provided here for temporal and spatial discretization in terms of desired accuracy and stability of the solution. Kinetic losses, which have been difficult to quantify in frequency-domain simulations, are quantified here on the basis of the measurements of Scherer, Torkaman, Kucinschi, and Afjeh [(2010). J. Acoust. Soc. Am. 128(2), 828–838]. PMID:25480071
NASA Astrophysics Data System (ADS)
Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan
2017-03-01
This paper presents a method for modeling and simulation of shear wave generation from a nonlinear Acoustic Radiation Force Impulse (ARFI) that is considered as a distributed force applied at the focal region of a HIFU transducer radiating in nonlinear regime. The shear wave propagation is simulated by solving the Navier's equation from the distributed nonlinear ARFI as the source of the shear wave. Then, the Wigner-Ville Distribution (WVD) as a time-frequency analysis method is used to detect the shear wave at different local points in the region of interest. The WVD results in an estimation of the shear wave time of arrival, its mean frequency and local attenuation which can be utilized to estimate medium's shear modulus and shear viscosity using the Voigt model.
Daupin, Johanne; Atkinson, Suzanne; Bédard, Pascal; Pelchat, Véronique; Lebel, Denis; Bussières, Jean-François
2016-12-01
The medication-use system in hospitals is very complex. To improve health professionals' awareness of the risks of errors related to the medication-use system, a simulation of medication errors was created. The main objective was to assess the medical, nursing and pharmacy staffs' ability to identify errors related to the medication-use system using a simulation. The secondary objective was to assess their level of satisfaction. This descriptive cross-sectional study was conducted in a 500-bed mother-and-child university hospital. A multidisciplinary group set up 30 situations and replicated a patient room and a care unit pharmacy. All hospital staff, including nurses, physicians, pharmacists and pharmacy technicians, were invited. Participants had to detect whether a situation contained an error and fill out a response grid. They also answered a satisfaction survey. The simulation was held over 100 hours. A total of 230 professionals visited the simulation, 207 handed in a response grid and 136 answered the satisfaction survey. The participants' overall rate of correct answers was 67.5% ± 13.3% (4073/6036). Among the least detected errors were situations involving a Y-site infusion incompatibility, an oral syringe preparation and the patient's identification. Participants mainly considered the simulation as effective in identifying incorrect practices (132/136, 97.8%) and relevant to their practice (129/136, 95.6%). Most of them (114/136; 84.4%) intended to change their practices in view of their exposure to the simulation. We implemented a realistic medication-use system errors simulation in a mother-child hospital, with a wide audience. This simulation was an effective, relevant and innovative tool to raise health care professionals' awareness of critical processes. © 2016 John Wiley & Sons, Ltd.
Hydrodynamics simulations of 2{omega} laser propagation in underdense gasbag plasmas
Meezan, N B; Divol, L; Marinak, M M; Kerbel, G D; Suter, L J; Stevenson, R M; Slark, G E; Oades, K
2004-04-05
Recent 2{omega} laser propagation and stimulated Raman backscatter (SRS) experiments performed on the Helen laser have been analyzed using the radiation-hydrodynamics code hydra. These experiments utilized two diagnostics sensitive to the hydrodynamics of gasbag targets: a fast x-ray framing camera (FXI) and an SRS streak spectrometer. With a newly implemented nonlocal thermal transport model, hydra is able to reproduce many features seen in the FXI images and the SRS streak spectra. Experimental and simulated side-on FXI images suggest that propagation can be explained by classical laser absorption and the resulting hydrodynamics. Synthetic SRS spectra generated from the hydra results reproduce the details of the experimental SRS streak spectra. Most features in the synthetic spectra can be explained solely by axial density and temperature gradients. The total SRS backscatter increases with initial gasbag fill density up to {approx} 0.08 times the critical density, then decreases. Images from a near-backscatter camera (NBI) show that severe beam spray is not responsible for the trend in total backscatter. Filamentation does not appear to be a significant factor in gasbag hydrodynamics. The simulation and analysis techniques established here can be used in upcoming experimental campaigns on the Omega laser facility and the National Ignition Facility.
Hydrodynamics simulations of 2{omega} laser propagation in underdense gasbag plasmas
Meezan, N.B.; Divol, L.; Marinak, M.M.; Kerbel, G.D.; Suter, L.J.; Stevenson, R.M.; Slark, G.E.; Oades, K.
2004-12-01
Recent 2{omega} laser propagation and stimulated Raman backscatter (SRS) experiments performed on the Helen laser have been analyzed using the radiation-hydrodynamics code HYDRA [M. M. Marinak, G. D. Kerbel, N. A. Gentile, O. Jones, D. Munro, S. Pollaine, T. R. Dittrich, and S. W. Haan, Phys. Plasmas 8, 2275 (2001)]. These experiments utilized two diagnostics sensitive to the hydrodynamics of gasbag targets: a fast x-ray framing camera (FXI) and a SRS streak spectrometer. With a newly implemented nonlocal thermal transport model, HYDRA is able to reproduce many features seen in the FXI images and the SRS streak spectra. Experimental and simulated side-on FXI images suggest that propagation can be explained by classical laser absorption and the resulting hydrodynamics. Synthetic SRS spectra generated from the HYDRA results reproduce the details of the experimental SRS streak spectra. Most features in the synthetic spectra can be explained solely by axial density and temperature gradients. The total SRS backscatter increases with initial gasbag fill density up to {approx_equal}0.08 times the critical density, then decreases. Data from a near-backscatter imaging camera show that severe beam spray is not responsible for the trend in total backscatter. Filamentation does not appear to be a significant factor in gasbag hydrodynamics. The simulation and analysis techniques established here can be used in ongoing experimental campaigns on the Omega laser facility and the National Ignition Facility.
Atomistic Simulation of Environment-Assisted Crack Propagation Behavior of SiO2
NASA Astrophysics Data System (ADS)
Yasukawa, Akio
A modified extended Tersoff interatomic potential function is proposed to simulate environment-assisted crack propagation behavior. First, the physical properties of Si, O2, H2, SiO2, and H2O were calculated by this modified function. It was confirmed that the calculated values agreed with the measured values very well. Next, the potential surface of the H2O molecular transporting process to the crack tip of SiO2 material was calculated by the same function. The relationship between the velocity of crack propagation "υ" and the stress intensity factor "K" was calculated based on this surface. The results agreed with the experimental results well. This simulation clarified that the crack velocity is controlled by the H2O transporting process in both regions I and II of the "υ-K curve". In region I, H2O molecules have physically limited access to the crack tip due to the small opening in the crack. This works as an energy barrier in transporting H2O molecules. Due to the relatively large crack opening in region II, H2O molecules have free access to the crack tip without any energy barrier. This difference makes a bend in the "υ-K curve" between regions I and II.
Parallel Simulation of Wave Propagation in Three-Dimensional Poroelastic Media
NASA Astrophysics Data System (ADS)
Sheen, D.; Baag, C.; Tuncay, K.; Ortoleva, P. J.
2003-12-01
A parallelized velocity-stress staggered-grid finite-difference method to simulate wave propagation in 3-D heterogeneous poroelastic media is presented. Biot's poroelasticity theory is used to study the behavior of the wavefield in fluid-saturated media. In poroelasticity theory, the fluid velocities and pressure are included as field variables in addition to those of pure elasticity, in order to describe the interaction between pore fluid and solid. Discretization of the governing equations for the finite-difference approximation is performed for a total of 13 field-variable components in 3-D Cartesian coordinates: six components of velocity, six components of solid stress, and one component of fluid pressure. The scheme has fourth-order accuracy in space and second-order accuracy in time. Also, to simulate wave propagation in an unbounded medium, the perfectly matched layer (PML) method is used as an absorbing boundary condition. In contrast with the purely elastic problem, the larger number of components needed to describe poroelasticity inevitably requires a large amount of core memory. For modeling at a realistic scale, the computation can hardly be run on serial platforms. Therefore, a computationally efficient scheme that runs in a large parallel environment is required. The parallel implementation is achieved by using spatial decomposition and the portable Message Passing Interface (MPI) for communication between neighboring processors. Direct comparisons are made between serial and parallel computations. The necessity and efficiency of parallelization for poroelastic wave modeling are also demonstrated using model examples.
Simulation of linear aeroacoustic propagation in lined ducts with discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Peyret, Christophe; Delorme, Philippe
2004-05-01
The simulation of acoustic propagation inside a lined duct with nonuniform flow still presents problems when the geometry is complex. To handle computations on a complex geometry without considerable effort, unstructured meshes are required. Assuming irrotational flow and acoustic perturbations, a well-posed finite element method based on the potential equation can be established. But the effect of the thin boundary layer is then neglected, which is not appropriate for the acoustical processes occurring near the lining. Recent works have focused on the considerable interest of the Galerkin discontinuous method (GDM) for solving Euler's linearized equations. The GDM can handle computations on unstructured meshes and introduces low numerical dissipation. Very recent mathematical works have established, for the GDM, a well-posed boundary condition to simulate the lining effect. Results computed with the GDM are presented for a lined duct of uniform cross section with a shear flow and are found to be in good agreement with modal analysis results, thereby validating the boundary condition. To illustrate the flexibility of the method, other applications dealing with instabilities, air-wing diffraction, and atmospheric propagation are also presented.
A phase screen model for simulating numerically the propagation of a laser beam in rain
Lukin, I P; Rychkov, D S; Falits, A V; Lai, Kin S; Liu, Min R
2009-09-30
A method based on generalising the phase screen method for a continuous random medium is proposed for numerically simulating the propagation of laser radiation in a turbulent atmosphere with precipitation. In the phase screen model for the discrete component of the heterogeneous 'air-rain droplet' medium, the amplitude screen describing the scattering of an optical field by discrete particles of the medium is replaced by an equivalent phase screen whose spectrum of the correlation function of the effective dielectric constant fluctuations is similar to the spectrum of the discrete scattering component, i.e., water droplets in air. The 'turbulent' phase screen is constructed on the basis of the Kolmogorov model, while the 'rain' screen model utilises the exponential distribution of the number of rain drops with respect to their radii as a function of the rain intensity. The results of the numerical simulation are compared with known theoretical estimates for a large-scale discrete scattering medium. (propagation of laser radiation in matter)
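A Kolmogorov-type 'turbulent' phase screen is commonly generated by filtering complex white Gaussian noise with the square root of the phase power spectrum and inverse-transforming. A minimal FFT-based sketch follows; the function name, normalization constants, and parameter values are illustrative assumptions, not the authors' code:

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=0):
    """FFT-based Kolmogorov phase screen on an n-by-n grid with spacing dx [m]
    and Fried parameter r0 [m]. Normalization is approximate (conventions vary)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, dx)                 # spatial frequencies [1/m]
    kx, ky = np.meshgrid(fx, fx)
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                              # avoid divide-by-zero at DC
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)  # Kolmogorov phase PSD
    psd[0, 0] = 0.0                            # remove the piston (mean) term
    df = fx[1] - fx[0]
    amp = np.sqrt(psd) * df * n                # amplitude filter (approximate scaling)
    cn = amp * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return np.real(np.fft.ifft2(cn)) * n       # phase screen [rad]

screen = kolmogorov_phase_screen(256, 0.01, 0.1)
```

A 'rain' screen would use the same filtering step but with a spectrum derived from the droplet-size distribution instead of the k^(-11/3) law.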
NASA Astrophysics Data System (ADS)
Yu, Rixin; Lipatnikov, Andrei N.
2017-06-01
A three-dimensional (3D) direct numerical simulation (DNS) study of the propagation of a reaction wave in forced, constant-density, statistically stationary, homogeneous, isotropic turbulence is performed by solving Navier-Stokes and reaction-diffusion equations at various (from 0.5 to 10) ratios of the rms turbulent velocity U' to the laminar wave speed, various (from 2.1 to 12.5) ratios of an integral length scale of the turbulence to the laminar wave thickness, and two Zeldovich numbers Ze=6.0 and 17.1. Accordingly, the Damköhler and Karlovitz numbers are varied from 0.2 to 25.1 and from 0.4 to 36.2, respectively. Contrary to an earlier DNS study of self-propagation of an infinitely thin front in statistically the same turbulence, the bending of dependencies of the mean wave speed on U' is simulated in the case of a nonzero thickness of the local reaction wave. The bending effect is argued to be controlled by inefficiency of the smallest scale turbulent eddies in wrinkling the reaction-zone surface, because such small-scale wrinkles are rapidly smoothed out by molecular transport within the local reaction wave.
NASA Astrophysics Data System (ADS)
Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George
2016-04-01
Precipitation is arguably one of the most important forcing variables that drive terrestrial water cycle processes. The process of precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid), and depends on several other factors (aerosols, orography, etc.), which makes estimation and modeling of this process particularly challenging. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to investigate and assess the impact of precipitation uncertainty in hydrologic simulations in order to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely-used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. Comparative analysis is based on using the SAFRAN-simulations as reference and is carried out at different spatial (0.5° or regional average) and temporal (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of the Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii
NASA Astrophysics Data System (ADS)
Heller, E. K.; Jain, F. C.
2000-06-01
A time-dependent finite-difference beam propagation method is presented to analyze quantum interference transistor (QUIT) structures, employing the Aharonov-Bohm effect, in both steady state and transient conditions. Current-voltage characteristics of two ring structures having 0.2 and 0.05 μm channel lengths, respectively, are presented. Additionally, the wave functions are calculated, and reflections are observed in both the ON and OFF states of the device. Cutoff frequency fT values of 3 and 8.5 THz, respectively, are calculated from the switching response to a gate pulse of 200 fs, for the 0.2 μm device, and to a pulse of 50 fs, for the 0.05 μm device. Results indicate that reflections at the drain may degrade frequency performance of these devices, which is not evident from earlier analytical studies. These structures are further explored to investigate the effects of imperfections introduced in fabricating the quantum wire channels. We compare two QUITs, one realized by a 1 nm resolution lithography process (representing an advanced fabrication technique) and the other realized by a 10 nm resolution (representing current state-of-the-art lithography). We also present an asymmetric 10 nm resolution structure, to represent the case when errors in fabrication significantly alter the QUIT topology. This simulation shows strong dependence of the electron transmission probability on the channel topology and roughness determined by the lithographic resolution.
Qiang, Bo; Brigham, John C; McGough, Robert J; Greenleaf, James F; Urban, Matthew W
2017-03-01
Shear wave elastography is a versatile technique that is being applied to many organs. However, in tissues that exhibit anisotropic material properties, special care must be taken to estimate shear wave propagation accurately and efficiently. A two-dimensional simulation method is implemented to simulate the shear wave propagation in the plane of symmetry in transversely isotropic viscoelastic media. The method uses a mapped Chebyshev pseudo-spectral method to calculate the spatial derivatives and an Adams-Bashforth-Moulton integrator with variable step sizes for time marching. The boundaries of the two-dimensional domain are surrounded by perfectly matched layers to approximate an infinite domain and minimize reflection errors. In an earlier work, we proposed a solution for estimating the apparent shear wave elasticity and viscosity from the spatial group velocity as a function of rotation angle through a low-frequency approximation by a Taylor expansion. With the solver implemented in MATLAB, the simulated results in this paper match well with the theory. Compared to the finite element method simulations we used before, the pseudo-spectral solver consumes less memory and is faster and achieves better accuracy.
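On the standard interval, the Chebyshev pseudo-spectral spatial derivative reduces to multiplication by a dense differentiation matrix on Gauss-Lobatto points. A minimal sketch of the classical construction (not the authors' mapped MATLAB variant) is:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix on the n+1 Gauss-Lobatto points
    x_j = cos(pi*j/n) (the standard construction). Returns (D, x) with
    D @ f(x) approximating f'(x)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal by row-sum identity
    return D, x

D, x = cheb(16)
# Differentiation is exact for polynomials up to degree n, e.g. d/dx x^2 = 2x:
err = np.max(np.abs(D @ x**2 - 2 * x))
```

A "mapped" variant composes this matrix with the Jacobian of a coordinate map that clusters points where resolution is needed, which is what makes the approach competitive with finite elements in memory and speed.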
Using individual-muscle specific instead of across-muscle mean data halves muscle simulation error.
Blümel, Marcus; Guschlbauer, Christoph; Hooper, Scott L; Büschges, Ansgar
2012-11-01
Hill-type parameter values measured in experiments on single muscles show large across-muscle variation. Using individual-muscle specific values instead of the more standard approach of across-muscle means might therefore improve muscle model performance. We show here that using mean values increased simulation normalized RMS error in all tested motor nerve stimulation paradigms in both isotonic and isometric conditions, doubling mean simulation error from 9 to 18 (different at p < 0.0001). These data suggest muscle-specific measurement of Hill-type model parameters is necessary in work requiring highly accurate muscle model construction. Maximum muscle force (Fmax) showed large (fourfold) across-muscle variation. To test the role of Fmax in model performance we compared the errors of models using mean Fmax and muscle-specific values for the other model parameters, and models using muscle-specific Fmax values and mean values for the other model parameters. Using muscle-specific Fmax values did not improve model performance compared to using mean values for all parameters, but using muscle-specific values for all parameters but Fmax did (to an error of 14, different from muscle-specific, mean all parameters, and mean only Fmax errors at p ≤ 0.014). Significantly improving model performance thus required muscle-specific values for at least a subset of parameters other than Fmax, and best performance required muscle-specific values for this subset and Fmax. Detailed consideration of model performance suggested that remaining model error likely stemmed from activation of both fast and slow motor neurons in our experiments and inadequate specification of model activation dynamics.
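The normalized RMS error metric used to compare simulated and measured muscle responses can be sketched as follows; normalizing by the measured peak-to-peak range is one common convention, and the paper's exact convention may differ:

```python
import numpy as np

def normalized_rms_error(sim, meas):
    """Normalized RMS error between a simulated and a measured trace,
    expressed as a percentage of the measured peak-to-peak range
    (one common convention; other normalizations exist)."""
    sim, meas = np.asarray(sim, float), np.asarray(meas, float)
    rmse = np.sqrt(np.mean((sim - meas) ** 2))
    return 100.0 * rmse / (np.max(meas) - np.min(meas))
```

With this metric, "an error of 9" reads as an RMS deviation of 9% of the measured response range.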
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
NASA Astrophysics Data System (ADS)
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; Touma, Danielle; Ruby Leung, L.
2016-09-01
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has been no benchmark effort to decipher the origin of the persistent failure of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of a similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of the seasonal precipitation distribution, and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and in atmospheric latent heating over the slopes of the Himalayas and the Karakoram Range induce significant errors in the atmospheric circulation and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at a horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver of the errors over South Asia. These results highlight previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs, and underscore the importance of land-atmosphere interactions in the development and maintenance of SAM.
Dominant Drivers of GCMs Errors in the Simulation of South Asian Summer Monsoon
NASA Astrophysics Data System (ADS)
Ashfaq, Moetasim
2017-04-01
Accurate simulation of the South Asian summer monsoon (SAM) is a longstanding unresolved problem in climate modeling science. There has been no benchmark effort to decipher the origin of the persistent failure of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to demonstrate that most of the simulation errors in the summer season and their driving mechanisms are systematic and of a similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of the seasonal precipitation distribution, and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and in atmospheric latent heating over the slopes of the Himalayas and the Karakoram Range induce significant errors in the atmospheric circulation and meridional differential heating. Lack of timely precipitation over land further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at a horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver of the errors over South Asia. These results highlight previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs, and underscore the importance of land-atmosphere interactions in the development and maintenance of SAM.
Errors in the Simulated Heat Budget of CGCMs in the Eastern Part of the Tropical Oceans
NASA Astrophysics Data System (ADS)
Hazel, J.; Masarik, M. T.; Mechoso, C. R.; Small, R. J.; Curchitser, E. N.
2014-12-01
The simulation of the tropical climate by coupled atmosphere-ocean general circulation models (CGCMs) shows severe warm biases in the sea-surface temperature (SST) field of the southeastern Pacific and Atlantic (SEP and SEA, respectively). The errors are strongest near the land mass, with a broad plume extending west. Also, the equatorial cold tongue is too strong and extends too far to the west. The simulated precipitation field generally shows a persistent double Inter-tropical Convergence Zone (ITCZ). Tremendous effort has been made to improve CGCM performance in general and to address these tropical errors in particular. The present paper starts by comparing Taylor diagrams of the SST errors in the SEP and SEA by CGCMs participating in the Coupled Model Intercomparison Project phases 3 and 5 (CMIP3 and CMIP5, respectively). Some improvement is noted in models that perform poorly in CMIP3, but the overall performance is broadly similar in the two intercomparison projects. We explore the hypothesis that an improved representation of atmosphere-ocean interaction involving stratocumulus cloud decks and oceanic upwelling is essential to reduce errors in the SEP and SEA. To estimate the error contribution by clouds and upwelling, we examine the upper-ocean surface heat flux budget. The resolution of the oceanic component of the CGCMs in both CMIP3 and CMIP5 is too coarse for a realistic representation of upwelling. Therefore, we also examine simulations by the Nested Regional Climate Model (nRCM) system, which is a CGCM with a very high-resolution regional model embedded in coastal regions. The nRCM consists of the Community Atmosphere Model (CAM, run at 1°) coupled to the global Parallel Ocean Program model (POP, run at 1°), into which the Regional Ocean Modeling System (ROMS, run at 5-10 km) is nested in selected coastal regions.
Elias, John J.; Kelly, Michael J.; Smith, Kathryn E.; Gall, Kenneth A.; Farr, Jack
2016-01-01
Background: Medial patellofemoral ligament (MPFL) reconstruction is performed to prevent recurrent instability, but errors in femoral fixation can elevate graft tension. Hypothesis: Errors related to femoral fixation will overconstrain the patella and increase medial patellofemoral pressures. Study Design: Controlled laboratory study. Methods: Five knees with patellar instability were represented with computational models. Kinematics during knee extension were characterized from computational reconstruction of motion performed within a dynamic computed tomography (CT) scanner. Multibody dynamic simulation of knee extension, with discrete element analysis used to quantify contact pressures, was performed for the preoperative condition and after MPFL reconstruction. A standard femoral attachment and graft resting length were set for each knee. The resting length was decreased by 2 mm, and the femoral attachment was shifted 5 mm posteriorly. The simulated errors were also combined. Root-mean-square errors were quantified for the comparison of preoperative patellar lateral shift and tilt between computationally reconstructed motion and dynamic simulation. Simulation output was compared between the preoperative and MPFL reconstruction conditions with repeated-measures Friedman tests and Dunnett comparisons against a control, which was the standard MPFL condition, with statistical significance set at P < .05. Results: Root-mean-square errors for simulated patellar tilt and shift were 5.8° and 3.3 mm, respectively. Patellar lateral tracking for the preoperative condition was significantly larger near full extension compared with the standard MPFL reconstruction (mean differences of 8 mm and 13° for shift and tilt, respectively, at 0°), and lateral tracking was significantly smaller for a posterior femoral attachment (mean differences of 3 mm and 4° for shift and tilt, respectively, at 0°). The maximum medial pressure was also larger for the short graft with a
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
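Checksum-based ABFT for matrix multiplication works by appending a row of column sums to one factor and a column of row sums to the other; a single radiation-induced bit flip in the product then violates a checksum and is detected. A minimal sketch of the idea (not the flight code):

```python
import numpy as np

def abft_matmul(A, B):
    """Checksum-based ABFT matrix multiply (sketch). Returns the product and
    a flag indicating whether the row/column checksums of the result agree
    with the sums carried through the multiplication."""
    Af = np.vstack([A, A.sum(axis=0)])                  # column-checksum matrix
    Bf = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum matrix
    Cf = Af @ Bf                                        # full checksum product
    C = Cf[:-1, :-1]
    # The appended row must equal the column sums of C, and the appended
    # column the row sums of C; a corrupted element breaks one of each.
    row_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0))
    col_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1))
    return C, row_ok and col_ok

rng = np.random.default_rng(1)
A, B = rng.random((4, 3)), rng.random((3, 5))
C, ok = abft_matmul(A, B)
```

Because the check rides along with the arithmetic, the overhead is one extra row and column per multiply, which is why ABFT suits the matrix-heavy kernels of onboard machine learning.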
Numerical Simulation of Nonuniformly Time-Sampled Pulse Propagation in Nonlinear Fiber
NASA Astrophysics Data System (ADS)
Premaratne, Malin
2005-08-01
Numerical simulation of pulse propagation in nonlinear optical fiber based on the nonlinear Schrödinger equation plays a significant role in the design, analysis, and optimization of optical communication systems. Unconditionally stable operator-splitting techniques such as the split-step Fourier method or the split-step wavelet method have been successfully used for numerical simulation of uniformly time-sampled pulses along nonlinear optical fibers. Even though uniform time sampling is widely used in optical communication system simulation, nonuniform time sampling is better suited, or even required, for certain applications. For example, a sampling strategy that uses denser sampling points in regions where the signal changes rapidly and sparse sampling in regions where the signal change is gradual would result in a better replica of the signal. In this paper, we report a novel method that extends the standard operator-splitting techniques to handle nonuniformly sampled optical pulse profiles in the time domain. The proposed method relies on using cubic (or higher order) B-splines as a basis set for representing optical pulses in the time domain. We show that the resulting operator matrices are banded and sparse due to the compact support of B-splines. Moreover, we use an algorithm based on Krylov subspaces to exploit the sparsity of the matrices when calculating matrix exponential operators. We present a comprehensive set of analytical and numerical simulation results to demonstrate the validity and accuracy of the proposed method.
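The uniformly sampled split-step Fourier method that the paper generalizes alternates a linear (dispersive) step in the frequency domain with a nonlinear step in the time domain. A minimal first-order-splitting sketch follows; the fiber parameters are illustrative and the sign conventions follow one common form of the NLSE:

```python
import numpy as np

# Split-step Fourier propagation of the scalar NLSE
#   dA/dz = -i*(beta2/2) d^2A/dt^2 + i*gamma*|A|^2 A   (one common convention)
nt, T = 1024, 100e-12                     # samples, time window [s]
dt = T / nt
t = (np.arange(nt) - nt // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(nt, dt)    # angular frequencies [rad/s]

beta2, gamma = -20e-27, 1.3e-3            # GVD [s^2/m], nonlinearity [1/(W*m)] (made-up)
dz, nz = 100.0, 50                        # step size [m], number of steps

A = np.sqrt(1e-3) / np.cosh(t / 5e-12)    # sech input pulse, ~1 mW peak
P0 = float(np.sum(np.abs(A) ** 2))        # total power (conserved quantity)

for _ in range(nz):
    # linear step: apply the dispersive phase in the frequency domain
    A = np.fft.ifft(np.exp(0.5j * beta2 * w**2 * dz) * np.fft.fft(A))
    # nonlinear step: self-phase modulation in the time domain
    A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz)
```

Both sub-steps are unitary, so the pulse energy is conserved to machine precision, a useful sanity check. The paper's contribution replaces the FFT basis with B-splines so that the same splitting works on nonuniform time grids.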
NASA Astrophysics Data System (ADS)
Lisinetskaya, Polina G.; Röhr, Merle I. S.; Mitrić, Roland
2016-06-01
We present a theoretical approach for the simulation of the electric field and exciton propagation in ordered arrays constructed of molecular-sized noble metal clusters bound to organic polymer templates. In order to describe the electronic coupling between individual constituents of the nanostructure we use the ab initio parameterized transition charge method which is more accurate than the usual dipole-dipole coupling. The electronic population dynamics in the nanostructure under an external laser pulse excitation is simulated by numerical integration of the time-dependent Schrödinger equation employing the fully coupled Hamiltonian. The solution of the TDSE gives rise to time-dependent partial point charges for each subunit of the nanostructure, and the spatio-temporal electric field distribution is evaluated by means of classical electrodynamics methods. The time-dependent partial charges are determined based on the stationary partial and transition charges obtained in the framework of the TDDFT. In order to treat large plasmonic nanostructures constructed of many constituents, the approximate self-consistent iterative approach presented in (Lisinetskaya and Mitrić in Phys Rev B 89:035433, 2014) is modified to include the transition-charge-based interaction. The developed methods are used to study the optical response and exciton dynamics of Ag3+ and porphyrin-Ag4 dimers. Subsequently, the spatio-temporal electric field distribution in a ring constructed of ten porphyrin-Ag4 subunits under the action of circularly polarized laser pulse is simulated. The presented methodology provides a theoretical basis for the investigation of coupled light-exciton propagation in nanoarchitectures built from molecular size metal nanoclusters in which quantum confinement effects are important.
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
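The final step described above, converting the measured mean and standard deviation into a BER via the Gaussian distribution, reduces to a tail-probability calculation. A hypothetical single-threshold sketch (the zero decision threshold and the simple one-sided decision model are my assumptions for illustration, not details from the abstract):

```python
import math

def ber_gaussian(mean, std, threshold=0.0):
    """BER for a binary decision: the probability that Gaussian noise of the
    given standard deviation carries a signal with the given mean across the
    decision threshold, i.e. the Gaussian tail (Q-function) at the normalized
    margin |mean - threshold| / std."""
    q_arg = abs(mean - threshold) / std
    return 0.5 * math.erfc(q_arg / math.sqrt(2))
```

A margin of one standard deviation gives a BER of roughly 0.159; each additional standard deviation of margin lowers the BER by orders of magnitude, which is why reducing S-parameter noise translates directly into digital-circuit reliability.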
Salomons, Erik M; Lohman, Walter J A; Zhou, Han
2016-01-01
Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
Dogan, Hakan; Popov, Viktor
2016-05-01
We investigate the acoustic wave propagation in bubbly liquid inside a pilot sonochemical reactor which aims to produce antibacterial medical textile fabrics by coating the textile with ZnO or CuO nanoparticles. Computational models on acoustic propagation are developed in order to aid the design procedures. The acoustic pressure wave propagation in the sonoreactor is simulated by solving the Helmholtz equation using a meshless numerical method. The paper implements both the state-of-the-art linear model and a nonlinear wave propagation model recently introduced by Louisnard (2012), and presents a novel iterative solution procedure for the nonlinear propagation model which can be implemented using any numerical method and/or programming tool. Comparative results regarding both the linear and the nonlinear wave propagation are shown. Effects of bubble size distribution and bubble volume fraction on the acoustic wave propagation are discussed in detail. The simulations demonstrate that the nonlinear model successfully captures the realistic spatial distribution of the cavitation zones and the associated acoustic pressure amplitudes. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.
Stochastic Simulations on the Reliability of Action Potential Propagation in Thin Axons
Faisal, A. Aldo; Laughlin, Simon B
2007-01-01
It is generally assumed that axons use action potentials (APs) to transmit information fast and reliably to synapses. Yet, the reliability of transmission along fibers below 0.5 μm diameter, such as cortical and cerebellar axons, is unknown. Using detailed models of rodent cortical and squid axons and stochastic simulations, we show how conduction along such thin axons is affected by the probabilistic nature of voltage-gated ion channels (channel noise). We identify four distinct effects that corrupt propagating spike trains in thin axons: spikes were added, deleted, jittered, or split into groups depending upon the temporal pattern of spikes. Additional APs may appear spontaneously; however, APs in general seldom fail (<1%). Spike timing is jittered on the order of milliseconds over distances of millimeters, as conduction velocity fluctuates in two ways. First, variability in the number of Na channels opening in the early rising phase of the AP causes propagation speed to fluctuate gradually. Second, a novel mode of AP propagation (stochastic microsaltatory conduction), where the AP leaps ahead toward spontaneously formed clusters of open Na channels, produces random discrete jumps in spike time reliability. The combined effect of these two mechanisms depends on the pattern of spikes. Our results show that axonal variability is a general problem and should be taken into account when considering both neural coding and the reliability of synaptic transmission in densely connected cortical networks, where small synapses are typically innervated by thin axons. In contrast, we find that thicker axons above 0.5 μm diameter are reliable. PMID:17480115
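The channel noise at the heart of this study can be illustrated with a toy two-state Markov gating model. The rates, channel count, and time step below are arbitrary illustrative values, not the parameters of the rodent or squid axon models:

```python
import numpy as np

rng = np.random.default_rng(0)

def gate_channels(n_ch=10000, alpha=0.5, beta=1.5, dt=0.01, steps=2000):
    """Two-state (closed <-> open) Markov gating of n_ch independent channels.
    Per time step, a closed channel opens with probability alpha*dt and an open
    channel closes with probability beta*dt. Returns the open fraction over time."""
    open_ch = np.zeros(n_ch, dtype=bool)
    trace = np.empty(steps)
    for i in range(steps):
        u = rng.random(n_ch)
        opening = ~open_ch & (u < alpha * dt)   # closed channels that open
        closing = open_ch & (u < beta * dt)     # open channels that close
        open_ch = (open_ch | opening) & ~closing
        trace[i] = open_ch.mean()
    return trace

trace = gate_channels()
```

The open fraction relaxes to the steady state alpha/(alpha+beta) = 0.25, with fluctuations whose relative size scales as 1/sqrt(N); this is the basic reason thin axons, carrying few channels per unit length, are noisy while thicker axons are reliable.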
NASA Astrophysics Data System (ADS)
Nikolopoulos, E. I.; Anagnostou, M.; Anagnostou, E. N.; Albergel, C.; Dutra, E. N.; Fink, G.; Martínez de la Torre, A.; Munier, S.; Polcher, J.; Quintana-Segui, P.
2016-12-01
In this work we investigate the uncertainty associated with different earth observation precipitation datasets and the propagation of this uncertainty in hydrologic simulations for a number of global hydrologic and land surface models. Specifically, the work presents a comparative analysis of multi-model/multi-forcing simulations for a number of different hydrologic variables (runoff, soil moisture, ET). The study area is the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product based on a downscaled version of the ECMWF ERA-Interim reanalysis. The models used to carry out the hydrologic simulations include the HTESSEL, ORCHIDEE, JULES, WATERGAP3 and SURFEX-TRIP models that participate in the development of a state-of-the-art water resource reanalysis product within the Earth2Observe project (www.earth2observe.eu). Evaluation results are reported with respect to reference (i.e. SAFRAN-based) simulations, as well as independent observations from satellite-based hydrologic estimates (e.g. ESA CCI soil moisture estimates). Results from this work showed that precipitation from the different satellite and reanalysis datasets examined exhibits considerable differences in pattern and magnitude, and this causes significant differences in the corresponding hydrologic simulations. However, the sensitivity of hydrologic simulations to different precipitation forcing depends highly on the hydrologic variable under consideration. For example, surface runoff appears to be highly sensitive to precipitation differences, while evapotranspiration fluxes are much less sensitive. Another important finding highlighted by this study is that modeling uncertainty, i.e. uncertainty associated with model structure, is also a significant source of uncertainty in the simulated hydrologic variables.
Open Boundary Particle-in-Cell Simulation of Dipolarization Front Propagation
NASA Technical Reports Server (NTRS)
Klimas, Alex; Hwang, Kyoung-Joo; Vinas, Adolfo F.; Goldstein, Melvyn L.
2014-01-01
First results are presented from an ongoing open boundary 2-1/2D particle-in-cell simulation study of dipolarization front (DF) propagation in Earth's magnetotail. At this stage, this study is focused on the compression, or pileup, region preceding the DF current sheet. We find that the earthward acceleration of the plasma in this region is in general agreement with a recent DF force balance model. A gyrophase bunched reflected ion population at the leading edge of the pileup region is reflected by a normal electric field in the pileup region itself, rather than through an interaction with the current sheet. We discuss plasma wave activity at the leading edge of the pileup region that may be driven by gradients, or by reflected ions, or both; the mode has not been identified. The waves oscillate near but above the ion cyclotron frequency with wavelength several ion inertial lengths. We show that the waves oscillate primarily in the perpendicular magnetic field components, do not propagate along the background magnetic field, are right handed elliptically (close to circularly) polarized, exist in a region of high electron and ion beta, and are stationary in the plasma frame moving earthward. We discuss the possibility that the waves are present in plasma sheet data, but have not, thus far, been discovered.
Numerical Simulations of Upstream Propagating Solitary Waves and Wave Breaking In A Stratified Fjord
NASA Astrophysics Data System (ADS)
Stastna, M.; Peltier, W. R.
In this talk we will discuss ongoing numerical modeling of the flow of a stratified fluid over large-scale topography, motivated by observations in Knight Inlet, a fjord in British Columbia, Canada. After briefly surveying past work on the topic, we will discuss our latest set of simulations, in which we have observed the generation and breaking of three different types of nonlinear internal waves in the lee of the sill topography. The first type of wave observed is a large lee wave in the weakly stratified main portion of the water column. The second is an upward-propagating internal wave forced by topography that breaks in the strong, near-surface pycnocline. The third is a train of upstream-propagating solitary waves that, in certain circumstances, form as breaking waves consisting of a nearly solitary wave envelope and a highly unsteady core near the surface. Time permitting, we will comment on the implications of these results for our long-term goal of quantifying tidally driven mixing in Knight Inlet.
Mathieu, Vincent; Anagnostou, Fani; Soffer, Emmanuel; Haiat, Guillaume
2011-06-01
Osseointegration of dental implants remains poorly understood. The objective of this numerical study is to understand the propagation phenomena of ultrasonic waves in prototype cylindrically shaped implants and to investigate the sensitivity of their ultrasonic response to the surrounding bone biomechanical properties. The 10 MHz ultrasonic response of the implant was calculated using a finite difference numerical simulation tool and was compared to rf signals taken from a recent experimental study by Mathieu et al. [Ultrasound Med. Biol. 37, 262-270 (2011a)]. Reflection and mode conversion phenomena were analyzed to understand the origin of the different echoes, and the importance of lateral wave propagation was evidenced. The sensitivity of the ultrasonic response of the implant to changes of (i) the amount of bone in contact with the implant, (ii) cortical bone thickness, and (iii) surrounding bone material properties was compared to the reproducibility of the measurements. The results show that a change of 1 mm of bone in contact with the implant, of 1.1 mm of cortical thickness, or of 12% of trabecular bone mass density should be detectable. This study paves the way for the investigation of the use of quantitative ultrasound techniques for the evaluation of bone-implant interface properties and implant stability. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.
2012-05-01
In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environment conditions and the examination complexity. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of the target locating by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. In this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities similar to those provided by a deterministic method, such as the ray method.
Stott, Shannon L; Irimia, Daniel; Karlsson, Jens O M
2004-04-01
A microscale theoretical model of intracellular ice formation (IIF) in a heterogeneous tissue volume comprising a tumor mass and surrounding normal tissue is presented. Intracellular ice was assumed to form either by intercellular ice propagation or by processes that are not affected by the presence of ice in neighboring cells (e.g., nucleation or mechanical rupture). The effects of cryosurgery on a 2D tissue consisting of 10^4 cells were simulated using a lattice Monte Carlo technique. A parametric analysis was performed to assess the specificity of IIF-related cell damage and to identify criteria for minimization of collateral damage to the healthy tissue peripheral to the tumor. Among the parameters investigated were the rates of interaction-independent IIF and intercellular ice propagation in the tumor and in the normal tissue, as well as the characteristic length scale of thermal gradients in the vicinity of the cryosurgical probe. Model predictions suggest gap junctional intercellular communication as a potential new target for adjuvant therapies complementing the cryosurgical procedure.
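The lattice Monte Carlo scheme can be sketched as repeated sweeps in which each unfrozen cell freezes either spontaneously (interaction-independent IIF) or by propagation from frozen neighbors. All probabilities, the grid size, and the periodic boundaries below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def iif_sweeps(n=100, p_spont=0.01, p_prop=0.05, sweeps=20):
    """Lattice Monte Carlo of intracellular ice formation: per sweep, each
    unfrozen cell freezes spontaneously with probability p_spont, or via
    propagation with probability p_prop per frozen 4-neighbor (periodic
    boundaries for simplicity). Returns the frozen fraction after each sweep."""
    frozen = np.zeros((n, n), dtype=bool)
    history = []
    for _ in range(sweeps):
        nbrs = (np.roll(frozen, 1, 0).astype(int) + np.roll(frozen, -1, 0)
                + np.roll(frozen, 1, 1) + np.roll(frozen, -1, 1))
        # combined per-cell freezing probability this sweep
        p_freeze = 1 - (1 - p_spont) * (1 - p_prop) ** nbrs
        frozen |= rng.random((n, n)) < p_freeze
        history.append(float(frozen.mean()))
    return history

hist = iif_sweeps()
```

Varying p_spont and p_prop separately for "tumor" and "normal" sub-regions of the lattice is the natural extension toward the heterogeneous-tissue analysis the abstract describes.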
NASA Astrophysics Data System (ADS)
Taghavifar, Hadi; Khalilarya, Shahram; Jafarmadar, Samad; Taghavifar, Hamid
2016-08-01
A multidimensional computational fluid dynamic code was developed and integrated with a probability density function combustion model to give a detailed account of multiphase fluid flow. The vapor phase within the injector domain is treated with a Reynolds-averaged Navier-Stokes technique. A new parameter is proposed which is an index of plane-cut spray propagation and takes into account the two parameters of spray penetration length and cone angle at the same time. It was found that the spray propagation index (SPI) tends to increase at lower r/d ratios, although the spray penetration tends to decrease. The results of SPI obtained by the empirical correlation of Hay and Jones were compared with the simulation computation as a function of the respective r/d ratio. Based on the results of this study, the spray distribution over the plane area correlates proportionally with heat release amount, NOx emission mass fraction, and soot concentration reduction. Higher cavitation is attributed to the sharp edge of the nozzle entrance, yielding better liquid jet disintegration and smaller spray droplets, which reduces the soot mass fraction of the late combustion process. In order to gain better insight into the cavitation phenomenon, turbulence magnitude in the nozzle and combustion chamber was acquired and depicted along with spray velocity.
Numerical simulation of an adaptive optics system with laser propagation in the atmosphere.
Yan, H X; Li, S S; Zhang, D L; Chen, S
2000-06-20
A comprehensive model of laser propagation in the atmosphere with a complete adaptive optics (AO) system for phase compensation is presented, and a corresponding computer program is compiled. A direct wave-front gradient control method is used to reconstruct the wave-front phase. With the long-exposure Strehl ratio as the evaluation parameter, a numerical simulation of an AO system in a stationary state with the atmospheric propagation of a laser beam was conducted. It was found that for certain conditions the phase screen that describes turbulence in the atmosphere might not be isotropic. Numerical experiments show that the computational results in imaging of lenses by means of the fast Fourier transform (FFT) method agree well with those computed by means of an integration method. However, the computer time required for the FFT method is 1 order of magnitude less than that of the integration method. Phase tailoring of the calculated phase is presented as a means to solve the problem that variance of the calculated residual phase does not correspond to the correction effectiveness of an AO system. It is found for the first time to our knowledge that for a constant delay time of an AO system, when the lateral wind speed exceeds a threshold, the compensation effectiveness of an AO system is better than that of complete phase conjugation. This finding indicates that the better compensation capability of an AO system does not mean better correction effectiveness.
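The FFT-based lens imaging and Strehl-ratio evaluation mentioned above amount to a Fraunhofer transform of the pupil field. A minimal sketch (the circular pupil, white-noise residual phase, and grid size are illustrative assumptions, not the paper's phase-screen model):

```python
import numpy as np

def strehl_ratio(phase, aperture):
    """Ratio of the peak focal-plane intensity of the aberrated pupil field to
    the diffraction-limited peak, both obtained by FFT-based (Fraunhofer)
    imaging of the pupil."""
    focal = np.fft.fft2(aperture * np.exp(1j * phase))
    ideal = np.fft.fft2(aperture)
    return float(np.abs(focal).max()**2 / np.abs(ideal).max()**2)

n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
aperture = ((x**2 + y**2) <= 1.0).astype(float)  # unit circular pupil

rng = np.random.default_rng(3)
residual = 0.5 * rng.standard_normal((n, n)) * aperture  # residual phase, rad
```

For small zero-mean residual phase of variance sigma^2 the Strehl ratio approaches exp(-sigma^2) (about 0.78 for sigma = 0.5 rad), which provides a quick sanity check on an AO simulation's residual-phase statistics.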
Spectral-infinite-element Simulations of Self-gravitating Seismic Wave Propagation
NASA Astrophysics Data System (ADS)
Gharti, H. N.; Tromp, J.
2015-12-01
Gravitational perturbations induced by particle motions are governed by the Poisson/Laplace equation, whose domain includes all of space. Due to its unbounded nature, obtaining an accurate numerical solution is very challenging. Consequently, gravitational perturbations are generally ignored in simulations of global seismic wave propagation, and only the unperturbed equilibrium gravitational field is taken into account. This so-called "Cowling approximation" is justified for relatively short-period waves (periods less than 250 s), but is invalid for free-oscillation seismology. Existing methods are usually based on spherical harmonic expansions. Most methods are either limited to spherically symmetric models or have to rely on costly iterative implementation procedures. We propose a spectral-infinite-element method to solve wave propagation in a self-gravitating Earth model. The spectral-infinite-element method combines the spectral-element method with the infinite-element method. Spectral elements are used to capture the internal field, and infinite elements are used to represent the external field. To solve the weak form of the Poisson/Laplace equation, we employ Gauss-Legendre-Lobatto quadrature in spectral elements. In infinite elements, Gauss-Radau quadrature is used in the radial direction whereas Gauss-Legendre-Lobatto quadrature is used in the lateral directions. Infinite elements naturally integrate with spectral elements, thereby avoiding an iterative implementation. We demonstrate the accuracy of the method by comparing our results with a spherical harmonics method. The new method empowers us to tackle several problems in long-period seismology accurately and efficiently.
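The Gauss-Legendre-Lobatto quadrature used in the spectral elements includes both interval endpoints (which is what lets neighboring elements share boundary nodes) and is exact for polynomials up to degree 2n - 3. A sketch with the standard 5-point rule on [-1, 1]:

```python
import math

# 5-point Gauss-Legendre-Lobatto rule on [-1, 1]: nodes include both endpoints,
# and the rule is exact for polynomials up to degree 2*5 - 3 = 7.
GLL5_NODES = [-1.0, -math.sqrt(3.0 / 7.0), 0.0, math.sqrt(3.0 / 7.0), 1.0]
GLL5_WEIGHTS = [1.0 / 10.0, 49.0 / 90.0, 32.0 / 45.0, 49.0 / 90.0, 1.0 / 10.0]

def gll5(f):
    """Approximate the integral of f over [-1, 1] with the 5-point GLL rule."""
    return sum(w * f(x) for x, w in zip(GLL5_NODES, GLL5_WEIGHTS))
```

gll5 reproduces the integral of x**6 (2/7) exactly, while x**8 already shows a quadrature error; this exactness-versus-endpoint trade-off relative to plain Gauss-Legendre is the price paid for element connectivity.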
Particle Simulation of the Blob Propagation in Non-Uniform Plasmas
NASA Astrophysics Data System (ADS)
Hasegawa, Hiroki; Ishiguro, Seiji
2014-10-01
The kinetic dynamics of blob propagation in non-uniform plasmas have been studied with a three-dimensional electrostatic plasma particle simulation code. In our previous studies, we assumed that grad-B is uniform in the toroidal and poloidal directions. In scrape-off layer (SOL) plasmas of real magnetic confinement devices, however, the direction of grad-B differs between the inside and the outside of the torus. In this study, we have investigated the blob kinetic dynamics in a system where grad-B is spatially non-uniform. We observe potential and particle flow structures different from those shown in our previous studies. Thus, it is found that the propagation properties of blobs in non-uniform grad-B plasmas are also distinct. These properties depend on the initial blob location in the toroidal direction. We will also discuss the application of this study to pellet dynamics. Supported by NIFS Collaboration Research programs (NIFS13KNSS038 and NIFS14KNXN279) and a Grant-in-Aid for Scientific Research from Japan Society for the Promotion of Science (KAKENHI 23740411).
Scintillation analysis of truncated Bessel beams via numerical turbulence propagation simulation.
Eyyuboğlu, Halil T; Voelz, David; Xiao, Xifeng
2013-11-20
Scintillation aspects of truncated Bessel beams propagated through atmospheric turbulence are investigated using a numerical wave optics random phase screen simulation method. On-axis, aperture averaged scintillation and scintillation relative to a classical Gaussian beam of equal source power and scintillation per unit received power are evaluated. It is found that in almost all circumstances studied, the zeroth-order Bessel beam will deliver the lowest scintillation. Low aperture averaged scintillation levels are also observed for the fourth-order Bessel beam truncated by a narrower source window. When assessed relative to the scintillation of a Gaussian beam of equal source power, Bessel beams generally have less scintillation, particularly at small receiver aperture sizes and small beam orders. Upon including in this relative performance measure the criteria of per unit received power, this advantageous position of Bessel beams mostly disappears, but zeroth- and first-order Bessel beams continue to offer some advantage for relatively smaller aperture sizes, larger source powers, larger source plane dimensions, and intermediate propagation lengths.
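The scintillation quantities compared in this study all derive from the scintillation index, sigma_I^2 = <I^2>/<I>^2 - 1, computed over an ensemble of received intensities. A sketch using lognormal fading as an illustrative stand-in for the phase-screen ensemble (the distribution and its log-variance are my assumptions):

```python
import numpy as np

def scintillation_index(intensity):
    """sigma_I^2 = <I^2>/<I>^2 - 1 over an ensemble of received intensities;
    var/mean^2 is algebraically the same quantity."""
    intensity = np.asarray(intensity, dtype=float)
    return float(intensity.var() / intensity.mean()**2)

rng = np.random.default_rng(2)
# Weak-turbulence stand-in: lognormal intensity fluctuations. For log-variance
# s^2, the theoretical scintillation index is exp(s^2) - 1.
s2 = 0.1
I = rng.lognormal(mean=-s2 / 2, sigma=np.sqrt(s2), size=200_000)
```

In a phase-screen simulation, the same function is applied to the on-axis or aperture-averaged intensities collected over many independent turbulence realizations.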
NASA Astrophysics Data System (ADS)
Petrov, P.; Newman, G. A.
2010-12-01
In the Laplace-Fourier domain we have developed a 3D code for full-wave-field simulation in elastic media which takes into account the nonlinearity introduced by free-surface effects. Our approach is based on the velocity-stress formulation. In contrast to the conventional formulation, we define the material properties such as density and Lamé constants not at nodal points but within cells. This second-order finite-difference method, formulated on a cell-based grid, generates numerical solutions compatible with analytical ones within the error range determined by dispersion analysis. Our simulator will be embedded in an inversion scheme for joint seismic-electromagnetic imaging. It also offers possibilities for preconditioning seismic wave propagation problems in the frequency domain. References: Shin, C. & Cha, Y. (2009), Waveform inversion in the Laplace-Fourier domain, Geophys. J. Int. 177(3), 1067-1079. Shin, C. & Cha, Y. H. (2008), Waveform inversion in the Laplace domain, Geophys. J. Int. 173(3), 922-931. Commer, M. & Newman, G. (2008), New advances in three-dimensional controlled-source electromagnetic inversion, Geophys. J. Int. 172(2), 513-535. Newman, G. A., Commer, M. & Carazzone, J. J. (2010), Imaging CSEM data in the presence of electrical anisotropy, Geophysics, in press.
NASA Astrophysics Data System (ADS)
Hoshiba, M.; Aoki, S.
2014-12-01
In many methods of current Earthquake Early Warning (EEW) systems, the hypocenter and magnitude are determined quickly and then the strengths of ground motions are predicted. The 2011 Tohoku Earthquake (Mw 9.0), however, revealed some technical issues with the conventional methods: under-prediction due to the large extent of the fault rupture, and over-prediction due to confusion of the system by multiple aftershocks occurring simultaneously. To address these issues, a new concept is proposed for EEW: applying a data assimilation technique, the present wavefield is estimated precisely in real time (real-time shake mapping) and then the future wavefield is predicted time-evolutionally using the physical process of seismic wave propagation. Information on hypocenter location and magnitude is not required, which is fundamentally different from the conventional method. In the proposed method, the data assimilation technique is applied to estimate the current spatial distribution of the wavefield, using not only actual observations but also the anticipated wavefield predicted from one time step before. Real-time application of the data assimilation technique enables us to estimate the wavefield in real time, which corresponds to real-time shake mapping. Once the present situation is estimated precisely, we proceed to predict the future situation using a simulation of wave propagation. The proposed method is applied to the 2011 Tohoku Earthquake (Mw 9.0) and the 2004 Mid-Niigata earthquake (Mw 6.7). The future wavefield is precisely predicted, and the prediction improves as the lead time shortens: for example, the error of a 10 s prediction is smaller than that of a 20 s prediction, and that of a 5 s prediction is much smaller. With this method, it becomes possible to predict ground motion precisely even for cases of large fault-rupture extent and multiple simultaneous earthquakes. The proposed method is based on a simulation of the physical process from the precisely estimated present condition.
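The assimilation step that blends the anticipated wavefield with incoming observations can be illustrated, in its simplest scalar form, by an optimal-interpolation update; the variance-weighted gain below is a textbook sketch under strong simplifying assumptions, not the authors' full real-time scheme:

```python
def assimilate(forecast, obs, var_f, var_o):
    """Optimal-interpolation update: blend a model forecast with an observation,
    weighting each by its error variance. The gain K = var_f / (var_f + var_o)
    pulls the analysis toward whichever source is more trustworthy.
    Returns (analysis, analysis_error_variance)."""
    gain = var_f / (var_f + var_o)
    analysis = forecast + gain * (obs - forecast)
    var_a = (1.0 - gain) * var_f  # equals 1 / (1/var_f + 1/var_o)
    return analysis, var_a
```

The analysis error variance is always below both input variances, which is why cycling this update against live waveform data sharpens the present wavefield estimate before it is extrapolated forward by the wave-propagation simulation.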
Blank, Helen; Davis, Matthew H
2016-11-01
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model.
Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes
NASA Astrophysics Data System (ADS)
Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.
2015-12-01
H. Annamalai (1), B. Taguchi (2), J. P. McCreary (1), J. Hafner (1), M. Nagura (2), and T. Miyama (2); (1) International Pacific Research Center, University of Hawaii, USA; (2) Application Laboratory, JAMSTEC, Japan. In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, show little improvement. One implication is that uncertainties in the future projections of time-mean changes to AAM rainfall may not have been reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, progress in monsoon modeling has been slow, which leads us to wonder: has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is on better understanding the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We argue that a systematic and coordinated identification of the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.
Simulation of wave propagation in boreholes and radial profiling of formation elastic parameters
NASA Astrophysics Data System (ADS)
Chi, Shihong
Modern acoustic logging tools measure in-situ elastic wave velocities of rock formations. These velocities provide ground truth for time-depth conversions in seismic exploration. They are also widely used to quantify the mechanical strength of formations for applications such as wellbore stability analysis and sand production prevention. Despite continued improvements in acoustic logging technology and interpretation methods that take advantage of full waveform data, acoustic logs processed with current industry standard methods often remain influenced by formation damage and mud-filtrate invasion. This dissertation develops an efficient and accurate algorithm for the numerical simulation of wave propagation in fluid-filled boreholes in the presence of complex, near-wellbore damaged zones. The algorithm is based on the generalized reflection and transmission matrices method. Assessment of mud-filtrate invasion effects on borehole acoustic measurements is performed through simulation of time-lapse logging in the presence of complex radial invasion zones. The validity of log corrections performed with the Biot-Gassmann fluid substitution model is assessed by comparing the velocities estimated from array waveform data simulated for homogeneous and radially heterogeneous formations that sustain mud-filtrate invasion. The proposed inversion algorithm uses array waveform data to estimate radial profiles of formation elastic parameters. These elastic parameters can be used to construct more realistic near-wellbore petrophysical models for applications in seismic exploration, geo-mechanics, and production. Frequency-domain, normalized amplitude and phase information contained in array waveform data are input to the nonlinear Gauss-Newton inversion algorithm. Validation of both numerical simulation and inversion is performed against previously published results based on the Thomson-Haskell method and travel time tomography, respectively. This exercise indicates that the
Steepening of parallel propagating hydromagnetic waves into magnetic pulsations - A simulation study
NASA Technical Reports Server (NTRS)
Akimoto, K.; Winske, D.; Onsager, T. G.; Thomsen, M. F.; Gary, S. P.
1991-01-01
The steepening mechanism of parallel propagating low-frequency MHD-like waves observed upstream of the earth's quasi-parallel bow shock has been investigated by means of electromagnetic hybrid simulations. It is shown that an ion beam through the resonant electromagnetic ion/ion instability excites large-amplitude waves, which consequently pitch angle scatter, decelerate, and eventually magnetically trap beam ions in regions where the wave amplitudes are largest. As a result, the beam ions become bunched in both space and gyrophase. As these higher-density, nongyrotropic beam segments are formed, the hydromagnetic waves rapidly steepen, resulting in magnetic pulsations, with properties generally in agreement with observations. This steepening process operates on the scale of the linear growth time of the resonant ion/ion instability. Many of the pulsations generated by this mechanism are left-hand polarized in the spacecraft frame.
Numerical simulations of wave propagation in long bars with application to Kolsky bar testing
Corona, Edmundo
2014-11-01
Material testing using the Kolsky bar, or split Hopkinson bar, technique has proven instrumental to conduct measurements of material behavior at strain rates on the order of 10^{3} s^{-1}. Test design and data reduction, however, remain empirical endeavors based on the experimentalist's experience. Issues such as wave propagation across discontinuities, the effect of the deformation of the bar surfaces in contact with the specimen, the effect of geometric features in tensile specimens (dog-bone shape), wave dispersion in the bars and other particulars are generally treated using simplified models. The work presented here was conducted in Q3 and Q4 of FY14. The objective was to demonstrate the feasibility of numerical simulations of Kolsky bar tests, which was done successfully.
Signal propagation time from the magnetotail to the ionosphere: OpenGGCM simulation
NASA Astrophysics Data System (ADS)
Ferdousi, Banafsheh; Raeder, Joachim
2016-07-01
Distinguishing the processes that occur during the first 2 min of a substorm depends critically on the correct timing of different signals between the plasma sheet and the ionosphere. To investigate signal propagation paths and signal travel times, we use a global magnetohydrodynamic model of the Earth's magnetosphere and ionosphere, OpenGGCM-CTIM. We launch waves by creating single impulses or sinusoidal pulsations at various locations in the magnetotail, and we investigate the paths the waves take and the time different waves need to reach the ionosphere. We find that it takes approximately 27, 36, 45, 60, and 72 s for waves to travel to the ionosphere from the tail plasma sheet at x = -10, -15, -20, -25, and -30 RE, respectively, contrary to previous reports. We also find that waves originating in the plasma sheet generally travel faster through the lobes than through the plasma sheet.
Chen, Qiang; Chen, Bin
2012-10-01
In this paper, a hybrid electrodynamics and kinetics numerical model based on the finite-difference time-domain method and the lattice Boltzmann method is presented for electromagnetic wave propagation in weakly ionized hydrogen plasmas. In this framework, the multicomponent Bhatnagar-Gross-Krook collision model, which accounts for both elastic and Coulomb collisions, and the multicomponent force model based on the Guo model are introduced; together they provide a fine-grained description of the interaction between an electromagnetic wave and a weakly ionized plasma. Cubic spline interpolation and a mean filtering technique are introduced, respectively, to handle the multiscale problem and to smooth physical quantities polluted by numerical noise. Several simulations have been implemented to validate our model. The numerical results are consistent with a simplified analytical model, which demonstrates that this model can successfully obtain satisfactory numerical solutions.
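The finite-difference time-domain half of such a hybrid scheme can be illustrated in isolation. The sketch below is an illustration only, not the authors' model: it advances a 1-D vacuum wave with the standard Yee leapfrog in normalized units, and the lattice Boltzmann plasma coupling, BGK collisions, and force models described in the abstract are omitted. All parameter values are assumptions for the example.

```python
import math

def fdtd_1d(steps=100, n=400, src=50):
    """Minimal 1-D vacuum FDTD (Yee leapfrog) in normalized units
    (c = dx = 1, dt = dx/c, the dispersionless "magic" time step).
    A soft Gaussian source at cell `src` launches pulses both ways."""
    ez = [0.0] * n  # electric field at integer nodes
    hy = [0.0] * n  # magnetic field at staggered half nodes
    for t in range(steps):
        # update H from the curl of E
        for i in range(n - 1):
            hy[i] += ez[i + 1] - ez[i]
        # additive (soft) Gaussian source, peaking at step 30
        ez[src] += math.exp(-((t - 30) / 10.0) ** 2)
        # update E from the curl of H
        for i in range(1, n):
            ez[i] += hy[i] - hy[i - 1]
    return ez
```

At the magic time step the update propagates information exactly one cell per step, so the right-going pulse should sit near cell src + (steps - 30) at the end of the run; in the full hybrid scheme the plasma current computed by the kinetic solver would enter the E-field update as an extra term.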
A 2D spring model for the simulation of ultrasonic wave propagation in nonlinear hysteretic media.
Delsanto, P P; Gliozzi, A S; Hirsekorn, M; Nobili, M
2006-07-01
A two-dimensional (2D) approach to the simulation of ultrasonic wave propagation in nonclassical nonlinear (NCNL) media is presented. The approach extends to 2D a previously proposed one-dimensional (1D) Spring Model, with the inclusion of a PM space treatment of the interstitial regions between grains. The extension to 2D is of great practical relevance for its potential applications in the field of quantitative nondestructive evaluation and material characterization, but it is also useful, from a theoretical point of view, to gain better insight into the interaction mechanisms involved. The model is tested by means of virtual 2D experiments. The expected NCNL behaviors are qualitatively well reproduced.
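As a point of reference, the linear limit of a 1D spring model can be sketched as a chain of masses coupled by springs and advanced with an explicit leapfrog update. This is only a sketch with assumed parameter values; the nonclassical hysteretic PM-space element that is the paper's actual subject is omitted.

```python
def simulate_chain(n=200, steps=400, dt=0.05, k=1.0, m=1.0):
    """Leapfrog update of a 1-D chain of masses coupled by linear
    springs (fixed ends). A displacement step imposed at the left end
    propagates at speed c = dx*sqrt(k/m) (here dx = 1), the linear
    limit of a spring-model discretization of the wave equation."""
    u = [0.0] * n          # displacements
    v = [0.0] * n          # velocities
    u[0] = 1.0             # step excitation held at the left end
    for _ in range(steps):
        # internal accelerations from neighbouring springs
        a = [0.0] * n
        for i in range(1, n - 1):
            a[i] = k / m * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(n):
            v[i] += dt * a[i]
            u[i] += dt * v[i]
    return u
```

With these values the front travels at unit speed, so after steps*dt = 20 time units it has reached roughly the 20th mass; a 2D version couples each mass to its grid neighbours, and the NCNL behaviour enters by replacing the linear spring law with a hysteretic one.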
Bazelyan, E. M.; Sysoev, V. S.; Andreev, M. G.
2009-08-15
A numerical model of a spark discharge propagating along the ground surface from the point at which a ~100-kA current pulse is input into the ground has been developed based on experiments in which the velocity of a long leader was measured as a function of the leader current. The results of numerical simulations are in good agreement with the measured characteristics of creeping discharges excited in field experiments by using a high-power explosive magnetic generator. The reason why the length of a spark discharge depends weakly on the number of simultaneously developing channels is found. Analysis of the influence of the temporal characteristics of the current pulse on the parameters of the creeping spark discharge shows that actual lightning may exhibit similar behavior.
NASA Astrophysics Data System (ADS)
Zhan, Qiwei; Sun, Qingtao; Ren, Qiang; Fang, Yuan; Wang, Hua; Liu, Qing Huo
2017-08-01
We develop a non-conformal mesh discontinuous Galerkin (DG) pseudospectral time domain (PSTD) method for 3-D elastic wave scattering problems with arbitrary fracture inclusions. In contrast to directly meshing the exact thin-layer fracture, we use the linear-slip model, one kind of transmission boundary condition, for the DG scheme. Intrinsically, we can efficiently impose a jump-boundary condition by defining a new numerical flux for the surface integration in the DG framework. This transmission boundary condition in the DG-PSTD method significantly reduces the computational cost. 3-D DG simulations and accurate waveform comparisons validate our results for arbitrary discrete fractures. Numerical results indicate that fractures have a significant influence on wave propagation.
Lisitsa, Vadim; Tcheverda, Vladimir; Botter, Charlotte
2016-04-15
We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near surface part and free surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general, they require rectangular grids, leading to the stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.
López, Rodrigo A.; Muñoz, Víctor; Viñas, Adolfo F.; Valdivia, Juan A.
2015-09-15
We use a particle-in-cell simulation to study the propagation of localized structures in a magnetized electron-positron plasma with relativistic finite temperature. We use as initial condition for the simulation an envelope soliton solution of the nonlinear Schrödinger equation, derived from the relativistic two fluid equations in the strongly magnetized limit. This envelope soliton turns out not to be a stable solution for the simulation and splits in two localized structures propagating in opposite directions. However, these two localized structures exhibit a soliton-like behavior, as they keep their profile after they collide with each other due to the periodic boundary conditions. We also observe the formation of localized structures in the evolution of a spatially uniform circularly polarized Alfvén wave. In both cases, the localized structures propagate with an amplitude independent velocity.
Computational Simulation of Damage Propagation in Three-Dimensional Woven Composites
NASA Technical Reports Server (NTRS)
Huang, Dade; Minnetyan, Levon
2005-01-01
Three dimensional (3D) woven composites have demonstrated multi-directional properties and improved transverse strength, impact resistance, and shear characteristics. The objective of this research is to develop a new model for predicting the elastic constants, hygrothermal effects, thermomechanical response, and stress limits of 3D woven composites; and to develop a computational tool to facilitate the evaluation of 3D woven composite structures with regard to damage tolerance and durability. Fiber orientations of weave and braid patterns are defined with reference to composite structural coordinates. Orthotropic ply properties and stress limits computed via micromechanics are transformed to composite structural coordinates and integrated to obtain the 3D properties. The various stages of degradation, from damage initiation to collapse of structures, in the 3D woven structures are simulated for the first time. Three dimensional woven composite specimens with various woven patterns under different loading conditions, such as tension, compression, bending, and shear are simulated in the validation process of this research. Damage initiation, growth, accumulation, and propagation to fracture are included in these simulations.
Finite-element simulation of wave propagation in periodic piezoelectric SAW structures.
Hofer, Manfred; Finger, Norman; Kovacs, Günter; Schöberl, Joachim; Zaglmayr, Sabine; Langer, Ulrich; Lerch, Reinhard
2006-06-01
Many surface acoustic wave (SAW) devices consist of quasiperiodic structures that are designed by successive repetition of a base cell. The precise numerical simulation of such devices, including all physical effects, is currently beyond the capacity of high-end computation. Therefore, we have to restrict the numerical analysis to the periodic substructure. By using the finite-element method (FEM), this can be done by introducing periodic boundary conditions (PBCs) at special artificial boundaries. To be able to describe the complete dispersion behavior of waves, including damping effects, the PBC has to be able to model each mode that can be excited within the periodic structure. Therefore, the condition used for the PBCs must hold for each phase and amplitude difference existing at periodic boundaries. Based on the Floquet theorem, our two newly developed PBC algorithms allow the calculation of both the phase and the amplitude coefficients of the wave. In the first part of this paper we describe the basic theory of the PBCs. Based on the FEM, we develop two different methods that deliver the same results but have totally different numerical properties and, therefore, allow the use of problem-adapted solvers. Furthermore, we show how to compute the charge distribution of periodic SAW structures with the aid of the new PBCs. In the second part, we compare the measured and simulated dispersion behavior of waves propagating on periodic SAW structures for two different piezoelectric substrates. Then we compare measured and simulated input admittances of structures similar to SAW resonators.
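The Floquet idea behind such PBCs can be seen in the simplest periodic medium, a monatomic mass-spring chain (an illustrative stand-in; the paper's piezoelectric FEM cells are far richer). Imposing the Floquet condition u_{n+1} = e^{iqa} u_n collapses the infinite chain onto a single unit cell and yields its dispersion relation; all parameter values below are assumptions for the example.

```python
import cmath
import math

def bloch_dispersion(q, k=1.0, m=1.0, a=1.0):
    """Angular frequency omega(q) of a monatomic mass-spring chain,
    obtained by imposing the Floquet (Bloch) condition
    u_{n+1} = exp(i*q*a) * u_n, which reduces the infinite periodic
    chain to one unit cell."""
    phase = cmath.exp(1j * q * a)
    # single-cell dynamical term: (k/m) * (2 - e^{iqa} - e^{-iqa})
    d = (k / m) * (2 - phase - phase.conjugate())
    return math.sqrt(d.real)

# analytic check: omega(q) = 2*sqrt(k/m)*|sin(q*a/2)|
```

The same mechanism is at work in the FEM version: the phase (and, with damping, amplitude) factor between the two periodic boundaries becomes the eigenvalue of a single-cell problem, from which the full dispersion behavior of the periodic SAW structure follows.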
On a Wavelet-Based Method for the Numerical Simulation of Wave Propagation
NASA Astrophysics Data System (ADS)
Hong, Tae-Kyung; Kennett, B. L. N.
2002-12-01
A wavelet-based method for the numerical simulation of acoustic and elastic wave propagation is developed. Using a displacement-velocity formulation and treating spatial derivatives with linear operators, the wave equations are rewritten as a system of equations whose evolution in time is controlled by first-order derivatives. The linear operators for spatial derivatives are implemented in wavelet bases using an operator projection technique with nonstandard forms of wavelet transform. Using a semigroup approach, the discretized solution in time can be represented in an explicit recursive form, based on Taylor expansion of exponential functions of operator matrices. The boundary conditions are implemented by augmenting the system of equations with equivalent force terms at the boundaries. The wavelet-based method is applied to the acoustic wave equation with rigid boundary conditions at both ends of a 1-D domain and to the elastic wave equation with traction-free boundary conditions at a free surface in 2-D media. The method can be applied directly to media with plane surfaces, and surface topography can be included with the aid of distortion of the grid describing the properties of the medium. The numerical results are compared with analytic solutions based on the Cagniard technique and show high accuracy. The wavelet-based approach is also demonstrated for complex media including highly varying topography or stochastic heterogeneity with rapid variations in physical parameters. These examples indicate the value of the approach as an accurate and stable tool for the simulation of wave propagation in general complex media.
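The semigroup time stepping described above can be sketched for the displacement-velocity system u_t = v, v_t = c²u_xx, advanced by a truncated Taylor expansion of exp(dt·A). This is a minimal sketch under stated assumptions: a second-order finite-difference Laplacian stands in for the wavelet operator projection, and all sizes and step counts are illustrative.

```python
import numpy as np

def step_taylor(w, A, dt, order=4):
    """One explicit step w <- exp(dt*A) @ w, with the matrix exponential
    approximated by its truncated Taylor series (semigroup approach)."""
    term, out = w.copy(), w.copy()
    for j in range(1, order + 1):
        term = (dt / j) * (A @ term)   # accumulates (dt*A)^j / j! @ w
        out += term
    return out

def acoustic_system(n, c=1.0, h=1.0):
    """First-order system for u_t = v, v_t = c^2 u_xx on n interior
    nodes with rigid (u = 0) ends, the Laplacian discretized by finite
    differences in place of the wavelet operator projection."""
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [c**2 * D2, Z]])
```

For an eigenmode of the discrete Laplacian the scheme should reproduce the oscillation cos(omega*t) of that mode, which makes the truncation order easy to verify numerically.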
Yılmaz, Bülent; Çiftçi, Emre
2013-06-01
Extracorporeal Shock Wave Lithotripsy (ESWL) is based on disintegration of the kidney stone by delivering high-energy shock waves that are created outside the body and transmitted through the skin and body tissues. Nowadays high-energy shock waves are also used in orthopedic operations and investigated for use in the treatment of myocardial infarction and cancer. Because of these new application areas, novel lithotriptor designs are needed for different kinds of treatment strategies. In this study our aim was to develop a versatile computer simulation environment which would give device designers working on various medical applications that use the shock wave principle a substantial amount of flexibility while testing the effects of new parameters such as reflector size, material properties of the medium, water temperature, and different clinical scenarios. For this purpose, we created a finite-difference time-domain (FDTD)-based computational model in which most of the physical system parameters were defined as an input and/or as a variable in the simulations. We constructed a realistic computational model of a commercial electrohydraulic lithotriptor and optimized our simulation program using the results that were obtained by the manufacturer in an experimental setup. We then compared the simulation results with the results from an experimental setup in which the oxygen level in water was varied. Finally, we studied the effects of changing input parameters such as ellipsoid size and material, temperature change in the wave propagation media, and shock wave source point misalignment. The simulation results were consistent with the experimental results and the expected effects of variation in physical parameters of the system. The results of this study encourage further investigation and provide adequate evidence that the numerical modeling of a shock wave therapy system is feasible and can provide a practical means to test novel ideas in new device design procedures.
Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M
2016-01-01
Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect the molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated data expected from the study of the plasma of patients with lower urinary tract dysfunction with the aptamer proteomics assay SOMAscan (SomaLogic Inc, Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, and 80 in aging. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better for the simulated data than the other two methods and enabled classification with a misclassification error below 5% in the simulated cohort of 100 patients, based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
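A minimal version of such an in silico experiment might look as follows. This is a sketch, not the authors' code: the cohort size, protein counts, and effect size mirror the numbers quoted in the abstract, but the discriminating proteins are simulated as independent (the correlated case the authors also treat is omitted), and the tiny 2-means implementation stands in for a full clustering library.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cohort(n=100, n_proteins=1129, n_signal=40, effect=1.5):
    """Two equal subtypes; the first n_signal proteins are shifted by
    `effect` standard deviations in subtype 1 (independent proteins)."""
    labels = np.repeat([0, 1], n // 2)
    X = rng.standard_normal((n, n_proteins))
    X[labels == 1, :n_signal] += effect
    return X, labels

def two_means(X, iters=30, restarts=5):
    """Minimal 2-means clustering (Lloyd's algorithm) with restarts,
    keeping the assignment with the lowest within-cluster sum of squares."""
    best, best_inertia = None, np.inf
    for _ in range(restarts):
        centers = X[rng.choice(len(X), 2, replace=False)].copy()
        for _ in range(iters):
            d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
            assign = d.argmin(1)
            for c in (0, 1):
                if (assign == c).any():
                    centers[c] = X[assign == c].mean(0)
        inertia = d.min(1).sum()
        if inertia < best_inertia:
            best, best_inertia = assign, inertia
    return best

def misclassification(assign, labels):
    """Error rate up to permutation of the two cluster labels."""
    err = (assign != labels).mean()
    return min(err, 1 - err)
```

Repeating this over a grid of cohort sizes and effect sizes gives the misclassification-versus-sample-size curves that play the role of a power analysis for unsupervised classification.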
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
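The headline result above, that outputs respond more strongly to forcing biases than to comparably sized random errors, can be illustrated with a deliberately tiny toy experiment. Everything here is an assumption for illustration: a one-line accumulation model stands in for the Utah Energy Balance model, and a simple ensemble comparison stands in for the Sobol' analysis.

```python
import random

random.seed(1)

def season_snowfall(temps, precip):
    """Toy model: precipitation accumulates as snowfall on days with
    T < 0 C. A deliberately simple stand-in for a physically based
    snow model, used only to contrast bias vs. random forcing error."""
    return sum(p for t, p in zip(temps, precip) if t < 0.0)

days = 120
temps = [-10 + 20 * d / (days - 1) for d in range(days)]   # -10 C -> +10 C
precip = [2.0] * days                                      # mm/day

base = season_snowfall(temps, precip)                      # 120 mm

mag = 2.0  # forcing error magnitude, deg C
# (1) systematic bias: every day shifted by +mag
bias_effect = abs(season_snowfall([t + mag for t in temps], precip) - base)
# (2) zero-mean random error of the same magnitude, averaged over 200 draws
runs = [season_snowfall([t + random.gauss(0.0, mag) for t in temps], precip)
        for _ in range(200)]
random_effect = abs(sum(runs) / len(runs) - base)
```

The bias shifts the rain-snow threshold systematically on every day, while the zero-mean random errors largely cancel across the season, so bias_effect dwarfs random_effect; the paper makes the analogous point quantitatively with Sobol' indices over coexisting errors in all forcings.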
The Contribution of Statistical Errors in DNS Data Quantified with RANS-DNS Simulations
NASA Astrophysics Data System (ADS)
Poroseva, Svetlana V.; Jeyapaul, Elbert; Murman, Scott M.; Colmenares F., Juan D.
2016-11-01
In RANS-DNS simulations, the Reynolds-averaged Navier-Stokes (RANS) equations are solved, with all terms but molecular diffusion being represented by the data from direct numerical simulations (DNS). No turbulence modeling is involved in such simulations. Recently, we demonstrated the use of RANS-DNS simulations as a framework for uncertainty quantification in statistical data collected from DNS. In the current study, contribution of the statistical error in the DNS data uncertainty is investigated using RANS-DNS simulations. Simulations of the Reynolds stress transport were conducted in a planar fully-developed turbulent channel flow at Re = 392 (based on the friction velocity) using DNS data collected at seven averaging times. The open-source CFD software OpenFOAM was used in RANS simulations. Budgets for the Reynolds stresses were obtained from DNS performed using a pseudo-spectral (Fourier/Chebyshev-tau) method. The material is in part based upon work supported by NASA under Award NNX12AJ61A.
NASA Astrophysics Data System (ADS)
Zhao, Y.; Ciais, P.; Peylin, P.; Viovy, N.; Longdoz, B.; Bonnefond, J. M.; Rambal, S.; Klumpp, K.; Olioso, A.; Cellier, P.; Maignan, F.; Eglin, T.; Calvet, J. C.
2011-03-01
We analyze how biases of meteorological drivers impact the calculation of ecosystem CO2, water and energy fluxes by models. To do so, we drive the same ecosystem model by meteorology from gridded products and by "true" meteorology from local observations at eddy-covariance flux sites. The study is focused on six flux tower sites in France spanning a 7-14 °C and 600-1040 mm yr-1 climate gradient, with forest, grassland and cropland ecosystems. We evaluate the results of the ORCHIDEE process-based model driven by four different meteorological models against the same model driven by site-observed meteorology. The evaluation is decomposed into characteristic time scales. The main result is that there are significant differences between meteorological models and local tower meteorology. The seasonal cycle of air temperature, humidity and shortwave downward radiation is reproduced correctly by all meteorological models (average R2 = 0.90). The misfit between gridded meteorological drivers and tower meteorology is largest at sites located near the coast and influenced by sea breeze, or located at altitude. We show that day-to-day variations in weather are not completely well reproduced by meteorological models, with R2 between modeled grid point and measured local meteorology ranging from 0.35 (REMO model) to 0.70 (SAFRAN model). The bias of meteorological models impacts the flux simulation by ORCHIDEE, and thus would have an effect on regional and global budgets. The forcing error, defined as the simulated flux difference resulting from prescribing modeled instead of observed local meteorological drivers to ORCHIDEE, is quantified for the six studied sites and different time scales. The magnitude of this forcing error is compared to that of the model error, defined as the modeled-minus-observed flux, thus containing uncertain parameterizations, parameter values, and initialization. The forcing error is the largest on a daily time scale, for which it is
Perez-Benito, Joaquin F; Mulero-Raichs, Mar
2016-10-06
Many kinetic studies concerning homologous reaction series report the existence of an activation enthalpy-entropy linear correlation (compensation plot), its slope being the temperature at which all the members of the series have the same rate constant (isokinetic temperature). Unfortunately, it has been demonstrated by statistical methods that the experimental errors associated with the activation enthalpy and entropy are mutually interdependent. Therefore, the possibility that some of those correlations might be caused by accidental errors has been explored by numerical simulations. As a result of this study, a computer program has been developed to evaluate the probability that experimental errors might lead to a linear compensation plot starting from an initially random scatter of activation parameters (p-test). Application of this program to kinetic data for 100 homologous reaction series extracted from bibliographic sources has allowed us to conclude that most of the reported compensation plots can hardly be explained by the accumulation of experimental errors, thus requiring the existence of a genuine, physically meaningful correlation.
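The statistical interdependence the abstract refers to is easy to reproduce numerically: refitting the same Arrhenius data many times with random noise yields strongly correlated estimates of the activation energy and pre-exponential factor, even though only one true parameter set exists. The sketch below uses assumed rate parameters, temperatures, and noise level purely for illustration (it is not the authors' p-test program).

```python
import math
import random

random.seed(7)

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps, lnks):
    """Least-squares fit of ln k = ln A - Ea/(R*T); returns (Ea, lnA)."""
    xs = [1.0 / t for t in temps]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(lnks) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, lnks))
    slope = sxy / sxx                 # = -Ea/R
    return -slope * R, ybar - slope * xbar

temps = [300 + 5 * i for i in range(5)]       # narrow 300-320 K window
Ea_true, lnA_true = 80e3, 30.0                # one true parameter set
fits = []
for _ in range(500):
    lnks = [lnA_true - Ea_true / (R * t) + random.gauss(0, 0.05)
            for t in temps]                   # noisy replicate data set
    fits.append(fit_arrhenius(temps, lnks))

# correlation between fitted Ea and lnA across the noisy replicates
eas = [f[0] for f in fits]
lnas = [f[1] for f in fits]
me, ml = sum(eas) / 500, sum(lnas) / 500
cov = sum((e - me) * (l - ml) for e, l in zip(eas, lnas))
corr = cov / math.sqrt(sum((e - me) ** 2 for e in eas)
                       * sum((l - ml) ** 2 for l in lnas))
```

Because the 1/T values over a narrow temperature window are nearly constant, the slope and intercept errors are almost perfectly anticorrelated, so the fitted (Ea, lnA) pairs line up on an artificial "compensation" line; the p-test asks how likely such an error-induced line is for a given data set.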
Wave-like warp propagation in circumbinary discs - I. Analytic theory and numerical simulations
NASA Astrophysics Data System (ADS)
Facchini, Stefano; Lodato, Giuseppe; Price, Daniel J.
2013-08-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady-state shape of an inviscid disc subject to the binary torques. The steady-state tilt is a monotonically increasing function of radius, but misalignment is found at the disc inner edge. In the absence of viscosity, the disc does not present any twist. Then, we compare the time-dependent evolution of the warped disc calculated via the known linearized equations both with the analytic solutions and with full 3D numerical simulations. The simulations have been performed with the PHANTOM smoothed particle hydrodynamics (SPH) code using two million particles. We find a good agreement both in the tilt and in the phase evolution for small inclinations, even at very low viscosities. Moreover, we have verified that the linearized equations are able to reproduce the diffusive behaviour when α > H/R, where α is the disc viscosity parameter. Finally, we have used the 3D simulations to explore the non-linear regime. We observe a strongly non-linear behaviour, which leads to the breaking of the disc. Then, the inner disc starts precessing with its own precessional frequency. This behaviour has already been observed with numerical simulations in accretion discs around spinning black holes. The evolution of circumstellar accretion discs strongly depends on the warp evolution. Therefore, the issue explored in this paper could be of fundamental importance in order to understand the evolution of accretion discs in crowded environments, when the gravitational interaction with other stars is highly likely, and in multiple systems. Moreover, the evolution of
NASA Astrophysics Data System (ADS)
Vincent, John B.
1999-10-01
Most freshman chemistry textbooks include a figure illustrating the relationship between spinning electrons and the resultant magnetic field. However, some textbooks predict the direction of the magnetic moment using the left-hand rule, while others incorrectly use the right-hand rule. An examination of textbooks published during the last four decades reveals that reversal of the direction of the resultant magnetic field accompanied the introduction of these figures into freshman chemistry textbooks about 20 years ago. Since then, these illustrations have become increasingly popular; and while the error persists, its rate of occurrence has declined until most current textbooks have the direction of the magnetic field produced by a spinning electron correct.
Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.
2007-01-01
When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography, because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Murshid, Syed H.; Chakravarty, Abhijit
2011-06-01
Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric, donut-shaped circular rings, where each ring corresponds to an independent laser source. At the output end of the fiber, these donut-shaped independent channels can be separated either with the help of bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for the individual channels of such a system are also presented.
A simulator study of the interaction of pilot workload with errors, vigilance, and decisions
NASA Technical Reports Server (NTRS)
Smith, H. P. R.
1979-01-01
A full mission simulation of a civil air transport scenario with two levels of workload was used to observe the actions of the crews and the basic aircraft parameters and to record heart rates. The results showed that the number of errors varied widely among crews, but the mean increased in the higher-workload case. The increase in errors was not related to the rise in heart rate but was associated with vigilance times as well as the number of days since the last flight. The recorded data also made it possible to investigate decision time and decision order. These also varied among crews and seemed related to the ability of captains to manage the resources available to them on the flight deck.
1D and 2D simulations of seismic wave propagation in fractured media
NASA Astrophysics Data System (ADS)
Möller, Thomas; Friederich, Wolfgang
2016-04-01
Fractures and cracks have a significant influence on the propagation of seismic waves. Their presence causes reflections and scattering and makes the medium effectively anisotropic. We present a numerical approach to simulating seismic waves in fractured media that does not require direct modelling of the fracture itself, but instead uses the concept of linear slip interfaces developed by Schoenberg (1980). This condition states that at an interface between two imperfectly bonded elastic media, stress is continuous across the interface while displacement is discontinuous. The jump in displacement is assumed to be proportional to the stress, which implies a jump in particle velocity at the interface. We use this condition as a boundary condition to the elastic wave equation and solve this equation in the framework of a Nodal Discontinuous Galerkin scheme using a velocity-stress formulation. We use meshes with tetrahedral elements to discretise the medium. Each individual element face may be declared a slip interface. Numerical fluxes have been derived by solving the 1D Riemann problem for slip interfaces with elastic and viscoelastic rheology. Viscoelasticity is realised either by a Kelvin-Voigt body or a Standard Linear Solid. These fluxes are not limited to 1D and can, with little modification, be used for simulations in higher dimensions as well. The Nodal Discontinuous Galerkin code "neXd" developed by Lambrecht (2013) is used as a basis for the numerical implementation of this concept. We present examples of simulations in 1D and 2D that illustrate the influence of fractures on the seismic wavefield. We demonstrate the accuracy of the simulation through comparison with an analytical solution in 1D.
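For the simplest configuration, normal incidence on a linear-slip interface between two identical elastic half-spaces, the displacement-jump condition yields closed-form reflection and transmission coefficients. A minimal sketch follows, with an assumed rock-like impedance and fracture compliance that are illustrative values, not parameters from the paper:

```python
import numpy as np

def slip_interface_RT(omega, eta, Z):
    """Displacement reflection/transmission coefficients for a plane wave
    at normal incidence on a linear-slip interface (compliance eta)
    between two *identical* elastic half-spaces of impedance Z.
    Derived from: continuous traction, displacement jump [u] = eta * sigma."""
    w = 1j * omega * eta * Z
    T = 2.0 / (2.0 - w)
    R = -w / (2.0 - w)
    return R, T

# Hypothetical numbers: rock-like impedance, small fracture compliance.
Z = 2500.0 * 3000.0          # rho [kg/m^3] * vs [m/s]
eta = 1e-10                  # fracture compliance [m/Pa] (assumed)
for f in (10.0, 100.0, 1000.0):
    R, T = slip_interface_RT(2 * np.pi * f, eta, Z)
    # Same impedance on both sides => energy balance |R|^2 + |T|^2 = 1.
    assert abs(abs(R)**2 + abs(T)**2 - 1.0) < 1e-12
    print(f"f={f:7.1f} Hz  |R|={abs(R):.3f}  |T|={abs(T):.3f}")
```

The frequency dependence of |R| is characteristic of slip interfaces: the fracture is nearly transparent at low frequency and increasingly reflective at high frequency.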
Shahmirzadi, Danial; Li, Ronny X; Konofagou, Elisa E
2012-11-01
Pulse wave imaging (PWI) is an ultrasound-based method for noninvasive characterization of arterial stiffness based on pulse wave propagation. Reliable numerical models of pulse wave propagation in normal and pathological aortas could serve as powerful tools for local pulse wave analysis and as a guideline for PWI measurements in vivo. The objectives of this paper are to (1) apply a fluid-structure interaction (FSI) simulation of a straight-geometry aorta to confirm the Moens-Korteweg relationship between the pulse wave velocity (PWV) and the wall modulus, and (2) validate the simulation findings against phantom and in vitro results. PWI depicted and tracked the pulse wave propagation along the abdominal wall of a canine aorta in vitro in sequential radio-frequency (RF) ultrasound frames and estimated the PWV in the imaged wall. The same system was also used to image multiple polyacrylamide phantoms, mimicking the canine measurements as well as modeling softer and stiffer walls. Finally, the model parameters from the canine and phantom studies were used to perform 3D two-way coupled FSI simulations of pulse wave propagation and estimate the PWV. The simulation results were found to correlate well with the corresponding Moens-Korteweg equation. A high linear correlation was also established between PWV² and E measurements using the combined simulation and experimental findings (R² = 0.98), confirming the relationship established by the aforementioned equation.
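The Moens-Korteweg relationship that the simulations confirm can be sketched directly: PWV = sqrt(E h / (2 rho R)), so PWV² is exactly linear in the wall modulus E. The wall geometry below is illustrative, not the canine or phantom values:

```python
import math

def moens_korteweg_pwv(E, h, R, rho=1060.0):
    """Moens-Korteweg pulse wave velocity (m/s).
    E: wall Young's modulus [Pa], h: wall thickness [m],
    R: vessel inner radius [m], rho: fluid density [kg/m^3]."""
    return math.sqrt(E * h / (2.0 * rho * R))

# Illustrative (not canine-specific) geometry: h = 1 mm, R = 5 mm.
h, R = 1e-3, 5e-3
for E_kpa in (50, 100, 200, 400):
    pwv = moens_korteweg_pwv(E_kpa * 1e3, h, R)
    print(f"E = {E_kpa:3d} kPa  ->  PWV = {pwv:.2f} m/s")

# Doubling E doubles PWV^2, the linear PWV^2-vs-E relationship that the
# combined simulation and experimental findings recover (R^2 = 0.98).
pwv1 = moens_korteweg_pwv(100e3, h, R)
pwv2 = moens_korteweg_pwv(200e3, h, R)
assert abs(pwv2**2 / pwv1**2 - 2.0) < 1e-12
```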
NASA Astrophysics Data System (ADS)
Guerdoux, Simon; Fourment, Lionel
2007-05-01
An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, and c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady-state welding, but also transient phases, are simulated, showing the good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady-state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. The flexibility and robustness of the model finally allow investigation of the influence of new tooling designs on the deposition process.
NASA Astrophysics Data System (ADS)
Celik, Cihangir
-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing the 10B content (20% of natural boron) in the natural boron of the borophosphosilicate glass (BPSG) layers conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B(n,alpha)7Li reaction products. Both of the particles produced are capable of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner from the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert memory-based intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers into semiconductor memory architectures. This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with neutron flux and memory supply voltage.
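The first-order capture-rate estimate behind such a detector can be sketched as a product of flux, absorber density, cross-section, and sensitive volume. All densities, volumes, and fluxes below are illustrative assumptions; only the ~3840 barn thermal 10B(n,alpha) cross-section is a standard tabulated value:

```python
# Back-of-envelope capture-rate estimate for a 10B-enriched BPSG layer.
# All numbers below are illustrative assumptions, not NISC design values.
SIGMA_B10 = 3840e-24                 # thermal 10B(n,alpha) cross-section [cm^2] (~3840 b)
n_b10 = 1.0e21                       # 10B number density in the glass [atoms/cm^3] (assumed)
layer_volume = 1e-4 * 1e-4 * 1e-4    # a (1 um)^3 sensitive node [cm^3] (assumed)
flux = 1.0e4                         # thermal-neutron flux [n/(cm^2 s)] (assumed)

captures_per_s = flux * n_b10 * SIGMA_B10 * layer_volume
print(f"captures per node per second: {captures_per_s:.2e}")
# A full tool like NISCSAT would additionally transport the alpha/7Li
# pair and test whether the deposited charge exceeds the node's
# critical charge before counting a capture as a soft error.
```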
Enhancement of the NMSU Channel Error Simulator to Provide User-Selectable Link Delays
NASA Technical Reports Server (NTRS)
Horan, Stephen; Wang, Ru-Hai
2000-01-01
This is the third in a continuing series of reports describing the development of the Space-to-Ground Link Simulator (SGLS) to be used for testing data transfers under simulated space channel conditions. The SGLS is based upon Virtual Instrument (VI) software techniques for managing the error generation, link data rate configuration, and, now, selection of the link delay value. In this report we detail the changes that needed to be made to the SGLS VI configuration to permit link delays to be added to the basic error generation and link data rate control capabilities. This was accomplished by modifying the rate-splitting VIs to include a buffer to hold the incoming data for the duration selected by the user to emulate the channel link delay. In sample tests of this configuration, the TCP/IP(sub ftp) service and the SCPS(sub fp) service were used to transmit 10-KB data files using both symmetric (both forward and return links set to 115200 bps) and asymmetric (forward link set at 2400 bps and return link set at 115200 bps) link configurations. Transmission times were recorded at bit error rates of 0 through 10(exp -5) to give an indication of the link performance. In these tests, we noted separate timings for the protocol setup time to initiate the file transfer and the variation in the actual file transfer time caused by channel errors. Both protocols showed performance similar to that seen earlier for the symmetric and asymmetric channels. This time, the measurements also showed that the delays in establishing the file transfer protocol could double the transmission time and need to be accounted for in mission planning. Both protocols also showed difficulty in transmitting large data files over large link delays. In these tests, there was no clear favorite between TCP/IP(sub ftp) and SCPS(sub fp). Based upon these tests, further testing is recommended to extend the results to different file transfer configurations.
Reilly, Sean; Grasha, Anthony F; Matthews, Gerald; Schafer, John
2003-08-01
The relationships between attentional variables and the information-processing demands of pharmacy dispensing tasks that contribute to difficulties in cognitive performance are not well known. In the present study, a psychological approach to medical dispensing errors, the cognitive-systems performance model of Grasha, was employed to evaluate the contributions of individual differences in attention and alterations in visual task information to simulated pharmacy-verification performance, perceived workload, and self-reported stress. 73 college-age volunteers completed a pretest battery containing psychological measures of automatic and controlled information processing and, one week later, spent 265 min. completing the final visual-inspection process for 200 simulated prescriptions, 27% of which contained artificially inserted errors. Evidence was obtained suggesting that both automatic and controlled processes underlie performance of a simulated pharmacy-verification task. Individual differences in controlled information processing were mildly predictive of detection accuracy, while, contrary to expectations, automatic processing scores did not produce significant relationships. Detection associated with experimental alterations in the font size (12-pt. vs 6-pt.) of critical prescription label information was partially in line with expectations from the cognitive-systems performance model, while additional visual enhancement via a magnification/illumination device yielded mixed results. Finally, reports of perceived workload (NASA Task Load Index) and specific patterns of self-reported stress (Dundee Stress State Questionnaire) were consistent with the three-tier behavioral framework offered recently by Matthews, Davies, Westerman, and Stammers for predicting behaviors along the automatic-controlled continuum.
Boundary element model for simulating sound propagation and source localization within the lungs.
Ozer, M B; Acikgoz, S; Royston, T J; Mansy, H A; Sandler, R H
2007-07-01
An acoustic boundary element (BE) model is used to simulate sound propagation in the lung parenchyma. It is computationally validated and then compared with experimental studies on lung phantom models. Parametric studies quantify the effect of different model parameters on the resulting acoustic field within the lung phantoms. The BE model is then coupled with a source localization algorithm to predict the position of an acoustic source within the phantom. Experimental studies validate the BE-based source localization algorithm and show that the same algorithm does not perform as well if the BE simulation is replaced with a free field assumption that neglects reflections and standing wave patterns created within the finite-size lung phantom. The BE model and source localization procedure are then applied to actual lung geometry taken from the National Library of Medicine's Visible Human Project. These numerical studies are in agreement with the studies on simpler geometry in that use of a BE model in place of the free field assumption alters the predicted acoustic field and source localization results. This work is relevant to the development of advanced auscultatory techniques that utilize multiple noninvasive sensors to construct acoustic images of sound generation and transmission to identify pathologies.
NASA Astrophysics Data System (ADS)
Warren, Craig; Giannopoulos, Antonios; Giannakis, Iraklis
2016-12-01
gprMax is open source software that simulates electromagnetic wave propagation, using the Finite-Difference Time-Domain (FDTD) method, for the numerical modelling of Ground Penetrating Radar (GPR). gprMax was originally developed in 1996, when numerical modelling using the FDTD method and, in general, the numerical modelling of GPR were in their infancy. Current computing resources offer the opportunity to build detailed and complex FDTD models of GPR to an extent that was not previously possible. To enable these types of simulations to be more easily realised, and also to facilitate the addition of more advanced features, gprMax has been redeveloped and significantly modernised. The original C-based code has been completely rewritten using a combination of the Python and Cython programming languages. Standard and robust file formats have been chosen for geometry and field output files. New advanced modelling features have been added, including: an unsplit implementation of higher-order Perfectly Matched Layers (PMLs) using a recursive integration approach; diagonally anisotropic materials; dispersive media using multi-pole Debye, Drude or Lorentz expressions; soil modelling using a semi-empirical formulation for dielectric properties and fractals for geometric characteristics; rough surface generation; and the ability to embed complex transducers and targets.
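The FDTD scheme underlying gprMax can be illustrated with a minimal 1D Yee leapfrog update in free space. The grid size, time step, and source shape below are arbitrary choices for the sketch, not gprMax defaults:

```python
import numpy as np

# Minimal 1D FDTD (Yee) sketch of the scheme family gprMax is built on.
# Free-space update with an additive source; grid/step sizes are arbitrary.
nz, nt = 400, 900
c = 3e8
dz = 1e-3
dt = 0.5 * dz / c                    # Courant factor 0.5 for stability
ez = np.zeros(nz)
hy = np.zeros(nz - 1)
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

for n in range(nt):
    # Leapfrog: H updated from the spatial derivative of E, then E from H.
    hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])
    ez[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1])
    # Differentiated-Gaussian source, roughly GPR-like in shape
    t = n * dt
    t0, spread = 60 * dt, 15 * dt
    ez[50] += -(t - t0) / spread * np.exp(-((t - t0) / spread) ** 2)

print(f"max |Ez| on the grid: {np.abs(ez).max():.3e}")
assert np.all(np.isfinite(ez))       # Courant-stable run stays bounded
```

A production code adds the features listed above (PML absorbing boundaries, dispersive and anisotropic materials, 3D geometry), but the core time-stepping loop has this shape.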
Forward and adjoint simulations of seismic wave propagation on fully unstructured hexahedral meshes
NASA Astrophysics Data System (ADS)
Peter, Daniel; Komatitsch, Dimitri; Luo, Yang; Martin, Roland; Le Goff, Nicolas; Casarotti, Emanuele; Le Loher, Pieyre; Magnoni, Federica; Liu, Qinya; Blitz, Céline; Nissen-Meyer, Tarje; Basini, Piero; Tromp, Jeroen
2011-08-01
We present forward and adjoint spectral-element simulations of coupled acoustic and (an)elastic seismic wave propagation on fully unstructured hexahedral meshes. Simulations benefit from recent advances in hexahedral meshing, load balancing and software optimization. Meshing may be accomplished using a mesh generation tool kit such as CUBIT, and load balancing is facilitated by graph partitioning based on the SCOTCH library. Coupling between fluid and solid regions is incorporated in a straightforward fashion using domain decomposition. Topography, bathymetry and Moho undulations may be readily included in the mesh, and physical dispersion and attenuation associated with anelasticity are accounted for using a series of standard linear solids. Finite-frequency Fréchet derivatives are calculated using adjoint methods in both fluid and solid domains. The software is benchmarked for a layercake model. We present various examples of fully unstructured meshes, snapshots of wavefields and finite-frequency kernels generated by Version 2.0 'Sesame' of our widely used open source spectral-element package SPECFEM3D.
NASA Astrophysics Data System (ADS)
Suvorov, Alexey; Cai, Yong Q.; Sutter, John P.; Chubar, Oleg
2014-09-01
Up to now, simulation of perfect crystal optics was not available in the "Synchrotron Radiation Workshop" (SRW) wave-optics computer code, hindering the accurate modelling of synchrotron radiation beamlines containing optical components with multiple-crystal arrangements, such as double-crystal monochromators and high-energy-resolution monochromators. A new module has been developed for SRW for calculating dynamical diffraction from a perfect crystal in the Bragg case. We demonstrate its successful application to the modelling of partially coherent undulator radiation propagating through the Inelastic X-ray Scattering (IXS) beamline of the National Synchrotron Light Source II (NSLS-II) at Brookhaven National Laboratory. The IXS beamline contains a double-crystal and a multiple-crystal high-energy-resolution monochromator, as well as complex optics such as compound refractive lenses and Kirkpatrick-Baez mirrors for X-ray beam transport and shaping, which makes it an excellent case for benchmarking the new functionalities of the updated SRW code. Because IXS is a photon-hungry experimental technique, this case study for the IXS beamline is particularly valuable, as it provides an accurate evaluation of the photon flux at the sample position, using the most advanced simulation methods and taking into account the parameters of the electron beam, the details of the undulator source, and the crystal optics.
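The basic geometry of such crystal monochromators follows Bragg's law. A quick sketch for a Si(111) double-crystal device at an arbitrary example energy (an assumption for illustration, not the IXS beamline's actual operating point):

```python
import math

# Bragg-angle sketch for a double-crystal monochromator, the kind of
# element the new SRW module models dynamically. Si(111) d-spacing from
# the lattice constant a_Si = 5.4309 A; the 9.1 keV photon energy is an
# arbitrary example.
a_si = 5.4309e-10                    # Si lattice constant [m]
d111 = a_si / math.sqrt(3)           # d-spacing of the (111) planes
E = 9.1e3 * 1.602176634e-19          # photon energy [J]
h, c = 6.62607015e-34, 2.99792458e8
lam = h * c / E                      # wavelength [m]
theta = math.degrees(math.asin(lam / (2 * d111)))   # Bragg's law
print(f"lambda = {lam*1e10:.4f} A, Bragg angle = {theta:.3f} deg")
```

Dynamical diffraction theory, which the new module implements, refines this kinematic picture with the Darwin width and refraction corrections of the perfect crystal.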
First-principles simulation for strong and ultra-short laser pulse propagation in dielectrics
NASA Astrophysics Data System (ADS)
Yabana, K.
2016-05-01
We develop a computational approach to the interaction between a strong laser pulse and a dielectric based on time-dependent density functional theory (TDDFT). In this approach, a key ingredient is a solver that simulates electron dynamics in a unit cell of the solid under a time-varying electric field, a time-dependent extension of the static band calculation. This calculation can be regarded as a constitutive relation, providing the macroscopic electric current for a given electric field applied to the medium. Combining the solver with the Maxwell equations for the electromagnetic fields of the laser pulse, we describe the propagation of laser pulses in dielectrics without any empirical parameters. An important output from the coupled Maxwell+TDDFT simulation is the energy transfer from the laser pulse to electrons in the medium. We have found an abrupt increase in the energy transfer at a certain laser intensity close to the damage threshold. We also estimate the damage threshold by comparing the transferred energy with the melting and cohesive energies. It shows reasonable agreement with measurements.
Difference in Simulated Low-Frequency Sound Propagation in the Various Species of Baleen Whale
NASA Astrophysics Data System (ADS)
Tsuchiya, Toshio; Naoi, Jun; Futa, Koji; Kikuchi, Toshiaki
2004-05-01
Whales found in the North Pacific are known to migrate over several thousand kilometers, from the Alaskan coast, where they feed heartily during the summer, to low-latitude waters, where they breed during the winter. It is therefore assumed that whales use the “deep sound channel” for their long-distance communication. The main objective of this study is to clarify the behaviors of baleen whales from the standpoint of acoustical oceanography. Hence, the authors investigated the possibility of long-distance communication in various species of baleen whales by simulating the long-distance propagation of their sound transmissions, applying mode theory to actual sound speed profiles at their respective transmission frequencies. As a result, the possibility of long-distance communication among blue whales using the deep sound channel was indicated. It was also indicated that communication among fin whales and blue whales can be made possible by their coming close to shore slopes such as those off the Island of Hawaii.
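The deep sound channel arises from a mid-depth minimum in the sound speed profile. Munk's canonical profile is a standard idealized stand-in for the measured profiles used in the study; its constants below are the usual textbook values, and the channel axis sits at the depth of minimum sound speed:

```python
import numpy as np

# Munk's canonical sound-speed profile. The deep sound (SOFAR) channel
# axis is the depth of minimum sound speed; low-frequency sound launched
# near the axis is refracted back toward it and propagates with low loss.
c1, eps, z1, B = 1500.0, 0.00737, 1300.0, 1300.0   # canonical constants

def munk_c(z):
    zt = 2.0 * (z - z1) / B
    return c1 * (1.0 + eps * (zt - 1.0 + np.exp(-zt)))

z = np.linspace(0.0, 5000.0, 5001)
c = munk_c(z)
axis = z[np.argmin(c)]
print(f"channel axis at {axis:.0f} m, c_min = {c.min():.1f} m/s")
assert abs(axis - z1) < 2.0   # minimum sits at z1 by construction
```

A mode-theory calculation like the one in the paper would then solve for the acoustic normal modes trapped by this minimum at the whales' transmission frequencies.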
Seidman, M.M.; Bredberg, A.; Seetharam, S.; Kraemer, K.H.
1987-07-01
Mutagenesis was studied at the DNA-sequence level in human fibroblast and lymphoid cells by use of a shuttle vector plasmid, pZ189, containing a suppressor tRNA marker gene. In a series of experiments, 62 plasmids were recovered that had two to six base substitutions in the 160-base-pair marker gene. Approximately 20-30% of the mutant plasmids that were recovered after passing ultraviolet-treated pZ189 through a repair-proficient human fibroblast line contained these multiple mutations. In contrast, passage of ultraviolet-treated pZ189 through an excision-repair-deficient (xeroderma pigmentosum) line yielded only 2% multiple base substitution mutants. Introducing a single-strand nick in otherwise unmodified pZ189 adjacent to the marker, followed by passage through the xeroderma pigmentosum cells, resulted in about 66% multiple base substitution mutants. The multiple mutations were found in a 160-base-pair region containing the marker gene but were rarely found in an adjacent 170-base-pair region. Passing ultraviolet-treated or nicked pZ189 through a repair-proficient human B-cell line also yielded multiple base substitution mutations in 20-33% of the mutant plasmids. An explanation for these multiple mutations is that they were generated by an error-prone polymerase while filling gaps. These mutations share many of the properties displayed by mutations in the immunoglobulin hypervariable regions.
Experimental simulations of beam propagation over large distances in a compact linear Paul trap
NASA Astrophysics Data System (ADS)
Gilson, Erik P.; Chung, Moses; Davidson, Ronald C.; Dorf, Mikhail; Efthimion, Philip C.; Majeski, Richard
2006-05-01
The Paul Trap Simulator Experiment (PTSX) is a compact laboratory experiment that places the physicist in the frame of reference of a long, charged-particle bunch coasting through a kilometers-long magnetic alternating-gradient (AG) transport system. The transverse dynamics of particles in both systems are described by similar equations, including nonlinear space-charge effects. The time-dependent voltages applied to the PTSX quadrupole electrodes are equivalent to the axially oscillating magnetic fields applied in the AG system. Experiments concerning the quiescent propagation of intense beams over large distances can then be performed in a compact and flexible facility. An understanding and characterization of the conditions required for quiescent beam transport, minimum halo particle generation, and precise beam compression and manipulation techniques are essential, as accelerators and transport systems demand that ever-increasing amounts of space charge be transported. Application areas include ion-beam-driven high energy density physics, high energy and nuclear physics accelerator systems, etc. One-component cesium plasmas have been trapped in PTSX that correspond to normalized beam intensities, ŝ = ωp²(0)/2ωq², of up to 80% of the space-charge limit at which self-electric forces balance the applied focusing force. Here, ωp(0) = [nb(0)eb²/(mbε0)]^(1/2) is the on-axis plasma frequency, and ωq is the smooth-focusing frequency associated with the applied focusing field. Plasmas in PTSX with values of ŝ that are 20% of the limit have been trapped for times corresponding to equivalent beam propagation over 10 km. Results are presented for experiments in which the amplitude of the quadrupole focusing lattice is modified as a function of time. It is found that instantaneous changes in lattice amplitude can be detrimental to transverse confinement of the charge bunch.
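The normalized intensity ŝ can be evaluated directly from the definitions in the abstract. The density and smooth-focusing frequency below are illustrative choices, not PTSX operating values:

```python
import math

# Normalized intensity s_hat = omega_p(0)^2 / (2 omega_q^2) for a
# one-component cesium plasma; the density and focusing frequency
# below are illustrative assumptions, not PTSX operating values.
eps0 = 8.854187817e-12
e = 1.602176634e-19
m_cs = 132.905 * 1.66053906660e-27    # cesium ion mass [kg]

def s_hat(n0, f_q):
    omega_p = math.sqrt(n0 * e**2 / (m_cs * eps0))   # on-axis plasma freq
    omega_q = 2 * math.pi * f_q                      # smooth-focusing freq
    return omega_p**2 / (2 * omega_q**2)

n0 = 4.0e12          # on-axis density [m^-3] (assumed)
f_q = 6.0e4          # smooth-focusing frequency [Hz] (assumed)
print(f"s_hat = {s_hat(n0, f_q):.3f}")   # space-charge limit is s_hat = 1
```

Since ωp² is proportional to density, ŝ scales linearly with nb(0) and inversely with the square of the focusing frequency, which is how the trap dials the effective beam intensity.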
NASA Astrophysics Data System (ADS)
Kumar, N.; Suanda, S. H.; Colosi, J. A.; Cai, D.; Haas, K. A.; Di Lorenzo, E.; Edwards, C. A.; Miller, A. J.; Feddersen, F.
2016-12-01
The inner- to outer-shelf region of the Santa Maria Basin, north of Point Conception, CA, is subject to strong semi-diurnal internal-tide variability, with implications for cross-shore tracer exchange. The generation hot-spots and the pathways by which internal tides propagate to this region are unknown and are investigated through a set of realistic, one-way nested, hydrostatic numerical ocean model (ROMS) simulations. Modeled temperature and velocity variability are compared to mid-shelf observations adjacent to Point Sal, CA, and agree well. Modeled semi-diurnal barotropic-to-baroclinic conversion occurs at multiple locations along the Santa Rosa-Cortes Ridge at water depths of 1000-2000 m. The modeled, depth-integrated, baroclinic energy fluxes originate from the generation region, are directed toward the Santa Maria Basin, and agree well with observed energy fluxes at 50 and 30 m water depth. The baroclinic tidal energy balance terms are separated into coherent (phase-locked to M2, S2 and N2) and incoherent terms. Energy fluxes are strongly coherent over the generation region, while in the shelf region incoherent and coherent energy fluxes are of similar magnitude. The increase in incoherent energy flux occurs within four internal-tide wavelengths. The modeled mid-water-column temperature spectrum is narrow-banded at the generation region; however, upon propagation into shallower waters the spectrum broadens, indicating energy transfer to lower and higher frequencies, potentially due to interaction with meso- and submesoscale processes with velocities up to 40% of the first-mode internal-tide phase speed. Finally, the length scales of modeled incoherent semi-diurnal baroclinic energy (10 km) are one order of magnitude smaller than those for coherent energy. Funded by the Office of Naval Research.
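The coherent/incoherent split used for the energy-flux terms can be sketched by least-squares harmonic analysis: fit sinusoids at the tidal frequencies and treat the residual as incoherent. A minimal single-constituent (M2) example on a synthetic series, which is an assumption for illustration only:

```python
import numpy as np

# Sketch of a coherent/incoherent decomposition: the coherent part is
# the least-squares harmonic fit at the tidal frequency (here just M2),
# and the incoherent part is the residual.
rng = np.random.default_rng(1)
dt_hr = 1.0
t = np.arange(0.0, 30 * 24, dt_hr)            # 30 days, hourly
f_m2 = 1.0 / 12.4206                          # M2 frequency [cph]
coherent_true = 1.5 * np.cos(2 * np.pi * f_m2 * t + 0.4)
signal = coherent_true + 0.5 * rng.standard_normal(t.size)

# Least-squares fit of a*cos + b*sin at the M2 frequency.
G = np.column_stack([np.cos(2 * np.pi * f_m2 * t),
                     np.sin(2 * np.pi * f_m2 * t)])
coeffs, *_ = np.linalg.lstsq(G, signal, rcond=None)
coherent_fit = G @ coeffs
incoherent = signal - coherent_fit

amp = float(np.hypot(coeffs[0], coeffs[1]))
print(f"fitted M2 amplitude: {amp:.2f} (true 1.50)")
assert abs(amp - 1.5) < 0.1
```

The full analysis repeats this with the S2 and N2 constituents included in the design matrix G and applies it to the energy-flux terms rather than a scalar series.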
Numerical Simulation of Stoneley Surface Wave Propagating Along Elastic-Elastic Interface
NASA Astrophysics Data System (ADS)
Korneev, V. A.; Zuev, M. A.; Petrov, P.; Magomedov, M.
2014-12-01
There are seven waves in the dynamic theory of elasticity that are named after their discoverers. In 1885, Lord Rayleigh published a paper describing a wave capable of propagating along the free surface of an elastic half-space. In 1911, Love considered a pure shear motion for a model of an elastic layer bounded by an elastic half-space. In 1917, Lamb discovered symmetric and asymmetric waves propagating in an isolated elastic plate. Stoneley (1924) found that a surface wave can propagate along an interface between two elastic half-spaces for some parameter combinations, and Scholte then showed in 1942 that, in a model where one of the half-spaces is fluid, the surface wave can exist for any parameters. The sixth wave is named after Biot (1956), and it describes a slow diffusive wave in a fluid-saturated poroelastic medium. Finally, in 1962 Krauklis found a dispersive fluid wave in a system of a fluid layer bounded by two elastic half-spaces. Remarkably, all but one of the named waves were found and predicted theoretically, as the results of mathematical and physical approaches to exploring Nature, and were later confirmed in experiments and used in various scientific and practical applications. The only wave that has been observed neither numerically nor experimentally until now is the Stoneley wave. A likely reason is the rather restricted combination of material parameters required for this wave to exist. Indeed, the ratio R of the shear velocities of a model must lie inside the interval (0.8742, 1), and the ratio of the Stoneley wave velocity to the larger shear wave velocity must lie in the interval (0.8742, R). To fill the gap, we performed a 2D finite-difference simulation for a model consisting of polystyrene (with velocities Vp1 = 2350 m/s, Vs1 = 1190 m/s, and density Rho1 = 1.06 g/cm³) and gold (with velocities Vp2 = 3240 m/s, Vs2 = 1200 m/s, and density Rho2 = 19.7 g/cm³). A corresponding root of the dispersion equation was found with the help of an original
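The shear-velocity window quoted above is easy to check for the chosen material pair:

```python
# Existence check for the solid-solid Stoneley interface wave using the
# shear-velocity ratio window (0.8742, 1) quoted in the abstract.
# Velocities as given for the polystyrene/gold model.
vs1, vs2 = 1190.0, 1200.0            # shear velocities [m/s]
R = min(vs1, vs2) / max(vs1, vs2)    # ratio of the shear velocities
print(f"shear-velocity ratio R = {R:.4f}")
assert 0.8742 < R < 1.0              # inside the admissible window
# The Stoneley phase velocity normalized by the larger shear velocity
# must then fall in the interval (0.8742, R) per the abstract.
```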
NASA Astrophysics Data System (ADS)
Gelman, David; Schwartz, Steven D.
2010-05-01
The recently developed quantum-classical method has been applied to the study of dissipative dynamics in multidimensional systems. The method is designed to treat many-body systems consisting of a low dimensional quantum part coupled to a classical bath. Assuming the approximate zeroth order evolution rule, the corrections to the quantum propagator are defined in terms of the total Hamiltonian and the zeroth order propagator. Then the corrections are taken to the classical limit by introducing the frozen Gaussian approximation for the bath degrees of freedom. The evolution of the primary part is governed by the corrected propagator yielding the exact quantum dynamics. The method has been tested on two model systems coupled to a harmonic bath: (i) an anharmonic (Morse) oscillator and (ii) a double-well potential. The simulations have been performed at zero temperature. The results have been compared to the exact quantum simulations using the surrogate Hamiltonian approach.
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.
Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo
2013-11-13
Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the
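The attenuating effect of classical additive measurement error on a Poisson time-series coefficient, the phenomenon such simulation studies quantify, can be sketched as follows. The effect size, error variance, and series length are illustrative assumptions, not the study's design values:

```python
import numpy as np

# Sketch: classical additive measurement error in the exposure
# attenuates the estimated Poisson log-relative-risk.
rng = np.random.default_rng(42)
n, beta0, beta1 = 3 * 365, 2.0, 0.3
x = rng.standard_normal(n)                  # "true" daily pollution (scaled)
y = rng.poisson(np.exp(beta0 + beta1 * x))  # daily death counts
w = x + 1.0 * rng.standard_normal(n)        # error-prone monitor version

def poisson_fit(z, y, iters=25):
    """Newton-Raphson fit of log E[y] = b0 + b1*z (Poisson regression)."""
    X = np.column_stack([np.ones_like(z), z])
    b = np.array([np.log(y.mean()), 0.0])   # start near the intercept
    for _ in range(iters):
        mu = np.exp(X @ b)
        # Newton step: (X'WX)^-1 X'(y - mu) with W = diag(mu)
        b += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return b

b_true = poisson_fit(x, y)
b_err = poisson_fit(w, y)
print(f"beta1 with true exposure: {b_true[1]:.3f}")
print(f"beta1 with noisy monitor: {b_err[1]:.3f}")
assert b_err[1] < b_true[1]                 # classical attenuation
```

With error variance equal to the exposure variance, the monitor-based coefficient is attenuated toward roughly half the true value, the kind of bias the comparison of sparse monitor data against complete CTM output is designed to measure.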