Science.gov

Sample records for propagation error simulations

  1. Numerical study of error propagation in Monte Carlo depletion simulations

    SciTech Connect

    Wyant, T.; Petrovic, B.

    2012-07-01

    Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs were used to investigate the true and apparent variance in k_eff, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters. (authors)
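
    As a rough illustration of the replica-run comparison described above: the "apparent" uncertainty is what a single run reports about itself, while the "true" variance is estimated from the spread across seed-changed replicas. The sketch below uses numpy with purely hypothetical numbers, not results from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical results of 19 replica depletion runs that differ only in the
# initial random number seed.  Each run reports k_eff and its own (within-run)
# statistical standard deviation at a given burnup step.
n_replicas = 19
k_eff = rng.normal(1.0025, 4e-4, size=n_replicas)      # replica estimates
sigma_within = np.full(n_replicas, 2.5e-4)              # code-reported sigmas

# "Apparent" variance: what a single run claims about its own uncertainty.
apparent_var = np.mean(sigma_within**2)

# "True" variance: the observed spread across independent replicas, which also
# contains error propagated from earlier depletion steps.
true_var = np.var(k_eff, ddof=1)

print(f"apparent std = {np.sqrt(apparent_var):.2e}")
print(f"true std     = {np.sqrt(true_var):.2e}")
print(f"ratio (true/apparent) = {np.sqrt(true_var / apparent_var):.2f}")
```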

  2. Simulation of radar rainfall errors and their propagation into rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Aghakouchak, A.; Habib, E.

    2008-05-01

    Compared with rain gauge measurements, radar rainfall data provide higher spatial and temporal resolution. However, radar data obtained from reflectivity patterns are subject to various errors, such as errors in the Z-R relationship, the vertical profile of reflectivity, spatial and temporal sampling, etc. Characterization of such uncertainties in radar data and their effects on hydrologic simulations (e.g., streamflow estimation) is a challenging issue. This study aims to analyze radar rainfall error characteristics empirically to gain information on the properties of the random error representativeness and its temporal and spatial dependency. To empirically analyze error characteristics, high-resolution and accurate rain gauge measurements are required. The Goodwin Creek watershed, located in the northern part of Mississippi, is selected for this study due to the availability of a dense rain gauge network. A total of 30 rain gauge measurement stations within the Goodwin Creek watershed and NWS Level II radar reflectivity data obtained from the WSR-88D Memphis radar station, with a temporal resolution of 5 min and a spatial resolution of 1 km², are used in this study. Comparisons of radar data and rain gauge measurements are used to estimate the overall bias, statistical characteristics, and spatio-temporal dependency of the radar rainfall error fields. This information is then used to simulate realizations of radar error patterns with multiple correlated variables using the Monte Carlo method and the Cholesky decomposition. The generated error fields are then imposed on radar rainfall fields to obtain statistical realizations of input rainfall fields. Each simulated realization is then fed as input to a distributed, physically based hydrological model, resulting in an ensemble of predicted runoff hydrographs. The study analyzes the propagation of radar errors into the simulation of different rainfall-runoff processes such as streamflow, soil moisture, infiltration, and overland flooding.
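
    A minimal sketch of the Cholesky-based step mentioned above: draw a spatially correlated error field from an assumed exponential covariance and impose it multiplicatively on a radar rainfall field. The correlation length, error magnitude, and rainfall values are hypothetical placeholders, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid of radar pixels (kept tiny so the full covariance matrix is manageable).
nx = ny = 12
x, y = np.meshgrid(np.arange(nx), np.arange(ny))
pts = np.column_stack([x.ravel(), y.ravel()])

# Exponential spatial correlation of the multiplicative radar error field.
corr_length = 4.0      # pixels (hypothetical)
sigma_err = 0.3        # standard deviation of the log-error (hypothetical)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = sigma_err**2 * np.exp(-d / corr_length)

# Cholesky factor turns uncorrelated normal draws into a correlated field.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(cov)))
error_field = (L @ rng.standard_normal(len(cov))).reshape(ny, nx)

# Impose the simulated error on a radar rainfall field (mm/h, hypothetical).
radar_rain = np.full((ny, nx), 5.0)
perturbed_rain = radar_rain * np.exp(error_field)   # multiplicative error model
print(perturbed_rain.round(2))
```

    Repeating the draw many times yields an ensemble of perturbed rainfall fields that can each be fed to a hydrological model, as the study does.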

  3. Error Propagation in a System Model

    NASA Technical Reports Server (NTRS)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.

  4. NLO error propagation exercise: statistical results

    SciTech Connect

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) above total amounts of special nuclear material, for example, uranium or ²³⁵U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ²³⁵U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio, from April 1 to July 1, 1983, in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ²³⁵U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
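
    The "variance approximation by Taylor series expansion" element can be illustrated with a minimal sketch for a single hypothetical measurement model (uranium mass from net weight, concentration, and enrichment). This is a generic delta-method example, not the exercise's FORTRAN code or Jaech's cumulation scheme; all values are invented.

```python
import numpy as np

# Hypothetical single-measurement model: uranium mass = net weight * U concentration,
# U-235 mass = uranium mass * enrichment.  Error sources assumed uncorrelated.
W, sig_W = 500.0, 0.5        # kg net weight and its standard deviation
c, sig_c = 0.60, 0.006       # uranium weight fraction
e, sig_e = 0.0300, 0.0003    # U-235 enrichment (weight fraction)

U = W * c                    # kg uranium
U235 = W * c * e             # kg U-235

# First-order Taylor-series (delta-method) variance propagation:
# var(f) ~ sum_i (df/dx_i)^2 * var(x_i) for uncorrelated x_i.
var_U = (c * sig_W)**2 + (W * sig_c)**2
var_U235 = (c * e * sig_W)**2 + (W * e * sig_c)**2 + (W * c * sig_e)**2

print(f"U    = {U:.2f} +/- {np.sqrt(var_U):.2f} kg")
print(f"U235 = {U235:.4f} +/- {np.sqrt(var_U235):.4f} kg")
```

    Summing such per-transaction variances over a material balance period (with correlated, systematic components handled separately) is what yields the LEID.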

  5. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
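
    The article's point can be demonstrated with a small numpy sketch (synthetic data, hypothetical values): the standard error of a predicted quantity can be obtained either by analytic propagation from the covariance matrix or, equivalently, by reparameterizing the fit so the target quantity is itself an adjustable parameter whose SE appears directly on the covariance diagonal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic straight-line data (hypothetical).
x = np.linspace(0, 10, 12)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, x.size)
sigma = 0.2
x0 = 7.3                       # point at which we want the predicted y and its SE

def wls(X, y, sigma):
    """Weighted linear least squares: returns parameters and covariance matrix."""
    W = np.eye(len(y)) / sigma**2
    cov = np.linalg.inv(X.T @ W @ X)
    beta = cov @ X.T @ W @ y
    return beta, cov

# Ordinary parameterization y = a + b*x, then propagate to y(x0) analytically.
X1 = np.column_stack([np.ones_like(x), x])
(a, b), cov1 = wls(X1, y, sigma)
g = np.array([1.0, x0])                      # gradient of y(x0) w.r.t. (a, b)
se_prop = np.sqrt(g @ cov1 @ g)

# Reparameterized fit y = y0 + b*(x - x0): y0 is now an adjustable parameter,
# so its SE is just the square root of a diagonal element of the covariance matrix.
X2 = np.column_stack([np.ones_like(x), x - x0])
(y0, b2), cov2 = wls(X2, y, sigma)
se_direct = np.sqrt(cov2[0, 0])

print(f"propagated SE   = {se_prop:.4f}")
print(f"refit-direct SE = {se_direct:.4f}")   # the two agree for this linear case
```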

  6. Scout trajectory error propagation computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1982-01-01

    Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consist of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in a Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in the computer program Scout Trajectory Error Propagation (STEP), which is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy estimates for the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
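
    A minimal sketch of the covariance workflow described above: form a sample covariance matrix from ~50 burnout error sets and propagate it through a linearized mapping, P1 = Phi P0 Phiᵀ. The error magnitudes and the transition matrix Phi are hypothetical stand-ins, not values from STEP.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical flight-experience errors at stage burnout: rows are flights,
# columns are (altitude [m], velocity [m/s], flight-path angle [rad]) errors.
n_flights, n_params = 50, 3
errors = rng.multivariate_normal(
    mean=np.zeros(n_params),
    cov=np.diag([900.0**2, 3.0**2, 0.002**2]),
    size=n_flights,
)

# Sample covariance matrix of the burnout errors.
P0 = np.cov(errors, rowvar=False)

# Propagate the covariance through a (hypothetical) linearized mapping Phi
# from burnout-state errors to errors at a later trajectory time.
Phi = np.array([
    [1.0, 60.0, 0.0],     # altitude error grows with velocity error over 60 s
    [0.0, 1.0,  0.0],
    [0.0, 0.0,  1.0],
])
P1 = Phi @ P0 @ Phi.T

print("burnout std devs   :", np.sqrt(np.diag(P0)).round(3))
print("propagated std devs:", np.sqrt(np.diag(P1)).round(3))
```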

  7. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
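
    To make the error-detection part concrete, here is a generic bitwise CRC-16 using the CCITT polynomial, the polynomial commonly associated with the CCSDS recommendation; this sketch is only an illustration and is not claimed to be bit-exact to the CCSDS specification. The frame contents and the injected bit error are hypothetical.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
tx_crc = crc16_ccitt(frame)

# Simulate a single bit error introduced on the channel.
corrupted = bytearray(frame)
corrupted[3] ^= 0x04
rx_crc = crc16_ccitt(bytes(corrupted))

print(f"transmitted CRC: {tx_crc:04X}")
print(f"received CRC:    {rx_crc:04X}  -> error detected: {rx_crc != tx_crc}")
```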

  8. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  9. Observations concerning licensee practices in error propagation

    SciTech Connect

    Lumb, R.F.; Messinger, M.; Tingey, F.H.

    1983-07-01

    This paper describes some of NUSAC's observations concerning licensee error propagation practice. NUSAC's findings are based on the results of work performed for the NRC whereby NUSAC visited seven nuclear fuel fabrication facilities, four processing low enriched uranium (LEU) and three processing high enriched uranium (HEU), in order to develop a detailed evaluation of the processing of material accounting data by those facilities. Discussed is the diversity that was found to exist across the industry in material accounting data accumulation; in error propagation methodology, for both inventory difference (ID) and shipper/receiver difference (SRD); as well as in measurement error modeling and estimation. Problems that have been identified are, in general, common to the industry. The significance of nonmeasurement effects on the variance of ID is discussed. This paper will also outline a four-phase program that can be implemented to improve the existing situation.

  10. An error analysis of higher-order finite element methods: Effect of degenerate coupling on simulation of elastic wave propagation

    NASA Astrophysics Data System (ADS)

    Hasegawa, Kei; Geller, Robert J.; Hirabayashi, Nobuyasu

    2016-02-01

    We present a theoretical analysis of the error of synthetic seismograms computed by higher-order finite element methods (ho-FEMs). We show the existence of a previously unrecognized type of error due to degenerate coupling between waves with the same frequency but different wavenumbers. These results are confirmed by simple numerical experiments using the spectral element method (SEM) as an example of ho-FEMs. Errors of the type found by this study may occur generally in applications of ho-FEMs.

  11. Observation error propagation on video meteor orbit determination

    NASA Astrophysics Data System (ADS)

    SonotaCo

    2016-04-01

    A new radiant direction error computation method was tested on SonotaCo Network meteor observation data. It uses the single-station observation error, obtained from reference star measurements and trajectory linearity measurements on each video, as its source error value, and propagates this to the radiant and orbit parameter errors via Monte Carlo simulation. The resulting error values on a sample data set showed a reasonable error distribution that makes accuracy-based selection feasible. A sample set of orbits selected by this method revealed a sharper concentration of shower meteor radiants than we have ever seen before. The simultaneously observed meteor data sets published by the SonotaCo Network will be revised to include this error value on each record and will be publicly available, along with the computation program, in the near future.

  12. Error Propagation Analysis for Quantitative Intracellular Metabolomics

    PubMed Central

    Tillack, Jana; Paczia, Nicole; Nöh, Katharina; Wiechert, Wolfgang; Noack, Stephan

    2012-01-01

    Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of intracellular metabolites, including error propagation during metabolome sample processing. Focusing on one specific protocol, we comprehensively investigate all currently known and accessible factors that ultimately impact the accuracy of intracellular metabolite concentration data. All intermediate steps are modeled, and their uncertainty with respect to the final concentration data is rigorously quantified. Finally, on the basis of a comprehensive metabolome dataset of Corynebacterium glutamicum, an integrated error propagation analysis for all parts of the model is conducted, and the most critical steps for intracellular metabolite quantification are detected. PMID:24957773

  13. NLO error propagation exercise data collection system

    SciTech Connect

    Keisch, B.; Bieber, A.M. Jr.

    1983-01-01

    A combined automated and manual system for data collection is described. The system is suitable for collecting, storing, and retrieving data related to nuclear material control at a bulk processing facility. The system, which was applied to the NLO operated Feed Materials Production Center, was successfully demonstrated for a selected portion of the facility. The instrumentation consisted of off-the-shelf commercial equipment and provided timeliness, convenience, and efficiency in providing information for generating a material balance and performing error propagation on a sound statistical basis.

  14. Error propagation in energetic carrying capacity models

    USGS Publications Warehouse

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., the density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.

  15. Error Propagation Made Easy--Or at Least Easier

    ERIC Educational Resources Information Center

    Gardenier, George H.; Gui, Feng; Demas, James N.

    2011-01-01

    Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
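
    The Monte Carlo half of the approach described above is easy to reproduce outside Mathcad or Excel. Below is a generic numpy sketch that propagates errors through a formula of four variables; the formula, means, and standard deviations are hypothetical placeholders, not the article's worked example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical formula of four measured variables.
def f(a, b, c, d):
    return a * b / (c + d)

# Measured means and standard deviations (all hypothetical).
means = dict(a=2.00, b=3.50, c=1.20, d=0.80)
sds   = dict(a=0.05, b=0.10, c=0.03, d=0.02)

# Monte Carlo propagation: sample each input, evaluate the formula, inspect the spread.
n = 100_000
samples = {k: rng.normal(means[k], sds[k], n) for k in means}
y = f(**samples)

print(f"f       = {f(**means):.4f}")
print(f"MC mean = {y.mean():.4f}")
print(f"MC sd   = {y.std(ddof=1):.4f}")   # the propagated error
```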

  16. Error propagation in a digital avionic mini processor. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lomelino, Dale L.

    1987-01-01

    A methodology is introduced and demonstrated for the study of error propagation from the gate to the chip level. The importance of understanding error propagation derives from its close tie with system activity. The target system is the BDX-930, a digital avionic multiprocessor. The simulator used was developed at NASA-Langley and is a gate-level, event-driven, unit-delay, software logic simulator. The approach is highly structured and easily adapted to other systems. The analysis shows the nature and extent of the dependency of error propagation on microinstruction type, assembly-level instruction, and fault-free gate activity.

  17. Techniques for containing error propagation in compression/decompression schemes

    NASA Technical Reports Server (NTRS)

    Kobler, Ben

    1991-01-01

    Data compression has the potential for increasing the risk of data loss. It can also cause bit error propagation, resulting in catastrophic failures. There are a number of approaches possible for containing error propagation due to data compression: (1) data retransmission; (2) data interpolation; (3) error containment; and (4) error correction. The most fruitful techniques will be ones where error containment and error correction are integrated with data compression to provide optimal performance for both. The error containment characteristics of existing compression schemes should be analyzed for their behavior under different data and error conditions. The error tolerance requirements of different data sets need to be understood, so guidelines can then be developed for matching error requirements to suitable compression algorithms.

  18. CADNA: a library for estimating round-off error propagation

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie

    2008-06-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.

    Program summary
    Program title: CADNA
    Catalogue identifier: AEAT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 53 420
    No. of bytes in distributed program, including test data, etc.: 566 495
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
    Operating system: LINUX, UNIX
    Classification: 4.14, 6.5, 20
    Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
    Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each
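
    CADNA itself is a Fortran library; as a crude illustration of the Discrete Stochastic Arithmetic idea (run a computation several times under randomly perturbed rounding and estimate how many significant digits the results share), here is a toy Python emulation. The perturbation model below is a stand-in for a random rounding mode, not CADNA's implementation, and the input values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def perturbed_sum(values, eps=2**-52):
    """Sum with a tiny random relative perturbation applied to every partial
    result, crudely emulating a random rounding mode."""
    total = 0.0
    for v in values:
        total = (total + v) * (1.0 + eps * rng.choice([-1.0, 1.0]))
    return total

# An ill-conditioned cancellation: large alternating terms with a small true sum.
values = np.array([1e16, 3.14159, -1e16, 2.71828])

runs = np.array([perturbed_sum(values) for _ in range(3)])
mean = runs.mean()
spread = runs.std(ddof=1)

# Rough estimate of the number of exact significant decimal digits in the result.
digits = 15.0 if spread == 0 else min(15.0, max(0.0, np.log10(abs(mean) / spread)))
print("results:", runs)
print(f"estimated significant digits: {digits:.1f}")
```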

  19. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with an interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
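
    The role of interleaving can be shown with a minimal sketch: a block interleaver of depth 5 spreads a burst of channel errors so that no single codeword sees more than one corrupted symbol. The symbol values and codeword length are hypothetical, and no Reed-Solomon decoding is implemented here.

```python
import numpy as np

def interleave(symbols, depth):
    """Block interleaver: write row-wise into a depth-row array, read column-wise."""
    cols = len(symbols) // depth
    return np.asarray(symbols).reshape(depth, cols).T.ravel()

def deinterleave(symbols, depth):
    cols = len(symbols) // depth
    return np.asarray(symbols).reshape(cols, depth).T.ravel()

depth = 5                                     # interleave depth, as in the record above
codewords = np.repeat(np.arange(depth), 8)    # 5 codewords of 8 symbols each

tx = interleave(codewords, depth)

# A burst of 5 consecutive channel errors hits the interleaved stream ...
rx = tx.copy()
rx[10:15] = -1

# ... but after deinterleaving the corrupted symbols are spread across codewords,
# at most one per codeword, which an RS decoder could easily correct.
recovered = deinterleave(rx, depth)
for cw in range(depth):
    n_err = np.sum(recovered[cw * 8:(cw + 1) * 8] == -1)
    print(f"codeword {cw}: {n_err} corrupted symbol(s)")
```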

  20. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with an interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  1. Detecting and preventing error propagation via competitive learning.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2013-05-01

    Semisupervised learning is a machine learning approach which is able to employ both labeled and unlabeled samples in the training process. It is an important mechanism for autonomous systems due to its ability to exploit already acquired information while exploring new knowledge in the learning space at the same time. In these cases, the reliability of the labels is a crucial factor, because mislabeled samples may propagate wrong labels to a portion of or even the entire data set. This paper addresses the error propagation problem originating from these mislabeled samples by presenting a mechanism embedded in a network-based (graph-based) semisupervised learning method. Such a procedure is based on a combined random-preferential walk of particles in a network constructed from the input data set. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. Computer simulations conducted on synthetic and real-world data sets reveal the effectiveness of the model. PMID:23200192

  2. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  3. Uncertainty and error in computational simulations

    SciTech Connect

    Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.

    1997-10-01

    The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.

  4. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    SciTech Connect

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.

  5. Position error propagation in the simplex strapdown navigation system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed error containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.

  6. An error analysis of higher-order finite-element methods: effect of degenerate coupling on simulation of elastic wave propagation

    NASA Astrophysics Data System (ADS)

    Hasegawa, Kei; Geller, Robert J.; Hirabayashi, Nobuyasu

    2016-06-01

    We present a theoretical analysis of the error of synthetic seismograms computed by higher-order finite-element methods (ho-FEMs). We show the existence of a previously unrecognized type of error due to degenerate coupling between waves with the same frequency but different wavenumbers. These results are confirmed by simple numerical experiments using the spectral element method as an example of ho-FEMs. Errors of the type found by this study may occur generally in applications of ho-FEMs.

  7. Error propagation in PIV-based Poisson pressure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2015-11-01

    After more than 20 years of development, PIV has become a standard non-invasive velocity field measurement technique, and promises to make PIV-based pressure calculations possible. However, the errors inherent in PIV velocity fields propagate through integration and contaminate the calculated pressure field. We propose an analysis that shows how the uncertainties in the velocity field propagate to the pressure field through the Poisson equation. First, we model the dynamics of error propagation using boundary value problems (BVPs). Next, the L2-norm and/or L∞-norm is used as the measure of error in the velocity and pressure fields. Finally, using analysis techniques including the maximum principle and the Poincaré inequality, the error in the pressure field can be bounded by the error level of the data by considering the well-posedness of the BVPs. Specifically, we examine if and how the error in the pressure field depends continuously on the BVP data. Factors such as flow field geometry, boundary conditions, and velocity field noise levels are discussed analytically.
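
    A minimal numerical sketch of the mechanism discussed above: solve a discrete Poisson boundary value problem once with a clean source term and once with a noise-contaminated one (standing in for PIV-derived data), and compare the error norms. The grid, source field, and noise level are hypothetical, and this is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Small 2-D grid with homogeneous Dirichlet boundary conditions.
n = 20                    # interior points per side
h = 1.0 / (n + 1)
idx = lambda i, j: i * n + j

# Assemble the standard 5-point Laplacian (dense, since the grid is tiny).
A = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        k = idx(i, j)
        A[k, k] = -4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                A[k, idx(ii, jj)] = 1.0
A /= h**2

# A smooth "true" source term (stand-in for the right-hand side computed from
# an error-free velocity field).
x = np.arange(1, n + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
f_true = np.sin(np.pi * X) * np.sin(np.pi * Y)

# Noisy source term, as would result from PIV measurement error.
noise = rng.normal(0.0, 0.05, f_true.shape)
f_noisy = f_true + noise

p_true = np.linalg.solve(A, f_true.ravel())
p_noisy = np.linalg.solve(A, f_noisy.ravel())

err_data = np.linalg.norm(noise) * h                 # L2 norm of the data error
err_pressure = np.linalg.norm(p_noisy - p_true) * h  # induced L2 pressure error
print(f"L2 error in source field : {err_data:.3e}")
print(f"L2 error in pressure     : {err_pressure:.3e}")
```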

  8. Inductively Coupled Plasma Mass Spectrometry Uranium Error Propagation

    SciTech Connect

    Hickman, D P; Maclean, S; Shepley, D; Shaw, R K

    2001-07-01

    The Hazards Control Department at Lawrence Livermore National Laboratory (LLNL) uses Inductively Coupled Plasma Mass Spectrometer (ICP/MS) technology to analyze uranium in urine. The ICP/MS used by the Hazards Control Department is a Perkin-Elmer Elan 6000 ICP/MS. The Department of Energy Laboratory Accreditation Program requires that the total error be assessed for bioassay measurements. A previous evaluation of the errors associated with the ICP/MS measurement of uranium demonstrated a ±9.6% error in the range of 0.01 to 0.02 µg/L. However, the propagation of total error for concentrations above and below this level has heretofore been undetermined. This document is an evaluation of the errors associated with the current LLNL ICP/MS method for a more expanded range of uranium concentrations.
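
    The standard way such a total error is assembled is by combining independent relative error components in quadrature. The sketch below shows only that generic root-sum-square step; the component names, magnitudes, and the example concentration are invented and do not reproduce the LLNL evaluation.

```python
import numpy as np

# Hypothetical relative standard uncertainties (1-sigma) for one urine analysis.
components = {
    "counting statistics": 0.060,   # typically worsens at low concentration
    "calibration curve":   0.050,
    "internal standard":   0.030,
    "dilution/volumetric": 0.020,
}

# For independent multiplicative error sources, relative variances add,
# so the total relative error is the root-sum-square of the components.
total_rel = np.sqrt(sum(v**2 for v in components.values()))

concentration = 0.015   # ug/L, hypothetical result
print(f"total relative error: {100 * total_rel:.1f} %")
print(f"result: {concentration:.4f} +/- {concentration * total_rel:.4f} ug/L")
```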

  9. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  10. Error and efficiency of simulated tempering simulations

    PubMed Central

    Rosta, Edina; Hummer, Gerhard

    2010-01-01

    We derive simple analytical expressions for the error and computational efficiency of simulated tempering (ST) simulations. The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. An extension to the multistate case is described. We show that the relative gain in efficiency of ST simulations over regular molecular dynamics (MD) or Monte Carlo (MC) simulations is given by the ratio of their reactive fluxes, i.e., the number of transitions between the two states summed over all ST temperatures divided by the number of transitions at the single temperature of the MD or MC simulation. This relation for the efficiency is derived for the limit in which changes in the ST temperature are fast compared to the two-state transitions. In this limit, ST is most efficient. Our expression for the maximum efficiency gain of ST simulations is essentially identical to the corresponding expression derived by us for replica exchange MD and MC simulations [E. Rosta and G. Hummer, J. Chem. Phys. 131, 165102 (2009)] on a different route. We find quantitative agreement between predicted and observed efficiency gains in a test against ST and replica exchange MC simulations of a two-dimensional Ising model. Based on the efficiency formula, we provide recommendations for the optimal choice of ST simulation parameters, in particular, the range and number of temperatures, and the frequency of attempted temperature changes. PMID:20095723

  11. Error and efficiency of simulated tempering simulations.

    PubMed

    Rosta, Edina; Hummer, Gerhard

    2010-01-21

    We derive simple analytical expressions for the error and computational efficiency of simulated tempering (ST) simulations. The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. An extension to the multistate case is described. We show that the relative gain in efficiency of ST simulations over regular molecular dynamics (MD) or Monte Carlo (MC) simulations is given by the ratio of their reactive fluxes, i.e., the number of transitions between the two states summed over all ST temperatures divided by the number of transitions at the single temperature of the MD or MC simulation. This relation for the efficiency is derived for the limit in which changes in the ST temperature are fast compared to the two-state transitions. In this limit, ST is most efficient. Our expression for the maximum efficiency gain of ST simulations is essentially identical to the corresponding expression derived by us for replica exchange MD and MC simulations [E. Rosta and G. Hummer, J. Chem. Phys. 131, 165102 (2009)] on a different route. We find quantitative agreement between predicted and observed efficiency gains in a test against ST and replica exchange MC simulations of a two-dimensional Ising model. Based on the efficiency formula, we provide recommendations for the optimal choice of ST simulation parameters, in particular, the range and number of temperatures, and the frequency of attempted temperature changes. PMID:20095723

  12. A neural fuzzy controller learning by fuzzy error propagation

    NASA Technical Reports Server (NTRS)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

    In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing only the global state and the fuzzy error.

  13. Simulation of wave propagation in three-dimensional random media

    NASA Technical Reports Server (NTRS)

    Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1993-01-01

    A quantitative error analysis for simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
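
    The simulation technique discussed here, propagation through a stack of two-dimensional phase screens, can be sketched compactly with the split-step Fourier (angular spectrum) method. The sketch below uses a Gaussian-correlated screen as a stand-in for a power-law turbulence spectrum, and all grid and propagation parameters are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Grid and optical parameters (all hypothetical, chosen only for illustration).
N = 256                     # grid points per side
L = 0.5                     # grid side length [m]
dx = L / N
wavelength = 1.0e-6         # [m]
dz = 500.0                  # screen separation [m]
n_screens = 10

# Spatial frequencies for the paraxial (Fresnel) propagator between screens.
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
propagator = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))

def random_screen(strength=0.5, corr_pixels=8):
    """Gaussian-correlated random phase screen (a stand-in for a power-law
    turbulence spectrum; not a Kolmogorov screen)."""
    white = rng.standard_normal((N, N))
    kernel = np.exp(-(FX**2 + FY**2) * (corr_pixels * dx)**2)
    screen = np.fft.ifft2(np.fft.fft2(white) * kernel).real
    return strength * screen / screen.std()

# Plane-wave input, then alternate phase screen and free-space propagation.
U = np.ones((N, N), dtype=complex)
for _ in range(n_screens):
    U *= np.exp(1j * random_screen())
    U = np.fft.ifft2(np.fft.fft2(U) * propagator)

I = np.abs(U)**2
print(f"scintillation index (normalized intensity variance): {I.var() / I.mean()**2:.3f}")
```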

  14. Cross Section Sensitivity and Propagated Errors in HZE Exposures

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.

    2005-01-01

    It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interaction with shielding material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross-sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate the dose equivalent during solar minimum, with units (cSv/yr), associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross-sections. We do this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross-sections. Numerical differentiation with respect to the cross-sections will be evaluated in a broad class of materials including polyethylene, aluminum and copper. We will identify the most important cross-sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.

  15. Phase unwrapping algorithms in laser propagation simulation

    NASA Astrophysics Data System (ADS)

    Du, Rui; Yang, Lijia

    2013-08-01

    Current simulations of laser propagation in the atmosphere usually have to deal with beams in strong turbulence. Using the Fourier transform to simulate the transmission may lose part of the information and leaves the phase of the beam, stored as a 2-D array, wrapped modulo 2π. An effective unwrapping algorithm is needed for continuous results and faster calculation. The unwrapping algorithms used in atmospheric propagation are similar to those used in radar or 3-D surface reconstruction, but not the same. In this article, three classic unwrapping algorithms, the block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity (FMD) algorithms, are tried in a wave-front reconstruction simulation. Each algorithm is tested 100 times under 6 conditions, covering low (64x64), medium (128x128), and high (256x256) resolution phase arrays, with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable in the low-resolution environment without noise. The MCUT algorithm is more accurate, though it becomes slower as the array resolution increases, and it is sensitive to noise, resulting in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory in the calculation. Finally, the article presents a new algorithm based on an Active-On-Vertex (AOV) network, which builds a logical graph to cut the search space and then finds a minimal-discontinuity solution. The AOV algorithm is faster than MCUT in dealing with high-resolution phase arrays and matches the accuracy of FMD in the tests.

  16. Relationships between GPS-signal propagation errors and EISCAT observations

    NASA Astrophysics Data System (ADS)

    Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.

    1996-12-01

    When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges -20° to 40°E and 32.5° to 70°N in longitude and latitude, respectively. The derived TEC maps over Europe contribute to the study of horizontal coupling and transport processes during significant ionospheric events. Due to their comprehensive information about the high-latitude ionosphere, EISCAT observations may help to study the influence of ionospheric phenomena upon propagation errors in GPS navigation systems. Since there are still some accuracy limiting problems to be solved in TEC determination using GPS, data comparison of TEC with vertical electron density profiles derived from EISCAT observations is valuable to enhance the accuracy of propagation-error estimations. This is evident both for absolute TEC calibration as well as for the conversion of ray-path-related observations to vertical TEC. The combination of EISCAT data and GPS-derived TEC data enables a better understanding of large-scale ionospheric processes. Acknowledgements. This work has been supported by the UK Particle-Physics and Astronomy Research Council. The assistance of the director and staff of the EISCAT Scientific Association, the staff of the Norsk Polarinstitutt

  17. Optimal control of quaternion propagation errors in spacecraft navigation

    NASA Technical Reports Server (NTRS)

    Vathsal, S.

    1986-01-01

    Optimal control techniques are used to drive the numerical error (truncation, roundoff, commutation) in computing the quaternion vector to zero. The normalization of the quaternion is carried out by an appropriate choice of a performance index, which can be optimized. The error equations are derived from Friedland's (1978) theoretical development, and a matrix Riccati equation results for the computation of the gain matrix. Simulation results show that a high precision of the order of 10^-12 can be obtained using this technique in meeting the q^T q = 1 constraint. The performance of the estimator in the presence of the feedback control that maintains the normalization is studied.

  18. Using back error propagation networks for automatic document image classification

    NASA Astrophysics Data System (ADS)

    Hauser, Susan E.; Cookson, Timothy J.; Thoma, George R.

    1993-09-01

    The Lister Hill National Center for Biomedical Communications is a Research and Development Division of the National Library of Medicine. One of the Center's current research projects involves the conversion of entire journals to bitmapped binary page images. In an effort to reduce operator errors that sometimes occur during document capture, three back error propagation networks were designed to automatically identify journal title based on features in the binary image of the journal's front cover page. For all three network designs, twenty five journal titles were randomly selected from the stored database of image files. Seven cover page images from each title were selected as the training set. For each title, three other cover page images were selected as the test set. Each bitmapped image was initially processed by counting the total number of black pixels in 32-pixel wide rows and columns of the page image. For the first network, these counts were scaled to create 122-element count vectors as the input vectors to a back error propagation network. The network had one output node for each journal classification. Although the network was successful in correctly classifying the 25 journals, the large input vector resulted in a large network and, consequently, a long training period. In an alternative approach, the first thirty-five coefficients of the Fast Fourier Transform of the count vector were used as the input vector to a second network. A third approach was to train a separate network for each journal using the original count vectors as input and with only one output node. The output of the network could be 'yes' (it is this journal) or 'no' (it is not this journal). This final design promises to be most efficient for a system in which journal titles are added or removed as it does not require retraining a large network for each change.

  19. Error field penetration and locking to the backward propagating wave

    DOE PAGES Beta

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  20. Error field penetration and locking to the backward propagating wave

    SciTech Connect

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  1. On the uncertainty of stream networks derived from elevation data: the error propagation approach

    NASA Astrophysics Data System (ADS)

    Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.

    2010-07-01

    The DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in the open source software for statistical computing R: package geoR is used to fit the variogram; package gstat is used to run sequential Gaussian simulation; streams are extracted using the open source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise, usually areas of low local relief and slightly convex (0-10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show a high error (H>0.5) in locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error propagation tool should become a standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the
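
    The per-cell summary step (probability of stream occurrence and Bernoulli entropy across realizations) can be sketched in a few lines. Since conditional Gaussian simulation and GIS stream extraction are out of scope here, the binary stream maps below are simulated directly as stand-ins; the grid size, number of realizations, and occurrence rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in for 100 stream-network rasters extracted from 100 DEM realizations
# (1 = stream cell, 0 = no stream).  In the study each map would come from a
# conditional sequential Gaussian simulation followed by stream extraction.
n_real, ny, nx = 100, 40, 40
p_hidden = rng.beta(0.3, 0.3, (ny, nx))            # hypothetical per-cell rates
stream_maps = rng.random((n_real, ny, nx)) < p_hidden

# Per-cell probability of stream occurrence across realizations.
p = stream_maps.mean(axis=0)

# Information entropy of a Bernoulli trial, H in [0, 1] bits:
# highest where the extracted network is least certain (p near 0.5).
eps = 1e-12
H = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))

print(f"fraction of cells with H > 0.5: {np.mean(H > 0.5):.3f}")
```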

  2. Assessment of error propagation in ultraspectral sounder data via JPEG2000 compression and turbo coding

    NASA Astrophysics Data System (ADS)

    Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok

    2005-08-01

    Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of

  3. On the error propagation of semi-Lagrange and Fourier methods for advection problems

    PubMed Central

    Einkemmer, Lukas; Ostermann, Alexander

    2015-01-01

    In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018

  4. Molecular dynamics simulation of propagating cracks

    NASA Technical Reports Server (NTRS)

    Mullins, M.

    1982-01-01

    Steady state crack propagation is investigated numerically using a model consisting of 236 free atoms in two (010) planes of bcc alpha iron. The continuum region is modeled using the finite element method with 175 nodes and 288 elements. The model shows clear (010) plane fracture to the edge of the discrete region at moderate loads. Analysis of the results obtained indicates that models of this type can provide realistic simulation of steady state crack propagation.

  5. Numerical Simulation of Coherent Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, Mark

    A major goal in quantum computation is the implementation of error correction to produce a logical qubit with an error rate lower than that of the underlying physical qubits. Recent experimental progress demonstrates physical qubits can achieve error rates sufficiently low for error correction, particularly for codes with relatively high thresholds such as the surface code and color code. Motivated by experimental capabilities of neutral atom systems, we use numerical simulation to investigate whether coherent error correction can be effectively used with the 7-qubit color code. The results indicate that coherent error correction does not work at the 10-qubit level in neutral atom array quantum computers. By adding more qubits there is a possibility of making the encoding circuits fault-tolerant which could improve performance.

  6. Error propagation in the computation of volumes in 3D city models with the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Biljecki, F.; Ledoux, H.; Stoter, J.

    2014-11-01

    This paper describes the analysis of the propagation of positional uncertainty in 3D city models to the uncertainty in the computation of their volumes. Current work related to error propagation in GIS is limited to 2D data and 2D GIS operations, especially of rasters. In this research we have (1) developed two engines, one that generates random 3D buildings in CityGML in multiple LODs, and one that simulates acquisition errors in the geometry; (2) performed an error propagation analysis on volume computation based on the Monte Carlo method; and (3) worked towards establishing a framework for investigating error propagation in 3D GIS. The results of the experiments show that a comparatively small error in the geometry of a 3D city model may cause significant discrepancies in the computation of its volume. This has consequences for several applications, such as the estimation of energy demand and property taxes. The contribution of this work is twofold: this is the first error propagation analysis in 3D city modelling, and the novel approach and the engines that we have created can be used for analysing most 3D GIS operations, supporting related research efforts in the future.
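
    The following toy sketch (not the authors' engines) illustrates the Monte Carlo idea for a single box-shaped LOD1 building: vertex coordinates are perturbed with an assumed Gaussian acquisition error and the resulting spread in the computed volume is summarized. The error magnitude and building dimensions are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)

      def footprint_area(xy):
          """Shoelace formula for a simple polygon given as an (n, 2) array."""
          x, y = xy[:, 0], xy[:, 1]
          return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

      # Nominal LOD1 building: 10 m x 8 m footprint, 6 m high.
      footprint = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
      height = 6.0
      sigma = 0.2        # assumed std. dev. of the acquisition error per coordinate [m]

      true_volume = footprint_area(footprint) * height
      volumes = []
      for _ in range(10_000):                       # Monte Carlo realizations
          fp = footprint + rng.normal(0.0, sigma, footprint.shape)
          h = height + rng.normal(0.0, sigma)
          volumes.append(footprint_area(fp) * h)

      volumes = np.array(volumes)
      print(f"true volume {true_volume:.1f} m^3, "
            f"mean {volumes.mean():.1f} m^3, std {volumes.std():.1f} m^3")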

  7. Propagation Of Error And The Reliability Of Global Air Temperature Projections

    NASA Astrophysics Data System (ADS)

    Frank, P.

    2013-12-01

    General circulation model (GCM) projections of the impact of rising greenhouse gases (GHGs) on globally averaged annual surface air temperatures are a simple linear extrapolation of GHG forcing, as indicated by their accurate simulation using the equation ΔT = a × 33 K × [(F0 + ∑i ΔFi)/F0], where F0 is the total GHG forcing of projection year zero, ΔFi is the increment of GHG forcing in the i-th year, and a is a variable dimensionless fraction that follows GCM climate sensitivity. Linearity of GCM air temperature projections means that uncertainty propagates step-wise as the root-sum-square of error. The annual average error in total cloud fraction (TCF) resulting from CMIP5 model theory-bias is ±12%, equivalent to ±5 W m-2 uncertainty in the energy state of the projected atmosphere. Propagated uncertainty due to TCF error is always much larger than the projected globally averaged air temperature anomaly, and reaches ±20 °C in a centennial projection. CMIP5 GCMs thus have no predictive value.
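
    A minimal sketch of the step-wise root-sum-square accumulation invoked above: if each projection year contributes an (assumed independent) uncertainty u_step to the temperature anomaly, the propagated uncertainty after n steps grows as u_step*sqrt(n). The per-step value below is a placeholder, not a number taken from the paper.

      import numpy as np

      # Step-wise root-sum-square (RSS) propagation: each annual step contributes
      # an uncertainty u_step; assuming independent per-step errors, the accumulated
      # uncertainty after n steps is sqrt(sum of squares) = u_step * sqrt(n).
      u_step = 0.15          # hypothetical per-step temperature uncertainty [K]
      years = np.arange(1, 101)
      u_accumulated = np.sqrt(np.cumsum(np.full(years.shape, u_step**2)))

      for n in (1, 10, 50, 100):
          print(f"after {n:3d} years: +/- {u_accumulated[n - 1]:.2f} K")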

  8. Simulation of guided wave propagation near numerical Brillouin zones

    NASA Astrophysics Data System (ADS)

    Kijanka, Piotr; Staszewski, Wieslaw J.; Packo, Pawel

    2016-04-01

    The attractive properties of guided waves provide unique potential for the characterization of incipient damage, particularly in plate-like structures. Among other properties, guided waves can propagate over long distances and can be used to monitor hidden structural features and components. On the other hand, guided propagation brings substantial challenges for data analysis. Signal processing techniques are frequently supported by numerical simulations in order to facilitate problem solution. When numerical models are employed, additional sources of error are introduced. These can play a significant role in the design and development of a wave-based monitoring strategy. Hence, the paper presents an investigation of numerical models for guided wave generation, propagation and sensing. A numerical dispersion analysis for guided waves in plates, based on the LISA approach, is presented and discussed. Both dispersion and modal amplitude characteristics are analysed. It is shown that wave propagation in a numerical model resembles propagation in a periodic medium. Consequently, Lamb wave propagation close to the numerical Brillouin zones is investigated and characterized.

  9. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

    USGS Publications Warehouse

    Phillips, D.L.; Marks, D.G.

    1996-01-01

    In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated
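
    The sketch below is a simplified stand-in for the procedure described above, restricted to one grid point: correlated error terms for temperature, humidity, and wind are drawn from the quoted kriging SDs and propagated through a toy PET function. The correlation matrix and the PET formula are illustrative assumptions, not the study's.

      import numpy as np

      rng = np.random.default_rng(1)

      def pet_toy(temp_c, rh_pct, wind_ms):
          """Toy potential-evapotranspiration proxy (illustrative only)."""
          return 0.3 * temp_c * (1.0 - rh_pct / 100.0) * (1.0 + 0.5 * wind_ms)

      # Kriged values at one grid point and their kriging standard deviations
      kriged = np.array([12.0, 60.0, 3.0])          # temp [deg C], RH [%], wind [m/s]
      sd = np.array([2.6, 8.7, 0.38])               # spring-season kriging SDs
      corr = np.array([[1.0, -0.3, 0.1],            # assumed interpolation-error
                       [-0.3, 1.0, 0.0],            # correlations among variables
                       [0.1, 0.0, 1.0]])
      cov = np.outer(sd, sd) * corr

      # 100 Monte Carlo draws of correlated errors added to the kriged inputs
      inputs = rng.multivariate_normal(kriged, cov, size=100)
      pet = pet_toy(inputs[:, 0], inputs[:, 1], inputs[:, 2])
      print(f"PET mean = {pet.mean():.2f}, CV = {100 * pet.std() / pet.mean():.1f}%")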

  10. Propagation of errors from the sensitivity image in list mode reconstruction

    SciTech Connect

    Qi, Jinyi; Huesman, Ronald H.

    2003-11-15

    List mode image reconstruction is attracting renewed attention. It eliminates the storage of empty sinogram bins. However, a single back projection of all LORs is still necessary for the pre-calculation of a sensitivity image. Since the detection sensitivity is dependent on the object attenuation and detector efficiency, it must be computed for each study. Exact computation of the sensitivity image can be a daunting task for modern scanners with huge numbers of LORs. Thus, some fast approximate calculation may be desirable. In this paper, we theoretically analyze the error propagation from the sensitivity image into the reconstructed image. The theoretical analysis is based on the fixed point condition of the list mode reconstruction. The non-negativity constraint is modeled using the Kuhn-Tucker condition. With certain assumptions and the first order Taylor series approximation, we derive a closed form expression for the error in the reconstructed image as a function of the error in the sensitivity image. The result provides insights on what kind of error might be allowable in the sensitivity image. Computer simulations show that the theoretical results are in good agreement with the measured results.

  11. Error propagation and scaling for tropical forest biomass estimates.

    PubMed Central

    Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando

    2004-01-01

    The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
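
    As a back-of-the-envelope illustration only (not the paper's framework), independent relative error components can be combined in quadrature to give a total relative uncertainty on an AGB estimate; the numbers below are placeholders.

      import math

      # Hypothetical relative standard errors for the four sources discussed above
      errors = {
          "tree measurement": 0.05,
          "allometric model": 0.20,
          "plot sampling": 0.10,
          "landscape representativeness": 0.08,
      }

      # Assuming independence, the combined relative error is the root sum of squares
      total = math.sqrt(sum(e**2 for e in errors.values()))
      print(f"combined relative error on AGB: {100 * total:.1f}%")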

  12. Assessment of Random Error in Phantom Dosimetry with the Use of Error Simulation in Statistical Software

    PubMed Central

    Hoogeveen, R. C.; Martens, E. P.; van der Stelt, P. F.; Berkhout, W. E. R.

    2015-01-01

    Objective. To investigate if software simulation is practical for quantifying random error (RE) in phantom dosimetry. Materials and Methods. We applied software error simulation to an existing dosimetry study. The specifications and the measurement values of this study were brought into the software (R version 3.0.2) together with the algorithm of the calculation of the effective dose (E). Four sources of RE were specified: (1) the calibration factor; (2) the background radiation correction; (3) the read-out process of the dosimeters; and (4) the fluctuation of the X-ray generator. Results. The amount of RE introduced by these sources was calculated on the basis of the experimental values and the mathematical rules of error propagation. The software repeated the calculations of E multiple times (n = 10,000) while attributing the applicable RE to the experimental values. A distribution of E emerged as a confidence interval around an expected value. Conclusions. Credible confidence intervals around E in phantom dose studies can be calculated by using software modelling of the experiment. With credible confidence intervals, the statistical significance of differences between protocols can be substantiated or rejected. This modelling software can also be used for a power analysis when planning phantom dose experiments. PMID:26881200
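
    A minimal sketch of this kind of software error simulation (written here in Python rather than R, with invented readings, weights, and error magnitudes): each random-error source perturbs the measured dose values, the effective dose E is recomputed many times, and a confidence interval is read off the resulting distribution.

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical dosimeter readings (mGy) and tissue weighting factors
      readings = np.array([0.80, 0.45, 0.30, 0.12])
      weights = np.array([0.12, 0.12, 0.04, 0.01])

      def effective_dose(dose):
          return float(np.sum(weights * dose))

      n = 10_000
      E = np.empty(n)
      for i in range(n):
          d = readings.copy()
          d *= rng.normal(1.0, 0.03)                 # (1) calibration factor
          d -= rng.normal(0.01, 0.002, d.shape)      # (2) background correction
          d *= rng.normal(1.0, 0.02, d.shape)        # (3) read-out of each dosimeter
          d *= rng.normal(1.0, 0.05)                 # (4) X-ray generator fluctuation
          E[i] = effective_dose(d)

      lo, hi = np.percentile(E, [2.5, 97.5])
      print(f"E = {E.mean():.4f} (95% interval {lo:.4f} - {hi:.4f})")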

  13. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    SciTech Connect

    Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

    2010-09-15

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article
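
    To make the graph-search idea concrete, here is a hedged sketch of the DRGEP step only: given direct interaction coefficients between species, the overall interaction coefficient of a species with a target is taken as the maximum over all paths of the product of direct coefficients along the path, and species below a threshold are dropped. The coefficients and species names below are invented for illustration.

      import heapq

      # Direct interaction coefficients r[A][B]: immediate dependence of A on B.
      # (Toy values; in practice they come from reaction rate contributions.)
      r = {
          "FUEL": {"O2": 0.9, "RAD1": 0.6},
          "O2":   {"RAD1": 0.3},
          "RAD1": {"RAD2": 0.4, "MINOR": 0.05},
          "RAD2": {"MINOR": 0.02},
          "MINOR": {},
      }

      def overall_coefficients(target):
          """Max-product path coefficients from the target to every species."""
          R = {target: 1.0}
          heap = [(-1.0, target)]              # max-heap via negated coefficients
          while heap:
              neg_c, a = heapq.heappop(heap)
              if -neg_c < R.get(a, 0.0):
                  continue                     # stale heap entry
              for b, direct in r.get(a, {}).items():
                  cand = -neg_c * direct       # product of coefficients along the path
                  if cand > R.get(b, 0.0):
                      R[b] = cand
                      heapq.heappush(heap, (-cand, b))
          return R

      R = overall_coefficients("FUEL")
      threshold = 0.1
      skeletal = [s for s in r if R.get(s, 0.0) >= threshold]
      print("overall coefficients:", {k: round(v, 3) for k, v in R.items()})
      print("retained species:", skeletal)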

  14. Spatial and temporal patterns of error in land cover change analyses: Identifying and propagating uncertainty for ecological monitoring and modeling

    NASA Astrophysics Data System (ADS)

    Burnicki, Amy Colette

    Improving our understanding of the uncertainty associated with a map of land-cover change is needed given the importance placed on modeling our changing landscape. My dissertation research addressed the challenges of estimating the accuracy of a map of change by improving our understanding of the spatio-temporal structure of error in multi-date classified imagery, investigating the relative strength and importance of a temporal dependence between classification errors in multi-date imagery, and exploring the interaction of classification errors within a simulated model of land-cover change. First, I quantified the spatial and temporal patterns of error in multi-date classified imagery acquired for Pittsfield Township, Michigan. Specifically, I examined the propagation of error in a post-classification change analysis. The spatial patterns of misclassification for each classified map, the temporal correlation between the errors in each classified map, and secondary variables that may have affected the pattern of error associated with the map of change were analyzed by addressing a series of research hypothesis. The results of all analyses provided a thorough description and understanding of the spatio-temporal error structure for this test township. Second, I developed a model of error propagation in land-cover change that simulated user-defined spatial and temporal patterns of error within a time-series of classified maps to assess the impact of the specified error patterns on the accuracy of the resulting map of change. Two models were developed. The first established the overall modeling framework using land-cover maps composed of two land-cover classes. The second extended the initial model by using three land-cover class maps to investigate model performance under increased landscape complexity. The results of the simulated model demonstrated that the presence of temporal interaction between the errors of individual classified maps affected the resulting

  15. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    NASA Technical Reports Server (NTRS)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are greater than plus or minus 0.6 hPa in the free troposphere, with nearly a third greater than plus or minus 1.0 hPa at 26 km, where the 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within plus or minus 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (approximately 30 km) can exceed plus or minus 10 percent (more than 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when

  16. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is

  17. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.

    2013-08-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable

  18. Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Cartwright, Keigh

    2014-10-01

    To have a high degree of confidence in simulations one needs code verification, validation, solution verification and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method and how it impacts code verification and validation. A technique is developed to determine the full converged solution with error bounds from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g., Δt, Δx, and macro-particle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
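
    A hedged, single-parameter illustration of Richardson-extrapolation-based error estimation (the talk's method additionally handles multiple convergence parameters, stochastic noise, and regression over several error models): two solutions at step sizes h and h/2 with an assumed order p give an extrapolated value and an error estimate for the finer solution.

      import math

      def richardson(f_h, f_h2, p=2, ratio=2.0):
          """Richardson extrapolation for a quantity computed at step h and h/ratio,
          assuming a leading-order error ~ C*h^p. Returns (extrapolated value,
          error estimate for the finer solution)."""
          factor = ratio**p - 1.0
          return f_h2 + (f_h2 - f_h) / factor, abs(f_h2 - f_h) / factor

      # Example: trapezoid-rule integral of sin(x) on [0, pi] at two resolutions
      def trapezoid(n):
          h = math.pi / n
          y = [math.sin(h * i) for i in range(n + 1)]
          return h * (0.5 * y[0] + sum(y[1:-1]) + 0.5 * y[-1])

      f_h, f_h2 = trapezoid(32), trapezoid(64)
      f_ex, err = richardson(f_h, f_h2, p=2)
      print(f"extrapolated = {f_ex:.8f}, estimated error = {err:.2e}, "
            f"true error = {abs(2.0 - f_h2):.2e}")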

  19. Simulating tsunami propagation in fjords with long-wave models

    NASA Astrophysics Data System (ADS)

    Løvholt, F.; Glimsdal, S.; Lynett, P.; Pedersen, G.

    2015-03-01

    Tsunamis induced by rock slides constitute a severe hazard towards coastal fjord communities. Fjords are narrow and rugged with steep slopes, and modeling the short-frequency and high-amplitude tsunamis in this environment is demanding. In the present paper, our ability (and the lack thereof) to simulate tsunami propagation and run-up in fjords for typical wave characteristics of rock-slide-induced waves is demonstrated. The starting point is a 1 : 500 scale model of the topography and bathymetry of the southern part of Storfjorden fjord system in western Norway. Using measured wave data from the scale model as input to numerical simulations, we find that the leading wave is moderately influenced by nonlinearity and dispersion. For the trailing waves, dispersion and dissipation from the alongshore inundation on the traveling wave become more important. The tsunami inundation was simulated at the two locations of Hellesylt and Geiranger, providing a good match with the measurements in the former location. In Geiranger, the most demanding case of the two, discrepancies are larger. The discrepancies may be explained by a combination of factors, such as the accumulated errors in the wave propagation along large stretches of the fjord, the coarse grid resolution needed to ensure model stability, and scale effects in the laboratory experiments.

  20. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.

  1. Simulation of MAD Cow Disease Propagation

    NASA Astrophysics Data System (ADS)

    Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.

    A computer simulation of the dynamics of BSE disease is presented. Both vertical (to offspring) and horizontal (to neighbors) mechanisms of disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000 with the initial population randomly distributed on the lattice. The disease may be introduced either with the initial population or by spontaneous development of BSE in an individual, at a small frequency. The main results show a critical probability of BSE transmission above which the disease is present in the population. This value is sensitive to possible spatial clustering of the population and also depends on the mechanisms responsible for the disease onset, evolution and propagation. A threshold birth rate below which the population goes extinct is seen. Above this threshold the population is disease-free at equilibrium until another birth rate value is reached, beyond which the disease is present in the population. For the typical model parameters used in the simulation, which may correspond to mad cow disease, we are close to the BSE-free case.
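
    A compact sketch in the same spirit (a generic lattice model with horizontal and vertical transmission plus rare spontaneous onset; the parameter values and update rules are invented for illustration, not the authors' exact model):

      import numpy as np

      rng = np.random.default_rng(3)
      N = 200                     # N x N lattice (smaller than 1000 x 1000 for speed)
      p_h = 0.2                   # horizontal transmission prob. per sick neighbour
      p_v = 0.3                   # vertical transmission prob. (parent -> offspring)
      p_spont = 1e-5              # spontaneous BSE onset per animal per step
      birth, death, death_sick = 0.05, 0.04, 0.2

      occupied = rng.random((N, N)) < 0.5             # initial population
      sick = occupied & (rng.random((N, N)) < 0.01)   # initially infected animals

      for step in range(200):
          # horizontal spread: infection probability grows with sick neighbours
          nbrs = sum(np.roll(sick, s, axis=a) for s in (-1, 1) for a in (0, 1))
          newly = occupied & ~sick & (rng.random((N, N)) < 1.0 - (1.0 - p_h) ** nbrs)
          sick |= newly | (occupied & (rng.random((N, N)) < p_spont))
          # deaths: sick animals die at an elevated rate
          occupied &= ~(rng.random((N, N)) < np.where(sick, death_sick, death))
          sick &= occupied
          # births onto the site below the parent, with possible vertical transmission
          parents = occupied & (rng.random((N, N)) < birth)
          babies = np.roll(parents, 1, axis=0) & ~occupied
          sick |= babies & np.roll(sick & parents, 1, axis=0) & (rng.random((N, N)) < p_v)
          occupied |= babies

      print(f"population: {int(occupied.sum())}, sick: {int(sick.sum())}")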

  2. Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1994-07-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
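
    As a small, hedged illustration of the Taylor-series approach (not the report's own derivations), the sketch below propagates measurement uncertainties into a pressure coefficient Cp = (p - p_inf)/q_inf with q_inf = 0.5*gamma*p_inf*M_inf^2, estimating the sensitivity coefficients numerically; the nominal values and uncertainties are invented.

      import math

      def cp(p, p_inf, mach, gamma=1.4):
          """Pressure coefficient with dynamic pressure q = 0.5*gamma*p_inf*M^2."""
          q_inf = 0.5 * gamma * p_inf * mach**2
          return (p - p_inf) / q_inf

      # Nominal measurements and their standard uncertainties (illustrative values)
      vals = {"p": 80.0e3, "p_inf": 70.0e3, "mach": 0.8}
      uncs = {"p": 150.0, "p_inf": 120.0, "mach": 0.005}

      # First-order (Taylor series) propagation: var(Cp) = sum_i (dCp/dx_i)^2 * u_i^2,
      # with sensitivity coefficients dCp/dx_i estimated by central differences.
      var = 0.0
      for name, u in uncs.items():
          h = 1e-6 * vals[name]
          hi = dict(vals, **{name: vals[name] + h})
          lo = dict(vals, **{name: vals[name] - h})
          sens = (cp(**hi) - cp(**lo)) / (2.0 * h)
          var += (sens * u) ** 2

      print(f"Cp = {cp(**vals):.4f} +/- {math.sqrt(var):.4f}")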

  3. Analog-digital simulation of transient-induced logic errors and upset susceptibility of an advanced control system

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Choi, G.; Iyer, R. K.

    1990-01-01

    A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.

  4. Spatio-temporal precipitation error propagation in runoff modelling: a case study in central Sweden

    NASA Astrophysics Data System (ADS)

    Olsson, J.

    2006-07-01

    The propagation of spatio-temporal errors in precipitation estimates to runoff errors in the output from the conceptual hydrological HBV model was investigated. The study region was the Gimån catchment in central Sweden, and the period year 2002. Five precipitation sources were considered: NWP model (H22), weather radar (RAD), precipitation gauges (PTH), and two versions of a mesoscale analysis system (M11, M22). To define the baseline estimates of precipitation and runoff, used to define seasonal precipitation and runoff biases, the mesoscale climate analysis M11 was used. The main precipitation biases were a systematic overestimation of precipitation by H22, in particular during winter and early spring, and a pronounced local overestimation by RAD during autumn, in the western part of the catchment. These overestimations in some cases exceeded 50% in terms of seasonal subcatchment relative accumulated volume bias, but generally the bias was within ±20%. The precipitation data from the different sources were used to drive the HBV model, set up and calibrated for two stations in Gimån, both for continuous simulation during 2002 and for forecasting of the spring flood peak. In summer, autumn and winter all sources agreed well. In spring H22 overestimated the accumulated runoff volume by ~50% and peak discharge by almost 100%, owing to both overestimated snow depth and precipitation during the spring flood. PTH overestimated spring runoff volumes by ~15% owing to overestimated winter precipitation. The results demonstrate how biases in precipitation estimates may exhibit a substantial space-time variability, and may further become either magnified or reduced when applied for hydrological purposes, depending on both temporal and spatial variations in the catchment. Thus, the uncertainty in precipitation estimates should preferably be specified as a function of both time and space.

  5. Effects of Error Experience When Learning to Simulate Hypernasality

    ERIC Educational Resources Information Center

    Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.

    2013-01-01

    Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…

  6. GEOS-2 refraction program summary document. [ionospheric and tropospheric propagation errors in satellite tracking instruments

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.

    1977-01-01

    Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.

  7. Simulation of Systematic Errors in Phase-Referenced VLBI Astrometry

    NASA Astrophysics Data System (ADS)

    Pradel, N.; Charlot, P.; Lestrade, J.-F.

    2005-12-01

    The astrometric accuracy in the relative coordinates of two angularly-close radio sources observed with the phase-referencing VLBI technique is limited by systematic errors. These include geometric errors and atmospheric errors. Based on simulation with the SPRINT software, we evaluate the impact of these errors in the estimated relative source coordinates for standard VLBA observations. Such evaluations are useful to estimate the actual accuracy of phase-referenced VLBI astrometry.

  8. Audibility of dispersion error in room acoustic finite-difference time-domain simulation as a function of simulation distance.

    PubMed

    Saarelma, Jukka; Botts, Jonathan; Hamilton, Brian; Savioja, Lauri

    2016-04-01

    Finite-difference time-domain (FDTD) simulation has been a popular area of research in room acoustics due to its capability to simulate wave phenomena in a wide bandwidth directly in the time-domain. A downside of the method is that it introduces a direction and frequency dependent error to the simulated sound field due to the non-linear dispersion relation of the discrete system. In this study, the perceptual threshold of the dispersion error is measured in three-dimensional FDTD schemes as a function of simulation distance. Dispersion error is evaluated for three different explicit, non-staggered FDTD schemes using the numerical wavenumber in the direction of the worst-case error of each scheme. It is found that the thresholds for the different schemes do not vary significantly when the phase velocity error level is fixed. The thresholds are found to vary significantly between the different sound samples. The measured threshold for the audibility of dispersion error at the probability level of 82% correct discrimination for three-alternative forced choice is found to be 9.1 m of propagation in a free field, that leads to a maximum group delay error of 1.8 ms at 20 kHz with the chosen phase velocity error level of 2%. PMID:27106330

  9. Error propagation: a comparison of Shack-Hartmann and curvature sensors.

    PubMed

    Kellerer, Aglaé N; Kellerer, Albrecht M

    2011-05-01

    Phase estimates in adaptive-optics systems are computed by use of wavefront sensors, such as Shack-Hartmann or curvature sensors. In either case, the standard error of the phase estimates is proportional to the standard error of the measurements; but the error-propagation factors are different. We calculate the ratio of these factors for curvature and Shack-Hartmann sensors as a function of the number of sensors, n, on a circular aperture. If the sensor spacing is kept constant and the pupil is enlarged, the ratio increases as n^0.4. When more sensing elements are accommodated on the same aperture, it increases even faster, namely in proportion to n^0.8. With large numbers of sensing elements, this increase can limit the applicability of curvature sensors. PMID:21532691

  10. Error-Based Simulation for Error-Awareness in Learning Mechanics: An Evaluation

    ERIC Educational Resources Information Center

    Horiguchi, Tomoya; Imai, Isao; Toumoto, Takahito; Hirashima, Tsukasa

    2014-01-01

    Error-based simulation (EBS) has been developed to generate phenomena by using students' erroneous ideas and also offers promise for promoting students' awareness of errors. In this paper, we report the evaluation of EBS used in learning "normal reaction" in a junior high school. An EBS class, where students learned the concept…

  11. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.

  12. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    NASA Astrophysics Data System (ADS)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.

  13. Mathematical analysis study for radar data processing and enhancement. Part 2: Modeling of propagation path errors

    NASA Technical Reports Server (NTRS)

    James, R.; Brownlow, J. D.

    1985-01-01

    A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.

  14. Simulation of Radar Rainfall Fields: A Random Error Model

    NASA Astrophysics Data System (ADS)

    Aghakouchak, A.; Habib, E.; Bardossy, A.

    2008-12-01

    Precipitation is a major input in hydrological and meteorological models. It is believed that uncertainties due to input data will propagate in modeling hydrologic processes. Stochastically generated rainfall data are used as input to hydrological and meteorological models to assess model uncertainties and climate variability in water resources systems. The superposition of random errors from different sources is one of the main factors in the uncertainty of radar estimates. One way to express these uncertainties is to stochastically generate random error fields and impose them on radar measurements in order to obtain an ensemble of radar rainfall estimates. In the method introduced here, the random error consists of two components: a purely random error and an error dependent on the indicator variable. The parameters of the error model are estimated using a heteroscedastic maximum likelihood model in order to account for variance heterogeneity in radar rainfall error estimates. When reflectivity values are considered, the exponent and multiplicative factor of the Z-R relationship are estimated simultaneously with the model parameters. The presented model performs better than previous approaches, which generally leave heteroscedasticity unaccounted for in the error fields and thus in the radar ensemble.
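
    A heavily simplified sketch of the two-component idea (the parameters and the synthetic field are invented; the actual model estimates its parameters by heteroscedastic maximum likelihood): each radar pixel receives a multiplicative perturbation composed of a purely random part plus a part conditioned on an indicator of rain intensity, and repeating the draw yields an ensemble.

      import numpy as np

      rng = np.random.default_rng(11)

      radar = rng.gamma(shape=2.0, scale=3.0, size=(50, 50))   # synthetic radar field [mm/h]
      indicator = (radar > 5.0).astype(float)                  # indicator: "heavy rain" pixels

      def perturbed_field():
          eps_random = rng.normal(0.0, 0.15, radar.shape)               # purely random part
          eps_indic = indicator * rng.normal(-0.10, 0.25, radar.shape)  # indicator-dependent part
          return radar * np.exp(eps_random + eps_indic)                 # multiplicative error

      ensemble = np.stack([perturbed_field() for _ in range(100)])
      print("ensemble mean / radar mean:", float(ensemble.mean() / radar.mean()))
      print("pixelwise spread (std of log-ratio):", float(np.log(ensemble / radar).std()))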

  15. The influence of natural variability and interpolation errors on bias characterization in RCM simulations

    NASA Astrophysics Data System (ADS)

    Addor, Nans; Fischer, Erich M.

    2015-10-01

    Climate model simulations are routinely compared to observational data sets for evaluation purposes. The resulting differences can be large and induce artifacts if propagated through impact models. They are usually termed "model biases," suggesting that they exclusively stem from systematic models errors. Here we explore for Switzerland the contribution of two other components of this mismatch, which are usually overlooked: interpolation errors and natural variability. Precipitation and temperature simulations from the RCM COSMO-Community Land Model were compared to two observational data sets, for which estimates of interpolation errors were derived. Natural variability on the multidecadal time scale was estimated using three approaches relying on homogenized time series, multiple runs of the same climate model, and bootstrapping of 30 year meteorological records. We find that although these methods yield different estimates, the contribution of the natural variability to RCM-observation differences in 30 year means is usually small. In contrast, uncertainties in observational data sets induced by interpolation errors can explain a substantial proportion of the mismatch of 30 year means. In those cases, we argue that the model biases can hardly be distinguished from interpolation errors, making the characterization and reduction of model biases particularly delicate. In other regions, RCM biases clearly exceed the estimated contribution of natural variability and interpolation errors, enabling bias characterization and robust model evaluation. Overall, we argue that bias correction of climate simulations needs to account for observational uncertainties and natural variability. We particularly stress the need for reliable error estimates to accompany observational data sets.

  16. Programmable simulator for beam propagation in turbulent atmosphere.

    PubMed

    Rickenstorff, Carolina; Rodrigo, José A; Alieva, Tatiana

    2016-05-01

    The study of light propagation through the atmosphere is crucial in different areas such as astronomy, free-space communications, remote sensing, etc. Since outdoor experiments are expensive and difficult to reproduce, it is important to develop realistic numerical and experimental simulations. It has been demonstrated that spatial light modulators (SLMs) are well suited for simulating different turbulent conditions in the laboratory. Here, we present a programmable experimental setup based on liquid crystal SLMs for simulation and analysis of beam propagation through a weakly turbulent atmosphere. The simulator allows changing the propagation distances and atmospheric conditions without the need to move optical elements. Its performance is tested for Gaussian and vortex beams. PMID:27137610

  17. Belief Propagation for Error Correcting Codes and Lossy Compression Using Multilayer Perceptrons

    NASA Astrophysics Data System (ADS)

    Mimura, Kazushi; Cousseau, Florent; Okada, Masato

    2011-03-01

    The belief propagation (BP) based algorithm is investigated as a potential decoder for both error correcting codes and lossy compression, which are based on non-monotonic tree-like multilayer perceptron encoders. We discuss whether BP can give practical algorithms in these schemes. The BP implementations in this kind of fully connected network unfortunately show strong limitations, while the theoretical results seem somewhat promising. Instead, the BP-based algorithms reveal that the solution space may have a rich and complex structure.

  18. Reducing the error growth in the numerical propagation of satellite orbits

    NASA Astrophysics Data System (ADS)

    Ferrandiz, Jose M.; Vigo, Jesus; Martin, P.

    1991-12-01

    An algorithm especially designed for the long-term numerical integration of perturbed oscillators, in one or several frequencies, is presented. The method is applied to the numerical propagation of satellite orbits, using focal variables, and results for highly eccentric and nearly circular cases are reported. The method performs particularly well for high eccentricity. For e = 0.99 and J2 + J3 perturbations it locates the last perigee after 1000 revolutions with an error of less than 1 cm, using only 80 derivative evaluations per revolution. In general the approach provides about a hundred times more accuracy than Bettis methods over one thousand revolutions.

  19. Simulation of long distance optical propagation on a benchtop.

    PubMed

    Fein, M E; Sheng, S C; Sobottke, M

    1989-04-15

    An optical instrument derived from two telescopes simulates long distance propagation of optical wavefronts over short real distances. Both geometric and wave optical effects are correctly simulated. One 900:1 distance scaler is used routinely for benchtop testing and adjustment of laser leveling instruments that work at ranges of the order of a kilometer. PMID:20548700

  20. Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES

    NASA Astrophysics Data System (ADS)

    Sarkar, B.; Bhunia, C. T.; Maulik, U.

    2012-06-01

    The advanced encryption standard (AES) is a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available: the redundancy-based technique and the bit-based parity technique. The first has a significant advantage over the second in that it can correct any error with certainty, but at the cost of a higher level of overhead and hence lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that would speed up the process of reliable encryption and hence secure communication.

  1. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
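
    The following sketch is a toy version of the comparison described above, with a single fully correlated systematic error: the likelihood of a set of experimental points is computed once with the multivariate Gaussian and its inverted covariance matrix, and once by averaging products of univariate Gaussians over samples of the systematic error. The estimate approaches the exact value as the number of systematic-error samples grows; the data and uncertainties are synthetic.

      import numpy as np

      rng = np.random.default_rng(5)

      # Model: y_i = t + e_rand_i + e_sys, with one shared systematic error.
      n, sig_r, sig_s, t = 20, 0.05, 0.10, 1.0
      y = t + rng.normal(0.0, sig_r, n) + rng.normal(0.0, sig_s)

      # Exact likelihood: multivariate Gaussian with covariance sig_r^2*I + sig_s^2*J
      cov = sig_r**2 * np.eye(n) + sig_s**2 * np.ones((n, n))
      resid = y - t
      exact = np.exp(-0.5 * resid @ np.linalg.solve(cov, resid)) / \
              np.sqrt((2.0 * np.pi) ** n * np.linalg.det(cov))

      # Estimate: average, over K samples of the systematic error, of the product
      # of independent univariate Gaussians for the random errors.
      for K in (10, 1000, 100000):
          e_sys = rng.normal(0.0, sig_s, K)
          logp = -0.5 * ((y - t - e_sys[:, None]) / sig_r) ** 2
          per_sample = np.exp(logp.sum(axis=1)) / (sig_r * np.sqrt(2.0 * np.pi)) ** n
          print(f"K={K:6d}  sampled likelihood = {per_sample.mean():.3e}  "
                f"(exact = {exact:.3e})")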

  2. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.

  3. Wavefront error simulator for evaluating optical testing instrumentation

    NASA Technical Reports Server (NTRS)

    Golden, L. J.

    1975-01-01

    A wavefront error simulator has been designed and fabricated to evaluate experimentally test instrumentation for the Large Space Telescope (LST) program. The principal operating part of the simulator is an aberration generator that introduces low-order aberrations of several waves magnitude with an incremented adjustment capability of lambda/100. Each aberration type can be introduced independently with any desired spatial orientation.

  4. The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre-and Post-Test Designs

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2013-01-01

    Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre-and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…

  5. Characteristics and dependencies of error in satellite-based flood event simulations

    NASA Astrophysics Data System (ADS)

    Mei, Yiwen; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Zoccatelli, Davide; Borga, Marco

    2016-04-01

    The error in satellite-precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied to the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, the error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both the systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and for the rain flood events with high runoff coefficients. This event-based analysis of satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.

  6. Abundance recovery error analysis using simulated AVIRIS data

    NASA Technical Reports Server (NTRS)

    Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.

    1992-01-01

    Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed-form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario modeled as being derived from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
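
    A hedged sketch of the contrast drawn above, for a simple linear unmixing model with white measurement noise (the endmember spectra, abundances, and noise level are synthetic): the covariance-analysis route gives the abundance error covariance in closed form, sigma^2 (E^T E)^-1, while the Monte Carlo route estimates the same quantity from many noisy trials.

      import numpy as np

      rng = np.random.default_rng(2)

      bands, n_end = 50, 3
      E = np.abs(rng.normal(0.5, 0.2, (bands, n_end)))   # synthetic endmember spectra
      a_true = np.array([0.5, 0.3, 0.2])                 # true abundances
      sigma = 0.01                                       # measurement noise std

      # Covariance analysis: for least-squares unmixing a_hat = (E^T E)^-1 E^T y,
      # white noise of variance sigma^2 gives Cov(a_hat) = sigma^2 (E^T E)^-1.
      cov_analytic = sigma**2 * np.linalg.inv(E.T @ E)

      # Monte Carlo: repeat the unmixing for many noisy realizations of the spectrum
      a_hat = np.empty((5000, n_end))
      for i in range(5000):
          y = E @ a_true + rng.normal(0.0, sigma, bands)
          a_hat[i] = np.linalg.lstsq(E, y, rcond=None)[0]
      cov_mc = np.cov(a_hat.T)

      print("analytic abundance std:", np.sqrt(np.diag(cov_analytic)).round(4))
      print("Monte Carlo abundance std:", np.sqrt(np.diag(cov_mc)).round(4))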

  7. Numerical error in groundwater flow and solute transport simulation

    NASA Astrophysics Data System (ADS)

    Woods, Juliette A.; Teubner, Michael D.; Simmons, Craig T.; Narayan, Kumar A.

    2003-06-01

    Models of groundwater flow and solute transport may be affected by numerical error, leading to quantitative and qualitative changes in behavior. In this paper we compare and combine three methods of assessing the extent of numerical error: grid refinement, mathematical analysis, and benchmark test problems. In particular, we assess the popular solute transport code SUTRA [Voss, 1984] as being a typical finite element code. Our numerical analysis suggests that SUTRA incorporates a numerical dispersion error and that its mass-lumped numerical scheme increases the numerical error. This is confirmed using a Gaussian test problem. A modified SUTRA code, in which the numerical dispersion is calculated and subtracted, produces better results. The much more challenging Elder problem [Elder, 1967; Voss and Souza, 1987] is then considered. Calculation of its numerical dispersion coefficients and numerical stability show that the Elder problem is prone to error. We confirm that Elder problem results are extremely sensitive to the simulation method used.

  8. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    environmental control. We simulate the performance of the stitching algorithm dealing with surface error and misalignment of the ACF, and noise suppression, which provides guidelines to optomechanical design of the stitching test system.

  9. Temperature measurement error simulation of the pure rotational Raman lidar

    NASA Astrophysics Data System (ADS)

    Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang

    2015-11-01

    Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes. Lidar has some advantages in atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we simulated the temperature measurement errors of the double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of the atmospheric transmittance and the range in the lidar equation, we simulated the temperature measurement errors that are influenced by the beam splitting system parameters, such as the center wavelength, the receiving bandwidth and the atmospheric temperature. We analyzed three types of temperature measurement errors theoretically. We propose several design methods for the beam splitting system to reduce the temperature measurement errors. Second, we simulated the temperature measurement error profiles using the lidar equation. Once the lidar power-aperture product is determined, the main target of our lidar system is to reduce the statistical and leakage errors.

  10. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    SciTech Connect

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).

  11. A simulation of high energy cosmic ray propagation 2

    NASA Technical Reports Server (NTRS)

    Honda, M.; Kamata, K.; Kifune, T.; Matsubara, Y.; Mori, M.; Nishijima, K.

    1985-01-01

    The propagation of cosmic rays in the Galactic arm is simulated. The Galactic magnetic field is known to follow the so-called Galactic arms as its main structure, with turbulence on a scale of about 30 pc. The distribution of cosmic rays in the Galactic arm is studied. The escape time and the possible anisotropies caused by the arm structure are discussed.

  12. Monte Carlo Simulations of Light Propagation in Apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...
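
    As a hedged illustration of the general technique (not the hyperspectral-property model used in the paper), the sketch below runs a minimal photon random-walk Monte Carlo through a homogeneous turbid slab with isotropic scattering; the optical coefficients and slab thickness are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder optical properties (per mm); not the measured apple values.
mu_a, mu_s = 0.05, 1.0           # absorption and scattering coefficients
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
thickness = 10.0                 # slab thickness in mm

def run_photon():
    """Track one photon through the slab with isotropic scattering."""
    z, uz, w = 0.0, 1.0, 1.0     # depth, z-direction cosine, photon weight
    while True:
        z += uz * rng.exponential(1.0 / mu_t)   # free path to next event
        if z < 0.0:
            return "reflected", w
        if z > thickness:
            return "transmitted", w
        w *= albedo                              # absorb part of the weight
        if w < 1e-4:                             # terminate faint photons
            return "absorbed", 0.0
        uz = rng.uniform(-1.0, 1.0)              # isotropic rescatter

n = 20000
tallies = {"reflected": 0.0, "transmitted": 0.0, "absorbed": 0.0}
for _ in range(n):
    fate, w = run_photon()
    tallies[fate] += w
print({k: v / n for k, v in tallies.items()})
```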

  13. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

    In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.

  14. Propagation of radar rainfall uncertainty in urban flood simulations

    NASA Astrophysics Data System (ADS)

    Liguori, Sara; Rico-Ramirez, Miguel

    2013-04-01

    This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3] and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of the uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern-Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by adding a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure on purely stochastic fields. A
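
    A minimal sketch of the kind of ensemble perturbation described above follows, assuming an exponential spatial correlation model on a small grid; it imposes the correlation structure on white noise through a Cholesky factor. The grid size, correlation length and additive error model are assumptions, not the calibrated values of this study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed small grid and exponential correlation length (placeholders).
nx, ny, corr_len = 20, 20, 5.0
xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# Exponential spatial correlation matrix between all grid cells.
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-dist / corr_len)

# Impose the correlation structure on white noise via a Cholesky factor.
L = np.linalg.cholesky(C + 1e-10 * np.eye(nx * ny))   # jitter for stability
perturbation = (L @ rng.standard_normal(nx * ny)).reshape(nx, ny)

# One ensemble member = unperturbed radar field + correlated perturbation
# (additive error shown here as a placeholder for the calibrated error model).
radar_field = np.full((nx, ny), 2.0)        # mm/h, placeholder
member = radar_field + perturbation
print(member.mean(), member.std())
```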

  15. Simulation of the elastic wave propagation in anisotropic microstructures

    NASA Astrophysics Data System (ADS)

    Bryner, Juerg; Vollmann, Jacqueline; Profunser, Dieter M.; Dual, Jurg

    2007-06-01

    For the interpretation of optical pump-probe measurements on microstructures, the wave propagation in anisotropic 3-D structures with arbitrary geometries is numerically calculated. The laser acoustic pump-probe technique generates bulk waves in structures in a thermo-elastic way. This method is well established for non-destructive measurements of thin films with an in-depth resolution in the order of 10 nm. The pump-probe technique can also be used for measurements, e.g. for quality inspection, of three-dimensional structures with arbitrary geometries, like MEMS components. For the interpretation of the measurements it is necessary that the wave propagation in the specimen to be inspected can be calculated. Here, the wave propagation for various geometries and materials is investigated. In the first part, the wave propagation in isotropic axisymmetric structures is simulated with a 2-D finite difference formulation. The numerical results are verified with measurements of macroscopic specimens. In a second step, the simulations are extended to 3-D structures with orthotropic material properties. The implemented code allows the calculation of the wave propagation for different orientations of the material axes (orientation of the orthotropic axes relative to the geometry of the structure). Limits of the presented algorithm are discussed and future directions of the on-going research project are presented.
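
    For the 2-D finite difference part, a minimal scalar (isotropic) wave-equation sketch follows; it is a stand-in for the paper's formulation, with placeholder grid, wave speed and source, and it does not cover the anisotropic 3-D case.

```python
import numpy as np

# Minimal 2-D scalar wave equation solver, u_tt = c^2 (u_xx + u_yy),
# advanced with a leapfrog update; parameters are placeholders and the
# boundaries are periodic (via np.roll) for brevity.
nx = ny = 200
dx = 1.0                      # grid spacing (arbitrary units)
c = 1.0                       # wave speed
dt = 0.4 * dx / c             # below the 2-D CFL limit dx / (c * sqrt(2))

u_prev = np.zeros((nx, ny))
u = np.zeros((nx, ny))
u[nx // 2, ny // 2] = 1.0     # initial point disturbance

for _ in range(300):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    u_next = 2.0 * u - u_prev + (c * dt)**2 * lap
    u_prev, u = u, u_next

print("peak amplitude after 300 steps:", np.abs(u).max())
```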

  16. Simulations of time spreading in shallow water propagation

    NASA Astrophysics Data System (ADS)

    Thorsos, Eric I.; Elam, W. T.; Tang, Dajun; Henyey, Frank S.; Williams, Kevin L.; Reynolds, Stephen A.

    2002-11-01

    Pulse propagation in a shallow water wave guide leads to time spreading due to multipath effects. Results of PE simulations will be described for pulse propagation in shallow water with a rough sea surface and a flat sandy sea floor. The simulations illustrate that such time spreading may be significantly less at longer ranges than for the flat surface case. Pressure fields are simulated in two space dimensions and have been obtained using a wide-angle PE code developed by Rosenberg [A. D. Rosenberg, J. Acoust. Soc. Am. 105, 144-153 (1999)]. The effect of rough surface scattering is to cause acoustic energy initially propagating at relatively high angles but still below the critical angle at the sea floor to be eventually shifted to grazing angles above the critical angle. This energy is then lost into the bottom, effectively stripping higher propagating modes. The surviving energy at longer ranges is concentrated in the lowest modes and shows little effect of time spreading. Thus, the effect of rough surface scattering is found to produce a simpler temporal field structure than if the surface were treated as flat. [Work supported by ONR.]

  17. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  18. Targeting Error Simulator for Image-guided Prostate Needle Placement

    PubMed Central

    Lasso, Andras; Avni, Shachar; Fichtinger, Gabor

    2010-01-01

    Motivation: Needle-based biopsy and local therapy of prostate cancer depend on multimodal imaging for both target planning and needle guidance. The clinical process involves selection of target locations in a pre-operative image volume and registering these to an intra-operative volume. Registration inaccuracies inevitably lead to targeting error, a major clinical concern. The analysis of targeting error requires a large number of images with known ground truth, which has been infeasible even for the largest research centers. Methods: We propose to generate realistic prostate imaging data in a controllable way, with known ground truth, by simulation of prostate size, shape, motion and deformation typically encountered in prostatic needle placement. These data are then used to evaluate a given registration algorithm, by testing its ability to reproduce ground truth contours, motions and deformations. The method builds on a statistical shape atlas to generate a large number of realistic prostate shapes and finite element modeling to generate high-fidelity deformations, while segmentation error is simulated by warping the ground truth data in specific prostate regions. Expected target registration error (TRE) is computed as a vector field. Results: The simulator was configured to evaluate the TRE when using a surface-based rigid registration algorithm in a typical prostate biopsy targeting scenario. Simulator parameters, such as segmentation error and deformation, were determined by measurements in clinical images. Turnaround time for the full simulation of one test case was below 3 minutes. The simulator is customizable for testing, comparing, and optimizing segmentation and registration methods and is independent of the imaging modalities used. PMID:21096275

  19. Statistical error in particle simulations of low mach number flows

    SciTech Connect

    Hadjiconstantinou, N G; Garcia, A L

    2000-11-13

    We present predictions for the statistical error due to finite sampling in the presence of thermal fluctuations in molecular simulation algorithms. The expressions are derived using equilibrium statistical mechanics. The results show that the number of samples needed to adequately resolve the flowfield scales as the inverse square of the Mach number. Agreement of the theory with direct Monte Carlo simulations shows that the use of equilibrium theory is justified.
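
    A back-of-the-envelope sketch of the reported scaling follows: with the relative statistical error in mean velocity taken to vary as the thermal-to-flow velocity ratio divided by the square root of the number of samples, the required sample count grows as the inverse square of the Mach number. The prefactor and parameter values below are placeholders.

```python
def samples_needed(mach, frac_error, gamma=5.0 / 3.0, particles_per_cell=100):
    """Rough number of independent samples needed to resolve the mean flow
    velocity to a given fractional error; placeholder prefactor, but it shows
    the inverse-square dependence on Mach number."""
    # relative noise ~ 1 / (Ma * sqrt(gamma * N * M))  =>  solve for M samples
    return 1.0 / (gamma * particles_per_cell * (mach * frac_error) ** 2)

for ma in (1.0, 0.1, 0.01):
    print(f"Ma = {ma:5.2f}  ->  samples ~ {samples_needed(ma, 0.05):.2e}")
```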

  20. Discreteness noise versus force errors in N-body simulations

    NASA Technical Reports Server (NTRS)

    Hernquist, Lars; Hut, Piet; Makino, Jun

    1993-01-01

    A low accuracy in the force calculation per time step of a few percent for each particle pair is sufficient for collisionless N-body simulations. Higher accuracy is made meaningless by the dominant discreteness noise in the form of two-body relaxation, which can be reduced only by increasing the number of particles. Since an N-body simulation is a Monte Carlo procedure in which each particle-particle force is essentially random, i.e., carries an error of about 1000 percent, the only requirement is a systematic averaging-out of these intrinsic errors. We illustrate these assertions with two specific examples in which individual pairwise forces are deliberately allowed to carry significant errors: tree-codes on supercomputers and algorithms on special-purpose machines with low-precision hardware.

  1. Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances

    NASA Astrophysics Data System (ADS)

    Vermeesch, P.

    2015-12-01

    Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method: 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J [40Ar/39Ar - 298.56 x 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ²(t) = [J² σ²(R) + R² σ²(J)] / [λ (1 + R J)]², which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single and multi collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking
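
    A hedged numerical sketch of the first-order propagation, with and without the R-J covariance term, follows; the decay constant is a commonly used value and all other numbers are invented for illustration (this is not Ar-Ar_Redux's compositional treatment).

```python
import numpy as np

lam = 5.543e-10          # 40K decay constant (1/yr), commonly used value
J, R = 0.01, 20.0        # irradiation parameter and 40Ar*/39Ar ratio (invented)
sJ, sR = 1e-4, 0.1       # 1-sigma uncertainties (invented)
covJR = 0.5 * sJ * sR    # assumed correlation of 0.5 between J and R

t = np.log(1.0 + J * R) / lam

# First-order (delta-method) propagation, with and without the covariance term.
dtdR = J / (lam * (1.0 + J * R))
dtdJ = R / (lam * (1.0 + J * R))
var_nocov = dtdR**2 * sR**2 + dtdJ**2 * sJ**2
var_cov = var_nocov + 2.0 * dtdR * dtdJ * covJR

print(f"age = {t / 1e6:.2f} Ma")
print(f"1-sigma, no covariance  : {np.sqrt(var_nocov) / 1e6:.3f} Ma")
print(f"1-sigma, with covariance: {np.sqrt(var_cov) / 1e6:.3f} Ma")
```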

  2. Communication Systems Simulator with Error Correcting Codes Using MATLAB

    ERIC Educational Resources Information Center

    Gomez, C.; Gonzalez, J. E.; Pardo, J. M.

    2003-01-01

    In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…

  3. Error and efficiency of replica exchange molecular dynamics simulations

    PubMed Central

    Rosta, Edina; Hummer, Gerhard

    2009-01-01

    We derive simple analytical expressions for the error and computational efficiency of replica exchange molecular dynamics (REMD) simulations (and by analogy replica exchange Monte Carlo simulations). The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. As a specific example, we consider the folding and unfolding of a protein. The efficiency is defined as the rate with which the error in an estimated equilibrium property, as measured by the variance of the estimator over repeated simulations, decreases with simulation time. For two-state systems, this rate is in general independent of the particular property. Our main result is that, with comparable computational resources used, the relative efficiency of REMD and molecular dynamics (MD) simulations is given by the ratio of the number of transitions between the two states averaged over all replicas at the different temperatures, and the number of transitions at the single temperature of the MD run. This formula applies if replica exchange is frequent, as compared to the transition times. High efficiency of REMD is thus achieved by including replica temperatures in which the frequency of transitions is higher than that at the temperature of interest. In tests of the expressions for the error in the estimator, computational efficiency, and the rate of equilibration we find quantitative agreement with the results both from kinetic models of REMD and from actual all-atom simulations of the folding of a peptide in water. PMID:19894977
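
    The headline result lends itself to a one-line check; the sketch below computes the relative efficiency from invented transition counts per replica and for a single-temperature MD run of comparable total cost.

```python
# Invented transition counts between the two metastable states.
transitions_per_replica = [2, 5, 11, 23, 40]   # replicas at increasing T
transitions_single_T_md = 2                    # plain MD at the T of interest

# Main result (frequent-exchange limit, comparable total resources): the
# REMD/MD efficiency ratio is the replica-averaged transition count divided
# by the single-temperature transition count.
avg_transitions = sum(transitions_per_replica) / len(transitions_per_replica)
print(f"REMD / MD relative efficiency ~ "
      f"{avg_transitions / transitions_single_T_md:.1f}x")
```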

  4. Error and efficiency of replica exchange molecular dynamics simulations.

    PubMed

    Rosta, Edina; Hummer, Gerhard

    2009-10-28

    We derive simple analytical expressions for the error and computational efficiency of replica exchange molecular dynamics (REMD) simulations (and by analogy replica exchange Monte Carlo simulations). The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. As a specific example, we consider the folding and unfolding of a protein. The efficiency is defined as the rate with which the error in an estimated equilibrium property, as measured by the variance of the estimator over repeated simulations, decreases with simulation time. For two-state systems, this rate is in general independent of the particular property. Our main result is that, with comparable computational resources used, the relative efficiency of REMD and molecular dynamics (MD) simulations is given by the ratio of the number of transitions between the two states averaged over all replicas at the different temperatures, and the number of transitions at the single temperature of the MD run. This formula applies if replica exchange is frequent, as compared to the transition times. High efficiency of REMD is thus achieved by including replica temperatures in which the frequency of transitions is higher than that at the temperature of interest. In tests of the expressions for the error in the estimator, computational efficiency, and the rate of equilibration we find quantitative agreement with the results both from kinetic models of REMD and from actual all-atom simulations of the folding of a peptide in water. PMID:19894977

  5. Error propagation in relative real-time reverse transcription polymerase chain reaction quantification models: the balance between accuracy and precision.

    PubMed

    Nordgård, Oddmund; Kvaløy, Jan Terje; Farmen, Ragne Kristin; Heikkilä, Reino

    2006-09-15

    Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision. PMID:16899212
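
    As a hedged illustration (not one of the specific models evaluated in the paper), the sketch below applies first-order error propagation to an efficiency-corrected relative ratio of the common Pfaffl-style form, with invented Ct differences and uncertainties.

```python
import math

# Invented Ct differences (calibrator minus sample) and amplification efficiencies.
E_t, dCt_t, s_t = 1.95, 3.2, 0.15     # target gene
E_r, dCt_r, s_r = 2.00, 1.1, 0.10     # reference gene

# Efficiency-corrected relative ratio, Pfaffl-style form (assumed here):
ratio = (E_t ** dCt_t) / (E_r ** dCt_r)

# First-order error propagation: for Q = E^dCt with E treated as exact,
# sigma_Q / Q = ln(E) * sigma_dCt; relative variances of the two factors add.
rel_var = (math.log(E_t) * s_t) ** 2 + (math.log(E_r) * s_r) ** 2
print(f"ratio = {ratio:.2f} +/- {ratio * math.sqrt(rel_var):.2f} (1 sigma)")
```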

  6. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
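
    The sketch below illustrates the underlying idea only: Latin Hypercube Sampling of an input-raster error and a model coefficient, propagated through a toy raster model to give a per-cell output distribution. It does not use REPTool or ArcGIS, and all values are placeholders.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
ppf = np.vectorize(NormalDist().inv_cdf)   # standard-normal inverse CDF

def latin_hypercube(n, dims):
    """n stratified samples in (0, 1)^dims, one stratum per sample per dim."""
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n
    for d in range(dims):
        u[:, d] = rng.permutation(u[:, d])
    return u

# Toy raster model (placeholder, not REPTool's): output = coeff * raster
raster = rng.random((10, 10))              # input raster, arbitrary units
coeff_mean, coeff_sd = 2.0, 0.2            # model-coefficient uncertainty
raster_sd = 0.05                           # spatially invariant raster error

n = 200
u = latin_hypercube(n, 2)
coeff = coeff_mean + coeff_sd * ppf(u[:, 0])
raster_err = raster_sd * ppf(u[:, 1])      # one error offset per realization

# Per-cell distribution of model output across the LHS realizations.
outputs = coeff[:, None, None] * (raster[None] + raster_err[:, None, None])
print("per-cell output std (mean over cells):", outputs.std(axis=0).mean())
```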

  7. Symbol error rate bound of DPSK modulation system in directional wave propagation

    NASA Astrophysics Data System (ADS)

    Hua, Jingyu; Zhuang, Changfei; Zhao, Xiaomin; Li, Gang; Meng, Qingmin

    This paper presents a new approach to determine the symbol error rate (SER) bound of differential phase shift keying (DPSK) systems in a directional fading channel, where the von Mises distribution is used to illustrate the non-isotropic angle of arrival (AOA). Our approach relies on the closed-form expression of the phase difference probability density function (pdf) in coherent fading channels and leads to expressions of the DPSK SER bound involving a single finite-range integral which can be readily evaluated numerically. Moreover, the simulation yields results consistent with numerical computation.

  8. Propagation of radiation in fluctuating multiscale plasmas. II. Kinetic simulations

    SciTech Connect

    Pal Singh, Kunwar; Robinson, P. A.; Cairns, Iver H.; Tyshetskiy, Yu.

    2012-11-15

    A numerical algorithm is developed and tested that implements the kinetic treatment of electromagnetic radiation propagating through plasmas whose properties have small scale fluctuations, which was developed in a companion paper. This method incorporates the effects of refraction, damping, mode structure, and other aspects of large-scale propagation of electromagnetic waves on the distribution function of quanta in position and wave vector, with small-scale effects of nonuniformities, including scattering and mode conversion approximated as causing drift and diffusion in wave vector. Numerical solution of the kinetic equation yields the distribution function of radiation quanta in space, time, and wave vector. Simulations verify the convergence, accuracy, and speed of the methods used to treat each term in the equation. The simulations also illustrate the main physical effects and place the results in a form that can be used in future applications.

  9. A probabilistic approach to quantify the impact of uncertainty propagation in musculoskeletal simulations.

    PubMed

    Myers, Casey A; Laz, Peter J; Shelburne, Kevin B; Davidson, Bradley S

    2015-05-01

    Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5-95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535

  10. A Probabilistic Approach to Quantify the Impact of Uncertainty Propagation in Musculoskeletal Simulations

    PubMed Central

    Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.

    2015-01-01

    Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535

  12. The Communication Link and Error ANalysis (CLEAN) simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.; Crowe, Shane

    1993-01-01

    During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
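
    As a hedged illustration of the "channels with memory" idea mentioned above (not CLEAN's implementation), the sketch below simulates a two-state Markov, Gilbert-Elliott style, burst-error channel with invented transition probabilities and error rates.

```python
import random

random.seed(4)

# Two-state Markov channel: a "good" state with a low bit-error probability
# and a "bad" state with a high one. All values are invented placeholders.
P_GOOD_TO_BAD, P_BAD_TO_GOOD = 0.01, 0.2
BER_GOOD, BER_BAD = 1e-4, 5e-2

def transmit(bits):
    state = "good"
    received = []
    for b in bits:
        ber = BER_GOOD if state == "good" else BER_BAD
        received.append(b ^ (random.random() < ber))   # flip bit with prob. ber
        # State transition: errors arrive in bursts while in the bad state.
        if state == "good" and random.random() < P_GOOD_TO_BAD:
            state = "bad"
        elif state == "bad" and random.random() < P_BAD_TO_GOOD:
            state = "good"
    return received

bits = [random.getrandbits(1) for _ in range(100_000)]
rx = transmit(bits)
errors = sum(a != b for a, b in zip(bits, rx))
print(f"simulated BER with memory: {errors / len(bits):.2e}")
```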

  13. A wideband propagation simulator for high speed mobile radio communications

    NASA Astrophysics Data System (ADS)

    Busson, P.; Lejannic, J. C.; Elzein, G.; Citerne, J.

    1994-07-01

    Multipath, jamming, listening and detection are the main limitations for mobile radio communications. Spread spectrum techniques, especially frequency hopping, can be used to avoid these problems. Therefore, a wideband simulation for multipath mobile channels appeared to be the most appropriate evaluation technique. It also gives useful indications for system characteristic improvements. This paper presents the design and realization of a new UHF-VHF propagation simulator, which can be considered an extended version of Bussgang's simulator. This frequency hopping simulator (up to 100,000 hops per second) is wideband and thus capable of dealing with spread spectrum signals. As it generates up to 16 paths, it can be used in almost all mobile radio propagation situations. Moreover, it is also able to simulate high mobile relative speeds, up to 2000 km/h, such as in air-to-air communication systems. This simulator can reproduce, in the laboratory, 16-ray Rician or Rayleigh fading channels with a maximum time delay of about 15 ms. At the highest frequency of 1200 MHz, Doppler rates up to 2 kHz can be generated, corresponding to vehicle speeds up to 2000 km/h. Note that the Bussgang simulator was defined for narrowband and fixed radio communications. In both equipments, in-phase and quadrature signals are obtained using two numerical transversal filters. Simulation results were derived in various situations, especially in terrestrial urban and suburban environments, where they could be compared with measurements. The main advantage of the simulator lies in its capacity to simulate high speed and wideband mobile radio communication channels.

  14. An error model for GCM precipitation and temperature simulations

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Woldemeskel, F.; Mehrotra, R.; Sivakumar, B.

    2012-04-01

    Water resources assessments for future climates require meaningful simulations of likely precipitation and evaporation for simulation of flow and derived quantities of interest. The current approach for making such assessments involves using simulations from one or a handful of General Circulation Models (GCMs), for usually one assumed future greenhouse gas emission scenario, deriving associated flows and the planning or design attributes required, and using these as the basis of any planning or design that is needed. An assumption that is implicit in this approach is that the single or multiple simulations being considered are representative of what is likely to occur in the future. Is this a reasonable assumption to make and use in designing future water resources infrastructure? Is the uncertainty in the simulations captured through this process a real reflection of the likely uncertainty, even though a handful of GCMs are considered? Can one, instead, develop a measure of this uncertainty for a given GCM simulation for all variables in space and time, and use this information as the basis of water resources planning (similar to using "input uncertainty" in rainfall-runoff modelling)? These are some of the questions we address in course of this presentation. We present here a new basis for assigning a measure of uncertainty to GCM simulations of precipitation and temperature. Unlike other alternatives which assess overall GCM uncertainty, our approach leads to a unique measure of uncertainty in the variable of interest for each simulated value in space and time. We refer to this as an error model of GCM precipitation and temperature simulations, to allow a complete assessment of the merits or demerits associated with future infrastructure options being considered, or mitigation plans being devised. The presented error model quantifies the error variance of GCM monthly precipitation and temperature, and reports it as the Square Root Error Variance (SREV

  15. Code System for NE-213 Unfolding of Neutron Spectra up to 100 MeV with Response Function Error Propagation.

    Energy Science and Technology Software Center (ESTSC)

    1987-09-30

    Version 00 The REFERDOU system can be used to calculate the response function of a NE-213 scintillation detector for energies up to 100 MeV, to interpolate and spread (Gaussian) the response function, and unfold the measured spectrum of neutrons while propagating errors from the response functions to the unfolded spectrum.

  16. Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices

    PubMed Central

    Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee

    2015-01-01

    In transmitting video data securely over Video Sensor Networks (VSNs), since mobile handheld devices have limited resources in terms of processor clock speed and battery size, it is necessary to develop an efficient method to encrypt video data to meet the increasing demand for secure connections. Selective encryption methods can reduce the amount of computation needed while satisfying high-level security requirements. This is achieved by selecting an important part of the video data and encrypting it. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed new selective encryption method exploits the error propagation property in an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame encryption, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068

  17. Correction of Discretization Errors Simulated at Supply Wells.

    PubMed

    MacMillan, Gordon J; Schumacher, Jens

    2015-01-01

    Many hydrogeology problems require predictions of hydraulic heads in a supply well. In most cases, the regional hydraulic response to groundwater withdrawal is best approximated using a numerical model; however, simulated hydraulic heads at supply wells are subject to errors associated with model discretization and well loss. An approach for correcting the simulated head at a pumping node is described here. The approach corrects for errors associated with model discretization and can incorporate the user's knowledge of well loss. The approach is model independent, can be applied to finite difference or finite element models, and allows the numerical model to remain somewhat coarsely discretized and therefore numerically efficient. Because the correction is implemented external to the numerical model, one important benefit of this approach is that a response matrix, reduced model approach can be supported even when nonlinear well loss is considered. PMID:25142180
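
    The paper's specific correction is not reproduced here; as an illustration of correcting a coarse-grid simulated head at a pumping node, the sketch below applies a classic Peaceman-type equivalent-radius relation with an optional power-law well-loss term, using placeholder values.

```python
import math

def corrected_well_head(h_cell, Q, T, dx, rw, C_wellloss=0.0, n=2.0):
    """Correct a simulated cell head to an approximate head in the well.

    Uses the classic Peaceman relation (r_eq ~ 0.208 * dx for a square,
    isotropic finite-difference cell) plus an optional power-law well-loss
    term C*Q**n. This is an illustration, not the paper's specific method.
    """
    r_eq = 0.208 * dx
    dd_discretization = Q / (2.0 * math.pi * T) * math.log(r_eq / rw)
    dd_wellloss = C_wellloss * Q ** n
    return h_cell - dd_discretization - dd_wellloss

# Placeholder values: Q in m^3/d, transmissivity T in m^2/d, heads in m.
print(corrected_well_head(h_cell=95.0, Q=1000.0, T=500.0, dx=100.0, rw=0.15))
```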

  18. Starlight emergence angle error analysis of star simulator

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Zhang, Guo-yu

    2015-10-01

    With the continuous development of the key technologies of star sensors, the precision of star simulators must be further improved, since it directly affects the accuracy of star sensor laboratory calibration. To improve the accuracy of the star simulator, a theoretical accuracy analysis model needs to be proposed. Based on the ideal imaging model of the star simulator, such a theoretical accuracy analysis model can be established. Analysis of this model shows that the starlight emergent angle deviation is primarily affected by star position deviation, principal point position deviation, focal length deviation, distortion deviation and object plane tilt deviation. Based on the above factors, a comprehensive deviation model can be established, and the formula for each factor's deviation model and for the comprehensive deviation model can be derived. By analyzing the properties of the individual deviation models and of the comprehensive model, the characteristics of each factor and the weight relationships among them can be concluded. According to the results of this analysis, reasonable design indexes can be given, taking into account the star simulator optical system requirements and the precision of machining and adjustment. Starlight emergence angle error analysis is therefore significant for guiding the determination and demonstration of the star simulator's indexes, for analyzing and compensating its errors so as to improve its accuracy, and for establishing a theoretical basis for further improving the starlight angle precision of the star simulator.

  19. A new version of the CADNA library for estimating round-off error propagation in Fortran programs

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc

    2010-11-01

    The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
    New version program summary:
    Program title: CADNA
    Catalogue identifier: AEAT_v1_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 28 488
    No. of bytes in distributed program, including test data, etc.: 463 778
    Distribution format: tar.gz
    Programming language: Fortran (NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0)
    Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
    Operating system: LINUX, UNIX
    Classification: 6.5
    Catalogue identifier of previous version: AEAT_v1_0
    Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
    Does the new version supersede the previous version?: Yes
    Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round

  20. Numerical simulation of shock wave propagation in flows

    NASA Astrophysics Data System (ADS)

    Rénier, Mathieu; Marchiano, Régis; Gaudard, Eric; Gallin, Louis-Jonardan; Coulouvrat, François

    2012-09-01

    Acoustical shock waves propagate through flows in many situations. The sonic boom produced by a supersonic aircraft influenced by winds, or the so-called Buzz-Saw-Noise produced by turbo-engine fan blades when rotating at supersonic speeds, are two examples of such a phenomenon. In this work, an original method called FLHOWARD, acronym for FLow and Heterogeneous One-Way Approximation for Resolution of Diffraction, is presented. It relies on a scalar nonlinear wave equation, which takes into account propagation in a privileged direction (one-way approach), with diffraction, flow, heterogeneous and nonlinear effects. Theoretical comparison of the dispersion relations between that equation and parabolic equations (standard or wide angle) shows that this approach is more precise than the parabolic approach because there are no restrictions about the angle of propagation. A numerical procedure based on the standard split-step technique is used. It consists in splitting the nonlinear wave equation into simpler equations. Each of these equations is solved thanks to an analytical solution when it is possible, and a finite differences scheme in other cases. The advancement along the propagation direction is done with an implicit scheme. The validity of that numerical procedure is assessed by comparisons with analytical solutions of the Lilley's equation in waveguides for uniform or shear flows in linear regime. Attention is paid to the advantages and drawbacks of that method. Finally, the numerical code is used to simulate the propagation of sonic boom through a piece of atmosphere with flows and heterogeneities. The effects of the various parameters are analysed.
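
    A hedged, one-dimensional illustration of the split-step idea described above (not the FLHOWARD one-way equation) is sketched below: the linear part is advanced exactly in Fourier space and a simple cubic nonlinearity is advanced analytically between half steps.

```python
import numpy as np

# Toy split-step Fourier solver for u_t = i*beta*u_xx + i*gamma*|u|^2*u,
# a stand-in for the split-step technique; not the FLHOWARD equations.
nx, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)
beta, gamma, dt, steps = 1.0, 1.0, 1e-3, 2000

u = np.exp(-x**2).astype(complex)      # initial pulse (placeholder)
for _ in range(steps):
    # nonlinear half step: exact phase rotation
    u *= np.exp(1j * gamma * np.abs(u)**2 * dt / 2)
    # linear full step: exact in Fourier space
    u = np.fft.ifft(np.exp(-1j * beta * k**2 * dt) * np.fft.fft(u))
    # nonlinear half step
    u *= np.exp(1j * gamma * np.abs(u)**2 * dt / 2)

print("pulse energy (should be conserved):", np.sum(np.abs(u)**2) * (L / nx))
```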

  1. Hybrid simulation of wave propagation in the Io plasma torus

    NASA Astrophysics Data System (ADS)

    Stauffer, B. H.; Delamere, P. A.; Damiano, P. A.

    2015-12-01

    The transmission of waves between Jupiter and Io is an excellent case study of magnetosphere/ionosphere (MI) coupling because the power generated by the interaction at Io and the auroral power emitted at Jupiter can be reasonably estimated. Wave formation begins with mass loading as Io passes through the plasma torus. A ring beam distribution of pickup ions and perturbation of the local flow by the conducting satellite generate electromagnetic ion cyclotron waves and Alfven waves. We investigate wave propagation through the torus and to higher latitudes using a hybrid plasma simulation with a physically realistic density gradient, assessing the transmission of Poynting flux and wave dispersion. We also analyze the propagation of kinetic Alfven waves through a density gradient in two dimensions.

  2. Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation

    NASA Astrophysics Data System (ADS)

    Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla

    2014-07-01

    Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method that was programmed by means of a MATLAB code, called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society of Rock Mechanics (ISRM) scanline field mapping methodology. Then, it evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass using a complex recursive method for evaluating the transmission and reflection coefficient for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed characterizing the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in estimating rapidly the Young's modulus of equivalent medium for wave propagation analysis.

  3. Time-dependent simulations of filament propagation in photoconducting switches

    SciTech Connect

    Rambo, P.W.; Lawson, W.S.; Capps, C.D.; Falk, R.A.

    1994-05-01

    The authors present a model for investigating filamentary structures observed in laser-triggered photoswitches. The model simulates electrons and holes in two-dimensional cylindrical (r-z) geometry, with realistic electron and hole mobilities and field dependent impact ionization. Because of the large range of spatial and temporal scales to be resolved, they use an explicit approach with fast, direct solution of the field equation. A flux limiting scheme is employed to avoid the time-step constraint due to the short time for resistive relaxation in the high density filament. Self-consistent filament propagation with speeds greater than the carrier drift velocity is observed, in agreement with experiments.

  4. Shock Propagation in Dusty Plasmas by MD Simulations

    NASA Astrophysics Data System (ADS)

    Marciante, Mathieu; Murillo, Michael

    2014-10-01

    The study of shock propagation has become a common way to obtain statistical information on a medium, as one can relate properties of the undisturbed medium to the shock dynamics through the Rankine-Hugoniot (R-H) relations. However, theoretical investigations of shock dynamics are often done through idealized fluid models, which mainly neglect kinetic properties of the medium constituents. Motivated by recent experimental results, we use molecular dynamics simulations to study the propagation of shocks in 2D dusty plasmas, focusing our attention on the influence of kinetic aspects of the plasma, such as viscosity effects. This study is undertaken in two ways. On one side, the shock wave is generated by an external electric field acting on the dust particles, giving rise to a shock wave as obtained in a laboratory experiment. On the other side, we generate a shock wave by the displacement of a two-dimensional piston at constant velocity, which allows a steady-state shock wave to be obtained. Experiment-like shock waves propagate in a highly non-steady state, which calls for careful application of the R-H relations in the context of non-steady shocks. Steady-state shock waves show an oscillatory pattern attributed to the dominating dispersive effect of the dusty plasma.
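
    For reference, the textbook ideal-gas Rankine-Hugoniot jump relations alluded to above are sketched below; the 2-D dusty-plasma case involves additional physics (viscosity, dust charging) that this small helper does not capture.

```python
def rankine_hugoniot(M1, gamma=5.0 / 3.0):
    """Ideal-gas R-H jumps across a normal shock of upstream Mach number M1.

    Returns (density ratio, pressure ratio, temperature ratio). This is the
    textbook form, not a dusty-plasma model.
    """
    rho = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    p = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    return rho, p, p / rho

print(rankine_hugoniot(2.0))   # e.g. density ratio ~2.29, pressure ratio ~4.75
```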

  5. Unraveling the uncertainty and error propagation in the vertical flux Martin curve

    NASA Astrophysics Data System (ADS)

    Olli, Kalle

    2015-06-01

    Analyzing the vertical particle flux and particle retention in the upper twilight zone has commonly been accomplished by fitting a power function to the data. Measuring the vertical particle flux in the upper twilight zone, where most of the re-mineralization occurs, is a complex endeavor. Here I use field data and simulations to show how uncertainty in the particle flux measurements propagates into the vertical flux attenuation model parameters. Further, I analyze how the number of sampling depths and variations in the vertical sampling locations influence the model performance and parameter stability. The arguments provide a simple framework to optimize the sampling scheme when vertical flux attenuation profiles are measured in the field, either by using an array of sediment traps or 234Th methodology. A compromise between effort and quality of results is to sample at least six depths: the upper sampling depth as close to the base of the euphotic layer as feasible, the vertical sampling depths slightly aggregated toward the upper aphotic zone where most of the vertical flux attenuation takes place, and the lower end of the sampling range extending as deep as practicable in the twilight zone.
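
    A hedged sketch of the kind of analysis described is given below: a Martin-curve exponent is fitted by log-log regression to six simulated trap depths and the flux measurement error is propagated by Monte Carlo; the depths, fluxes and error level are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented "true" Martin curve: F(z) = F0 * (z / z0)**(-b)
z0, F0, b_true = 100.0, 100.0, 0.86
depths = np.array([100.0, 150.0, 200.0, 300.0, 500.0, 1000.0])  # six traps, m
rel_err = 0.20                                  # ~20% flux measurement error

def fit_b(fluxes):
    """Attenuation exponent from a log-log linear fit."""
    slope, _ = np.polyfit(np.log(depths / z0), np.log(fluxes), 1)
    return -slope

true_flux = F0 * (depths / z0) ** (-b_true)
b_est = [fit_b(true_flux * rng.lognormal(0.0, rel_err, size=depths.size))
         for _ in range(2000)]
print(f"b = {np.mean(b_est):.2f} +/- {np.std(b_est):.2f} "
      f"(true {b_true}, {rel_err:.0%} flux error, 6 depths)")
```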

  6. A simulation of high energy cosmic ray propagation 1

    NASA Technical Reports Server (NTRS)

    Honda, M.; Kifune, T.; Matsubara, Y.; Mori, M.; Nishijima, K.; Teshima, M.

    1985-01-01

    High energy cosmic ray propagation in the energy region 10^14.5 - 10^18 eV is simulated in interstellar conditions. In conclusion, the diffusion process by turbulent magnetic fields is classified into several regimes by the ratio of the gyro-radius to the scale of the turbulence. When the ratio becomes larger than 10^-0.5, the analysis with the assumption of point scattering can be applied, with a mean free path proportional to E^2. However, when the ratio is smaller than 10^-0.5, a more complicated analysis or simulation is needed. Assuming the turbulence scale of the magnetic fields of the Galaxy is 10-30 pc and the mean magnetic field strength is 3 microgauss, the energy of a cosmic ray with that gyro-radius is about 10^16.5 eV.
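
    A quick consistency check of the quoted energy scale (not taken from the paper itself) can be made with the relativistic gyro-radius r_g = E / (Z e B c), evaluated here for B = 3 microgauss:

```python
import numpy as np

e  = 1.602e-19      # elementary charge, C
c  = 2.998e8        # speed of light, m/s
pc = 3.086e16       # parsec, m
B  = 3e-10          # 3 microgauss in tesla

def gyro_radius_pc(E_eV, Z=1):
    """Relativistic gyro-radius r_g = E / (Z e B c), returned in parsec."""
    E_J = E_eV * e
    return E_J / (Z * e * B * c) / pc

# Protons at 10^15, 10^16.5 and 10^18 eV; ~10^16.5 eV indeed gives r_g of order 10 pc
for E in (1e15, 10**16.5, 1e18):
    print(f"E = {E:.2e} eV -> r_g = {gyro_radius_pc(E):.1f} pc")
```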

  7. Seismic Wave Propagation Simulation using Circular Hough Transform

    NASA Astrophysics Data System (ADS)

    Miah, K.; Potter, D. K.

    2012-12-01

    Synthetic data generation by numerically solving a two-way wave equation is an essential part of seismic tomography, especially in full-waveform inversion. Finite-difference and finite-element methods are the two common approaches to seismic wave propagation modeling in heterogeneous media. Either a time- or frequency-domain representation of the wave equation is used for these simulations. Hanahara and Hiyane [1] proposed and implemented a circle-detection algorithm based on the Circular Hough transform (CHT) to numerically solve a two-dimensional wave equation. The Hough transform is generally used in image processing applications to identify objects of various shapes in an image [2]. In this abstract, we use the Circular Hough transform to numerically solve an acoustic wave equation, with the purpose of identifying and locating primaries and multiples in the transform domain. Relationships between different seismic events and the CHT parameter are also investigated. [1] Hanahara, K. and Hiyane, M., A Circle-Detection Algorithm Simulating Wave Propagation, Machine Vision and Applications, vol. 3, pp. 97-111, 1990. [2] Petcher, P. A. and Dixon, S., A modified Hough transform for removal of direct and reflected surface waves from B-scans, NDT & E International, vol. 44, no. 2, pp. 139-144, 2011.

  8. Numerical simulation of premixed flame propagation in a closed tube

    NASA Astrophysics Data System (ADS)

    Kuzuu, Kazuto; Ishii, Katsuya; Kuwahara, Kunio

    1996-08-01

    Premixed flame propagation of a methane-air mixture in a closed tube is estimated through a direct numerical simulation of the three-dimensional unsteady Navier-Stokes equations coupled with chemical reaction. In order to deal with a combusting flow, an extended version of the MAC method, which can be applied to a compressible flow with strong density variation, is employed as the numerical method. The chemical reaction is assumed to be an irreversible single-step reaction between methane and oxygen. The chemical species are CH4, O2, N2, CO2, and H2O. In this simulation, we reproduce the formation of a tulip flame in a closed tube during the flame propagation. Furthermore, we estimate not only the two-dimensional shape but also the three-dimensional structure of the flame and flame-induced vortices, which cannot be observed in the experiments. The agreement between the calculated results and the experimental data is satisfactory, and we compare the phenomenon near the side wall with the one in the corner of the tube.

  9. Simulation of 3D Seismic Wave Propagation with Volcano Topography

    NASA Astrophysics Data System (ADS)

    Ripperger, J.; Igel, H.; Wassermann, J.

    2001-12-01

    We investigate the possibilities of using three-dimensional finite difference (FD) methods for numerical simulation of the seismic wave field at active volcanoes. We put special emphasis on the implementation of the boundary conditions for free surface topography. We compare two different approaches to solving the free surface boundary conditions. The algorithms are implemented on parallel hardware and have been tested for correctness and stability. We apply them to smooth artificial topographies and to the real topography of Mount Merapi, Indonesia. We conclude that grid-stretching methods (e.g. Hestholm & Ruud, 1994) are not well suited for realistic volcano topography as they tend to become unstable for large topographic gradients. The representation of topography through staircase-shaped grids (Ohminato & Chouet, 1997) results in stable calculations, while demanding very fine gridding. The simulations show the effects of a three-dimensional surface topography on elastic wave propagation. Ground motion at the surface is severely affected by topography. If neglected, this may jeopardize attempts to determine source location by analyzing particle motion. Numerical studies like this can help to understand wave propagation phenomena observed in field recordings in volcano seismology. Future studies will aim at separating the wave effects of internal scattering, topography and sources (tremors, tectonic events, pyroclastic flows).

  10. Propagation of spectral characterization errors of imaging spectrometers at level-1 and its correction within a level-2 recalibration scheme

    NASA Astrophysics Data System (ADS)

    Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose

    2015-09-01

    The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels, and the bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the in-flight instrument configuration can differ from the one characterized on the ground because of the harsh space environment and the stresses of the launch phase. Retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and to incorrect characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and propagate to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of the ISRF knowledge error and spectral calibration error at Level-1 and its propagation to Level-2 retrieved chlorophyll fluorescence has been analyzed. A spectral recalibration scheme implemented at Level-2 reduces the impact of the Level-1 errors to below 10% error in retrieved fluorescence within the oxygen absorption bands, enhancing the quality of the retrieved products. The work presented here shows how the minimization of spectral calibration errors requires an effort both in the laboratory characterization and in the implementation of specific algorithms at Level-2.
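
    The sketch below illustrates the basic mechanism: a high-resolution spectrum containing a narrow absorption line is convolved with a Gaussian ISRF, once with the nominal barycenter and once with a small spectral shift, and the resulting channel-level error is reported. The wavelengths, line depth, channel layout and 20 pm shift are illustrative assumptions, not the FLEX instrument parameters.

```python
import numpy as np

# Hypothetical high-resolution spectrum with a narrow absorption line near 760 nm
wl_hr = np.linspace(755.0, 765.0, 4001)                  # nm, 2.5 pm sampling
radiance = 1.0 - 0.8 * np.exp(-0.5 * ((wl_hr - 760.6) / 0.05) ** 2)

def gaussian_isrf(wl, center, fwhm):
    sigma = fwhm / 2.3548
    g = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return g / g.sum()

def convolve_to_channels(wl_hr, spec, centers, fwhm, shift=0.0):
    """Simulate instrument channels: integrate the spectrum against a Gaussian ISRF
    whose barycenter is offset by `shift` (a spectral calibration error)."""
    return np.array([np.sum(spec * gaussian_isrf(wl_hr, c + shift, fwhm)) for c in centers])

channels = np.arange(756.0, 764.0, 0.3)                  # assumed 0.3 nm channel spacing
nominal = convolve_to_channels(wl_hr, radiance, channels, fwhm=0.3)
shifted = convolve_to_channels(wl_hr, radiance, channels, fwhm=0.3, shift=0.02)  # 20 pm shift

print("max relative channel error from a 20 pm barycenter shift:",
      f"{np.max(np.abs(shifted - nominal) / nominal):.3%}")
```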

  11. Monte Carlo simulation of light propagation in the adult brain

    NASA Astrophysics Data System (ADS)

    Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter

    2004-06-01

    When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) with a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the light absorption and dispersion coefficients of the material in each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was furthermore increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extracerebral contamination are included.
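
    The sketch below shows the principle behind a differential pathlength factor estimate with a minimal photon Monte Carlo: a homogeneous semi-infinite medium with isotropic scattering and illustrative optical coefficients, not the voxel-based, MRI-driven simulation of the paper. The 4.5 cm detector separation mirrors the one quoted above; the ratio of the absorption-weighted mean path length to that separation is the DPF. The run takes tens of seconds in pure Python; increase the photon count for smoother statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_pathlength(mu_s=10.0, mu_a=0.15, rho_det=4.5, dr=0.5, n_photons=50_000):
    """Minimal Monte Carlo in a homogeneous semi-infinite medium (units: 1/cm, cm).
    Isotropic scattering; absorption is handled by weighting each detected photon
    with exp(-mu_a * L). Returns (mean detected path length, DPF)."""
    lengths, weights = [], []
    for _ in range(n_photons):
        pos = np.zeros(3)
        direc = np.array([0.0, 0.0, 1.0])       # launched straight into the tissue
        L = 0.0
        for _ in range(10_000):                 # hard cap on scattering events
            step = -np.log(rng.random()) / mu_s
            new_pos = pos + step * direc
            if new_pos[2] < 0.0:                # photon re-emerges at the surface z = 0
                frac = pos[2] / (pos[2] - new_pos[2])
                exit_pos = pos + frac * step * direc
                L += frac * step
                r = np.hypot(exit_pos[0], exit_pos[1])
                if abs(r - rho_det) < dr / 2:   # "detector" annulus at separation rho_det
                    lengths.append(L)
                    weights.append(np.exp(-mu_a * L))
                break
            pos, L = new_pos, L + step
            # sample a new isotropic scattering direction
            cos_t = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            direc = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    mean_L = np.average(lengths, weights=weights)
    return mean_L, mean_L / rho_det

L_mean, dpf = mc_pathlength()
print(f"<L> = {L_mean:.1f} cm, DPF = {dpf:.1f}")
```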

  12. Supporting the Development of Soft-Error Resilient Message Passing Applications using Simulation

    SciTech Connect

    Engelmann, Christian; Naughton III, Thomas J

    2016-01-01

    Radiation-induced bit flip faults are of particular concern in extreme-scale high-performance computing systems. This paper presents a simulation-based tool that enables the development of soft-error resilient message passing applications by permitting the investigation of their correctness and performance under various fault conditions. The documented extensions to the Extreme-scale Simulator (xSim) enable the injection of bit flip faults at specific injection location(s) and fault activation time(s), while supporting a significant degree of configurability of the fault type. Experiments show that the simulation overhead with the new feature is ~2,325% for serial execution and ~1,730% at 128 MPI processes, both with very fine-grain fault injection. Fault injection experiments demonstrate the usefulness of the new feature by injecting bit flips in the input and output matrices of a matrix-matrix multiply application, revealing the vulnerability of data structures, masking, and error propagation. xSim is the first simulation-based MPI performance tool that supports both the injection of process failures and bit flip faults.
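
    The idea of the matrix-multiply experiment can be reproduced in miniature, outside xSim, with the sketch below: a single bit of one input element is flipped and the corruption of the output is measured. The chosen matrix size, element index and bit positions are arbitrary illustrations; a low mantissa bit is typically masked by the working tolerance, while an exponent bit propagates across a whole output row.

```python
import numpy as np

rng = np.random.default_rng(42)

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of a float64 value (bits 0-51: mantissa, 52-62: exponent, 63: sign)."""
    as_int = np.array(x, dtype=np.float64).view(np.uint64)
    flipped = as_int ^ np.uint64(1 << bit)
    return float(flipped.view(np.float64))

n = 64
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C_ref = A @ B

# Inject a single upset into one input element, at two different bit positions
for bit in (2, 52):                       # low mantissa bit vs. lowest exponent bit
    A_faulty = A.copy()
    A_faulty[10, 20] = flip_bit(A_faulty[10, 20], bit)
    C = A_faulty @ B
    err = np.abs(C - C_ref)
    print(f"bit {bit:2d}: changed outputs = {(err != 0).sum():3d}, "
          f"max |error| = {err.max():.3e}")
```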

  13. Molecular Optical Simulation Environment (MOSE): A Platform for the Simulation of Light Propagation in Turbid Media

    PubMed Central

    Ren, Shenghan; Chen, Xueli; Wang, Hailong; Qu, Xiaochao; Wang, Ge; Liang, Jimin; Tian, Jie

    2013-01-01

    The study of light propagation in turbid media has attracted extensive attention in the field of biomedical optical molecular imaging. In this paper, we present a software platform for the simulation of light propagation in turbid media named the “Molecular Optical Simulation Environment (MOSE)”. Based on the gold standard of the Monte Carlo method, MOSE simulates light propagation both in tissues with complicated structures and through free-space. In particular, MOSE synthesizes realistic data for bioluminescence tomography (BLT), fluorescence molecular tomography (FMT), and diffuse optical tomography (DOT). The user-friendly interface and powerful visualization tools facilitate data analysis and system evaluation. As a major measure for resource sharing and reproducible research, MOSE aims to provide freeware for research and educational institutions, which can be downloaded at http://www.mosetm.net. PMID:23577215

  14. Simulation of intense microwave pulse propagation in air breakdown environment

    NASA Technical Reports Server (NTRS)

    Kuo, S. P.; Zhang, Y. S.

    1991-01-01

    An experiment is conducted to examine the tail erosion phenomenon which occurs when an intense microwave pulse propagates in an air breakdown environment. In the experiment, a 1 MW microwave pulse (1.1 microsec) is transmitted through a large plexiglas chamber filled with dry air at about 1-2 torr pressure. Two different degrees of tail erosion caused by two different mechanisms are identified. This experimental effort leads to an understanding of the fundamental behavior of tail erosion and provides a data base for validating the theoretical model. A theoretical model based on two coupled partial differential equations is established to describe the propagation of an intense microwave pulse in an air breakdown environment. One equation is derived from the Poynting theorem, and the other is the rate equation of the electron density. A semi-empirical formula for the ionization frequency is adopted for this model. A transformation of these two equations to a local time frame of reference is introduced so that they can be solved numerically with considerably reduced computation time. This model is tested by using it to perform a computer simulation of the experiment. The numerical results are shown to agree well with the experimental results.

  15. Reducing On-Board Computer Propagation Errors Due to Omitted Geopotential Terms by Judicious Selection of Uploaded State Vector

    NASA Technical Reports Server (NTRS)

    Greatorex, Scott (Editor); Beckman, Mark

    1996-01-01

    Several future, and some current, missions use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would

  16. Error propagation equations and tables for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1993-08-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
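
    For context, the generic first-order (Taylor-series) error propagation formula underlying such tables is shown below for a derived quantity R of independent measured inputs; this is the standard textbook form, not an equation copied from the report.

```latex
% First-order propagation of independent measurement errors into R = R(x_1, \dots, x_n):
\sigma_R^2 \;\approx\; \sum_{i=1}^{n}\left(\frac{\partial R}{\partial x_i}\right)^{2}\sigma_{x_i}^{2},
\qquad
\frac{\sigma_R}{R} \;\approx\;
\left[\sum_{i=1}^{n}\left(\frac{x_i}{R}\,\frac{\partial R}{\partial x_i}\right)^{2}
\left(\frac{\sigma_{x_i}}{x_i}\right)^{2}\right]^{1/2},
```

    where (x_i/R)(∂R/∂x_i) plays the role of the relative sensitivity coefficient, analogous to (M∞/R)(∂R/∂M∞) in the tabulated results.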

  17. Simulations of ultra-high-energy cosmic rays propagation

    SciTech Connect

    Kalashev, O. E.; Kido, E.

    2015-05-15

    We compare two techniques for simulation of the propagation of ultra-high-energy cosmic rays (UHECR) in intergalactic space: the Monte Carlo approach and a method based on solving transport equations in one dimension. For the former, we adopt the publicly available tool CRPropa and for the latter, we use the code TransportCR, which has been developed by the first author, used in a number of applications, and made available online with the publication of this paper. While the CRPropa code is more universal, the transport equation solver has the advantage of a roughly 100 times higher calculation speed. We conclude that the methods give practically identical results for proton or neutron primaries if some accuracy improvements are introduced to the CRPropa code.

  18. Multiscale simulation of 2D elastic wave propagation

    NASA Astrophysics Data System (ADS)

    Zhang, Wensheng; Zheng, Hui

    2016-06-01

    In this paper, we develop a multiscale method for the simulation of elastic wave propagation. Based on the first-order velocity-stress hyperbolic form of the 2D elastic wave equation, the particle velocities are solved first on a coarse grid by the finite volume method. Then the stress tensor is solved by using multiscale basis functions which can represent the fine-scale variation of the wavefield on the coarse grid. The basis functions are computed by solving a local problem with the finite element method. The theoretical formulae and a description of the multiscale method for the elastic wave equation are given in detail. Numerical computations for an inhomogeneous model with random scatter are completed. The results show the effectiveness of the multiscale method.

  19. Simulation of Crack Propagation in Metal Powder Compaction

    NASA Astrophysics Data System (ADS)

    Tahir, S. M.; Ariffin, A. K.

    2006-08-01

    This paper presents the fracture criterion of metal powder compact and simulation of the crack initiation and propagation during cold compaction process. Based on the fracture criterion of rock in compression, a displacement-based finite element model has been developed to analyze fracture initiation and crack growth in iron powder compact. Estimation of fracture toughness variation with relative density is established in order to provide the fracture parameter as compaction proceeds. A finite element model with adaptive remeshing technique is used to accommodate changes in geometry during the compaction and fracture process. Friction between crack faces is modelled using the six-node isoparametric interface elements. The shear stress and relative density distributions of the iron compact with predicted crack growth are presented, where the effects of different loading conditions are presented for comparison purposes.

  20. Simulation of seismic wave propagation for reconnaissance in machined tunnelling

    NASA Astrophysics Data System (ADS)

    Lambrecht, L.; Friederich, W.

    2012-04-01

    During machined tunnelling, there is a complex interaction chain between the components involved. For example, on the one hand the machine influences the surrounding ground during excavation; on the other hand, supporting measures acting on the ground are needed. Furthermore, the different soil conditions influence the wear of tools, the speed of the excavation and the safety of the construction site. In order to get information about the ground along the tunnel track, one can use seismic imaging. To get a better understanding of seismic wave propagation in a tunnel environment, we want to perform numerical simulations. For that, we use the spectral element method (SEM) and the nodal discontinuous Galerkin method (NDG). In both methods, elements are the basis for discretizing the domain of interest for performing high order elastodynamic simulations. The SEM is a fast and widely used method, but its biggest drawback is its limitation to hexahedral elements. For complex heterogeneous models with a tunnel included, it is a better choice to use the NDG, which needs more computation time but can be adapted to tetrahedral elements. Using this technique, we can perform high resolution simulations of waves initialized by a single force acting either on the front face or the side face of the tunnel. The aim is to produce waves that travel mainly in the direction of the tunnel track and to get as much information as possible from the backscattered part of the wave field.

  1. Topics in quantum cryptography, quantum error correction, and channel simulation

    NASA Astrophysics Data System (ADS)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. This formula provides a new protocol family, the private father protocol, under the resource inequality framework, which includes private classical communication without assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel

  2. The influence of observation errors on analysis error and forecast skill investigated with an observing system simulation experiment

    NASA Astrophysics Data System (ADS)

    Privé, N. C.; Errico, R. M.; Tai, K.-S.

    2013-06-01

    The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.

  3. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.

  4. Evaluation of color error and noise on simulated images

    NASA Astrophysics Data System (ADS)

    Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle

    2010-01-01

    The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted Quantum Efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation and signal-to-noise ratio (SNR). After this color correction optimization step, a Graphical User Interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known, a multi-spectral image described by the reflectance spectrum of each pixel, or an image taken at high light level. A validation of the results has been performed with STMicroelectronics sensors under development. Finally, we present two applications: one based on the trade-off between color saturation and noise when optimizing the CCM, and the other based on demosaicking SNR trade-offs.

  5. Error propagation for velocity and shear stress prediction using 2D models for environmental management

    NASA Astrophysics Data System (ADS)

    Pasternack, Gregory B.; Gilbert, Andrew T.; Wheaton, Joseph M.; Buckland, Evan M.

    2006-08-01

    Resource managers, scientists, government regulators, and stakeholders are considering sophisticated numerical models for managing complex environmental problems. In this study, observations from a river-rehabilitation experiment involving gravel augmentation and spawning habitat enhancement were used to assess sources and magnitudes of error in depth, velocity, and shear velocity predictions made at the 1-m scale with a commercial two-dimensional (depth-averaged) model. Error in 2D model depth prediction averaged 21%. This error was attributable to topographic survey resolution, which at 1 pt per 1.14 m2 was inadequate to resolve small humps and depressions influencing point measurements. Error in 2D model velocity prediction averaged 29%. More than half of this error was attributable to depth prediction error. Despite depth and velocity error, 56% of tested 2D model predictions of shear velocity were within the 95% confidence limit of the best field-based estimation method. Ninety percent of the error in shear velocity prediction was explained by velocity prediction error. Multiple field-based estimates of shear velocity differed by up to 160%, so the lower error of the 2D model's predictions suggests such models are at least as accurate as field measurement. 2D models enable detailed, spatially distributed estimates compared to the small number measurable in a field campaign of comparable cost. They can also be used for design evaluation. Although such numerical models are limited to channel types adhering to model assumptions and yield predictions only accurate to ~20-30%, they can provide a useful tool for river-rehabilitation design and assessment, including spatially diverse habitat heterogeneity, as well as for pre- and post-project appraisal.

  6. Comparison of Tropospheric Signal Delay Models for GNSS Error Simulation

    NASA Astrophysics Data System (ADS)

    Kim, Hye-In; Ha, Jihyun; Park, Kwan-Dong; Lee, Sanguk; Kim, Jaehoon

    2009-06-01

    As one of the GNSS error simulation case studies, we computed tropospheric signal delays based on three well-known models (Hopfield, Modified Hopfield and Saastamoinen) and a simple model. In the computation, default meteorological values were used. The result was compared with the GIPSY result, which we treated as truth. The RMS of the simple model with the Marini mapping function was the largest, 31.0 cm. For the other models, the average RMS was 5.2 cm. In addition, to quantify the influence of the accuracy of meteorological information on the signal delay, we performed a sensitivity analysis for pressure and temperature. As a result, all models used in this study were not very sensitive to pressure variations. Also, all models except for the Modified Hopfield model were not sensitive to temperature variations.
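
    As a point of reference, the sketch below evaluates one common simplified textbook form of the Saastamoinen model (the latitude/height correction terms are omitted) for a set of default meteorological values. The default pressure, temperature, humidity and the Magnus-type saturation-pressure formula are assumptions for illustration, not necessarily the defaults used in the paper.

```python
import numpy as np

def saastamoinen_delay(P=1013.25, T=288.15, rh=0.5, elev_deg=90.0):
    """Tropospheric delay (m) from a simplified Saastamoinen form.
    P: pressure [hPa], T: temperature [K], rh: relative humidity [0-1]."""
    t_c = T - 273.15
    e_s = 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))   # saturation vapour pressure [hPa]
    e = rh * e_s                                          # water vapour partial pressure [hPa]
    z = np.radians(90.0 - elev_deg)                       # zenith angle
    return 0.002277 / np.cos(z) * (P + (1255.0 / T + 0.05) * e - np.tan(z) ** 2)

# Zenith and low-elevation delays for the assumed default meteorological values
for elev in (90.0, 30.0, 15.0):
    print(f"elevation {elev:4.1f} deg -> delay = {saastamoinen_delay(elev_deg=elev):.3f} m")
```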

  7. Simulation of 3D Global Wave Propagation Through Geodynamic Models

    NASA Astrophysics Data System (ADS)

    Schuberth, B.; Piazzoni, A.; Bunge, H.; Igel, H.; Steinle-Neumann, G.

    2005-12-01

    This project aims at a better understanding of the forward problem of global 3D wave propagation. We use the spectral element program "SPECFEM3D" (Komatitsch and Tromp, 2002a,b) with varying input models of seismic velocities derived from mantle convection simulations (Bunge et al., 2002). The purpose of this approach is to obtain seismic velocity models independently from seismological studies. In this way one can test the effects of varying parameters of the mantle convection models on the seismic wave field. In order to obtain the seismic velocities from the temperature field of the geodynamical simulations we follow a mineral physics approach. Assuming a certain mantle composition (e.g. pyrolite with CMASF composition) we compute the stable phases for each depth (i.e. pressure) and temperature by Gibbs free energy minimization of the system. Elastic moduli and density are calculated from the equations of state of the stable mineral phases. For this we use a mineral physics database derived from calorimetric experiments (enthalpy and entropy of formation, heat capacity) and EOS parameters.

  8. Numerical Simulation of Time-Dependent Wave Propagation Using Nonreflective Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Ionescu, D.; Muehlhaus, H.

    2003-12-01

    Solving the wave equation numerically for modelling wave propagation on an unbounded domain with complex geometry requires a truncation of the domain, to fit the infinite region on a finite computer. Minimizing the amount of spurious reflections requires in many cases the introduction of an artificial boundary and of associated nonreflecting boundary conditions. Here, a question arises, namely which boundary condition guarantees that the solution of the time dependent problem inside the artificial boundary coincides with the solution of the original problem in the infinite region. Recent investigations have shown that the accuracy and performance of numerical algorithms and the interpretation of the results critically depend on the proper treatment of external boundaries. Despite the computational speed of finite difference schemes and the robustness of finite elements in handling complex geometries, the resulting numerical error consists of two independent contributions: the discretization error of the numerical method used and the spurious reflection generated at the artificial boundary. This spurious contribution travels back and substantially degrades the accuracy of the solution everywhere in the computational domain. Unless both error components are reduced systematically, the numerical solution does not converge to the solution of the original problem in the infinite region. In the present study we present and discuss absorbing boundary condition techniques for the time-dependent scalar wave equation in three spatial dimensions. In particular, exact conditions that annihilate wave harmonics on a spherical artificial boundary up to a given order are obtained and subsequently applied in numerical simulations by employing a finite difference implementation.

  9. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation simulation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method, but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results. An adaptive mesh selection method based on wave characteristics is therefore proposed together with the introduced propagation method, so that appropriate mesh grids on the target board can be calculated to obtain satisfactory results. For a complex initial wave field or propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to the above method. Finally, comparison with theoretical results shows that simulations with the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method also shows that the proposed method can adapt to a wider range of Fresnel number conditions; that is, it can simulate propagation results efficiently and correctly for propagation distances from almost zero to infinity. The method can therefore provide better support for wave propagation applications such as atmospheric optics, laser propagation and so on.
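
    For orientation, the sketch below implements the classic fixed-grid angular spectrum method that the paper takes as its starting point (the alterable-grid and adaptive mesh selection extensions are not reproduced here). The aperture size, wavelength and propagation distance are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 (N x N, pixel pitch dx) over a distance z
    with the classic angular spectrum transfer function; evanescent waves are suppressed."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0.0)        # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: propagate a circular aperture illuminated by a plane wave
n, dx, wavelength = 512, 10e-6, 632.8e-9          # 10 um pitch, HeNe wavelength (assumed)
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u0 = (np.hypot(X, Y) < 0.5e-3).astype(complex)    # 1 mm diameter aperture
u1 = angular_spectrum_propagate(u0, wavelength, dx, z=0.1)
print("on-axis intensity after 10 cm:", abs(u1[n // 2, n // 2]) ** 2)
```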

  10. Error propagation in hydrodynamics of lowland rivers due to uncertainty in vegetation roughness parameterization

    NASA Astrophysics Data System (ADS)

    Straatsma, Menno

    2010-05-01

    Accurate water level prediction for the design discharge of large rivers is of major importance for the flood safety of large embanked areas in The Netherlands. Within a larger framework of uncertainty assessment, this report focuses on the effect of uncertainty in roughness parameterization in a 2D hydrodynamic model. Two key elements are considered in this roughness parameterization: firstly, the manually classified ecotope map that provides the base data for roughness classes, and secondly, the lookup table that translates roughness classes to vegetation structural characteristics. The aim is to quantify the effects of these two error sources on the following hydrodynamic aspects: 1. the discharge distribution at the bifurcation points within the river Rhine; 2. peak water levels at a stationary discharge of 16000 m3/s. To assess the effect of the first error source, new realisations of ecotope maps were made based on the current ecotope map and an error matrix of the classification. Using these realisations of the ecotope maps, twelve successful model runs were carried out for the Rhine distributaries at design discharge. The classification error leads to a standard deviation of the water levels per river kilometer of 0.08, 0.05 and 0.10 m for the Upper Rhine-Waal, Pannerdensch Kanaal-Nederrijn-Lek and the IJssel river, respectively. The maximum range in water levels is 0.40, 0.40 and 0.57 m for these river sections, respectively. The largest effects are found in the IJssel river and the Pannerdensch Kanaal. For the second error source, the accuracy of the values in the lookup table, a compilation of 445 field measurements of vegetation structure was made. For each of the vegetation types, the minimum, 25-percentile, median, 75-percentile and maximum of vegetation height and density were computed. These five values were subsequently put in the lookup table that was used for the hydrodynamic model. The interquartile range in vegetation height and

  11. Dispersion analysis and linear error analysis capabilities of the space vehicle dynamics simulation program

    NASA Technical Reports Server (NTRS)

    Snow, L. S.; Kuhn, A. E.

    1975-01-01

    Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.

  12. How to measure propagation velocity in cardiac tissue: a simulation study

    PubMed Central

    Linnenbank, Andre C.; de Bakker, Jacques M. T.; Coronel, Ruben

    2014-01-01

    To estimate conduction velocities from activation times in myocardial tissue, the “average vector” method computes all the local activation directions and velocities from local activation times and estimates the fastest and slowest propagation speed from these local values. The “single vector” method uses areas of apparent uniform elliptical spread of activation and chooses a single vector for the estimated longitudinal velocity and one for the transversal. A simulation study was performed to estimate the influence of grid size, anisotropy, and vector angle bin size. The results indicate that the “average vector” method can best be used if the grid- or bin-size is large, although systematic errors occur. The “single vector” method performs better, but requires human intervention for the definition of fiber direction. The average vector method can be automated. PMID:25101004
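
    The sketch below shows one common local estimate that is related to, but not identical to, the "average vector" method described above: the conduction velocity is obtained from the gradient of the activation-time map (the slowness vector). The activation-time map, electrode spacing and anisotropic velocities are synthetic assumptions chosen only so the estimate can be checked analytically.

```python
import numpy as np

# Synthetic activation-time map of an oblique plane wave on a regular electrode grid,
# with a fast (longitudinal) velocity along x and a slow (transversal) velocity along y.
dx = 0.1                                   # mm, electrode spacing (assumed)
vl, vt = 0.6, 0.2                          # mm/ms, assumed anisotropic conduction velocities
x = np.arange(0.0, 10.0, dx)
X, Y = np.meshgrid(x, x, indexing="ij")
T = X / vl + Y / vt                        # activation time in ms

# Local estimate: the slowness vector is s = grad(T); speed = 1/|s|, direction = s/|s|.
sx, sy = np.gradient(T, dx)
speed = 1.0 / np.hypot(sx, sy)
angle = np.degrees(np.arctan2(sy.mean(), sx.mean()))

print(f"estimated wavefront speed: {speed.mean():.3f} mm/ms "
      f"(analytic plane-wave value {1.0/np.hypot(1.0/vl, 1.0/vt):.3f} mm/ms), "
      f"propagation direction {angle:.1f} deg from the fiber axis")
```

    On noisy or coarsely sampled maps this local gradient estimate degrades in much the same way as the paper's "average vector" method, which is why binning and grid size matter.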

  13. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracies respectively, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and MACCC (maximum absolute value of cross correlation coefficient), derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy regarding position and shape is quantitatively evaluated. In order to apply the proposed method to select an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types and the proper time step considering different time integration schemes are selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
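
    The sketch below illustrates the general idea with a schematic position metric (cross-correlation peak lag, a proxy for an arrival-time/group-velocity error) and a schematic shape metric (maximum absolute normalized correlation coefficient). These are not the paper's exact GVE/MACCC definitions, and the tone burst, delay and distortion are synthetic assumptions.

```python
import numpy as np

def xcorr_metrics(sim, ref, dt):
    """Compare a simulated waveform with a reference via normalized cross correlation:
    return (time shift of the correlation peak, peak |correlation coefficient|)."""
    sim = (sim - sim.mean()) / (np.std(sim) * len(sim))
    ref = (ref - ref.mean()) / np.std(ref)
    cc = np.correlate(sim, ref, mode="full")
    lags = np.arange(-len(ref) + 1, len(sim)) * dt
    k = np.argmax(np.abs(cc))
    return lags[k], np.abs(cc[k])

# Synthetic example: a Hann-windowed tone burst, plus a "simulated" copy that is
# slightly delayed (position error) and low-pass distorted (shape error).
dt, f0 = 1e-8, 2e5                          # 10 ns sampling, 200 kHz centre frequency (assumed)
t = np.arange(0.0, 5e-5, dt)
burst = np.sin(2 * np.pi * f0 * t) * np.hanning(len(t))
sim = np.roll(burst, 40)                    # 0.4 us arrival error
sim = np.convolve(sim, np.ones(15) / 15, mode="same")   # mild shape distortion

shift, maccc = xcorr_metrics(sim, burst, dt)
print(f"arrival-time error = {shift * 1e6:.2f} us, max |CC| = {maccc:.3f}")
```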

  14. Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2015-11-01

    Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.

  15. Numerical simulation of optical vortex propagation and reflection by the methods of scalar diffraction theory

    SciTech Connect

    Petrov, Nikolay V; Pavlov, Pavel V; Malov, A N

    2013-06-30

    Using the equations of scalar diffraction theory, we consider the formation of an optical vortex on a diffractive optical element. Algorithms are proposed for simulating the processes of propagation of spiral wavefronts in free space and their reflection from surfaces with different roughness parameters. The given approach is illustrated by the results of numerical simulations.

  16. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
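
    To make the point concrete, the sketch below computes an exact (Clopper-Pearson) confidence interval for an error probability estimated from only two observed decoding errors in a large number of trials. Clopper-Pearson is a standard modern construction used here for illustration; it is not necessarily the interval extension proposed in the 1976 paper, and the trial count is an arbitrary example.

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for an error probability
    estimated from k observed errors in n independent decoding trials."""
    alpha = 1.0 - conf
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Two decoding errors observed in ten million simulated trials (illustrative numbers)
k, n = 2, 10_000_000
lo, hi = clopper_pearson(k, n)
print(f"point estimate {k / n:.1e}, 95% interval [{lo:.1e}, {hi:.1e}]")
```

    Even with only two observed errors, the upper confidence limit is finite and informative, which is the sense in which a handful of decoding errors carries surprising statistical significance.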

  17. Statistical error propagation in ab initio no-core full configuration calculations of light nuclei

    NASA Astrophysics Data System (ADS)

    Navarro Pérez, R.; Amaro, J. E.; Ruiz Arriola, E.; Maris, P.; Vary, J. P.

    2015-12-01

    We propagate the statistical uncertainty of experimental NN scattering data into the binding energy of 3H and 4He. We also study the sensitivity of the magnetic moment and proton radius of 3H to changes in the NN interaction. The calculations are made with the no-core full configuration method in a sufficiently large harmonic oscillator basis. For those light nuclei we obtain ΔE_stat(3H) = 0.015 MeV and ΔE_stat(4He) = 0.055 MeV.

  18. Errors Characteristics of Two Grid Refinement Approaches in Aquaplanet Simulations: MPAS-A and WRF

    SciTech Connect

    Hagos, Samson M.; Leung, Lai-Yung R.; Rauscher, Sara; Ringler, Todd

    2013-09-01

    This study compares the error characteristics associated with two grid refinement approaches, global variable resolution and nesting, for high resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales-Atmosphere (MPAS-A), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context. For MPAS-A, simulations have been performed with a quasi-uniform resolution global domain at coarse (1°) and high (0.25°) resolution, and a variable resolution domain with a high resolution region at 0.25° configured inside a coarse resolution global domain at 1° resolution. Similarly, WRF has been configured to run on a coarse (1°) and high (0.25°) tropical channel domain as well as a nested domain with a high resolution region at 0.25° nested two-way inside the coarse resolution (1°) tropical channel. The variable resolution or nested simulations are compared against the high resolution simulations. Both models respond to increased resolution with enhanced precipitation and a limited but significant reduction in the ratio of convective to non-convective precipitation. The limited area grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. Within the high resolution limited area, the zonal distribution of precipitation is affected by advection in MPAS-A and by the nesting strategy in WRF. In both models, 20-day Kelvin waves propagate through the high-resolution domains fairly unaffected by the change in resolution (and the presence of a boundary in WRF), but increased resolution strengthens eastward propagating inertio-gravity waves.

  19. Precision Analysis Based on Complicated Error Simulation for the Orbit Determination with the Space Tracking Ship

    NASA Astrophysics Data System (ADS)

    Lei, YANG; Caifa, GUO; Zhengxu, DAI; Xiaoyong, LI; Shaolin, WANG

    2016-02-01

    The space tracking ship is a moving platform in the TT&C network. The orbit determination precision of the ship plays a key role in the TT&C mission. Based on the measurement data obtained by the ship-borne equipment, the paper presents mathematical models of the complicated error from the space tracking ship, which can separate the random error and the secondary low-frequency correction residual error from the complicated error. An error simulation algorithm is proposed to analyze the orbit determination precision based on two sets of different equipment. With this algorithm, a group of complicated errors can be simulated from a measured sample. The simulated error groups can provide sufficient complicated-error samples for the equipment tests before mission execution, which is helpful for practical application.

  20. Error propagation in the numerical solutions of the differential equations of orbital mechanics

    NASA Technical Reports Server (NTRS)

    Bond, V. R.

    1982-01-01

    The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell, Encke, and Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast, an element formulation has zero eigenvalues and is numerically stable.
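
    The real positive eigenvalue can be illustrated numerically by linearizing the Cowell (Cartesian two-body) equations of motion about a point on a circular orbit and inspecting the 6x6 state Jacobian, as in the sketch below. The orbit radius is an arbitrary LEO-like value; the positive real eigenvalue equals sqrt(2*mu/r^3), which the script checks.

```python
import numpy as np

mu = 398600.4418                         # Earth's GM, km^3/s^2
r_vec = np.array([7000.0, 0.0, 0.0])     # km, position on a circular LEO-like orbit (assumed)
r = np.linalg.norm(r_vec)
rhat = r_vec / r

# Jacobian of the Cowell acceleration a = -mu * r / |r|^3 with respect to position
G = mu / r**3 * (3.0 * np.outer(rhat, rhat) - np.eye(3))

# 6x6 state Jacobian of the (position, velocity) dynamics
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [G,                np.zeros((3, 3))]])

eig = np.linalg.eigvals(A)
print("eigenvalues (1/s):", np.round(eig, 6))
print("max real part    :", eig.real.max())           # > 0: local error amplification
print("sqrt(2*mu/r^3)   :", np.sqrt(2 * mu / r**3))   # analytic value it should match
```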

  1. A hybrid method for X-ray optics simulation: combining geometric ray-tracing and wavefront propagation

    PubMed Central

    Shi, Xianbo; Reininger, Ruben; Sanchez del Rio, Manuel; Assoufid, Lahsen

    2014-01-01

    A new method for beamline simulation combining ray-tracing and wavefront propagation is described. The ‘Hybrid Method’ computes diffraction effects when the beam is clipped by an aperture or mirror length and can also simulate the effect of figure errors in the optical elements when diffraction is present. The effect of different spatial frequencies of figure errors on the image is compared with SHADOW results pointing to the limitations of the latter. The code has been benchmarked against the multi-electron version of SRW in one dimension to show its validity in the case of fully, partially and non-coherent beams. The results demonstrate that the code is considerably faster than the multi-electron version of SRW and is therefore a useful tool for beamline design and optimization. PMID:24971960

  2. A hybrid method for X-ray optics simulation: combining geometric ray-tracing and wavefront propagation.

    PubMed

    Shi, Xianbo; Reininger, Ruben; Sanchez Del Rio, Manuel; Assoufid, Lahsen

    2014-07-01

    A new method for beamline simulation combining ray-tracing and wavefront propagation is described. The 'Hybrid Method' computes diffraction effects when the beam is clipped by an aperture or mirror length and can also simulate the effect of figure errors in the optical elements when diffraction is present. The effect of different spatial frequencies of figure errors on the image is compared with SHADOW results pointing to the limitations of the latter. The code has been benchmarked against the multi-electron version of SRW in one dimension to show its validity in the case of fully, partially and non-coherent beams. The results demonstrate that the code is considerably faster than the multi-electron version of SRW and is therefore a useful tool for beamline design and optimization. PMID:24971960

  3. PLASIM: A computer code for simulating charge exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.

    1982-01-01

    The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  4. Modeling and Simulation for Realistic Propagation Environments of Communications Signals at SHF Band

    NASA Technical Reports Server (NTRS)

    Ho, Christian

    2005-01-01

    In this article, most of the widely accepted radio wave propagation models that have proven to be accurate in practice, as well as numerically efficient at SHF band, will be reviewed. Weather and terrain data along the signal's paths can be input in order to more accurately simulate the propagation environments under particular weather and terrain conditions. Radio signal degradation and communications impairment severity will be investigated through the realistic radio propagation channel simulator. Three types of simulation approaches for predicting signal behavior are classified as: deterministic, stochastic and attenuation map. The performance of the simulation can be evaluated under operating conditions for the test ranges of interest. Demonstration tests of a real-time propagation channel simulator will show the capabilities and limitations of the simulation tool and the underlying models.

  5. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient close-formed parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  6. Revised error propagation of 40Ar/39Ar data, including covariances

    NASA Astrophysics Data System (ADS)

    Vermeesch, Pieter

    2015-12-01

    The main advantage of the 40Ar/39Ar method over conventional K-Ar dating is that it does not depend on any absolute abundance or concentration measurements, but only uses the relative ratios between five isotopes of the same element -argon- which can be measured with great precision on a noble gas mass spectrometer. The relative abundances of the argon isotopes are subject to a constant sum constraint, which imposes a covariant structure on the data: the relative amount of any of the five isotopes can always be obtained from that of the other four. Thus, the 40Ar/39Ar method is a classic example of a 'compositional data problem'. In addition to the constant sum constraint, covariances are introduced by a host of other processes, including data acquisition, blank correction, detector calibration, mass fractionation, decay correction, interference correction, atmospheric argon correction, interpolation of the irradiation parameter, and age calculation. The myriad of correlated errors arising during the data reduction are best handled by casting the 40Ar/39Ar data reduction protocol in a matrix form. The completely revised workflow presented in this paper is implemented in a new software platform, Ar-Ar_Redux, which takes raw mass spectrometer data as input and generates accurate 40Ar/39Ar ages and their (co-)variances as output. Ar-Ar_Redux accounts for all sources of analytical uncertainty, including those associated with decay constants and the air ratio. Knowing the covariance matrix of the ages removes the need to consider 'internal' and 'external' uncertainties separately when calculating (weighted) mean ages. Ar-Ar_Redux is built on the same principles as its sibling program in the U-Pb community (U-Pb_Redux), thus improving the intercomparability of the two methods with tangible benefits to the accuracy of the geologic time scale. The program can be downloaded free of charge from

  7. Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2011-01-01

    The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…

  8. ITER Test Blanket Module Error Field Simulation Experiments

    NASA Astrophysics Data System (ADS)

    Schaffer, M. J.

    2010-11-01

    Recent experiments at DIII-D used an active-coil mock-up to investigate effects of magnetic error fields similar to those expected from two ferromagnetic Test Blanket Modules (TBMs) in one ITER equatorial port. The largest and most prevalent observed effect was plasma toroidal rotation slowing across the entire radial profile, up to 60% in H-mode when the mock-up local ripple at the plasma was ˜4 times the local ripple expected in front of ITER TBMs. Analysis showed the slowing to be consistent with non-resonant braking by the mock-up field. There was no evidence of strong electromagnetic braking by resonant harmonics. These results are consistent with the near absence of resonant helical harmonics in the TBM field. Global particle and energy confinement in H-mode decreased by <20% for the maximum mock-up ripple, but <5% at the local ripple expected in ITER. These confinement reductions may be linked with the large velocity reductions. TBM field effects were small in L-mode but increased with plasma beta. The L-H power threshold was unaffected within error bars. The mock-up field increased plasma sensitivity to mode locking by a known n=1 test field (n = toroidal harmonic number). In H-mode the increased locking sensitivity was from TBM torque slowing plasma rotation. At low beta, locked mode tolerance was fully recovered by re-optimizing the conventional DIII-D ``I-coils'' empirical compensation of n=1 errors in the presence of the TBM mock-up field. Empirical error compensation in H-mode should be addressed in future experiments. Global loss of injected neutral beam fast ions was within error bars, but 1 MeV fusion triton loss may have increased. The many DIII-D mock-up results provide important benchmarks for models needed to predict effects of TBMs in ITER.

  9. Simulation of blast wave propagation from source to long distance with topography and atmospheric effects

    NASA Astrophysics Data System (ADS)

    Nguyen-Dinh, Maxime; Gainville, Olaf; Lardjane, Nicolas

    2015-10-01

    We present new results for blast wave propagation from the strong shock regime to the weak shock limit. For this purpose, we analyse the blast wave propagation using both Direct Numerical Simulation and an acoustic asymptotic model. This approach allows a full numerical study of a realistic pyrotechnic site, taking into account the main physical effects. We also compare simulation results with first measurements. This study is a part of the French ANR-Prolonge project (ANR-12-ASTR-0026).

  10. SimProp: a simulation code for ultra high energy cosmic ray propagation

    SciTech Connect

    Aloisio, R.; Grillo, A.F.; Boncioli, D.; Petrera, S.; Salamida, F. E-mail: denise.boncioli@roma2.infn.it E-mail: petrera@aquila.infn.it

    2012-10-01

    A new Monte Carlo simulation code for the propagation of Ultra High Energy Cosmic Rays is presented. The results of this simulation scheme are tested by comparison with results of another Monte Carlo computation as well as with the results obtained by directly solving the kinetic equation for the propagation of Ultra High Energy Cosmic Rays. A short comparison with the latest flux published by the Pierre Auger collaboration is also presented.

  11. Accumulation of errors in numerical simulations of chemically reacting gas dynamics

    NASA Astrophysics Data System (ADS)

    Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.

    2015-12-01

    The aim of the present study is to investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, as was the accumulation of errors for simulations employing different computation strategies.

  12. Numerical simulation of impurity propagation in sea channels

    NASA Astrophysics Data System (ADS)

    Cherniy, Dmitro; Dovgiy, Stanislav; Gourjii, Alexandre

    2009-11-01

    Building of the dike (2003) in the Kerch channel (between the Black and Azov seas) from the Taman peninsula is an example of technological influence on the fluid flow and hydrological conditions in the channel. A twofold increase of the flow velocity in the fairway region results in the appearance of dangerous tendencies in the hydrology of the Kerch channel. The flow near the coastal edges generates large-scale vortices, which move along the channel. The shipwreck (November 11, 2007) of the tanker ``Volganeft-139'' in the Kerch channel resulted in an ecological catastrophe in the region: more than 1300 tons of petroleum appeared on the sea surface. The intensive vortices formed here entrain part of the impurity region in their own motion; the boundary of the impurity region is deformed, stretched, and covers the central part of the channel. The main goals of the report are the adaptation of the vortex singularity method to impurity propagation in the Kerch channel and the analysis of the pollution propagation.
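
    For illustration, the vortex singularity (point-vortex) idea underlying the adapted method can be sketched in a few lines of Python: passive impurity markers are advected by the 2D Biot-Savart velocity induced by a set of point vortices. The vortex strengths, positions, and forward-Euler time stepping are illustrative assumptions, not the report's actual scheme.

```python
import numpy as np

def induced_velocity(points: np.ndarray, vortices: np.ndarray, gamma: np.ndarray) -> np.ndarray:
    """2D Biot-Savart velocity at `points` induced by point vortices at
    `vortices` with circulations `gamma` (singular kernel, no core model)."""
    dx = points[:, None, 0] - vortices[None, :, 0]
    dy = points[:, None, 1] - vortices[None, :, 1]
    r2 = dx**2 + dy**2
    r2[r2 == 0.0] = np.inf                      # ignore self-induction
    u = np.sum(-gamma * dy / (2.0 * np.pi * r2), axis=1)
    v = np.sum(gamma * dx / (2.0 * np.pi * r2), axis=1)
    return np.column_stack((u, v))

# Two counter-rotating vortices and a ring of passive impurity markers.
vortices = np.array([[-0.5, 0.0], [0.5, 0.0]])
gamma = np.array([1.0, -1.0])
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
markers = 0.2 * np.column_stack((np.cos(theta), np.sin(theta))) + [0.0, 0.3]

dt = 0.01
for _ in range(500):                            # forward-Euler advection (illustrative)
    markers += dt * induced_velocity(markers, vortices, gamma)
    vortices += dt * induced_velocity(vortices, vortices, gamma)

print("marker cloud centroid after advection:", markers.mean(axis=0))
```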

  13. Computer simulation on the linear and nonlinear propagation of the electromagnetic waves in the dielectric media

    SciTech Connect

    Abe, H.; Okuda, H.

    1993-08-01

    In this Letter, we first present a new computer simulation model developed to study the propagation of electromagnetic waves in a dielectric medium in the linear and nonlinear regimes. The model is constructed by combining a microscopic model used in the semi-classical approximation for dielectric media and the particle model developed for plasma simulations. The model was then used for studying linear and nonlinear wave propagation in a dielectric medium such as an optical fiber. It is shown that the model may be useful for studying nonlinear wave propagation and harmonic generation in nonlinear dielectric media.

  14. Evaluating the Effect of Global Positioning System (GPS) Satellite Clock Error via GPS Simulation

    NASA Astrophysics Data System (ADS)

    Sathyamoorthy, Dinesh; Shafii, Shalini; Amin, Zainal Fitry M.; Jusoh, Asmariah; Zainun Ali, Siti

    2016-06-01

    This study is aimed at evaluating the effect of Global Positioning System (GPS) satellite clock error using GPS simulation. Two test conditions are used. Case 1: All the GPS satellites have clock errors within the normal range of 0 to 7 ns, corresponding to a pseudorange error range of 0 to 2.1 m; Case 2: One GPS satellite suffers from critical failure, resulting in a pseudorange error of up to 1 km. It is found that increasing GPS satellite clock error increases the average positional error, due to increased pseudorange error in the GPS satellite signals, which results in increasing error in the coordinates computed by the GPS receiver. Varying average positional error patterns are observed for each of the readings. This is due to the GPS satellite constellation being dynamic, causing varying GPS satellite geometry over location and time, resulting in GPS accuracy being location / time dependent. For Case 1, in general, the highest average positional error values are observed for readings with the highest PDOP values, while the lowest average positional error values are observed for readings with the lowest PDOP values. For Case 2, no correlation is observed between the average positional error values and PDOP, indicating that the error generated is random.
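
    The pseudorange arithmetic quoted above (a 0 to 7 ns clock error maps to roughly 0 to 2.1 m of ranging error) follows directly from the speed of light; the sketch below reproduces it and adds a rough positional-error estimate under the simplifying assumption that the ranging error is amplified by PDOP. The PDOP value is illustrative.

```python
# Sketch of the pseudorange-error arithmetic quoted above: a satellite clock
# error dt maps to a ranging error c * dt, and the resulting positional error
# scales roughly with the dilution of precision (a simplifying assumption).
C = 299_792_458.0  # speed of light, m/s

def pseudorange_error(clock_error_ns: float) -> float:
    """Ranging error in metres caused by a satellite clock error in ns."""
    return C * clock_error_ns * 1e-9

for dt_ns in (0.0, 3.5, 7.0):
    print(f"clock error {dt_ns:4.1f} ns -> pseudorange error {pseudorange_error(dt_ns):5.2f} m")

# Rough positional error estimate for Case 1 (normal clock errors), assuming
# the error acts like a ranging uncertainty amplified by PDOP.
pdop = 2.5                                   # assumed geometry factor
sigma_range = pseudorange_error(7.0)         # worst normal-range clock error
print(f"approximate positional error: {pdop * sigma_range:.2f} m (PDOP={pdop})")
```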

  15. End-to-End Network Simulation Using a Site-Specific Radio Wave Propagation Model

    SciTech Connect

    Djouadi, Seddik M; Kuruganti, Phani Teja; Nutaro, James J

    2013-01-01

    The performance of systems that rely on a wireless network depends on the propagation environment in which that network operates. To predict how these systems and their supporting networks will perform, simulations must take into consideration the propagation environment and how this affects the performance of the wireless network. Network simulators typically use empirical models of the propagation environment. However, these models are not intended for, and cannot be used to, predict how a wireless system will perform in a specific location, e.g., in the center of a particular city or the interior of a specific manufacturing facility. In this paper, we demonstrate how a site-specific propagation model and the NS3 simulator can be used to predict the end-to-end performance of a wireless network.

  16. Investigation of Radar Propagation in Buildings: A 10 Billion Element Cartesian-Mesh FETD Simulation

    SciTech Connect

    Stowell, M L; Fasenfest, B J; White, D A

    2008-01-14

    In this paper, large-scale full-wave simulations are performed to investigate radar wave propagation inside buildings. In principle, a radar system combined with sophisticated numerical methods for inverse problems can be used to determine the internal structure of a building. The composition of the walls (cinder block, re-bar) may affect the propagation of the radar waves in a complicated manner. In order to provide a benchmark solution of radar propagation in buildings, including the effects of typical cinder block and re-bar, we performed large-scale full-wave simulations using a Finite Element Time Domain (FETD) method. This particular FETD implementation is tuned for the special case of an orthogonal Cartesian mesh and hence resembles FDTD in accuracy and efficiency. The method was implemented on a general-purpose massively parallel computer. In this paper we briefly describe the radar propagation problem, the FETD implementation, and we present results of simulations that used over 10 billion elements.

  17. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  18. Coherent-wave Monte Carlo method for simulating light propagation in tissue

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2016-03-01

    Simulating propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows simulation only of light propagation averaged over the ensemble of turbid medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. The method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.

  19. Whistler propagation in ionospheric density ducts: Simulations and DEMETER observations

    NASA Astrophysics Data System (ADS)

    Woodroffe, J. R.; Streltsov, A. V.; Vartanyan, A.; Milikh, G. M.

    2013-11-01

    On 16 October 2009, the Detection of Electromagnetic Emissions Transmitted from Earthquake Regions (DEMETER) satellite observed VLF whistler wave activity coincident with an ionospheric heating experiment conducted at HAARP. At the same time, density measurements by DEMETER indicated the presence of multiple field-aligned enhancements. Using an electron MHD model, we show that the distribution of VLF power observed by DEMETER is consistent with the propagation of whistlers from the heating region inside the observed density enhancements. We also discuss other interesting features of this event, including coupling of the lower hybrid and whistler modes, whistler trapping in artificial density ducts, and the interference of whistler waves from two adjacent ducts.

  20. LOCA simulation: analysis of rarefaction waves propagating through geometric singularities

    SciTech Connect

    Crouzet, Fabien; Faucher, Vincent; Galon, Pascal; Piteau, Philippe; Izquierdo, Patrick

    2012-07-01

    The propagation of a transient wave through an orifice is investigated for applications to Loss Of Coolant Accidents in nuclear plants. An analytical model is proposed for the response of an orifice plate and implemented in the EUROPLEXUS fast transient dynamics software. It includes an acoustic inertial effect in addition to a quasi-steady dissipation term. The model is experimentally validated on a test rig consisting of a single pipe filled with pressurized water. The test rig is designed to generate a rapid depressurization of the pipe by means of a bursting disk. The proposed model gives results which compare favourably with experimental data. (authors)

  1. Practitioner's guide to laser pulse propagation models and simulation. Numerical implementation and practical usage of modern pulse propagation models

    NASA Astrophysics Data System (ADS)

    Couairon, A.; Brambilla, E.; Corti, T.; Majus, D.; de J. Ramírez-Góngora, O.; Kolesik, M.

    2011-11-01

    The purpose of this article is to provide a practical introduction to the numerical modeling of ultrashort optical pulses in extreme nonlinear regimes. The theoretical background section covers the derivation of modern pulse propagation models starting from Maxwell's equations, and includes both envelope-based models and carrier-resolving propagation equations. We then continue with a detailed description of a software implementation of the Nonlinear Envelope Equation as an example of a mixed approach which combines finite-difference and spectral techniques. Fully spectral numerical solution methods for the Unidirectional Pulse Propagation Equation are discussed next. The modeling part of this guide concludes with a brief introduction to efficient implementations of nonlinear medium responses. Finally, we include several worked-out simulation examples. These are mini-projects designed to highlight numerical and modeling issues, and to teach numerical-experiment practices. They are also meant to illustrate, first and foremost for a non-specialist, how tools discussed in this guide can be applied in practical numerical modeling.
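
    As a minimal illustration of the mixed finite-difference/spectral approach discussed in the guide, the sketch below applies a symmetric split-step Fourier scheme to a 1D nonlinear-Schrödinger-type envelope equation (group-velocity dispersion plus Kerr nonlinearity). The parameter values are illustrative and chosen so that the initial field is a fundamental soliton, whose shape should be preserved.

```python
import numpy as np

# Minimal split-step Fourier sketch for a 1D NLS-type envelope equation
#   dA/dz = -i beta2/2 d^2A/dt^2 + i gamma |A|^2 A
# Parameter values are illustrative only.
nt, t_max = 1024, 20.0
t = np.linspace(-t_max, t_max, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])   # angular frequency grid

beta2, gamma = -1.0, 1.0      # anomalous dispersion, Kerr coefficient
dz, nz = 1e-3, 2000           # step size and number of steps

A = 1.0 / np.cosh(t)          # fundamental soliton as initial envelope
disp_half = np.exp(0.5j * beta2 / 2 * w**2 * dz)    # half dispersion step

for _ in range(nz):
    A = np.fft.ifft(disp_half * np.fft.fft(A))      # half linear step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)     # full nonlinear step
    A = np.fft.ifft(disp_half * np.fft.fft(A))      # half linear step

print("peak intensity after propagation:", np.max(np.abs(A))**2)  # expect ~1
```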

  2. Digital simulation error curves for a spring-mass-damper system

    NASA Technical Reports Server (NTRS)

    Knox, L. A.

    1971-01-01

    Plotting digital simulation errors for a spring-mass-damper system and using these error curves to select the type of integration, the feedback update method, and the number of samples per cycle at resonance reduces an excessive number of samples per cycle and unnecessary iterations.

  3. A nonlocal finite difference scheme for simulation of wave propagation in 2D models with reduced numerical dispersion

    NASA Astrophysics Data System (ADS)

    Martowicz, A.; Ruzzene, M.; Staszewski, W. J.; Rimoli, J. J.; Uhl, T.

    2014-03-01

    The work deals with the reduction of numerical dispersion in simulations of wave propagation in solids. The phenomenon of numerical dispersion naturally results from the time and spatial discretization present in a numerical model of a mechanical continuum. Although discretization itself makes it possible to model wave propagation in structures with complicated geometries and made of different materials, it inevitably causes simulation errors when improper time and length scales are chosen for the simulation domains. Therefore, by definition, any characteristic parameter for spatial and time resolution must impose limitations on the maximal wavenumber and frequency for a numerical model. It should be noted, however, that the expected increase of the model quality and its functionality in terms of affordable wavenumbers, frequencies and speeds should not be achieved merely by a denser mesh and a reduced time integration step; the computational cost would simply be unacceptable. The authors present a nonlocal finite difference scheme with coefficients calculated by applying a Fourier series, which allows for considerable reduction of numerical dispersion. Results of analyses are presented for 2D models, with isotropic and anisotropic materials, under a plane stress state. Reduced numerical dispersion is shown in the dispersion surfaces for longitudinal and shear waves propagating in different directions with respect to the mesh orientation, without a dramatic increase of the required number of nonlocal interactions. A case of longitudinal wave propagation in a composite material is studied, with a reference solution of the initial value problem given for verification of the time-domain outcomes. The work gives a perspective of modeling of any type of real material dispersion according to measurements and with assumed accuracy.
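
    For contrast with the nonlocal scheme proposed by the authors, the numerical dispersion of the standard second-order central-difference (leapfrog) scheme for the 1D wave equation can be evaluated in a few lines from its textbook dispersion relation sin(ωΔt/2) = C sin(kh/2); the grid parameters below are illustrative.

```python
import numpy as np

# Numerical dispersion of the standard 2nd-order central-difference (leapfrog)
# scheme for the 1D wave equation u_tt = c^2 u_xx.  The discrete dispersion
# relation is sin(w*dt/2) = C * sin(k*h/2), with Courant number C = c*dt/h.
c, h = 1.0, 1.0          # illustrative wave speed and grid spacing
C = 0.5                  # Courant number (stable for C <= 1)
dt = C * h / c

k = np.linspace(np.pi / (20 * h), np.pi / h, 200)     # resolvable wavenumbers
w_num = (2.0 / dt) * np.arcsin(C * np.sin(k * h / 2.0))
v_phase = w_num / k                                   # numerical phase velocity

for kk, vp in zip(k[::50], v_phase[::50]):
    ppw = 2 * np.pi / (kk * h)                        # points per wavelength
    print(f"{ppw:5.1f} points/wavelength -> phase-velocity error {abs(vp - c)/c:.3%}")
```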

  4. On the construction and analysis of stochastic models: Characterization and propagation of the errors associated with limited data

    SciTech Connect

    Ghanem, Roger G. . E-mail: ghanem@usc.edu; Doostan, Alireza . E-mail: doostan@jhu.edu

    2006-09-01

    This paper investigates the predictive accuracy of stochastic models. In particular, a formulation is presented for the impact of data limitations associated with the calibration of parameters for these models, on their overall predictive accuracy. In the course of this development, a new method for the characterization of stochastic processes from corresponding experimental observations is obtained. Specifically, polynomial chaos representations of these processes are estimated that are consistent, in some useful sense, with the data. The estimated polynomial chaos coefficients are themselves characterized as random variables with known probability density function, thus permitting the analysis of the dependence of their values on further experimental evidence. Moreover, the error in these coefficients, associated with limited data, is propagated through a physical system characterized by a stochastic partial differential equation (SPDE). This formalism permits the rational allocation of resources in view of studying the possibility of validating a particular predictive model. A Bayesian inference scheme is relied upon as the logic for parameter estimation, with its computational engine provided by a Metropolis-Hastings Markov chain Monte Carlo procedure.
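
    The Bayesian parameter-estimation step can be illustrated with a minimal random-walk Metropolis-Hastings sampler; the Gaussian likelihood and prior below are placeholders and are not the paper's SPDE-based posterior or its polynomial chaos construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: noisy observations of a single parameter theta_true.
theta_true, sigma = 2.0, 0.5
data = theta_true + sigma * rng.standard_normal(20)

def log_posterior(theta: float) -> float:
    """Gaussian likelihood with a broad Gaussian prior (placeholder model)."""
    log_like = -0.5 * np.sum((data - theta) ** 2) / sigma**2
    log_prior = -0.5 * theta**2 / 10.0**2
    return log_like + log_prior

# Random-walk Metropolis-Hastings sampling of the posterior.
n_steps, step = 20_000, 0.2
theta = 0.0
samples = np.empty(n_steps)
for i in range(n_steps):
    proposal = theta + step * rng.standard_normal()
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal           # accept the proposed move
    samples[i] = theta

burn = samples[n_steps // 2:]      # discard the first half as burn-in
print(f"posterior mean ~ {burn.mean():.3f}, sd ~ {burn.std():.3f}")
```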

  5. Simulated performance of the superconducting section of the APT linac under various fault and error conditions

    SciTech Connect

    Gray, E.R.; Nath, S.; Wangler, T.P.

    1997-08-01

    The current design for the production of tritium uses both normal-conducting (NC) and superconducting (SC) structures. To evaluate the performance of the superconducting part of the linac, which constitutes more than 80% of the accelerator, studies have been made to include the effects of various error and fault conditions. Here, the authors present simulation results for effects such as rf phase and amplitude errors, cavity/klystron failure, quadrupole misalignment errors, quadrupole gradient errors, and beam-input mismatches.

  6. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

    The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, as expected, they simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. PMID:26894840

  7. Simulation-based reasoning about the physical propagation of fault effects

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan; Li, Dalu

    1990-01-01

    The research described deals with the effects of faults on complex physical systems, with particular emphasis on aircraft and spacecraft systems. Given that a malfunction has occurred and been diagnosed, the goal is to determine how that fault will propagate to other subsystems, and what the effects will be on vehicle functionality. In particular, the use of qualitative spatial simulation to determine the physical propagation of fault effects in 3-D space is described.

  8. Theory and simulations of electrostatic field error transport

    SciTech Connect

    Dubin, Daniel H. E.

    2008-07-15

    Asymmetries in applied electromagnetic fields cause plasma loss (or compression) in stellarators, tokamaks, and non-neutral plasmas. Here, this transport is studied using idealized simulations that follow guiding centers in given fields, neglecting collective effects on the plasma evolution, but including collisions at rate ν. For simplicity the magnetic field is assumed to be uniform; transport is due to asymmetries in applied electrostatic fields. Also, the Fokker-Planck equation describing the particle distribution is solved, and the predicted transport is found to agree with the simulations. Banana, plateau, and fluid regimes are identified and observed in the simulations. When separate trapped particle populations are created by application of an axisymmetric squeeze potential, enhanced transport regimes are observed, scaling as √ν when ν < ω_0 < ω_B and as 1/ν when ω_0 < ν < ω_B (where ω_0 and ω_B are the rotation and axial bounce frequencies, respectively). These regimes are similar to those predicted for neoclassical transport in stellarators.

  9. Modeling decenter, wedge, and tilt errors in optical tolerance analysis and simulation

    NASA Astrophysics Data System (ADS)

    Youngworth, Richard N.; Herman, Eric

    2014-09-01

    Many optical designs have lenses with circular outer profiles that are mounted in cylindrical barrels. This geometry leads to errors in mounting parameters such as decenter and tilt, and component errors such as wedge, which are best modeled with a cylindrical or spherical coordinate system. In the absence of clocking registration, this class of errors is effectively reduced to an error magnitude with a random clocking azimuth. Optical engineers consequently must fully understand how cylindrical or spherical basis geometry relates to Cartesian representation. Understanding these factors, as well as how optical design codes can differ in error application for Monte Carlo simulations, produces the most effective statistical simulations for tolerance assignment, analysis, and verification. This paper covers these topics to aid practicing optical engineers and designers.
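
    The description "an error magnitude with a random clocking azimuth" translates directly into a Monte Carlo draw followed by a conversion to Cartesian offsets; a minimal sketch follows. The tolerance value and the uniform magnitude distribution are assumptions, and part of the paper's point is that design codes may apply different distributions here.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_decenter(n: int, max_decenter: float) -> np.ndarray:
    """Draw n decenter errors as (dx, dy) pairs: a random magnitude with a
    uniformly random clocking azimuth, then converted to Cartesian form."""
    magnitude = max_decenter * rng.uniform(0.0, 1.0, n)   # assumed magnitude distribution
    azimuth = rng.uniform(0.0, 2.0 * np.pi, n)            # random clocking angle
    return np.column_stack((magnitude * np.cos(azimuth),
                            magnitude * np.sin(azimuth)))

# Example: 5 Monte Carlo trials with an assumed 0.05 mm decenter tolerance.
for dx, dy in draw_decenter(5, 0.05):
    print(f"dx = {dx:+.4f} mm, dy = {dy:+.4f} mm")
```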

  10. Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations

    NASA Astrophysics Data System (ADS)

    Toosi, Siavash; Larsson, Johan

    2015-11-01

    Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.

  11. Modelling laser light propagation in thermoplastics using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Parkinson, Alexander

    Laser welding has great potential as a fast, non-contact joining method for thermoplastic parts. In the laser transmission welding of thermoplastics, light passes through a semi-transparent part to reach the weld interface. There, it is absorbed as heat, which causes melting and subsequent welding. The distribution and quantity of light reaching the interface are important for predicting the quality of a weld, but are experimentally difficult to estimate. A model for simulating the path of this laser light through these light-scattering plastic parts has been developed. The technique uses a Monte-Carlo approach to generate photon paths through the material, accounting for absorption, scattering and reflection between boundaries in the transparent polymer. It was assumed that any light escaping the bottom surface contributed to welding. The photon paths are then scaled according to the input beam profile in order to simulate non-Gaussian beam profiles. A method for determining the 3 independent optical parameters to accurately predict transmission and beam power distribution at the interface was established using experimental data for polycarbonate at 4 different glass fibre concentrations and polyamide-6 reinforced with 20% long glass fibres. Exit beam profiles and transmissions predicted by the simulation were found to be in generally good agreement (R2>0.90) with experimental measurements. The simulations allowed the prediction of transmission and power distributions at other thicknesses, as well as information on reflection and energy absorption, for these materials.
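
    The flavor of such a Monte Carlo photon model can be conveyed by a deliberately simplified random walk through a slab with isotropic scattering and no boundary reflections; the absorption and scattering coefficients below are illustrative, and a realistic laser-transmission model would add anisotropic scattering and Fresnel reflection at the interfaces.

```python
import numpy as np

rng = np.random.default_rng(2)

def slab_transmission(n_photons: int, mu_a: float, mu_s: float, thickness: float) -> float:
    """Fraction of photons exiting the bottom of a slab, assuming isotropic
    scattering and no boundary reflections (a deliberate simplification)."""
    mu_t = mu_a + mu_s
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                       # start at the top surface, heading down
        while True:
            step = -np.log(rng.uniform()) / mu_t   # sampled free path length
            z += uz * step
            if z >= thickness:
                transmitted += 1               # escaped through the bottom: counts as welding light
                break
            if z < 0.0:                        # escaped back out of the top surface
                break
            if rng.uniform() < mu_a / mu_t:    # absorbed
                break
            uz = rng.uniform(-1.0, 1.0)        # isotropic scattering: new direction cosine
    return transmitted / n_photons

# Example: coefficients in 1/mm, a 2 mm thick part (illustrative values).
print("estimated transmission:", slab_transmission(20_000, mu_a=0.1, mu_s=1.0, thickness=2.0))
```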

  12. FDTD Simulation on Terahertz Waves Propagation Through a Dusty Plasma

    NASA Astrophysics Data System (ADS)

    Wang, Maoyan; Zhang, Meng; Li, Guiping; Jiang, Baojun; Zhang, Xiaochuan; Xu, Jun

    2016-08-01

    The frequency-dependent permittivity for dusty plasmas is provided by introducing the charging response factor and charge relaxation rate of airborne particles. The field equations that describe the characteristics of Terahertz (THz) wave propagation in a dusty plasma sheath are derived and discretized on the basis of the auxiliary differential equation (ADE) in the finite difference time domain (FDTD) method. The accuracy of the ADE FDTD method is validated by comparison with numerical solutions from the literature. The reflection property of the metal Aluminum interlayer of the sheath at THz frequencies is discussed. The effects of the thickness, effective collision frequency, airborne particle density, and charge relaxation rate of airborne particles on the electromagnetic properties of Terahertz waves through a dusty plasma slab are investigated. Finally, some potential applications for Terahertz waves in information and communication are analyzed. Supported by the National Natural Science Foundation of China (Nos. 41104097, 11504252, 61201007, 41304119), the Fundamental Research Funds for the Central Universities (Nos. ZYGX2015J039, ZYGX2015J041), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20120185120012)

  13. A Compact Code for Simulations of Quantum Error Correction in Classical Computers

    SciTech Connect

    Nyman, Peter

    2009-03-10

    This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give examples of implementations of some error correction codes, written in a more general quantum simulation language built in Mathematica on a classical computer. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer will provide a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
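
    As a concrete, language-neutral illustration of the kind of simulation described (here in Python rather than Mathematica), the sketch below encodes one qubit with the three-qubit bit-flip code, applies a random X error, and corrects it from the measured stabilizer syndrome.

```python
import numpy as np

rng = np.random.default_rng(3)

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    """Kronecker product of a sequence of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode |psi> = a|0> + b|1> into a|000> + b|111> (bit-flip code).
a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b111] = a, b

# Apply an X (bit-flip) error on a randomly chosen qubit.
flipped = rng.integers(0, 3)
state = kron(*[X if q == flipped else I for q in range(3)]) @ state

# Stabilizer syndrome Z1Z2, Z2Z3 (expectation values suffice here, since the
# post-error state is an eigenstate of both stabilizers).
s1 = state @ kron(Z, Z, I) @ state
s2 = state @ kron(I, Z, Z) @ state
syndrome_to_qubit = {(-1, 1): 0, (-1, -1): 1, (1, -1): 2}
target = syndrome_to_qubit.get((int(round(s1)), int(round(s2))))
if target is not None:
    state = kron(*[X if q == target else I for q in range(3)]) @ state

print("error on qubit", flipped, "-> recovered:", np.allclose(state[[0, 7]], [a, b]))
```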

  14. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASAGMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  15. HERMES: Simulating the propagation of ultra-high energy cosmic rays

    NASA Astrophysics Data System (ADS)

    De Domenico, Manlio

    2013-08-01

    The study of ultra-high energy cosmic rays (UHECR) at Earth cannot be separated from the study of their propagation in the Universe. In this paper, we present HERMES, the ad hoc Monte Carlo code we have developed for the realistic simulation of UHECR propagation. We discuss the modeling adopted to simulate the cosmology, the magnetic fields, the interactions with relic photons and the production of secondary particles. In order to show the potential applications of HERMES for astroparticle studies, we provide an estimate of the survival probability of UHE protons, the GZK horizons of nuclei and the all-particle spectrum observed at Earth in different astrophysical scenarios. Finally, we show the expected arrival direction distribution of UHECR produced from nearby candidate sources. A stable version of HERMES will be released in the near future for public use together with libraries of already propagated nuclei to allow the community to perform mass composition and energy spectrum analysis with our simulator.

  16. SIMULATION OF SHOCK WAVE PROPAGATION AND DAMAGE IN GEOLOGIC MATERIALS

    SciTech Connect

    Lomov, I; Vorobiev, O; Antoun, T H

    2004-09-17

    A new thermodynamically consistent material model for large deformation has been developed. It describes quasistatic loading of limestone as well as high-rate phenomena. This constitutive model has been implemented into an Eulerian shock wave code with adaptive mesh refinement. This approach was successfully used to reproduce static triaxial compression tests and to simulate experiments of blast loading and damage of limestone. Results compare favorably with experimentally available wave profiles from spherically-symmetric explosion in rock samples.

  17. CFD simulation of vented explosion and turbulent flame propagation

    NASA Astrophysics Data System (ADS)

    Tulach, Aleš; Mynarz, Miroslav; Kozubková, Milada

    2015-05-01

    Very rapid physical and chemical processes during an explosion require both quality and quantity of detection devices. CFD numerical simulations are suitable instruments for more detailed determination of explosion parameters. The paper deals with mathematical modelling of a vented explosion and turbulent flame spread using ANSYS Fluent software. The paper focuses on verifying the accuracy of the calculations by comparing the calculated data with results obtained from experiments carried out in an explosion chamber.

  18. Monte Carlo simulations of intensity profiles for energetic particle propagation

    NASA Astrophysics Data System (ADS)

    Tautz, R. C.; Bolte, J.; Shalchi, A.

    2016-02-01

    Aims: Numerical test-particle simulations are a reliable and frequently used tool for testing analytical transport theories and predicting mean-free paths. The comparison between solutions of the diffusion equation and the particle flux is used to critically judge the applicability of diffusion to the stochastic transport of energetic particles in magnetized turbulence. Methods: A Monte Carlo simulation code is extended to allow for the generation of intensity profiles and anisotropy-time profiles. Because of the relatively low number density of computational particles, a kernel function has to be used to describe the spatial extent of each particle. Results: The obtained intensity profiles are interpreted as solutions of the diffusion equation by inserting the diffusion coefficients that have been directly determined from the mean-square displacements. The comparison shows that the time dependence of the diffusion coefficients needs to be considered, in particular the initial ballistic phase and the often subdiffusive perpendicular coefficient. Conclusions: It is argued that the perpendicular component of the distribution function is essential if agreement between the diffusion solution and the simulated flux is to be obtained. In addition, time-dependent diffusion can provide a better description than the classic diffusion equation only after the initial ballistic phase.
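
    The step of reading a (time-dependent) diffusion coefficient off the mean-square displacement can be sketched as D(t) = (1/2) d⟨Δx²⟩/dt per dimension; in the sketch below, synthetic 1D random walks stand in for the test-particle trajectories, and the expected late-time value is step²/(2Δt).

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "test particle" trajectories: 1D unbiased random walks.
n_particles, n_steps, dt, step = 2000, 500, 1.0, 1.0
displacements = step * rng.choice([-1.0, 1.0], size=(n_particles, n_steps))
x = np.cumsum(displacements, axis=1)              # positions vs. time

t = dt * np.arange(1, n_steps + 1)
msd = np.mean(x**2, axis=0)                       # mean-square displacement

# Running (time-dependent) diffusion coefficient: D(t) = (1/2) d<x^2>/dt.
D_t = 0.5 * np.gradient(msd, t)
print("late-time diffusion coefficient ~", D_t[-50:].mean())  # expect ~ step**2 / (2 * dt) = 0.5
```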

  19. Simulations of Wave Propagation in the Jovian Atmosphere after SL9 Impact Events

    NASA Astrophysics Data System (ADS)

    Pond, Jarrad W.; Palotai, C.; Korycansky, D.; Harrington, J.

    2013-10-01

    Our previous numerical investigations into Jovian impacts, including the Shoemaker Levy- 9 (SL9) event (Korycansky et al. 2006 ApJ 646. 642; Palotai et al. 2011 ApJ 731. 3), the 2009 bolide (Pond et al. 2012 ApJ 745. 113), and the ephemeral flashes caused by smaller impactors in 2010 and 2012 (Hueso et al. 2013; Submitted to A&A), have covered only up to approximately 3 to 30 seconds after impact. Here, we present further SL9 impacts extending to minutes after collision with Jupiter’s atmosphere, with a focus on the propagation of shock waves generated as a result of the impact events. Using a similar yet more efficient remapping method than previously presented (Pond et al. 2012; DPS 2012), we move our simulation results onto a larger computational grid, conserving quantities with minimal error. The Jovian atmosphere is extended as needed to accommodate the evolution of the features of the impact event. We restart the simulation, allowing the impact event to continue to progress to greater spatial extents and for longer times, but at lower resolutions. This remap-restart process can be implemented multiple times to achieve the spatial and temporal scales needed to investigate the observable effects of waves generated by the deposition of energy and momentum into the Jovian atmosphere by an SL9-like impactor. As before, we use the three-dimensional, parallel hydrodynamics code ZEUS-MP 2 (Hayes et al. 2006 ApJ.SS. 165. 188) to conduct our simulations. Wave characteristics are tracked throughout these simulations. Of particular interest are the wave speeds and wave positions in the atmosphere as a function of time. These properties are compared to the characteristics of the HST rings to see if shock wave behavior within one hour of impact is consistent with waves observed at one hour post-impact and beyond (Hammel et al. 1995 Science 267. 1288). This research was supported by National Science Foundation Grant AST-1109729 and NASA Planetary Atmospheres Program Grant

  20. Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.

  1. Simulation of ultrasonic wave propagation in welds using ray-based methods

    NASA Astrophysics Data System (ADS)

    Gardahaut, A.; Jezzine, K.; Cassereau, D.; Leymarie, N.

    2014-04-01

    Austenitic or bimetallic welds are particularly difficult to control due to their anisotropic and inhomogeneous properties. In this paper, we present a ray-based method to simulate the propagation of ultrasonic waves in such structures, taking into account their internal properties. This method is applied to a smooth representation of the grain orientation in the weld. The propagation model consists of solving the eikonal and transport equations in an inhomogeneous anisotropic medium. Simulation results are presented and compared to finite elements for a distribution of grain orientation expressed in closed form.

  2. Simulations of elastic wave propagation through Voronoi polycrystals

    NASA Astrophysics Data System (ADS)

    Turner, Joseph A.; Ghoshal, Goutam

    2002-11-01

    The scattering of elastic waves in polycrystalline media is relevant for ultrasonic materials characterization and nondestructive evaluation. Ultrasonic attenuation and backscatter are routinely used for extracting microstructural parameters such as grain size and grain texture. The inversion of experimental data requires robust ultrasonic scattering models. Such models are often idealizations of real media through assumptions such as constant density, single grain size, and randomness hypotheses. The accuracy and limits of applicability of these models cannot be fully tested due to practical limits of real materials processing. Here, this problem is examined in terms of numerical simulations of elastic waves through two-dimensional polycrystals. The numerical models are based on the Voronoi polycrystal. Voronoi tessellations have been shown to model accurately the microstructure of polycrystalline metals and ceramics. The Voronoi cells are discretized using finite elements and integrated directly in time. The material properties of the individual Voronoi cells are chosen according to appropriate distributions; here, cubic crystals that are statistically isotropic are used. Results are presented and compared with scattering theories. Issues relevant to spatial/ensemble averaging will also be discussed. These simulations will provide insight into the attenuation models relevant for polycrystalline materials. [Work supported by DOE.]

  3. Hybrid simulations of rotational discontinuities. [Alfven wave propagation in astrophysics

    NASA Technical Reports Server (NTRS)

    Goodrich, C. C.; Cargill, P. J.

    1991-01-01

    1D hybrid simulations of rotational discontinuities (RDs) are presented. When the angle between the discontinuity normal and the magnetic field (theta-BN) is 30 deg, the RD broadens into a quasi-steady state of width 60-80 c/omega-i. The hodogram has a characteristic S-shape. When theta-BN = 60 deg, the RD is much narrower (10 c/omega-i). For right handed rotations, the results are similar to theta-BN = 30 deg. For left handed rotations, the RD does not evolve much from its initial conditions and the S-shape in the hodogram is much less visible. The results can be understood in terms of matching a fast mode wavelike structure upstream of the RD with an intermediate mode one downstream.

  4. Design of a predictive targeting error simulator for MRI-guided prostate biopsy

    NASA Astrophysics Data System (ADS)

    Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor

    2010-02-01

    Multi-parametric MRI is a new imaging modality superior in quality to Ultrasound (US) which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the resulting targeting errors of simulated registrations between prostate volumes given user-provided parameters for organ deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and four Intel Core 2 Quad CPUs each with a speed of 2.40 GHz. Preliminary results of our simulation suggest the maximum tolerable segmentation error given the presence of a 5.0 mm wide small tumor is between 4-5 mm. We intend to validate these results via clinical trials as part of our ongoing work.
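
    The mechanism of introducing smooth, statistically realistic contouring errors via polar curves and cubic splines can be illustrated by perturbing a circle in polar coordinates with a periodic spline through a few random radial offsets; this is only a sketch of the idea, not the simulator's algorithm, and the radius and error magnitude are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(5)

def perturb_contour(radius: float, sigma_mm: float, n_knots: int = 8, n_points: int = 360):
    """Return a closed contour: a circle with smooth random radial perturbations
    built from a periodic cubic spline through n_knots random offsets."""
    knot_angles = np.linspace(0.0, 2.0 * np.pi, n_knots + 1)
    offsets = sigma_mm * rng.standard_normal(n_knots)
    offsets = np.append(offsets, offsets[0])           # enforce periodicity
    spline = CubicSpline(knot_angles, offsets, bc_type="periodic")

    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = radius + spline(theta)
    return r * np.cos(theta), r * np.sin(theta)

# Example: a 20 mm circular "organ" contour with ~2 mm contouring error.
x, y = perturb_contour(radius=20.0, sigma_mm=2.0)
print("max radial deviation from the true contour (mm):",
      np.max(np.abs(np.hypot(x, y) - 20.0)))
```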

  5. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millenium120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one for each of the 56 image pairs. Also, 560 positive tests (with error) were performed, with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though

  6. Simulating underwater plasma sound sources to evaluate focusing performance and analyze errors

    NASA Astrophysics Data System (ADS)

    Ma, Tian; Huang, Jian-Guo; Lei, Kai-Zhuo; Chen, Jian-Feng; Zhang, Qun-Fei

    2010-03-01

    Focused underwater plasma sound sources are being applied in more and more fields. Focusing performance is one of the most important factors determining transmission distance and peak values of the pulsed sound waves. The sound source’s components and focusing mechanism were all analyzed. A model was built in 3D Max and wave strength was measured on the simulation platform. Error analysis was fully integrated into the model so that the effects of processing errors and installation errors on sound focusing performance could be studied. Based on what was practical, ways to limit the errors were proposed. The results of the error analysis should guide the design, machining, placement, debugging and application of underwater plasma sound sources.

  7. Numerical simulation of fracture rocks and wave propagation by means of fractal theory

    SciTech Connect

    Valle G., R. del

    1994-12-31

    A numerical approach was developed for the dynamic simulation of fractured rocks and wave propagation. Based on ideas from percolation theory and fractal growth, a network of particles and springs represents the rock model. To simulate an inhomogeneous medium, the particles and springs have randomly distributed elastic parameters and are implemented in the dynamic Navier equation. Some of the springs snap according to criteria based on the confined stress applied, thereby creating a fractured rock consistent with the physical environment. The basic purpose of this research was to provide a method to construct a fractured rock with confined stress conditions as well as the wave propagation imposed in the model. Such models provide a better understanding of the behavior of wave propagation in fractured media. The synthetic seismic data thus obtained can be used as a tool to develop methods for characterizing fractured rocks by means of geophysical inference.

  8. Time-Sliced Thawed Gaussian Propagation Method for Simulations of Quantum Dynamics.

    PubMed

    Kong, Xiangmeng; Markmann, Andreas; Batista, Victor S

    2016-05-19

    A rigorous method for simulations of quantum dynamics is introduced on the basis of concatenation of semiclassical thawed Gaussian propagation steps. The time-evolving state is represented as a linear superposition of closely overlapping Gaussians that evolve in time according to their characteristic equations of motion, integrated by fourth-order Runge-Kutta or velocity Verlet. The expansion coefficients of the initial superposition are updated after each semiclassical propagation period by implementing the Husimi Transform analytically in the basis of closely overlapping Gaussians. An advantage of the resulting time-sliced thawed Gaussian (TSTG) method is that it allows for full-quantum dynamics propagation without any kind of multidimensional integral calculation, or inversion of overlap matrices. The accuracy of the TSTG method is demonstrated as applied to simulations of quantum tunneling, showing quantitative agreement with benchmark calculations based on the split-operator Fourier transform method. PMID:26845486

  9. Geant4 Simulations of SuperCDMS iZip Detector Charge Carrier Propagation

    NASA Astrophysics Data System (ADS)

    Agnese, Robert; Brandt, Daniel; Redl, Peter; Asai, Makoto; Faiez, Dana; Kelsey, Mike; Bagli, Enrico; Anderson, Adam; Schlupf, Chandler

    2014-03-01

    The SuperCDMS experiment uses germanium crystal detectors instrumented with ionization and phonon readout circuits to search for dark matter. In order to simulate the response of the detectors to particle interactions the SuperCDMS Detector Monte Carlo (DMC) group has been implementing the processes governing electrons and phonons at low temperatures in Geant4. The charge portion of the DMC simulates oblique propagation of the electrons through the L-valleys, propagation of holes through the Γ-valleys, inter-valley scattering, and emission of Neganov-Luke phonons in a complex applied electric field. The field is calculated by applying a directed walk search on a tetrahedral mesh of known potentials and then interpolating the value. This talk will present an overview of the DMC status and a comparison of the charge portion of the DMC to experimental data of electron-hole pair propagation in germanium.
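
    The "interpolate the potential on a tetrahedral mesh" step amounts to barycentric interpolation inside the enclosing tetrahedron; a minimal sketch follows, with made-up geometry and nodal potentials.

```python
import numpy as np

def interpolate_in_tet(vertices: np.ndarray, potentials: np.ndarray, point: np.ndarray) -> float:
    """Linearly interpolate nodal potentials at `point` inside a tetrahedron.

    vertices: (4, 3) array of corner coordinates
    potentials: (4,) array of potentials at the corners
    """
    # Barycentric weights w solve: sum(w) = 1 and sum_i(w_i * v_i) = point.
    A = np.vstack([np.ones(4), vertices.T])          # (4, 4) system matrix
    b = np.concatenate(([1.0], point))
    w = np.linalg.solve(A, b)
    if np.any(w < -1e-12):
        raise ValueError("point lies outside this tetrahedron")
    return float(w @ potentials)

# Illustrative tetrahedron and nodal potentials (volts).
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
phi = np.array([0.0, 1.0, 2.0, 3.0])
print(interpolate_in_tet(verts, phi, np.array([0.25, 0.25, 0.25])))  # -> 1.5
```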

  10. On the propagation of blobs in the magnetotail: MHD simulations

    NASA Astrophysics Data System (ADS)

    Birn, J.; Nakamura, R.; Hesse, M.

    2013-09-01

    Using three-dimensional magnetohydrodynamic (MHD) simulations of the magnetotail, we investigate the fate of entropy-enhanced localized magnetic flux tubes ("blobs"). Such flux tubes may be the result of a slippage process that also generates entropy-depleted flux tubes ("bubbles") or of a rapid localized energy increase, for instance, from wave absorption. We confirm the expectation that the entropy enhancement leads to a tailward motion and that the speed and distance traveled into the tail increase with the entropy enhancement, even though the blobs tend to break up into pieces. The vorticity on the outside of the blobs twists the magnetic field and generates field-aligned currents predominantly of region-2 sense (earthward on the dusk side and tailward on the dawn side), which might provide a possibility for remote identification from the ground. The breakup, however, leads to more turbulent flow patterns, associated with opposite vorticity and the generation of region-1 sense field-aligned currents of lower intensity but approximately equal integrated magnitude.

  11. Analysis of transmission error effects on the transfer of real-time simulation data

    NASA Technical Reports Server (NTRS)

    Credeur, L.

    1977-01-01

    An analysis was made to determine the effect of transmission errors on the quality of data transferred from the Terminal Area Air Traffic Model to a remote site. Data formatting schemes feasible within the operational constraints of the data link were proposed, and their susceptibility to both random bit errors and noise bursts was investigated. It was shown that satisfactory reliability is achieved by a scheme that formats the simulation output into three data blocks, carries the priority data with triple redundancy in the first block, and retransmits that first block with priority when it is received in error.

  12. Kalman filter application to mitigate the errors in the trajectory simulations due to the lunar gravitational model uncertainty

    NASA Astrophysics Data System (ADS)

    Gonçalves, L. D.; Rocco, E. M.; de Moraes, R. V.; Kuga, H. K.

    2015-10-01

    This paper aims to simulate part of the orbital trajectory of the Lunar Prospector mission to analyze the relevance of using a Kalman filter to estimate the trajectory. The study considers the disturbance due to the lunar gravitational potential using one of the most recent models, the LP100K model, which is based on spherical harmonics and considers degree and order up to the value 100. In order to simplify the expression of the gravitational potential and, consequently, to reduce the computational effort required in the simulation, lower values of degree and order are used in some cases. Following this aim, an analysis is made of the error introduced into the simulations when such values of degree and order are used to propagate the spacecraft trajectory and control. This analysis was done using the standard deviation that characterizes the uncertainty for each of the values of degree and order used in the LP100K model for the satellite orbit. With knowledge of the uncertainty of the adopted gravity model, lunar orbital trajectory simulations may be accomplished considering these values of uncertainty. Furthermore, a Kalman filter was also used, which considers the sensor uncertainty that defines the satellite position at each step of the simulation and the model uncertainty, by means of the characteristic variance of the truncated gravity model. Thus, this procedure represents an effort to bring the results obtained using lower values of degree and order of the spherical harmonics closer to the results that would be attained if the maximum accuracy of the LP100K model were adopted. A comparison is also made between the error in the satellite position with and without the Kalman filter. The data for the comparison were obtained from the standard deviation in the velocity increment of the space vehicle.
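
    How the sensor uncertainty R and the model uncertainty Q enter a Kalman filter estimate can be illustrated with a minimal linear filter for a 1D constant-velocity model with position-only measurements; this is a generic sketch, not the filter configuration used for the Lunar Prospector trajectory.

```python
import numpy as np

rng = np.random.default_rng(6)

# 1D constant-velocity model: state x = [position, velocity].
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])                 # we only measure position
Q = 1e-3 * np.eye(2)                       # model (process) uncertainty
R = np.array([[0.5]])                      # sensor (measurement) uncertainty

x = np.array([0.0, 0.0])                   # initial state estimate
P = np.eye(2)                              # initial estimate covariance
truth = np.array([0.0, 1.0])               # true initial state

for _ in range(50):
    truth = F @ truth                                     # true motion
    z = H @ truth + rng.normal(0.0, np.sqrt(R[0, 0]))     # noisy position measurement

    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R                                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                        # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated state:", x, " true state:", truth)
```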

  13. Investigation of elliptical vortex beams propagating in atmospheric turbulence by numerical simulations

    NASA Astrophysics Data System (ADS)

    Taozheng

    2015-08-01

    In recent years, due to the high stability and privacy of vortex beams, the optical vortex has become a hot spot in research on atmospheric optical transmission. We numerically investigate the propagation of vector elliptical vortex beams in a turbulent atmosphere. Numerical simulations are realized with random phase screens to model the vortex beam transport process in atmospheric turbulence, and are used to study the transmission characteristics of the vortex beam (light intensity, phase, polarization, etc.). Our simulation results show that the distortion of the vortex beam during atmospheric transmission is small, making the elliptical vortex beam a promising strategy for space communications.
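
    The random phase screens mentioned above are commonly generated by filtering complex Gaussian noise with the square root of a Kolmogorov phase spectrum and inverse Fourier transforming; a compact sketch follows. The Fried parameter and grid are illustrative, and the low-frequency (subharmonic) correction is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

def kolmogorov_phase_screen(n: int, dx: float, r0: float) -> np.ndarray:
    """FFT-based Kolmogorov phase screen (radians) on an n x n grid.

    n  : grid size (pixels), dx : pixel size (m), r0 : Fried parameter (m).
    Subharmonic (low spatial frequency) correction is omitted for brevity.
    """
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies (1/m)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = np.inf                                  # suppress the zero-frequency pole

    # Kolmogorov phase power spectral density (outer-scale effects ignored).
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)

    df = 1.0 / (n * dx)                               # frequency-grid spacing
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(cn * np.sqrt(psd) * df) * n * n
    return np.real(screen)

# Example: 256 x 256 screen, 1 cm pixels, r0 = 10 cm.
phi = kolmogorov_phase_screen(256, 0.01, 0.10)
print("phase screen rms (rad):", phi.std())
```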

  14. Simulation study of wakefield generation by two color laser pulses propagating in homogeneous plasma

    SciTech Connect

    Kumar Mishra, Rohit; Saroch, Akanksha; Jha, Pallavi

    2013-09-15

    This paper deals with a two-dimensional simulation of electric wakefields generated by two color laser pulses propagating in homogeneous plasma, using VORPAL simulation code. The laser pulses are assumed to have a frequency difference equal to the plasma frequency. Simulation studies are performed for two similarly as well as oppositely polarized laser pulses and the respective amplitudes of the generated longitudinal wakefields for the two cases are compared. Enhancement of wake amplitude for the latter case is reported. This simulation study validates the analytical results presented by Jha et al.[Phys. Plasmas 20, 053102 (2013)].

  15. GPU simulation of nonlinear propagation of dual band ultrasound pulse complexes

    NASA Astrophysics Data System (ADS)

    Kvam, Johannes; Angelsen, Bjørn A. J.; Elster, Anne C.

    2015-10-01

    In a new method of ultrasound imaging, called SURF imaging, dual band pulse complexes composed of overlapping low frequency (LF) and high frequency (HF) pulses are transmitted, where the frequency ratio LF:HF ~ 1:20 and the relative bandwidths of both pulses are ~50-70%. The LF pulse length is hence ~20 times the HF pulse length. The LF pulse is used to nonlinearly manipulate the material elasticity observed by the co-propagating HF pulse. This produces nonlinear interaction effects that give more information on the propagation of the pulse complex. Due to the large difference in frequency and pulse length between the LF and the HF pulses, we have developed a dual level simulation where the LF pulse propagation is first simulated independently of the HF pulse, using a temporal sampling frequency matched to the LF pulse. A separate equation for the HF pulse is developed, where the presimulated LF pulse modifies the propagation velocity. The equations are adapted to parallel processing on a GPU, where nonlinear simulations of a typical 10 MHz HF beam down to 40 mm take ~2 s on a standard GPU. This simulation is hence very useful for studying the manipulation effect of the LF pulse on the HF pulse.

  16. GPU simulation of nonlinear propagation of dual band ultrasound pulse complexes

    SciTech Connect

    Kvam, Johannes; Angelsen, Bjørn A. J.; Elster, Anne C.

    2015-10-28

    In a new method of ultrasound imaging, called SURF imaging, dual band pulse complexes composed of overlapping low frequency (LF) and high frequency (HF) pulses are transmitted, where the frequency ratio LF:HF ~ 1:20 and the relative bandwidths of both pulses are ~50-70%. The LF pulse length is hence ~20 times the HF pulse length. The LF pulse is used to nonlinearly manipulate the material elasticity observed by the co-propagating HF pulse. This produces nonlinear interaction effects that give more information on the propagation of the pulse complex. Due to the large difference in frequency and pulse length between the LF and the HF pulses, we have developed a dual level simulation where the LF pulse propagation is first simulated independently of the HF pulse, using a temporal sampling frequency matched to the LF pulse. A separate equation for the HF pulse is developed, where the presimulated LF pulse modifies the propagation velocity. The equations are adapted to parallel processing on a GPU, where nonlinear simulations of a typical 10 MHz HF beam down to 40 mm take ~2 s on a standard GPU. This simulation is hence very useful for studying the manipulation effect of the LF pulse on the HF pulse.

  17. Molecular dynamics simulation of a glissile dislocation interface propagating a martensitic transformation.

    PubMed

    Lill, J V; Broughton, J Q

    2000-06-19

    The method of Parrinello and Rahman is generalized to include slip in addition to deformation of the simulation cell. Equations of motion are derived, and a microscopic expression for traction is introduced. Lagrangian constraints are imposed so that the combination of deformation and slip conforms to the invariant plane shear characteristic of martensites. Simulation of a model transformation demonstrates the nucleation and propagation of a glissile dislocation interface. PMID:10991054

  18. Characterizing the propagation of gravity waves in 3D nonlinear simulations of solar-like stars

    NASA Astrophysics Data System (ADS)

    Alvan, L.; Strugarek, A.; Brun, A. S.; Mathis, S.; Garcia, R. A.

    2015-09-01

    Context. The revolution of helio- and asteroseismology provides access to the detailed properties of stellar interiors by studying the star's oscillation modes. Among them, gravity (g) modes are formed by constructive interference between progressive internal gravity waves (IGWs) propagating in stellar radiative zones. Our new 3D nonlinear simulations of the interior of a solar-like star allow us to study the excitation, propagation, and dissipation of these waves. Aims: The aim of this article is to clarify our understanding of the behavior of IGWs in a 3D radiative zone and to provide a clear overview of their properties. Methods: We use a method of frequency filtering that reveals the path of individual gravity waves of different frequencies in the radiative zone. Results: We are able to identify the region of propagation of different waves in 2D and 3D, to compare them to linear ray-tracing theory, and to distinguish between propagative and standing waves (g-modes). We also show that the energy carried by waves is distributed in different planes in the sphere, depending on their azimuthal wave number. Conclusions: We are able to isolate individual IGWs from a complex spectrum and to study their propagation in space and time. In particular, we highlight in this paper the necessity of studying the propagation of waves in 3D spherical geometry, since their energy is not equipartitioned in the sphere.

  19. Sampling errors in free energy simulations of small molecules in lipid bilayers.

    PubMed

    Neale, Chris; Pomès, Régis

    2016-10-01

    Free energy simulations are a powerful tool for evaluating the interactions of molecular solutes with lipid bilayers as mimetics of cellular membranes. However, these simulations are frequently hindered by systematic sampling errors. This review highlights recent progress in computing free energy profiles for inserting molecular solutes into lipid bilayers. Particular emphasis is placed on a systematic analysis of the free energy profiles, identifying the sources of sampling errors that reduce computational efficiency, and highlighting methodological advances that may alleviate sampling deficiencies. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg. PMID:26952019

  20. Simulation of Ocean-Generated Microseismic Noise Propagation in the North-East Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Ying, Y.; Bean, C. J.; Lokmer, I.; Faure, T.

    2013-12-01

    Ocean-generated microseisms are small ground oscillations associated with interactions between ocean water waves and the solid Earth. The microseismic noise field is mostly composed of surface waves, whose energy propagates along the ocean floor predominantly as Rayleigh waves, although some Love waves are also present. Microseisms pick up information about the medium along their propagation paths through the interaction between the seismic waves and the structure. Recently, seismologists have become increasingly interested in using cross-correlations of continuously recorded microseismic noise to retrieve information about the Earth's structure. To use this information well, it is important to identify the noise-source-rich regions of the ocean and to quantify the propagation of microseisms from their origins to land-based seismic stations. In this work, we characterize how a microseism propagates along a fluid-solid interface through numerical simulations, in which a North-East Atlantic Ocean model is adopted and a microseism is generated on the bottom of the deep ocean with the expected source mechanism. The spectral element method is used to simulate coupled acoustic/elastic wave propagation in an unstructured mesh, and the coupling between fluid and solid regions is accommodated by a domain decomposition method. The effects of crustal structure, the sediment layer, bathymetry, and ocean load on microseismic wave propagation will be examined, and special attention will be paid to the fluid-solid coupling. We find that microseismic waves are highly dispersive when propagating in the ocean environment.

  1. Impacts of Characteristics of Errors in Radar Rainfall Estimates for Rainfall-Runoff Simulation

    NASA Astrophysics Data System (ADS)

    KO, D.; PARK, T.; Lee, T. S.; Shin, J. Y.; Lee, D.

    2015-12-01

    For flood prediction, weather radar is commonly employed to measure the amount of precipitation and its spatial distribution. However, rainfall estimated from radar contains uncertainty caused by errors such as beam blockage and ground clutter. Although previous studies have focused on removing errors from radar data, it is also crucial to evaluate the runoff volumes that are influenced primarily by those radar errors. Furthermore, the rainfall resolutions used in previous studies of rainfall uncertainty analysis or distributed hydrological simulation are too coarse for practical application. Therefore, in the current study, we tested the effects of radar rainfall errors on rainfall-runoff with a high-resolution approach, called the spatial error model (SEM). In the current study, synthetic generation of random and cross-correlated radar errors was employed as the SEM. A number of events for the Nam River dam region were tested to investigate the peak discharge from a basin according to error variance. The results indicate that the dependent error brings much higher variations in peak discharge than the independent random error. To further investigate the effect of the magnitude of cross-correlation between radar errors, different magnitudes of spatial cross-correlation were employed for the rainfall-runoff simulation. The results demonstrate that stronger correlation leads to higher variation of peak discharge and vice versa. We conclude that the error structure in radar rainfall estimates significantly affects prediction of the runoff peak. Therefore, efforts must focus not only on removing radar rainfall error itself but also on weakening the cross-correlation structure of radar errors in order to forecast flood events more accurately. Acknowledgements This research was supported by a grant from a Strategic Research Project (Development of Flood Warning and Snowfall Estimation Platform Using Hydrological Radars), which
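
    The cross-correlated error generation mentioned above can be sketched as follows; the grid size, error variance, and correlation length are hypothetical, and the exponential correlation model is only one plausible choice, not necessarily the one used in the study.

    ```python
    # Sketch: generate spatially cross-correlated radar-rainfall errors on a grid
    # via Cholesky factorization of an assumed exponential correlation model.
    # Grid size, error variance, and correlation length are hypothetical.
    import numpy as np

    n, cell_km = 10, 1.0                  # 10 x 10 grid of 1 km cells
    sigma = 0.3                           # error standard deviation (multiplicative)
    corr_len_km = 5.0                     # spatial correlation length

    # Pairwise distances between cell centres
    xy = np.array([(i * cell_km, j * cell_km) for i in range(n) for j in range(n)])
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

    # Exponential correlation -> covariance, then its Cholesky factor
    C = sigma**2 * np.exp(-d / corr_len_km)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n * n))   # jitter for numerical safety

    rng = np.random.default_rng(1)
    error_field = (L @ rng.standard_normal(n * n)).reshape(n, n)

    # Impose the error multiplicatively on a radar rainfall field (mm/h)
    radar_rain = np.full((n, n), 5.0)
    perturbed_rain = radar_rain * (1.0 + error_field)
    ```

    Setting the correlation length near zero recovers the independent-error case, which is the comparison the abstract draws.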

  2. Fast error simulation of optical 3D measurements at translucent objects

    NASA Astrophysics Data System (ADS)

    Lutzke, P.; Kühmstedt, P.; Notni, G.

    2012-09-01

    The scan results of optical 3D measurements at translucent objects deviate from the real object's surface. This error is caused by the fact that light is scattered in the object's volume and is not exclusively reflected at its surface. A few approaches have been made to separate the surface-reflected light from the volume-scattered light. For smooth objects the surface-reflected light is concentrated predominantly in the specular direction and can only be observed from a point in that direction. Thus the separation either leads to measurement results that only provide data for near-specular directions or yields data from poorly separated areas. To ensure the flexibility and precision of optical 3D measurement systems for translucent materials it is necessary to improve the understanding of the error-forming process. For this purpose a technique for simulating the 3D measurement of translucent objects is presented. A simple error model is briefly outlined and extended to an efficient simulation environment based upon ordinary raytracing methods. For comparison, the results of a Monte Carlo simulation are presented. Only a few material and object parameters are needed for the raytracing simulation approach. The in-system collection of these material- and object-specific parameters is illustrated. The main concept of developing an error-compensation method based on the simulation environment and the collected parameters is described. The complete procedure uses both the surface-reflected and the volume-scattered light for further processing.

  3. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    PubMed Central

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
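
    To make the adaptive idea concrete, the following sketch applies first-order Lie splitting with a step-doubling local error estimate to a deterministic 1-D reaction-diffusion toy problem; it is not the stochastic RDME/DFSP algorithm of the paper, and the rates, grid, and tolerance are hypothetical.

    ```python
    # Sketch: first-order Lie (reaction/diffusion) splitting on a deterministic
    # 1-D toy problem, with a step-doubling local error estimate used to adapt dt.
    import numpy as np

    nx, dx, D, k_rate = 50, 0.1, 0.01, 1.0
    tol = 1e-4

    def diffuse(u, dt):
        # Explicit finite-difference diffusion sub-step (periodic boundary)
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        return u + dt * D * lap

    def react(u, dt):
        # Exact solution of the logistic reaction du/dt = k*u*(1-u) over dt
        return u / (u + (1.0 - u) * np.exp(-k_rate * dt))

    def lie_step(u, dt):
        return react(diffuse(u, dt), dt)

    u = np.exp(-((np.arange(nx) * dx - 2.5) ** 2))
    t, dt = 0.0, 1e-3
    while t < 0.1:
        big = lie_step(u, dt)                            # one step of size dt
        small = lie_step(lie_step(u, dt / 2), dt / 2)    # two half steps
        err = np.max(np.abs(big - small))                # local error estimate
        if err <= tol:
            u, t = small, t + dt                         # accept the finer solution
        # Adapt dt (local error of a first-order split scales like dt^2)
        dt *= min(2.0, max(0.5, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    ```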

  4. Local error estimates for adaptive simulation of the reaction-diffusion master equation via operator splitting

    NASA Astrophysics Data System (ADS)

    Hellander, Andreas; Lawson, Michael J.; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the diffusive finite-state projection (DFSP) method, to incorporate temporal adaptivity.

  5. EPIC: an Error Propagation/Inquiry Code. [EPIC estimates the variance of a materials balance closed about a materials balance area in a processing plant

    SciTech Connect

    Baker, A.L.

    1985-01-01

    The use of a computer program EPIC (Error Propagation/Inquiry Code) will be discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in the Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple straightforward calculations assuming a process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background. However, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated using a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs.
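
    The variance bookkeeping the abstract describes can be illustrated with a minimal sketch; the measurement points and uncertainties below are hypothetical, and independent measurement errors are assumed so that the materials-balance variance reduces to a simple sum.

    ```python
    # Sketch of the EPIC-style bookkeeping: the materials balance combines inputs,
    # outputs, and inventories, and its variance is the algebraic sum of the
    # variances contributed at each measurement point (independent errors assumed).
    # All numbers are hypothetical.
    import math

    # (measured value in kg, relative standard deviation) per measurement point
    inputs      = [(100.0, 0.01), (250.0, 0.02)]
    outputs     = [(240.0, 0.015), (95.0, 0.01)]
    inventories = [(10.0, 0.05), (25.0, 0.05)]   # beginning, ending

    def var(point):
        value, rel_sd = point
        return (value * rel_sd) ** 2

    mb_variance = sum(var(p) for p in inputs + outputs + inventories)
    mb_sigma = math.sqrt(mb_variance)
    print(f"materials-balance standard deviation: {mb_sigma:.2f} kg")
    ```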

  6. Rescaled Local Interaction Simulation Approach for Shear Wave Propagation Modelling in Magnetic Resonance Elastography.

    PubMed

    Hashemiyan, Z; Packo, P; Staszewski, W J; Uhl, T

    2016-01-01

    Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808

  7. Rescaled Local Interaction Simulation Approach for Shear Wave Propagation Modelling in Magnetic Resonance Elastography

    PubMed Central

    Packo, P.; Staszewski, W. J.; Uhl, T.

    2016-01-01

    Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808

  8. Bulk Quantum Computation with Pulsed Electron Paramagnetic Resonance: Simulations of Single-Qubit Error Correction Schemes

    NASA Astrophysics Data System (ADS)

    Ishmuratov, I. K.; Baibekov, E. I.

    2015-12-01

    We investigate the possibility of restoring transient nutations of electron spin centers embedded in a solid using specific composite pulse sequences developed previously for application in nuclear magnetic resonance spectroscopy. We treat two types of systematic errors simultaneously: (i) rotation angle errors related to the spatial distribution of the microwave field amplitude in the sample volume, and (ii) off-resonance errors related to the spectral distribution of Larmor precession frequencies of the electron spin centers. Our direct simulations of the transient signal in erbium- and chromium-doped CaWO4 crystal samples with and without error corrections show that the application of the selected composite pulse sequences can substantially increase the lifetime of Rabi oscillations. Finally, we discuss the applicability limitations of the studied pulse sequences for use in solid-state electron paramagnetic resonance spectroscopy.

  9. Nuclear Reaction Models Responsible for Simulation of Neutron-induced Soft Errors in Microelectronics

    SciTech Connect

    Watanabe, Y.; Abe, S.

    2014-06-15

    Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high-energy component from 10 MeV up to several hundred MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.

  10. A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei

    2015-10-01

    In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optical axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; the CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optical axis drift of the laser was analyzed and measured and did not exceed 0.06 mrad. The measurement results indicate that the maximum of the errors and drifts of the methodology is less than 0.275 mrad, so the methodology can support dynamic angle-of-sight measurements of higher precision and larger scale.

  11. Using Simulation to Address Hierarchy-Related Errors in Medical Practice

    PubMed Central

    Calhoun, Aaron William; Boone, Megan C; Porter, Melissa B; Miller, Karen H

    2014-01-01

    Objective: Hierarchy, the unavoidable authority gradients that exist within and between clinical disciplines, can lead to significant patient harm in high-risk situations if not mitigated. High-fidelity simulation is a powerful means of addressing this issue in a reproducible manner, but participant psychological safety must be assured. Our institution experienced a hierarchy-related medication error that we subsequently addressed using simulation. The purpose of this article is to discuss the implementation and outcome of these simulations. Methods: Script and simulation flowcharts were developed to replicate the case. Each session included the use of faculty misdirection to precipitate the error. Care was taken to assure psychological safety via carefully conducted briefing and debriefing periods. Case outcomes were assessed using the validated Team Performance During Simulated Crises Instrument. Gap analysis was used to quantify team self-insight. Session content was analyzed via video review. Results: Five sessions were conducted (3 in the pediatric intensive care unit and 2 in the Pediatric Emergency Department). The team was unsuccessful at addressing the error in 4 (80%) of 5 cases. Trends toward lower communication scores (3.4/5 vs 2.3/5), as well as poor team self-assessment of communicative ability, were noted in unsuccessful sessions. Learners had a positive impression of the case. Conclusions: Simulation is a useful means to replicate hierarchy error in an educational environment. This methodology was viewed positively by learner teams, suggesting that psychological safety was maintained. Teams that did not address the error successfully may have impaired self-assessment ability in the communication skill domain. PMID:24867545

  12. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2014-12-01

    Physically based models provide insights into key hydrologic processes, but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology. Here we employ global sensitivity analysis to explore how different error types (i.e., bias, random errors), different error distributions, and different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use Sobol' global sensitivity analysis, which is typically used for model parameters, but adapted here for testing model sensitivity to co-existing errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 520 000 Monte Carlo simulations across four sites and four different scenarios. Model outputs were generally (1) more sensitive to forcing biases than random errors, (2) less sensitive to forcing error distributions, and (3) sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a significant impact depending on forcing error magnitudes. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
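
    For readers unfamiliar with Sobol indices, the following plain-NumPy sketch shows a Saltelli-style pick-freeze estimator of first-order indices on a toy stand-in model; it does not reproduce the Utah Energy Balance model or the 1 520 000-run experiment of the study, and the toy inputs are hypothetical.

    ```python
    # Sketch: first-order Sobol indices with a Saltelli-style pick-freeze estimator,
    # applied to a toy stand-in for the snow model. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    N, k = 20000, 3                       # samples, number of uncertain inputs

    def model(x):
        # Hypothetical response mixing a precipitation bias, a temperature bias,
        # and a random-error amplitude (x2 acts only through an interaction)
        return 2.0 * x[:, 0] + x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 2]

    A = rng.uniform(-1, 1, size=(N, k))
    B = rng.uniform(-1, 1, size=(N, k))
    fA, fB = model(A), model(B)
    var_Y = np.var(np.concatenate([fA, fB]))

    S1 = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # "freeze" all inputs except the i-th
        S1[i] = np.mean(fB * (model(ABi) - fA)) / var_Y

    print("first-order Sobol indices:", np.round(S1, 3))
    ```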

  13. Evaluation of clinical margins via simulation of patient setup errors in prostate IMRT treatment plans

    SciTech Connect

    Gordon, J. J.; Crimaldi, A. J.; Hagan, M.; Moore, J.; Siebers, J. V.

    2007-01-15

    This work evaluates: (i) the size of random and systematic setup errors that can be absorbed by 5 mm clinical target volume (CTV) to planning target volume (PTV) margins in prostate intensity modulated radiation therapy (IMRT); (ii) agreement between simulation results and published margin recipes; and (iii) whether shifting contours with respect to a static dose distribution accurately predicts dose coverage due to setup errors. In 27 IMRT treatment plans created with 5 mm CTV-to-PTV margins, random setup errors with standard deviations (SDs) of 1.5, 3, 5 and 10 mm were simulated by fluence convolution. Systematic errors with identical SDs were simulated using two methods: (a) shifting the isocenter and recomputing dose (isocenter shift), and (b) shifting patient contours with respect to the static dose distribution (contour shift). Maximum tolerated setup errors were evaluated such that 90% of plans had target coverage equal to the planned PTV coverage. For coverage criteria consistent with published margin formulas, plans with 5 mm margins were found to absorb combined random and systematic SDs ≈ 3 mm. Published recipes require margins of 8-10 mm for 3 mm SDs. For the prostate IMRT cases presented here a 5 mm margin would suffice, indicating that published recipes may be pessimistic. We found significant errors in individual plan doses given by the contour shift method. However, dose population plots (DPPs) given by the contour shift method agreed with the isocenter shift method for all structures except the nodal CTV and small bowel. For the nodal CTV, contour shift DPP differences were due to the structure moving outside the patient. Small bowel DPP errors were an artifact of large relative differences at low doses. Estimating individual plan doses by shifting contours with respect to a static dose distribution is not recommended. However, approximating DPPs is acceptable, provided care is taken with structures such as the nodal CTV which lie close
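
    A one-dimensional analogue of the two error types can be sketched as follows: random setup error blurs the dose (a stand-in for fluence convolution), while a systematic error shifts the dose relative to the target (the isocenter-shift picture). The geometry, margin, and error magnitudes are hypothetical and purely illustrative.

    ```python
    # Sketch: 1-D analogue of random (Gaussian blur) and systematic (shift) setup
    # errors acting on an idealized dose profile with a 5 mm CTV-to-PTV margin.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    dx = 1.0                                      # mm per grid point
    x = np.arange(-60, 61) * dx
    dose = ((x > -30) & (x < 30)).astype(float)   # idealized flat dose over the PTV
    ctv = (x > -25) & (x < 25)                    # CTV inside a 5 mm margin

    sigma_random_mm = 3.0
    blurred = gaussian_filter1d(dose, sigma_random_mm / dx)  # random error

    shift_mm = 3
    shifted = np.roll(dose, int(shift_mm / dx))              # systematic error

    for label, d in [("random 3 mm", blurred), ("systematic 3 mm", shifted)]:
        print(label, "minimum CTV dose:", round(d[ctv].min(), 3))
    ```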

  14. Measurement and simulation of clock errors from resource-constrained embedded systems

    NASA Astrophysics Data System (ADS)

    Collett, M. A.; Matthews, C. E.; Esward, T. J.; Whibberley, P. B.

    2010-07-01

    Resource-constrained embedded systems such as wireless sensor networks are becoming increasingly sought-after in a range of critical sensing applications. Hardware for such systems is typically developed as a general tool, intended for research and flexibility. These systems often have unexpected limitations and sources of error when being implemented for specific applications. We investigate via measurement and simulation the output of the onboard clock of a Crossbow MICAz testbed, comprising a quartz oscillator accessed via a combination of hardware and software. We show that the clock output available to the user suffers a number of instabilities and errors. Using a simple software simulation of the system based on a series of nested loops, we identify the source of each component of the error, finding that there is a 7.5 × 10⁻⁶ probability that a given oscillation from the governing crystal will be miscounted, resulting in frequency jitter over a 60 µHz range.
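
    The miscounting mechanism identified above can be sketched as follows; the nominal oscillator frequency is an assumption (a typical watch-crystal value), so the resulting jitter magnitudes are illustrative rather than those of the MICAz hardware.

    ```python
    # Sketch: each crystal oscillation has probability p of being miscounted;
    # over a 1 s counting interval this produces a count (frequency) error.
    import numpy as np

    f_nominal = 32_768          # Hz, typical watch-crystal value (assumed)
    p_miscount = 7.5e-6         # probability quoted in the abstract
    rng = np.random.default_rng(3)

    trials = 1000
    # Number of missed ticks per 1 s interval is binomial(f_nominal, p)
    missed = rng.binomial(f_nominal, p_miscount, size=trials)
    freq_error_hz = -missed.astype(float)      # each miss lowers the count by 1

    print("mean frequency error:", freq_error_hz.mean(), "Hz")
    print("spread (min..max):", freq_error_hz.min(), "to", freq_error_hz.max(), "Hz")
    ```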

  15. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  16. Computational study of nonlinear plasma waves: 1: Simulation model and monochromatic wave propagation

    NASA Technical Reports Server (NTRS)

    Matda, Y.; Crawford, F. W.

    1974-01-01

    An economical low noise plasma simulation model is applied to a series of problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of magnetic field. The model is described and tested, first in the absence of an applied signal, and then with a small amplitude perturbation, to establish the low noise features and to verify the theoretical linear dispersion relation at wave energy levels as low as 10⁻⁶ of the plasma thermal energy. The method is then used to study propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories. The additional phenomena of sideband instability and satellite growth, stimulated by large amplitude wave propagation and the resulting particle trapping, are described.

  17. Effects of Crew Resource Management Training on Medical Errors in a Simulated Prehospital Setting

    ERIC Educational Resources Information Center

    Carhart, Elliot D.

    2012-01-01

    This applied dissertation investigated the effect of crew resource management (CRM) training on medical errors in a simulated prehospital setting. Specific areas addressed by this program included situational awareness, decision making, task management, teamwork, and communication. This study is believed to be the first investigation of CRM…

  18. Gaussian beam propagation in anisotropic turbulence along horizontal links: theory, simulation, and laboratory implementation.

    PubMed

    Xiao, Xifeng; Voelz, David G; Toselli, Italo; Korotkova, Olga

    2016-05-20

    Experimental and theoretical work has shown that atmospheric turbulence can exhibit "non-Kolmogorov" behavior including anisotropy and modifications of the classically accepted spatial power spectral slope, -11/3. In typical horizontal scenarios, atmospheric anisotropy implies that the variations in the refractive index are more spatially correlated in both horizontal directions than in the vertical. In this work, we extend Gaussian beam theory for propagation through Kolmogorov turbulence to the case of anisotropic turbulence along the horizontal direction. We also study the effects of different spatial power spectral slopes on the beam propagation. A description is developed for the average beam intensity profile, and the results for a range of scenarios are demonstrated for the first time with a wave optics simulation and a spatial light modulator-based laboratory benchtop counterpart. The theoretical, simulation, and benchtop intensity profiles show good agreement and illustrate that an elliptically shaped beam profile can develop upon propagation. For stronger turbulent fluctuation regimes and larger anisotropies, the theory predicts a slightly more elliptical form of the beam than is generated by the simulation or benchtop setup. The theory also predicts that without an outer scale limit, the beam width becomes unbounded as the power spectral slope index α approaches a maximum value of 4. This behavior is not seen in the simulation or benchtop results because the numerical phase screens used for these studies do not model the unbounded wavefront tilt component implied in the analytic theory. PMID:27411135
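
    A minimal FFT-based phase screen with an anisotropy factor and an adjustable power-law slope, of the kind used in such wave-optics simulations, might look as follows; the grid, anisotropy, and slope values are hypothetical and the variance normalization is schematic rather than the calibrated structure-constant scaling of the paper.

    ```python
    # Sketch: FFT-based random phase screen with a non-Kolmogorov power-law slope
    # alpha and horizontal anisotropy factor mu (normalization is schematic).
    import numpy as np

    N, dx = 256, 0.01            # grid points and spacing [m] (assumed)
    alpha = 3.8                  # spectral slope index (Kolmogorov would be 11/3)
    mu = 2.0                     # anisotropy: x-direction correlations stretched

    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    f2 = (FX / mu) ** 2 + FY ** 2
    f2[0, 0] = np.inf            # suppress the untreated zero-frequency component

    spectrum = f2 ** (-alpha / 2.0)                  # anisotropic power law
    rng = np.random.default_rng(4)
    noise = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    screen = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
    screen *= 1.0 / screen.std()                     # rescale to unit variance (schematic)
    ```

    Note that, as the abstract points out, purely FFT-generated screens under-represent the lowest spatial frequencies (the wavefront tilt), which is one reason simulation and analytic theory can diverge as the slope index approaches 4.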

  19. PUQ: A code for non-intrusive uncertainty propagation in computer simulations

    NASA Astrophysics Data System (ADS)

    Hunt, Martin; Haley, Benjamin; McLennan, Michael; Koslowski, Marisol; Murthy, Jayathi; Strachan, Alejandro

    2015-09-01

    We present a software package for the non-intrusive propagation of uncertainties in input parameters through computer simulation codes or mathematical models and associated analysis; we demonstrate its use to drive micromechanical simulations using a phase field approach to dislocation dynamics. The PRISM uncertainty quantification framework (PUQ) offers several methods to sample the distribution of input variables and to obtain surrogate models (or response functions) that relate the uncertain inputs with the quantities of interest (QoIs); the surrogate models are ultimately used to propagate uncertainties. PUQ requires minimal changes in the simulation code, just those required to annotate the QoI(s) for its analysis. Collocation methods include Monte Carlo, Latin Hypercube and Smolyak sparse grids and surrogate models can be obtained in terms of radial basis functions and via generalized polynomial chaos. PUQ uses the method of elementary effects for sensitivity analysis in Smolyak runs. The code is available for download and also available for cloud computing in nanoHUB. PUQ orchestrates runs of the nanoPLASTICITY tool at nanoHUB where users can propagate uncertainties in dislocation dynamics simulations using simply a web browser, without downloading or installing any software.
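
    The non-intrusive workflow the abstract outlines (sample the uncertain inputs, run the black-box model, fit a surrogate, propagate uncertainty through the surrogate) can be sketched as follows; the simple Latin hypercube sampler and quadratic response surface below are stand-ins for PUQ's Smolyak, polynomial-chaos, and radial-basis options, and the black-box function is hypothetical.

    ```python
    # Sketch of a non-intrusive uncertainty propagation workflow: LHS sampling,
    # black-box evaluations, a quadratic surrogate, and cheap Monte Carlo on it.
    import numpy as np

    rng = np.random.default_rng(5)

    def latin_hypercube(n, k):
        # One uniform draw per stratum and dimension, strata shuffled per dimension
        samples = (rng.random((n, k)) + np.arange(n)[:, None]) / n
        for j in range(k):
            rng.shuffle(samples[:, j])
        return samples

    def black_box(x):
        # Stand-in for an expensive simulation with two uncertain inputs in [0, 1)
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    X = latin_hypercube(40, 2)                   # 40 training runs
    y = black_box(X)

    # Quadratic response surface fitted by least squares
    design = np.column_stack([np.ones(len(X)), X, X**2, X[:, 0] * X[:, 1]])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)

    # Propagate uncertainty through the surrogate with cheap Monte Carlo
    Xs = rng.random((100_000, 2))
    Ds = np.column_stack([np.ones(len(Xs)), Xs, Xs**2, Xs[:, 0] * Xs[:, 1]])
    ys = Ds @ coef
    print("QoI mean and std:", ys.mean(), ys.std())
    ```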

  20. Stream-floodwave propagation through the Great Bend alluvial aquifer, Kansas: Field measurements and numerical simulations

    USGS Publications Warehouse

    Sophocleous, M.A.

    1991-01-01

    The hypothesis is explored that groundwater-level rises in the Great Bend Prairie aquifer of Kansas are caused not only by water percolating downward through the soil but also by pressure pulses from stream flooding that propagate in a translatory motion through numerous high hydraulic diffusivity buried channels crossing the Great Bend Prairie aquifer in an approximately west to east direction. To validate this hypothesis, two transects of wells in a north-south and east-west orientation crossing and alongside some paleochannels in the area were instrumented with water-level-recording devices; streamflow data from all area streams were obtained from available stream-gaging stations. A theoretical approach was also developed to conceptualize numerically the stream-aquifer processes. The field data and numerical simulations provided support for the hypothesis. Thus, observation wells located along the shoulders or in between the inferred paleochannels show little or no fluctuations and no correlations with streamflow, whereas wells located along paleochannels show high water-level fluctuations and good correlation with the streamflows of the stream connected to the observation site by means of the paleochannels. The stream-aquifer numerical simulation results demonstrate that the larger the hydraulic diffusivity of the aquifer, the larger the extent of pressure pulse propagation and the faster the propagation speed. The conceptual simulation results indicate that long-distance propagation of stream floodwaves (of the order of tens of kilometers) through the Great Bend aquifer is indeed feasible with plausible stream and aquifer parameters. The sensitivity analysis results indicate that the extent and speed of pulse propagation is more sensitive to variations of stream roughness (Manning's coefficient) and stream channel slope than to any aquifer parameter. © 1991.

  1. Exploring errors in paleoclimate proxy reconstructions using Monte Carlo simulations: paleotemperature from mollusk and coral geochemistry

    NASA Astrophysics Data System (ADS)

    Carré, M.; Sachs, J. P.; Wallace, J. M.; Favier, C.

    2012-03-01

    Quantitative reconstructions of the past climate statistics from geochemical coral or mollusk records require quantified error bars in order to properly interpret the amplitude of the climate change and to perform meaningful comparisons with climate model outputs. We introduce here a more precise categorization of reconstruction errors, differentiating the error bar due to the proxy calibration uncertainty from the standard error due to sampling and variability in the proxy formation process. Then, we propose a numerical approach based on Monte Carlo simulations with surrogate proxy-derived climate records. These are produced by perturbing a known time series in a way that mimics the uncertainty sources in the proxy climate reconstruction. A freely available algorithm, MoCo, was designed to be parameterized by the user and to calculate realistic systematic and standard errors of the mean and the variance of the annual temperature, and of the mean and the variance of the temperature seasonality reconstructed from marine accretionary archive geochemistry. In this study, the algorithm is used for sensitivity experiments in a case study to characterize and quantitatively evaluate the sensitivity of systematic and standard errors to sampling size, stochastic uncertainty sources, archive-specific biological limitations, and climate non-stationarity. The results of the experiments yield an illustrative example of the range of variations of the standard error and the systematic error in the reconstruction of climate statistics in the Eastern Tropical Pacific. Thus, we show that the sample size and the climate variability are the main sources of the standard error. The experiments allowed the identification and estimation of systematic bias that would not otherwise be detected because of limited modern datasets. Our study demonstrates that numerical simulations based on Monte Carlo analyses are a simple and powerful approach to improve the understanding of the proxy records
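
    The Monte Carlo idea behind such surrogate proxy records can be sketched as follows; the seasonal cycle, calibration uncertainty, noise level, and sample size are all hypothetical and serve only to illustrate how the systematic and standard errors are separated.

    ```python
    # Sketch: perturb a known "true" monthly temperature series the way proxy
    # formation and calibration would, reconstruct the statistic many times, and
    # separate the systematic error (mean offset) from the standard error (spread).
    import numpy as np

    rng = np.random.default_rng(6)
    months = np.arange(360)                            # 30 years of monthly values
    true_T = 24 + 3 * np.sin(2 * np.pi * months / 12)  # true seasonal cycle, deg C
    true_mean = true_T.mean()

    n_shells, n_trials = 10, 2000
    recon_means = np.empty(n_trials)
    for t in range(n_trials):
        calib_bias = rng.normal(0.0, 0.5)              # calibration uncertainty (per trial)
        picks = rng.choice(months, size=n_shells, replace=False)   # sampled archives
        noisy = true_T[picks] + calib_bias + rng.normal(0.0, 1.0, n_shells)
        recon_means[t] = noisy.mean()

    systematic_error = recon_means.mean() - true_mean
    standard_error = recon_means.std()
    print(f"systematic error: {systematic_error:+.3f} C, standard error: {standard_error:.3f} C")
    ```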

  2. Simulating Reflective Propagating Slow-wave/flow in a Flaring Loop

    NASA Astrophysics Data System (ADS)

    Fang, X.

    2015-12-01

    Quasi-periodic propagating intensity disturbances have been observed in large coronal loops in EUV images over a decade, and are widely accepted to be slow magnetosonic waves. However, spectroscopic observations from Hinode/EIS revealed their association with persistent coronal upflows, making this interpretation debatable. We perform a 2.5D magnetohydrodynamic simulation to imitate the chromospheric evaporation and the following reflected patterns in a post-flare loop. Our model encompasses the corona, transition region, and chromosphere. We demonstrate that the quasi-periodic propagating intensity variations captured by our synthesized AIA 131 and 94 Å emission images match the previous observations well. With particle tracers in the simulation, we confirm that these quasi-periodic propagating intensity variations consist of reflected slow mode waves and mass flows with an average speed of 310 km/s in an 80 Mm length loop with an average temperature of 9 MK. With the synthesized Doppler shift velocity and intensity maps in SUMER Fe XIX line emission, we confirm that these reflected slow mode waves are propagating waves.

  3. A study of differentiation errors in large-eddy simulations based on the EDQNM theory

    SciTech Connect

    Berland, J.; Bogey, C.; Bailly, C.

    2008-09-10

    This paper is concerned with the investigation of numerical errors in large-eddy simulations by means of two-point turbulence modeling. Based on the eddy-damped quasi-normal Markovian (EDQNM) theory, a stochastic model is developed in order to predict the time evolution of the kinetic energy spectrum obtained by a large-eddy simulation (LES), including the effects of the numerics. Using this framework, the influence of the accuracy of the approximate space differencing schemes on LES quality is studied, for decaying homogeneous isotropic incompressible turbulence, with Reynolds numbers Re_λ based on the transverse Taylor scale equal to 780, 2500 and 8000. The results show that the discretization of the filtered Navier-Stokes equations leads to differentiation and aliasing errors. Error spectra are also presented, and indicate that the numerical errors mainly originate from the approximate differentiation. In addition, increasing the order of accuracy of the differencing schemes or using algorithms optimized in the Fourier space is found to widen the range of well-resolved scales. Unfortunately, for all the schemes, the smaller scales with wavenumbers close to the grid cut-off wavenumber are badly calculated and generate differentiation errors over the whole energy spectrum. The eventual use of explicit filtering to remove spurious motions with short wavelength is finally shown to significantly improve LES accuracy.
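
    The differentiation error at issue here is conveniently seen through the modified wavenumber of a finite-difference scheme, sketched below for second- and fourth-order central differences; this illustrates the underlying mechanism only and is not the EDQNM model itself.

    ```python
    # Sketch: an exact derivative of exp(ikx) returns ik, whereas a finite-difference
    # scheme returns i*k_eff with k_eff < k near the grid cut-off, which is the
    # differentiation error discussed in the abstract.
    import numpy as np

    h = 1.0
    k = np.linspace(1e-3, np.pi, 200) / h             # resolvable wavenumbers up to pi/h

    k_exact = k
    k_c2 = np.sin(k * h) / h                                     # 2nd-order central
    k_c4 = (8 * np.sin(k * h) - np.sin(2 * k * h)) / (6 * h)     # 4th-order central

    # Relative differentiation error at half the cut-off wavenumber
    for name, keff in [("2nd-order", k_c2), ("4th-order", k_c4)]:
        err = np.abs(keff - k_exact) / k_exact
        print(name, "error at k = 0.5*pi/h:", round(np.interp(0.5 * np.pi / h, k, err), 3))
    ```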

  4. A simulation study with a new residual ionospheric error model for GPS radio occultation climatologies

    NASA Astrophysics Data System (ADS)

    Danzer, J.; Healy, S. B.; Culverwell, I. D.

    2015-08-01

    In this study, a new model was explored which corrects for higher order ionospheric residuals in Global Positioning System (GPS) radio occultation (RO) data. Recently, the theoretical basis of this new "residual ionospheric error model" has been outlined (Healy and Culverwell, 2015). The method was tested in simulations with a one-dimensional model ionosphere. The proposed new model for computing the residual ionospheric error is the product of two factors, one of which expresses its variation from profile to profile and from time to time in terms of measurable quantities (the L1 and L2 bending angles), while the other describes the weak variation with altitude. A simple integral expression for the residual error (Vorob'ev and Krasil'nikova, 1994) has been shown to be in excellent numerical agreement with the exact value, for a simple Chapman layer ionosphere. In this case, the "altitudinal" element of the residual error varies (decreases) by no more than about 25 % between ~10 and ~100 km for physically reasonable Chapman layer parameters. For other simple model ionospheres the integral can be evaluated exactly, and results are in reasonable agreement with those of an equivalent Chapman layer. In this follow-up study the overall objective was to explore the validity of the new residual ionospheric error model for more detailed simulations, based on modeling through a complex three-dimensional ionosphere. The simulation study was set up, simulating day and night GPS RO profiles for the period of a solar cycle with and without an ionosphere. The residual ionospheric error was studied, the new error model was tested, and temporal and spatial variations of the model were investigated. The model performed well in the simulation study, capturing the temporal variability of the ionospheric residual. Although it was not possible, due to high noise of the simulated bending-angle profiles at mid- to high latitudes, to perform a thorough latitudinal investigation of the
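
    For reference, the standard first-order dual-frequency bending-angle correction, and the factorized form of the residual error suggested by the abstract, can be written as follows; κ(a) denotes the weakly altitude-dependent factor, whose exact definition follows Healy and Culverwell (2015) and is not reproduced here.

    ```latex
    % Dual-frequency (first-order) ionospheric correction of bending angles, and the
    % factorized residual-error form indicated by the abstract (notation illustrative).
    \begin{align}
      \alpha_c(a) &= \frac{f_1^2\,\alpha_{L1}(a) - f_2^2\,\alpha_{L2}(a)}{f_1^2 - f_2^2},\\
      \Delta\alpha(a) &\approx \kappa(a)\,\bigl[\alpha_{L1}(a) - \alpha_{L2}(a)\bigr]^{2}.
    \end{align}
    ```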

  5. Accelerating spectral-element simulations of seismic wave propagation using local time stepping

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Rietmann, M.; Galvez, P.; Nissen-Meyer, T.; Grote, M.; Schenk, O.

    2013-12-01

    Seismic tomography using full-waveform inversion requires accurate simulations of seismic wave propagation in complex 3D media. However, finite element meshing in complex media often leads to areas of local refinement, generating small elements that accurately capture e.g. strong topography and/or low-velocity sediment basins. For explicit time schemes, this dramatically reduces the global time-step for wave-propagation problems due to numerical stability conditions, ultimately making seismic inversions prohibitively expensive. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. Numerical simulations are thus liberated of global time-step constraints potentially speeding up simulation runtimes significantly. We present here a new, efficient multi-level LTS-Newmark scheme for general use with spectral-element methods (SEM) with applications in seismic wave propagation. We fit the implementation of our scheme onto the package SPECFEM3D_Cartesian, which is a widely used community code, simulating seismic and acoustic wave propagation in earth-science applications. Our new LTS scheme extends the 2nd-order accurate Newmark time-stepping scheme, and leads to an efficient implementation, producing real-world speedup of multi-resolution seismic applications. Furthermore, we generalize the method to utilize many refinement levels with a design specifically for continuous finite elements. We demonstrate performance speedup using a state-of-the-art dynamic earthquake rupture model for the Tohoku-Oki event, which is currently limited by small elements along the rupture fault. Utilizing our new algorithmic LTS implementation together with advances in exploiting graphic processing units (GPUs), numerical seismic wave propagation simulations in complex media will dramatically reduce computation times, empowering high

  6. Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors

    NASA Astrophysics Data System (ADS)

    Yan, Feifei; Chang, Wenge; Li, Xiangyang

    2015-12-01

    Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is a key technique in bistatic SAR (BiSAR) systems, and raw data simulation is an effective tool for verifying time and frequency synchronization techniques. Based on the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through a 2-D inverse Stolt transform in the 2-D frequency domain and phase compensation in the range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results for point targets and an extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.

  7. Little shop of errors: an innovative simulation patient safety workshop for community health care professionals.

    PubMed

    Tupper, Judith B; Pearson, Karen B; Meinersmann, Krista M; Dvorak, Jean

    2013-06-01

    Continuing education for health care workers is an important mechanism for maintaining patient safety and high-quality health care. Interdisciplinary continuing education that incorporates simulation can be an effective teaching strategy for improving patient safety. Health care professionals who attended a recent Patient Safety Academy had the opportunity to experience firsthand a simulated situation that included many potential patient safety errors. This high-fidelity activity combined the best practice components of a simulation and a collaborative experience that promoted interdisciplinary communication and learning. Participants were challenged to see, learn, and experience "ah-ha" moments of insight as a basis for error reduction and quality improvement. This innovative interdisciplinary educational training method can be offered in place of traditional lecture or online instruction in any facility, hospital, nursing home, or community care setting. PMID:23654294

  8. Attributing uncertainties in simulated biospheric carbon fluxes to different error sources

    NASA Astrophysics Data System (ADS)

    Lin, J. C.; Pejam, M. R.; Chan, E.; Wofsy, S. C.; Gottlieb, E. W.; Margolis, H. A.; McCaughey, J. H.

    2011-06-01

    Estimating the current sources and sinks of carbon and projecting future levels of CO2 and climate require biospheric carbon models that cover the landscape. Such models inevitably suffer from deficiencies and uncertainties. This paper addresses how to quantify errors in modeled carbon fluxes and then trace them to specific input variables. To date, few studies have examined uncertainties in biospheric models in a quantitative fashion relevant to landscape-scale simulations. In this paper, we introduce a general framework to quantify errors in biospheric carbon models that "unmixes" the contributions to the total uncertainty in simulated carbon fluxes and attributes the error to different variables. To illustrate this framework we apply a simple biospheric model, the Vegetation Photosynthesis and Respiration Model (VPRM), in boreal forests of central Canada, using eddy covariance flux measurement data from two main sites of the Canadian Carbon Program (CCP). We explicitly distinguish between systematic errors ("biases") and random errors and focus on the impact of errors present in biospheric parameters as well as driver data sets (satellite indices, temperature, solar radiation, and land cover). Among the driver data sets, biases in downward shortwave radiation accumulated to the largest amount and accounted for a significant percentage of the annually summed carbon uptake. However, the largest cumulative errors were shown to stem from biospheric parameters controlling the light-use efficiency and respiration-temperature relationships. This work represents a step toward a carbon model-data fusion system because in such systems the outcome is determined as much by uncertainties as by the measurements themselves.

  9. Paul Trap Simulator Experiment to Model Intense Beam Propagation in Alternating-gradient Transport Systems

    SciTech Connect

    Erik P. Gilson; Ronald C. Davidson; Philip C. Efthimion; Richard Majeski

    2004-01-29

    The results presented here demonstrate that the Paul Trap Simulator Experiment (PTSX) simulates the propagation of intense charged particle beams over distances of many kilometers through magnetic alternating-gradient (AG) transport systems by making use of the similarity between the transverse dynamics of particles in the two systems. Plasmas have been trapped that correspond to normalized intensity parameters s = ω_p²(0)/2ω_q² ≲ 0.8, where ω_p(r) is the plasma frequency and ω_q is the average transverse focusing frequency in the smooth-focusing approximation. The measured root-mean-squared (RMS) radius of the beam is consistent with a model, equally applicable to both PTSX and AG systems, that balances the average inward confining force against the outward pressure-gradient and space-charge forces. The PTSX device confines one-component cesium ion plasmas for hundreds of milliseconds, which is equivalent to over 10 km of beam propagation.

  10. A Large Scale Simulation of Ultrasonic Wave Propagation in Concrete Using Parallelized EFIT

    NASA Astrophysics Data System (ADS)

    Nakahata, Kazuyuki; Tokunaga, Jyunichi; Kimoto, Kazushi; Hirose, Sohichi

    A time-domain simulation tool for ultrasonic propagation in concrete is developed using the elastodynamic finite integration technique (EFIT) and image-based modeling. The EFIT is a grid-based time-domain differential technique and easily treats the different boundary conditions in an inhomogeneous material such as concrete. Here, the geometry of the concrete is determined from a scanned image, and the processed color bitmap image is fed into the EFIT. Although ultrasonic wave simulation in such a complex material is computationally demanding, we execute the EFIT with a parallel computing technique on a shared-memory computer system. In this study, the formulation of the EFIT and the treatment of the different boundary conditions are briefly described, and examples of shear horizontal wave propagation in reinforced concrete are demonstrated. The methodology and performance of parallelization of the EFIT are also discussed.

  11. Confirmation of standard error analysis techniques applied to EXAFS using simulations

    SciTech Connect

    Booth, Corwin H; Hu, Yung-Jin

    2009-12-14

    Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance matrix representation of real parameter errors in typical fitting routines. Despite major advances in EXAFS analysis and in understanding all potential uncertainties, these methods are not routinely applied by all EXAFS users. Consequently, reported parameter errors are not reliable in many EXAFS studies in the literature. This situation has made many EXAFS practitioners leery of conventional error analysis applied to EXAFS data. However, conventional error analysis, if properly applied, can teach us more about our data, and even about the power and limitations of the EXAFS technique. Here, we describe the proper application of conventional error analysis to r-space fitting to EXAFS data. Using simulations, we demonstrate the veracity of this analysis by, for instance, showing that the number of independent data points from Stern's rule is balanced by the degrees of freedom obtained from a χ² statistical analysis. By applying such analysis to real data, we determine the quantitative effect of systematic errors. In short, this study is intended to remind the EXAFS community about the role of fundamental noise distributions in interpreting our final results.
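
    The balance mentioned above rests on Stern's rule for the number of independent data points and on the resulting degrees of freedom of the χ² fit; a commonly quoted form is the following (the additive constant varies between conventions).

    ```latex
    % Stern's rule for the number of independent data points over k-range \Delta k
    % and R-range \Delta R, and the reduced chi-square built on it.
    \begin{align}
      N_{\mathrm{idp}} &\approx \frac{2\,\Delta k\,\Delta R}{\pi} + 2,\\
      \chi^2_\nu &= \frac{\chi^2}{\nu}, \qquad \nu = N_{\mathrm{idp}} - N_{\mathrm{params}}.
    \end{align}
    ```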

  12. Effects of parameter errors in the simulation of transcranial focused ultrasound

    NASA Astrophysics Data System (ADS)

    Vaughan, Timothy E.; Hynynen, Kullervo

    2002-01-01

    Previous numerical simulation work has supported experiments showing that a sharply focused transcranial ultrasound field can be generated for noninvasive therapy and surgery in the brain. The predicted pressure gain and optimal sonicating frequency could be affected by uncertainty in the simulation parameters. We estimate the effects of uncertainty in the speed of sound in the skull and brain, and in CT data that specifies the contour of the skull. The results of our simulations indicate that each of these errors may change the predicted pressure gain by up to a few percent, but the predicted optimal frequency is not significantly affected.

  13. Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators

    NASA Technical Reports Server (NTRS)

    Curtis, H. B.

    1976-01-01

    Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short arc xenon lamp AMO sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. Three types of solar cells studied were a silicon cell, a cadmium sulfide cell and a gallium arsenide cell.
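
    The size of such short-circuit current errors is commonly estimated with a spectral mismatch factor built from the two spectral irradiances and the two cell spectral responses. The sketch below evaluates one common form of that factor on made-up Gaussian spectra; the curve shapes, units, and the sign convention for the resulting error are assumptions for illustration, not the paper's measured data.

    ```python
    import numpy as np

    wl = np.linspace(300, 1200, 901)            # wavelength grid [nm]

    def gauss(x, mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig) ** 2)

    # Placeholder spectral irradiances and spectral responses (arbitrary units)
    E_sun = gauss(wl, 550, 250)                 # stand-in for terrestrial sunlight
    E_sim = gauss(wl, 800, 300)                 # stand-in for an ELH-type simulator
    S_ref = gauss(wl, 850, 200)                 # reference cell (silicon-like)
    S_tst = gauss(wl, 650, 120)                 # test cell (wider-bandgap-like)

    def current(E, S):
        """Short-circuit current proxy: integral of irradiance times response."""
        return np.trapz(E * S, wl)

    # Mismatch factor: ratio of (test/reference) current ratios under the
    # simulator versus under sunlight.
    M = (current(E_sim, S_tst) * current(E_sun, S_ref)) / \
        (current(E_sim, S_ref) * current(E_sun, S_tst))

    # If the simulator level is set so the reference cell reproduces its sunlight
    # calibration value, the test cell's reported short-circuit current is off by:
    print(f"mismatch factor M = {M:.3f}  ->  Isc error ~ {100 * (M - 1):+.1f} %")
    ```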

  14. Simulations and Characteristics of Large Solar Events Propagating Throughout the Heliosphere and Beyond (Invited)

    NASA Astrophysics Data System (ADS)

    Intriligator, D. S.; Sun, W.; Detman, T. R.; Dryer, Ph D., M.; Intriligator, J.; Deehr, C. S.; Webber, W. R.; Gloeckler, G.; Miller, W. D.

    2015-12-01

    Large solar events can have severe adverse global impacts at Earth. These solar events also can propagate throughout the heliosphere and into the interstellar medium. We focus on the July 2012 and Halloween 2003 solar events. We simulate these events starting from the vicinity of the Sun at 2.5 Rs. We compare our three dimensional (3D) time-dependent simulations to available spacecraft (s/c) observations at 1 AU and beyond. Based on the comparisons of the predictions from our simulations with in-situ measurements, we find that the effects of these large solar events can be observed in the outer heliosphere, the heliosheath, and even into the interstellar medium. We use two simulation models. The HAFSS (HAF Source Surface) model is a kinematic model. HHMS-PI (Hybrid Heliospheric Modeling System with Pickup protons) is a numerical magnetohydrodynamic solar wind (SW) simulation model. Both HHMS-PI and HAFSS are ideally suited for these analyses since starting at 2.5 Rs from the Sun they model the slowly evolving background SW and the impulsive, time-dependent events associated with solar activity. Our models naturally reproduce dynamic 3D spatially asymmetric effects observed throughout the heliosphere. Pre-existing SW background conditions have a strong influence on the propagation of shock waves from solar events. Time-dependence is a crucial aspect of interpreting s/c data. We show comparisons of our simulation results with STEREO A, ACE, Ulysses, and Voyager s/c observations.

  15. On the Chemical Basis of Trotter-Suzuki Errors in Quantum Chemistry Simulation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan

    2015-03-01

    Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to Trotterization in terms of the norm of the error operator and analyzed scaling with respect to the number of spin-orbitals. However, we find that these error bounds can be loose by up to sixteen orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground state error and number of spin-orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and to estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.

  16. Chemical basis of Trotter-Suzuki errors in quantum chemistry simulation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan

    2015-02-01

    Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to discretization of the time evolution (known as "Trotterization") in terms of the norm of the error operator and analyzed scaling with respect to the number of spin orbitals. However, we find that these error bounds can be loose by up to 16 orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground-state error and number of spin orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.

  17. Simulation analysis of position error of parabolic trough concentrator mirror installation

    NASA Astrophysics Data System (ADS)

    Tian, Guo-liang; Yan, Bi-xi; Dong, Mingli; Wang, Jun; Sun, Peng

    2015-02-01

    This research simulates the position error introduced when assembling the reflective mirrors of a parabolic trough concentrator. Each reflective mirror is shaped like a parabolic cylinder and is fixed at four points on its back to a special bracket, so that it cannot move; the machining precision of this special bracket is therefore of great importance. We analyze the influence of mounting errors on the mirror's intercept factor in order to guide the required machining precision. Assuming each reflective mirror is rigid, we calculated the intercept factor of the reflector for mounting-point random errors of different standard deviations and compared the simulation results with TRACEPRO. In this way we confirm the feasibility of the algorithm, quantify the effect of different random errors on the light-gathering efficiency, and, on this basis, specify the machining accuracy of the bracket. The simulation results show that when the standard deviation of the mounting points' position error is less than 0.5 mm, the intercept factor of the receiver exceeds 92% for a 60 mm receiver diameter, which satisfies the design requirements.

  18. Two spatial light modulator system for laboratory simulation of random beam propagation in random media.

    PubMed

    Wang, Fei; Toselli, Italo; Korotkova, Olga

    2016-02-10

    An optical system consisting of a laser source and two independent consecutive phase-only spatial light modulators (SLMs) is shown to accurately simulate a generated random beam (first SLM) after interaction with a stationary random medium (second SLM). To illustrate the range of possibilities, a recently introduced class of random optical frames is examined on propagation in free space and several weak turbulent channels with Kolmogorov and non-Kolmogorov statistics. PMID:26906385

  19. Discretization errors in molecular dynamics simulations with deterministic and stochastic thermostats

    SciTech Connect

    Davidchack, Ruslan L.

    2010-12-10

    We investigate the influence of numerical discretization errors on computed averages in a molecular dynamics simulation of TIP4P liquid water at 300 K coupled to different deterministic (Nose-Hoover and Nose-Poincare) and stochastic (Langevin) thermostats. We propose a couple of simple practical approaches to estimating such errors and taking them into account when computing the averages. We show that it is possible to obtain accurate measurements of various system quantities using step sizes of up to 70% of the stability threshold of the integrator, which for the system of TIP4P liquid water at 300 K corresponds to the step size of about 7 fs.

  20. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.

    2006-01-01

    Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reason(s) these systematic errors arise, and why such remedies work, has remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.

  1. Numerical simulation of seismic wave propagation produced by earthquake by using a particle method

    NASA Astrophysics Data System (ADS)

    Takekawa, Junichi; Madariaga, Raul; Mikada, Hitoshi; Goto, Tada-nori

    2012-12-01

    We propose a forward wavefield simulation based on a particle continuum model to simulate seismic waves travelling through a complex subsurface structure with arbitrary topography. The inclusion of arbitrary topography in the numerical simulation is a key issue not only for scientific interests but also for disaster prediction and mitigation purposes. In this study, a Hamiltonian particle method (HPM) is employed. It is easy to introduce traction-free boundary conditions in HPM and to refine the particle density in space. Any model with complex geometry and velocity structure can be simulated by HPM because the connectivity between particles is easily calculated based on their relative positions and the free surfaces are automatically introduced. In addition, the spatial resolution of the simulation can be refined in a simple manner even in a relatively complex velocity structure with arbitrary surface topography. For these reasons, the present method possesses great potential for the simulation of strong ground motions. In this paper, we first investigate the dispersion property of HPM through a plane wave analysis. Next, we simulate surface wave propagation in an elastic half space, and compare the numerical results with analytical solutions. HPM is more dispersive than FDM; however, our local refinement technique improves the accuracy in a simple and effective manner. Next, we introduce an earthquake double-couple source in HPM and compare a simulated seismic waveform obtained with HPM with that computed with FDM to demonstrate the performance of the method. Furthermore, we simulate the surface wave propagation in a model with a surface of arbitrary topographical shape and compare with results computed with FEM. In each simulation, HPM shows good agreement with the reference solutions. Finally, we discuss the calculation costs of HPM including its accuracy.

  2. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
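
    A stripped-down version of this kind of sensor sub-system error propagation, for the vertical coordinate over flat terrain only, is sketched below: the geolocation equation is reduced to z = z_GPS − R·cosθ and first-order variances are summed in quadrature, which already reproduces the tendency of errors to grow with scan angle and altitude. The individual error magnitudes are rough placeholders, not Optech ALTM 3100 specifications.

    ```python
    import numpy as np

    # Rough, illustrative 1-sigma error budget (placeholders, not vendor values)
    sig_gps_z = 0.05                       # GPS vertical error [m]
    sig_imu = np.deg2rad(0.008)            # attitude (roll/pitch) error [rad]
    sig_scan = np.deg2rad(0.005)           # scan-angle encoder error [rad]
    sig_range = 0.03                       # laser ranging error [m]

    H = 1200.0                                      # flying height above flat ground [m]
    theta = np.deg2rad(np.linspace(0.0, 15.0, 4))   # off-nadir angles up to half-scan
    R = H / np.cos(theta)                           # slant range to the ground point

    # First-order propagation for z = z_gps - R*cos(theta):
    #   dz/dR = -cos(theta),  dz/dtheta = R*sin(theta)
    sig_theta = np.hypot(sig_imu, sig_scan)         # combined angular error
    sig_z = np.sqrt(sig_gps_z**2
                    + (np.cos(theta) * sig_range)**2
                    + (R * np.sin(theta) * sig_theta)**2)

    for a, s in zip(np.rad2deg(theta), sig_z):
        print(f"scan angle {a:4.1f} deg : predicted vertical error {100*s:.1f} cm")
    ```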

  3. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    NASA Astrophysics Data System (ADS)

    Li, C.

    2012-07-01

    POS, integrating GPS and INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic errors but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for the INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of the two vanishing points (VX, VY), and how to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
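
    A minimal Monte Carlo sketch of the final step is given below: the two measured vanishing points are perturbed according to their error ellipses (covariance matrices) and the third point is recovered from the orthogonality constraint (v_i − p)·(v_j − p) = −f² for a camera with principal point p and focal length f. All numbers are invented, and the paper's own triangle-based construction, variable weighting, and distortion handling are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    f = 1500.0                         # focal length [pixels] (assumed known)
    p = np.array([960.0, 540.0])       # principal point [pixels]

    # "Measured" vanishing points VX, VY and their error ellipses (covariances)
    vx = np.array([3200.0, 700.0])
    vy = np.array([-1800.0, 500.0])
    cov_x = np.array([[400.0, 120.0], [120.0, 250.0]])
    cov_y = np.array([[900.0, -200.0], [-200.0, 600.0]])

    def third_vp(v1, v2):
        """Solve (v1-p).(v3-p) = -f^2 and (v2-p).(v3-p) = -f^2 for v3."""
        A = np.vstack([v1 - p, v2 - p])
        b = np.array([-f**2, -f**2])
        return p + np.linalg.solve(A, b)

    # Monte Carlo: sample VX, VY from their error ellipses and propagate to VZ
    samples = np.array([third_vp(rng.multivariate_normal(vx, cov_x),
                                 rng.multivariate_normal(vy, cov_y))
                        for _ in range(20000)])

    vz_mean = samples.mean(axis=0)
    vz_cov = np.cov(samples.T)
    eigvals, _ = np.linalg.eigh(vz_cov)
    print("VZ mean:", vz_mean)
    print("VZ error-ellipse semi-axes (1-sigma, px):", np.sqrt(eigvals))
    ```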

  4. GPU-based Monte Carlo simulation for light propagation in complex heterogeneous tissues.

    PubMed

    Ren, Nunu; Liang, Jimin; Qu, Xiaochao; Li, Jianfeng; Lu, Bingjia; Tian, Jie

    2010-03-29

    As the most accurate model for simulating light propagation in heterogeneous tissues, the Monte Carlo (MC) method has been widely used in the field of optical molecular imaging. However, the MC method is time-consuming because it must track the propagation of a large number of photons through the tissues, and the structural complexity of heterogeneous tissues further increases the computational time. In this paper we present a parallel implementation of MC simulation of light propagation in heterogeneous tissues whose surfaces are constructed from different numbers of triangle meshes. On the basis of graphics processing units (GPU), the code is implemented with the compute unified device architecture (CUDA) platform and optimized to reduce the access latency as much as possible by making full use of the constant memory and texture memory on the GPU. We test the implementation on homogeneous and heterogeneous mouse models with a NVIDIA GTX 260 card and a 2.40 GHz Intel Xeon CPU. The experimental results demonstrate the feasibility and efficiency of the parallel MC simulation on GPU. PMID:20389700
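
    For context, the per-photon loop that such GPU codes parallelize is sketched below for a homogeneous slab with isotropic scattering and absorption weighting. It is a plain CPU/NumPy toy, not the authors' CUDA, mesh-based implementation, and the optical properties are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    mu_a, mu_s, d = 0.1, 5.0, 1.0        # absorption/scattering [1/mm], slab thickness [mm]
    mu_t = mu_a + mu_s
    n_photon = 5000

    reflected = transmitted = absorbed = 0.0
    for _ in range(n_photon):
        pos = np.zeros(3)
        dirn = np.array([0.0, 0.0, 1.0])    # launched straight into the slab
        w = 1.0
        while w > 1e-4:                     # residual weight below cut-off is discarded
            step = -np.log(rng.random()) / mu_t          # free path length
            pos = pos + step * dirn
            if pos[2] < 0.0:                             # escaped back out
                reflected += w
                break
            if pos[2] > d:                               # passed through the slab
                transmitted += w
                break
            absorbed += w * mu_a / mu_t                  # deposit part of the weight
            w *= mu_s / mu_t
            # isotropic scattering: new direction drawn uniformly on the sphere
            cos_t = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t**2)
            dirn = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

    print(f"R = {reflected/n_photon:.3f}, T = {transmitted/n_photon:.3f}, "
          f"A = {absorbed/n_photon:.3f}")
    ```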

  5. Numerical simulation of elastic wave propagation in isotropic media considering material and geometrical nonlinearities

    NASA Astrophysics Data System (ADS)

    Rauter, N.; Lammering, R.

    2015-04-01

    In order to detect micro-structural damage accurately, new methods are currently being developed. A promising tool is the generation of higher harmonic wave modes caused by nonlinear Lamb wave propagation in plate-like structures. Because the amplitudes are very small, a cumulative effect is used. To get a better overview of this inspection method, numerical simulations are essential. Previous studies have developed the analytical description of this phenomenon, which is based on the five-constant nonlinear elastic theory, and the analytical solution has been confirmed by numerical simulations. In this work, the nonlinear cumulative wave propagation is first simulated and analyzed considering micro-structural cracks in thin linear elastic isotropic plates. It is shown that there is a cumulative effect for the S1-S2 mode pair, and the sensitivity of the relative acoustic nonlinearity parameter to such damage is validated. An influence of the crack size and orientation on the nonlinear wave propagation behavior is also observed. In a second step, the micro-structural cracks are replaced by a nonlinear material model: instead of the five-constant nonlinear elastic theory, hyperelastic material models that are implemented in commonly used FEM software are used to simulate the cumulative effect of higher harmonic Lamb wave generation. The cumulative effect as well as the different nonlinear behavior of the S1-S2 and S2-S4 mode pairs are found using these hyperelastic material models. It is shown that both numerical simulations, which take into account micro-structural cracks on the one hand and nonlinear material on the other, lead to comparable results. Furthermore, in comparison to the five-constant nonlinear elastic theory, well-established hyperelastic material models such as Neo-Hooke and Mooney-Rivlin are a suitable alternative for simulating cumulative higher harmonic generation.
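
    The relative acoustic nonlinearity parameter referred to here is commonly estimated from the fundamental and second-harmonic spectral amplitudes as β' ∝ A2/A1². A toy extraction from a synthetic waveform is sketched below; the signal, distortion level, and window choice are invented for illustration.

    ```python
    import numpy as np

    fs, f0, n = 10e6, 100e3, 4096            # sampling rate, excitation frequency, samples
    t = np.arange(n) / fs
    a1_true, beta_like = 1.0, 0.02

    # Synthetic received waveform: fundamental plus a weak second harmonic plus noise
    x = a1_true * np.sin(2 * np.pi * f0 * t) \
        + beta_like * a1_true**2 * np.sin(2 * np.pi * 2 * f0 * t) \
        + 0.001 * np.random.default_rng(3).standard_normal(n)

    win = np.hanning(n)
    spec = np.abs(np.fft.rfft(x * win)) / np.sum(win) * 2.0   # approximate amplitudes
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    def peak_amp(f_target, half_width=5e3):
        """Largest spectral amplitude within a narrow band around f_target."""
        band = (freqs > f_target - half_width) & (freqs < f_target + half_width)
        return spec[band].max()

    A1 = peak_amp(f0)
    A2 = peak_amp(2 * f0)
    beta_rel = A2 / A1**2          # relative nonlinearity parameter (up to constants)
    print(f"A1 = {A1:.4f}, A2 = {A2:.5f}, beta' ~ A2/A1^2 = {beta_rel:.4f}")
    ```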

  6. Numerical simulation and experimental validation of Lamb wave propagation behavior in composite plates

    NASA Astrophysics Data System (ADS)

    Kim, Sungwon; Uprety, Bibhisha; Mathews, V. John; Adams, Daniel O.

    2015-03-01

    Structural Health Monitoring (SHM) based on Acoustic Emission (AE) is dependent on both the sensors to detect an impact event as well as an algorithm to determine the impact location. The propagation of Lamb waves produced by an impact event in thin composite structures is affected by several unique aspects including material anisotropy, ply orientations, and geometric discontinuities within the structure. The development of accurate numerical models of Lamb wave propagation has important benefits towards the development of AE-based SHM systems for impact location estimation. Currently, many impact location algorithms utilize the time of arrival or velocities of Lamb waves. Therefore the numerical prediction of characteristic wave velocities is of great interest. Additionally, the propagation of the initial symmetric (S0) and asymmetric (A0) wave modes is important, as these wave modes are used for time of arrival estimation. In this investigation, finite element analyses were performed to investigate aspects of Lamb wave propagation in composite plates with active signal excitation. A comparative evaluation of two three-dimensional modeling approaches was performed, with emphasis placed on the propagation and velocity of both the S0 and A0 wave modes. Results from numerical simulations are compared to experimental results obtained from active AE testing. Of particular interest is the directional dependence of Lamb waves in quasi-isotropic carbon/epoxy composite plates. Numerical and experimental results suggest that although a quasi-isotropic composite plate may have the same effective elastic modulus in all in-plane directions, the Lamb wave velocity may have some directional dependence. Further numerical analyses were performed to investigate Lamb wave propagation associated with circular cutouts in composite plates.

  7. Experimental study on propagation of fault slip along a simulated rock fault

    NASA Astrophysics Data System (ADS)

    Mizoguchi, K.

    2015-12-01

    Around pre-existing geological faults in the crust, we often observe an off-fault damage zone containing many fractures at scales from ~ mm to ~ m, whose density typically increases with proximity to the fault. One of the fracture formation processes is considered to be dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks with slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surface simulates a fault 35 cm in length and 1 cm in width. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied to the fault plane at an angle of 60° to the loading direction. When σ is 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded with a 1 MHz sampling rate and 16-bit resolution. When σ = 4.8 kN is applied, we observed fault slip events in which slip nucleates spontaneously in a subsection of the fault and propagates to the whole fault. However, the propagation speed is about 1.2 km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic rupture events. More effort is needed to reproduce earthquake-like slip events in the experiments. This work is supported by JSPS KAKENHI (26870912).

  8. 3D simulation of seismic wave propagation around a tunnel using the spectral element method

    NASA Astrophysics Data System (ADS)

    Lambrecht, L.; Friederich, W.

    2010-05-01

    We model seismic wave propagation in the environment of a tunnel for later application to reconnaissance. Elastic wave propagation can be simulated by different numerical techniques such as finite differences and pseudospectral methods; their disadvantages are a lack of accuracy on free surfaces, numerical dispersion, and inflexibility of the mesh. Here we use the software package SPECFEM3D_SESAME in an svn development version, which is based on the spectral element method (SEM) and can handle complex mesh geometries. A weak form of the elastic wave equation leads to a linear system of equations with a diagonal mass matrix, in which the free surface boundary of the tunnel can be treated under realistic conditions and which can be effectively implemented in parallel. We have designed a 3D external mesh including a tunnel and realistic features such as layers and holes to simulate elastic wave propagation in the zone around the tunnel. The source acts at the tunnel surface so that we excite Rayleigh waves which propagate to the front face of the tunnel. A conversion takes place and a high-amplitude S-wave is radiated in the direction of the tunnel axis. Reflections from perturbations in front of the tunnel can be measured by receivers placed on the tunnel face. For a shallow tunnel the land surface has a strong influence on the wave propagation; by adding receivers at this surface we intend to improve the prediction. The results show that the SEM is well suited to handling the complex geometry of the model and, in particular, its free surfaces.

  9. Simulation of two-dimensional propagation and scattering of ultrasonic waves on personal computers

    NASA Astrophysics Data System (ADS)

    Yim, Hyunjune; Choi, Yongseok

    2001-04-01

    Several problems of two-dimensional propagation and scattering of ultrasonic waves are simulated and visualized by using a program based on the mass-spring lattice model. The problems are related to reflection, refraction, and diffraction of ultrasonic waves. It is found that all numerical results are in good qualitative agreement with the wave mechanics. Features incorporated into the updated program are explained. Though the present state is far from our ultimate goal to develop a complete simulator of ultrasonic testing, the developed software is useful for educational purposes even at the present stage of development.

  10. Simulation of Transrib HIFU Propagation and the Strategy of Phased-array Activation

    NASA Astrophysics Data System (ADS)

    Zhou, Yufeng; Wang, Mingjun

    Liver ablation is challenging in high-intensity focused ultrasound (HIFU) because of the presence of ribs and the great inhomogeneity of the multi-layer tissue. In this study, the angular spectrum approach (ASA) is used to model wave propagation from a phased-array HIFU transducer, and diffraction, attenuation, and nonlinearity are accounted for by means of a second-order operator-splitting method. The bioheat equation is used to simulate the subsequent temperature elevation and lesion formation for a shifted focus and for multiple foci. In summary, our approach can simulate the performance of phased-array HIFU in the clinic and thus help develop an appropriate treatment plan.
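
    The linear-diffraction part of such a scheme is the classical angular spectrum step: Fourier transform the source-plane pressure, multiply by the plane-wave propagator exp(i·k_z·z), and inverse transform. A minimal monochromatic, single-step sketch is given below for a homogeneous lossless medium with a made-up focused aperture; the attenuation and nonlinearity sub-steps of the full operator-splitting scheme are omitted.

    ```python
    import numpy as np

    c0, f0 = 1500.0, 1.0e6                 # sound speed [m/s], frequency [Hz]
    k = 2 * np.pi * f0 / c0                # wavenumber
    n, dx = 512, 0.3e-3                    # grid points and spacing [m]
    z = 0.05                               # propagation distance [m]

    # Source plane: a circular aperture with a focusing phase (toy transducer)
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.hypot(X, Y)
    aperture = (r < 0.015).astype(float)                  # 30 mm diameter
    focus = 0.05
    p0 = aperture * np.exp(-1j * k * (np.sqrt(r**2 + focus**2) - focus))

    # Angular spectrum propagation: P(kx, ky, z) = P(kx, ky, 0) * exp(i*kz*z)
    kx = 2 * np.pi * np.fft.fftfreq(n, dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent parts decay
    p_z = np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))

    print("on-axis |p| gain at the focal plane:",
          np.abs(p_z[n // 2, n // 2]) / np.abs(p0[aperture > 0]).mean())
    ```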

  11. An investigation of the information propagation and entropy transport aspects of Stirling machine numerical simulation

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1992-01-01

    Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.

  12. Propagation of Pi2 pulsations through the braking region in global MHD simulations

    NASA Astrophysics Data System (ADS)

    Ream, J. B.; Walker, R. J.; Ashour-Abdalla, M.; El-Alaoui, M.; Wiltberger, M.; Kivelson, M. G.; Goldstein, M. L.

    2015-12-01

    We investigate the propagation of Pi2 period pulsations from their origin in the plasma sheet through the braking region, the region where the fast flows are slowed as they approach the inner edge of the plasma sheet. Our approach is to use both the University of California, Los Angeles (UCLA) and Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamic (MHD) computer codes to simulate the Earth's magnetosphere during a substorm that occurred on 14 September 2004 when Pi2 pulsations were observed. We use two different MHD models in order to test the robustness of our conclusions about Pi2. The simulations are then compared with ground-based and satellite data. We find that the propagation of the pulsations in the simulations, especially through the braking region, depends strongly on the ionospheric models used at the inner boundary of the MHD models. With respect to typical observed values, the modeled conductances are high in the UCLA model and low in the LFM model. The different conductances affect the flows, producing stronger line tying that slows the flow in the braking region more in the UCLA model than in the LFM model. Therefore, perturbations are able to propagate much more freely into the inner magnetosphere in the LFM results. However, in both models Pi2 period perturbations travel with the dipolarization front (DF) that forms at the earthward edge of the flow channel, but as the DF slows in the braking region, -8≤x≤-6 RE, the Pi2 period perturbations begin to travel ahead of it into the inner magnetosphere. This indicates that the flow channels generate compressional waves with periods that fall within the Pi2 range and that, as the flows themselves are stopped in the braking region, the compressional wave continues to propagate into the inner magnetosphere.

  13. Simulations of laser propagation and ionization in l'OASIS experiments

    SciTech Connect

    Dimitrov, D.A.; Bruhwiler, D.L.; Leemans, W.; Esarey, E.; Catravas, P.; Toth, C.; Shadwick, B.; Cary, J.R.; Giacone, R.

    2002-06-30

    We have conducted particle-in-cell simulations of laser pulse propagation through neutral He, including the effects of tunneling ionization, within the parameter regime of the l'OASIS experiments [1,2] at the Lawrence Berkeley National Laboratory (LBNL). The simulations show the theoretically predicted [3] blue shifting of the laser frequency at the leading edge of the pulse. The observed blue shifting is in good agreement with the experimental data. These results indicate that such computations can be used to accurately simulate a number of important effects related to tunneling ionization for laser-plasma accelerator concepts, such as steepening due to ionization-induced pump depletion, which can seed and enhance instabilities. Our simulations show self-modulation occurring earlier when tunneling ionization is included than for a pre-ionized plasma.

  14. One-way approximation for the simulation of weak shock wave propagation in atmospheric flows.

    PubMed

    Gallin, Louis-Jonardan; Rénier, Mathieu; Gaudard, Eric; Farges, Thomas; Marchiano, Régis; Coulouvrat, François

    2014-05-01

    A numerical scheme is developed to simulate the propagation of weak acoustic shock waves in the atmosphere with no absorption. It generalizes the method previously developed for a heterogeneous medium [Dagrau, Rénier, Marchiano, and Coulouvrat, J. Acoust. Soc. Am. 130, 20-32 (2011)] to the case of a moving medium. It is based on an approximate scalar wave equation for potential, rewritten in a moving time frame, and separated into three parts: (i) the linear wave equation in a homogeneous and quiescent medium, (ii) the effects of atmospheric winds and of density and speed of sound heterogeneities, and (iii) nonlinearities. Each effect is then solved separately by an adapted method: angular spectrum for the wave equation, finite differences for the flow and heterogeneity corrections, and analytical method in time domain for nonlinearities. To keep a one-way formulation, only forward propagating waves are kept in the angular spectrum part, while a wide-angle parabolic approximation is performed on the correction terms. The numerical process is validated in the case of guided modal propagation with a shear flow. It is then applied to the case of blast wave propagation within a boundary layer flow over a flat and rigid ground. PMID:24815240

  15. Testing the Propagating Fluctuations Model with a Long, Global Accretion Disk Simulation

    NASA Astrophysics Data System (ADS)

    Hogg, J. Drew; Reynolds, Christopher S.

    2016-07-01

    The broadband variability of many accreting systems displays characteristic structures; log-normal flux distributions, root-mean square (rms)-flux relations, and long inter-band lags. These characteristics are usually interpreted as inward propagating fluctuations of the mass accretion rate in an accretion disk driven by stochasticity of the angular momentum transport mechanism. We present the first analysis of propagating fluctuations in a long-duration, high-resolution, global three-dimensional magnetohydrodynamic (MHD) simulation of a geometrically thin (h/r ≈ 0.1) accretion disk around a black hole. While the dynamical-timescale turbulent fluctuations in the Maxwell stresses are too rapid to drive radially coherent fluctuations in the accretion rate, we find that the low-frequency quasi-periodic dynamo action introduces low-frequency fluctuations in the Maxwell stresses, which then drive the propagating fluctuations. Examining both the mass accretion rate and emission proxies, we recover log-normality, linear rms-flux relations, and radial coherence that would produce inter-band lags. Hence, we successfully relate and connect the phenomenology of propagating fluctuations to modern MHD accretion disk theory.
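
    The rms-flux analysis mentioned above is straightforward to reproduce on any light curve: divide it into segments, compute each segment's mean flux and rms, and bin the rms against the mean. The sketch below applies that recipe to a synthetic exponentiated red-noise curve (which is log-normal and shows a roughly linear rms-flux relation) rather than to the simulation output.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic light curve: exponentiated AR(1) red noise, a crude stand-in for
    # multiplicative "propagating fluctuations" in the accretion rate.
    n, phi = 100_000, 0.995
    g = np.empty(n)
    g[0] = 0.0
    eps = 0.05 * rng.standard_normal(n)
    for i in range(1, n):
        g[i] = phi * g[i - 1] + eps[i]
    flux = np.exp(g)                       # log-normal by construction

    # Segment the light curve, then compute mean flux and rms per segment
    seg_len = 500
    segs = flux[: n - n % seg_len].reshape(-1, seg_len)
    seg_mean = segs.mean(axis=1)
    seg_rms = segs.std(axis=1)

    # Bin rms against mean flux to expose the (roughly linear) rms-flux relation
    order = np.argsort(seg_mean)
    for b in np.array_split(order, 10):
        print(f"<flux> = {seg_mean[b].mean():7.3f}   rms = {seg_rms[b].mean():7.3f}")
    ```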

  16. Acoustic pulse propagation in an urban environment using a three-dimensional numerical simulation.

    PubMed

    Mehra, Ravish; Raghuvanshi, Nikunj; Chandak, Anish; Albert, Donald G; Wilson, D Keith; Manocha, Dinesh

    2014-06-01

    Acoustic pulse propagation in outdoor urban environments is a physically complex phenomenon due to the predominance of reflection, diffraction, and scattering. This is especially true in non-line-of-sight cases, where edge diffraction and high-order scattering are major components of acoustic energy transport. Past work by Albert and Liu [J. Acoust. Soc. Am. 127, 1335-1346 (2010)] has shown that many of these effects can be captured using a two-dimensional finite-difference time-domain method, which was compared to the measured data recorded in an army training village. In this paper, a full three-dimensional analysis of acoustic pulse propagation is presented. This analysis is enabled by the adaptive rectangular decomposition method by Raghuvanshi, Narain and Lin [IEEE Trans. Visual. Comput. Graphics 15, 789-801 (2009)], which models sound propagation in the same scene in three dimensions. The simulation is run at a much higher usable bandwidth (nearly 450 Hz) and took only a few minutes on a desktop computer. It is shown that a three-dimensional solution provides better agreement with measured data than two-dimensional modeling, especially in cases where propagation over rooftops is important. In general, the predicted acoustic responses match well with measured results for the source/sensor locations. PMID:24907788

  17. Accelerating forward and adjoint simulations of seismic wave propagation on large GPU-clusters

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Rietmann, M.; Charles, J.; Messmer, P.; Komatitsch, D.; Schenk, O.; Tromp, J.

    2012-12-01

    In seismic tomography, waveform inversions require accurate simulations of seismic wave propagation in complex media.The current versions of our spectral-element method (SEM) packages, the local-scale code SPECFEM3D and the global-scale code SPECFEM3D_GLOBE, are widely used open-source community codes which simulate seismic wave propagation for local-, regional- and global-scale applications. These numerical simulations compute highly accurate seismic wavefields, accounting for fully 3D Earth models. However, code performance often governs whether seismic inversions become feasible or remain elusive. We report here on extending these high-order finite-element packages to further exploit graphic processing units (GPUs) and perform numerical simulations of seismic wave propagation on large GPU clusters. These enhanced packages can be readily run either on multi-core CPUs only or together with many-core GPU acceleration devices. One of the challenges in parallelizing finite element codes is the potential for race conditions during the assembly phase. We therefore investigated different methods such as mesh coloring or atomic updates on the GPU. In order to achieve strong scaling, we needed to ensure good overlap of data motion at all levels, including internode and host-accelerator transfers. These new MPI/CUDA solvers exhibit excellent scalability and achieve speedup on a node-to-node basis over the carefully tuned equivalent multi-core MPI solver. We present case studies run on a Cray XK6 GPU architecture up to 896 nodes to demonstrate the performance of both the forward and adjoint functionality of the code packages. Running simulations on such dedicated GPU clusters further reduces computation times and pushes seismic inversions into a new, higher frequency realm.
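
    The assembly race condition mentioned above is easy to illustrate outside CUDA: when several elements contribute to the same global degree of freedom, a naive scatter write loses contributions, whereas an atomic-style accumulation keeps them all. The NumPy sketch below mimics the two behaviours; it is an analogy for the GPU assembly problem, not SPECFEM3D code.

    ```python
    import numpy as np

    # Two "spectral elements" sharing global node 2 (toy connectivity and values)
    connectivity = np.array([[0, 1, 2],
                             [2, 3, 4]])
    element_contrib = np.ones_like(connectivity, dtype=float)

    n_glob = 5
    idx = connectivity.ravel()
    val = element_contrib.ravel()

    # Naive scatter: with duplicate indices only one contribution per node "wins",
    # which is what unsynchronized parallel writes can do on a GPU.
    naive = np.zeros(n_glob)
    naive[idx] += val

    # Correct accumulation: np.add.at applies every contribution, the serial
    # analogue of atomicAdd (or of assembling colour by colour so that no two
    # threads touch the same node at once).
    atomic_like = np.zeros(n_glob)
    np.add.at(atomic_like, idx, val)

    print("naive scatter :", naive)         # shared node gets 1.0 (a lost update)
    print("accumulated   :", atomic_like)   # shared node gets 2.0 as it should
    ```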

  18. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted mainly by AMD graphic cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated a code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  19. GPU-based simulation of optical propagation through turbulence for active and passive imaging

    NASA Astrophysics Data System (ADS)

    Monnier, Goulven; Duval, François-Régis; Amram, Solène

    2012-10-01

    The usual numerical approach for accurate, spatially resolved simulation of optical propagation through atmospheric turbulence involves Fresnel diffraction through a series of phase screens. When used to reproduce instantaneous laser beam intensity distribution on a target, this numerical scheme may get quite expensive in terms of CPU and memory resources, due to the many constraints to be fulfilled to ensure the validity of the resulting quantities. In particular, computational requirements grow rapidly with higher-divergence beam, longer propagation distance, stronger turbulence and larger turbulence outer scale. Our team recently developed IMOTEP, a software which demonstrates the benefits of using the computational power of the Graphics Processing Units (GPU) for both accelerating such simulations and increasing the range of accessible simulated conditions. Simulating explicitly the instantaneous effects of turbulence on the backscattered optical wave is even more challenging when the isoplanatic or totally anisoplanatic approximations are not applicable. Two methods accounting for anisoplanatic effects have been implemented in IMOTEP. The first one, dedicated to narrow beams and non-imaging applications, involves exact propagation of spherical waves for an array of isoplanatic sources in the laser spot. The second one, designed for active or passive imaging applications, involves precomputation of the DSP of parameters describing the instantaneous PSF. PSF anisoplanatic statistics are "numerically measured" from numerous simulated realizations. Once the DSP are computed and stored for given conditions (with no intrinsic limitation on turbulence strength), which typically takes 5 to 30 minutes on a recent GPU, output blurred and distorted images are easily and quickly generated. The paper gives an overview of the software with its physical and numerical backgrounds. The approach developed for generating anisoplanatic instantaneous images is emphasized.
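
    The building block of such phase-screen simulations is the synthesis of a random screen with the turbulence power spectrum by FFT-filtering white noise. A minimal von Kármán screen generator following the common FFT recipe is sketched below; the grid size, r0, and outer scale are arbitrary, and the subharmonic (low-frequency) corrections used in production codes are omitted, so the largest scales are under-represented.

    ```python
    import numpy as np

    def ft_phase_screen(r0, N, dx, L0=100.0, rng=None):
        """FFT-based von Karman phase screen [rad]; subharmonics omitted."""
        rng = rng or np.random.default_rng()
        df = 1.0 / (N * dx)                          # frequency spacing [cycles/m]
        fx = np.fft.fftfreq(N, dx)                   # frequencies in FFT order
        FX, FY = np.meshgrid(fx, fx, indexing="ij")
        f2 = FX**2 + FY**2
        f0 = 1.0 / L0                                # outer-scale roll-off
        # von Karman phase PSD: 0.023 r0^(-5/3) (f^2 + f0^2)^(-11/6)
        psd = 0.023 * r0 ** (-5.0 / 3.0) / (f2 + f0**2) ** (11.0 / 6.0)
        psd[0, 0] = 0.0                              # remove piston
        cn = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) \
             * np.sqrt(psd) * df
        return np.real(np.fft.ifft2(cn)) * N**2      # screen in radians

    screen = ft_phase_screen(r0=0.1, N=512, dx=0.01, rng=np.random.default_rng(5))
    print("phase screen std [rad]:", screen.std())
    ```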

  20. Errors in the Simulated Heat Budget of CGCMs in the Eastern Part of the Tropical Oceans

    NASA Astrophysics Data System (ADS)

    Hazel, J.; Masarik, M. T.; Mechoso, C. R.; Small, R. J.; Curchitser, E. N.

    2014-12-01

    The simulation of the tropical climate by coupled atmosphere-ocean general circulation models (CGCMs) shows severe warm biases in the sea-surface temperature (SST) field of the southeastern part of the Pacific and the Atlantic (SEP and SEA, respectively). The errors are strongest near the land mass, with a broad plume extending west. Also, the equatorial cold tongue is too strong and extends too far to the west. The simulated precipitation field generally shows a persistent double Inter-tropical Convergence Zone (ITCZ). Tremendous effort has been made to improve CGCM performance in general and to address these tropical errors in particular. The present paper starts by comparing Taylor diagrams of the SST errors in the SEP and SEA for CGCMs participating in the Coupled Model Intercomparison Project phases 3 and 5 (CMIP3 and CMIP5, respectively). Some improvement is noted in models that perform poorly in CMIP3, but the overall performance is broadly similar in the two intercomparison projects. We explore the hypothesis that an improved representation of atmosphere-ocean interaction involving stratocumulus cloud decks and oceanic upwelling is essential to reduce errors in the SEP and SEA. To estimate the error contribution by clouds and upwelling, we examine the upper-ocean surface heat flux budget. The resolution of the oceanic component of the CGCMs in both CMIP3 and CMIP5 is too coarse for a realistic representation of upwelling. Therefore, we also examine simulations by the Nested Regional Climate Model (nRCM) system, which is a CGCM with a very high-resolution regional model embedded in coastal regions. The nRCM consists of the Community Atmosphere Model (CAM, run at 1°) coupled to the global Parallel Ocean Program Model (POP, run at 1°) to which the Regional Ocean Modeling System (ROMS, run at 5-10 km) is nested in selected coastal regions.

  1. Accuracy of the water column approximation in numerically simulating propagation of teleseismic PP waves and Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian

    2016-06-01

    Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modeling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5% and 9 ° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10% in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1% at periods greater than 30 s in most oceanic regions, but the error is up to 2% for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.

  2. Accuracy of the water column approximation in numerically simulating propagation of teleseismic PP waves and Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian

    2016-08-01

    Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modelling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5 per cent and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10 per cent in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1 per cent at periods greater than 30 s in most oceanic regions, but the error is up to 2 per cent for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.

  3. Horizon sensors attitude errors simulation for the Brazilian Remote Sensing Satellite

    NASA Astrophysics Data System (ADS)

    Vicente de Brum, Antonio Gil; Ricci, Mario Cesar

    Remote sensing, meteorological, and other types of satellites require increasingly accurate Earth-referenced positioning. From past experience it is well known that the thermal horizon in the 15 micrometer band allows determination of the local vertical at any time. This detection is done by horizon sensors, which are accurate instruments for Earth-referenced attitude sensing and control whose performance is limited by systematic and random errors amounting to about 0.5 deg. Using the computer programs OBLATE, SEASON, ELECTRO and MISALIGN, developed at INPE to simulate four distinct facets of conical scanning horizon sensors, attitude errors are obtained for the Brazilian Remote Sensing Satellite (the first one, SSR-1, is scheduled to fly in 1996). These errors are due to the oblate shape of the Earth, seasonal and latitudinal variations of the 15 micrometer infrared radiation, electronic processing time delay, and misalignment of the sensor axes. The sensor-related attitude errors are thus properly quantified in this work and will, together with other systematic errors (for instance, ambient temperature variation), take part in the pre-launch analysis of the Brazilian Remote Sensing Satellite with respect to horizon sensor performance.

  4. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael

    2009-01-01

    Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
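
    The ABFT check for matrix multiplication reported here follows the classic checksum idea: extend A with a column-checksum row and B with a row-checksum column, multiply, and verify that the product's checksums remain consistent. A minimal sketch with an injected fault is shown below; it illustrates the scheme only and is not the flight software.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def abft_matmul(A, B, tol=1e-8):
        """Checksum-protected matrix multiply (classic ABFT scheme)."""
        Ac = np.vstack([A, A.sum(axis=0)])                   # column-checksum row
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # row-checksum column
        C = Ac @ Br                                          # checksum-augmented product

        # --- simulate a radiation-induced corruption of one result element ---
        C[2, 3] += 1e3

        body = C[:-1, :-1]
        col_check = np.abs(body.sum(axis=0) - C[-1, :-1])    # flags the bad column
        row_check = np.abs(body.sum(axis=1) - C[:-1, -1])    # flags the bad row
        bad_rows = np.where(row_check > tol)[0]
        bad_cols = np.where(col_check > tol)[0]
        return body, bad_rows, bad_cols

    A = rng.standard_normal((6, 5))
    B = rng.standard_normal((5, 4))
    C, bad_rows, bad_cols = abft_matmul(A, B)
    print("corrupted element detected at row", bad_rows, "col", bad_cols)
    print("max discrepancy vs. clean product:", np.abs(C - A @ B).max())
    ```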

  5. An Approach to Assess Delamination Propagation Simulation Capabilities in Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2008-01-01

    An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis results and the benchmark results were compared and good agreements could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.

  6. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  7. Cross Sections, Error Bars and Event Distributions in Simulated DRELL-YAN Azimuthal Asymmetry Measurements

    NASA Astrophysics Data System (ADS)

    Bianconi, A.

    A short summary of results of recent simulations of (un)polarized Drell-Yan experiments is presented here. Dilepton production in pp, p̄p, π-p and π+p scattering is considered, for several kinematics corresponding to interesting regions for experiments at GSI, CERN-Compass and RHIC. A table of integrated cross sections, and a set of estimated error bars on measurements of azimuthal asymmetries (associated with the collection of 5, 20 or 80 Kevents), are reported.

  8. Outdoor sound propagation effects on aircraft detection through passive phased-array acoustic antennas: 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Roselli, Ivan; Testa, Pierluigi; Caronna, Gaetano; Barbagelata, Andrea; Ferrando, Alessandro

    2005-09-01

    The present paper describes some of the main acoustic issues connected with the SAFE-AIRPORT European Project for the development of an innovative acoustic system for the improvement of air traffic management. The system sensors are two rotating passive phased-array antennas with 512 microphones each. In particular, this study focused on the propagation of sound waves in the atmosphere and its influence on the system's detection efficiency. The effects of air temperature and wind gradients on aircraft tracking were analyzed. Algorithms were implemented to correct output-data errors in aircraft location due to acoustic ray deviation in a 3D environment. Numerical simulations were performed using several temperature and wind profiles corresponding to common and critical meteorological conditions. Aircraft location was predicted through 3D acoustic ray triangulation methods, taking into account the variation in the speed of sound waves along the ray paths toward each antenna. The system range was also assessed considering aircraft noise spectral emission. Since the speed of common airplanes is not negligible with respect to the speed of sound during typical airport operations such as takeoff and approach, the influence of the Doppler effect on range calculation was also considered and the most critical scenarios were simulated.

  9. Simulation of Lamb wave propagation for the characterization of complex structures.

    PubMed

    Agostini, Valentina; Delsanto, Pier Paolo; Genesio, Ivan; Olivero, Dimitri

    2003-04-01

    Reliable numerical simulation techniques represent a very valuable tool for analysis. For this purpose we investigated the applicability of the local interaction simulation approach (LISA) to the study of the propagation of Lamb waves in complex structures. The LISA allows very fast and flexible simulations, especially in conjunction with parallel processing, and it is particularly useful for complex (heterogeneous, anisotropic, attenuative, and/or nonlinear) media. We present simulations performed on a glass fiber reinforced plate, initially undamaged and then with a hole passing through its thickness (passing-by hole). In order to give a validation of the method, the results are compared with experimental data. Then we analyze the interaction of Lamb waves with notches, delaminations, and complex structures. In the first case the discontinuity due to a notch generates mode conversion, which may be used to predict the defect shape and size. In the case of a single delamination, the most striking "signature" is a time-shift delay, which may be observed in the temporal evolution of the signal recorded by a receiver. We also present some results obtained on a geometrically complex structure. Due to the inherent discontinuities, a wealth of propagation mechanisms are observed, which can be exploited for the purpose of quantitative nondestructive evaluation (NDE). PMID:12744400

  10. An error model for GCM precipitation and temperature simulations for future (warmer) climate

    NASA Astrophysics Data System (ADS)

    Sivakumar, B.; Woldemeskel, F. M.; Sharma, A.; Mehrotra, R.

    2013-12-01

    Water resources assessments for future climates require meaningful simulations of likely precipitation and evaporation for simulation of flow and derived quantities of interest. Future climate projections using Global Climate Models (GCMs) are commonly used to assess the impacts of global climate change on hydrology and water resources. The reliability of such assessments, however, is questionable due to the various uncertainties present in GCM simulations, such as those associated with model structure, scenario, and initial condition. We present here a new basis for assigning a measure of uncertainty to GCM simulations of precipitation and temperature. Unlike other alternatives which assess overall GCM uncertainty, our approach leads to a unique measure of uncertainty in the variable of interest for each simulated value in space and time. We refer to this as an error model of GCM precipitation and temperature simulations. This is done through estimation of an uncertainty metric, called the square root of error variance (SREV), and it involves the following steps: (1) interpolating GCM outputs to a common spatial grid; (2) converting the interpolated GCM outputs to percentiles; (3) estimating SREV for each percentile; and (4) transforming SREV estimates to time series. The SREV is derived taking into account the model structure, emission scenario, and initial condition uncertainties of the simulated value, the full error model being formulated using six GCMs (from the Coupled Model Inter-comparison Project phase 3 (CMIP3) multi-model dataset), three emission scenarios (B1, A1B and A2), and three ensemble runs, with a total of 54 time series representing the period 2001 to 2099. The results reveal that model uncertainty is the main source of error, followed by scenario uncertainty. For precipitation, total uncertainty is larger in the tropical region close to the equator and reduces towards the north and south poles. The opposite is true for temperature where
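
    A minimal numerical sketch of the four steps is given below; it uses a hypothetical, randomly generated 54-member ensemble for a single grid cell and takes the ensemble spread within each percentile bin as a simple stand-in for the SREV estimator (the paper's exact formulation is not reproduced here):

      import numpy as np

      # Hypothetical ensemble: 54 GCM/scenario/run series (rows) of monthly
      # precipitation for one grid cell, 2001-2099 (columns), assumed already
      # interpolated to a common spatial grid (step 1).
      rng = np.random.default_rng(0)
      ensemble = rng.gamma(shape=2.0, scale=40.0, size=(54, 1188))

      # Step 2: convert each simulated value to a percentile within its own series.
      ranks = ensemble.argsort(axis=1).argsort(axis=1)
      percentiles = (ranks + 0.5) / ensemble.shape[1]

      # Step 3: estimate an error measure per percentile bin; the spread of the
      # ensemble values falling in each bin stands in for SREV here.
      bins = np.linspace(0.0, 1.0, 21)
      srev_per_bin = np.array([
          ensemble[(percentiles >= lo) & (percentiles < hi)].std()
          for lo, hi in zip(bins[:-1], bins[1:])
      ])

      # Step 4: map the per-percentile error back onto one series as an SREV time series.
      member = 0
      bin_index = np.clip(np.digitize(percentiles[member], bins) - 1, 0, len(srev_per_bin) - 1)
      srev_series = srev_per_bin[bin_index]
      print(srev_series[:5])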

  11. Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading

    NASA Technical Reports Server (NTRS)

    Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.

    1998-01-01

    The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
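
    A minimal sketch of the mission-mix sampling described above is shown below; the mission names, proportions, and mean occurrence rates are hypothetical and not taken from the report:

      import numpy as np

      rng = np.random.default_rng(42)
      missions = ["training", "transport", "attack"]
      mix_probability = [0.5, 0.3, 0.2]                            # assumed proportions of missions flown
      mean_repeats = {"training": 12, "transport": 8, "attack": 4} # assumed mean occurrence rates

      def sample_mission_history(n_blocks=50):
          """Return one realization of a mission history as (mission, repeats) pairs."""
          history = []
          for _ in range(n_blocks):
              mission = rng.choice(missions, p=mix_probability)    # randomly selected mission
              repeats = rng.poisson(mean_repeats[mission])         # Poisson-distributed repeats
              history.append((mission, int(repeats)))
          return history

      # Multiple realizations of this kind would supply the stress, temperature, and
      # time points for the fracture mechanics calculations and the life distribution.
      realizations = [sample_mission_history() for _ in range(1000)]
      print(realizations[0][:3])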

  12. Numerical simulations of large earthquakes: Dynamic rupture propagation on heterogeneous faults

    USGS Publications Warehouse

    Harris, R.A.

    2004-01-01

    Our current conceptions of earthquake rupture dynamics, especially for large earthquakes, require knowledge of the geometry of the faults involved in the rupture, the material properties of the rocks surrounding the faults, the initial state of stress on the faults, and a constitutive formulation that determines when the faults can slip. In numerical simulations each of these factors appears to play a significant role in rupture propagation, at the kilometer length scale. Observational evidence of the earth indicates that at least the first three of the elements, geometry, material, and stress, can vary over many scale dimensions. Future research on earthquake rupture dynamics needs to consider at which length scales these features are significant in affecting rupture propagation. © Birkhäuser Verlag, Basel, 2004.

  13. 3D dynamic simulation of crack propagation in extracorporeal shock wave lithotripsy

    NASA Astrophysics Data System (ADS)

    Wijerathne, M. L. L.; Hori, Muneo; Sakaguchi, Hide; Oguni, Kenji

    2010-06-01

    Some experimental observations of Shock Wave Lithotripsy (SWL), which include 3D dynamic crack propagation, are simulated with the aim of reproducing the fragmentation of kidney stones by SWL. Extracorporeal shock wave lithotripsy (ESWL) is the fragmentation of kidney stones by focusing an ultrasonic pressure pulse onto the stones. 3D models with fine discretization are used to accurately capture the high-amplitude shear shock waves. For solving the resulting large-scale dynamic crack propagation problem, PDS-FEM is used; it provides numerically efficient failure treatments. With a distributed-memory parallel code of PDS-FEM, experimentally observed 3D photoelastic images of transient stress waves and crack patterns in cylindrical samples are successfully reproduced. The numerical crack patterns are in good quantitative agreement with the experimental ones. The results show that the high-amplitude shear waves induced in the solid by the lithotriptor-generated shock wave play a dominant role in stone fragmentation.

  14. Simulation of the trans-oceanic tsunami propagation due to the 1883 Krakatau volcanic eruption

    NASA Astrophysics Data System (ADS)

    Choi, B. H.; Pelinovsky, E.; Kim, K. O.; Lee, J. S.

    The 1883 Krakatau volcanic eruption generated a destructive tsunami higher than 40 m on the Indonesian coast, where more than 36 000 lives were lost. Sea level oscillations related to this event have been reported at significant distances from the source in the Indian, Atlantic and Pacific Oceans. The evidence for the many manifestations of the Krakatau tsunami has been the subject of intense discussion, and it has been suggested that some of them are not related to the direct propagation of the tsunami waves from the Krakatau volcanic eruption. The present paper analyzes the hydrodynamic part of the Krakatau event in detail. The worldwide propagation of the tsunami waves generated by the Krakatau volcanic eruption is studied numerically using two conventional models: the ray tracing method and a two-dimensional linear shallow-water model. The results of the numerical simulations are compared with available tsunami registration data.
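
    For orientation, the trans-oceanic travel times in such simulations are controlled by the long-wave (linear shallow-water) phase speed; the relations below are the standard textbook forms, with h the local water depth and Γ a ray path (they are not reproduced from the paper):

      c = \sqrt{g h}, \qquad T = \int_{\Gamma} \frac{\mathrm{d}s}{\sqrt{g\, h(s)}}

    For example, an ocean depth of h = 4000 m gives c ≈ 198 m/s (roughly 710 km/h), which sets the scale of the basin-crossing travel times discussed above.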

  15. An Atomistic Simulation of Crack Propagation in a Nickel Single Crystal

    NASA Technical Reports Server (NTRS)

    Karimi, Majid

    2002-01-01

    The main objective of this paper is to determine mechanisms of crack propagation in a nickel single crystal. The motivation for selecting nickel as a case study is that we believe its physical properties are very close to those of nickel-base superalloys. We aim to identify some generic trends that lead a single-crystalline material to failure. We believe that the results obtained here would be of interest to experimentalists in guiding them toward a more optimized experimental strategy. Dynamic crack propagation experiments are very difficult to perform. We are partially motivated to fill the gap by generating simulation results in lieu of experimental ones for the cases where experiments cannot be done or data are not available.

  16. Numerical Simulation of Debris Cloud Propagation inside Gas-Filled Pressure Vessels under Hypervelocity Impact

    NASA Astrophysics Data System (ADS)

    Gai, F. F.; Pang, B. J.; Guan, G. S.

    2009-03-01

    In this paper, the SPH method in AUTODYN-2D is used to investigate the characteristics of debris cloud propagation inside gas-filled pressure vessels under hypervelocity impact. The effect of the equation of state on the debris cloud has been investigated. Numerical simulations were performed to analyze the effect of the gas pressure and the impact conditions on the propagation of the debris clouds. The results show that an increase of gas pressure can reduce the damage caused by the debris cloud's impact on the back wall of the vessel when the pressure is within a certain range. A smaller projectile leads to stronger deceleration of the debris cloud's axial velocity, and the deceleration increases with increasing impact velocity. The time at which venting begins is related to the "vacuum column" along the impact axis. The paper also studied the effect of impact velocity on the gas shock wave.

  17. Simulation of quasi-static hydraulic fracture propagation in porous media with XFEM

    NASA Astrophysics Data System (ADS)

    Juan-Lien Ramirez, Alina; Neuweiler, Insa; Löhnert, Stefan

    2015-04-01

    Hydraulic fracturing is the injection of a fracking fluid at high pressures into the underground. Its goal is to create and expand fracture networks to increase the rock permeability. It is a technique used, for example, for oil and gas recovery and for geothermal energy extraction, since higher rock permeability improves production. Many physical processes take place during fracking: rock deformation, fluid flow within the fractures, and flow into and through the porous rock. All these processes are strongly coupled, which makes their numerical simulation rather challenging. We present a 2D numerical model that simulates the hydraulic propagation of an embedded fracture quasi-statically in a poroelastic, fully saturated material. Fluid flow within the porous rock is described by Darcy's law and the flow within the fracture is approximated by a parallel plate model. Additionally, the effect of leak-off is taken into consideration. The solid component of the porous medium is assumed to be linear elastic and the propagation criteria are given by the energy release rate and the stress intensity factors [1]. The numerical method used for the spatial discretization is the eXtended Finite Element Method (XFEM) [2]. It is based on the standard Finite Element Method, but introduces additional degrees of freedom and enrichment functions to describe discontinuities locally in a system. Through them the geometry of the discontinuity (e.g. a fracture) becomes independent of the mesh, allowing it to move freely through the domain without a mesh-adapting step. With this numerical model we are able to simulate hydraulic fracture propagation with different initial fracture geometries and material parameters. Results from these simulations will also be presented. References [1] D. Gross and T. Seelig. Fracture Mechanics with an Introduction to Micromechanics. Springer, 2nd edition, (2011) [2] T. Belytschko and T. Black. Elastic crack growth in finite elements with minimal
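
    For reference, the two flow descriptions named above are commonly written as follows (standard forms; the symbols k, μ, p, w, and s are introduced here and are not taken from the abstract):

      \mathbf{q}_m = -\frac{k}{\mu}\,\nabla p
      \qquad\text{(Darcy flow in the porous matrix)}

      q_f = -\frac{w^{3}}{12\,\mu}\,\frac{\partial p}{\partial s}
      \qquad\text{(parallel-plate, i.e. cubic-law, flow in a fracture of aperture } w\text{)}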

  18. Tracking Error analysis of Concentrator Photovoltaic Module Using Total 3-Dimensional Simulator

    NASA Astrophysics Data System (ADS)

    Ota, Yasuyuki; Nishioka, Kensuke

    2011-12-01

    A 3-dimensional (3D) operating simulator for a concentrator photovoltaic (CPV) module using a triple-junction solar cell was developed. By coupling 3D equivalent-circuit simulation for the triple-junction solar cell with ray-trace simulation for the optics model, the operating characteristics of the CPV module were calculated. A typical flat Fresnel lens and homogenizer were used in the optics model. The influence of tracking error on the performance of the CPV module was calculated. There was a correlation between the optical efficiency and Isc; however, Pm was not correlated with these values and was strongly dependent on FF. This total simulator can be used for the evaluation and optimization of CPV modules, from light incidence to operating characteristics.

  19. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirmed the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.

  20. 3D geometric modeling and simulation of laser propagation through turbulence with plenoptic functions

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Nelson, William; Davis, Christopher C.

    2014-10-01

    Plenoptic functions are functions that preserve all the necessary light field information of optical events. Theoretical work has demonstrated that geometry-based plenoptic functions can serve equally well in the traditional wave propagation equation known as the "scalar stochastic Helmholtz equation". However, in addressing problems of 3D turbulence simulation, the dominant methods using phase screen models have limitations both in explaining the choice of parameters (on the transverse plane) in real-world measurements, and in finding proper correlations between neighboring phase screens (the Markov assumption breaks down). Though possible corrections to phase screen models are still promising, the equivalent geometric approach based on plenoptic functions begins to show some advantages. In fact, in these geometric approaches, a continuous wave problem is reduced to discrete trajectories of rays. This allows for convenience in parallel computing and guarantees conservation of energy. Besides the pairwise independence of simulated rays, the assigned refractive index grids can be directly tested by temperature measurements with tiny thermoprobes combined with other parameters such as humidity level and wind speed. Furthermore, without loss of generality one can break the causal chain in phase screen models by defining regional refractive centers to allow rays that are less affected to propagate through directly. As a result, our work shows that the 3D geometric approach serves as an efficient and accurate method in assessing relevant turbulence problems with inputs of several environmental measurements and reasonable guesses (such as Cn² levels). This approach will facilitate analysis and possible corrections in lateral wave propagation problems, such as image de-blurring, prediction of laser propagation over long ranges, and improvement of free-space optical communication systems. In this paper, the plenoptic function model and relevant parallel algorithm computing

  1. Dipolarization fronts as earthward propagating flux ropes: A three-dimensional global hybrid simulation

    NASA Astrophysics Data System (ADS)

    Lu, San; Lu, Quanming; Lin, Yu; Wang, Xueyi; Ge, Yasong; Wang, Rongsheng; Zhou, Meng; Fu, Huishan; Huang, Can; Wu, Mingyu; Wang, Shui

    2015-08-01

    Dipolarization fronts (DFs) as earthward propagating flux ropes (FRs) in the Earth's magnetotail are presented and investigated with a three-dimensional (3-D) global hybrid simulation for the first time. In the simulation, several small-scale earthward propagating FRs are found to be formed by multiple X line reconnection in the near tail. During their earthward propagation, the magnetic field Bz of the FRs becomes highly asymmetric due to the imbalance of the reconnection rates between the multiple X lines. At the later stage, when the FRs approach the near-Earth dipole-like region, the antireconnection between the southward/negative Bz of the FRs and the northward geomagnetic field leads to the erosion of the southward magnetic flux of the FRs, which further aggravates the Bz asymmetry. Eventually, the FRs merge into the near-Earth region through the antireconnection. These earthward propagating FRs can fully reproduce the observational features of the DFs, e.g., a sharp enhancement of Bz preceded by a smaller amplitude Bz dip, an earthward flow enhancement, the presence of the electric field components in the normal and dawn-dusk directions, and ion energization. Our results show that the earthward propagating FRs can be used to explain the DFs observed in the magnetotail. The thickness of the DFs is on the order of several ion inertial lengths, and the electric field normal to the front is found to be dominated by the Hall physics. During the earthward propagation from the near-tail to the near-Earth region, the speed of the FR/DFs increases from ~150 km/s to ~1000 km/s. The FR/DFs can be tilted in the GSM (x, y) plane with respect to the y (dawn-dusk) axis and only extend several Earth radii in this direction. Moreover, the structure and evolution of the FRs/DFs are nonuniform in the dawn-dusk direction, which indicates that the DFs are essentially 3-D.

  2. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
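
    A minimal sketch of this idea is shown below; it assumes a simple on-off binary decision with the threshold at half the mean signal level and additive Gaussian noise, which is one common way to turn a measured mean and standard deviation into a BER estimate (the report's exact formulation may differ):

      import numpy as np
      from scipy.special import erfc

      def ber_from_gaussian(mu, sigma):
          """Gaussian-tail (Q-function) BER for a binary decision at threshold mu/2."""
          return 0.5 * erfc(mu / (2.0 * sigma * np.sqrt(2.0)))

      # Illustrative values: mean S-parameter level 1.0, noise standard deviation 0.1.
      print(ber_from_gaussian(mu=1.0, sigma=0.1))   # about 2.9e-7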

  3. Coupled Simulation of Seismic Wave Propagation and Failure Phenomena by Use of an MPS Method

    NASA Astrophysics Data System (ADS)

    Takekawa, Junichi; Mikada, Hitoshi; Goto, Tada-nori; Sanada, Yoshinori; Ashida, Yuzuru

    2013-04-01

    The failure of brittle materials, for example glasses and rock masses, is commonly observed to be discontinuous. It is, however, difficult to simulate these phenomena by use of conventional numerical simulation methods, for example the finite difference method or the finite element method, because of the presence of computational grids or elements artificially introduced before the simulation. It is, therefore, important for research on such discontinuous failures in science and engineering to analyze the phenomena seamlessly. This study deals with the coupled simulation of elastic wave propagation and failure phenomena by use of a moving particle semi-implicit (MPS) method. It is simple to model the objects of analysis because no grid or lattice structure is necessary. In addition, lack of a grid or lattice structure makes it simple to simulate large deformations and failure phenomena at the same time. We first compare analytical and MPS solutions by use of Lamb's problem with different offset distances, material properties, and source frequencies. Our results show that analytical and numerical seismograms are in good agreement with each other for 20 particles in a minimum wavelength. Finally, we focus our attention on the Hopkinson effect as an example of failure induced by elastic wave propagation. In the application of the MPS, the algorithm is basically the same as in the previous calculation except for the introduction of a failure criterion. The failure criterion applied in this study is that particle connectivity must be disconnected when the distance between the particles exceeds a failure threshold. We applied the developed algorithm to a suspended specimen that was modeled as a long bar consisting of thousands of particles. A compressional wave in the bar is generated by an abrupt pressure change on one edge. The compressional wave propagates along the interior of the specimen and is visualized clearly. At the other end of the bar, the spalling of the
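
    A minimal sketch of the bond-breaking criterion described above is given below; the particle coordinates, connectivity list, and threshold value are illustrative and not taken from the paper:

      import numpy as np

      positions = np.array([[0.0, 0.0], [1.0, 0.0], [2.05, 0.0]])   # particle coordinates
      bonds = [(0, 1), (1, 2)]                                      # connected particle pairs
      rest_spacing = 1.0
      failure_threshold = 1.02 * rest_spacing                       # assumed 2% stretch limit

      # Disconnect any pair whose current separation exceeds the failure threshold.
      surviving_bonds = [
          (i, j) for (i, j) in bonds
          if np.linalg.norm(positions[i] - positions[j]) <= failure_threshold
      ]
      print(surviving_bonds)   # [(0, 1)]: the overstretched bond (1, 2) has failed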

  4. Global particle simulation of lower hybrid wave propagation and mode conversion in tokamaks

    NASA Astrophysics Data System (ADS)

    Bao, J.; Lin, Z.; Kuley, A.

    2015-12-01

    Particle-in-cell simulation of lower hybrid (LH) waves in core plasmas is presented with a realistic electron-to-ion mass ratio in toroidal geometry. Because LH waves mainly interact with electrons to drive the current, ion dynamics are described by cold-fluid equations for simplicity, while electron dynamics are described by drift-kinetic equations. This model can be considered a new method for studying LH waves in tokamak plasmas, with advantages in nonlinear simulations. The mode conversion between slow and fast waves is observed in the simulation when the accessibility condition is not satisfied, which is consistent with theory. The poloidal spectrum upshift and broadening effects are observed during LH wave propagation in the toroidal geometry.

  5. Global particle simulation of lower hybrid wave propagation and mode conversion in tokamaks

    SciTech Connect

    Bao, J.; Lin, Z.; Kuley, A.

    2015-12-10

    Particle-in-cell simulation of lower hybrid (LH) waves in core plasmas is presented with a realistic electron-to-ion mass ratio in toroidal geometry. Because LH waves mainly interact with electrons to drive the current, ion dynamics are described by cold-fluid equations for simplicity, while electron dynamics are described by drift-kinetic equations. This model can be considered a new method for studying LH waves in tokamak plasmas, with advantages in nonlinear simulations. The mode conversion between slow and fast waves is observed in the simulation when the accessibility condition is not satisfied, which is consistent with theory. The poloidal spectrum upshift and broadening effects are observed during LH wave propagation in the toroidal geometry.

  6. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
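
    A toy sketch of the Sobol' workflow is given below; it assumes the third-party SALib package, and the variable names, bounds, and the stand-in "snow metric" are hypothetical (this is not the Utah Energy Balance model or the study's configuration):

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["precip_bias", "temp_bias", "radiation_random_error"],
          "bounds": [[-0.5, 0.5], [-2.0, 2.0], [0.0, 50.0]],
      }

      def toy_swe_metric(x):
          precip_bias, temp_bias, rad_err = x
          # Bias terms enter directly; the random-error magnitude enters only weakly.
          return 100.0 * (1.0 + precip_bias) - 8.0 * temp_bias - 0.05 * rad_err

      X = saltelli.sample(problem, 1024)              # Saltelli sampling scheme
      Y = np.apply_along_axis(toy_swe_metric, 1, X)
      Si = sobol.analyze(problem, Y)                  # first- and total-order indices
      print(dict(zip(problem["names"], Si["S1"].round(2))))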

  7. How errors on meteorological variables impact simulated ecosystem fluxes: a case study for six French sites

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Ciais, P.; Peylin, P.; Viovy, N.; Longdoz, B.; Bonnefond, J. M.; Rambal, S.; Klumpp, K.; Olioso, A.; Cellier, P.; Maignan, F.; Eglin, T.; Calvet, J. C.

    2011-03-01

    We analyze how biases in meteorological drivers impact the calculation of ecosystem CO2, water and energy fluxes by models. To do so, we drive the same ecosystem model by meteorology from gridded products and by "true" meteorology from local observations at eddy-covariance flux sites. The study is focused on six flux tower sites in France spanning a 7-14 °C and 600-1040 mm yr-1 climate gradient, with forest, grassland and cropland ecosystems. We evaluate the results of the ORCHIDEE process-based model driven by four different meteorological models against the same model driven by site-observed meteorology. The evaluation is decomposed into characteristic time scales. The main result is that there are significant differences between meteorological models and local tower meteorology. The seasonal cycle of air temperature, humidity and shortwave downward radiation is reproduced correctly by all meteorological models (average R2 = 0.90). At sites located near the coast and influenced by sea breeze, or located at altitude, the misfit between meteorological drivers from gridded data products and tower meteorology is the largest. We show that day-to-day variations in weather are not completely well reproduced by meteorological models, with R2 between modeled grid point and measured local meteorology ranging from 0.35 (REMO model) to 0.70 (SAFRAN model). The bias of meteorological models impacts the flux simulation by ORCHIDEE, and thus would have an effect on regional and global budgets. The forcing error, defined as the simulated flux difference resulting from prescribing modeled instead of observed local meteorology drivers to ORCHIDEE, is quantified for the six studied sites and different time scales. The magnitude of this forcing error is compared to that of the model error, defined as the modeled-minus-observed flux, thus containing uncertain parameterizations, parameter values, and initialization. The forcing error is the largest on a daily time scale, for which it is

  8. Misclassification Errors in Unsupervised Classification Methods. Comparison Based on the Simulation of Targeted Proteomics Data

    PubMed Central

    Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M

    2016-01-01

    Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimating the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated the data expected from a study of the plasma of patients with lower urinary tract dysfunction using the aptamer proteomics assay Somascan (SomaLogic Inc, Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, 80 in aging, etc. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better for the simulated data than the other two methods and enabled classification with misclassification error below 5% in the simulated cohort of 100 patients based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
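
    A small in-silico sketch in the same spirit is shown below, using scikit-learn's KMeans on randomly generated data in which 40 of 1129 "proteins" are shifted by an effect size of 1.5; this toy setup is not the authors' simulation and need not reproduce their error rates:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      n_per_group, n_proteins, n_informative, effect_size = 50, 1129, 40, 1.5

      truth = np.repeat([0, 1], n_per_group)                    # two latent subtypes
      data = rng.normal(size=(2 * n_per_group, n_proteins))     # baseline log-abundance noise
      data[truth == 1, :n_informative] += effect_size           # shift the informative proteins

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)

      # Misclassification error, accounting for the arbitrary labeling of clusters.
      error = min(np.mean(labels != truth), np.mean(labels != 1 - truth))
      print(f"misclassification error: {error:.2%}")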

  9. A Hamiltonian Particle Method with a Staggered Particle Technique for Simulating Seismic Wave Propagation

    NASA Astrophysics Data System (ADS)

    Takekawa, Junichi; Mikada, Hitoshi; Goto, Tada-nori

    2014-08-01

    We present a Hamiltonian particle method (HPM) with a staggered particle technique for simulating seismic wave propagation. In the conventional HPM, physical variables, such as particle displacement and stress, are defined at the center, i.e., at the same position, of each particle. As most seismic simulations using finite difference methods (FDM) are practiced with staggered grid techniques, we know that the staggered alignment of spatial variables can improve the numerical accuracy. In the present study, we hypothesized that a staggered technique could also improve the numerical accuracy in the HPM and tested this hypothesis. First, we conducted a plane wave analysis for the HPM with the staggered particles in order to verify the validity of our strategy. The comparison of grid dispersion in our strategy with that in the conventional one suggests that the accuracy would be improved dramatically by use of the staggered technique. It is also observed that the dispersion of waves is dependent on the propagation direction due to the difference in the average spacing of two neighboring particles for the same parameters, as is usually observed in FDM with a rotated staggered grid. Next, we compared the results from the conventional Lamb's problem using our HPM with those from an analytical approach in order to demonstrate the effectiveness of the staggered particle technique. Our results showed better agreement with the analytical solutions than those from HPM without the staggered particles. We conclude that the staggered particle technique is an effective way to improve the calculation accuracy in the simulation of seismic wave propagation.

  10. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least faster than the sequential implementation and faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in

  11. Toward a reliable decomposition of predictive uncertainty in hydrological modeling: Characterizing rainfall errors using conditional simulation

    NASA Astrophysics Data System (ADS)

    Renard, Benjamin; Kavetski, Dmitri; Leblois, Etienne; Thyer, Mark; Kuczera, George; Franks, Stewart W.

    2011-11-01

    This study explores the decomposition of predictive uncertainty in hydrological modeling into its contributing sources. This is pursued by developing data-based probability models describing uncertainties in rainfall and runoff data and incorporating them into the Bayesian total error analysis methodology (BATEA). A case study based on the Yzeron catchment (France) and the conceptual rainfall-runoff model GR4J is presented. It exploits a calibration period where dense rain gauge data are available to characterize the uncertainty in the catchment average rainfall using geostatistical conditional simulation. The inclusion of information about rainfall and runoff data uncertainties overcomes ill-posedness problems and enables simultaneous estimation of forcing and structural errors as part of the Bayesian inference. This yields more reliable predictions than approaches that ignore or lump different sources of uncertainty in a simplistic way (e.g., standard least squares). It is shown that independently derived data quality estimates are needed to decompose the total uncertainty in the runoff predictions into the individual contributions of rainfall, runoff, and structural errors. In this case study, the total predictive uncertainty appears dominated by structural errors. Although further research is needed to interpret and verify this decomposition, it can provide strategic guidance for investments in environmental data collection and/or modeling improvement. More generally, this study demonstrates the power of the Bayesian paradigm to improve the reliability of environmental modeling using independent estimates of sampling and instrumental data uncertainties.

  12. Mean square displacements with error estimates from non-equidistant time-step kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-06-01

    We present a method to calculate mean square displacements (MSD) with error estimates from kinetic Monte Carlo (KMC) simulations of diffusion processes with non-equidistant time-steps. An analytical solution for estimating the errors is presented for the special case of one moving particle at fixed rate constant. The method is generalized to an efficient computational algorithm that can handle any number of moving particles or different rates in the simulated system. We show with examples that the proposed method gives the correct statistical error when the MSD curve describes pure Brownian motion and can otherwise be used as an upper bound for the true error.
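
    A minimal sketch of the quantity being estimated is given below; it samples piecewise-constant KMC trajectories onto a common time grid and uses the spread over independent walkers as the error bar, which is a simpler stand-in for the paper's analytical treatment (all parameter values are illustrative):

      import numpy as np

      rng = np.random.default_rng(3)
      n_walkers, n_events, rate, step = 200, 500, 1.0, 1.0

      dt = rng.exponential(1.0 / rate, size=(n_walkers, n_events))   # KMC waiting times
      times = np.cumsum(dt, axis=1)
      hops = rng.choice([-step, step], size=(n_walkers, n_events))   # +/-1 lattice hops
      positions = np.cumsum(hops, axis=1)

      grid = np.linspace(0.0, times[:, -1].min(), 50)                # common, equidistant grid
      on_grid = np.empty((n_walkers, grid.size))
      for w in range(n_walkers):
          t = np.concatenate([[0.0], times[w]])
          x = np.concatenate([[0.0], positions[w]])
          on_grid[w] = x[np.searchsorted(t, grid, side="right") - 1] # position at grid times

      sq_disp = on_grid ** 2                                         # squared displacement from origin
      msd = sq_disp.mean(axis=0)
      msd_err = sq_disp.std(axis=0, ddof=1) / np.sqrt(n_walkers)     # standard error over walkers
      print(f"MSD(t = {grid[-1]:.1f}) = {msd[-1]:.1f} +/- {msd_err[-1]:.1f}")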

  13. DTI quality control assessment via error estimation from Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.

    2013-03-01

    Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.

  14. Fatigue-crack propagation behavior of ASTM A27 cast steel in simulated Hanford groundwater

    SciTech Connect

    James, L.A.

    1986-09-01

    Fatigue-crack propagation (FCP) tests were conducted on specimens of cast ASTM A27 steel in simulated Hanford groundwater at 150 °C and 250 °C. Fatigue loadings were employed as the most feasible means of accelerating the environmentally assisted cracking (EAC) process. A tentative threshold for EAC was established, and an example calculation was used to show how such a threshold can be related to allowable stress levels and flaw sizes to assure that EAC will not occur.

  15. Finite Element Simulations on Erosion and Crack Propagation in Thermal Barrier Coatings

    NASA Astrophysics Data System (ADS)

    Ma, Z. S.; Fu, L. H.; Yang, L.; Zhou, Y. C.; Lu, C.

    2015-07-01

    Erosion of thermal barrier coatings occurs when atmospheric or carbon particles from the combustion chamber are ingested into aviation turbine engines. To understand the influence of erosion on the service life of thermal barrier coatings, we introduce erosion and crack propagation models and then, by using finite element simulations, determine the relationship between the penetration depth, the maximum principal stress, and impingement variables such as velocity and angle. It is shown that cracks nucleate and extend during the erosion process and that the crack length increases with increasing particle velocity and impact angle.

  16. Simulation of charge exchange plasma propagation near an ion thruster propelled spacecraft

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Kaufman, H. R.; Winder, D. R.

    1981-01-01

    A model describing the charge exchange plasma and its propagation is discussed, along with a computer code based on the model. The geometry of an idealized spacecraft having an ion thruster is outlined, with attention given to the assumptions used in modeling the ion beam. Also presented is the distribution function describing charge exchange production. The barometric equation is used in relating the variation in plasma potential to the variation in plasma density. The numerical methods and approximations employed in the calculations are discussed, and comparisons are made between the computer simulation and experimental data. An analytical solution of a simple configuration is also used in verifying the model.
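
    The barometric relation referred to above is commonly written in its standard isothermal-electron form as

      n_e(\mathbf{r}) = n_0 \exp\!\left(\frac{e\,[\phi(\mathbf{r}) - \phi_0]}{k_B T_e}\right)

    where n_0 and φ_0 are reference values of density and potential and T_e is the electron temperature; these symbols are introduced here for illustration rather than taken from the paper.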

  17. Nonlinear δF Simulation Studies of High-Intensity, Non-Axisymmetric Beam Propagation

    NASA Astrophysics Data System (ADS)

    Kakaes, Konstantin; Stoltz, Peter; Davidson, Ronald

    1998-11-01

    The nonlinear δF formalism, previously developed and applied for axisymmetric beam propagation (P. H. Stoltz, W. W. Lee, and R. C. Davidson, this conference), has been extended to the case of general variation in the transverse phase space (X, Y, X', Y'). The analysis considers a high-intensity ion beam in the thin-beam approximation (r_b << S) propagating through a periodic focusing solenoidal field κ_z(s+S) = κ_z(s). The distribution function F_b is divided into a zero-order part (F_b^0) plus a perturbation (δF_b), which evolve nonlinearly in the zero-order and perturbed field configurations. The perturbed distribution function δF_b(X, Y, X', Y', s) and potential δΨ(X, Y, s) are allowed to have general X-Y dependence, whereas the zero-order distribution F_b^0 is taken to be axisymmetric (∂/∂θ = 0). Simulation results are presented for two cases: (a) a uniform focusing field with κ_z(s) = κ̄_z = const, and (b) a periodic focusing field with κ_z(s) = κ̄_z + δκ_z(s). Beam propagation is investigated for both sudden and adiabatic turn-on of δκ_z(s).

  18. Fabrication and simulation of random and periodic composites for reduced stress wave propagation

    NASA Astrophysics Data System (ADS)

    McCuiston, Ryan Charles

    During a ballistic impact event between a monolithic ceramic target and a projectile, a shock wave precedes the projectile penetration and propagates through the target. Shock-wave-induced damage, fundamentally caused by the creation of tensile stress, can reduce the expected performance of the target material. If the shock wave could be prevented from propagating, it would be possible to improve the ballistic performance of the target material. Recent research on phononic band gap structures has shown that it is possible to design and fabricate biphasic structures that forbid propagation of low amplitude acoustic waves. The goal of this dissertation was to determine the feasibility of creating a structure that is capable of limiting and/or defeating large amplitude shock wave propagation by applying the concepts of phononic band gap research. A model system of Al2O3 and WC-Co was selected based on processing, acoustic and ballistic criteria. Al2O3/WC-Co composites were fabricated by die pressing and vacuum sintering. The WC-Co was added as discrete inclusions 0.5 to 1.5 mm in diameter up to 50 vol. %. The interfacial bonding between Al2O3 and WC-Co was characterized by indentation and microscopy to determine optimal sintering conditions. A tape casting and lamination technique was developed to fabricate large dimension Al2O3 samples with periodically placed WC-Co inclusions. Through-transmission acoustic characterization of green tape-cast and laminated samples showed that the acoustic velocity could be reduced significantly by proper WC-Co inclusion arrangement. Two-dimensional finite element simulations were performed on a series of designed Al2O3 structures containing both random and periodically arrayed WC-Co inclusions. For a fixed loading scheme, the effects of WC-Co inclusion diameter, area fraction and stacking arrangement were studied. Structures were found to respond either homogeneously, heterogeneously, or in a mixed-mode fashion to the propagating stress wave. The

  19. A simulator study of the interaction of pilot workload with errors, vigilance, and decisions

    NASA Technical Reports Server (NTRS)

    Smith, H. P. R.

    1979-01-01

    A full mission simulation of a civil air transport scenario that had two levels of workload was used to observe the actions of the crews and the basic aircraft parameters and to record heart rates. The results showed that the number of errors was very variable among crews but the mean increased in the higher workload case. The increase in errors was not related to rise in heart rate but was associated with vigilance times as well as the days since the last flight. The recorded data also made it possible to investigate decision time and decision order. These also varied among crews and seemed related to the ability of captains to manage the resources available to them on the flight deck.

  20. Prior-predictive value from fast-growth simulations: Error analysis and bias estimation

    NASA Astrophysics Data System (ADS)

    Favaro, Alberto; Nickelsen, Daniel; Barykina, Elena; Engel, Andreas

    2015-01-01

    Variants of fluctuation theorems recently discovered in the statistical mechanics of nonequilibrium processes may be used for the efficient determination of high-dimensional integrals as typically occurring in Bayesian data analysis. In particular for multimodal distributions, Monte Carlo procedures not relying on perfect equilibration are advantageous. We provide a comprehensive statistical error analysis for the determination of the prior-predictive value (the evidence) in a Bayes problem, building on a variant of the Jarzynski equation. Special care is devoted to the characterization of the bias intrinsic to the method and statistical errors arising from exponential averages. We also discuss the determination of averages over multimodal posterior distributions with the help of a consequence of the Crooks relation. All our findings are verified by extensive numerical simulations of two model systems with bimodal likelihoods.
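
    Schematically, the variant of the Jarzynski equation alluded to above relates an exponential average of the nonequilibrium "work" W to a free-energy difference, which in the Bayesian setting plays the role of the log evidence; the correspondence below is the standard textbook form with the symbol conventions assumed here (β the inverse temperature, D the data):

      \left\langle e^{-\beta W} \right\rangle = e^{-\beta\,\Delta F},
      \qquad
      Z = p(\mathcal{D}) = \left\langle e^{-W} \right\rangle_{\text{fast growth}}

    The bias and statistical error discussed in the abstract arise because such exponential averages are dominated by rare low-work realizations when the number of fast-growth trajectories is finite.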

  1. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    -scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing the 10B content (20% of natural boron) in the natural boron of borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B (n,alpha) 7Li reaction products. Both of the particles produced have the capability of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner from the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in the available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory-using intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers into semiconductor memory architectures. This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with the neutron flux and memory supply voltage. Measurement

  2. Enhancement of the NMSU Channel Error Simulator to Provide User-Selectable Link Delays

    NASA Technical Reports Server (NTRS)

    Horan, Stephen; Wang, Ru-Hai

    2000-01-01

    This is the third in a continuing series of reports describing the development of the Space-to-Ground Link Simulator (SGLS) to be used for testing data transfers under simulated space channel conditions. The SGLS is based upon Virtual Instrument (VI) software techniques for managing the error generation, link data rate configuration, and, now, selection of the link delay value. In this report we detail the changes that needed to be made to the SGLS VI configuration to permit link delays to be added to the basic error generation and link data rate control capabilities. This was accomplished by modifying the rate-splitting VIs to include a buffer to hold the incoming data for the duration selected by the user to emulate the channel link delay. In sample tests of this configuration, the TCP/IP ftp service and the SCPS fp service were used to transmit 10-KB data files using both symmetric (both forward and return links set to 115200 bps) and unsymmetric (forward link set at 2400 bps and return link set at 115200 bps) link configurations. Transmission times were recorded at bit error rates of 0 through 10^-5 to give an indication of the link performance. In these tests, we noted separate timings for the protocol setup time to initiate the file transfer and the variation in the actual file transfer time caused by channel errors. Both protocols showed similar performance to that seen earlier for the symmetric and unsymmetric channels. This time, the measured delays in establishing the file protocol also showed that these delays could double the transmission time and need to be accounted for in mission planning. Both protocols also showed difficulty in transmitting large data files over large link delays. In these tests, there was no clear favorite between the TCP/IP ftp and the SCPS fp. Based upon these tests, further testing is recommended to extend the results to different file transfer configurations.

  3. Finite-difference simulation of seismic wave propagation for explosion earthquakes at Sakurajima volcano, Japan

    NASA Astrophysics Data System (ADS)

    Takenaka, H.; Fujioka, A.; Nakamura, T.; Okamoto, T.

    2013-12-01

    Sakurajima volcano, one of the most active volcanoes in Japan, is located in part of Kagoshima Bay, i.e. the Aira caldera, in the south of Kyushu island, Japan. It has an elevation of 1117 m and three main peaks: Kita-dake (1117 m), Naka-dake (1060 m) and Minami-dake (1040 m). Sakurajima is connected to the Osumi peninsula in the east. We construct a fully three-dimensional model of Sakurajima volcano and conduct numerical simulations of seismic wave propagation for eruption earthquakes at Sakurajima volcano with the finite-difference method (FDM, Nakamura et al., 2012, BSSA). Our FDM model area is 12 km x 15 km wide, which includes Sakurajima volcano around the center. The mesh size (size of each cubic cell) of the FDM model is 20 m. Seismic wave propagation is strongly affected not only by subsurface structure but also by the topography of land and seafloor. For the surface model construction we employ the 50-m-mesh DEM provided by the Geographical Survey Institute of Japan for the land surface, and nearly 250-m-mesh topographic data of Kishimoto (1999) for the seafloor, while for the subsurface structure model construction we exploit the Japan Integrated Velocity Structure Model provided by the Headquarters for Earthquake Research Promotion. To incorporate the topography of land and seafloor into the FDM, a simple and accurate fluid-solid boundary condition is implemented, where the seawater is included in the sea area of the FDM model. We employ a simple pulse point source of a vertical single force or explosive (isotropic) type around sea level depth in the volcano to excite seismic waves. The modeled frequency range of the simulation is lower than about 5 Hz. Our simulation results show rather complicated waveforms and long durations, which may come from scattering effects due to the topography and site effects due to the shallow surface layers. It suggests that appropriate modeling of effects of the topography on seismic wave

  4. Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric propagation delays

    USGS Publications Warehouse

    Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.

    2007-01-01

    When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.

  5. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters have been selected using F-test evaluation and design of experiments, and then an incomplete fourth-order polynomial response surface model (RSM) has been developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the calculation amount and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
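
    A schematic of the surrogate-plus-sampling idea is given below, with a one-parameter quadratic response surface standing in for the paper's incomplete fourth-order RSM and a made-up "expensive model"; all names and numbers are hypothetical:

      import numpy as np

      rng = np.random.default_rng(7)

      def expensive_model(k):
          """Stand-in for a costly FE computation of, e.g., a natural frequency."""
          return 10.0 * np.sqrt(k) + 0.1 * k

      # 1) Design of experiments: a handful of full-model evaluations.
      k_design = np.linspace(0.5, 2.0, 7)
      f_design = expensive_model(k_design)

      # 2) Fit the response surface (here a degree-2 polynomial).
      surrogate = np.poly1d(np.polyfit(k_design, f_design, deg=2))

      # 3) Monte Carlo sampling on the cheap surrogate to propagate parameter uncertainty.
      k_samples = rng.normal(loc=1.2, scale=0.1, size=100_000)
      f_samples = surrogate(k_samples)
      print(f"mean = {f_samples.mean():.3f}, std = {f_samples.std():.3f}")

    In the paper's inverse setting, the parameter mean and covariance would then be adjusted until the surrogate-plus-MCS statistics match the test statistics.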

  6. Benchmarks for time-domain simulation of sound propagation in soft-walled airways: Steady configurations

    PubMed Central

    Titze, Ingo R.; Palaparthi, Anil; Smith, Simeon L.

    2014-01-01

    Time-domain computer simulation of sound production in airways is a widely used tool, both for research and synthetic speech production technology. Speed of computation is generally the rationale for one-dimensional approaches to sound propagation and radiation. Transmission line and wave-reflection (scattering) algorithms are used to produce formant frequencies and bandwidths for arbitrarily shaped airways. Some benchmark graphs and tables are provided for formant frequencies and bandwidth calculations based on specific mathematical terms in the one-dimensional Navier–Stokes equation. Some rules are provided here for temporal and spatial discretization in terms of desired accuracy and stability of the solution. Kinetic losses, which have been difficult to quantify in frequency-domain simulations, are quantified here on the basis of the measurements of Scherer, Torkaman, Kucinschi, and Afjeh [(2010). J. Acoust. Soc. Am. 128(2), 828–838]. PMID:25480071

  7. FE simulation of laser generated surface acoustic wave propagation in skin.

    PubMed

    L'Etang, Adèle; Huang, Zhihong

    2006-12-22

    Advances in laser ultrasonics have opened new possibilities in medical applications, such as the characterization of skin properties. This paper describes the development of a multilayered finite element model (FEM) using ANSYS to simulate the propagation of laser generated thermoelastic surface acoustic waves (SAWs) through skin and to generate signals one would expect to observe without causing thermal damage to skin. A transient thermal analysis is developed to simulate the thermal effect of the laser source penetrating into the skin. The results from the thermal analysis are subsequently applied as a load to the structural analysis where the out-of-plane displacement responses are analysed in models with varying dermis layer thickness. PMID:16814352

  8. A phase screen model for simulating numerically the propagation of a laser beam in rain

    SciTech Connect

    Lukin, I P; Rychkov, D S; Falits, A V; Lai, Kin S; Liu, Min R

    2009-09-30

    A method based on the generalisation of the phase screen method for a continuous random medium is proposed for simulating numerically the propagation of laser radiation in a turbulent atmosphere with precipitation. In the phase screen model for a discrete component of a heterogeneous 'air-rain droplet' medium, the amplitude screen describing the scattering of an optical field by discrete particles of the medium is replaced by an equivalent phase screen with a spectrum of the correlation function of the effective dielectric constant fluctuations that is similar to the spectrum of a discrete scattering component - water droplets in air. The 'turbulent' phase screen is constructed on the basis of the Kolmogorov model, while the 'rain' screen model utilises the exponential distribution of the number of rain drops with respect to their radii as a function of the rain intensity. The results of the numerical simulation are compared with the known theoretical estimates for a large-scale discrete scattering medium. (propagation of laser radiation in matter)
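
    A hedged sketch of the 'turbulent' half of this two-screen model: the standard FFT-based recipe for a Kolmogorov phase screen, with grid size, grid spacing and Fried parameter r0 chosen arbitrarily for illustration (the 'rain' screen and the beam propagation itself are not reproduced here).

        # Hedged sketch (not the authors' code): FFT-based Kolmogorov phase screen.
        import numpy as np

        def kolmogorov_phase_screen(N=256, delta=0.01, r0=0.1, rng=np.random.default_rng(1)):
            f1d = np.fft.fftfreq(N, d=delta)                 # spatial frequencies, cycles/m
            fx, fy = np.meshgrid(f1d, f1d, indexing="ij")
            f = np.sqrt(fx**2 + fy**2)
            f[0, 0] = np.inf                                 # suppress the undefined DC term
            psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)   # Kolmogorov phase PSD
            df = 1.0 / (N * delta)
            cn = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) * np.sqrt(psd) * df
            return np.real(np.fft.ifft2(cn)) * N * N         # phase screen in radians

        screen = kolmogorov_phase_screen()
        print("phase standard deviation (rad):", screen.std())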

  9. Precipitation uncertainty propagation in hydrologic simulations: evaluation over the Iberian Peninsula.

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George

    2016-04-01

    Precipitation is arguably one of the most important forcing variables that drive terrestrial water cycle processes. The process of precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid) and depends on several other factors (aerosols, orography etc), which make estimation and modeling of this process a particularly challenging task. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to investigate and assess the impact of precipitation uncertainty in hydrologic simulations in order to be able to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely-used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. Comparative analysis is based on using the SAFRAN-simulations as reference and it is carried out at different space (0.5deg or regional average) and time (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii

  10. A combined approach to the estimation of statistical error of the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Plotnikov, M. Yu.; Shkarupa, E. V.

    2015-11-01

    Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables for the calculation of macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computation and is applicable to any degree of probabilistic dependence of the sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested using the classical Fourier problem and the problem of supersonic flow of rarefied gas through a permeable obstacle.
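
    The paper's combined estimator is not reproduced here; the following sketch shows the baseline problem it addresses, using a batch-means estimate of the statistical error for a correlated sample sequence (an AR(1) surrogate for a DSMC macroparameter history; the batch count and process parameters are assumptions).

        # Hedged sketch (not the paper's estimator): batch-means error of a time average
        # computed from correlated samples, compared with the naive i.i.d. formula.
        import numpy as np

        def batch_means_error(samples, n_batches=20):
            """Standard error of the mean for a correlated sample sequence."""
            m = len(samples) // n_batches
            batches = np.asarray(samples[: m * n_batches]).reshape(n_batches, m).mean(axis=1)
            return batches.std(ddof=1) / np.sqrt(n_batches)

        # Correlated synthetic "macroparameter" history (AR(1) process as a stand-in)
        rng = np.random.default_rng(2)
        x = np.empty(20000)
        x[0] = 0.0
        for i in range(1, len(x)):
            x[i] = 0.95 * x[i - 1] + rng.standard_normal()

        naive = x.std(ddof=1) / np.sqrt(len(x))   # ignores correlation, too optimistic
        print(f"naive error: {naive:.4f}, batch-means error: {batch_means_error(x):.4f}")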

  11. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse through the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for the individual channels of such a system are also presented.

  12. Characterization of ambient air pollution measurement error in a time-series health study using a geostatistical simulation approach

    NASA Astrophysics Data System (ADS)

    Goldman, Gretchen T.; Mulholland, James A.; Russell, Armistead G.; Gass, Katherine; Strickland, Matthew J.; Tolbert, Paige E.

    2012-09-01

    In recent years, geostatistical modeling has been used to inform air pollution health studies. In this study, distributions of daily ambient concentrations were modeled over space and time for 12 air pollutants. Simulated pollutant fields were produced for a 6-year time period over the 20-county metropolitan Atlanta area using the Stanford Geostatistical Modeling Software (SGeMS). These simulations incorporate the temporal and spatial autocorrelation structure of ambient pollutants, as well as season and day-of-week temporal and spatial trends; these fields were considered to be the true ambient pollutant fields for the purposes of the simulations that followed. Simulated monitor data at the locations of actual monitors were then generated that contain error representative of instrument imprecision. From the simulated monitor data, four exposure metrics were calculated: central monitor and unweighted, population-weighted, and area-weighted averages. For each metric, the amount and type of error relative to the simulated pollutant fields are characterized and the impact of error on an epidemiologic time-series analysis is predicted. The amount of error, as indicated by a lack of spatial autocorrelation, is greater for primary pollutants than for secondary pollutants and is only moderately reduced by averaging across monitors; more error will result in less statistical power in the epidemiologic analysis. The type of error, as indicated by the correlations of error with the monitor data and with the true ambient concentration, varies with exposure metric, with error in the central monitor metric more of the classical type (i.e., independent of the monitor data) and error in the spatial average metrics more of the Berkson type (i.e., independent of the true ambient concentration). Error type will affect the bias in the health risk estimate, with bias toward the null and away from the null predicted depending on the exposure metric; population-weighting yielded the
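
    A toy illustration of the two diagnostics mentioned above, i.e. correlating the exposure-metric error with the metric itself and with the true concentration; the two synthetic datasets follow the textbook constructions (classical: metric = truth + independent noise; Berkson: truth = metric + independent noise), and all numbers are illustrative rather than taken from the study.

        # Hedged sketch (not the study's code): error-type diagnostics via correlations.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 2000
        trend = 10.0 + 2.0 * np.sin(np.linspace(0.0, 20.0, n))   # shared seasonal signal

        # Classical-type: the metric scatters around the true ambient concentration
        true_c = trend + rng.normal(0.0, 1.0, n)
        z_c = true_c + rng.normal(0.0, 1.5, n)
        # Berkson-type: the true concentration scatters around the assigned metric
        z_b = trend
        true_b = z_b + rng.normal(0.0, 1.5, n)

        for name, z, true in [("classical", z_c, true_c), ("Berkson  ", z_b, true_b)]:
            err = z - true
            print(f"{name} corr(err, metric) = {np.corrcoef(err, z)[0, 1]:+.2f}, "
                  f"corr(err, truth) = {np.corrcoef(err, true)[0, 1]:+.2f}")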

  13. Beyond MCMC: Data-constraint and error propagation in a dynamic terrestrial biosphere model through Bayesian model emulation (Invited)

    NASA Astrophysics Data System (ADS)

    Dietze, M.; Lebauer, D.; Moorcroft, P. R.; Richardson, A. D.; Wang, D.

    2009-12-01

    Data-model integration plays a critical role in assessing and improving our capacity to predict the dynamics of the terrestrial carbon cycle. Likewise, the ability to attach quantitative statements of uncertainty around model forecasts is crucial for model assessment and interpretation and for setting field research priorities. Bayesian methods have garnered recent attention for these applications, especially for problems with multiple data constraints, but the Markov chain Monte Carlo methods usually employed can be computationally prohibitive for large data sets and slow models. We describe an alternative method, Bayesian model emulation, that can approximate the full joint posterior density, is more amenable to parallelization, and provides an estimate of parameter sensitivity as a byproduct. We report on the application of these methods to the parameterization of the Ecosystem Demography model v2.1, an age- and size-structured terrestrial biosphere model. Results will focus on the application of the model to the parameterization at two flux tower sites, one in the northern hardwood forest of New Hampshire and the second for a biofuel crop field trial in Illinois. Analysis of both sites involved multiple data constraints, the specification of both model and data uncertainties, and the inclusion of informative priors constructed from a meta-analysis of the primary literature. The model is well-constrained at both sites, with particular improvement in parameters controlling below-ground processes and allocation, which had poor prior constraint. Observation error for NEE is highest during the growing season while model error, by contrast, is highest in the winter due to sensitivity of the model to soil freezing. Model fit is sensitive to the weighting of different data sources, in particular if the data sources are in disagreement (e.g. nighttime NEE and soil respiration). Statistically accounting for the high degree of temporal autocorrelation in eddy
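
    A hedged miniature of Bayesian model emulation (not the ED2 workflow): fit a Gaussian-process emulator to a small design of expensive-model runs, then evaluate an approximate posterior on a grid with no further model calls; the toy model, observation and error values are assumptions.

        # Hedged sketch (not the authors' code): emulator-based posterior approximation.
        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expensive_model(theta):                 # stand-in for the slow biosphere model
            return np.sin(3.0 * theta) + 0.5 * theta

        design = np.linspace(0.0, 2.0, 12)[:, None]              # small design of model runs
        runs = np.array([expensive_model(t[0]) for t in design])
        emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6).fit(design, runs)

        observation, sigma_obs = 1.2, 0.1
        grid = np.linspace(0.0, 2.0, 500)[:, None]
        pred, pred_sd = emulator.predict(grid, return_std=True)
        # Gaussian likelihood evaluated through the emulator, inflated by emulator uncertainty
        log_post = norm.logpdf(observation, loc=pred, scale=np.sqrt(sigma_obs**2 + pred_sd**2))
        print("posterior mode near theta =", grid[np.argmax(log_post), 0])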

  14. Simulated retrievals of methane total columns in support of future satellite missions: an error sources analysis

    NASA Astrophysics Data System (ADS)

    Checa-Garcia, Ramiro; Alkemade, Frans; Boudon, Vincent; Fischerkeller, Constanze; Hahne, Philipp; Tran, Ha; Landgraf, Jochen; Butz, Andre

    2014-05-01

    Measuring atmospheric composition is a central objective for monitoring climate change and understanding human impact on the environment. In particular, quantifying natural and anthropogenic sources and sinks of greenhouse gases is a primary target of future Earth observing satellite missions. To this end, upcoming satellites are designed to measure carbon dioxide and/or methane total columns with high accuracy. Here, our research focuses on investigating and quantifying the main error sources of methane total column retrievals from satellites collecting solar backscatter absorption spectra in the shortwave infrared spectral range. Since errors as small as fractions of a percent can jeopardize the concentration estimates, our study in particular aims at supporting the best selection of instrument properties of new sensors such as Sentinel-5. To achieve this goal we performed retrieval simulations for a detailed ensemble of synthetic scenarios covering typical geophysical scenes that any future satellite would encounter. The ensemble is based on a range of microphysical aerosol and cirrus properties, Lambertian surface reflection properties and seasonal variations. The use of synthetic scenarios provides insight into the partitioning of several error sources such as forward model approximations, instrument properties, imperfect spectroscopy. Finally, our assessment will point out the most critical aspects to be considered in the design of future satellite missions and their support studies.

  15. Simulating Seismic Wave Propagation in 3-D Structure: A Case Study For Istanbul City

    NASA Astrophysics Data System (ADS)

    Yelkenci, Seda; Aktar, Mustafa

    2013-04-01

    Investigation of wave propagation around the Marmara Sea, in particular for the city of Istanbul, is critical because this target area is identified as one of the megacities with the highest seismic risk in the world. This study attempts to create an integrated 3D seismic/geologic model and a precise understanding of 3-D wave propagation in the city of Istanbul. The approach is based on generating synthetic seismograms using realistic velocity structures as well as accurate locations, focal mechanisms and source parameters of reference earthquakes. The moderate-size reference earthquakes occurred in the Marmara Sea and were recorded by the National Seismic Network of Turkey as well as the network of the Istanbul Early Warning and Rapid Response System. The seismograms are simulated by means of a 3-D finite difference method run in a parallel processing environment. In the context of creating a robust velocity model, 1D velocity models derived from previous crustal studies of the Marmara region, such as refraction seismics and receiver functions, were first adopted for depths greater than 1 km. The velocity structure in the shallower part of the study region is then derived from recent geophysical and geotechnical surveys. To construct a 3-D model from the obtained 1-D model data, a variety of interpolation methods are considered. According to observations of amplitude and arrival time based on comparison of the simulated seismograms, the velocity model is refined so that S delay times are compensated. Another important task of this work is the application of the finite difference method to estimate three-dimensional seismic responses for a specified basin structure, including soft sediments with low shear velocities, with respect to the surrounding area in the Asian part of Istanbul. The analysis, performed both in the time and frequency domain, helps in understanding the comprehensive wave propagation characteristics and the distribution of

  16. Monte Carlo simulation for temporal characteristics of pulse laser propagation in discrete random medium

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yuan, Hongwu; Mei, Haiping; Zhang, Qianghua

    2013-08-01

    The temporal characteristics of laser pulse transmission in a discrete random medium are studied using the Monte Carlo method. Firstly, the optical parameters of the medium are obtained with the OPAC software. A Monte Carlo model is then created and the transport of a large number of photons is tracked; from these statistics the average photon arrival time and the average pulse broadening are obtained. The results are compared with calculations based on the two-frequency mutual coherence function and are found to be in close agreement. Finally, the impulse response function of the medium, obtained by polynomial fitting, can be used to correct the inter-symbol interference caused by the discrete random medium in optical communications and so reduce the system error rate.
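
    A hedged sketch of this kind of Monte Carlo photon tracking, recording the path lengths of transmitted photons to estimate the mean arrival time and pulse broadening; the optical coefficients, anisotropy factor and slab thickness are assumed values, not the OPAC-derived parameters of the paper.

        # Hedged sketch (not the authors' code): photon random walk through a scattering slab.
        import numpy as np

        C = 2.25e8                # light speed in the medium, m/s (assumed index ~1.33)
        MU_S, MU_A = 0.5, 0.01    # scattering / absorption coefficients, 1/m (assumed)
        G = 0.9                   # Henyey-Greenstein anisotropy factor (assumed)
        SLAB = 10.0               # thickness of the random medium, m (assumed)

        rng = np.random.default_rng(4)
        arrival_times = []
        for _ in range(20000):
            z, mu_z, path = 0.0, 1.0, 0.0
            while 0.0 <= z < SLAB:
                step = -np.log(1.0 - rng.random()) / (MU_S + MU_A)   # free path to next event
                z += mu_z * step
                path += step
                if rng.random() < MU_A / (MU_S + MU_A):              # photon absorbed
                    break
                # sample a new direction cosine from the Henyey-Greenstein phase function
                s = (1.0 - G**2) / (1.0 - G + 2.0 * G * rng.random())
                cos_t = np.clip((1.0 + G**2 - s**2) / (2.0 * G), -1.0, 1.0)
                sin_t = np.sqrt(1.0 - cos_t**2)
                mu_z = np.clip(mu_z * cos_t
                               + np.sqrt(1.0 - mu_z**2) * sin_t * np.cos(2.0 * np.pi * rng.random()),
                               -1.0, 1.0)
            else:
                if z >= SLAB:
                    path -= (z - SLAB) / mu_z      # trim the overshoot past the exit face
                    arrival_times.append(path / C) # transmitted photon: record arrival time

        arrival_times = np.array(arrival_times)
        print(f"mean arrival time {arrival_times.mean()*1e9:.1f} ns, "
              f"rms pulse broadening {arrival_times.std()*1e9:.2f} ns")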

  17. Propagation of variability in railway dynamic simulations: application to virtual homologation

    NASA Astrophysics Data System (ADS)

    Funfschilling, Christine; Perrin, Guillaume; Kraft, Sönke

    2012-01-01

    Railway dynamics simulations are increasingly used to predict and analyse the behaviour of the vehicle and of the track during their whole life cycle. Up to now, however, no simulation has been used in the certification procedure, even though the expected benefits are important: cheaper and shorter procedures, more objectivity, and better knowledge of the behaviour around critical situations. Deterministic simulations are nevertheless too poor to represent the whole physics of the track/vehicle system, which contains several sources of variability: variability of the mechanical parameters of a train among a class of vehicles (mass, stiffness and damping of the different suspensions), variability of the contact parameters (friction coefficient, wheel and rail profiles), and variability of the track design and quality. This variability plays an important role in safety and ride quality, and thus in the certification criteria. When using simulation for certification purposes, it therefore seems crucial to take into account the variability of the different inputs. The main goal of this article is thus to propose a method to introduce variability into railway dynamics. A four-step method is described, namely the definition of the stochastic problem, the modelling of the input variability, the propagation, and the analysis of the output. Each step is illustrated with railway examples.
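
    A toy Monte Carlo reduction of the four steps above (not the article's code): define input distributions, sample them, propagate through a cheap response surrogate, and analyse the output against a limit; the parameter distributions, the surrogate formula and the limit are all illustrative assumptions.

        # Hedged sketch: four-step variability propagation on a placeholder vehicle model.
        import numpy as np

        rng = np.random.default_rng(8)
        n = 5000

        # Steps 1-2: stochastic problem and input variability (all values illustrative)
        stiffness = rng.normal(1.0e6, 0.1e6, n)        # suspension stiffness, N/m
        damping = rng.normal(2.0e4, 0.2e4, n)          # suspension damping, N*s/m
        friction = rng.uniform(0.25, 0.45, n)          # wheel/rail friction coefficient

        # Step 3: propagation through a surrogate safety indicator (placeholder formula)
        def safety_indicator(k, c, mu):
            return 0.8 * mu + 1.0e-7 * k - 5.0e-6 * c

        y = safety_indicator(stiffness, damping, friction)

        # Step 4: analysis of the output against an assumed certification limit of 0.35
        print(f"mean = {y.mean():.3f}, 99th percentile = {np.percentile(y, 99):.3f}, "
              f"P(exceed 0.35) = {(y > 0.35).mean():.3%}")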

  18. Development and validation of a multiecho computer simulation of ultrasound propagation through cancellous bone

    NASA Astrophysics Data System (ADS)

    Langton, Christian; Church, Luke

    2002-05-01

    Cancellous bone consists of a porous open-celled framework of trabeculae interspersed with marrow. Although the measurement of broadband ultrasound attenuation (BUA) has been shown to be sensitive to osteoporotic changes, the exact dependence on material and structural parameters has not been elucidated. A 3-D computer simulation of ultrasound propagation through cancellous bone has been developed, based upon simple reflective behavior at the multitude of trabecular/marrow interfaces. A cancellous bone framework is initially described by an array of bone and marrow elements. An ultrasound pulse is launched along each row of the model with partial reflection occurring at each bone/marrow interface. If a reverse direction wave hits an interface, a further forward (echo) wave is created, with phase inversion implemented if appropriate. This process is monitored for each wave within each row. The effective received signal is created by summing the time domain data, thus simulating detection by a phase-sensitive ultrasound transducer, as incorporated in clinical systems. The simulation has been validated on a hexagonal honeycomb design of variable mesh size, first against a commercial computer simulation solution (Wave 2000 Pro), and second, via experimental measurement of physical replicas produced by stereolithography.
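
    A simplified sketch in the spirit of the model (direct transmission only, without the multiple-echo bookkeeping and phase inversion of the full simulation): each row of bone/marrow elements contributes a delayed, attenuated arrival, and the rows are summed coherently to mimic a phase-sensitive receiver; the material properties, element size and porosity below are assumptions.

        # Hedged sketch (not the validated simulation): first-arrival transmission only.
        import numpy as np

        RHO_V = {"bone": (1800.0, 3500.0), "marrow": (950.0, 1450.0)}   # density kg/m3, speed m/s
        DX, FS = 0.2e-3, 50e6                                           # element size, sample rate

        def row_response(row):
            """Transmitted amplitude and arrival time of the direct wave along one row."""
            amp, t = 1.0, 0.0
            for a, b in zip(row[:-1], row[1:]):
                z1 = RHO_V[a][0] * RHO_V[a][1]
                z2 = RHO_V[b][0] * RHO_V[b][1]
                amp *= 2.0 * z2 / (z1 + z2)          # pressure transmission coefficient
                t += DX / RHO_V[a][1]
            return amp, t + DX / RHO_V[row[-1]][1]

        rng = np.random.default_rng(5)
        n_rows, n_elem, porosity = 64, 100, 0.8
        signal = np.zeros(4096)
        for _ in range(n_rows):
            row = ["marrow" if rng.random() < porosity else "bone" for _ in range(n_elem)]
            amp, t = row_response(row)
            signal[int(round(t * FS))] += amp        # phase-sensitive (coherent) summation

        print("peak of the summed received signal:", signal.max())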

  20. First-principles simulation of light propagation and exciton dynamics in metal cluster nanostructures

    NASA Astrophysics Data System (ADS)

    Lisinetskaya, Polina G.; Röhr, Merle I. S.; Mitrić, Roland

    2016-06-01

    We present a theoretical approach for the simulation of the electric field and exciton propagation in ordered arrays constructed of molecular-sized noble metal clusters bound to organic polymer templates. In order to describe the electronic coupling between individual constituents of the nanostructure we use the ab initio parameterized transition charge method which is more accurate than the usual dipole-dipole coupling. The electronic population dynamics in the nanostructure under an external laser pulse excitation is simulated by numerical integration of the time-dependent Schrödinger equation employing the fully coupled Hamiltonian. The solution of the TDSE gives rise to time-dependent partial point charges for each subunit of the nanostructure, and the spatio-temporal electric field distribution is evaluated by means of classical electrodynamics methods. The time-dependent partial charges are determined based on the stationary partial and transition charges obtained in the framework of the TDDFT. In order to treat large plasmonic nanostructures constructed of many constituents, the approximate self-consistent iterative approach presented in (Lisinetskaya and Mitrić in Phys Rev B 89:035433, 2014) is modified to include the transition-charge-based interaction. The developed methods are used to study the optical response and exciton dynamics of Ag3+ and porphyrin-Ag4 dimers. Subsequently, the spatio-temporal electric field distribution in a ring constructed of ten porphyrin-Ag4 subunits under the action of circularly polarized laser pulse is simulated. The presented methodology provides a theoretical basis for the investigation of coupled light-exciton propagation in nanoarchitectures built from molecular size metal nanoclusters in which quantum confinement effects are important.

  1. Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity.

    PubMed

    Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J; Koole, Sander L

    2016-03-23

    The error-related negativity (ERN or Ne) is a negative event-related brain potential that peaks about 20-100ms after people perform an incorrect response in choice reaction time tasks. Prior research has shown that the ERN may be enhanced by situational and dispositional factors that promote intrinsic motivation. Building on and extending this work the authors hypothesized that simulated interpersonal touch may increase task engagement and thereby increase ERN amplitude. To test this notion, 20 participants performed a Go/No-Go task while holding a teddy bear or a same-sized cardboard box. As expected, the ERN was significantly larger when participants held a teddy bear rather than a cardboard box. This effect was most pronounced for people high (rather than low) in trait intrinsic motivation, who may depend more on intrinsically motivating task cues to maintain task engagement. These findings highlight the potential benefits of simulated interpersonal touch in stimulating attention to errors, especially among people who are intrinsically motivated. PMID:26876476

  2. DTI Quality Control Assessment via Error Estimation From Monte Carlo Simulations

    PubMed Central

    Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.

    2013-01-01

    Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing microscopic tissue structure in the white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC. PMID:23833547

  3. Spectral-element simulations of wave propagation in complex exploration-industry models: Mesh generation and forward simulations

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Luo, Y.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic-wave propagation in exploration-industry settings has seen major research and development efforts for decades, yet large-scale applications have often been limited to 2D or 3D finite-difference, (visco-)acoustic wave propagation due to computational limitations. We explore the possibility of including all relevant physical signatures in the wavefield using the spectral-element method (SPECFEM3D, SPECFEM2D), thereby accounting for acoustic, (visco-)elastic, poroelastic, anisotropic wave propagation in meshes which honor all crucial discontinuities. Mesh design is the crux of the problem, and we use CUBIT (Sandia Laboratories) to generate unstructured quadrilateral 2D and hexahedral 3D meshes for these complex background models. While general hexahedral mesh generation is an unresolved problem, we are able to accommodate most of the relevant settings (e.g., layer-cake models, salt bodies, overthrusting faults, and strong topography) with respectively tailored workflows. 2D simulations show localized, characteristic wave effects due to these features that shall be helpful in designing survey acquisition geometries in a relatively economic fashion. We address some of the fundamental issues this comprehensive modeling approach faces regarding its feasibility: Assessing geological structures in terms of the necessity to honor the major structural units, appropriate velocity model interpolation, quality control of the resultant mesh, and computational cost for realistic settings up to frequencies of 40 Hz. The solution to this forward problem forms the basis for subsequent 2D and 3D adjoint tomography within this context, which is the subject of a companion paper.

  4. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation

    PubMed Central

    Salomons, Erik M.; Lohman, Walter J. A.; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing. PMID:26789631
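
    A minimal D2Q9 lattice-Boltzmann BGK sketch (not the paper's code or benchmark geometry): a small Gaussian density pulse spreads as a sound wave on a periodic grid, with the relaxation time tau setting the kinematic viscosity and hence the artificial dissipation discussed above; the grid size, tau and pulse amplitude are assumptions.

        # Hedged sketch: acoustic pulse in a D2Q9 BGK lattice-Boltzmann simulation.
        import numpy as np

        NX, NY, TAU, STEPS = 200, 200, 0.6, 300
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
            usq = ux**2 + uy**2
            return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        x, y = np.meshgrid(np.arange(NX), np.arange(NY), indexing="ij")
        rho = 1.0 + 0.001*np.exp(-((x - NX/2)**2 + (y - NY/2)**2)/25.0)   # acoustic pulse
        f = equilibrium(rho, np.zeros((NX, NY)), np.zeros((NX, NY)))

        for _ in range(STEPS):
            rho = f.sum(axis=0)
            ux = (f*c[:, 0, None, None]).sum(axis=0)/rho
            uy = (f*c[:, 1, None, None]).sum(axis=0)/rho
            f += -(f - equilibrium(rho, ux, uy))/TAU          # BGK collision
            for i in range(9):                                # streaming (periodic boundaries)
                f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

        print("residual pulse amplitude after propagation:", (f.sum(axis=0) - 1.0).max())

    Reducing the dissipation as proposed in the article corresponds here to lowering TAU toward 0.5 (smaller viscosity) or refining the lattice (larger NX, NY).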

  5. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    PubMed

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing. PMID:26789631

  6. Numerical simulation of the nonlinear ultrasonic pressure wave propagation in a cavitating bubbly liquid inside a sonochemical reactor.

    PubMed

    Dogan, Hakan; Popov, Viktor

    2016-05-01

    We investigate the acoustic wave propagation in bubbly liquid inside a pilot sonochemical reactor which aims to produce antibacterial medical textile fabrics by coating the textile with ZnO or CuO nanoparticles. Computational models on acoustic propagation are developed in order to aid the design procedures. The acoustic pressure wave propagation in the sonoreactor is simulated by solving the Helmholtz equation using a meshless numerical method. The paper implements both the state-of-the-art linear model and a nonlinear wave propagation model recently introduced by Louisnard (2012), and presents a novel iterative solution procedure for the nonlinear propagation model which can be implemented using any numerical method and/or programming tool. Comparative results regarding both the linear and the nonlinear wave propagation are shown. Effects of bubble size distribution and bubble volume fraction on the acoustic wave propagation are discussed in detail. The simulations demonstrate that the nonlinear model successfully captures the realistic spatial distribution of the cavitation zones and the associated acoustic pressure amplitudes. PMID:26611813

  7. A spectral comparison of two methods of removing errors in Gauss's law in a 2-dimensional PIC plasma simulation

    SciTech Connect

    Mardahl, P.; Verboncoeur, J.; Birdsall, C.K.

    1995-12-31

    Non-charge-conserving current collection algorithms for relativistic PIC plasma simulations can cause errors in Gauss's law. These errors arise from violations of the continuity equation. Two techniques for removing these errors are examined and compared: the Marder correction, a method which corrects the electric fields locally and primarily affects short wavelengths, and a divergence correction, which uses a Poisson solve to correct the electric fields so that Gauss's law is enforced. The effect of each method on the spectrum of the error (short wavelengths vs. long) is examined. The computational efficiency and accuracy of the two techniques are compared.
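
    A hedged sketch of the Poisson-solve style of correction on a periodic grid (a spectral projection, not the paper's field solver): solve for a potential whose Laplacian equals the Gauss's-law residual and subtract its gradient from the electric field.

        # Hedged sketch: spectral divergence cleaning so that div(E) = rho/eps0.
        import numpy as np

        def clean_divergence(Ex, Ey, rho, dx, eps0=1.0):
            """Project E onto fields satisfying Gauss's law on a periodic 2-D grid."""
            nx, ny = rho.shape
            kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)[:, None]   # i*k_x
            ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)[None, :]   # i*k_y
            lap = kx**2 + ky**2                                   # Fourier symbol of the Laplacian
            lap[0, 0] = 1.0                                       # avoid 0/0 for the mean mode
            div_err_hat = kx*np.fft.fft2(Ex) + ky*np.fft.fft2(Ey) - np.fft.fft2(rho)/eps0
            phi_hat = div_err_hat / lap                           # solve lap(phi) = div(E) - rho/eps0
            phi_hat[0, 0] = 0.0
            Ex = Ex - np.real(np.fft.ifft2(kx * phi_hat))         # E <- E - grad(phi)
            Ey = Ey - np.real(np.fft.ifft2(ky * phi_hat))
            return Ex, Ey

        # Quick check on a random field that violates Gauss's law
        rng = np.random.default_rng(6)
        rho = np.zeros((64, 64))
        Ex, Ey = rng.standard_normal((2, 64, 64))
        Ex2, Ey2 = clean_divergence(Ex, Ey, rho, dx=1.0)
        div = np.real(np.fft.ifft2(2j*np.pi*np.fft.fftfreq(64)[:, None]*np.fft.fft2(Ex2)
                                   + 2j*np.pi*np.fft.fftfreq(64)[None, :]*np.fft.fft2(Ey2)))
        print("max |div E - rho/eps0| after cleaning:", np.abs(div - rho).max())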

  8. FDTD simulation of LEMP propagation over lossy ground: Influence of distance, ground conductivity, and source parameters

    NASA Astrophysics Data System (ADS)

    Aoki, Masanori; Baba, Yoshihiro; Rakov, Vladimir A.

    2015-08-01

    We have computed lightning electromagnetic pulses (LEMPs), including the azimuthal magnetic field Hφ, vertical electric field Ez, and horizontal (radial) electric field Eh that propagated over 5 to 200 km of flat lossy ground, using the finite difference time domain (FDTD) method in the 2-D cylindrical coordinate system. This is the first systematic full-wave study of LEMP propagation effects based on a realistic return-stroke model and including the complete return-stroke frequency range. Influences of the return-stroke wavefront speed (ranging from c/2 to c, where c is the speed of light), current risetime (ranging from 0.5 to 5 µs), and ground conductivity (ranging from 0.1 mS/m to ∞) on Hφ, Ez, and Eh have been investigated. Also, the FDTD-computed waveforms of Eh have been compared with the corresponding ones computed using the Cooray-Rubinstein formula. Peaks of Hφ, Ez, and Eh are nearly proportional to the return-stroke wavefront speed. The peak of Eh decreases with increasing current risetime, while those of Hφ and Ez are only slightly influenced by it. The peaks of Hφ and Ez are essentially independent of the ground conductivity at a distance of 5 km. Beyond this distance, they appreciably decrease relative to the perfectly conducting ground case, and the decrease is stronger for lower ground conductivity values. The peak of Eh increases with decreasing ground conductivity. The computed Eh/Ez is consistent with measurements of Thomson et al. (1988). The observed decrease of Ez peak and increase of Ez risetime due to propagation over 200 km of Florida soil are reasonably well reproduced by the FDTD simulation with ground conductivity of 1 mS/m.
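
    The paper's 2-D cylindrical FDTD model is not reproduced here; the following 1-D Yee-style sketch illustrates the same update principle for a pulse entering a lossy half-space, with assumed ground conductivity and permittivity.

        # Hedged sketch: 1-D FDTD with a conductive, dielectric half-space.
        import numpy as np

        EPS0, MU0 = 8.854e-12, 4e-7 * np.pi
        NZ, DZ = 2000, 1.0                     # 2 km of 1 m cells: cells 0-999 air, 1000-1999 ground
        DT = DZ / (2.0 * 3e8)                  # time step at half the Courant limit

        sigma = np.zeros(NZ); sigma[NZ // 2:] = 1e-3          # ground conductivity 1 mS/m (assumed)
        eps = np.full(NZ, EPS0); eps[NZ // 2:] *= 10.0        # ground relative permittivity 10 (assumed)

        ca = (1.0 - sigma * DT / (2.0 * eps)) / (1.0 + sigma * DT / (2.0 * eps))
        cb = (DT / (eps * DZ)) / (1.0 + sigma * DT / (2.0 * eps))

        Ex, Hy = np.zeros(NZ), np.zeros(NZ - 1)
        peak_below = 0.0
        for n in range(4000):
            Hy += DT / (MU0 * DZ) * (Ex[1:] - Ex[:-1])                       # update H
            Ex[1:-1] = ca[1:-1] * Ex[1:-1] + cb[1:-1] * (Hy[1:] - Hy[:-1])   # update E with loss
            Ex[50] += np.exp(-(((n * DT) - 1e-6) / 2e-7) ** 2)               # Gaussian source in air
            peak_below = max(peak_below, abs(Ex[NZ // 2 + 50]))              # field 50 m into ground

        print("peak field 50 m below the air/ground interface:", peak_below)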

  9. Using 3D Simulation of Elastic Wave Propagation in Laplace Domain for Electromagnetic-Seismic Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Petrov, P.; Newman, G. A.

    2010-12-01

    In the Laplace-Fourier domain we have developed a 3D code for full-wavefield simulation in elastic media which takes into account the nonlinearity introduced by free-surface effects. Our approach is based on the velocity-stress formulation. In contrast to the conventional formulation, we define the material properties such as density and the Lame constants not at nodal points but within cells. This second-order finite-difference method, formulated on a cell-based grid, generates numerical solutions compatible with analytical ones within the error range determined by dispersion analysis. Our simulator will be embedded in an inversion scheme for joint seismic-electromagnetic imaging. It also offers possibilities for preconditioning seismic wave propagation problems in the frequency domain. References: Shin, C. & Cha, Y. (2009), Waveform inversion in the Laplace-Fourier domain, Geophys. J. Int. 177(3), 1067-1079. Shin, C. & Cha, Y. H. (2008), Waveform inversion in the Laplace domain, Geophys. J. Int. 173(3), 922-931. Commer, M. & Newman, G. (2008), New advances in three-dimensional controlled-source electromagnetic inversion, Geophys. J. Int. 172(2), 513-535. Newman, G. A., Commer, M. & Carazzone, J. J. (2010), Imaging CSEM data in the presence of electrical anisotropy, Geophysics, in press.

  10. Propagation characteristics of atmospheric-pressure He+O2 plasmas inside a simulated endoscope channel

    NASA Astrophysics Data System (ADS)

    Wang, S.; Chen, Z. Y.; Wang, X. H.; Li, D.; Yang, A. J.; Liu, D. X.; Rong, M. Z.; Chen, H. L.; Kong, M. G.

    2015-11-01

    Cold atmospheric-pressure plasmas have the potential to be used for endoscope sterilization. In this study, a long quartz tube was used as the simulated endoscope channel, and an array of electrodes was wrapped one by one along the tube. Plasmas were generated in the inner channel of the tube, and their propagation characteristics in He+O2 feedstock gases were studied as a function of the oxygen concentration. It is found that each of the plasmas originates at the edge of an instantaneous cathode, and then it propagates bidirectionally. Interestingly, a plasma head with bright spots is formed in the hollow instantaneous cathode and moves towards its center part, and a plasma tail expands through the electrode gap and then forms a swallow tail in the instantaneous anode. The plasmas are in good axisymmetry when [O2] ≤ 0.3%, but not for [O2] ≥ 1%, and even behave in a stochastic manner when [O2] = 3%. The antibacterial agents are charged species and reactive oxygen species, so their wall fluxes represent the "plasma dosage" for the sterilization. Such fluxes mainly act on the inner wall in the hollow electrode rather than that in the electrode gap, and they reach maximum efficiency when the oxygen concentration is around 0.3%. It is estimated that one can reduce the electrode gap and enlarge the electrode width to achieve a more homogeneous and efficient antibacterial effect, which would benefit sterilization applications.

  11. RF propagation simulator to predict location accuracy of GSM mobile phones for emergency applications

    NASA Astrophysics Data System (ADS)

    Green, Marilynn P.; Wang, S. S. Peter

    2002-11-01

    Mobile location is one of the fastest growing areas for the development of new technologies, services and applications. This paper describes the channel models that were developed as a basis of discussion to assist the Technical Subcommittee T1P1.5 in its consideration of various mobile location technologies for emergency applications (1997 - 1998) for presentation to the U.S. Federal Communication Commission (FCC). It also presents the PCS 1900 extension to this model, which is based on the COST-231 extended Hata model and review of the original Okumura graphical interpretation of signal propagation characteristics in different environments. Based on a wide array of published (and non-publicly disclosed) empirical data, the signal propagation models described in this paper were all obtained by consensus of a group of inter-company participants in order to facilitate the direct comparison between simulations of different handset-based and network-based location methods prior to their standardization for emergency E-911 applications by the FCC. Since that time, this model has become a de-facto standard for assessing the positioning accuracy of different location technologies using GSM mobile terminals. In this paper, the radio environment is described to the level of detail that is necessary to replicate it in a software environment.
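
    One ingredient of that model, the COST-231 extended Hata median path-loss formula for the PCS 1900 band, can be written down directly; the correction term below is the small/medium-city form, and the example numbers are illustrative, not values from the T1P1.5 work.

        # Hedged sketch: COST-231 extended Hata median path loss (urban, 1500-2000 MHz).
        import math

        def cost231_hata_loss_db(f_mhz, d_km, h_base_m, h_mobile_m, metropolitan=False):
            """Median path loss in dB; roughly valid for d = 1-20 km, base height 30-200 m."""
            a_hm = (1.1*math.log10(f_mhz) - 0.7)*h_mobile_m - (1.56*math.log10(f_mhz) - 0.8)
            c = 3.0 if metropolitan else 0.0
            return (46.3 + 33.9*math.log10(f_mhz) - 13.82*math.log10(h_base_m) - a_hm
                    + (44.9 - 6.55*math.log10(h_base_m))*math.log10(d_km) + c)

        print(f"{cost231_hata_loss_db(1900.0, 2.0, 30.0, 1.5):.1f} dB at 2 km on 1900 MHz")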

  12. Monte Carlo simulations of converging laser beam propagating in turbid media with parallel computing

    NASA Astrophysics Data System (ADS)

    Wu, Di; Lu, Jun Q.; Hu, Xin H.; Zhao, S. S.

    1999-11-01

    Due to its flexibility and simplicity, the Monte Carlo method is often used to study light propagation in turbid media, where the photons are treated like classical particles being scattered and absorbed randomly according to radiative transfer theory. However, due to the need for a large number of photons to produce statistically significant results, this type of calculation requires large computing resources. To overcome this difficulty, we implemented a parallel computing technique in our Monte Carlo simulations. The algorithm is based on the fact that the classical particles are uncorrelated, so the trajectories of multiple photons can be tracked simultaneously. When a beam of focused light is incident on the medium, the incident photons are divided into groups according to the available processes on a parallel machine and the calculations are carried out in parallel. Utilizing PVM (Parallel Virtual Machine, a parallel computing software package), parallel programs in both C and FORTRAN were developed on the massively parallel computer Cray T3E at the North Carolina Supercomputer Center and on a local PC-cluster network running UNIX/Sun Solaris. The parallel performance of our codes has been excellent on both the Cray T3E and the PC clusters. In this paper, we present results on a focused laser beam propagating through a highly scattering and diluted solution of intralipid. The dependence of the spatial distribution of light near the focal point on the concentration of the intralipid solution is studied and its significance is discussed.
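
    A hedged sketch of the same parallelization idea with PVM on a Cray T3E replaced by Python multiprocessing: because the photons are independent, the workload is split into groups, each worker traces its group with its own random stream, and the partial tallies are summed; the slab, interaction coefficient and crude isotropic scattering are assumptions.

        # Hedged sketch: embarrassingly parallel photon tracing with multiprocessing.
        import numpy as np
        from multiprocessing import Pool

        MU_T, SLAB = 1.0, 5.0        # total interaction coefficient (1/cm), slab depth (cm); assumed

        def trace_photons(args):
            """Trace one group of independent photons and return its transmission count."""
            n_photons, seed = args
            rng = np.random.default_rng(seed)          # independent random stream per worker
            transmitted = 0
            for _ in range(n_photons):
                z, mu_z = 0.0, 1.0
                while 0.0 <= z < SLAB:
                    z += mu_z * (-np.log(1.0 - rng.random()) / MU_T)
                    mu_z = 2.0 * rng.random() - 1.0    # crude isotropic re-scattering
                transmitted += z >= SLAB
            return transmitted

        if __name__ == "__main__":
            n_workers, n_total = 4, 100_000
            jobs = [(n_total // n_workers, seed) for seed in range(n_workers)]
            with Pool(n_workers) as pool:
                total = sum(pool.map(trace_photons, jobs))   # gather partial tallies
            print("estimated transmittance:", total / n_total)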

  13. Stochastic simulation for the propagation of high-frequency acoustic waves through a random velocity field

    NASA Astrophysics Data System (ADS)

    Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.

    2012-05-01

    In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environmental conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of target location by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. Using this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of the travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities that are similar to those provided by a deterministic method, such as the ray method.

  14. A RANS simulation toward the effect of turbulence and cavitation on spray propagation and combustion characteristics

    NASA Astrophysics Data System (ADS)

    Taghavifar, Hadi; Khalilarya, Shahram; Jafarmadar, Samad; Taghavifar, Hamid

    2016-08-01

    A multidimensional computational fluid dynamics code was developed and integrated with a probability density function combustion model to give a detailed account of the multiphase fluid flow. The vapor phase within the injector domain is treated with a Reynolds-averaged Navier-Stokes technique. A new parameter is proposed which is an index of plane-cut spray propagation and takes into account the two parameters of spray penetration length and cone angle at the same time. It was found that the spray propagation factor (SPI) tends to increase at lower r/d ratios, although the spray penetration tends to decrease. The SPI results obtained from the empirical correlation of Hay and Jones were compared with the simulation results as a function of the respective r/d ratio. Based on the results of this study, the spray distribution over the plane area correlates proportionally with the heat release, the NOx emission mass fraction, and the reduction in soot concentration. Higher cavitation is attributed to the sharp edge of the nozzle entrance, yielding better liquid jet disintegration and smaller spray droplets, which reduces the soot mass fraction of the late combustion process. In order to gain better insight into the cavitation phenomenon, the turbulence magnitude in the nozzle and combustion chamber was acquired and depicted along with the spray velocity.

  15. A RANS simulation toward the effect of turbulence and cavitation on spray propagation and combustion characteristics

    NASA Astrophysics Data System (ADS)

    Taghavifar, Hadi; Khalilarya, Shahram; Jafarmadar, Samad; Taghavifar, Hamid

    2016-03-01

    A multidimensional computational fluid dynamics code was developed and integrated with a probability density function combustion model to give a detailed account of the multiphase fluid flow. The vapor phase within the injector domain is treated with a Reynolds-averaged Navier-Stokes technique. A new parameter is proposed which is an index of plane-cut spray propagation and takes into account the two parameters of spray penetration length and cone angle at the same time. It was found that the spray propagation factor (SPI) tends to increase at lower r/d ratios, although the spray penetration tends to decrease. The SPI results obtained from the empirical correlation of Hay and Jones were compared with the simulation results as a function of the respective r/d ratio. Based on the results of this study, the spray distribution over the plane area correlates proportionally with the heat release, the NOx emission mass fraction, and the reduction in soot concentration. Higher cavitation is attributed to the sharp edge of the nozzle entrance, yielding better liquid jet disintegration and smaller spray droplets, which reduces the soot mass fraction of the late combustion process. In order to gain better insight into the cavitation phenomenon, the turbulence magnitude in the nozzle and combustion chamber was acquired and depicted along with the spray velocity.

  16. Stochastic simulation for the propagation of high-frequency acoustic waves through a random velocity field

    SciTech Connect

    Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.

    2012-05-17

    In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environmental conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of target location by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. Using this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of the travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities that are similar to those provided by a deterministic method, such as the ray method.

  17. Open Boundary Particle-in-Cell Simulation of Dipolarization Front Propagation

    NASA Technical Reports Server (NTRS)

    Klimas, Alex; Hwang, Kyoung-Joo; Vinas, Adolfo F.; Goldstein, Melvyn L.

    2014-01-01

    First results are presented from an ongoing open boundary 2-1/2D particle-in-cell simulation study of dipolarization front (DF) propagation in Earth's magnetotail. At this stage, this study is focused on the compression, or pileup, region preceding the DF current sheet. We find that the earthward acceleration of the plasma in this region is in general agreement with a recent DF force balance model. A gyrophase bunched reflected ion population at the leading edge of the pileup region is reflected by a normal electric field in the pileup region itself, rather than through an interaction with the current sheet. We discuss plasma wave activity at the leading edge of the pileup region that may be driven by gradients, or by reflected ions, or both; the mode has not been identified. The waves oscillate near but above the ion cyclotron frequency with wavelength several ion inertial lengths. We show that the waves oscillate primarily in the perpendicular magnetic field components, do not propagate along the background magnetic field, are right handed elliptically (close to circularly) polarized, exist in a region of high electron and ion beta, and are stationary in the plasma frame moving earthward. We discuss the possibility that the waves are present in plasma sheet data, but have not, thus far, been discovered.

  18. A Study on Propagation of Monopole Ultrasonic Pulse by Simulation and Experiment

    NASA Astrophysics Data System (ADS)

    Sato, Takeki; Inoue, Hiroshi; Murata, Kenji

    2006-05-01

    The shock wave or intense impulsive acoustic wave generated by an explosion, whether in air or in water, can produce unexpected lesions on the human body. Short monopole or impulsive-like ultrasonic pulses have a fast rise and are similar to a shock wave generated by an explosion. Investigation of the propagation of the monopole ultrasonic pulse in a lossy medium is therefore basic research for clarifying where the problems lie. In this study, we investigate the sound field of the monopole ultrasonic pulse in degassed water and glycerine by simulation and experiment, as well as its mechanism and effect in a lossy medium. The results show that the waveform changed from a monopole to a dipole owing to diffraction loss as the pulse propagated in the medium, that the amplitude of the received pulse decreased considerably in glycerine because of the large absorption, and that the rise of the amplitude became more gradual owing to the reduction of high-frequency components. A short monopole ultrasonic pulse will approach a shock wave if the wave propagates as a plane wave, because the pulse remains impulsive. As a monopole ultrasonic pulse radiated from a small source is transmitted, a negative pressure grows, and its action on the medium per unit time weakens owing to the large absorption of the transmission medium.

  19. Simulation of wave propagation in boreholes and radial profiling of formation elastic parameters

    NASA Astrophysics Data System (ADS)

    Chi, Shihong

    Modern acoustic logging tools measure in-situ elastic wave velocities of rock formations. These velocities provide ground truth for time-depth conversions in seismic exploration. They are also widely used to quantify the mechanical strength of formations for applications such as wellbore stability analysis and sand production prevention. Despite continued improvements in acoustic logging technology and interpretation methods that take advantage of full waveform data, acoustic logs processed with current industry standard methods often remain influenced by formation damage and mud-filtrate invasion. This dissertation develops an efficient and accurate algorithm for the numerical simulation of wave propagation in fluid-filled boreholes in the presence of complex, near-wellbore damaged zones. The algorithm is based on the generalized reflection and transmission matrices method. Assessment of mud-filtrate invasion effects on borehole acoustic measurements is performed through simulation of time-lapse logging in the presence of complex radial invasion zones. The validity of log corrections performed with the Biot-Gassmann fluid substitution model is assessed by comparing the velocities estimated from array waveform data simulated for homogeneous and radially heterogeneous formations that sustain mud-filtrate invasion. The proposed inversion algorithm uses array waveform data to estimate radial profiles of formation elastic parameters. These elastic parameters can be used to construct more realistic near-wellbore petrophysical models for applications in seismic exploration, geo-mechanics, and production. Frequency-domain, normalized amplitude and phase information contained in array waveform data are input to the nonlinear Gauss-Newton inversion algorithm. Validation of both numerical simulation and inversion is performed against previously published results based on the Thomson-Haskell method and travel time tomography, respectively. This exercise indicates that the
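
    A hedged sketch of the damped Gauss-Newton iteration underlying this kind of inversion, applied to a toy two-parameter 'waveform' rather than the dissertation's borehole forward model; the model, data and starting point are illustrative.

        # Hedged sketch: damped Gauss-Newton fit of model parameters to observed data.
        import numpy as np

        def gauss_newton(forward, jacobian, d_obs, m0, n_iter=15, damping=1e-4):
            """Iteratively solve (J^T J + damping I) dm = J^T r for the model update dm."""
            m = np.array(m0, dtype=float)
            for _ in range(n_iter):
                r = d_obs - forward(m)                     # data residual
                J = jacobian(m)
                dm = np.linalg.solve(J.T @ J + damping * np.eye(m.size), J.T @ r)
                m += dm
            return m

        # Toy "waveform": damped oscillation with unknown decay rate and wavenumber
        x = np.linspace(0.0, 3.0, 60)
        forward = lambda m: np.exp(-m[0] * x) * np.cos(m[1] * x)

        def jacobian(m, eps=1e-6):
            """Finite-difference Jacobian; adequate for a sketch."""
            f0 = forward(m)
            return np.column_stack([(forward(m + eps * np.eye(2)[i]) - f0) / eps for i in range(2)])

        d_obs = forward(np.array([0.4, 2.0]))              # synthetic noise-free data
        print("recovered parameters:", gauss_newton(forward, jacobian, d_obs, [0.35, 1.9]))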

  20. Cyclic fatigue-crack propagation in sapphire in air and simulated physiological environments.

    PubMed

    Asoo, B; McNaney, J M; Mitamura, Y; Ritchie, R O

    2000-12-01

    Single-crystal aluminas are being considered for use in the manufacture of prosthetic heart valves. To characterize such materials for biomedical application, subcritical crack growth by stress corrosion (static fatigue) and by cyclic fatigue has been examined in sapphire along (1100) planes in 24 degrees C humid air and 37 degrees C Ringer's solution (the latter as a simulated physiological environment). The relationships between crack-propagation rates and the linear-elastic stress intensity have been determined for the first time in sapphire for both modes of subcritical cracking. It was found that growth rates were significantly faster at a given stress intensity in the Ringer's solution compared to the humid air environment. Mechanistically, a true cyclic fatigue effect was not found in sapphire as experimentally measured cyclic fatigue-crack growth rates could be closely predicted simply by integrating the static fatigue-crack growth data over the cyclic loading cycle. PMID:11007616
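
    A hedged sketch of the prediction strategy described in the last sentence: integrate an assumed power-law static (stress-corrosion) crack velocity v = A*K^n over one sinusoidal load cycle to obtain a growth increment per cycle; A, n and the loading parameters below are illustrative, not the measured sapphire data.

        # Hedged sketch: cyclic growth rate predicted from a static crack-velocity law.
        import numpy as np

        A, N_EXP = 1.0e-12, 20.0            # assumed static law v = A*K^n, with v in m/s
        K_MAX, R, FREQ = 2.0, 0.1, 10.0     # MPa*sqrt(m), load ratio K_min/K_max, frequency Hz

        t = np.linspace(0.0, 1.0 / FREQ, 2001)                       # one load cycle
        K = 0.5 * K_MAX * ((1.0 + R) + (1.0 - R) * np.sin(2.0 * np.pi * FREQ * t))
        v = A * K ** N_EXP                                           # instantaneous static growth rate
        da_per_cycle = np.sum(0.5 * (v[:-1] + v[1:]) * np.diff(t))   # trapezoidal integration
        print(f"predicted cyclic growth rate: {da_per_cycle:.2e} m/cycle "
              f"(vs. {A * K_MAX**N_EXP / FREQ:.2e} m/cycle if K were held at K_max)")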

  1. Hybrid electrodynamics and kinetics simulation for electromagnetic wave propagation in weakly ionized hydrogen plasmas

    NASA Astrophysics Data System (ADS)

    Chen, Qiang; Chen, Bin

    2012-10-01

    In this paper, a hybrid electrodynamics and kinetics numerical model based on the finite-difference time-domain method and lattice Boltzmann method is presented for electromagnetic wave propagation in weakly ionized hydrogen plasmas. In this framework, the multicomponent Bhatnagar-Gross-Krook collision model considering both elastic and Coulomb collisions and the multicomponent force model based on the Guo model are introduced, which supply a hyperfine description on the interaction between electromagnetic wave and weakly ionized plasma. Cubic spline interpolation and mean filtering technique are separately introduced to solve the multiscalar problem and enhance the physical quantities, which are polluted by numerical noise. Several simulations have been implemented to validate our model. The numerical results are consistent with a simplified analytical model, which demonstrates that this model can obtain satisfying numerical solutions successfully.

  2. A three-phase soil model for simulating stress wave propagation due to blast loading

    NASA Astrophysics Data System (ADS)

    Wang, Zhongqi; Hao, Hong; Lu, Yong

    2004-01-01

    A three-phase soil model is proposed to simulate stress wave propagation in a soil mass subjected to blast loading. The soil is modelled as a three-phase mass that includes the solid particles, water and air. It is treated as a structure in which the solid particles form a skeleton whose voids are filled with water and air. The equation of state (EOS) of the soil is derived. Elastic-plastic theory is adopted to model the constitutive relation of the soil skeleton. Damage of the soil skeleton is also modelled. The Drucker-Prager strength model, including the strain rate effect, is used to describe the strength of the soil skeleton. The model is implemented in the hydrocode Autodyn. Results recorded in explosion tests in soil are used to validate the proposed model.

  3. Steepening of parallel propagating hydromagnetic waves into magnetic pulsations - A simulation study

    NASA Technical Reports Server (NTRS)

    Akimoto, K.; Winske, D.; Onsager, T. G.; Thomsen, M. F.; Gary, S. P.

    1991-01-01

    The steepening mechanism of parallel propagating low-frequency MHD-like waves observed upstream of the earth's quasi-parallel bow shock has been investigated by means of electromagnetic hybrid simulations. It is shown that an ion beam excites large-amplitude waves through the resonant electromagnetic ion/ion instability; these waves then pitch-angle scatter, decelerate, and eventually magnetically trap beam ions in regions where the wave amplitudes are largest. As a result, the beam ions become bunched in both space and gyrophase. As these higher-density, nongyrotropic beam segments are formed, the hydromagnetic waves rapidly steepen, resulting in magnetic pulsations, with properties generally in agreement with observations. This steepening process operates on the scale of the linear growth time of the resonant ion/ion instability. Many of the pulsations generated by this mechanism are left-hand polarized in the spacecraft frame.

  4. Immobilization of simulated radioactive soil waste containing cerium by self-propagating high-temperature synthesis

    NASA Astrophysics Data System (ADS)

    Mao, Xianhe; Qin, Zhigui; Yuan, Xiaoning; Wang, Chunming; Cai, Xinan; Zhao, Weixia; Zhao, Kang; Yang, Ping; Fan, Xiaoling

    2013-11-01

    A simulated radioactive soil waste containing cerium as a surrogate element has been immobilized by a thermite self-propagating high-temperature synthesis (SHS) process. The compositions, structures, and element leaching rates of products with different cerium contents have been characterized. To investigate the influence of iron on the chemical stability of the immobilized products, leaching tests of samples with different iron contents in different leaching solutions were carried out. The results showed that the surrogate element cerium mainly forms the crystalline phases CeAl11O18 and Ce2SiO5. The leaching rate of cerium over a period of 28 days was 10^-5 to 10^-6 g/(m^2·day). Iron in the reactants, the reaction products, and the environment has no significant effect on the chemical stability of the immobilized SHS products.

  5. Combination of the discontinuous Galerkin method with finite differences for simulation of seismic wave propagation

    NASA Astrophysics Data System (ADS)

    Lisitsa, Vadim; Tcheverda, Vladimir; Botter, Charlotte

    2016-04-01

    We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near-surface part and free-surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intensive in comparison with finite differences. Finite differences are computationally efficient, but in general they require rectangular grids, leading to a stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm in which the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.

  6. Laser beam propagation through bulk nonlinear media: Numerical simulation and experiment

    NASA Astrophysics Data System (ADS)

    Kovsh, Dmitriy I.

    This dissertation describes our efforts in modeling the propagation of high intensity laser pulses through optical systems consisting of one or multiple nonlinear elements. These nonlinear elements can be up to 10^3 times thicker than the depth of focus of the laser beam, so that the beam size changes drastically within the medium. The set of computer codes developed is organized in a software package (NLO_BPM). The ultrafast nonlinearities of the bound-electronic n2 and two-photon absorption as well as time dependent excited-state, free-carrier and thermal nonlinearities are included in the codes for modeling propagation of picosecond to nanosecond pulses and pulse trains. Various cylindrically symmetric spatial distributions of the input beam are modeled. We use the cylindrical symmetry typical of laser outputs to reduce the CPU and memory requirements, making modeling a real-time task on PCs. The hydrodynamic equations describing the rarefaction of the medium due to heating and electrostriction are solved in the transient regime to determine refractive index changes on a nanosecond time scale. This effect can be simplified in some cases by an approximation that assumes an instantaneous expansion. We also find that the index change obtained from the photo-acoustic equation overshoots its steady-state value once the ratio between the pulse width and the acoustic transit time is greater than unity. We numerically study the sensitivity of the closed-aperture Z-scan experiment to nonlinear refraction for various input beam profiles. If the beam has a ring structure with a minimum (or zero) on axis in the far field, the sensitivity of Z-scan measurements can be increased by up to one order of magnitude. The linear propagation module integrated with the nonlinear beam propagation codes allows the simulation of typical experiments such as Z-scan and optical limiting experiments. We have used these codes to model the performance of optical limiters. We study two of the

  7. Large-eddy simulation of the generation and propagation of internal solitary waves

    NASA Astrophysics Data System (ADS)

    Zhu, Hai; Wang, LingLing; Tang, HongWu

    2014-06-01

    A modified large-eddy simulation model, the dynamic coherent eddy model (DCEM), is employed to simulate the generation and propagation of internal solitary waves (ISWs) of both depression and elevation type, with wave amplitudes ranging from small and medium to large scales. The simulation results agree well with the existing experimental data. The generation process of ISWs is successfully captured by the DCEM method. Shear instabilities and diapycnal mixing in the initial wave generation phase are observed. The dissipation rate is not equal at different locations of an ISW. The ISW-induced velocity field is analyzed in the present study. The structure of the bottom boundary layer (BBL) of internal wave packets is found to be different from that of a single ISW. A reverse boundary jet instead of a separation bubble exists behind the leading internal wave, while separation bubbles appear in other parts of the wave-induced velocity field. The boundary jet flow resulting from the adverse pressure gradients has distinctive dynamics compared with free shear jets.

  8. Computational Simulation of Damage Propagation in Three-Dimensional Woven Composites

    NASA Technical Reports Server (NTRS)

    Huang, Dade; Minnetyan, Levon

    2005-01-01

    Three dimensional (3D) woven composites have demonstrated multi-directional properties and improved transverse strength, impact resistance, and shear characteristics. The objective of this research is to develop a new model for predicting the elastic constants, hygrothermal effects, thermomechanical response, and stress limits of 3D woven composites; and to develop a computational tool to facilitate the evaluation of 3D woven composite structures with regard to damage tolerance and durability. Fiber orientations of weave and braid patterns are defined with reference to composite structural coordinates. Orthotropic ply properties and stress limits computed via micromechanics are transformed to composite structural coordinates and integrated to obtain the 3D properties. The various stages of degradation, from damage initiation to collapse of structures, in the 3D woven structures are simulated for the first time. Three dimensional woven composite specimens with various woven patterns under different loading conditions, such as tension, compression, bending, and shear are simulated in the validation process of this research. Damage initiation, growth, accumulation, and propagation to fracture are included in these simulations.

  9. Assessing viscoelasticity of shear wave propagation in cervical tissue by multiscale computational simulation.

    PubMed

    Peralta, L; Rus, G; Bochud, N; Molina, F S

    2015-06-25

    Viscoelastic properties have recently been reported to be particularly sensitive to the gestation process and to be intimately related to the microstructure of cervical tissue. However, this link is not fully understood yet. In this work, we explore the importance of the heterogeneous multi-scale nature of cervical tissue for quantifying both elasticity and viscosity from shear wave velocity. To this end, shear wave propagation is simulated in a microscopic cervical tissue model using the finite difference time domain technique, over a wide frequency range from 15 to 200 kHz. Three standard rheological models (Voigt, Maxwell and Zener) are evaluated regarding their ability to reproduce the simulated dispersion curves, and their plausibility for describing cervical tissue is ranked by a stochastic model-class selection formulation. It is shown that the Maxwell model is the simplest model, i.e. the one with the fewest parameters, that best describes the simulated dispersion curves in cervical tissue. Furthermore, results show that the excitation frequency determines which rheological model can be representative of the tissue. Typically, viscoelastic parameters tend to converge for excitation frequencies above 100 kHz. PMID:25700611
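
    To make the comparison of rheological models concrete, the sketch below evaluates the shear-wave phase-velocity dispersion implied by the complex shear modulus of each model over the 15-200 kHz band; the moduli, viscosity and density are arbitrary placeholder values, and the stochastic model-class selection of the paper is not reproduced.

        import numpy as np

        rho = 1000.0                          # density, kg/m^3 (placeholder)
        mu, eta, mu2 = 3.0e3, 1.0, 10.0e3     # shear moduli (Pa) and viscosity (Pa*s), placeholders
        f = np.linspace(15e3, 200e3, 200)     # frequency band used in the study, Hz
        w = 2.0 * np.pi * f

        G_voigt   = mu + 1j * w * eta                               # Kelvin-Voigt
        G_maxwell = 1j * w * eta * mu / (mu + 1j * w * eta)         # Maxwell
        G_zener   = mu + 1j * w * eta * mu2 / (mu2 + 1j * w * eta)  # Zener (standard linear solid)

        def phase_velocity(G_complex):
            # complex shear wavenumber k = w*sqrt(rho/G*); phase velocity = w / Re(k)
            k = w * np.sqrt(rho / G_complex)
            return w / k.real

        for name, G in [("Voigt", G_voigt), ("Maxwell", G_maxwell), ("Zener", G_zener)]:
            print(name, phase_velocity(G)[[0, -1]])   # velocities at 15 kHz and 200 kHz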

  10. Simulation study on light propagation in an anisotropic turbulence field of entrainment zone.

    PubMed

    Yuan, Renmin; Sun, Jianning; Luo, Tao; Wu, Xuping; Wang, Chen; Fu, Yunfei

    2014-06-01

    The convective atmospheric boundary layer was modeled in the water tank. In the entrainment zone (EZ), which is at the top of the convective boundary layer (CBL), the turbulence is anisotropic. An anisotropy coefficient was introduced in the presented anisotropic turbulence model. A laser beam was set to horizontally go through the EZ modeled in the water tank. The image of two-dimensional (2D) light intensity fluctuation was formed on the receiving plate perpendicular to the light path and was recorded by the CCD. The spatial spectra of both horizontal and vertical light intensity fluctuations were analyzed. Results indicate that the light intensity fluctuation in the EZ exhibits strong anisotropic characteristics. Numerical simulation shows there is a linear relationship between the anisotropy coefficients and the ratio of horizontal to vertical fluctuation spectra peak wavelength. By using the measured temperature fluctuations along the light path at different heights, together with the relationship between temperature and refractive index, the one-dimensional (1D) refractive index fluctuation spectra were derived. The anisotropy coefficients were estimated from the 2D light intensity fluctuation spectra modeled by the water tank. Then the turbulence parameters can be obtained using the 1D refractive index fluctuation spectra and the corresponding anisotropy coefficients. These parameters were used in numerical simulation of light propagation. The results of numerical simulations show this approach can reproduce the anisotropic features of light intensity fluctuations in the EZ modeled by the water tank experiment. PMID:24921536

  11. Full-wave simulations of lower hybrid wave propagation in the EAST tokamak

    NASA Astrophysics Data System (ADS)

    Bonoli, P. T.; Lee, J. P.; Shiraiwa, S.; Wright, J. C.; Ding, B.; Yang, C.

    2015-11-01

    Studies of lower hybrid (LH) wave propagation have been conducted in the EAST tokamak where electron Landau damping (ELD) of the wave is typically weak, resulting in multiple passes of the wave front prior to its being absorbed in the plasma core. Under these conditions it is interesting to investigate full-wave effects that can become important at the plasma cut-off where the wave is reflected at the edge, as well as full-wave effects such as caustic formation in the core. High fidelity LH full-wave simulations were performed for EAST using the TORLH field solver. These simulations used sufficient poloidal mode resolution to resolve the perpendicular wavelengths associated with electron Landau damping of the LH wave at the plasma periphery, thus achieving fully converged electric field solutions at all radii of the plasma. Comparison of these results with ray tracing simulations will also be presented. Work supported by the US DOE under Contract No. DE-SC0010492 and DE-FC02-01ER54648.

  12. Propagation of localized structures in relativistic magnetized electron-positron plasmas using particle-in-cell simulations

    SciTech Connect

    López, Rodrigo A.; Muñoz, Víctor; Viñas, Adolfo F.; Valdivia, Juan A.

    2015-09-15

    We use a particle-in-cell simulation to study the propagation of localized structures in a magnetized electron-positron plasma with relativistic finite temperature. We use as the initial condition for the simulation an envelope soliton solution of the nonlinear Schrödinger equation, derived from the relativistic two-fluid equations in the strongly magnetized limit. This envelope soliton turns out not to be a stable solution for the simulation and splits into two localized structures propagating in opposite directions. However, these two localized structures exhibit soliton-like behavior, as they keep their profiles after they collide with each other due to the periodic boundary conditions. We also observe the formation of localized structures in the evolution of a spatially uniform circularly polarized Alfvén wave. In both cases, the localized structures propagate with an amplitude-independent velocity.
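
    For reference, the envelope soliton of the focusing nonlinear Schrödinger equation in its standard dimensionless form, of the type used as the initial condition above; this textbook normalization need not coincide with the coefficients of the strongly magnetized two-fluid derivation in the paper:

        i\,\partial_t A + \tfrac{1}{2}\,\partial_x^2 A + |A|^2 A = 0, \qquad
        A(x,t) = a\,\mathrm{sech}\big[a\,(x - v t)\big]\,
                 \exp\!\Big[i\Big(v x + \tfrac{a^2 - v^2}{2}\,t\Big)\Big]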

  13. Wave-like warp propagation in circumbinary discs - I. Analytic theory and numerical simulations

    NASA Astrophysics Data System (ADS)

    Facchini, Stefano; Lodato, Giuseppe; Price, Daniel J.

    2013-08-01

    In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady-state shape of an inviscid disc subject to the binary torques. The steady-state tilt is a monotonically increasing function of radius, but misalignment is found at the disc inner edge. In the absence of viscosity, the disc does not present any twist. Then, we compare the time-dependent evolution of the warped disc calculated via the known linearized equations both with the analytic solutions and with full 3D numerical simulations. The simulations have been performed with the PHANTOM smoothed particle hydrodynamics (SPH) code using two million particles. We find a good agreement both in the tilt and in the phase evolution for small inclinations, even at very low viscosities. Moreover, we have verified that the linearized equations are able to reproduce the diffusive behaviour when α > H/R, where α is the disc viscosity parameter. Finally, we have used the 3D simulations to explore the non-linear regime. We observe a strongly non-linear behaviour, which leads to the breaking of the disc. Then, the inner disc starts precessing with its own precessional frequency. This behaviour has already been observed with numerical simulations in accretion discs around spinning black holes. The evolution of circumstellar accretion discs strongly depends on the warp evolution. Therefore, the issue explored in this paper could be of fundamental importance in order to understand the evolution of accretion discs in crowded environments, when the gravitational interaction with other stars is highly likely, and in multiple systems. Moreover, the evolution of

  14. Dynamic rupture simulation with an experimentally-determined friction law leads to slip-pulse propagation

    NASA Astrophysics Data System (ADS)

    Liao, Z.; Chang, J. C.; Reches, Z.

    2013-12-01

    We simulate the dynamic rupture along a vertical, strike-slip fault in an elastic half-space. The fault has frictional properties that were determined in a high-velocity rotary shear apparatus on Sierra White granite. The experimental fault was abruptly loaded by a massive flywheel, which is assumed to simulate the loading of a fault patch during an earthquake and is termed an Earthquake-Like-Slip Event (ELSE) (Chang et al., 2012). The experiments revealed systematic alternation between slip-weakening and slip-strengthening (Fig. 1A), and were considered proxies of fault-patch behavior during earthquakes of M = 4-8. We used the friction-distance relations of these experiments to form an empirical slip-dependent friction model, the ELSE-model (Fig. 1B). For the dynamic rupture simulation, we used the program of Ampuero (2002) (2D spectral boundary integral elements) designed for anti-plane (mode III) shear fracturing. To compare with published works, the calculations used a crust with the mechanical properties and stress state of the SCEC Version 3 benchmark (Harris et al., 2004). The calculations with a fault of ELSE-model friction revealed: (1) rupture propagation in a slip-pulse style with slip cessation behind the pulse; (2) systematic decrease of slip distance away from the nucleation zone; and (3) spontaneous arrest of the dynamic rupture without a barrier. These features suggest rupture in a self-healing slip-pulse mode (Fig. 1C), in contrast to the rupture of a fault with linear slip-weakening friction (Fig. 1B) (Rojas et al., 2008), which propagates in a crack-like mode without spontaneous arrest. We deduce that the slip-pulse in our simulation results from the fast recovery of shear strength observed in the ELSE experiments, and argue that incorporating this experimentally based friction model into rupture modeling produces a realistic propagation style of earthquake rupture. Figure 1 Fault patch behavior during an earthquake. (A) Experimental evolution of frictional stress, slip velocity, and

  15. Non-iterative adaptive time stepping with truncation error control for simulating variable-density flow

    NASA Astrophysics Data System (ADS)

    Hirthe, E. M.; Graf, T.

    2012-04-01

    Fluid density variations occur due to changes in the solute concentration, temperature and pressure of groundwater. Examples include the interaction between freshwater and seawater, radioactive waste disposal, groundwater contamination, and geothermal energy production. The physical coupling between flow and transport introduces non-linearity in the governing mathematical equations, such that solving variable-density flow problems typically requires very long computational times. Computational efficiency can be attained through the use of adaptive time-stepping schemes. The aim of this work is therefore to apply a non-iterative adaptive time-stepping scheme based on the local truncation error to variable-density flow problems. This new scheme is implemented in the HydroGeoSphere code (Therrien et al., 2011). The new time-stepping scheme is applied to the Elder (1967) and Shikaze et al. (1998) problems of free convection in porous and fractured-porous media, respectively. Numerical simulations demonstrate that non-iterative time-stepping based on local truncation error control fully automates the time step size and efficiently limits the temporal discretization error to the user-defined tolerance. Results of the Elder problem show that the new time-stepping scheme presented here is significantly more efficient than uniform time-stepping when high accuracy is required. Results of the Shikaze problem reveal that the new scheme is considerably faster than conventional time-stepping where time step sizes are either constant or controlled by absolute head/concentration changes. Future research will focus on the application of the new time-stepping scheme to variable-density flow in complex real-world fractured-porous rock.
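
    A minimal sketch of a non-iterative, truncation-error-based step-size controller of the kind described above: the local truncation error is estimated by comparing the computed solution change with a first-order prediction from the previous slope, and the next step is scaled to meet a user tolerance. The controller formula and all names are illustrative assumptions, not the HydroGeoSphere implementation.

        import numpy as np

        def adapt_dt(dt, u_old, u_new, dudt_old, atol=1e-4, safety=0.8,
                     grow_max=2.0, shrink_min=0.1):
            """Suggest the next time-step size from a local-truncation-error estimate."""
            predicted = u_old + dt * dudt_old           # first-order (linear) prediction
            lte = np.max(np.abs(u_new - predicted))     # ~ (dt^2/2)|u''|, leading local error
            factor = grow_max if lte == 0.0 else safety * np.sqrt(atol / lte)
            return dt * min(max(factor, shrink_min), grow_max)

        # toy usage with placeholder numbers (e.g. heads or concentrations at the nodes)
        print(adapt_dt(dt=10.0, u_old=np.array([1.0]), u_new=np.array([1.05]),
                       dudt_old=np.array([0.004])))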

  16. Causes and cures for errors in the simulation of ion extraction from plasmas

    SciTech Connect

    Becker, R.

    2006-03-15

    For many years, computer programs have been available to simulate the extraction of positive ions from plasmas. The results of such simulations may not always agree with measurements. There are different reasons for this: the mathematical formulation must match the simulated physics, the mesh resolution must be high enough to correctly take into account the nonlinear space charge in the sheath, and ray tracing must be done in sufficiently small steps, using numerically correct field components and partial derivatives. In addition to these hidden problems, the user may introduce errors through a wrong choice of parameters that does not match the assumptions of the mathematical formulation. Examples are the use of a positive-ion extraction program for the extraction of negative ions, the choice of a wrong angle between the plasma electrode and the beam boundary in the vicinity of the meniscus, and the use of too few trajectories. The design of extraction electrodes generally aims to optimize the optical properties and the current of the ion beam. However, it is also important to take into account the surface fields in order to avoid dark currents and sparking.

  17. Numerical and simulation study of terahertz radiation generation by laser pulses propagating in the extraordinary mode in magnetized plasma

    SciTech Connect

    Jha, Pallavi; Kumar Verma, Nirmal

    2014-06-15

    A one-dimensional numerical model for studying terahertz radiation generation by intense laser pulses propagating, in the extraordinary mode, through magnetized plasma has been presented. The direction of the static external magnetic field is perpendicular to the polarization as well as propagation direction of the laser pulse. A transverse electromagnetic wave with frequency in the terahertz range is generated due to the presence of the magnetic field. Further, two-dimensional simulations using XOOPIC code show that the THz fields generated in plasma are transmitted into vacuum. The fields obtained via simulation study are found to be compatible with those obtained from the numerical model.

  18. Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

    SciTech Connect

    Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.; Cyr, Eric C.; Wildey, Timothy Michael

    2014-01-01

    This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.

  19. Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng

    2014-05-01

    Although ray theory is based on a high-frequency approximation of the elastic wave equation, ray theory and wave-equation simulation methods should serve as mutual checks and hence be developed jointly; in practice, however, they have progressed largely in parallel and independently. For this reason, in this paper we take an alternative approach and mutually verify and test the computational accuracy and solution correctness of both the ray theory (the multistage irregular shortest-path method) and the wave-equation simulation methods (both the staggered finite difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common-source gather profiles and synthetic seismograms, we are able not only to verify the accuracy and correctness of each method, at least for kinematic features, but also to thoroughly understand the kinematic and dynamic features of wave propagation in anisotropic media. The results show that both the staggered finite difference method and the pseudo-spectral method are able to yield the same results even for complex anisotropic media (such as a fault model); the multistage irregular shortest-path method is capable of predicting kinematic features similar to those of the wave-equation simulation methods, so the two approaches can be used to test each other for methodological accuracy and solution correctness. In addition, with the aid of the ray-tracing results, it is easy to identify the multi-phases (or multiples) in the wavefield snapshots, common-source-point gather seismic sections and synthetic seismograms predicted by the wave-equation simulation method, which is a key issue for later seismic applications.

  20. 1D and 2D simulations of seismic wave propagation in fractured media

    NASA Astrophysics Data System (ADS)

    Möller, Thomas; Friederich, Wolfgang

    2016-04-01

    Fractures and cracks have a significant influence on the propagation of seismic waves. Their presence causes reflections and scattering and makes the medium effectively anisotropic. We present a numerical approach to simulation of seismic waves in fractured media that does not require direct modelling of the fracture itself, but uses the concept of linear slip interfaces developed by Schoenberg (1980). This condition states that at an interface between two imperfectly bonded elastic media, stress is continuous across the interface while displacement is discontinuous. It is assumed that the jump of displacement is proportional to stress which implies a jump in particle velocity at the interface. We use this condition as a boundary condition to the elastic wave equation and solve this equation in the framework of a Nodal Discontinuous Galerkin scheme using a velocity-stress formulation. We use meshes with tetrahedral elements to discretise the medium. Each individual element face may be declared as a slip interface. Numerical fluxes have been derived by solving the 1D Riemann problem for slip interfaces with elastic and viscoelastic rheology. Viscoelasticity is realised either by a Kelvin-Voigt body or a Standard Linear Solid. These fluxes are not limited to 1D and can - with little modification - be used for simulations in higher dimensions as well. The Nodal Discontinuous Galerkin code "neXd" developed by Lambrecht (2013) is used as a basis for the numerical implementation of this concept. We present examples of simulations in 1D and 2D that illustrate the influence of fractures on the seismic wavefield. We demonstrate the accuracy of the simulation through comparison to an analytical solution in 1D.
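
    Written out, the linear-slip condition referred to above states that the traction vector is continuous across the interface while the displacement jump is proportional to the traction, which in a velocity-stress formulation becomes a jump in particle velocity; here [ · ] denotes the jump across the interface and η the fracture compliance (generic notation assumed for illustration):

        [\boldsymbol{\tau}] = \mathbf{0}, \qquad
        [\mathbf{u}] = \boldsymbol{\eta}\,\boldsymbol{\tau}
        \quad\Longrightarrow\quad
        [\mathbf{v}] = \partial_t[\mathbf{u}] = \boldsymbol{\eta}\,\partial_t\boldsymbol{\tau}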

  1. CRPropa 3—a public astrophysical simulation framework for propagating extraterrestrial ultra-high energy particles

    NASA Astrophysics Data System (ADS)

    Alves Batista, Rafael; Dundovic, Andrej; Erdmann, Martin; Kampert, Karl-Heinz; Kuempel, Daniel; Müller, Gero; Sigl, Guenter; van Vliet, Arjen; Walz, David; Winchen, Tobias

    2016-05-01

    We present the simulation framework CRPropa version 3 designed for efficient development of astrophysical predictions for ultra-high energy particles. Users can assemble modules of the most relevant propagation effects in galactic and extragalactic space, include their own physics modules with new features, and receive on output primary and secondary cosmic messengers including nuclei, neutrinos and photons. In extension to the propagation physics contained in a previous CRPropa version, the new version facilitates high-performance computing and comprises new physical features such as an interface for galactic propagation using lensing techniques, an improved photonuclear interaction calculation, and propagation in time dependent environments to take into account cosmic evolution effects in anisotropy studies and variable sources. First applications using highlighted features are presented as well.

  2. Forward and adjoint simulations of seismic wave propagation on fully unstructured hexahedral meshes

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Komatitsch, Dimitri; Luo, Yang; Martin, Roland; Le Goff, Nicolas; Casarotti, Emanuele; Le Loher, Pieyre; Magnoni, Federica; Liu, Qinya; Blitz, Céline; Nissen-Meyer, Tarje; Basini, Piero; Tromp, Jeroen

    2011-08-01

    We present forward and adjoint spectral-element simulations of coupled acoustic and (an)elastic seismic wave propagation on fully unstructured hexahedral meshes. Simulations benefit from recent advances in hexahedral meshing, load balancing and software optimization. Meshing may be accomplished using a mesh generation tool kit such as CUBIT, and load balancing is facilitated by graph partitioning based on the SCOTCH library. Coupling between fluid and solid regions is incorporated in a straightforward fashion using domain decomposition. Topography, bathymetry and Moho undulations may be readily included in the mesh, and physical dispersion and attenuation associated with anelasticity are accounted for using a series of standard linear solids. Finite-frequency Fréchet derivatives are calculated using adjoint methods in both fluid and solid domains. The software is benchmarked for a layercake model. We present various examples of fully unstructured meshes, snapshots of wavefields and finite-frequency kernels generated by Version 2.0 'Sesame' of our widely used open source spectral-element package SPECFEM3D.

  3. Simulation systems for tsunami wave propagation forecasting within the French tsunami warning center

    NASA Astrophysics Data System (ADS)

    Gailler, A.; Hébert, H.; Loevenbruck, A.; Hernandez, B.

    2013-10-01

    A model-based tsunami prediction system has been developed as part of the French Tsunami Warning Center (operational since 1 July 2012). It involves a precomputed unit source functions database (i.e., a number of tsunami model runs that are calculated ahead of time and stored). For the Mediterranean basin, the faults of the unit functions are placed adjacent to each other, following the discretization of the main seismogenic faults. An automated composite scenarios calculation tool is implemented to allow the simulation of any tsunami propagation scenario (i.e., of any seismic moment). Uncertainty on the magnitude of the detected event and inaccuracy of the epicenter location are taken into account in the composite scenarios calculation. Together with this forecasting system, another operational tool based on real time computing is implemented as part of the French Tsunami Warning Center. This second tsunami simulation tool takes advantage of multiprocessor approaches and more realistic seismological parameters, once the focal mechanism is established. Three examples of historical earthquakes are presented, providing warning refinement compared to the rough tsunami risk map given by the model-based decision matrix.
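
    The composite-scenario idea, i.e. assembling an arbitrary event from the precomputed unit-source runs, amounts to a linear combination of the stored waveforms weighted by the slip assigned to each unit fault; the array layout and numbers below are a generic illustration and do not reflect the actual warning-center database.

        import numpy as np

        # Precomputed database: one simulated waveform per unit source and coastal gauge;
        # unit_waveforms[s, g, t] = sea-surface elevation for unit slip on source s at gauge g.
        n_sources, n_gauges, n_times = 5, 3, 1200
        rng = np.random.default_rng(0)
        unit_waveforms = rng.normal(scale=0.01, size=(n_sources, n_gauges, n_times))  # placeholder

        def composite_scenario(unit_waveforms, slip):
            """Linearly combine unit-source waveforms for a given slip distribution."""
            return np.tensordot(slip, unit_waveforms, axes=1)     # -> (n_gauges, n_times)

        slip = np.array([0.0, 1.2, 2.5, 1.2, 0.0])   # hypothetical slip (m) on each unit fault
        forecast = composite_scenario(unit_waveforms, slip)
        print(forecast.shape)                         # (3, 1200)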

  4. Difference in Simulated Low-Frequency Sound Propagation in the Various Species of Baleen Whale

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Toshio; Naoi, Jun; Futa, Koji; Kikuchi, Toshiaki

    2004-05-01

    Whales found in the North Pacific are known to migrate over several thousand kilometers, from the Alaskan coast, where they feed heavily during the summer, to low-latitude waters, where they breed during the winter. It is therefore assumed that whales use the “deep sound channel” for their long-distance communication. The main objective of this study is to clarify the behaviors of baleen whales from the standpoint of acoustical oceanography. Hence, the authors investigated the possibility of long-distance communication in various species of baleen whales by simulating the long-distance propagation of their sound transmissions, applying mode theory to actual sound speed profiles at the corresponding transmission frequencies. As a result, the possibility of long-distance communication among blue whales using the deep sound channel was indicated. It was also indicated that communication among fin whales and blue whales can be made possible by their coming close to shore slopes such as those of the Island of Hawaii.

  5. Partially coherent wavefront propagation simulations for inelastic x-ray scattering beamline including crystal optics

    NASA Astrophysics Data System (ADS)

    Suvorov, Alexey; Cai, Yong Q.; Sutter, John P.; Chubar, Oleg

    2014-09-01

    Until now, simulation of perfect crystal optics has not been available in the "Synchrotron Radiation Workshop" (SRW) wave-optics computer code, hindering the accurate modelling of synchrotron radiation beamlines containing optical components with multiple-crystal arrangements, such as double-crystal monochromators and high-energy-resolution monochromators. A new module has been developed for SRW for calculating dynamical diffraction from a perfect crystal in the Bragg case. We demonstrate its successful application to the modelling of partially coherent undulator radiation propagating through the Inelastic X-ray Scattering (IXS) beamline of the National Synchrotron Light Source II (NSLS-II) at Brookhaven National Laboratory. The IXS beamline contains a double-crystal and a multiple-crystal high-energy-resolution monochromator, as well as complex optics such as compound refractive lenses and Kirkpatrick-Baez mirrors for X-ray beam transport and shaping, which makes it an excellent case for benchmarking the new functionality of the updated SRW code. As a photon-hungry experimental technique, this case study for the IXS beamline is particularly valuable as it provides an accurate evaluation of the photon flux at the sample position, using the most advanced simulation methods and taking into account the parameters of the electron beam, the details of the undulator source, and the crystal optics.

  6. Efficient simulation of cardiac electrical propagation using high order finite elements

    NASA Astrophysics Data System (ADS)

    Arthurs, Christopher J.; Bishop, Martin J.; Kay, David

    2012-05-01

    We present an application of high order hierarchical finite elements for the efficient approximation of solutions to the cardiac monodomain problem. We detail the hurdles which must be overcome in order to achieve theoretically-optimal errors in the approximations generated, including the choice of method for approximating the solution to the cardiac cell model component. We place our work on a solid theoretical foundation and show that it can greatly improve the accuracy in the approximation which can be achieved in a given amount of processor time. Our results demonstrate superior accuracy over linear finite elements at a cheaper computational cost and thus indicate the potential indispensability of our approach for large-scale cardiac simulation.

  7. Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.

    PubMed

    Harvey, Ashley R; Carden, Randy L

    2009-08-01

    Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that the use of an mp3 player would result in an increase not only in driving errors while operating a driving simulator, but in driver anxiety scores as well. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. Thirty-two participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. The number of driving errors per course (e.g., leaving the road, impacts with stationary objects, loss of vehicular control) and anxiety scores were significantly higher when the iPod was in use. Anxiety scores were unrelated to the number of driving errors. PMID:19831096

  8. Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1995-01-01

    During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) a Unit-Memory Convolutional Encoder module (UMCEncd); (2) a hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMCs, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMCs was driven, in part, by the desire to investigate high-rate convolutional codes, which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMCs were found which are good candidates for inner codes. Besides the further development of the simulator, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.
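
    To make the unit-memory idea concrete: the encoder output for each k-bit input block depends only on the current block and the single previous block, v_t = u_t G0 + u_{t-1} G1 over GF(2). The generator matrices below are arbitrary examples for illustration, not codes from the CLEAN study.

        import numpy as np

        # Example (k = 2, n = 4) unit-memory convolutional encoder.
        G0 = np.array([[1, 0, 1, 1],
                       [0, 1, 1, 0]])       # illustrative generator matrices
        G1 = np.array([[0, 1, 0, 1],
                       [1, 0, 1, 1]])

        def umc_encode(bits, k=2):
            """Encode a bit sequence with a unit-memory convolutional code."""
            blocks = np.array(bits).reshape(-1, k)
            prev = np.zeros(k, dtype=int)             # exactly one block of memory
            out = []
            for u in blocks:
                v = (u @ G0 + prev @ G1) % 2
                out.extend(v.tolist())
                prev = u
            return out

        print(umc_encode([1, 0, 1, 1, 0, 1]))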

  9. Sources of error in CEMRA-based CFD simulations of the common carotid artery

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Owais; Wasserman, Bruce A.; Steinman, David A.

    2013-03-01

    Magnetic resonance imaging is often used as a source for reconstructing vascular anatomy for the purpose of computational fluid dynamics (CFD) analysis. We recently observed large discrepancies in such "image-based" CFD models of the normal common carotid artery (CCA) derived from contrast enhanced MR angiography (CEMRA), when compared to phase contrast MR imaging (PCMRI) of the same subjects. A novel quantitative comparison of velocity profile shape of N=20 cases revealed an average 25% overestimation of velocities by CFD, attributed to a corresponding underestimation of lumen area in the CEMRA-derived geometries. We hypothesized that this was due to blurring of edges in the images caused by dilution of contrast agent during the relatively long elliptic centric CEMRA acquisitions, and confirmed this with MRI simulations. Rescaling of CFD models to account for the lumen underestimation improved agreement with the velocity levels seen in the corresponding PCMRI images, but discrepancies in velocity profile shape remained, with CFD tending to over-predict velocity profile skewing. CFD simulations incorporating realistic inlet velocity profiles and non-Newtonian rheology had a negligible effect on velocity profile skewing, suggesting a role for other sources of error or modeling assumptions. In summary, our findings suggest that caution should be exercised when using elliptic-centric CEMRA data as a basis for image-based CFD modeling, and emphasize the importance of comparing image-based CFD models against in vivo data whenever possible.

  10. Simulation study of respiratory-induced errors in cardiac positron emission tomography/computed tomography

    SciTech Connect

    Fitzpatrick, Gianna M.; Wells, R. Glenn

    2006-08-15

    Heart disease is a leading killer in Canada and positron emission tomography (PET) provides clinicians with in vivo metabolic information for diagnosing heart disease. Transmission data are usually acquired with 68Ge, although the advent of PET/CT scanners has made computed tomography (CT) an alternative option. The fast data acquisition of CT compared to PET may cause potential misregistration problems, leading to inaccurate attenuation correction (AC). Using Monte Carlo simulations and an anthropomorphic dynamic computer phantom, this study determines the magnitude and location of respiratory-induced errors in radioactivity uptake measured in cardiac PET/CT. A homogeneous tracer distribution in the heart was considered. The AC was based on (1) a time-averaged attenuation map, (2) CT maps from a single phase of the respiratory cycle, and (3) CT maps phase-matched to the emission data. Circumferential profiles of the heart uptake were compared and differences of up to 24% were found between the single-phase CT-AC method and the true phantom values. Simulation results were supported by a PET/CT canine study, which showed differences of up to 10% in the heart uptake in the lung-heart boundary region when comparing 68Ge- to CT-based AC with the CT map acquired at end inhalation.

  11. Caution: Precision Error in Blade Alignment Results in Faulty Unsteady CFD Simulation

    NASA Astrophysics Data System (ADS)

    Lewis, Bryan; Cimbala, John; Wouden, Alex

    2012-11-01

    Turbomachinery components experience unsteady loads at several frequencies. The rotor frequency corresponds to the time for one rotor blade to rotate between two stator vanes, and is normally dominant for rotor torque oscillations. The guide vane frequency corresponds to the time for two rotor blades to pass by one guide vane. The machine frequency corresponds to the machine RPM. Oscillations at the machine frequency are always present due to minor blade misalignments and imperfections resulting from manufacturing defects. However, machine frequency oscillations should not be present in CFD simulations if the mesh is free of both blade misalignment and surface imperfections. The flow through a Francis hydroturbine was modeled with unsteady Reynolds-Averaged Navier-Stokes (URANS) CFD simulations and a dynamic rotating grid. Spectral analysis of the unsteady torque on the rotor blades revealed a large component at the machine frequency. Close examination showed that one blade was displaced by 0.0001° due to round-off errors during mesh generation. A second mesh without blade misalignment was then created. Subsequently, large machine frequency oscillations were not observed for this mesh. These results highlight the effect of minor geometry imperfections on CFD solutions. This research was supported by a grant from the DoE and a National Defense Science and Engineering Graduate Fellowship.

  12. Partitioning Net Ecosystem Carbon Exchange Into net Assimilation and Respiration With Canopy-scale Isotopic Measurements: an Error Propagation Analysis With Both 13C and 18O Data

    NASA Astrophysics Data System (ADS)

    Peylin, P.; Ogee, J.; Cuntz, M.; Bariac, T.; Ciais, P.; Brunet, Y.

    2003-12-01

    Stable CO2 isotope measurements are increasingly used to partition the net CO2 exchange between terrestrial ecosystems and the atmosphere into non-foliar respiration (FR) and gross photosynthesis (FA). However, the accuracy of the partitioning strongly depends on the isotopic disequilibrium between these two gross fluxes, and a rigorous estimation of the errors on FA and FR is needed. In this study we account for and propagate uncertainties on all terms in the mass balance equations for total and "labeled" CO2 in order to get precise estimates of the errors on FA and FR. We applied our method to a maritime pine forest in the southwest of France. Using the δ13C-CO2 and CO2 measurements, we show that the resulting uncertainty associated with the gross fluxes can be as large as 4 µmol m^-2 s^-1. In addition, even if we could get more precise estimates of the isoflux and of the isotopic signature of FA, we show that this error would not be significantly reduced. This is because the isotopic disequilibrium between FA and FR is around 2-3‰, i.e. of the order of magnitude of the uncertainty on the isotopic signature of FR (δR). With δ18O-CO2 and CO2 measurements, the uncertainty associated with the gross fluxes is also around 4 µmol m^-2 s^-1. On the other hand, it could be dramatically reduced if we were able to get more precise estimates of the CO18O isoflux and of the associated discrimination during photosynthesis. This is because the isotopic disequilibrium between FA and FR is large, of the order of 10-15‰, i.e. much larger than the uncertainty on δR. The isotopic disequilibrium between FA and FR and the uncertainty on δR vary among ecosystems and over the year. Our approach may help to choose the best strategy for studying the carbon budget of a given ecosystem using stable isotopes.
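
    A generic sketch of the first-order (Gaussian) error propagation underlying such a partitioning: the two mass-balance equations are solved for the gross fluxes and measurement uncertainties are propagated through numerical partial derivatives. The simplified two-equation system, symbols and numbers below are standard isotope-partitioning placeholders, not the exact formulation or data of the study.

        import numpy as np

        def partition(FN, D, dA, dR):
            """Solve FN = FA + FR and D = dA*FA + dR*FR for the gross fluxes (FA, FR)."""
            FA = (D - dR * FN) / (dA - dR)
            return FA, FN - FA

        def propagated_sigma(values, sigmas, index=0, eps=1e-6):
            """First-order Gaussian error propagation via numerical partial derivatives."""
            values = np.asarray(values, dtype=float)
            var = 0.0
            for i, (v, s) in enumerate(zip(values, sigmas)):
                step = eps * max(abs(v), 1.0)
                hi, lo = values.copy(), values.copy()
                hi[i] += step
                lo[i] -= step
                dfdx = (partition(*hi)[index] - partition(*lo)[index]) / (2.0 * step)
                var += (dfdx * s) ** 2
            return np.sqrt(var)

        # placeholder inputs: net flux FN (umol m^-2 s^-1), isoflux D, signatures dA, dR (per mil)
        vals = [-10.0, 280.0, -27.0, -25.0]
        sigs = [0.5, 20.0, 0.5, 1.0]
        print("sigma(FA) ~", propagated_sigma(vals, sigs, index=0))

    Because the disequilibrium dA - dR appears in the denominator, a small disequilibrium of a few per mil inflates the propagated error, which is the behaviour the abstract reports for the 13C case.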

  13. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak-to-moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillation. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations. PMID:26560913
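
    For orientation, one common way a bit error rate is obtained from a scintillation index under the log-normal intensity model mentioned above is to average the conditional BER of an intensity-modulated link over the log-normal density; the OOK modulation format, the conditional-BER expression and the SNR value are assumptions of this sketch, not parameters of the paper.

        import numpy as np
        from scipy.special import erfc
        from scipy.integrate import quad

        def mean_ber_ook(snr, scint_index):
            """Average BER of an OOK link over log-normal irradiance fluctuations
            (unit mean intensity, scintillation index sigma_I^2 = scint_index)."""
            sigma2 = np.log(1.0 + scint_index)       # log-irradiance variance
            sigma = np.sqrt(sigma2)

            def integrand(I):
                p = np.exp(-(np.log(I) + 0.5 * sigma2) ** 2 / (2.0 * sigma2)) \
                    / (I * sigma * np.sqrt(2.0 * np.pi))
                return p * 0.5 * erfc(snr * I / (2.0 * np.sqrt(2.0)))

            ber, _ = quad(integrand, 1e-6, 50.0)
            return ber

        print(mean_ber_ook(snr=10.0, scint_index=0.2))   # hypothetical SNR and scintillation index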

  14. Simulation-aided investigation of beam hardening induced errors in CT dimensional metrology

    NASA Astrophysics Data System (ADS)

    Tan, Ye; Kiekens, Kim; Welkenhuyzen, Frank; Angel, J.; De Chiffre, L.; Kruth, Jean-Pierre; Dewulf, Wim

    2014-06-01

    Industrial x-ray computed tomography (CT) systems are being increasingly used as dimensional measuring machines. However, micron-level accuracy is not yet always achievable. The measurement accuracy is influenced by many factors, such as the workpiece properties, x-ray voltage, filter, beam hardening, scattering and calibration methods (Kruth et al 2011 CIRP Ann. Manuf. Technol. 60 821-42, Bartscher et al 2007 CIRP Ann. Manuf. Technol. 56 495-8, De Chiffre et al 2005 CIRP Ann. Manuf. Technol. 54 479-82, Schmitt and Niggemann 2010 Meas. Sci. Technol. 21 054008). Since most of these factors are mutually correlated, it remains challenging to interpret measurement results and to identify the distinct error sources. Since simulations allow the different contributing factors to be isolated, they form a useful complement to experimental investigations. Dewulf et al (2012 CIRP Ann. Manuf. Technol. 61 495-8) investigated the influence of beam hardening correction parameters on the diameter of a calibrated steel pin in different experimental set-ups. It was clearly shown that an inappropriate beam hardening correction can result in significant dimensional errors. This paper confirms these results using simulations of a pin surrounded by a stepped cylinder: a clear discontinuity in the measured diameter of the inner pin is observed where it enters the surrounding material. The results are expanded with an investigation of the beam hardening effect on the measurement results for both inner and outer diameters of the surrounding stepped cylinder. Accuracy as well as the effect on the uncertainty determination is discussed. The results are compared with simulations using monochromatic beams in order to have a benchmark which excludes beam hardening effects and x-ray scattering. Furthermore, based on the above results, the authors propose a case-dependent calibration artefact for beam hardening correction and edge offset determination. In the final part of the paper, the

  15. Parallax error in long-axial field-of-view PET scanners—a simulation study

    NASA Astrophysics Data System (ADS)

    Schmall, Jeffrey P.; Karp, Joel S.; Werner, Matt; Surti, Suleman

    2016-07-01

    There is a growing interest in the design and construction of a PET scanner with a very long axial extent. One critical design challenge is the impact of the long axial extent on the scanner spatial resolution properties. In this work, we characterize the effect of parallax error in PET system designs having an axial field-of-view (FOV) of 198 cm (total-body PET scanner) using fully-3D Monte Carlo simulations. Two different scintillation materials were studied: LSO and LaBr3. The crystal size in both cases was 4 × 4 × 20 mm^3. Several different depth-of-interaction (DOI) encoding techniques were investigated to characterize the improvement in spatial resolution when using a DOI-capable detector. To measure spatial resolution we simulated point sources in a warm background in the center of the imaging FOV, where the effects of axial parallax are largest, and at several positions radially offset from the center. Using a line-of-response based ordered-subset expectation maximization reconstruction algorithm we found that the axial resolution in an LSO scanner degrades from 4.8 mm to 5.7 mm (full width at half maximum) at the center of the imaging FOV when extending the axial acceptance angle (α) from ±12° (corresponding to an axial FOV of 18 cm) to the maximum of ±67°; a similar result was obtained with LaBr3, in which the axial resolution degraded from 5.3 mm to 6.1 mm. For comparison we also measured the degradation due to radial parallax error in the transverse imaging FOV; the transverse resolution, averaging radial and tangential directions, of an LSO scanner was degraded from 4.9 mm to 7.7 mm, for a measurement at the center of the scanner compared to a measurement with a radial offset of 23 cm. Simulations of a DOI detector design improved the spatial resolution in all dimensions. The axial resolution in the LSO-based scanner, with α = ±67°, was improved from 5.7 mm to 5.0 mm by

  16. Coherence of Mach fronts during heterogeneous supershear earthquake rupture propagation: Simulations and comparison with observations

    USGS Publications Warehouse

    Bizzarri, A.; Dunham, Eric M.; Spudich, P.

    2010-01-01

    We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. (2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a Mach pulse causes approximately an ω^-1.7 high frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation

  17. Coherence of Mach fronts during heterogeneous supershear earthquake rupture propagation: Simulations and comparison with observations

    NASA Astrophysics Data System (ADS)

    Bizzarri, A.; Dunham, Eric M.; Spudich, P.

    2010-08-01

    We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. (2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a Mach pulse causes approximately an ω^-1.7 high frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation of

  18. Coherence of Mach Fronts During Heterogeneous Supershear Earthquake Rupture Propagation: Simulations and Comparison With Observations

    NASA Astrophysics Data System (ADS)

    Spudich, P.; Bizzarri, A.; Dunham, E. M.

    2009-12-01

    We study how heterogeneous rupture propagation affects the coherence of shear- and Rayleigh-Mach wave fronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved due to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear-wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008): 1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. 2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a self-similar (k^-1) initial shear stress distribution causes an ω^-1.7 high frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we

  19. Quasi-plane shear wave propagation induced by acoustic radiation force with a focal line region: a simulation study.

    PubMed

    Guo, Min; Abbott, Derek; Lu, Minhua; Liu, Huafeng

    2016-03-01

    Shear wave propagation speed has been regarded as an attractive indicator for quantitatively measuring the intrinsic mechanical properties of soft tissues. While most existing techniques use acoustic radiation force (ARF) excitation with focal spot region based on linear array transducers, we try to employ a special ARF with a focal line region and apply it to viscoelastic materials to create shear waves. First, a two-dimensional capacitive micromachined ultrasonic transducer with 64 × 128 fully controllable elements is realised and simulated to generate this special ARF. Then three-dimensional finite element models are developed to simulate the resulting shear wave propagation through tissue phantom materials. Three different phantoms are explored in our simulation study using: (a) an isotropic viscoelastic medium, (b) within a cylindrical inclusion, and (c) a transverse isotropic viscoelastic medium. For each phantom, the ARF creates a quasi-plane shear wave which has a preferential propagation direction perpendicular to the focal line excitation. The propagation of the quasi-plane shear wave is investigated and then used to reconstruct shear moduli sequentially after the estimation of shear wave speed. In the phantom with a transverse isotropic viscoelastic medium, the anisotropy results in maximum speed parallel to the fiber direction and minimum speed perpendicular to the fiber direction. The simulation results show that the line excitation extends the displacement field to obtain a large imaging field in comparison with spot excitation, and demonstrate its potential usage in measuring the mechanical properties of anisotropic tissues. PMID:26768475
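
    The reconstruction step mentioned above, shear modulus from shear wave speed, can be illustrated with a minimal time-of-flight sketch. The synthetic displacement profiles, tracking positions, and material constants below are assumptions; the relation mu = rho * c^2 is the standard small-strain estimate, not the authors' finite element pipeline.

```python
import numpy as np

# Synthetic lateral displacement profiles for a shear wave front moving at c_true.
c_true = 2.0                          # "true" shear wave speed [m/s]
rho = 1000.0                          # assumed tissue density [kg/m^3]
x = np.linspace(0.005, 0.020, 16)     # lateral tracking positions [m]
t = np.arange(0.0, 0.02, 1e-4)        # slow-time axis [s]

# Each position sees a Gaussian displacement pulse delayed by x / c_true.
disp = np.exp(-((t[None, :] - x[:, None] / c_true) / 5e-4) ** 2)

# Time-to-peak at each lateral position, then a linear fit of x versus arrival time.
t_peak = t[np.argmax(disp, axis=1)]
c_est = np.polyfit(t_peak, x, 1)[0]   # slope = propagation speed [m/s]

mu = rho * c_est ** 2                 # shear modulus from c = sqrt(mu / rho)
print(f"estimated shear wave speed: {c_est:.2f} m/s, shear modulus: {mu:.0f} Pa")
```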

  20. Stimulated Raman signals at conical intersections: Ab initio surface hopping simulation protocol with direct propagation of the nuclear wave function

    SciTech Connect

    Kowalewski, Markus Mukamel, Shaul

    2015-07-28

    Femtosecond Stimulated Raman Spectroscopy (FSRS) signals that monitor the excited state conical intersections dynamics of acrolein are simulated. An effective time dependent Hamiltonian for two C—H vibrational marker bands is constructed on the fly using a local mode expansion combined with a semi-classical surface hopping simulation protocol. The signals are obtained by a direct forward and backward propagation of the vibrational wave function on a numerical grid. Earlier work is extended to fully incorporate the anharmonicities and intermode couplings.

  1. Stimulated Raman signals at conical intersections: Ab initio surface hopping simulation protocol with direct propagation of the nuclear wave function

    NASA Astrophysics Data System (ADS)

    Kowalewski, Markus; Mukamel, Shaul

    2015-07-01

    Femtosecond Stimulated Raman Spectroscopy (FSRS) signals that monitor the excited state conical intersections dynamics of acrolein are simulated. An effective time dependent Hamiltonian for two C—H vibrational marker bands is constructed on the fly using a local mode expansion combined with a semi-classical surface hopping simulation protocol. The signals are obtained by a direct forward and backward propagation of the vibrational wave function on a numerical grid. Earlier work is extended to fully incorporate the anharmonicities and intermode couplings.
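
    This record (and its companion entry above) refers to direct forward and backward propagation of a vibrational wave function on a numerical grid. As a loose, generic illustration of grid propagation, the sketch below applies a standard split-operator step to a 1-D Gaussian wave packet in an arbitrary harmonic potential (atomic units); it is a textbook example, not the authors' effective Hamiltonian or code.

```python
import numpy as np

# Grid, arbitrary harmonic potential, and time step (atomic units).
n = 512
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, dx)
m, omega, dt = 1.0, 1.0, 0.01
V = 0.5 * m * omega**2 * x**2

# Initial Gaussian wave packet displaced from the potential minimum.
psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Split-operator step: half potential, full kinetic (in k-space), half potential.
expV = np.exp(-0.5j * V * dt)
expT = np.exp(-0.5j * (k**2 / m) * dt)
for _ in range(1000):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print("norm after propagation:", round(float(np.sum(np.abs(psi) ** 2) * dx), 6))
```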

  2. Simulations of the magnet misalignments, field errors and orbit correction for the SLC north arc

    SciTech Connect

    Kheifets, S.; Chao, A.; Jaeger, J.; Shoaee, H.

    1983-11-01

    Given the intensity of linac bunches and their repetition rate the desired luminosity of SLC 1.0 x 10^30 cm^-2 sec^-1 requires focusing the interaction bunches to a spot size in the micrometer (μm) range. The lattice that achieves this goal is obtained by careful design of both the arcs and the final focus systems. For the micrometer range of the beam spot size both the second order geometric and chromatic aberrations may be completely destructive. The concept of second order achromat proved to be extremely important in this respect and the arcs are built essentially as a sequence of such achromats. Between the end of the linac and the interaction point (IP) there are three special sections in addition to the regular structure: matching section (MS) designed for matching the phase space from the linac to the arcs, reverse bend section (RB) which provides the matching when the sign of the curvature is reversed in the arc and the final focus system (FFS). The second order calculations are done by the program TURTLE. Using the TURTLE histogram in the x-y plane and assuming identical histogram for the south arc, corresponding 'luminosity' L is found. The simulation of the misalignments and error effects has to be done simultaneously with the design and simulation of the orbit correction scheme. Even after the orbit is corrected and the beam can be transmitted through the vacuum chamber, the focusing of the beam to the desired size at the IP remains a serious potential problem. It is found, as will be elaborated later, that even for the best achieved orbit correction, additional corrections of the dispersion function and possibly transfer matrix are needed. This report describes a few of the presently conceived correction schemes and summarizes some results of computer simulations done for the SLC north arc. 8 references, 12 figures, 6 tables.

  3. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. PMID:25159436
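
    A minimal sketch of the kind of Monte Carlo described here: scores are drawn from assumed probability mass functions representing different expert behaviours, and the standard deviation of the simulated scores is used as a variability (robustness) indicator. The 1-5 score scale and the pmfs below are invented placeholders, not the elicited data of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = np.arange(1, 6)                # assumed 1-5 scoring scale

# Hypothetical pmfs for three expert behaviours, centred on an elicited score of 3.
pmfs = {
    "confident":           [0.00, 0.05, 0.90, 0.05, 0.00],
    "reasonably doubting":  [0.05, 0.20, 0.50, 0.20, 0.05],
    "irresolute":           [0.15, 0.20, 0.30, 0.20, 0.15],
}

n_sim = 100_000
for behaviour, p in pmfs.items():
    sample = rng.choice(scores, size=n_sim, p=p)
    # The standard deviation of the simulated scores serves as the robustness indicator.
    print(f"{behaviour:>20}: mean = {sample.mean():.2f}, sd = {sample.std(ddof=1):.2f}")
```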

  4. Pulse wave propagation in a model human arterial network: Assessment of 1-D visco-elastic simulations against in vitro measurements

    PubMed Central

    Alastruey, Jordi; Khir, Ashraf W.; Matthys, Koen S.; Segers, Patrick; Sherwin, Spencer J.; Verdonck, Pascal R.; Parker, Kim H.; Peiró, Joaquim

    2011-01-01

    The accuracy of the nonlinear one-dimensional (1-D) equations of pressure and flow wave propagation in Voigt-type visco-elastic arteries was tested against measurements in a well-defined experimental 1:1 replica of the 37 largest conduit arteries in the human systemic circulation. The parameters required by the numerical algorithm were directly measured in the in vitro setup and no data fitting was involved. The inclusion of wall visco-elasticity in the numerical model reduced the underdamped high-frequency oscillations obtained using a purely elastic tube law, especially in peripheral vessels, which was previously reported in this paper [Matthys et al., 2007. Pulse wave propagation in a model human arterial network: Assessment of 1-D numerical simulations against in vitro measurements. J. Biomech. 40, 3476–3486]. In comparison to the purely elastic model, visco-elasticity significantly reduced the average relative root-mean-square errors between numerical and experimental waveforms over the 70 locations measured in the in vitro model: from 3.0% to 2.5% (p<0.012) for pressure and from 15.7% to 10.8% (p<0.002) for the flow rate. In the frequency domain, average relative errors between numerical and experimental amplitudes from the 5th to the 20th harmonic decreased from 0.7% to 0.5% (p<0.107) for pressure and from 7.0% to 3.3% (p<10−6) for the flow rate. These results provide additional support for the use of 1-D reduced modelling to accurately simulate clinically relevant problems at a reasonable computational cost. PMID:21724188
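
    The error metrics quoted in this record can be reproduced in outline as follows. The sketch computes a time-domain relative root-mean-square error and relative harmonic amplitude errors between a "simulated" and a "measured" waveform; the synthetic signals, the normalisation by the measured waveform's range, and the chosen harmonics are assumptions that may differ from the paper's exact definitions.

```python
import numpy as np

def relative_rms_error(simulated, measured):
    """Relative RMS error in percent, normalised by the measured waveform's range."""
    rms = np.sqrt(np.mean((simulated - measured) ** 2))
    return 100.0 * rms / (measured.max() - measured.min())

def harmonic_amplitude_errors(simulated, measured, dt, period, harmonics):
    """Relative amplitude errors (percent) at selected harmonics of the cardiac period."""
    n = len(measured)
    freq = np.fft.rfftfreq(n, dt)
    amp_sim = np.abs(np.fft.rfft(simulated)) / n
    amp_meas = np.abs(np.fft.rfft(measured)) / n
    errors = []
    for h in harmonics:
        i = int(np.argmin(np.abs(freq - h / period)))   # closest bin to the harmonic
        errors.append(100.0 * abs(amp_sim[i] - amp_meas[i]) / amp_meas[i])
    return np.array(errors)

# Synthetic pressure-like waveforms standing in for one measurement location.
dt, period = 1e-3, 0.8
t = np.arange(0.0, 8 * period, dt)
measured = 80.0 + sum((20.0 / h) * np.sin(2 * np.pi * h * t / period) for h in range(1, 11))
simulated = measured + np.random.default_rng(1).normal(0.0, 0.5, t.size)

print(f"time-domain relative RMS error: {relative_rms_error(simulated, measured):.2f} %")
print("harmonic errors (5th to 8th), %:",
      harmonic_amplitude_errors(simulated, measured, dt, period, range(5, 9)).round(2))
```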

  5. Simulation systems for tsunami wave propagation forecasting within the French tsunami warning center

    NASA Astrophysics Data System (ADS)

    Gailler, A.; Hébert, H.; Loevenbruck, A.; Hernandez, B.

    2012-04-01

    Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed, but they present a challenge to run in real-time, partly due to computational limitations and also to a lack of detailed knowledge on the earthquake rupture parameters. A first generation model-based tsunami prediction system is being developed as part of the French Tsunami Warning Center that will be operational by mid 2012. It involves a pre-computed unit source functions database (i.e., a number of tsunami model runs that are calculated ahead of time and stored) corresponding to tsunami scenarios generated by a source of seismic moment 1.75E+19 N.m with a rectangular fault 25 km by 20 km in size and 1 m in slip. The faults of the unit functions are placed adjacent to each other, following the discretization of the main seismogenic faults bounding the western Mediterranean and North-East Atlantic basins. An automated composite scenario calculation tool is implemented to allow the simulation of any tsunami propagation scenario (i.e., of any seismic moment). The strategy is based on linear combinations and scaling of a finite number of pre-computed unit source functions. The number of unit functions involved varies with the magnitude of the wanted composite solution and the combined wave heights are multiplied by a given scaling factor to produce the new arbitrary scenario. Uncertainty on the magnitude of the detected event and inaccuracy on the epicenter location are taken into account in the composite scenarios calculation. For one tsunamigenic event, the tool finally produces 3 warning maps (i.e., most likely, minimum and maximum scenarios) together with the rough decision matrix representation. A no-dimension code representation is chosen to show zones in the main axis of energy at the basin
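
    The composite-scenario strategy (linear combination of pre-computed unit source functions, scaled to the event's seismic moment) can be sketched as below. The stored grids are random placeholders and the helper composite_scenario is a hypothetical name; the sketch only assumes linearity of deep-water tsunami propagation, as the record describes.

```python
import numpy as np

# Pre-computed "unit source" wave-height grids, one per 25 km x 20 km unit fault,
# each generated by a source of moment M0_UNIT (random placeholders here).
M0_UNIT = 1.75e19                     # N.m
n_units, ny, nx = 4, 50, 80
rng = np.random.default_rng(2)
unit_grids = rng.random((n_units, ny, nx))

def composite_scenario(unit_indices, event_moment):
    """Hypothetical composite: sum adjacent unit sources, then scale to the event moment.

    Assumes linear (deep-water) tsunami propagation, so wave heights scale with slip,
    i.e. with seismic moment for a fixed total fault area.
    """
    combined = unit_grids[list(unit_indices)].sum(axis=0)
    return combined * (event_moment / (M0_UNIT * len(unit_indices)))

# Example: an event of moment 7.0e19 N.m represented by two adjacent unit faults.
wave_heights = composite_scenario([1, 2], 7.0e19)
print("maximum combined wave height (arbitrary units):", wave_heights.max().round(3))
```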

  6. Issues in RF propagation modeling in an urban environment using the Extended Air Defense Simulation (EADSIM) mission level model.

    SciTech Connect

    Booher, Stephen R.; Bacon, Larry Donald

    2006-02-01

    is only evaluated along a 2-D path in the vertical orientation. This precludes modeling propagation in the urban canyons of metropolitan areas, where horizontal paths are dominant. It also precludes modeling exterior to interior propagation. In view of the apparent inadequacy of urban propagation within mission level models, as evidenced by EADSIM, the study also attempts to address possible solutions to the problem. Correction of the sparsing techniques in both TIREM and SEKE models is recommended. Both SEKE and TIREM are optimized for DTED level 1 data, sparsed at 3 arc seconds resolution. This led to significant errors when map data was sparsed at higher or lower resolution. TIREM's errors would be significantly reduced if the 999 point array limit was eliminated. This would permit using interval sizes equal to the map resolution for larger areas. This same problem could be fixed in SEKE by changing the interval spacing from a fixed 3 arc second resolution (≈93 meters) to an interval which is set at the map resolution. Additionally, the cell elevation interpolation method which TIREM uses is inappropriate for the man-made structures encountered in urban environments. Turning this method of determining height off, or providing a selectable switch is desired. In the near term, it appears that further research into ray-tracing models is appropriate. Codes such as RF-ProTEC, which can be dynamically linked to mission level models such as EADSIM, can provide the higher fidelity propagation calculations required, and still permit the dynamic interactions required of the mission level model. Additional research should also be conducted on the best methods of representing man-made structures to determine whether codes other than ray-trace can be used.

  7. Coupling Hydraulic Fracturing Propagation and Gas Well Performance for Simulation of Production in Unconventional Shale Gas Reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, C.; Winterfeld, P. H.; Wu, Y. S.; Wang, Y.; Chen, D.; Yin, C.; Pan, Z.

    2014-12-01

    Hydraulic fracturing combined with horizontal drilling has made it possible to economically produce natural gas from unconventional shale gas reservoirs. An efficient methodology for evaluating hydraulic fracturing operation parameters, such as fluid and proppant properties, injection rates, and wellhead pressure, is essential for the evaluation and efficient design of these processes. Traditional numerical evaluation and optimization approaches are usually based on simulated fracture properties such as the fracture area. In our opinion, a methodology based on simulated production data is better, because production is the goal of hydraulic fracturing and we can calibrate this approach with production data that is already known. This numerical methodology requires a fully-coupled hydraulic fracture propagation and multi-phase flow model. In this paper, we present a general fully-coupled numerical framework to simulate hydraulic fracturing and post-fracture gas well performance. This three-dimensional, multi-phase simulator focuses on: (1) fracture width increase and fracture propagation that occurs as slurry is injected into the fracture, (2) erosion caused by fracture fluids and leakoff, (3) proppant subsidence and flowback, and (4) multi-phase fluid flow through various-scaled anisotropic natural and man-made fractures. Mathematical and numerical details on how to fully couple the fracture propagation and fluid flow parts are discussed. Hydraulic fracturing and production operation parameters, and properties of the reservoir, fluids, and proppants, are taken into account. The well may be horizontal, vertical, or deviated, as well as open-hole or cemented. The simulator is verified based on benchmarks from the literature and we show its application by simulating fracture network (hydraulic and natural fractures) propagation and production data history matching of a field in China. We also conduct a series of real-data modeling studies with different combinations of

  8. A simulation test of the effectiveness of several methods for error-checking non-invasive genetic data

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2005-01-01

    Non-invasive genetic sampling (NGS) is becoming a popular tool for population estimation. However, multiple NGS studies have demonstrated that polymerase chain reaction (PCR) genotyping errors can bias demographic estimates. These errors can be detected by comprehensive data filters such as the multiple-tubes approach, but this approach is expensive and time consuming as it requires three to eight PCR replicates per locus. Thus, researchers have attempted to correct PCR errors in NGS datasets using non-comprehensive error checking methods, but these approaches have not been evaluated for reliability. We simulated NGS studies with and without PCR error and 'filtered' datasets using non-comprehensive approaches derived from published studies and calculated mark-recapture estimates using CAPTURE. In the absence of data-filtering, simulated error resulted in serious inflations in CAPTURE estimates; some estimates exceeded N by ≥ 200%. When data filters were used, CAPTURE estimate reliability varied with per-locus error (E). At E = 0.01, CAPTURE estimates from filtered data displayed < 5% deviance from error-free estimates. When E was 0.05 or 0.09, some CAPTURE estimates from filtered data displayed biases in excess of 10%. Biases were positive at high sampling intensities; negative biases were observed at low sampling intensities. We caution researchers against using non-comprehensive data filters in NGS studies, unless they can achieve baseline per-locus error rates below 0.05 and, ideally, near 0.01. However, we suggest that data filters can be combined with careful technique and thoughtful NGS study design to yield accurate demographic information. © 2005 The Zoological Society of London.
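
    As a toy illustration of why unfiltered genotyping errors inflate abundance estimates, the sketch below counts unique multilocus genotypes in samples drawn from a known number of individuals while mis-scoring alleles at an assumed per-locus error rate. It is not the CAPTURE mark-recapture analysis used in the study; the genotype structure, the error model, and the helper name observed_unique_genotypes are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_individuals, n_loci, n_samples = 50, 8, 300

# True multilocus genotypes: two allele indices (0-9) at each locus.
true_genotypes = rng.integers(0, 10, size=(n_individuals, n_loci, 2))

def observed_unique_genotypes(per_locus_error):
    """Count distinct observed genotypes when loci are mis-scored with a given probability."""
    sampled = true_genotypes[rng.integers(0, n_individuals, n_samples)].copy()
    errors = np.argwhere(rng.random((n_samples, n_loci)) < per_locus_error)
    # Each error replaces one randomly chosen allele at that locus with a random allele.
    allele = rng.integers(0, 2, size=len(errors))
    sampled[errors[:, 0], errors[:, 1], allele] = rng.integers(0, 10, size=len(errors))
    return len({tuple(g.ravel()) for g in sampled})

for e in (0.0, 0.01, 0.05, 0.09):
    print(f"per-locus error {e:.2f}: {observed_unique_genotypes(e):3d} unique genotypes "
          f"(true N = {n_individuals})")
```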

  9. A PIC-MCC code for simulation of streamer propagation in air

    SciTech Connect

    Chanrion, O. Neubert, T.

    2008-07-20

    ≈3 times the breakdown field. At higher altitudes, the background electric field must be relatively larger to create a similar field in a streamer tip because of increased influence of photoionisation. It is shown that the role of photoionization increases with altitude and the effect is to decrease the space charge fields and increase the streamer propagation velocity. Finally, effects of electrons in the runaway regime on negative streamer dynamics are presented. It is shown that the energetic electrons create enhanced ionization in front of negative streamers. The simulations suggest that the thermal runaway mechanism may operate at lower altitudes and be associated with lightning and thundercloud electrification while the mechanism is unlikely to be important in sprite generation at higher altitudes in the mesosphere.

  10. A PIC-MCC code for simulation of streamer propagation in air

    NASA Astrophysics Data System (ADS)

    Chanrion, O.; Neubert, T.

    2008-07-01

    breakdown field. At higher altitudes, the background electric field must be relatively larger to create a similar field in a streamer tip because of increased influence of photoionisation. It is shown that the role of photoionization increases with altitude and the effect is to decrease the space charge fields and increase the streamer propagation velocity. Finally, effects of electrons in the runaway regime on negative streamer dynamics are presented. It is shown that the energetic electrons create enhanced ionization in front of negative streamers. The simulations suggest that the thermal runaway mechanism may operate at lower altitudes and be associated with lightning and thundercloud electrification while the mechanism is unlikely to be important in sprite generation at higher altitudes in the mesosphere.

  11. Estimation of crosstalk in LED fNIRS by photon propagation Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Iwano, Takayuki; Umeyama, Shinji

    2015-12-01

    fNIRS (functional near-Infrared spectroscopy) can measure brain activity non-invasively and has advantages such as low cost and portability. While the conventional fNIRS has used laser light, LED light fNIRS is recently becoming common in use. Using LED for fNIRS, equipment can be more inexpensive and more portable. LED light, however, has a wider illumination spectrum than laser light, which may change crosstalk between the calculated concentration change of oxygenated and deoxygenated hemoglobins. The crosstalk is caused by difference in light path length in the head tissues depending on wavelengths used. We conducted Monte Carlo simulations of photon propagation in the tissue layers of head (scalp, skull, CSF, gray matter, and white matter) to estimate the light path length in each layers. Based on the estimated path lengths, the crosstalk in fNIRS using LED light was calculated. Our results showed that LED light more increases the crosstalk than laser light does when certain combinations of wavelengths were adopted. Even in such cases, the crosstalk increased by using LED light can be effectively suppressed by replacing the value of extinction coefficients used in the hemoglobin calculation to their weighted average over illumination spectrum.
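
    The crosstalk mechanism described here, wavelength-dependent partial path lengths that differ from those assumed in the modified Beer-Lambert inversion, can be sketched with a 2x2 example. The extinction coefficients and path lengths below are illustrative placeholders, not values from the Monte Carlo simulations of the record.

```python
import numpy as np

# Assumed extinction coefficients for [HbO2, HHb] at two wavelengths (illustrative only).
eps = np.array([[1.05, 1.55],    # ~780 nm
                [2.53, 1.80]])   # ~850 nm

# Partial path lengths in the activated layer at each wavelength [cm]. In the study
# these come from Monte Carlo photon propagation; here they are simply assumed.
path_true = np.array([1.10, 0.95])
path_assumed = np.array([1.00, 1.00])   # what the analysis (wrongly) assumes

# A pure oxy-hemoglobin change, no deoxy change (arbitrary units).
dc_true = np.array([1.0, 0.0])

# Measured optical-density changes follow the modified Beer-Lambert law.
dod = (eps * path_true[:, None]) @ dc_true

# Reconstruction with the assumed path lengths gives imperfect unmixing.
dc_est = np.linalg.solve(eps * path_assumed[:, None], dod)

crosstalk = dc_est[1] / dc_est[0]       # spurious HHb per unit of recovered HbO2
print(f"recovered [HbO2, HHb] = {dc_est.round(3)}, crosstalk = {crosstalk:.3f}")
```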

  12. Benchmark of numerical tools simulating beam propagation and secondary particles in ITER NBI

    NASA Astrophysics Data System (ADS)

    Sartori, E.; Veltri, P.; Dlougach, E.; Hemsworth, R.; Serianni, G.; Singh, M.

    2015-04-01

    Injection of high energy beams of neutral particles is a method for plasma heating in fusion devices. The ITER injector, and its prototype MITICA (Megavolt ITER Injector and Concept Advancement), are large extrapolations from existing devices: therefore numerical modeling is needed to set thermo-mechanical requirements for all beam-facing components. As the power and charge deposition originates from several sources (primary beam, co-accelerated electrons, and secondary production by beam-gas, beam-surface, and electron-surface interaction), the beam propagation along the beam line is simulated by comprehensive 3D models. This paper presents a comparative study between two codes: BTR has been used for several years in the design of the ITER HNB/DNB components; SAMANTHA code was independently developed and includes additional phenomena, such as secondary particles generated by collision of beam particles with the background gas. The code comparison is valuable in the perspective of the upcoming experimental operations, in order to prepare a reliable numerical support to the interpretation of experimental measurements in the beam test facilities. The power density map calculated on the Electrostatic Residual Ion Dump (ERID) is the chosen benchmark, as it depends on the electric and magnetic fields as well as on the evolution of the beam species via interaction with the gas. Finally the paper shows additional results provided by SAMANTHA, like the secondary electrons produced by volume processes accelerated by the ERID fringe-field towards the Cryopumps.

  13. Indirect boundary element method to simulate elastic wave propagation in piecewise irregular and flat regions

    NASA Astrophysics Data System (ADS)

    Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.

    2016-04-01

    A new implementation of IBEM allows simulating the elastic wave propagation in complex configurations made of embedded regions that are either homogeneous with irregular boundaries or flat layered. In an older implementation, each layer of a flat layered region would have been treated as a separate homogeneous region without taking into account the flat boundary information. For both types of regions, the scattered field results from fictitious sources positioned along their boundaries. For the homogeneous regions, the fictitious sources emit as in a full-space and the wave field is given by analytical Green's functions. For flat layered regions, fictitious sources emit as in an unbounded flat layered region and the wave field is given by Green's functions obtained from the Discrete Wave Number (DWN) method. The new implementation then allows reducing the length of the discretized boundaries, but DWN Green's functions require much more computation time than the full space Green's functions. Several optimization steps are then implemented and commented. Validations are presented for 2D and 3D problems. Higher efficiency is achieved in 3D.

  14. Low-cost simulation of guided wave propagation in notched plate-like structures

    NASA Astrophysics Data System (ADS)

    Glushkov, E.; Glushkova, N.; Eremin, A.; Giurgiutiu, V.

    2015-09-01

    The paper deals with the development of low-cost tools for fast computer simulation of guided wave propagation and diffraction in plate-like structures of variable thickness. It is focused on notched surface irregularities, which are the basic model for corrosion damages. Their detection and identification by means of active ultrasonic structural health monitoring technologies assumes the use of guided waves generated and sensed by piezoelectric wafer active sensors as well as the use of laser Doppler vibrometry for surface wave scanning and visualization. To create a theoretical basis for these technologies, analytically based computer models of various complexity have been developed. The simplest models based on the Euler-Bernoulli beam and Kirchhoff plate equations have exhibited a sufficiently wide frequency range of reasonable coincidence with the results obtained within more complex integral equation based models. Being practically inexpensive, they allow one to carry out a fast parametric analysis revealing characteristic features of wave patterns that can be then made more exact using more complex models. In particular, the effect of resonance wave energy transmission through deep notches has been revealed within the plate model and then validated by the integral equation based calculations and experimental measurements.

  15. Finite-difference staggered grids in GPUs for anisotropic elastic wave propagation simulation

    NASA Astrophysics Data System (ADS)

    Rubio, Felix; Hanzich, Mauricio; Farrés, Albert; de la Puente, Josep; María Cela, José

    2014-09-01

    The 3D elastic wave equations can be used to simulate the physics of waves traveling through the Earth more precisely than acoustic approximations. However, this improvement in quality has a counterpart in the cost of the numerical scheme. A possible strategy to mitigate that expense is using specialized, high-performing architectures such as GPUs. Nevertheless, porting and optimizing a code for such a platform require a deep understanding of both the underlying hardware architecture and the algorithm at hand. Furthermore, for very large problems, multiple GPUs must work concurrently, which adds yet another layer of complexity to the codes. In this work, we have tackled the problem of porting and optimizing a 3D elastic wave propagation engine which supports both standard- and fully-staggered grids to multi-GPU clusters. At the single GPU level, we have proposed and evaluated many optimization strategies and adopted the best performing ones for our final code. At the distributed memory level, a domain decomposition approach has been used which allows for good scalability thanks to using asynchronous communications and I/O.
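
    The stencil structure that such GPU ports parallelise can be seen in a 1-D velocity-stress staggered-grid update, sketched below with arbitrary material values and a simple Ricker-like source. This is only a schematic of the numerical scheme, not the ported 3-D standard/fully-staggered elastic code.

```python
import numpy as np

# 1-D elastic velocity-stress scheme on a staggered grid (schematic only).
nx, nt = 400, 800
dx, dt = 5.0, 5e-4                      # grid spacing [m], time step [s]
rho = np.full(nx, 2500.0)               # density [kg/m^3]
mu = np.full(nx, 2500.0 * 2000.0**2)    # shear modulus for vs = 2000 m/s

v = np.zeros(nx)                        # particle velocity at integer nodes
s = np.zeros(nx)                        # stress at half nodes
src, f0, t0 = nx // 2, 25.0, 0.04       # source position and Ricker parameters

for it in range(nt):
    # Velocity update from the staggered stress gradient.
    v[1:] += dt / (rho[1:] * dx) * (s[1:] - s[:-1])
    # Ricker-like source injected as a body force at the centre of the model.
    a = (np.pi * f0 * (it * dt - t0)) ** 2
    v[src] += dt * (1.0 - 2.0 * a) * np.exp(-a)
    # Stress update from the staggered velocity gradient.
    s[:-1] += dt * mu[:-1] / dx * (v[1:] - v[:-1])

print(f"peak |v| after {nt} steps: {np.abs(v).max():.3e}")
```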

  16. Benchmark of numerical tools simulating beam propagation and secondary particles in ITER NBI

    SciTech Connect

    Sartori, E. Veltri, P.; Serianni, G.; Dlougach, E.; Hemsworth, R.; Singh, M.

    2015-04-08

    Injection of high energy beams of neutral particles is a method for plasma heating in fusion devices. The ITER injector, and its prototype MITICA (Megavolt ITER Injector and Concept Advancement), are large extrapolations from existing devices: therefore numerical modeling is needed to set thermo-mechanical requirements for all beam-facing components. As the power and charge deposition originates from several sources (primary beam, co-accelerated electrons, and secondary production by beam-gas, beam-surface, and electron-surface interaction), the beam propagation along the beam line is simulated by comprehensive 3D models. This paper presents a comparative study between two codes: BTR has been used for several years in the design of the ITER HNB/DNB components; SAMANTHA code was independently developed and includes additional phenomena, such as secondary particles generated by collision of beam particles with the background gas. The code comparison is valuable in the perspective of the upcoming experimental operations, in order to prepare a reliable numerical support to the interpretation of experimental measurements in the beam test facilities. The power density map calculated on the Electrostatic Residual Ion Dump (ERID) is the chosen benchmark, as it depends on the electric and magnetic fields as well as on the evolution of the beam species via interaction with the gas. Finally the paper shows additional results provided by SAMANTHA, like the secondary electrons produced by volume processes accelerated by the ERID fringe-field towards the Cryopumps.

  17. Indirect boundary element method to simulate elastic wave propagation in piecewise irregular and flat regions

    NASA Astrophysics Data System (ADS)

    Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.

    2016-06-01

    A new implementation of indirect boundary element method allows simulating the elastic wave propagation in complex configurations made of embedded regions that are homogeneous with irregular boundaries or flat layers. In an older implementation, each layer of a flat layered region would have been treated as a separated homogeneous region without taking into account the flat boundary information. For both types of regions, the scattered field results from fictitious sources positioned along their boundaries. For the homogeneous regions, the fictitious sources emit as in a full-space and the wave field is given by analytical Green's functions. For flat layered regions, fictitious sources emit as in an unbounded flat layered region and the wave field is given by Green's functions obtained from the discrete wavenumber (DWN) method. The new implementation allows then reducing the length of the discretized boundaries but DWN Green's functions require much more computation time than the full-space Green's functions. Several optimization steps are then implemented and commented. Validations are presented for 2-D and 3-D problems. Higher efficiency is achieved in 3-D.

  18. Propagation of Electrical Excitation in a Ring of Cardiac Cells: A Computer Simulation Study

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Karpoukhin, M. G.; Roizen, I. M.; Chudin, E.; Qu, Z.

    1996-01-01

    The propagation of electrical excitation in a ring of cells described by the Noble, Beeler-Reuter (BR), Luo-Rudy I (LR I), and third-order simplified (TOS) mathematical models is studied using computer simulation. For each of the models it is shown that after transition from steady-state circulation to quasi-periodicity achieved by shortening the ring length (RL), the action potential duration (APD) restitution curve becomes a double-valued function and is located below the original (that of an isolated cell) APD restitution curve. The distributions of APD and diastolic interval (DI) along a ring for the entire range of RL corresponding to quasi-periodic oscillations remain periodic with the period slightly different from two RLs. The 'S' shape of the original APD restitution curve determines the appearance of the second steady-state circulation region for short RLs. For all the models and the wide variety of their original APD restitution curves, no transition from quasi-periodicity to chaos was observed.
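
    A kinematic caricature of the circulation studied in this record is the restitution iteration APD_{n+1} = f(DI_n) with DI_n = RL/c - APD_n for a constant conduction speed c. The exponential restitution curve, the parameter values, and the ring lengths in the sketch below are invented for illustration; they do not correspond to the Noble, BR, LR I, or TOS ionic models.

```python
import numpy as np

def apd_restitution(di, apd_max=300.0, tau=60.0):
    """Toy exponential APD restitution curve f(DI), in ms (illustrative only)."""
    return apd_max * (1.0 - np.exp(-di / tau))

def circulate(ring_length, c=0.05, n_beats=400):
    """Iterate APD/DI for a pulse circulating on a ring of length RL (cm).

    With a constant conduction speed c (cm/ms) the cycle length is CL = RL / c
    and the diastolic interval is DI_n = CL - APD_n.
    """
    cl = ring_length / c
    apd, history = 200.0, []
    for _ in range(n_beats):
        di = cl - apd
        if di <= 0.0:                   # the front collides with its own tail
            return None
        apd = apd_restitution(di)
        history.append(apd)
    return np.array(history[-20:])      # late beats: steady state or oscillation

for rl in (40.0, 25.0, 12.0):
    tail = circulate(rl)
    if tail is None:
        print(f"RL = {rl:4.1f} cm: conduction block")
    else:
        print(f"RL = {rl:4.1f} cm: APD over last beats in [{tail.min():.1f}, {tail.max():.1f}] ms")
```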

  19. Simulation of crack propagation in fiber-reinforced concrete by fracture mechanics

    SciTech Connect

    Zhang Jun; Li, Victor C

    2004-02-01

    Mode I crack propagation in fiber-reinforced concrete (FRC) is simulated by a fracture mechanics approach. A superposition method is applied to calculate the crack tip stress intensity factor. The model relies on the fracture toughness of hardened cement paste (K_IC) and the crack bridging law, so-called stress-crack width (σ-δ) relationship of the material, as the fundamental material parameters for model input. As two examples, experimental data from steel FRC beams under three-point bending load are analyzed with the present fracture mechanics model. A good agreement has been found between model predictions and experimental results in terms of flexural stress-crack mouth opening displacement (CMOD) diagrams. These analyses and comparisons confirm that the structural performance of concrete and FRC elements, such as beams in bending, can be predicted by the simple fracture mechanics model as long as the related material properties, K_IC and (σ-δ) relationship, are known.

  20. Simulation of poro-elastic seismic wave propagation in axis-symmetric open and cased boreholes

    NASA Astrophysics Data System (ADS)

    Sidler, R.; Holliger, K.; Carcione, J. M.

    2012-04-01

    Geophysical constraints with regard to permeability are particularly valuable because they tend to bridge the gap in terms of spatial coverage and resolution that exists for corresponding conventional hydrological techniques, such as laboratory measurements and pumping tests. A prominent geophysical technique for estimating the permeability along boreholes is based on the inversion of Stoneley waves. This technique is by now well established for hydrocarbon exploration purposes, where the corresponding measurements are carried out in open boreholes and in consolidated sediments. Conversely, the sensitivity and potential of Stoneley-wave-based permeability estimates for shallow hydrological applications is still largely unknown. As opposed to their counterparts in hydrocarbon exploration, shallow boreholes tend to be located in unconsolidated alluvial sediments and hence tend to be cased with perforated or non-perforated plastic tubes. The corresponding effects on Stoneley wave attenuation and its sensitivity to in situ permeability of the formation behind the casing are largely unknown and can only be assessed through realistic modeling. To this end, we present a pseudo-spectral numerical modeling code in cylindrical coordinates that allows for the accurate simulation of complex seismic wave propagation phenomena in realistic surficial borehole environments. We employ Fourier operators along the borehole axis and Chebyshev operators in the radial direction. The Chebyshev operators allow for the use of individual computational sub-domains for the fluid-filled, acoustic borehole, the poro-elastic casing, and the poro-elastic formation surrounding the borehole. These computational sub-domains are connected through a domain decomposition method, which is needed to correctly account for the governing boundary conditions and also allows for substantially enhancing the computational efficiency of our simulations.

  1. Polynomial chaos expansions for uncertainty propagation and moment independent sensitivity analysis of seawater intrusion simulations

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Simmons, Craig T.

    2015-01-01

    Real world models of seawater intrusion (SWI) require high computational efforts. This creates computational difficulties for the uncertainty propagation (UP) analysis of these models due to the need for repeated numerical simulations in order to adequately capture the underlying statistics that describe the uncertainty in model outputs. Moreover, despite the obvious advantages of moment-independent global sensitivity analysis (SA) methods, these methods have rarely been employed for SWI and other complex groundwater models. The reason is that moment-independent global SA methods involve repeated UP analysis which further becomes computationally demanding. This study proposes the use of non-intrusive polynomial chaos expansions (PCEs) as a means to significantly accelerate UP analysis in SWI numerical modeling studies and shows that despite the highly non-linear and non-smooth input/output relationship that exists in SWI models, non-intrusive PCEs provide a reliable and yet computationally efficient surrogate of the original numerical model. The study illustrates that for the considered two and six dimensional UP problems, PCEs offer a more accurate estimation of the statistics describing the uncertainty in model outputs compared to Monte Carlo simulations based on the original numerical model. This study also shows that the use of non-intrusive PCEs in the estimation of the moment-independent sensitivity indices (i.e. delta indices) decreases the computational time by several orders of magnitude without causing significant loss of accuracy. The use of non-intrusive PCEs for the generation of SWI hazard maps is proposed to extend the practical applications of UP analysis in coastal aquifer management studies.
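
    A minimal sketch of a non-intrusive PCE surrogate, assuming two independent standard-normal inputs and a cheap toy model in place of the seawater intrusion simulator: the expansion coefficients are obtained by least-squares regression on a total-degree-3 Hermite basis and the surrogate is then sampled for uncertainty propagation. The model, basis degree, and sample sizes are arbitrary choices, not those of the study.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(4)

def model(x):
    """Cheap toy model with two standard-normal inputs (stands in for the SWI simulator)."""
    return np.exp(0.3 * x[:, 0]) + 0.5 * x[:, 0] * x[:, 1] + 0.1 * x[:, 1] ** 3

# Total-degree-3 multi-indices for two random variables.
multi_indices = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]

def psi(x, idx):
    """Basis function: product of probabilists' Hermite polynomials He_i(x1) * He_j(x2)."""
    ci = np.zeros(idx[0] + 1); ci[-1] = 1.0
    cj = np.zeros(idx[1] + 1); cj[-1] = 1.0
    return hermeval(x[:, 0], ci) * hermeval(x[:, 1], cj)

# Experimental design: a modest number of full-model runs, then least-squares regression.
x_train = rng.standard_normal((200, 2))
A = np.column_stack([psi(x_train, idx) for idx in multi_indices])
coeffs, *_ = np.linalg.lstsq(A, model(x_train), rcond=None)

# The PCE surrogate is cheap to evaluate, so uncertainty propagation becomes easy.
x_mc = rng.standard_normal((100_000, 2))
y_pce = np.column_stack([psi(x_mc, idx) for idx in multi_indices]) @ coeffs
print("surrogate mean/std:", y_pce.mean().round(3), y_pce.std().round(3))
print("reference mean/std:", model(x_mc).mean().round(3), model(x_mc).std().round(3))
```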

  2. Rounding errors may be beneficial for simulations of atmospheric flow: results from the forced 1D Burgers equation

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dolaptchiev, Stamen I.

    2015-08-01

    Inexact hardware can reduce computational cost, due to a reduced energy demand and an increase in performance, and can therefore allow higher-resolution simulations of the atmosphere within the same budget for computation. We investigate the use of emulated inexact hardware for a model of the randomly forced 1D Burgers equation with stochastic sub-grid-scale parametrisation. Results show that numerical precision can be reduced to only 12 bits in the significand of floating-point numbers—instead of 52 bits for double precision—with no serious degradation in results for all diagnostics considered. Simulations that use inexact hardware on a grid with higher spatial resolution show results that are significantly better compared to simulations in double precision on a coarser grid at similar estimated computing cost. In the second half of the paper, we compare the forcing due to rounding errors to the stochastic forcing of the stochastic parametrisation scheme that is used to represent sub-grid-scale variability in the standard model setup. We argue that stochastic forcings of stochastic parametrisation schemes can provide a first guess for the upper limit of the magnitude of rounding errors of inexact hardware that can be tolerated by model simulations and suggest that rounding errors can be hidden in the distribution of the stochastic forcing. We present an idealised model setup that replaces the expensive stochastic forcing of the stochastic parametrisation scheme with an engineered rounding error forcing and provides results of similar quality. The engineered rounding error forcing can be used to create a forecast ensemble of similar spread compared to an ensemble based on the stochastic forcing. We conclude that rounding errors are not necessarily degrading the quality of model simulations. Instead, they can be beneficial for the representation of sub-grid-scale variability.
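
    Reduced numerical precision of the kind discussed here can be emulated in software by rounding the significand of double-precision values to a smaller number of bits, as in the sketch below. This rounding of stored values is only a crude stand-in for emulated inexact hardware, which would also perturb every arithmetic operation; the helper round_significand is a hypothetical name.

```python
import numpy as np

def round_significand(x, bits):
    """Round the significand of float64 values to the given number of bits.

    A simple software emulation of reduced precision: each value is split into
    significand and exponent, the significand is rounded to `bits` bits, and the
    value is reassembled. Real inexact hardware would also perturb every operation.
    """
    mantissa, exponent = np.frexp(np.asarray(x, dtype=np.float64))
    mantissa = np.round(mantissa * 2.0**bits) / 2.0**bits
    return np.ldexp(mantissa, exponent)

x = np.random.default_rng(5).random(1000)
for bits in (52, 23, 12):
    err = np.abs(round_significand(x, bits) - x) / np.abs(x)
    print(f"{bits:2d} significand bits: max relative rounding error = {err.max():.2e}")
```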

  3. Three dimensional image-based simulation of ultrasonic wave propagation in polycrystalline metal using phase-field modeling.

    PubMed

    Nakahata, K; Sugahara, H; Barth, M; Köhler, B; Schubert, F

    2016-04-01

    When modeling ultrasonic wave propagation in metals, it is important to introduce mesoscopic crystalline structures because the anisotropy of the crystal structure and the heterogeneity of grains disturb ultrasonic waves. In this paper, a three-dimensional (3D) polycrystalline structure generated by multiphase-field modeling was introduced to ultrasonic simulation for nondestructive testing. 3D finite-element simulations of ultrasonic waves were validated and compared with visualization results obtained from laser Doppler vibrometer measurements. The simulation results and measurements showed good agreement with respect to the velocity and front shape of the pressure wave, as well as multiple scattering due to grains. This paper discussed the applicability of a transversely isotropic approach to ultrasonic wave propagation in a polycrystalline metal with columnar structures. PMID:26773789

  4. A web-based platform for simulating seismic wave propagation in 3D shallow Earth models with DEM surface topography

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Friederich, Wolfgang

    2016-04-01

    Realistic shallow seismic wave propagation simulation is an important tool for studying induced seismicity (e.g., during geothermal energy development). However over a long time, there is a significant problem which constrains computational seismologists from performing a successful simulation conveniently: pre-processing. Conventional pre-processing has often turned out to be inefficient and unrobust because of the miscellaneous operations, considerable complexity and insufficiency of available tools. An integrated web-based platform for shallow seismic wave propagation simulation has been built. It is aiming at providing a user-friendly pre-processing solution, and cloud-based simulation abilities. The main features of the platform for the user include: revised digital elevation model (DEM) retrieving and processing mechanism; generation of multi-layered 3D shallow Earth model geometry (the computational domain) with user specified surface topography based on the DEM; visualization of the geometry before the simulation; a pipeline from geometry to fully customizable hexahedral element mesh generation; customization and running the simulation on our HPC; post-processing and retrieval of the results over cloud. Regarding the computational aspect, currently the widely accepted specfem3D is chosen as the computational package; packages using different types of elements can be integrated as well in the future. According to our trial simulation experiments, this web-based platform has produced accurate waveforms while significantly simplifying and enhancing the pre-processing and improving the simulation success rate.

  5. Data on simulated interpersonal touch, individual differences and the error-related negativity

    PubMed Central

    Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J.; Koole, Sander L.

    2016-01-01

    The dataset includes data from the electroencephalogram study reported in our paper: ‘Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity’ (doi:10.1016/j.neulet.2016.01.044) (Tjew-A-Sin et al., 2016) [1]. The data was collected at the psychology laboratories at the Vrije Universiteit Amsterdam in 2012 among a Dutch-speaking student sample. The dataset consists of the measures described in the paper, as well as additional (exploratory) measures including the Five-Factor Personality Inventory, the Connectedness to Nature Scale, the Rosenberg Self-esteem Scale and a scale measuring life stress. The data can be used for replication purposes, meta-analyses, and exploratory analyses, as well as cross-cultural comparisons of touch and/or ERN effects. The authors also welcome collaborative research based on re-analyses of the data. The data described is available at a data repository called the DANS archive: http://persistent-identifier.nl/?identifier=urn:nbn:nl:ui:13-tzbk-gg. PMID:27158644

  6. Data on simulated interpersonal touch, individual differences and the error-related negativity.

    PubMed

    Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J; Koole, Sander L

    2016-06-01

    The dataset includes data from the electroencephalogram study reported in our paper: 'Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity' (doi:10.1016/j.neulet.2016.01.044) (Tjew-A-Sin et al., 2016) [1]. The data was collected at the psychology laboratories at the Vrije Universiteit Amsterdam in 2012 among a Dutch-speaking student sample. The dataset consists of the measures described in the paper, as well as additional (exploratory) measures including the Five-Factor Personality Inventory, the Connectedness to Nature Scale, the Rosenberg Self-esteem Scale and a scale measuring life stress. The data can be used for replication purposes, meta-analyses, and exploratory analyses, as well as cross-cultural comparisons of touch and/or ERN effects. The authors also welcome collaborative research based on re-analyses of the data. The data described is available at a data repository called the DANS archive: http://persistent-identifier.nl/?identifier=urn:nbn:nl:ui:13-tzbk-gg. PMID:27158644

  7. A new post-quantization constrained propagator for rigid tops for use in path integral quantum simulations

    SciTech Connect

    Guillon, Grégoire; Zeng, Tao; Roy, Pierre-Nicholas

    2013-11-14

    In this paper, we extend the previously introduced Post-Quantization Constraints (PQC) procedure [G. Guillon, T. Zeng, and P.-N. Roy, J. Chem. Phys. 138, 184101 (2013)] to construct approximate propagators and energy estimators for different rigid body systems, namely, the spherical, symmetric, and asymmetric tops. These propagators are for use in Path Integral simulations. A thorough discussion of the underlying geometrical concepts is given. Furthermore, a detailed analysis of the convergence properties of the density as well as the energy estimators towards their exact counterparts is presented along with illustrative numerical examples. The Post-Quantization Constraints approach can yield converged results and is a practical alternative to so-called sum over states techniques, where one has to expand the propagator as a sum over a complete set of rotational stationary states [as in E. G. Noya, C. Vega, and C. McBride, J. Chem. Phys. 134, 054117 (2011)] because of its modest memory requirements.

  8. A new post-quantization constrained propagator for rigid tops for use in path integral quantum simulations

    NASA Astrophysics Data System (ADS)

    Guillon, Grégoire; Zeng, Tao; Roy, Pierre-Nicholas

    2013-11-01

    In this paper, we extend the previously introduced Post-Quantization Constraints (PQC) procedure [G. Guillon, T. Zeng, and P.-N. Roy, J. Chem. Phys. 138, 184101 (2013)] to construct approximate propagators and energy estimators for different rigid body systems, namely, the spherical, symmetric, and asymmetric tops. These propagators are for use in Path Integral simulations. A thorough discussion of the underlying geometrical concepts is given. Furthermore, a detailed analysis of the convergence properties of the density as well as the energy estimators towards their exact counterparts is presented along with illustrative numerical examples. The Post-Quantization Constraints approach can yield converged results and is a practical alternative to so-called sum over states techniques, where one has to expand the propagator as a sum over a complete set of rotational stationary states [as in E. G. Noya, C. Vega, and C. McBride, J. Chem. Phys. 134, 054117 (2011)] because of its modest memory requirements.

  9. On the Theory and Numerical Simulation of Cohesive Crack Propagation with Application to Fiber-Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Rudraraju, Siva Shankar; Garikipati, Krishna; Waas, Anthony M.; Bednarcyk, Brett A.

    2013-01-01

    The phenomenon of crack propagation is among the predominant modes of failure in many natural and engineering structures, often leading to severe loss of structural integrity and catastrophic failure. Thus, the ability to understand and a priori simulate the evolution of this failure mode has been one of the cornerstones of applied mechanics and structural engineering and is broadly referred to as "fracture mechanics." The work reported herein focuses on extending this understanding, in the context of through-thickness crack propagation in cohesive materials, through the development of a continuum-level multiscale numerical framework, which represents cracks as displacement discontinuities across a surface of zero measure. This report presents the relevant theory, mathematical framework, numerical modeling, and experimental investigations of through-thickness crack propagation in fiber-reinforced composites using the Variational Multiscale Cohesive Method (VMCM) developed by the authors.

  10. Simulation of ultra-high energy photon propagation with PRESHOWER 2.0

    NASA Astrophysics Data System (ADS)

    Homola, P.; Engel, R.; Pysz, A.; Wilczyński, H.

    2013-05-01

    In this paper we describe a new release of the PRESHOWER program, a tool for Monte Carlo simulation of propagation of ultra-high energy photons in the magnetic field of the Earth. The PRESHOWER program is designed to calculate magnetic pair production and bremsstrahlung and should be used together with other programs to simulate extensive air showers induced by photons. The main new features of the PRESHOWER code include a much faster algorithm applied in the procedures of simulating the processes of gamma conversion and bremsstrahlung, update of the geomagnetic field model, and a minor correction. The new simulation procedure increases the flexibility of the code so that it can also be applied to other magnetic field configurations such as, for example, encountered in the vicinity of the sun or neutron stars. Program summary: Program title: PRESHOWER 2.0 Catalog identifier: ADWG_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3968 No. of bytes in distributed program, including test data, etc.: 37198 Distribution format: tar.gz Programming language: C, FORTRAN 77. Computer: Intel-Pentium based PC. Operating system: Linux or Unix. RAM: < 100 kB Classification: 1.1. Does the new version supercede the previous version?: Yes Catalog identifier of previous version: ADWG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 173 (2005) 71 Nature of problem: Simulation of a cascade of particles initiated by UHE photon in magnetic field. Solution method: The primary photon is tracked until its conversion into an e+ e- pair. If conversion occurs each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons). Reasons for

  11. Color-Coded Prefilled Medication Syringes Decrease Time to Delivery and Dosing Error in Simulated Emergency Department Pediatric Resuscitations

    PubMed Central

    Moreira, Maria E.; Hernandez, Caleb; Stevens, Allen D.; Jones, Seth; Sande, Margaret; Blumen, Jason R.; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S.

    2016-01-01

    Study objective: The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. Methods: We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Results: Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). Conclusion: A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. PMID:25701295

  12. Modeling of Present-Day Atmosphere and Ocean Non-Tidal De-Aliasing Errors for Future Gravity Mission Simulations

    NASA Astrophysics Data System (ADS)

    Bergmann-Wolf, I.; Dobslaw, H.; Mayer-Gürr, T.

    2015-12-01

    A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency (Dobslaw et al., 2015) is now available for the years 1995 -- 2006. The data-set contains realizations of (i) errors at large spatial scales assessed individually for periods between 10 -- 30, 3 -- 10, and 1 -- 3 days, the S1 atmospheric tide, and sub-diurnal periods; (ii) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (iii) errors due to physical processes not represented in currently available de-aliasing products. The error magnitudes for each of the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. In order to demonstrate the plausibility of the error magnitudes chosen, we perform a variance component estimation based on daily GRACE normal equations from the ITSG-Grace2014 global gravity field series recently published by the University of Graz. All 12 years of the error model are used to calculate empirical error variance-covariance matrices describing the systematic dependencies of the errors both in time and in space individually for five continental and four oceanic regions, and daily GRACE normal equations are subsequently employed to obtain pre-factors for each of those matrices. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, errors prepared for the updated ESM are found to be largely consistent with noise of a similar stochastic character contained in present-day GRACE solutions. Differences and similarities identified for all of the nine regions considered will be discussed in detail during the presentation. Reference: Dobslaw, H., I. Bergmann-Wolf, R. Dill, E. Forootan, V. Klemann, J. Kusche, and I. Sasgen (2015), The updated ESA Earth System Model for future gravity mission simulation studies, J. Geod., doi:10.1007/s00190-014-0787-8.

  13. Mechanism of allosteric propagation across a β-sheet structure investigated by molecular dynamics simulations.

    PubMed

    Interlandi, Gianluca; Thomas, Wendy E

    2016-07-01

    The bacterial adhesin FimH consists of an allosterically regulated mannose-binding lectin domain and a covalently linked inhibitory pilin domain. Under normal conditions, the two domains are bound to each other, and FimH interacts weakly with mannose. However, under tensile force, the domains separate and the lectin domain undergoes conformational changes that strengthen its bond with mannose. Comparison of the crystallographic structures of the low and the high affinity state of the lectin domain reveals conformational changes mainly in the regulatory inter-domain region, the mannose binding site and a large β sheet that connects the two distally located regions. Here, molecular dynamics simulations investigated how conformational changes are propagated within and between different regions of the lectin domain. It was found that the inter-domain region moves towards the high affinity conformation as it becomes more compact and buries exposed hydrophobic surface after separation of the pilin domain. The mannose binding site was more rigid in the high affinity state, which prevented water penetration into the pocket. The large central β sheet demonstrated a soft spring-like twisting. Its twisting motion was moderately correlated to fluctuations in both the regulatory and the binding region, whereas a weak correlation was seen in a direct comparison of these two distal sites. The results suggest a so called "population shift" model whereby binding of the lectin domain to either the pilin domain or mannose locks the β sheet in a rather twisted or flat conformation, stabilizing the low or the high affinity state, respectively. Proteins 2016; 84:990-1008. © 2016 The Authors. Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27090060

  14. Influence of porosity, pore size, and cortical thickness on the propagation of ultrasonic waves guided through the femoral neck cortex: a simulation study.

    PubMed

    Rohde, Kerstin; Rohrbach, Daniel; Glüer, Claus-C; Laugier, Pascal; Grimal, Quentin; Raum, Kay; Barkmann, Reinhard

    2014-02-01

    The femoral neck is a common fracture site in elderly people. The cortical shell is thought to be the major contributor to the mechanical competence of the femoral neck, but its microstructural parameters are not sufficiently accessible under in vivo conditions with current X-ray-based methods. To systematically investigate the influences of pore size, porosity, and thickness of the femoral neck cortex on the propagation of ultrasound, we developed 96 different bone models (combining 6 different pore sizes with 4 different porosities and 4 different thicknesses) and simulated the ultrasound propagation using a finite-difference time-domain algorithm. The simulated single-element emitter and receiver array consisting of 16 elements (8 inferior and 8 superior) were placed at anterior and posterior sides of the bone, respectively (transverse transmission). From each simulation, we analyzed, among the waveforms collected by the inferior receiver elements, the one with the shortest time of flight. The first arriving signal of this waveform, which is associated with the wave traveling through the cortical shell, was then evaluated for three waveform characteristics (TOF: time point of the first point of inflection of the received signal, Δt: difference between the time point at which the signal first crosses the zero baseline and TOF, and A: amplitude of the first extreme of the first arriving signal). From the analyses of these waveform characteristics, we were able to develop multivariate models to predict pore size, porosity, and cortical thickness, corresponding to the 96 different bone models, with remaining errors in the range of 50 μm for pore size, 1.5% for porosity, and 0.17 mm for cortical thickness. PMID:24474136
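
    A hedged sketch of the final step described above: fitting a multivariate linear model that maps the three waveform features (TOF, Δt, A) to a microstructural target such as cortical thickness. The feature and target arrays are random placeholders, not the study's data.

        import numpy as np

        def fit_linear_model(features, targets):
            """Least-squares fit of targets ~ intercept + TOF + dt + A."""
            X = np.column_stack([np.ones(len(features)), features])
            coef, *_ = np.linalg.lstsq(X, targets, rcond=None)
            return coef

        def predict(coef, features):
            return np.column_stack([np.ones(len(features)), features]) @ coef

        rng = np.random.default_rng(0)
        features = rng.random((96, 3))                # columns: TOF, dt, A (placeholders)
        thickness = rng.uniform(2.0, 5.0, 96)         # target, e.g. cortical thickness in mm
        coef = fit_linear_model(features, thickness)
        rmse = np.sqrt(np.mean((predict(coef, features) - thickness) ** 2))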

  15. Computational study of nonlinear plasma waves. I - Simulation model and monochromatic wave propagation. II - Sideband instability and satellite growth

    NASA Technical Reports Server (NTRS)

    Matsuda, Y.; Crawford, F. W.

    1975-01-01

    A hybrid plasma simulation model is described and applied to the study of electrostatic wave propagation in a one-dimensional Maxwellian plasma with periodic boundary conditions. The model employs a cloud-in-cell scheme which can drastically reduce the fluctuations in particle simulation models and greatly ease the computational difficulties of the Vlasov equation approach. A grid in velocity space is introduced and the particles are represented by points in the x-v phase space. The model is tested first in the absence of an applied signal and then in the presence of a small-amplitude perturbation. The method is also used to study propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories.
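
    A hedged, one-dimensional sketch of the cloud-in-cell weighting that such particle models use to reduce fluctuations: each particle's charge is shared linearly between its two neighbouring grid points (periodic grid; all values illustrative).

        import numpy as np

        def cic_deposit(positions, charges, n_cells, dx):
            """Deposit particle charges onto a periodic 1-D grid with linear (CIC) weights."""
            s = positions / dx
            left = np.floor(s).astype(int) % n_cells
            frac = s - np.floor(s)
            rho = np.zeros(n_cells)
            np.add.at(rho, left, charges * (1.0 - frac))
            np.add.at(rho, (left + 1) % n_cells, charges * frac)
            return rho / dx                            # charge density per unit length

        rng = np.random.default_rng(0)
        rho = cic_deposit(rng.uniform(0.0, 1.0, 10000), np.full(10000, 1e-4), 64, 1.0 / 64)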

  16. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asympotical efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    PubMed Central

    Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J

    2009-01-01

    Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecological sampled Anopheles aquatic habitat covariates. A test for diagnostic checking error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecological sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e. negative binomial regression). The eigenfunction values from the spatial
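
    A hedged sketch of the global Moran's I index referred to above, for a vector of sampled habitat covariate values x and a spatial weights matrix W (both placeholders here); the eigenfunction spatial filtering itself is not reproduced.

        import numpy as np

        def morans_i(x, W):
            """Global Moran's I for values x and a spatial weight matrix W (zero diagonal)."""
            x = np.asarray(x, dtype=float)
            z = x - x.mean()
            return len(x) / W.sum() * (W * np.outer(z, z)).sum() / (z @ z)

        rng = np.random.default_rng(0)
        W = rng.integers(0, 2, (30, 30)).astype(float)
        W = np.triu(W, 1)
        W = W + W.T                                    # symmetric 0/1 weights, zero diagonal
        I = morans_i(rng.normal(size=30), W)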

  17. PIV uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Sciacchitano, Andrea; Wieneke, Bernhard

    2016-08-01

    This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5–10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
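
    A hedged sketch of the statistical part of the propagation described above: the random uncertainty of a mean velocity scales with the inverse square root of the effective number of independent samples. The effective sample size is passed in explicitly here; estimating it from the flow's integral time scale is left out, and the velocity record is a placeholder.

        import numpy as np

        def uncertainty_of_mean(u, n_eff=None):
            """Random uncertainty of the sample mean of a velocity time series u."""
            u = np.asarray(u, dtype=float)
            n_eff = len(u) if n_eff is None else n_eff   # uncorrelated samples by default
            return u.std(ddof=1) / np.sqrt(n_eff)

        u = np.random.default_rng(0).normal(10.0, 0.5, 2000)   # placeholder velocity record
        U_mean = uncertainty_of_mean(u, n_eff=500)             # e.g. correlated samples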

  18. A Lattice-Boltzmann model to simulate diffractive nonlinear ultrasound beam propagation in a dissipative fluid medium

    NASA Astrophysics Data System (ADS)

    Abdi, Mohamad; Hajihasani, Mojtaba; Gharibzadeh, Shahriar; Tavakkoli, Jahan

    2012-12-01

    Ultrasound waves have been widely used in diagnostic and therapeutic medical applications. Accurate and effective simulation of ultrasound beam propagation and its interaction with tissue has been proved to be important. The nonlinear nature of the ultrasound beam propagation, especially in the therapeutic regime, plays an important role in the mechanisms of interaction with tissue. There are three main approaches in current computational fluid dynamics (CFD) methods to model and simulate nonlinear ultrasound beams: macroscopic, mesoscopic and microscopic approaches. In this work, a mesoscopic CFD method based on the Lattice-Boltzmann model (LBM) was investigated. In the developed method, the Boltzmann equation is evolved to simulate the flow of a Newtonian fluid with the collision model instead of solving the Navier-Stokes, continuity and state equations which are used in conventional CFD methods. The LBM has some prominent advantages over conventional CFD methods, including: (1) its parallel computational nature; (2) taking microscopic boundaries into account; and (3) capability of simulating in porous and inhomogeneous media. In our proposed method, the propagating medium is discretized with a square grid in 2 dimensions with 9 velocity vectors for each node. Using the developed model, the nonlinear distortion and shock front development of a finite-amplitude diffractive ultrasonic beam in a dissipative fluid medium was computed and validated against the published data. The results confirm that the LBM is an accurate and effective approach to model and simulate nonlinearity in finite-amplitude ultrasound beams with Mach numbers of up to 0.01, which, among others, falls within the range of the therapeutic ultrasound regime such as high intensity focused ultrasound (HIFU) beams. A comparison between the HIFU nonlinear beam simulations using the proposed model and pseudospectral methods in a 2D geometry is presented.
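
    A hedged, minimal D2Q9 lattice-Boltzmann update (BGK collision followed by periodic streaming), showing the kind of mesoscopic step such a method builds on; it is not the authors' acoustic model and omits sources, absorbing boundaries, and the treatment of nonlinearity and dissipation they describe.

        import numpy as np

        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
            usq = ux ** 2 + uy ** 2
            return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

        def lbm_step(f, tau):
            rho = f.sum(axis=0)
            ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
            f = f - (f - equilibrium(rho, ux, uy)) / tau       # BGK collision
            for i, (cx, cy) in enumerate(c):                   # streaming, periodic domain
                f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
            return f

        f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
        f[:, 32, 32] *= 1.01                                   # small initial pressure pulse
        for _ in range(100):
            f = lbm_step(f, tau=0.6)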

  19. Using Simulation to Improve First-Year Pharmacy Students’ Ability to Identify Medication Errors Involving the Top 100 Prescription Medications

    PubMed Central

    Awdishu, Linda; Namba, Jennifer

    2016-01-01

    Objective. To evaluate first-year pharmacy students’ ability to identify medication errors involving the top 100 prescription medications. Design. In the first quarter of a 3-quarter pharmacy self-care course, a didactic lecture on the most common prescribing and dispensing prescription errors was presented to first-year pharmacy students (P1) in preparation for a prescription review simulation done individually and as a group. In the following quarter, they were given a formal prescription review workshop before a second simulation involving individual and group review of a different set of prescriptions. Students were evaluated based on the number of correctly checked prescriptions and a self-assessment of their confidence in reviewing prescriptions. Assessment. All 63 P1 students completed the prescription review simulations. The individual scores did not significantly change, but group scores improved from 79 (16.2%) in the fall quarter to 98.6 (4.7%) in the winter quarter. Students perceived improvement of their prescription checking skills, specifically in their ability to fill a prescription on their own, identify prescribing and dispensing errors, and perform pharmaceutical calculations. Conclusion. A prescription review module consisting of a didactic lecture, workshop and simulation-based methods to teach prescription analysis was successful at improving first year pharmacy students’ knowledge, confidence, and application of these skills. PMID:27402989

  20. Using Simulation to Improve First-Year Pharmacy Students' Ability to Identify Medication Errors Involving the Top 100 Prescription Medications.

    PubMed

    Atayee, Rabia S; Awdishu, Linda; Namba, Jennifer

    2016-06-25

    Objective. To evaluate first-year pharmacy students' ability to identify medication errors involving the top 100 prescription medications. Design. In the first quarter of a 3-quarter pharmacy self-care course, a didactic lecture on the most common prescribing and dispensing prescription errors was presented to first-year pharmacy students (P1) in preparation for a prescription review simulation done individually and as a group. In the following quarter, they were given a formal prescription review workshop before a second simulation involving individual and group review of a different set of prescriptions. Students were evaluated based on the number of correctly checked prescriptions and a self-assessment of their confidence in reviewing prescriptions. Assessment. All 63 P1 students completed the prescription review simulations. The individual scores did not significantly change, but group scores improved from 79 (16.2%) in the fall quarter to 98.6 (4.7%) in the winter quarter. Students perceived improvement of their prescription checking skills, specifically in their ability to fill a prescription on their own, identify prescribing and dispensing errors, and perform pharmaceutical calculations. Conclusion. A prescription review module consisting of a didactic lecture, workshop and simulation-based methods to teach prescription analysis was successful at improving first year pharmacy students' knowledge, confidence, and application of these skills. PMID:27402989

  1. Dual simulations of fluid flow and seismic wave propagation in a fractured network: effects of pore pressure on seismic signature

    NASA Astrophysics Data System (ADS)

    Vlastos, S.; Liu, E.; Main, I. G.; Schoenberg, M.; Narteau, C.; Li, X. Y.; Maillot, B.

    2006-08-01

    Fluid flow in the Earth's crust plays an important role in a number of geological processes. In relatively tight rock formations such flow is usually controlled by open macrofractures, with significant implications for ground water flow and hydrocarbon reservoir management. The movement of fluids in the fractured media will result in changes in the pore pressure and consequently will cause changes to the effective stress, traction and elastic properties. The main purpose of this study is to numerically examine the effect of pore pressure changes on seismic wave propagation (i.e. the effects of pore pressures on amplitude, arrival time, frequency content). This is achieved by using dual simulations of fluid flow and seismic propagation in a common 2-D fracture network. Note that the dual simulations are performed separately, as coupled simulations of fluid flow and seismic wave propagation in such a fracture network are not possible because the timescales of fluid flow and wave propagation are considerably different (typically, fluid flow occurs over hours, whereas wave propagation occurs over seconds). The flow simulation updates the pore pressure at consecutive time steps, and thus the elastic properties of the rock, for the seismic modelling. In other words, during each time step of the flow simulations, we compute the elastic response corresponding to the pore pressure distribution. The relationship between pore pressure and fractures is linked via an empirical relationship given by Schoenberg and the elastic response of fractures is computed using the equivalent medium theory described by Hudson and Liu. Therefore, we can evaluate the possibility of inferring the changes of fluid properties directly from seismic data. Our results indicate that P waves are not as sensitive to pore pressure changes as S and coda (or scattered) waves. The increase in pore pressure causes a shift of the energy towards lower frequencies, as shown from the spectrum (as a result of scattering

  2. Quantification of errors in large-eddy simulations of a spatially evolving mixing layer using polynomial chaos

    SciTech Connect

    Meldi, M.; Sagaut, P.; Salvetti, M. V.

    2012-03-15

    A stochastic approach based on generalized polynomial chaos (gPC) is used to quantify the error in large-eddy simulation (LES) of a spatially evolving mixing layer flow and its sensitivity to different simulation parameters, viz., the grid stretching in the streamwise and lateral directions and the subgrid-scale (SGS) Smagorinsky model constant (C{sub S}). The error is evaluated with respect to the results of a highly resolved LES and for different quantities of interest, namely, the mean streamwise velocity, the momentum thickness, and the shear stress. A typical feature of the considered spatially evolving flow is the progressive transition from a laminar regime, highly dependent on the inlet conditions, to a fully developed turbulent one. Therefore, the computational domain is divided in two different zones (inlet dependent and fully turbulent) and the gPC error analysis is carried out for these two zones separately. An optimization of the parameters is also carried out for both these zones. For all the considered quantities, the results point out that the error is mainly governed by the value of the C{sub S} constant. At the end of the inlet-dependent zone, a strong coupling between the normal stretching ratio and the C{sub S} value is observed. The error sensitivity to the parameter values is significantly larger in the inlet-dependent upstream region; however, low-error values can be obtained in this region for all the considered physical quantities by an ad hoc tuning of the parameters. Conversely, in the turbulent regime the error is globally lower and less sensitive to the parameter variations, but it is more difficult to find a set of parameter values leading to optimal results for all the analyzed physical quantities. A similar analysis is also carried out for the dynamic Smagorinsky model, by varying the grid stretching ratios. Comparing the databases generated with the different subgrid-scale models, it is possible to observe that the error cost
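
    A hedged, one-parameter sketch of a non-intrusive generalized polynomial chaos surrogate: model evaluations at sampled values of an uncertain parameter (standing in for, e.g., the Smagorinsky constant mapped onto [-1, 1]) are fitted with a Legendre expansion, whose leading coefficient estimates the output mean. model_output() is a cheap placeholder, not an LES.

        import numpy as np
        from numpy.polynomial import legendre

        def model_output(xi):                      # placeholder for the expensive simulation
            return 1.0 + 0.3 * xi + 0.1 * np.sin(3.0 * xi)

        rng = np.random.default_rng(0)
        xi = rng.uniform(-1.0, 1.0, 50)            # uncertain parameter on the Legendre support
        y = model_output(xi)

        coeffs = legendre.legfit(xi, y, deg=4)     # least-squares gPC coefficients
        surrogate = lambda x: legendre.legval(x, coeffs)
        mean_estimate = coeffs[0]                  # output mean under a uniform input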

  3. Simulated errors in deep drainage beneath irrigated settings: Partitioning vegetation, texture and irrigation effects using Monte Carlo

    NASA Astrophysics Data System (ADS)

    Gibson, J. P.; Gates, J. B.; Nasta, P.

    2014-12-01

    Groundwater in irrigated regions is impacted by timing and rates of deep drainage. Because field monitoring of deep drainage is often cost prohibitive, numerical soil water models are frequently the main method of estimation. Unfortunately, few studies have quantified the relative importance of likely error sources. In this study, three potential error sources are considered within a Monte Carlo framework: water retention parameters, rooting depth, and irrigation practice. Error distributions for water retention parameters were determined by 1) laboratory hydraulic measurements and 2) pedotransfer functions. Error distributions for rooting depth were developed from literature values. Three irrigation scheduling regimes were considered: one representing pre-scheduled irrigation ignoring preceding rainfall, one representing pre-scheduled irrigation that was altered based on preceding rainfall, and one representing algorithmic irrigation scheduling informed by profile matric potential sensors. This approach was applied to an experimental site in Nebraska with silt loam soils and irrigated corn for 2002-2012. Results are based on six Monte Carlo simulations, each consisting of 1000 Hydrus 1D simulations at daily timesteps, facilitated by parallelization on a 12-node computing cluster. Results indicate greater sensitivity to irrigation regime than to hydraulic or vegetation parameters (median values for prescheduled irrigation, prescheduled irrigation altered by rainfall, and algorithmic irrigation were 310, 100, and 110 mm/yr, respectively). Error ranges were up to 700% higher for pedotransfer functions than for laboratory-measured hydraulic functions. Deep drainage was negatively correlated with alpha and maximum root zone depth and, for some scenarios, positively correlated with n. The relative importance of error sources differed amongst the irrigation scenarios because of nonlinearities amongst parameter values, profile wetness, and deep drainage. Compared to pre
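
    A hedged sketch of the partitioning idea above: deep drainage is recomputed many times while one error source at a time is perturbed, and the spread of the outputs ranks the sources. run_model() and every distribution below are placeholders, not Hydrus-1D or the study's calibrated values.

        import numpy as np

        rng = np.random.default_rng(1)

        def run_model(alpha, n, root_depth):       # placeholder soil-water model, not Hydrus-1D
            return 400.0 * np.exp(-2.0 * alpha) / n - 0.1 * root_depth

        def spread(values):
            return np.percentile(values, 97.5) - np.percentile(values, 2.5)

        base = dict(alpha=0.02, n=1.4, root_depth=1000.0)
        samples = {
            "alpha": [run_model(rng.lognormal(np.log(0.02), 0.3), base["n"], base["root_depth"])
                      for _ in range(1000)],
            "n": [run_model(base["alpha"], rng.normal(1.4, 0.1), base["root_depth"])
                  for _ in range(1000)],
            "root_depth": [run_model(base["alpha"], base["n"], rng.uniform(600.0, 1400.0))
                           for _ in range(1000)],
        }
        ranking = sorted(samples, key=lambda k: spread(samples[k]), reverse=True)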

  4. Characterization of errors in cirrus simulations from a cloud resolving model for application in ice water content retrievals

    NASA Astrophysics Data System (ADS)

    Benedetti, A.; Stephens, G. L.

    Data available from the Atmospheric Radiation Measurement-Unmanned Aerospace Vehicle (ARM-UAV) Spring 1999 experiment are used in this study to estimate errors in cirrus simulations from a 3D Cloud Resolving Model (CRM). The performance of the model, a heritage of the CSU Regional Atmospheric Modeling System (RAMS), is assessed by direct comparison of modeled and observed fields. Results show that the CRM succeeds in placing the cloud at approximately the correct altitude, but consistently overestimates the Ice Water Content (IWC). A statistical approach is introduced and applied to quantify average model bias under the assumption of bias-free observations. An error covariance matrix associated with simulated fields is also computed, and used to identify model strengths and deficiencies. Model fields are then used in the context of an optimum estimation retrieval of IWC from a combination of radar and radiometric observations. The retrieval is based on the knowledge of an a priori profile and relative error covariance to ensure algorithm convergence and stability. RAMS average Ice Water Content, corrected for the bias, and the related error covariance matrix derived in this study are used to provide this a priori information to the retrieval.
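
    A hedged sketch of a linear optimal-estimation (maximum a posteriori) retrieval step of the kind described above, combining an a priori profile x_a and its covariance S_a (which the study derives from the bias-corrected model fields) with observations y of covariance S_e through a Jacobian K; all matrices below are small placeholders.

        import numpy as np

        def optimal_estimation(y, K, x_a, S_a, S_e):
            """MAP estimate and posterior covariance for a linear forward model y = K x + noise."""
            S_a_inv = np.linalg.inv(S_a)
            S_e_inv = np.linalg.inv(S_e)
            S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)
            x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
            return x_hat, S_hat

        rng = np.random.default_rng(0)
        K = rng.normal(size=(4, 3))
        x_true = np.array([0.2, 0.5, 0.1])
        y = K @ x_true + rng.normal(0.0, 0.05, 4)
        x_hat, S_hat = optimal_estimation(y, K, np.zeros(3), np.eye(3), 0.05 ** 2 * np.eye(4))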

  5. Modeling of present-day atmosphere and ocean non-tidal de-aliasing errors for future gravity mission simulations

    NASA Astrophysics Data System (ADS)

    Dobslaw, Henryk; Bergmann-Wolf, Inga; Forootan, Ehsan; Dahle, Christoph; Mayer-Gürr, Torsten; Kusche, Jürgen; Flechtner, Frank

    2016-05-01

    A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency is now available over the period 1995-2006. The dataset contains realizations of (1) errors at large spatial scales assessed individually for periods 10-30, 3-10, and 1-3 days, the S1 atmospheric tide, and sub-diurnal periods; (2) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (3) errors due to physical processes not represented in currently available de-aliasing products. The model is provided in two separate sets of Stokes coefficients to allow for a flexible re-scaling of the overall error level to account for potential future improvements in atmosphere and ocean mass variability models. Error magnitudes for the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, those error estimates are approximately confirmed from a variance component estimation based on GRACE daily normal equations. Future mission performance simulations based on the updated Earth System Model and the realistically perturbed de-aliasing model indicate that for GRACE-type missions only moderate reductions of de-aliasing errors can be expected from a second satellite pair in a shifted polar orbit. Substantially more accurate global gravity fields are obtained when a second pair of satellites in a moderately inclined orbit is added, which largely stabilizes the global gravity field solutions due to its rotated sampling sensitivity.

  6. Simulating Children's Retrieval Errors in Picture-Naming: A Test of Foygel and Dell's (2000) Semantic/Phonological Model of Speech Production

    ERIC Educational Resources Information Center

    Budd, Mary-Jane; Hanley, J. Richard; Griffiths, Yvonne

    2011-01-01

    This study investigated whether Foygel and Dell's (2000) interactive two-step model of speech production could simulate the number and type of errors made in picture-naming by 68 children of elementary-school age. Results showed that the model provided a satisfactory simulation of the mean error profile of children aged five, six, seven, eight and…

  7. Finite difference simulations of seismic wave propagation for understanding earthquake physics and predicting ground motions: Advances and challenges

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Ducellier, Ariane; Dupros, Fabrice; Michea, David

    2013-08-01

    Seismic waves radiated from an earthquake propagate in the Earth and the ground shaking is felt and recorded at (or near) the ground surface. Understanding the wave propagation with respect to the Earth's structure and the earthquake mechanisms is one of the main objectives of seismology, and predicting the strong ground shaking for moderate and large earthquakes is essential for quantitative seismic hazard assessment. The finite difference scheme for solving the wave propagation problem in elastic (sometimes anelastic) media has been more widely used since the 1970s than any other numerical methods, because of its simple formulation and implementation, and its easy scalability to large computations. This paper briefly overviews the advances in finite difference simulations, focusing particularly on earthquake mechanics and the resultant wave radiation in the near field. As the finite difference formulation is simple (interpolation is smooth), an easy coupling with other approaches is one of its advantages. A coupling with a boundary integral equation method (BIEM) allows us to simulate complex earthquake source processes.
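
    A hedged sketch of the basic building block behind such codes: a second-order finite-difference update of the 1-D scalar wave equation (real seismic solvers are 3-D, use staggered grids, and include anelasticity and realistic sources; all numbers here are illustrative).

        import numpy as np

        nx, dx, dt, c = 400, 10.0, 1.0e-3, 3000.0      # grid spacing [m], time step [s], speed [m/s]
        u_prev, u, u_next = (np.zeros(nx) for _ in range(3))
        u[nx // 2] = 1.0                               # crude initial pulse as a source
        r2 = (c * dt / dx) ** 2                        # squared Courant number, must be <= 1

        for _ in range(500):
            u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                            + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
            u_prev, u, u_next = u, u_next, u_prev      # rotate time levels (fixed ends)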

  8. Generation of spiral bevel gears with zero kinematical errors and computer aided simulation of their meshing and contact

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Tsung, W.-J.; Coy, J. J.

    1985-01-01

    A method is proposed for the generation of Gleason's spiral bevel gears that provides the following properties of meshing and contact: (1) the contact normal keeps its original direction within the neighborhood of the main contact point; (2) the contact ellipse moves along the gear tooth surface; and (3) the kinematical errors caused by Gleason's method of cutting are almost zero. Computer programs for the simulation of meshing and bearing contact are developed.

  9. A multi-zone chemistry mapping approach for direct numerical simulation of auto-ignition and flame propagation in a constant volume enclosure

    NASA Astrophysics Data System (ADS)

    Jangi, M.; Yu, R.; Bai, X. S.

    2012-04-01

    A direct numerical simulation (DNS) coupling with multi-zone chemistry mapping (MZCM) is presented to simulate flame propagation and auto-ignition in premixed fuel/air mixtures. In the MZCM approach, the physical domain is mapped into a low-dimensional phase domain with a few thermodynamic variables as the independent variables. The approach is based on the fractional step method, in which the flow and transport are solved in the flow time steps whereas the integration of the chemical reaction rates and heat release rate is performed in much finer time steps to accommodate the small time scales in the chemical reactions. It is shown that for premixed mixtures, two independent variables can be sufficient to construct the phase space to achieve a satisfactory mapping. The two variables can be the temperature of the mixture and the specific element mass ratio of H atom for fuels containing hydrogen atoms. An aliasing error in the MZCM is investigated. It is shown that if the element mass ratio is based on the element involved in the most diffusive molecules, the aliasing error of the model can approach zero when the grid in the phase space is refined. The results of DNS coupled with MZCM (DNS-MZCM) are compared with full DNS that integrates the chemical reaction rates and heat release rate directly in physical space. Application of the MZCM to different mixtures of fuel and air is presented to demonstrate the performance of the method for combustion processes with different complexity in the chemical kinetics, transport and flame-turbulence interaction. Good agreement between the results from DNS and DNS-MZCM is obtained for different fuel/air mixtures, including H2/air, CO/H2/air and methane/air, while the computational time is reduced by nearly 70%. It is shown that the MZCM model can properly address important phenomena such as differential diffusion, local extinction and re-ignition in premixed combustion.
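
    A hedged sketch of the zoning step described above: cells are grouped into a coarse two-dimensional phase space spanned by temperature and an element mass ratio, chemistry is advanced once per occupied zone, and the result is mapped back to the member cells. advance_zone() is only a placeholder for the stiff chemistry integration.

        import numpy as np

        def zone_indices(T, Z_H, n_T=50, n_Z=50):
            """Assign each cell to a (temperature, element-mass-ratio) zone."""
            i_T = np.digitize(T, np.linspace(T.min(), T.max(), n_T))
            i_Z = np.digitize(Z_H, np.linspace(Z_H.min(), Z_H.max(), n_Z))
            return i_T * (n_Z + 2) + i_Z

        def mapped_chemistry(T, Z_H, advance_zone):
            zones = zone_indices(T, Z_H)
            dT = np.zeros_like(T)
            for z in np.unique(zones):
                members = zones == z
                dT[members] = advance_zone(T[members].mean(), Z_H[members].mean())
            return dT

        rng = np.random.default_rng(0)
        T = rng.uniform(800.0, 2000.0, 10000)
        Z_H = rng.uniform(0.0, 0.12, 10000)
        dT = mapped_chemistry(T, Z_H, lambda T_z, Z_z: 1.0e-4 * (T_z - 800.0) * Z_z)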

  10. Analysis of the orbit errors in the CERN accelerators using model simulation

    SciTech Connect

    Lee, M.; Kleban, S.; Clearwater, S.; Scandale, W.; Pettersson, T.; Kugler, H.; Riche, A.; Chanel, M.; Martensson, E.; Lin, In-Ho

    1987-09-01

    This paper will describe the use of the PLUS program to find various types of machine and beam errors such as quadrupole strength, dipole strength, beam position monitors (BPMs), energy profile, and beam launch. We refer to this procedure as the GOLD (Generic Orbit and Lattice Debugger) Method, which is a general technique that can be applied to analysis of errors in storage rings and transport lines. One useful feature of the Method is that it analyzes segments of a machine at a time so that the application and efficiency are independent of the size of the overall machine. Because the techniques are the same for all the types of problems it solves, the user need learn only how to find one type of error in order to use the program.

  11. New approach for absolute fluence distribution calculations in Monte Carlo simulations of light propagation in turbid media

    SciTech Connect

    Böcklin, Christoph; Baumann, Dirk; Fröhlich, Jürg

    2014-02-14

    A novel way to attain three-dimensional fluence rate maps from Monte-Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity and directly derived from the radiance by integrating over all directions. In contrast to the usual approach, which calculates the fluence rate from the absorbed photon power, the fluence rate in this work is calculated directly from the photon packet trajectory. The voxel-based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
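
    A hedged sketch of the track-length estimator underlying the approach above: a photon packet contributes its weighted path length inside each voxel, divided by the voxel volume, to the fluence rate map, which stays well defined even when absorption is zero. The voxel indices and step lengths below are placeholders for a real trajectory.

        import numpy as np

        def add_packet_to_fluence(fluence, voxel_idx, step_lengths, weight, voxel_volume):
            """Accumulate one packet's track-length contributions into a flat fluence map."""
            np.add.at(fluence, voxel_idx, weight * step_lengths / voxel_volume)
            return fluence

        fluence = np.zeros(64 * 64 * 64)                     # flattened 64^3 voxel grid
        voxel_idx = np.array([100, 101, 165, 229])           # placeholder traversed voxels
        step_lengths = np.array([0.01, 0.05, 0.05, 0.02])    # path length in each voxel [cm]
        fluence = add_packet_to_fluence(fluence, voxel_idx, step_lengths, 0.8, 0.05 ** 3)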

  12. Computer simulation of fast crack propagation and arrest in steel plate with temperature gradient based on local fracture stress criterion

    SciTech Connect

    Machida, Susumu; Yoshinari, Hitoshi; Aihara, Shuji

    1997-12-31

    A fracture mechanics model for fast crack propagation and arrest is proposed based on the local fracture stress criterion. Dynamic fracture toughness (K{sub D}) for a propagating crack is calculated as a function of crack velocity and temperature. The model is extended to incorporate the effect of unbroken ligament (UL) formed near the plate surfaces and crack-front-tunneling. The model simulates acceleration, deceleration and arrest of a crack in an ESSO or a double-tension test plate with temperature-gradient. Calculated arrested crack lengths compare well with experimental results. It is shown that the conventional crack arrest toughness calculated from applied stress and arrested crack length depends on temperature-gradient and the toughness is not a unique material property.

  13. Simulation of Delamination Propagation in Composites Under High-Cycle Fatigue by Means of Cohesive-Zone Models

    NASA Technical Reports Server (NTRS)

    Turon, Albert; Costa, Josep; Camanho, Pedro P.; Davila, Carlos G.

    2006-01-01

    A damage model for the simulation of delamination propagation under high-cycle fatigue loading is proposed. The basis for the formulation is a cohesive law that links fracture and damage mechanics to establish the evolution of the damage variable in terms of the crack growth rate dA/dN. The damage state is obtained as a function of the loading conditions as well as the experimentally-determined coefficients of the Paris Law crack propagation rates for the material. It is shown that by using the constitutive fatigue damage model in a structural analysis, experimental results can be reproduced without the need of additional model-specific curve-fitting parameters.
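
    A hedged sketch of the Paris-law ingredient named above: a crack growth rate dA/dN = C (ΔG)^m integrated cycle by cycle for a constant-amplitude load. The coefficients and the energy release rate are illustrative numbers, and the cohesive damage bookkeeping of the actual model is not reproduced.

        def grow_crack(a0, n_cycles, C=1.0e-10, m=3.0, delta_G=0.5):
            """Integrate the Paris law dA/dN = C * delta_G**m over n_cycles."""
            a = a0
            for _ in range(n_cycles):
                # In a real analysis delta_G would be updated from the current crack length.
                a += C * delta_G ** m
            return a

        a_final = grow_crack(a0=1.0e-3, n_cycles=100000)   # crack length in metres (illustrative)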

  14. FSI Simulations of Pulse Wave Propagation in Human Abdominal Aortic Aneurysm: The Effects of Sac Geometry and Stiffness

    PubMed Central

    Li, Han; Lin, Kexin; Shahmirzadi, Danial

    2016-01-01

    This study aims to quantify the effects of geometry and stiffness of aneurysms on the pulse wave velocity (PWV) and propagation in fluid–solid interaction (FSI) simulations of arterial pulsatile flow. Spatiotemporal maps of both the wall displacement and fluid velocity were generated in order to obtain the pulse wave propagation through fluid and solid media, and to examine the interactions between the two waves. The results indicate that the presence of abdominal aortic aneurysm (AAA) sac and variations in the sac modulus affect the propagation of the pulse waves both qualitatively (eg, patterns of change of forward and reflective waves) and quantitatively (eg, decreasing of PWV within the sac and its increase beyond the sac as the sac stiffness increases). The sac region is particularly identified on the spatiotemporal maps with a region of disruption in the wave propagation with multiple short-traveling forward/reflected waves, which is caused by the change in boundary conditions within the saccular region. The change in sac stiffness, however, is more pronounced on the wall displacement spatiotemporal maps compared to those of fluid velocity. We conclude that the existence of the sac can be identified based on the solid and fluid pulse waves, while the sac properties can also be estimated. This study demonstrates the initial findings in numerical simulations of FSI dynamics during arterial pulsations that can be used as reference for experimental and in vivo studies. Future studies are needed to demonstrate the feasibility of the method in identifying very mild sacs, which cannot be detected from medical imaging, where the material property degradation exists under early disease initiation. PMID:27478394
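
    A hedged sketch of one common way to extract a pulse wave velocity from a spatiotemporal wall-displacement map like those described above: the time lag between the waveforms at two axial positions is found by cross-correlation and divided into their separation. The displacement map, spacing, and sampling interval are placeholders.

        import numpy as np

        def pwv_from_map(disp, dx, dt, i1, i2):
            """disp: (n_positions, n_times) wall displacement; i1 < i2 are axial indices."""
            a = disp[i1] - disp[i1].mean()
            b = disp[i2] - disp[i2].mean()
            lags = np.arange(-len(a) + 1, len(a))
            lag = lags[np.argmax(np.correlate(b, a, mode="full"))] * dt
            return (i2 - i1) * dx / lag

        # Placeholder map: a Gaussian pulse travelling at 5 m/s along a 0.1 m segment.
        x = np.arange(0.0, 0.1, 0.002)[:, None]
        t = np.arange(0.0, 0.2, 1.0e-3)[None, :]
        disp = np.exp(-((t - 0.05 - x / 5.0) / 0.005) ** 2)
        pwv = pwv_from_map(disp, dx=0.002, dt=1.0e-3, i1=5, i2=45)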

  15. FSI Simulations of Pulse Wave Propagation in Human Abdominal Aortic Aneurysm: The Effects of Sac Geometry and Stiffness.

    PubMed

    Li, Han; Lin, Kexin; Shahmirzadi, Danial

    2016-01-01

    This study aims to quantify the effects of geometry and stiffness of aneurysms on the pulse wave velocity (PWV) and propagation in fluid-solid interaction (FSI) simulations of arterial pulsatile flow. Spatiotemporal maps of both the wall displacement and fluid velocity were generated in order to obtain the pulse wave propagation through fluid and solid media, and to examine the interactions between the two waves. The results indicate that the presence of abdominal aortic aneurysm (AAA) sac and variations in the sac modulus affect the propagation of the pulse waves both qualitatively (eg, patterns of change of forward and reflective waves) and quantitatively (eg, decreasing of PWV within the sac and its increase beyond the sac as the sac stiffness increases). The sac region is particularly identified on the spatiotemporal maps with a region of disruption in the wave propagation with multiple short-traveling forward/reflected waves, which is caused by the change in boundary conditions within the saccular region. The change in sac stiffness, however, is more pronounced on the wall displacement spatiotemporal maps compared to those of fluid velocity. We conclude that the existence of the sac can be identified based on the solid and fluid pulse waves, while the sac properties can also be estimated. This study demonstrates the initial findings in numerical simulations of FSI dynamics during arterial pulsations that can be used as reference for experimental and in vivo studies. Future studies are needed to demonstrate the feasibility of the method in identifying very mild sacs, which cannot be detected from medical imaging, where the material property degradation exists under early disease initiation. PMID:27478394

  16. Quantification of Transport Errors in regional CO2 inversions using a physics-based ensemble of WRF-Chem simulations

    NASA Astrophysics Data System (ADS)

    Diaz Isaac, L. I.; Davis, K. J.; Lauvaux, T.; Miles, N. L.; Richardson, S.; Andrews, A. E.

    2013-12-01

    Atmospheric inversions can be used to assess biosphere-atmosphere CO2 surface exchanges, but variability among inverse flux estimates at regional scales remains significant. Atmospheric transport model errors are presumed to be one of the main contributors to this variability, but have not been quantified thoroughly. Our study aims to evaluate and quantify the transport errors in the Weather Research and Forecasting (WRF) mesoscale model, recently used to produce inverse flux estimates at the regional scale over the NACP Mid-Continental Intensive (MCI) domain. We evaluate transport errors with an ensemble of WRF simulations using different physical parameterizations (e.g., atmospheric boundary layer (ABL) schemes, land surface models (LSMs), and cumulus parameterizations (CP)). Modeled meteorological variables and atmospheric CO2 mixing ratios are compared to observations (e.g., radiosondes, wind profilers, AmeriFlux sites, and CO2 mixing ratio towers) available in the MCI region for summer of 2008. Comparisons to date include simulations using two different land surface models (Noah and Rapid Update Cycle (RUC)), three different ABL schemes (YSU, MYJ and MYNN) and two different cumulus parameterizations (Kain-Fritsch and Grell-3D). We examine using the ensemble as a proxy for the observed model-data mismatch. Then we present a study of the sensitivity of atmospheric conditions to the choice of physical parameterization, to identify the parameterization driving the model-to-model variability in atmospheric CO2 concentrations at the mesoscale over the MCI domain. For example, we show that, whereas the ABL depth is highly influenced by the choice of ABL scheme and LSM, the mean horizontal wind speed is mainly influenced by the LSM only. Finally, we evaluate the variability in space and time of transport errors and their impact in atmospheric CO2 concentrations. Future work will be to describe transport errors in the MCI regional atmospheric inversion based on the

  17. Characterization, propagation, and simulation of infrared scenes; Proceedings of the Meeting, Orlando, FL, Apr. 16-20, 1990

    SciTech Connect

    Watkins, W.R.; Zegel, F.H.; Triplett, M.J.

    1990-01-01

    Various papers on the characterization, propagation, and simulation of IR scenes are presented. Individual topics addressed include: total radiant exitance measurements, absolute measurement of diffuse and specular reflectance using an FTIR spectrometer with an integrating sphere, fundamental limits in temperature estimation, incorporating the BRDF into an IR scene-generation system, characterizing IR dynamic response for foliage backgrounds, modeling sea surface effects in FLIR performance codes, automated imaging IR seeker performance evaluation system, generation of signature data bases with fast codes, background measurements using the NPS-IRST system. Also discussed are: naval ocean IR background analysis, camouflage simulation and effectiveness assessment for the individual soldier, discussion of IR scene generators, multiwavelength Scophony IR scene projector, LBIR target generator and calibrator for preflight seeker tests, dual-mode hardware-in-the-loop simulation facility, development of the IR blackbody source of gravity-type heat pipe and study of its characteristic.

  18. Characterization, propagation, and simulation of infrared scenes; Proceedings of the Meeting, Orlando, FL, Apr. 16-20, 1990

    NASA Astrophysics Data System (ADS)

    Watkins, Wendell R.; Zegel, Ferdinand H.; Triplett, Milton J.

    1990-09-01

    Various papers on the characterization, propagation, and simulation of IR scenes are presented. Individual topics addressed include: total radiant exitance measurements, absolute measurement of diffuse and specular reflectance using an FTIR spectrometer with an integrating sphere, fundamental limits in temperature estimation, incorporating the BRDF into an IR scene-generation system, characterizing IR dynamic response for foliage backgrounds, modeling sea surface effects in FLIR performance codes, automated imaging IR seeker performance evaluation system, generation of signature data bases with fast codes, background measurements using the NPS-IRST system. Also discussed are: naval ocean IR background analysis, camouflage simulation and effectiveness assessment for the individual soldier, discussion of IR scene generators, multiwavelength Scophony IR scene projector, LBIR target generator and calibrator for preflight seeker tests, dual-mode hardware-in-the-loop simulation facility, development of the IR blackbody source of gravity-type heat pipe and study of its characteristic.

  19. Ion kinetic simulations of the formation and propagation of a planar collisional shock wave in a plasma

    SciTech Connect

    Vidal, F.; Matte, J.P. ); Casanova, M.; Larroche, O. )

    1993-09-01

    Ion kinetic simulations of the formation and propagation of planar shock waves in a hydrogen plasma have been performed at Mach numbers 2 and 5, and compared to fluid simulations. At Mach 5, the shock transition is far wider than expected on the basis of comparative fluid calculations. This enlargement is due to hot ions streaming from the hot plasma into the cold plasma and is found to be limited by the electron preheating layer, essentially because electron--ion collisions slow down these energetic ions very effectively in the cold upstream region. Double-humped ion velocity distributions formed in the transition region, which are particularly prominent during the shock formation, are found not to be unstable to any electrostatic mode, due to electron Landau damping. At Mach numbers of 2 and below, no such features are seen in velocity space, and there is very little difference between the profiles from the kinetic and fluid simulations.

  20. Impact of variational assimilation using multivariate background error covariances on the simulation of monsoon depressions over India

    NASA Astrophysics Data System (ADS)

    Dhanya, M.; Chandrasekar, A.

    2016-02-01

    The background error covariance structure influences a variational data assimilation system immensely. The simulation of a weather phenomenon like monsoon depression can hence be influenced by the background correlation information used in the analysis formulation. The Weather Research and Forecasting Model Data assimilation (WRFDA) system includes an option for formulating multivariate background correlations for its three-dimensional variational (3DVar) system (cv6 option). The impact of using such a formulation in the simulation of three monsoon depressions over India is investigated in this study. Analysis and forecast fields generated using this option are compared with those obtained using the default formulation for regional background error correlations (cv5) in WRFDA and with a base run without any assimilation. The model rainfall forecasts are compared with rainfall observations from the Tropical Rainfall Measurement Mission (TRMM) and the other model forecast fields are compared with a high-resolution analysis as well as with European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis. The results of the study indicate that inclusion of additional correlation information in background error statistics has a moderate impact on the vertical profiles of relative humidity, moisture convergence, horizontal divergence and the temperature structure at the depression centre at the analysis time of the cv5/cv6 sensitivity experiments. Moderate improvements are seen in two of the three depressions investigated in this study. An improved thermodynamic and moisture structure at the initial time is expected to provide for improved rainfall simulation. The results of the study indicate that the skill scores of accumulated rainfall are somewhat better for the cv6 option as compared to the cv5 option for at least two of the three depression cases studied, especially at the higher threshold levels. Considering the importance of utilising improved
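
    For reference, a sketch of the standard 3DVar cost function such systems minimize, in generic notation (not WRFDA-specific): x_b is the background state, B the background error covariance whose univariate (cv5) or multivariate (cv6) structure is compared above, y the observations, H the observation operator, and R the observation error covariance.

        J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
                      + \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)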

  1. Numerical simulations of blast/shock wave propagations after nuclear explosions

    NASA Astrophysics Data System (ADS)

    Song, Seungho; Choi, Jung-Il; Li, Yibao; Lee, Changhoon

    2013-11-01

    Pressure waves develop immediately after nuclear explosions and start to move outward from the fireball. Most of the initial damage is caused by the blast waves. We simulated blast wave propagation by solving the two-dimensional and axisymmetric Euler equations. For shock capturing, inviscid fluxes are discretized using a variant of the piecewise parabolic method (PPM), and an approximate Riemann solver based on Roe's method is used. A clean air burst of a fireball above ground zero is considered. The initial condition of the fireball is given at the point of breakaway, when shock waves appear on the surface of the fireball. The growth of the fireball is also calculated by solving the one-dimensional radiation hydrodynamics (RHD) equations for a point explosion. Characteristics of the blast wave propagation for various heights of burst and amounts of nuclear detonation are investigated. The results of the parametric studies will be shown in the final presentation. Supported by the Agency for Defense Development.
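
    A hedged skeleton of a shock-capturing interface flux for the 1-D Euler equations, using a simple local Lax-Friedrichs (Rusanov) flux rather than the PPM reconstruction and Roe solver used in the study; state vectors are (rho, rho*u, E) and all values are illustrative.

        import numpy as np

        GAMMA = 1.4

        def euler_flux(U):
            rho, mom, E = U
            u = mom / rho
            p = (GAMMA - 1.0) * (E - 0.5 * rho * u ** 2)
            return np.array([mom, mom * u + p, (E + p) * u])

        def sound_speed(U):
            rho, mom, E = U
            p = (GAMMA - 1.0) * (E - 0.5 * mom ** 2 / rho)
            return np.sqrt(GAMMA * p / rho)

        def rusanov_flux(UL, UR):
            """Local Lax-Friedrichs numerical flux at one cell interface."""
            smax = max(abs(UL[1] / UL[0]) + sound_speed(UL),
                       abs(UR[1] / UR[0]) + sound_speed(UR))
            return 0.5 * (euler_flux(UL) + euler_flux(UR)) - 0.5 * smax * (UR - UL)

        # Sod-like interface states: (rho, rho*u, E) with E = p / (GAMMA - 1) for u = 0.
        F = rusanov_flux(np.array([1.0, 0.0, 2.5]), np.array([0.125, 0.0, 0.25]))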

  2. Modeling of femoral neck cortical bone for the numerical simulation of ultrasound propagation.

    PubMed

    Grimal, Quentin; Rohrbach, Daniel; Grondin, Julien; Barkmann, Reinhard; Glüer, Claus-C; Raum, Kay; Laugier, Pascal

    2014-05-01

    Quantitative ultrasound assessment of the cortical compartment of the femur neck (FN) is investigated with the goal of achieving enhanced fracture risk prediction. Measurements at the FN are influenced by bone size, shape and material properties. The work described here was aimed at determining which FN material properties have a significant impact on ultrasound propagation around 0.5 MHz and assessing the relevancy of different models. A methodology for the modeling of ultrasound propagation in the FN, with a focus on the modeling of bone elastic properties based on scanning acoustic microscopy data, is introduced. It is found that the first-arriving ultrasound signal measured in through-transmission at the FN is not influenced by trabecular bone properties or by the heterogeneities of the cortical bone mineralized matrix. In contrast, the signal is sensitive to variations in cortical porosity, which can, to a certain extent, be accounted for by effective properties calculated with the Mori-Tanaka method. PMID:24486239

  3. Mathematical simulation of the origination and propagation of crown fires in averaged formulation

    NASA Astrophysics Data System (ADS)

    Perminov, V. A.

    2015-02-01

    Processes of origination and propagation of crown fires are studied theoretically. The forest is treated as a multiphase, multicomponent porous reacting medium. The Reynolds equations for a turbulent flow are solved numerically, taking chemical reactions into account. The control-volume method is used to obtain the discrete analog. As a result of numerical computations, the distributions of velocity fields, temperature, oxygen concentration, volatile pyrolysis and combustion products, and volume fractions of the condensed phase at different instants are obtained. The model makes it possible to obtain dynamic contours of propagation of crown fires, which depend on the properties and states of the forest canopy (reserves and type of combustible materials, moisture content, inhomogeneities in woodland, velocity and direction of wind, etc.).

  4. Effects of registration error on parametric response map analysis: a simulation study using liver CT-perfusion images

    NASA Astrophysics Data System (ADS)

    Lausch, A.; Jensen, N. K. G.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.

    2014-03-01

    Purpose: To investigate the effects of registration error (RE) on parametric response map (PRM) analysis of pre- and post-radiotherapy (RT) functional images. Methods: Arterial blood flow maps (ABF) were generated from the CT-perfusion scans of 5 patients with hepatocellular carcinoma. ABF values within each patient map were modified to produce seven new ABF maps simulating 7 distinct post-RT functional change scenarios. Ground truth PRMs were generated for each patient by comparing the simulated and original ABF maps. Each simulated ABF map was then deformed by different magnitudes of realistic respiratory motion in order to simulate RE. PRMs were generated for each of the deformed maps and then compared to the ground truth PRMs to produce estimates of RE-induced misclassification. Main findings: The percentage of voxels misclassified as decreasing, no change, and increasing increased with RE. For all patients, increasing RE was observed to increase the number of high post-RT ABF voxels associated with low pre-RT ABF voxels and vice versa. 3 mm of average tumour RE resulted in 18-45% tumour voxel misclassification rates. Conclusions: RE-induced misclassification posed challenges for PRM analysis in the liver, where registration accuracy tends to be lower. Quantitative understanding of the sensitivity of the PRM method to registration error is required if PRMs are to be used to guide radiation therapy dose painting techniques.
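
    A hedged sketch of the voxel classification behind a parametric response map: each voxel is labelled as increased, decreased, or unchanged from the difference between post- and pre-RT ABF maps against a fixed threshold, and misclassification is the fraction of labels that change once a registration error is introduced. The threshold and maps are placeholders.

        import numpy as np

        def prm_classify(abf_pre, abf_post, threshold=10.0):
            delta = abf_post - abf_pre
            prm = np.zeros(delta.shape, dtype=int)     # 0 = no change
            prm[delta > threshold] = 1                 # 1 = increased
            prm[delta < -threshold] = -1               # -1 = decreased
            return prm

        def misclassification_rate(prm_truth, prm_perturbed):
            return float(np.mean(prm_truth != prm_perturbed))

        rng = np.random.default_rng(0)
        pre = rng.uniform(20.0, 120.0, (64, 64))
        post = pre + rng.normal(0.0, 15.0, (64, 64))
        truth = prm_classify(pre, post)
        shifted = prm_classify(np.roll(pre, 1, axis=0), post)   # crude stand-in for RE
        rate = misclassification_rate(truth, shifted)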

  5. Reducing prospective memory error and costs in simulated air traffic control: External aids, extending practice, and removing perceived memory requirements.

    PubMed

    Loft, Shayne; Chapman, Melissa; Smith, Rebekah E

    2016-09-01

    In air traffic control (ATC), forgetting to perform deferred actions, known as prospective memory (PM) errors, can have severe consequences. PM demands can also interfere with ongoing tasks (costs). We examined the extent to which PM errors and costs were reduced in simulated ATC by providing extended practice, or by providing external aids combined with extended practice, or by providing external aids combined with instructions that removed perceived memory requirements. Participants accepted/handed-off aircraft and detected conflicts. For the PM task, participants were required to substitute alternative actions for routine actions when accepting aircraft. In Experiment 1, when no aids were provided, PM errors and costs were not reduced by practice. When aids were provided, costs observed early in practice were eliminated with practice, but residual PM errors remained. Experiment 2 provided more limited practice with aids, but instructions that did not frame the PM task as a "memory" task led to high PM accuracy without costs. Attention-allocation policies that participants set based on expected PM demands were modified as individuals were increasingly exposed to reliable aids, or were given instructions that removed perceived memory requirements. These findings have implications for the design of aids for individuals who monitor multi-item dynamic displays. (PsycINFO Database Record) PMID:27608067

  6. Molecular dynamics simulations of the mechanisms controlling the propagation of bcc/fcc semi-coherent interfaces in iron

    NASA Astrophysics Data System (ADS)

    Ou, X.; Sietsma, J.; Santofimia, M. J.

    2016-06-01

    Molecular dynamics simulations have been used to study the effects of different orientation relationships between fcc and bcc phases on the bcc/fcc interfacial propagation in pure iron systems at 300 K. Three semi-coherent bcc/fcc interfaces have been investigated. In all the cases, results show that growth of the bcc phase starts in the areas of low potential energy and progresses into the areas of high potential energy at the original bcc/fcc interfaces. The phase transformation in areas of low potential energy is of a martensitic nature while that in the high potential energy areas involves occasional diffusional jumps of atoms.

  7. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects

    PubMed Central

    Heavner, Karyn; Burstyn, Igor

    2015-01-01

    Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to “small numbers.” Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship. PMID:26305250
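
    A hedged sketch of the simulation design above: a continuous exposure measured with classical error is dichotomized at a sweep of cutoffs and the odds ratio re-estimated at each one (computed here from the 2x2 table, which for a single binary exposure equals the logistic-regression OR). All parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        true_x = rng.normal(0.0, 1.0, n)
        p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * true_x)))   # one possible exposure-outcome curve
        y = rng.binomial(1, p)
        observed_x = true_x + rng.normal(0.0, 0.5, n)      # classical measurement error

        odds_ratios = []
        for cutoff in np.linspace(-2.5, 2.5, 61):
            e = observed_x > cutoff
            a, b = int((y[e] == 1).sum()), int((y[e] == 0).sum())      # exposed cases / non-cases
            c, d = int((y[~e] == 1).sum()), int((y[~e] == 0).sum())    # unexposed cases / non-cases
            odds_ratios.append(a * d / (b * c) if min(a, b, c, d) > 0 else np.nan)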

  8. Simulation of EMIC growth and propagation within the plasmaspheric plume density irregularities

    NASA Astrophysics Data System (ADS)

    de Soria-Santacruz Pich, M.; Spasojevic, M.

    2012-12-01

    In situ data from the Magnetospheric Plasma Analyzer (MPA) instruments onboard the LANL spacecraft are used to study the growth and propagation of electromagnetic ion cyclotron (EMIC) waves in the presence of cold plasma irregularities in the plasmaspheric plume. The data corresponds to the 9 June 2001 event, a period of moderate geomagnetic activity with highly irregular density structure within the plume as measured by the MPA instrument at geosynchoronus orbit. Theory and observations suggest that EMIC waves are responsible for energetic proton precipitation, which is stronger during geomagnetically disturbed intervals. These waves propagate below the proton gyrofrequency, and they appear in three frequency bands due to the presence of heavy ions, which strongly modify wave propagation characteristics. These waves are generated by ion cyclotron instability of ring current ions, whose temperature anisotropy provides the free energy required for wave growth. Growth maximizes for field-aligned propagation near the equatorial plane where the magnetic field gradient is small. Although the wave's group velocity typically stays aligned with the geomagnetic field direction, wave-normal vectors tend to become oblique due to the curvature and gradient of the field. On the other hand, radial density gradients have the capability of guiding the waves and competing against the magnetic field effect thus favoring wave growth conditions. In addition, enhanced cold plasma density reduces the proton resonant energy where higher fluxes are available for resonance, and hence explaining why wave growth is favored at higher L-shell regions where the ratio of plasma to cyclotron frequency is larger. The Stanford VLF 3D Raytracer is used together with path-integrated linear growth calculations to study the amplification and propagation characteristics of EMIC waves within the plasmaspheric plume formed during the 9 June 2001 event. Cold multi-ion plasma is assumed for raytracing

  9. Combined electric field and gap junctions on propagation of action potentials in cardiac muscle and smooth muscle in PSpice simulation.

    PubMed

    Sperelakis, Nicholas

    2003-10-01

    Propagation of action potentials in cardiac muscle and smooth muscle was simulated using the PSpice program. Excitation was transmitted from cell to cell along a strand of 6 cells (cardiac muscle) or 10 cells (smooth muscle) either not connected (control) or connected by low-resistance tunnels (gap-junction connexons). A significant negative cleft potential (V(jc)) develops in the narrow junctional cleft when the pre-JM fires. V(jc) depolarizes the postjunctional membrane (post-JM) to threshold by a patch-clamp action. With few connecting tunnels, cell-to-cell transmission by the EF mechanism was facilitated. With many tunnels, propagation was dominated by the low-resistance mechanism, and propagation velocity (theta) became very fast and nonphysiological. In conclusion, when the 2 mechanisms for cell-to-cell transfer of excitation were combined, the two mechanisms facilitated each other in a synergistic manner. When there were many connecting tunnels, the tunnel mechanism was dominant. PMID:14661164

  10. Fault steps and the dynamic rupture process: 2-D numerical simulations of a spontaneously propagating shear fracture

    NASA Astrophysics Data System (ADS)

    Harris, Ruth A.; Archuleta, Ralph J.; Day, Steven M.

    1991-05-01

    Fault steps may have controlled the sizes of the 1966 Parkfield, 1968 Borrego Mountain, 1979 Imperial Valley, 1979 Coyote Lake and the 1987 Superstition Hills earthquakes. This project investigates the effect of fault steps of various geometries on the dynamic rupture process. We have used a finite difference code to simulate spontaneous rupture propagation in two dimensions. We employ a slip-weakening fracture criterion as the condition for rupture propagation and examine how rupture on one plane initiates rupture on parallel fault planes. The geometry of the two parallel fault planes allows for stepover widths of 0.5 to 10.0 km and overlaps of -5 to 5 km. Our results demonstrate that the spontaneous rupture on the first fault segment continues to propagate onto the second fault segment for a range of geometries for both compressional and dilational fault steps. A major difference between the compressional and dilational cases is that a dilational step requires a longer time delay between the rupture front reaching the end of the first fault segment and initiating rupture on the second segment. Therefore, our dynamic study implies that a compressional step will be jumped quickly, whereas a dilational step will cause a time delay leading to a lower apparent rupture velocity. We also find that the rupture is capable of jumping a wider dilational step than a compressional step.
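    The slip-weakening criterion referred to above is commonly written in the linear form below; this is a generic statement of the criterion, not necessarily the exact parameterization used in this study. The shear strength on the fault drops from a peak value tau_p to a residual value tau_r as slip D accumulates over a critical slip-weakening distance D_c:

        \tau(D) =
          \begin{cases}
            \tau_p - (\tau_p - \tau_r)\,\dfrac{D}{D_c}, & D < D_c,\\[4pt]
            \tau_r, & D \ge D_c.
          \end{cases}

    Rupture can nucleate on the second fault segment only where the stress change induced by the first segment raises the shear stress to the peak strength tau_p; the compressional and dilational geometries produce different stress changes on the second segment, which is the source of the different delays.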

  11. Fault steps and the dynamic rupture process: 2-D numerical simulations of a spontaneously propagating shear fracture

    SciTech Connect

    Harris, R.A.; Archuleta, R.J.; Day, S.M.

    1991-05-01

    Fault steps may have controlled the sizes of the 1966 Parkfield, 1968 Borrego Mountain, 1979 Imperial Valley, 1979 Coyote Lake and the 1987 Superstition Hills earthquakes. This project investigates the effect of fault steps of various geometries on the dynamic rupture process. The authors have used a finite difference code to simulate spontaneous rupture propagation in two dimensions. They employ a slip-weakening fracture criterion as the condition for rupture propagation and examine how rupture on one plane initiates rupture on parallel fault planes. The geometry of the two parallel fault planes allows for stepover widths of 0.5 to 10.0 km and overlaps of -5 to 5 km. Results demonstrate that the spontaneous rupture on the first fault segment continues to propagate onto the second fault segment for a range of geometries for both compressional and dilational fault steps. A major difference between the compressional and dilational cases is that a dilational step requires a longer time delay between the rupture front reaching the end of the first fault segment and initiating rupture on the second segment. Therefore, this dynamic study implies that a compressional step will be jumped quickly, whereas a dilational step will cause a time delay leading to a lower apparent rupture velocity. The authors also find that the rupture is capable of jumping a wider dilational step than a compressional step.

  12. Numerical simulation of non-invasive determination of the propagation coefficient in arterial system using two measurements sites

    NASA Astrophysics Data System (ADS)

    Abdessalem, K. B.; Sahtout, W.; Flaud, P.; Gazah, H.; Fakhfakh, Z.

    2007-11-01

    The literature shows a lack of non-invasive methods for computing the propagation coefficient γ, a complex number related to dynamic vascular properties. Its imaginary part is inversely related to the wave speed C through the relationship C = ω/Im(γ), while its real part a, called the attenuation, represents the loss of pulse energy per unit length. In this work, an expression for the propagation coefficient is derived assuming pulsatile flow through a viscoelastic vessel. The effects of the physical and geometrical parameters of the tube are then studied. In a first step, the effect of increasing the reflection coefficient on the determination of the propagation coefficient is investigated. In a second step, we simulate a variation of tube length under physiological conditions. The method developed here is based on the knowledge of instantaneous velocity and radius values at only two sites. It takes into account the presence of a reflection site of unknown reflection coefficient, localised at the distal end of the vessel. The values of wave speed and attenuation obtained with this method are in good agreement with theory. This method has the advantage of being usable for small portions of the arterial tree.
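    A minimal numerical sketch of how wave speed and attenuation follow from a complex propagation coefficient, using only the relations quoted in the abstract (C = ω/Im(γ), attenuation a = Re(γ)); the numerical values of γ and the frequency below are purely illustrative.

        # Wave speed and attenuation from a complex propagation coefficient gamma.
        # The values of gamma and f are illustrative, not taken from the paper.
        import numpy as np

        f = 5.0                      # frequency of the harmonic considered, Hz (illustrative)
        omega = 2.0 * np.pi * f      # angular frequency, rad/s
        gamma = 0.08 + 4.0j          # propagation coefficient, 1/m (illustrative)

        a = gamma.real               # attenuation: loss of pulse energy per unit length (1/m)
        C = omega / gamma.imag       # wave speed from C = omega / Im(gamma), m/s

        print(f"attenuation a = {a:.3f} 1/m, wave speed C = {C:.2f} m/s")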

  13. 3D FDM Simulation of Seismic Wave Propagation for Nankai Trough Earthquake: Effects of Topography and Seawater

    NASA Astrophysics Data System (ADS)

    Todoriki, M.; Furumura, T.; Maeda, T.

    2013-12-01

    We have studied the effect of topography and a seawater layer on seismic wave propagation, working towards a high-resolution 3D FDM simulation of the strong ground motions expected from future large subduction zone earthquakes along the Nankai Trough. Although most former seismic wave propagation simulations did not include a seawater layer in their models, some recent studies have stressed the importance of topography and a seawater layer for the simulation of strong ground motions (e.g., Petukhin et al., 2010; Nakamura, 2012; Maeda et al., 2013). In this study, we examined the effect of these two features on seismic wave propagation by introducing high-resolution topography with a seawater layer over a wide frequency band. The 3D FDM simulation volume is 1200 km x 1000 km in the horizontal directions and 200 km in depth, which entirely covers southwestern Japan, centered at 136E and 34.8N. This model was discretized with a small grid interval of 0.5 km in the horizontal directions and 0.25 km in depth. We used 2400 nodes of the K computer, about 2.9% of its total resources, with a total memory of 1 TB. We used the 3D velocity model of Koketsu et al. (2008) and an original source-rupture model from a recent study on the expansion of the source-rupture area of the 1707 Hoei earthquake (Furumura et al., 2011). The simulation results show that the effect of the seawater layer on ground motion is small over most of Japan, changing the seismic wave amplitude by less than ±20%. However, around the northern Kanto area, characterized by a belt-shaped anomalous zone, the amplitude of ground motion grows twice as large as that without seawater. This is possibly caused by amplification of the surface waves generated on the Philippine Sea plate in the Suruga Trough, located at the eastern end of the Nankai Trough. It is quite likely that the amplitude of surface wave
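    As a quick check on the scale of the model quoted above, the stated grid dimensions and spacings imply roughly 3.8 billion grid points, or about 1.6 million points per compute node; a short sketch of that arithmetic (in Python):

        # Back-of-the-envelope size of the 3D FDM grid described in the abstract.
        nx = int(1200 / 0.5)    # 1200 km at 0.5 km horizontal spacing
        ny = int(1000 / 0.5)    # 1000 km at 0.5 km horizontal spacing
        nz = int(200 / 0.25)    # 200 km depth at 0.25 km vertical spacing
        n_points = nx * ny * nz
        print(f"{n_points:,} grid points, ~{n_points // 2400:,} per K computer node")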

  14. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
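    As a rough illustration of Monte Carlo reliability estimation with non-constant (Weibull) failure rates, the sketch below estimates the mission reliability of a hypothetical 2-out-of-3 redundant system by direct sampling; it uses none of the variance-reduction techniques, behavioral decomposition, or fault/error-handling models of MC-HARP, and all parameter values are illustrative.

        # Direct (non-variance-reduced) Monte Carlo reliability estimate for a
        # hypothetical 2-out-of-3 system with Weibull component lifetimes.
        import numpy as np

        rng = np.random.default_rng(1)
        n_trials = 200_000
        mission_time = 1000.0          # hours (illustrative)
        shape, scale = 1.5, 5000.0     # Weibull shape > 1 gives an increasing failure rate

        # Sample three independent component lifetimes per trial.
        lifetimes = scale * rng.weibull(shape, size=(n_trials, 3))

        # The system survives the mission if at least 2 of its 3 components outlive it.
        survivors = np.sum(lifetimes > mission_time, axis=1)
        reliability = np.mean(survivors >= 2)
        print(f"Estimated mission reliability: {reliability:.5f}")

    For the highly reliable systems targeted by MC-HARP, this direct estimator would need an impractical number of trials, which is precisely why variance reduction techniques are used.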

  15. Neurite, a Finite Difference Large Scale Parallel Program for the Simulation of Electrical Signal Propagation in Neurites under Mechanical Loading

    PubMed Central

    García-Grajales, Julián A.; Rucabado, Gabriel; García-Dopico, Antonio; Peña, José-María; Jérusalem, Antoine

    2015-01-01

    With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is characterized only by purely mechanistic criteria, i.e., functions of quantities such as stress, strain, or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative, mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The explicit and implicit solvers implemented in Neurite were therefore parallelized using graphics processing units in order to reduce the simulation costs of large-scale scenarios. Cable Theory and Hodgkin-Huxley models were implemented to account for the electrophysiologically passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite mechanical behavior within its surrounding medium was adopted as the link between electrophysiology and mechanics. This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented
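    Neurite itself couples Cable Theory, Hodgkin-Huxley kinetics, and a mechanical model on GPUs; the sketch below only illustrates its simplest ingredient, an explicit finite-difference update of the passive cable equation, with illustrative parameter values that are not taken from the paper.

        # Explicit finite-difference integration of the passive cable equation
        #   Cm dV/dt = (a / (2 Ri)) d2V/dx2 - V / Rm
        # All parameter values are illustrative and not taken from Neurite.
        import numpy as np

        a  = 1e-6     # neurite radius, m
        Ri = 1.0      # axial resistivity, ohm*m
        Cm = 0.01     # specific membrane capacitance, F/m^2
        Rm = 1.0      # specific membrane resistance, ohm*m^2

        dx, dt = 50e-6, 10e-6   # space step (m) and time step (s); dt is below the
        nx, nt = 100, 2000      # explicit stability limit dx^2 * Ri * Cm / a = 25e-6 s

        V = np.zeros(nx)               # membrane potential relative to rest, V
        D = a / (2.0 * Ri * Cm)        # effective diffusion coefficient, m^2/s

        for _ in range(nt):
            V[0] = 10e-3                                    # voltage clamp at the proximal end
            lap = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dx**2  # discrete second derivative
            V[1:-1] += dt * (D * lap - V[1:-1] / (Rm * Cm))
            V[-1] = V[-2]                                   # sealed (zero-flux) distal end

        lam = np.sqrt(Rm * a / (2.0 * Ri))                  # analytical length constant
        print(f"length constant ~ {lam * 1e3:.2f} mm; V at 1 mm: {V[int(1e-3 / dx)] * 1e3:.2f} mV")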

  16. 3D numerical simulation of the long range propagation of acoustical shock waves through a heterogeneous and moving medium

    SciTech Connect

    Luquet, David; Marchiano, Régis; Coulouvrat, François

    2015-10-28

    Many situations involve the propagation of acoustical shock waves through flows. Natural sources such as lightning, volcano explosions, or meteoroid atmospheric entries emit loud, low frequency, and impulsive sound that is influenced by atmospheric wind and turbulence. The sonic boom produced by a supersonic aircraft and explosion noises are examples of intense anthropogenic sources in the atmosphere. The Buzz-Saw-Noise produced by turbo-engine fan blades rotating at supersonic speed also propagates in a fast flow within the engine nacelle. Simulating these situations is challenging, given the 3D nature of the problem, the long range propagation distances relative to the central wavelength, the strongly nonlinear behavior of shocks associated with a wide-band spectrum, and finally the key role of the flow motion. With this in view, the so-called FLHOWARD (acronym for FLow and Heterogeneous One-Way Approximation for Resolution of Diffraction) method is presented with three-dimensional applications. A scalar nonlinear wave equation is established in the framework of atmospheric applications, assuming weak heterogeneities and a slow wind. It takes into account diffraction, absorption and relaxation properties of the atmosphere, quadratic nonlinearities including weak shock waves, heterogeneities of the medium in sound speed and density, and the presence of a flow (assuming a mean stratified wind and 3D turbulent flow fluctuations of smaller amplitude). This equation is solved in the framework of the one-way method. A split-step technique allows the splitting of the nonlinear wave equation into simpler equations, each corresponding to a physical effect. Each sub-equation is solved using an analytical method if possible, and finite differences otherwise. Nonlinear effects are solved in the time domain, and the others in the frequency domain. Homogeneous diffraction is handled by means of the angular spectrum method. The ground is assumed perfectly flat and rigid. Due to the 3D
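    To illustrate just one sub-step of the split-step scheme described above, namely the homogeneous-diffraction step handled by the angular spectrum method, the sketch below propagates a monochromatic 2D field over a single step dz in a homogeneous medium; the grid, source, and wave parameters are illustrative, and none of the nonlinear, heterogeneous, or flow terms of FLHOWARD are included.

        # Angular-spectrum propagation of a monochromatic field over one step dz.
        # This covers only the homogeneous diffraction sub-step of a split-step
        # scheme; all parameter values are illustrative.
        import numpy as np

        c0, f = 340.0, 1000.0               # sound speed (m/s) and frequency (Hz)
        k0 = 2.0 * np.pi * f / c0           # acoustic wavenumber, rad/m
        n, L, dz = 256, 2.0, 0.5            # grid points, transverse aperture (m), step (m)

        x = np.linspace(-L / 2, L / 2, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing="ij")
        field = np.exp(-(X**2 + Y**2) / (2 * 0.05**2)).astype(complex)   # Gaussian source plane

        kx = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)          # transverse wavenumbers
        KX, KY = np.meshgrid(kx, kx, indexing="ij")
        kz = np.sqrt((k0**2 - KX**2 - KY**2).astype(complex))  # evanescent parts become imaginary

        # One diffraction step: FFT, multiply by the propagator, inverse FFT.
        field_dz = np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))
        print("peak amplitude after one step:", np.abs(field_dz).max())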

  17. 3D numerical simulation of the long range propagation of acoustical shock waves through a heterogeneous and moving medium

    NASA Astrophysics Data System (ADS)

    Luquet, David; Marchiano, Régis; Coulouvrat, François

    2015-10-01

    Many situations involve the propagation of acoustical shock waves through flows. Natural sources such as lightning, volcano explosions, or meteoroid atmospheric entries emit loud, low frequency, and impulsive sound that is influenced by atmospheric wind and turbulence. The sonic boom produced by a supersonic aircraft and explosion noises are examples of intense anthropogenic sources in the atmosphere. The Buzz-Saw-Noise produced by turbo-engine fan blades rotating at supersonic speed also propagates in a fast flow within the engine nacelle. Simulating these situations is challenging, given the 3D nature of the problem, the long range propagation distances relative to the central wavelength, the strongly nonlinear behavior of shocks associated with a wide-band spectrum, and finally the key role of the flow motion. With this in view, the so-called FLHOWARD (acronym for FLow and Heterogeneous One-Way Approximation for Resolution of Diffraction) method is presented with three-dimensional applications. A scalar nonlinear wave equation is established in the framework of atmospheric applications, assuming weak heterogeneities and a slow wind. It takes into account diffraction, absorption and relaxation properties of the atmosphere, quadratic nonlinearities including weak shock waves, heterogeneities of the medium in sound speed and density, and the presence of a flow (assuming a mean stratified wind and 3D turbulent flow fluctuations of smaller amplitude). This equation is solved in the framework of the one-way method. A split-step technique allows the splitting of the nonlinear wave equation into simpler equations, each corresponding to a physical effect. Each sub-equation is solved using an analytical method if possible, and finite differences otherwise. Nonlinear effects are solved in the time domain, and the others in the frequency domain. Homogeneous diffraction is handled by means of the angular spectrum method. The ground is assumed perfectly flat and rigid. Due to the 3D

  18. Functional requirements for the man-vehicle systems research facility. [identifying and correcting human errors during flight simulation

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.

    1980-01-01

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

  19. On the continuum-scale simulation of gravity-driven fingers with hysteretic Richards equation: Truncation error induced numerical artifacts

    SciTech Connect

    Eliassi, Mehdi; Glass Jr., Robert J.

    2000-03-08

    The authors consider the ability of the numerical solution of Richards equation to model gravity-driven fingers. Although gravity-driven fingers can be easily simulated using a partial downwind averaging method, they find the fingers are purely artificial, generated by the combined effects of truncation error induced oscillations and capillary hysteresis. Since Richards equation can only yield a monotonic solution for standard constitutive relations and constant flux boundary conditions, it is not the valid governing equation to model gravity-driven fingers, and therefore is also suspect for unsaturated flow in initially dry, highly nonlinear, and hysteretic media where these fingers occur. However, analysis of truncation error at the wetting front for the partial downwind method suggests the required mathematical behavior of a more comprehensive and physically based modeling approach for this region of parameter space.
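    For reference, the governing equation discussed above is usually written in the mixed form below; this is the standard statement of the Richards equation and is independent of the partial downwind averaging analyzed in the paper:

        \frac{\partial \theta(h)}{\partial t}
          = \nabla \cdot \bigl[\, K(h)\, \nabla h \,\bigr] + \frac{\partial K(h)}{\partial z},

    where θ is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, and z the vertical coordinate (positive upward).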

  20. Analysis of shock wave propagation from explosives using computational simulations and artificial schlieren imaging

    NASA Astrophysics Data System (ADS)

    Armstrong, Christopher; Hargather, Michael

    2014-11-01

    Computational simulations of explosions are performed using the hydrocode CTH and analyzed using artificial schlieren imaging. The simulations include one- and three-dimensional free-air blasts and a confined geometry. Artificial schlieren images are produced from the density fields calculated via the simulations. These images are used to simulate traditional and focusing schlieren images of explosions and are compared to actual high-speed schlieren images of similar explosions. Computational streak images are produced to identify time-dependent features in the blast field. The streak images are used to study the interaction between secondary shock waves and the explosive product gas contact surface.
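    A minimal sketch of how an artificial schlieren image can be built from a simulated density field: a knife-edge schlieren image responds, to a first approximation, to one component of the density (refractive-index) gradient, and a gradient-magnitude image is a common "numerical schlieren". The density field below is a synthetic placeholder, not CTH output.

        # Artificial schlieren from a 2D density field. The field here is a
        # synthetic stand-in for hydrocode output; all values are illustrative.
        import numpy as np

        nx = 400
        x = np.linspace(-1.0, 1.0, nx)
        X, Y = np.meshgrid(x, x, indexing="ij")

        # Synthetic "blast" density field: a thin dense shell around the origin.
        r = np.hypot(X, Y)
        rho = 1.0 + 0.8 * np.exp(-((r - 0.6) / 0.03) ** 2)

        drho_dx, drho_dy = np.gradient(rho, x, x)        # density gradient components
        schlieren_x = drho_dx / np.abs(drho_dx).max()    # vertical-knife-edge schlieren analogue
        grad_mag = np.hypot(drho_dx, drho_dy)            # gradient magnitude ("numerical schlieren")

        print("schlieren_x range:", schlieren_x.min(), schlieren_x.max())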

  1. Extending the time scale in molecular dynamics simulations: Propagation of ripples in graphene

    NASA Astrophysics Data System (ADS)

    Tewary, V. K.

    2009-10-01

    A technique using causal Green’s function is proposed for extending and bridging multiple time scales in molecular dynamics for modeling time-dependent processes at the atomistic level in nanomaterials and other physical, chemical, and biological systems. The technique is applied to model propagation of a pulse in a one-dimensional lattice of nonlinear oscillators and ripples in graphene from femtoseconds to microseconds. It is shown that, at least in the vibration problems, the technique can accelerate the convergence of molecular dynamics and extend the time scales by eight orders of magnitude.

  2. Simulation of elastic wave propagation in geological media: Intercomparison of three numerical methods

    NASA Astrophysics Data System (ADS)

    Biryukov, V. A.; Miryakha, V. A.; Petrov, I. B.; Khokhlov, N. I.

    2016-06-01

    For wave propagation in heterogeneous media, we compare numerical results produced by grid-characteristic methods on structured rectangular and unstructured triangular meshes and by a discontinuous Galerkin method on unstructured triangular meshes as applied to the linear system of elasticity equations in the context of direct seismic exploration with an anticlinal trap model. It is shown that the resulting synthetic seismograms are in reasonable quantitative agreement. The grid-characteristic method on structured meshes requires more nodes for approximating curved boundaries, but it has a higher computation speed, which makes it preferable for the given class of problems.

  3. Curvilinear Grid Finite-Difference Method to Simulate Seismic Wave Propagation with Topographic Fluid-Solid Interface at Sea Bottom

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Zhang, W.; Chen, X.

    2014-12-01

    This paper presents a curvilinear grid finite difference method for modeling seismic wave propagation with a topographic fluid (acoustic)-solid (elastic) interface. The curvilinear grid finite difference method has been successfully used for seismic wave simulation with free surface topography and for earthquake dynamics with complex fault geometry. For seismic wave simulation with a topographic sea bottom, we use the curvilinear grid to conform the grid to the sea bottom and avoid artificial scattering due to the staircase approximation. We solve the acoustic wave equation in the water layer and the elastic wave equation in the solid below the sea bottom. The fluid-solid interface condition is implemented by decomposing the velocity and stress components into directions normal and parallel to the sea bottom. The results exhibit high accuracy in comparison with analytical solutions for flat interfaces, and the method also works very well when the fluid-solid interface is topographic. The scheme can be easily extended to 3-D.
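    The fluid-solid interface condition mentioned above amounts to the standard matching conditions at an acoustic-elastic boundary, written here for reference in terms of the local interface normal n (a generic statement, not the paper's curvilinear-grid discretization):

        \mathbf{v}^{\mathrm{fluid}} \cdot \mathbf{n} = \mathbf{v}^{\mathrm{solid}} \cdot \mathbf{n},
        \qquad
        \boldsymbol{\sigma}^{\mathrm{solid}}\, \mathbf{n} = -p\, \mathbf{n},

    i.e., the normal particle velocity is continuous, the normal traction in the solid equals minus the acoustic pressure, and the shear traction vanishes at the sea bottom.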

  4. Simulation of Multi-Dimensional Signals in the Optical Domain: Quantum-Classical Feedback in Nonlinear Exciton Propagation.

    PubMed

    Richter, Martin; Fingerhut, Benjamin P

    2016-07-12

    We present an algorithm for the simulation of nonlinear 2D spectra of molecular systems in the UV-vis spectral region from atomistic molecular dynamics trajectories subject to nonadiabatic relaxation. We combine the nonlinear exciton propagation (NEP) protocol, which relies on a quasiparticle approach, with the surface hopping methodology to account for quantum-classical feedback during the dynamics. Phenomena such as the dynamic Stokes shift due to nuclear relaxation, spectral diffusion, and population transfer among electronic states are thus naturally included and benchmarked on a model of two electronic states coupled to a harmonic coordinate and a classical heat bath. The capabilities of the algorithm are further demonstrated for the bichromophore diphenylmethane, which is described in a fully microscopic fashion including all 69 classical nuclear degrees of freedom. We demonstrate that the simulated 2D signals are especially sensitive to the applied theoretical approximations (i.e., the choice of active space in the CASSCF method) even where the population dynamics appear comparable. PMID:27248511

  5. Effect of gas adsorption on acoustic wave propagation in MFI zeolite membrane materials: experiment and molecular simulation.

    PubMed

    Manga, Etoungh D; Blasco, Hugues; Da-Costa, Philippe; Drobek, Martin; Ayral, André; Le Clezio, Emmanuel; Despaux, Gilles; Coasne, Benoit; Julbe, Anne

    2014-09-01

    The present study reports on the development of a characterization method of porous membrane materials which consists of considering their acoustic properties upon gas adsorption. Using acoustic microscopy experiments and atomistic molecular simulations for helium adsorbed in a silicalite-1 zeolite membrane layer, we showed that acoustic wave propagation could be used, in principle, for controlling the membranes operando. Molecular simulations, which were found to fit experimental data, showed that the compressional modulus of the composite system consisting of silicalite-1 with adsorbed He increases linearly with the He adsorbed amount while its shear modulus remains constant in a large range of applied pressures. These results suggest that the longitudinal and Rayleigh wave velocities (VL and VR) depend on the He adsorbed amount whereas the transverse wave velocity VT remains constant. PMID:25089584
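    The link between the adsorbed amount, the elastic moduli, and the measured velocities rests on the standard isotropic relations below, quoted as background (the Rayleigh speed expression is the usual approximation in terms of the Poisson ratio ν):

        V_L = \sqrt{\frac{K + \tfrac{4}{3}\,G}{\rho}}, \qquad
        V_T = \sqrt{\frac{G}{\rho}}, \qquad
        V_R \approx \frac{0.862 + 1.14\,\nu}{1 + \nu}\, V_T,

    so with the shear modulus G (and, to a good approximation, the density ρ) unchanged, V_T is insensitive to adsorption, while V_L tracks the increase in the compressional modulus and V_R shifts through its dependence on ν.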

  6. Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media

    NASA Astrophysics Data System (ADS)

    Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir

    2015-01-01

    This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
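    A minimal sketch of FFT-based interpolation of the kind used here to couple coarse- and fine-grid variables: the spectrum of a periodic coarse-grid trace is zero-padded, which refines the sampling while leaving the resolved low spatial frequencies untouched and implicitly filtering out frequencies the coarse grid cannot represent. The grid sizes and test signal are illustrative, and this is not the paper's implementation.

        # FFT-based (spectral) interpolation of a periodic coarse-grid trace onto a
        # grid refined by a factor of 2, via zero-padding of the Fourier spectrum.
        import numpy as np

        n_coarse, refine = 64, 2
        x_coarse = np.arange(n_coarse) / n_coarse
        u_coarse = np.sin(2 * np.pi * 3 * x_coarse) + 0.3 * np.cos(2 * np.pi * 7 * x_coarse)

        spec = np.fft.rfft(u_coarse)
        n_fine = n_coarse * refine
        spec_fine = np.zeros(n_fine // 2 + 1, dtype=complex)
        spec_fine[: spec.size] = spec                        # keep resolved frequencies, pad the rest
        u_fine = np.fft.irfft(spec_fine, n=n_fine) * refine  # rescale for the changed transform length

        x_fine = np.arange(n_fine) / n_fine
        exact = np.sin(2 * np.pi * 3 * x_fine) + 0.3 * np.cos(2 * np.pi * 7 * x_fine)
        print("max interpolation error:", np.max(np.abs(u_fine - exact)))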

  7. A two-scale generalized finite element method for fatigue crack propagation simulations utilizing a fixed, coarse hexahedral mesh

    NASA Astrophysics Data System (ADS)

    O'Hara, P.; Hollkamp, J.; Duarte, C. A.; Eason, T.

    2016-01-01

    This paper presents a two-scale extension of the generalized finite element method (GFEM) which allows for static fracture analyses as well as fatigue crack propagation simulations on fixed, coarse hexahedral meshes. The approach is based on the use of specifically-tailored enrichment functions computed on-the-fly through the use of a fine-scale boundary value problem (BVP) defined in the neighborhood of existing mechanically-short cracks. The fine-scale BVP utilizes tetrahedral elements, and thus offers the potential for the use of a highly adapted fine-scale mesh in the regions of crack fronts capable of generating accurate enrichment functions for use in the coarse-scale hexahedral model. In this manner, automated hp-adaptivity which can be used for accurate fracture analyses, is now available for use on coarse, uniform hexahedral meshes without the requirements of irregular meshes and constrained approximations. The two-scale GFEM approach is verified and compared against alternative approaches for static fracture analyses, as well as mixed-mode fatigue crack propagation simulations. The numerical examples demonstrate the ability of the proposed approach to deliver accurate results even in scenarios involving multiple discontinuities or sharp kinks within a single computational element. The proposed approach is also applied to a representative panel model similar in design and complexity to that which may be used in the aerospace community.

  8. Local time–space mesh refinement for simulation of elastic wave propagation in multi-scale media

    SciTech Connect

    Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir

    2015-01-15

    This paper presents an original approach to local time–space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; and the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.

  9. Continuous wave simulations on the propagation of electromagnetic fields through the human head.

    PubMed

    Elloian, Jeffrey M; Noetscher, Gregory M; Makarov, Sergey N; Pascual-Leone, Alvaro

    2014-06-01

    Characterizing the human head as a propagation medium is vital for the design of both on-body and implanted antennas and radio-frequency sensors. The following problem has been addressed: find the best radio-frequency path through the brain for a given receiver position, on top of the sinus cavity. Two parameters, transmitter position and radiating frequency, should be optimized simultaneously such that 1) the propagation path through the brain is the longest; and 2) the received power is maximized. To solve this problem, we have performed a systematic and comprehensive study of the electromagnetic fields excited in the head by small on-body magnetic dipoles (small coil antennas). An anatomically accurate high-fidelity head mesh has been generated from the Visible Human Project data. The base radiator was constructed of two orthogonal magnetic dipoles in quadrature, which enables us to create a directive beam into the head. We have found at least one optimum solution. This solution implies that a distinct RF channel may be established in the brain at a certain frequency and transmitter location. PMID:24845277

  10. The effect of the Gulf Stream current field on wave propagation onto South East Florida reefs, studied with SWAN model simulations

    NASA Astrophysics Data System (ADS)

    Gravois, U.; Rogers, W. E.; Sheremet, A.; Jensen, T. G.

    2012-12-01

    This study focuses on the prediction of waves and surf on the nearshore reefs of South East Florida. The edge of this reef tract, outside Biscayne Bay, Miami, has a steep transition (1:30) from deep to shallow water and also marks the western wall of the Gulf Stream. Geographically, the area is bordered by Florida, Cuba, and the Bahamas, which block the propagation of swell energy and limit the fetch length in all directions except from the north. Related work by the authors on model hindcast validation for this area, using HF radar and in situ data, exposed a tendency for the wave model SWAN to overpredict wave heights on these nearshore reefs for some NE swell events. Based on the findings of the hindcast validation, a series of theoretical SWAN simulations is set up to investigate the sensitivity of nearshore modeled wave heights to the deep water wave direction and to the effect of coupling with the Gulf Stream surface currents. SWAN is run on an outer wave grid centered on the nearshore reefs of interest and forced with a JONSWAP spectrum that is uniform across all of the boundaries for a suite of wave directions and frequencies. The output of the outer grid is used to force a higher resolution inner grid, run with and without Gulf Stream surface current coupling. Bulk wave parameters are output at a nearshore point location on the reef tract for analysis. There are several interesting findings as a result of this study. First, there is only a narrow swell window that allows waves to propagate into the nearshore study location. This implies that a relatively small error in deep water swell angle could result in significant differences in the nearshore wave heights and is likely the source of error for the hindcast validation. Secondly, the swell window shifts significantly with the inclusion of the Gulf Stream current field. Gulf Stream refraction has more effect on shorter period wave forcing, so much so that the optimal swell window is from the

  11. Reduction of systematic errors in regional climate simulations of the summer monsoon over East Asia and the western North Pacific by applying the spectral nudging technique

    NASA Astrophysics Data System (ADS)

    Cha, Dong-Hyun; Lee, Dong-Kyou

    2009-07-01

    In this study, we investigate the systematic errors in a 28-year regional climate simulation of the summer monsoon over East Asia and the western North Pacific (WNP) and the impact of the spectral nudging technique (SNT) on the reduction of those errors. The experiment in which the SNT is not applied (the CTL run) has large systematic errors in the seasonal mean climatology, such as overestimated precipitation, a weakened subtropical high, and an enhanced low-level southwesterly over the subtropical WNP, while the experiment using the SNT (the SP run) results in considerably smaller systematic errors. In the CTL run, the systematic error of simulated precipitation over the ocean increases significantly after mid-June, since the CTL run cannot reproduce the principal intraseasonal variation of summer monsoon precipitation. The SP run appropriately captures the spatial distribution as well as the temporal variation of the principal empirical orthogonal function mode, and therefore the systematic error over the ocean does not increase after mid-June. The systematic error of simulated precipitation over the subtropical WNP in the CTL run results from an unreasonable positive feedback between precipitation and surface latent heat flux induced by the warm sea surface temperature anomaly. Since the SNT reduces this positive feedback by improving the monsoon circulations, the SP run can considerably reduce the systematic errors of simulated precipitation as well as of the atmospheric fields over the subtropical WNP region.
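    The SNT referred to above is commonly implemented by adding a relaxation term to the tendency of the large-scale spectral coefficients of selected prognostic variables; this is a generic statement of the technique, not necessarily the exact formulation used in this study:

        \frac{\partial \alpha_{mn}}{\partial t}
          = \cdots \; - \; \eta_{mn}\,\bigl(\alpha_{mn} - \alpha_{mn}^{\mathrm{drv}}\bigr),
        \qquad \eta_{mn} > 0 \ \text{only for}\ |m| \le m_0,\ |n| \le n_0,

    where α_mn are the spectral coefficients of a nudged variable in the regional model, α_mn^drv the corresponding coefficients of the driving large-scale fields, and the nudging coefficients η_mn are nonzero only for the largest scales (small zonal and meridional wavenumbers m, n).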

  12. Experimental study on impact-induced seismic wave propagating through quartz sand simulating asteroid regolith

    NASA Astrophysics Data System (ADS)

    Matsue, Kazuma; Arakawa, Masahiko; Yasui, Minami; Matsumoto, Rie; Tsujido, Sayaka; Takano, Shota; Hasegawa, Sunao

    2015-08-01

    Introduction: Recent spacecraft surveys have revealed that asteroid surfaces are covered with regolith made of boulders and pebbles, such as that found on the asteroid Itokawa. It was also found that surface morphologies formed on the regolith layer of asteroids have been modified. For example, high-resolution images of the asteroid Eros revealed evidence of downslope movement of the regolith layer, which could cause the degradation and erasure of small impact craters. One possible process to explain these observations is collapse of the regolith layer caused by seismic vibration after projectile impacts. The impact-induced seismic wave might therefore be an important physical process affecting the morphology of the regolith layer on asteroid surfaces. It is thus important to know the relationship between the impact energy and the impact-induced seismic wave, so in this study we carried out impact cratering experiments in order to observe the seismic wave propagating through the target far from the impact crater. Experimental method: Impact cratering experiments were conducted using a single-stage vertical gas gun at Kobe University and a two-stage vertical gas gun at ISAS. We used quartz s