Numerical study of error propagation in Monte Carlo depletion simulations
Wyant, T.; Petrovic, B.
2012-07-01
Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs was used to investigate the true and apparent variance in k{sub eff}, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters. (authors)
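The replica-run comparison described above can be sketched numerically. Everything below (the k_eff values and the per-run tally uncertainties) is fabricated illustrative data, not output from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_replicas = 19

# Simulated k_eff estimates from 19 replica runs differing only in RNG seed
# (illustrative values; a real study would read these from code output).
k_eff = 1.000 + 0.001 * rng.standard_normal(n_replicas)

# "True" variance: the observed spread across independent replicas.
true_var = np.var(k_eff, ddof=1)

# "Apparent" variance: each run's own reported tally standard deviation
# (again fabricated here for illustration).
apparent_sd = 0.0008 + 0.0002 * rng.random(n_replicas)
ratio = true_var / np.mean(apparent_sd**2)
```

A ratio well above 1 would indicate that the per-run statistical estimate understates the true run-to-run uncertainty, which is the kind of regularity such a replica study looks for.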
Error propagation in a digital avionic processor: A simulation-based study
NASA Technical Reports Server (NTRS)
Lomelino, D.; Iyer, R. K.
1986-01-01
An experimental analysis to study error propagation from the gate to the chip level is described. The target system is the CPU in the Bendix BDX-930, an avionic miniprocessor. Error activity data for the study was collected via a gate-level simulation. A family of distributions to characterize the error propagation, both within the chip and at the pins, was then generated. Based on these distributions, measures of error propagation and severity were defined. The analysis quantifies the dependency of the measured error propagation on the location of the fault and the type of instruction/microinstruction executed.
Simulation of radar rainfall errors and their propagation into rainfall-runoff processes
NASA Astrophysics Data System (ADS)
Aghakouchak, A.; Habib, E.
2008-05-01
Radar rainfall data, compared with rain gauge measurements, provide higher spatial and temporal resolution. However, radar data obtained from reflectivity patterns are subject to various errors, such as errors in the Z-R relationship, the vertical profile of reflectivity, and spatial and temporal sampling. Characterization of such uncertainties in radar data and their effects on hydrologic simulations (e.g., streamflow estimation) is a challenging issue. This study aims to analyze radar rainfall error characteristics empirically to gain information on the properties of the random error's representativeness and its temporal and spatial dependency. To empirically analyze error characteristics, high-resolution and accurate rain gauge measurements are required. The Goodwin Creek watershed, located in the northern part of Mississippi, is selected for this study due to the availability of a dense rain gauge network. A total of 30 rain gauge measurement stations within the Goodwin Creek watershed and NWS Level II radar reflectivity data obtained from the WSR-88D Memphis radar station, with a temporal resolution of 5 min and a spatial resolution of 1 km2, are used in this study. Comparisons of radar data and rain gauge measurements are used to estimate the overall bias, statistical characteristics, and spatio-temporal dependency of radar rainfall error fields. This information is then used to simulate realizations of radar error patterns with multiple correlated variables using the Monte Carlo method and the Cholesky decomposition. The generated error fields are then imposed on radar rainfall fields to obtain statistical realizations of input rainfall fields. Each simulated realization is then fed as input to a distributed, physically based hydrological model, resulting in an ensemble of predicted runoff hydrographs. The study analyzes the propagation of radar errors into the simulation of different rainfall-runoff processes such as streamflow, soil moisture, infiltration, and overland flooding.
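The correlated-error simulation step named in this abstract (Monte Carlo sampling combined with Cholesky decomposition) can be sketched as follows. The grid size, exponential correlation model, correlation length, and error magnitude are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
nx = ny = 8                            # 8x8 grid of radar pixels (illustrative)

# Pairwise distances between all grid cells.
xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
pts = np.column_stack([xs.ravel(), ys.ravel()])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Assumed exponential spatial correlation with a 3-pixel correlation length.
corr = np.exp(-d / 3.0)
L = np.linalg.cholesky(corr + 1e-10 * np.eye(nx * ny))  # jitter for stability

# One realization of a spatially correlated standard-normal field,
# mapped to multiplicative (lognormal) radar-error factors.
z = L @ rng.standard_normal(nx * ny)
error_field = np.exp(0.2 * z).reshape(ny, nx)
```

Multiplying a radar rainfall field element-wise by `error_field` yields one perturbed input realization; repeating the draw gives the ensemble fed to the hydrological model.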
Error Propagation in a System Model
NASA Technical Reports Server (NTRS)
Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)
2015-01-01
Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
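The covariance-matrix recipe described above can be sketched for a straight-line fit; the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 20)
sigma = 0.5
y = 2.0 + 3.0 * x + sigma * rng.standard_normal(x.size)   # true model: y = 2 + 3x

A = np.column_stack([np.ones_like(x), x])                  # design matrix
coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)

dof = x.size - 2
s2 = res[0] / dof                                          # residual variance estimate
cov = s2 * np.linalg.inv(A.T @ A)                          # parameter covariance matrix
se = np.sqrt(np.diag(cov))                                 # SEs of intercept and slope
```

The standard errors are exactly the square roots of the covariance diagonal, as the abstract states; extending the fit model with a derived target quantity lets the same machinery return its propagated error.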
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consist of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in a Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in the computer program Scout Trajectory Error Propagation (STEP), which is described herein. STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy estimates for the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
Observation error propagation on video meteor orbit determination
NASA Astrophysics Data System (ADS)
SonotaCo
2016-04-01
A new radiant direction error computation method for SonotaCo Network meteor observation data was tested. It uses the single-station observation error, obtained by reference star measurement and trajectory linearity measurement on each video, as its source error value, and propagates this to the radiant and orbit parameter errors via the Monte Carlo simulation method. The resulting error values on a sample data set showed a reasonable error distribution that makes accuracy-based selection feasible. A sample set of selected orbits obtained by this method revealed a sharper concentration of shower meteor radiants than we have ever seen before. The simultaneously observed meteor data sets published by the SonotaCo Network will be revised to include this error value on each record and will be publicly available, along with the computation program, in the near future.
Error Propagation Analysis for Quantitative Intracellular Metabolomics
Tillack, Jana; Paczia, Nicole; Nöh, Katharina; Wiechert, Wolfgang; Noack, Stephan
2012-01-01
Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of intracellular metabolites, including error propagation during metabolome sample processing. Focusing on one specific protocol, we comprehensively investigate all currently known and accessible factors that ultimately impact the accuracy of intracellular metabolite concentration data. All intermediate steps are modeled, and their uncertainty with respect to the final concentration data is rigorously quantified. Finally, on the basis of a comprehensive metabolome dataset of Corynebacterium glutamicum, an integrated error propagation analysis for all parts of the model is conducted, and the most critical steps for intracellular metabolite quantification are detected. PMID:24957773
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., the density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
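The Monte Carlo side of such a worksheet can be sketched in a few lines; the formula and the input means and standard deviations below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Example formula z = x * y / w, with each input sampled from a normal
# distribution at its assumed mean and standard deviation.
x = rng.normal(10.0, 0.1, n)
y = rng.normal(5.0, 0.05, n)
w = rng.normal(2.0, 0.02, n)
z = x * y / w
z_mean, z_sd = z.mean(), z.std(ddof=1)

# First-order analytic propagation for comparison:
# relative errors add in quadrature for products and quotients.
rel = np.sqrt((0.1 / 10.0)**2 + (0.05 / 5.0)**2 + (0.02 / 2.0)**2)
analytic_sd = (10.0 * 5.0 / 2.0) * rel
```

With these small relative errors the Monte Carlo and first-order analytic results agree closely; the Monte Carlo route remains valid when the formula is strongly nonlinear and the analytic expansion is not.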
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation, and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS-recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
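The innermost coding layer mentioned above can be illustrated with a minimal sketch. The abstract does not specify the exact CRC parameters, so the common CRC-16/CCITT-FALSE variant (polynomial 0x1021, initial value 0xFFFF) is assumed here:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial 0x1021 and initial value 0xFFFF
    (the CRC-16/CCITT-FALSE variant, assumed for illustration)."""
    for byte in data:
        crc ^= byte << 8                      # fold next byte into the register
        for _ in range(8):
            if crc & 0x8000:                  # top bit set: shift and XOR poly
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"123456789"
parity = crc16_ccitt(frame)   # 16 parity bits appended to the n-16 info bits
```

The receiver recomputes the CRC over the information bits and compares it with the appended parity; a mismatch flags residual errors that escaped the RS and convolutional decoders.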
Detecting and preventing error propagation via competitive learning.
Silva, Thiago Christiano; Zhao, Liang
2013-05-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. It is an important mechanism for autonomous systems due to its ability to exploit already acquired information while simultaneously exploring new knowledge in the learning space. In these cases, the reliability of the labels is a crucial factor, because mislabeled samples may propagate wrong labels to a portion of, or even the entire, data set. This paper addresses the error propagation problem originated by these mislabeled samples by presenting a mechanism embedded in a network-based (graph-based) semisupervised learning method. The procedure is based on a combined random-preferential walk of particles in a network constructed from the input data set. Particles of the same class cooperate among themselves, while particles of different classes compete with each other to propagate class labels to the whole network. Computer simulations conducted on synthetic and real-world data sets reveal the effectiveness of the model. PMID:23200192
Uncertainty and error in computational simulations
Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.
1997-10-01
The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining, and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built by combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, with examples given for the fields of computational fluid dynamics and heat transfer. The authors argue that a clear distinction should be made between the uncertainty and error that can arise in each of these phases. The present definitions of uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, the authors discuss a coupled-physics example simulation.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
Position error propagation in the simplex strapdown navigation system
NASA Technical Reports Server (NTRS)
1976-01-01
The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long-term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed errors containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.
NASA Astrophysics Data System (ADS)
Hasegawa, Kei; Geller, Robert J.; Hirabayashi, Nobuyasu
2016-06-01
We present a theoretical analysis of the error of synthetic seismograms computed by higher-order finite-element methods (ho-FEMs). We show the existence of a previously unrecognized type of error due to degenerate coupling between waves with the same frequency but different wavenumbers. These results are confirmed by simple numerical experiments using the spectral element method as an example of ho-FEMs. Errors of the type found by this study may occur generally in applications of ho-FEMs.
Inductively Coupled Plasma Mass Spectrometry Uranium Error Propagation
Hickman, D P; Maclean, S; Shepley, D; Shaw, R K
2001-07-01
The Hazards Control Department at Lawrence Livermore National Laboratory (LLNL) uses Inductively Coupled Plasma Mass Spectrometer (ICP/MS) technology to analyze uranium in urine. The ICP/MS used by the Hazards Control Department is a Perkin-Elmer Elan 6000 ICP/MS. The Department of Energy Laboratory Accreditation Program requires that the total error be assessed for bioassay measurements. A previous evaluation of the errors associated with the ICP/MS measurement of uranium demonstrated a ±9.6% error in the range of 0.01 to 0.02 µg/l. However, the propagation of total error for concentrations above and below this level has heretofore been undetermined. This document is an evaluation of the errors associated with the current LLNL ICP/MS method for a more expanded range of uranium concentrations.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation are therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of the fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing just the global state and the fuzzy error.
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to be the baseline error propagation analysis to which Earth-based and Lunar-based radiometric data are added in order to compare different architecture schemes and quantify the benefits of an integrated approach in handling lunar surface mobility applications near the Lunar South Pole or on the lunar farside.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
Dose calibration optimization and error propagation in polymer gel dosimetry
NASA Astrophysics Data System (ADS)
Jirasek, A.; Hilts, M.
2014-02-01
This study reports on the relative precision, relative error, and dose differences observed when using a new full-image calibration technique in NIPAM-based x-ray CT polymer gel dosimetry. The effects of calibration parameters (e.g. gradient thresholding, dose bin size, calibration fit function, and spatial remeshing) on subsequent errors in calibrated gel images are reported. It is found that gradient thresholding, dose bin size, and fit function all play a primary role in affecting errors in calibrated images. Spatial remeshing induces minimal reductions or increases in errors in calibrated images. This study also reports on a full error propagation throughout the CT gel image pre-processing and calibration procedure thus giving, for the first time, a realistic view of the errors incurred in calibrated CT polymer gel dosimetry. While the work is based on CT polymer gel dosimetry, the formalism is valid for and easily extended to MRI or optical CT dosimetry protocols. Hence, the procedures developed within the work are generally applicable to calibration of polymer gel dosimeters.
NASA Astrophysics Data System (ADS)
Sun, Yu; Zhang, Jin
2016-09-01
The error propagation characteristics of different points of a halo orbit are studied in this paper. The condition number of state transition matrix after a period is used to estimate the sensitivity of different positions on a halo orbit to the state errors. Then the covariance propagation method is applied to compute the error covariance matrix under the initial state errors, and the results are validated by Monte Carlo simulation. The variations of the position error are compared with the variations of the condition number. The results show that the variation trend of the position error after a period of that halo orbit is the same as the variation trend of the condition number. Moreover, this identified property is verified by testing the halo orbits with different amplitudes and the halo orbits in systems with different mass ratios. Finally, the variation trends of the condition numbers against propagation time corresponding to different points on the halo orbit are analyzed. It is shown that the largest point in the negative z direction of the north halo orbit and its vicinity are more sensitive to state errors than other positions on the halo orbit, and they should not be selected as the injection point.
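The two quantities compared in this study, the condition number of the state transition matrix and the covariance-propagated state error, can be sketched on a toy transition matrix; the matrix and the initial covariance below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Assumed 2x2 state transition matrix over one period (illustrative only).
Phi = np.array([[1.0, 0.9],
                [0.0, 1.2]])
cond = np.linalg.cond(Phi)            # 2-norm condition number: sensitivity measure

# Covariance propagation of an isotropic initial state error through one period:
# P1 = Phi @ P0 @ Phi.T
P0 = (1e-3)**2 * np.eye(2)
P1 = Phi @ P0 @ Phi.T

# Worst-case error growth factor: for isotropic P0 this equals the largest
# singular value of Phi.
worst_growth = np.sqrt(np.linalg.eigvalsh(P1).max() / P0[0, 0])
```

Points on the orbit whose one-period transition matrix has a larger condition number admit larger worst-case amplification of injection errors, which is the property the study exploits when ranking candidate injection points.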
Rigorous covariance propagation of geoid errors to geodetic MDT estimates
NASA Astrophysics Data System (ADS)
Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.
2012-04-01
The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we investigate the velocity errors resulting from the geoid component in dependence of the harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering acts also on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
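The core operation here, rigorous linear covariance propagation that retains all correlations, can be sketched as follows; the matrices are random small-dimensional stand-ins for the gravity-field VCM and the linearized functional model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_coef, n_vel = 6, 3   # toy dimensions: 6 harmonic coefficients, 3 velocities

# Stand-in for the full variance-covariance matrix of the coefficients
# (symmetric positive-definite by construction).
B = rng.standard_normal((n_coef, n_coef))
C = B @ B.T + n_coef * np.eye(n_coef)

# Stand-in for the linearized mapping v = A @ x from coefficient errors
# to derived geostrophic surface velocities.
A = rng.standard_normal((n_vel, n_coef))

cov_v = A @ C @ A.T                       # full error covariance of v
sigma_v = np.sqrt(np.diag(cov_v))         # velocity standard deviations

# Ignoring correlations keeps only diag(C) and generally gives different sigmas:
sigma_v_nocorr = np.sqrt(np.diag(A @ np.diag(np.diag(C)) @ A.T))
```

Comparing `sigma_v` with `sigma_v_nocorr` is the toy analogue of the study's "using/not using covariances" comparison.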
Cross Section Sensitivity and Propagated Errors in HZE Exposures
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.
2005-01-01
It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interaction with shielding material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, with units of cSv/yr, associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions, and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross sections. We perform this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross sections. Numerical differentiation with respect to the cross sections will be evaluated in a broad class of materials including polyethylene, aluminum, and copper. We will identify the most important cross sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
Optimal control of quaternion propagation errors in spacecraft navigation
NASA Technical Reports Server (NTRS)
Vathsal, S.
1986-01-01
Optimal control techniques are used to drive the numerical error (truncation, roundoff, commutation) in computing the quaternion vector to zero. The normalization of the quaternion is carried out by appropriate choice of a performance index, which can be optimized. The error equations are derived from Friedland's (1978) theoretical development, and a matrix Riccati equation results for the computation of the gain matrix. Simulation results show that a high precision, of the order of 10^-12, can be obtained using this technique in meeting the q^T q = 1 constraint. The performance of the estimator in the presence of the feedback control that maintains the normalization is also studied.
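As a rough illustration of norm-constraint feedback (not Friedland's Riccati-gain formulation), a term of the form k(1 - q^T q)q added to the quaternion kinematics drives the normalization error toward zero during integration; the gain k, body rates, and step size below are hypothetical:

```python
# Minimal sketch: Euler integration of quaternion kinematics with a simple
# feedback term enforcing q.q = 1. Plain Euler alone lets the norm drift;
# the k*(1 - q.q)*q term (an assumed stand-in for the paper's optimal
# controller) pulls it back.
import numpy as np

def omega_matrix(w):
    """4x4 antisymmetric rate matrix for quaternion kinematics."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def propagate(q, w, dt, k=10.0):
    """One Euler step of q' = 0.5*Omega(w)q + k*(1 - q.q)*q."""
    qdot = 0.5 * omega_matrix(w) @ q + k * (1.0 - q @ q) * q
    return q + dt * qdot

q = np.array([1.0, 0.0, 0.0, 0.0])
w = np.array([0.1, 0.2, -0.1])        # rad/s, constant body rate (assumed)
for _ in range(10_000):
    q = propagate(q, w, 1e-3)
norm_error = abs(1.0 - q @ q)         # stays far below the raw Euler drift
```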
On the uncertainty of stream networks derived from elevation data: the error propagation approach
NASA Astrophysics Data System (ADS)
Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.
2010-07-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations), and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in R, the open source software for statistical computing: the geoR package is used to fit the variogram; the gstat package is used to run sequential Gaussian simulation; streams are extracted using the open source GIS SAGA via the RSAGA library. The resulting stream error map (the information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise - usually areas of low local relief and slightly convex terrain (0-10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show high error (H > 0.5) in locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error propagation tool should become a standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small data sets with several hundred points. Scripts and data sets used in this article are available on-line via the
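The per-cell uncertainty measure described above can be sketched as follows; the random boolean "stream maps" are placeholders for the SAGA-extracted networks:

```python
# Sketch: probability of stream occurrence across N simulated DEM
# realizations, summarized per cell as the information entropy of a
# Bernoulli trial (H = 1 bit when p = 0.5, i.e. maximal uncertainty).
import numpy as np

rng = np.random.default_rng(0)
N = 100                                   # number of DEM realizations
streams = rng.random((N, 50, 50)) < 0.3   # placeholder boolean stream maps

p = streams.mean(axis=0)                  # per-cell stream probability

def bernoulli_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

H = bernoulli_entropy(p)
high_error_fraction = (H > 0.5).mean()    # share of cells with H > 0.5,
                                          # the statistic quoted in the study
```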
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies occur for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
NASA Astrophysics Data System (ADS)
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
Molecular dynamics simulation of propagating cracks
NASA Technical Reports Server (NTRS)
Mullins, M.
1982-01-01
Steady state crack propagation is investigated numerically using a model consisting of 236 free atoms in two (010) planes of bcc alpha iron. The continuum region is modeled using the finite element method with 175 nodes and 288 elements. The model shows clear (010) plane fracture to the edge of the discrete region at moderate loads. Analysis of the results obtained indicates that models of this type can provide realistic simulation of steady state crack propagation.
Simulation of guided wave propagation near numerical Brillouin zones
NASA Astrophysics Data System (ADS)
Kijanka, Piotr; Staszewski, Wieslaw J.; Packo, Pawel
2016-04-01
The attractive properties of guided waves provide unique potential for the characterization of incipient damage, particularly in plate-like structures. Among other properties, guided waves can propagate over long distances and can be used to monitor hidden structural features and components. On the other hand, guided propagation brings substantial challenges for data analysis. Signal processing techniques are frequently supported by numerical simulations in order to facilitate problem solution. When employing numerical models, additional sources of error are introduced. These can play a significant role in the design and development of a wave-based monitoring strategy. Hence, the paper presents an investigation of numerical models for guided wave generation, propagation and sensing. A numerical dispersion analysis for guided waves in plates, based on the LISA approach, is presented and discussed. Both dispersion and modal amplitude characteristics are analysed. It is shown that wave propagation in a numerical model resembles propagation in a periodic medium. Consequently, Lamb wave propagation close to the numerical Brillouin zone is investigated and characterized.
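Why a discretized medium behaves like a periodic one can be illustrated with a generic second-order finite-difference scheme for the 1D wave equation (an assumption for illustration, not the LISA scheme itself): its dispersion relation is ω(k) = (2c/Δx)|sin(kΔx/2)|, so the group velocity falls to zero at the numerical Brillouin zone edge k = π/Δx:

```python
# Dispersion of the standard central-difference 1D wave scheme: long waves
# travel at ~c, while waves near the numerical Brillouin zone edge stall.
import numpy as np

c, dx = 1000.0, 0.001                    # wave speed (m/s), grid spacing (m)
k = np.linspace(1e-3, np.pi / dx, 500)   # wavenumbers up to the zone edge

omega = (2.0 * c / dx) * np.abs(np.sin(k * dx / 2.0))
v_phase = omega / k                      # -> c as k -> 0
v_group = np.gradient(omega, k)          # analytically c*cos(k*dx/2)
# v_group -> 0 at k = pi/dx: the grid acts as a periodic medium.
```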
Propagation Of Error And The Reliability Of Global Air Temperature Projections
NASA Astrophysics Data System (ADS)
Frank, P.
2013-12-01
General circulation model (GCM) projections of the impact of rising greenhouse gases (GHGs) on globally averaged annual surface air temperatures are a simple linear extrapolation of GHG forcing, as indicated by their accurate simulation using the equation ΔT = a × 33 K × [(F_0 + ∑_i ΔF_i)/F_0], where F_0 is the total GHG forcing of projection year zero, ΔF_i is the increment of GHG forcing in the i-th year, and a is a variable dimensionless fraction that follows GCM climate sensitivity. Linearity of GCM air temperature projections means that uncertainty propagates step-wise as the root-sum-square of error. The annual average error in total cloud fraction (TCF) resulting from CMIP5 model theory-bias is ±12%, equivalent to ±5 W m-2 uncertainty in the energy state of the projected atmosphere. Propagated uncertainty due to TCF error is always much larger than the projected globally averaged air temperature anomaly, and reaches ±20 C in a centennial projection. CMIP5 GCMs thus have no predictive value.
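The step-wise root-sum-square propagation can be sketched as follows; the per-step uncertainty value is a placeholder, not the paper's calibrated number:

```python
# Root-sum-square accumulation of per-step uncertainties: if each projection
# year contributes an (assumed independent) uncertainty u_i, the accumulated
# uncertainty after n steps is sqrt(sum of u_i**2), growing as sqrt(n) for
# equal steps.
import math

def rss_propagate(step_uncertainties):
    return math.sqrt(sum(u * u for u in step_uncertainties))

u_per_year = 0.42                              # hypothetical per-step value (C)
u_centennial = rss_propagate([u_per_year] * 100)
# 100 equal steps give 10x the single-step uncertainty.
```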
Error propagation in the characterization of atheromatic plaque types based on imaging.
Athanasiou, Lambros S; Rigas, George; Sakellarios, Antonis; Bourantas, Christos V; Stefanou, Kostas; Fotiou, Evangelos; Exarchos, Themis P; Siogkas, Panagiotis; Naka, Katerina K; Parodi, Oberdan; Vozzi, Federico; Teng, Zhongzhao; Young, Victoria E L; Gillard, Jonathan H; Prati, Francesco; Michalis, Lampros K; Fotiadis, Dimitrios I
2015-10-01
Imaging systems transmit and acquire signals and are subject to errors, including various error sources, signal variations, and possible calibration errors. These errors are present in all imaging systems used for atherosclerosis and propagate into the methodologies implemented for the segmentation and characterization of atherosclerotic plaque. In this paper, we present a study of the propagation of imaging errors and image segmentation errors in plaque characterization methods applied to 2D vascular images. More specifically, the maximum error that can be propagated to the plaque characterization results is estimated, assuming worst-case scenarios. The proposed error propagation methodology is validated using methods applied to real datasets, obtained from intravascular ultrasound (IVUS) and optical coherence tomography (OCT) for coronary arteries, and magnetic resonance imaging (MRI) for carotid arteries. The plaque characterization methods have recently been presented in the literature and are able to detect the vessel borders and characterize the atherosclerotic plaque types. Although these methods have been extensively validated using expert annotations as the gold standard, applying the proposed error propagation methodology yields a more realistic validation that takes into account the effects of border detection error and image formation error on the final results. The Pearson's coefficient of the detected plaques changed significantly when the methodology was applied to IVUS and OCT, while there was no variation when it was applied to MRI data.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated
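The per-grid-cell Monte Carlo step described above can be sketched as follows; the PET formula and the inter-variable correlation matrix are placeholders, not Phillips and Marks' formulation (only the kriging SDs are taken from the abstract):

```python
# Sketch: draw correlated input errors scaled by kriging SDs, run the model
# 100 times, and report the coefficient of variation (CV) of the output.
import numpy as np

rng = np.random.default_rng(42)

def pet_model(temp, rh, wind):
    """Placeholder PET model (assumption, not the paper's)."""
    return np.maximum(0.0, 0.3 * temp - 0.05 * rh + 0.8 * wind)

mu = np.array([12.0, 55.0, 3.0])      # kriged temp (C), RH (%), wind (m/s)
sd = np.array([2.6, 8.7, 0.38])       # kriging SDs quoted in the abstract
corr = np.array([[1.0, -0.3, 0.1],    # assumed interpolation-error correlations
                 [-0.3, 1.0, 0.0],
                 [0.1,  0.0, 1.0]])
cov = corr * np.outer(sd, sd)

draws = rng.multivariate_normal(mu, cov, size=100)  # 100 runs per grid cell
pet = pet_model(draws[:, 0], draws[:, 1], draws[:, 2])
cv = pet.std() / pet.mean()           # per-cell uncertainty measure
```

Repeating this at every grid point yields the CV maps the study reports.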
LMSS drive simulator for multipath propagation
NASA Technical Reports Server (NTRS)
Vishakantaiah, Praveen; Vogel, Wolfhard J.
1989-01-01
A three-dimensional drive simulator for the prediction of Land Mobile Satellite Service (LMSS) multipath propagation was developed. It is based on simple physical and geometrical rules and can be used to evaluate effects of scatterer numbers and positions, receiving antenna pattern, and satellite frequency and position. It is shown that scatterers close to the receiver have the most effect and that directive antennas suppress multipath interference.
NASA Technical Reports Server (NTRS)
Weir, Kent A.; Wells, Eugene M.
1990-01-01
The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10 to the 8th ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
Propagation of errors from the sensitivity image in list mode reconstruction
Qi, Jinyi; Huesman, Ronald H.
2003-11-15
List mode image reconstruction is attracting renewed attention. It eliminates the storage of empty sinogram bins. However, a single back projection of all lines of response (LORs) is still necessary for the pre-calculation of a sensitivity image. Since the detection sensitivity depends on the object attenuation and detector efficiency, it must be computed for each study. Exact computation of the sensitivity image can be a daunting task for modern scanners with huge numbers of LORs. Thus, some fast approximate calculation may be desirable. In this paper, we theoretically analyze the error propagation from the sensitivity image into the reconstructed image. The theoretical analysis is based on the fixed point condition of the list mode reconstruction. The non-negativity constraint is modeled using the Kuhn-Tucker condition. With certain assumptions and a first order Taylor series approximation, we derive a closed form expression for the error in the reconstructed image as a function of the error in the sensitivity image. The result provides insights on what kind of error might be allowable in the sensitivity image. Computer simulations show that the theoretical results are in good agreement with the measured results.
Error propagation and scaling for tropical forest biomass estimates.
Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando
2004-01-01
The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
Meshfree Simulations of Ductile Crack Propagations
NASA Astrophysics Data System (ADS)
Li, Shaofan; Simonsen, Cerup B.
2005-03-01
In this work, a meshfree method is used to simulate ductile crack growth and propagation under finite deformation and large scale yielding conditions. A so-called parametric visibility condition and its related particle splitting procedure have been developed to automatically adapt the evolving strong continuity or fracture configuration due to an arbitrary crack growth in ductile materials. It is shown that the proposed meshfree crack adaption and re-interpolation procedure is versatile in numerical simulations, and it can capture some essential features of ductile fracture and ductile crack surface morphology, such as the rough zig-zag pattern of crack surface and the ductile crack front damage zone, which have been difficult to capture in previous numerical simulations.
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis, which then removes further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to reproduce well the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
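The DRGEP importance measure named above can be sketched as a graph search: the overall interaction coefficient R of a species with respect to a target is the maximum over all paths of the product of direct interaction coefficients along the path, which a Dijkstra-style max-product search finds efficiently. The toy species names and edge weights below are assumptions:

```python
# Sketch of DRGEP overall interaction coefficients via max-product search.
# graph: {species: {neighbor: direct interaction coefficient in [0, 1]}}.
import heapq

def drgep_coefficients(graph, target):
    """Return R[s] = max over paths from target to s of the product of
    direct coefficients; species with small R are removal candidates."""
    R = {target: 1.0}
    heap = [(-1.0, target)]          # max-heap via negated coefficients
    while heap:
        negr, s = heapq.heappop(heap)
        r = -negr
        if r < R.get(s, 0.0):
            continue                 # stale heap entry
        for nbr, w in graph.get(s, {}).items():
            cand = r * w
            if cand > R.get(nbr, 0.0):
                R[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return R

# Hypothetical toy mechanism graph:
g = {"fuel": {"A": 0.9, "B": 0.2},
     "A": {"B": 0.5, "C": 0.1},
     "B": {"C": 0.8}}
R = drgep_coefficients(g, "fuel")
# Species with R below a user-set threshold are removed; DRGEPSA then
# screens the borderline species with sensitivity analysis.
```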
Hoogeveen, R. C.; Martens, E. P.; van der Stelt, P. F.; Berkhout, W. E. R.
2015-01-01
Objective. To investigate if software simulation is practical for quantifying random error (RE) in phantom dosimetry. Materials and Methods. We applied software error simulation to an existing dosimetry study. The specifications and the measurement values of this study were brought into the software (R version 3.0.2) together with the algorithm of the calculation of the effective dose (E). Four sources of RE were specified: (1) the calibration factor; (2) the background radiation correction; (3) the read-out process of the dosimeters; and (4) the fluctuation of the X-ray generator. Results. The amount of RE introduced by these sources was calculated on the basis of the experimental values and the mathematical rules of error propagation. The software repeated the calculations of E multiple times (n = 10,000) while attributing the applicable RE to the experimental values. A distribution of E emerged as a confidence interval around an expected value. Conclusions. Credible confidence intervals around E in phantom dose studies can be calculated by using software modelling of the experiment. With credible confidence intervals, the statistical significance of differences between protocols can be substantiated or rejected. This modelling software can also be used for a power analysis when planning phantom dose experiments. PMID:26881200
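The repeated-calculation approach described above can be sketched as follows; the effective-dose formula and the error magnitudes are placeholders, not the study's calibrated values (the study itself used R, this sketch uses Python):

```python
# Sketch: re-evaluate a dose formula n = 10,000 times, each time perturbing
# the experimental values by their random errors, then read a confidence
# interval off the resulting distribution of E.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

reading = 0.85        # nominal dosimeter read-out (placeholder units)
background = 0.05     # nominal background correction
cal = 1.10            # nominal calibration factor

# Four random-error sources, as in the abstract (SDs are assumptions):
cal_e = rng.normal(cal, 0.02, n)       # (1) calibration factor
bkg_e = rng.normal(background, 0.005, n)  # (2) background correction
read_e = rng.normal(reading, 0.01, n)  # (3) dosimeter read-out
gen_e = rng.normal(1.0, 0.03, n)       # (4) X-ray generator fluctuation

E = cal_e * (read_e - bkg_e) * gen_e   # distribution of effective dose
lo, hi = np.percentile(E, [2.5, 97.5]) # 95% confidence interval around E
```

With such intervals in hand, differences between imaging protocols can be tested for statistical significance, as the conclusions describe.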
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (> 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
Theory and Simulation of Field Error Transport.
NASA Astrophysics Data System (ADS)
Dubin, D. H. E.
2007-11-01
The rate at which a plasma escapes across an applied magnetic field B due to symmetry-breaking electric or magnetic "field errors" is revisited. Such field errors cause plasma loss (or compression) in stellarators, tokamaks [H. E. Mynick, Phys. Plasmas 13, 058102 (2006)], and nonneutral plasmas [Eggleston, Phys. Plasmas 14, 012302 (2007); Danielson et al., Phys. Plasmas 13, 055706]. We study this process using idealized simulations that follow guiding centers in given trap fields, neglecting their collective effect on the evolution, but including collisions. Also, the Fokker-Planck equation describing the particle distribution is solved, and the predicted transport agrees with simulations in every applicable regime. When a field error of the form δφ(r, θ, z) = ε(r) e^{i(mθ - kz)} is applied to an infinite plasma column, the transport rates fall into the usual banana, plateau and fluid regimes. When the particles are axially confined by applied trap fields, the same three regimes occur. When an added "squeeze" potential produces a separatrix in the axial motion, the transport is enhanced, scaling roughly as (ν/B)^{1/2} δ^2 when ν < ω. For ω < ν < ω_B (where ω, ν and ω_B are the rotation, collision and axial bounce frequencies) there is also a 1/ν regime similar to that predicted for ripple-enhanced transport (Mynick, 2006).
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
Tiffany, T O; Thayer, P C; Coelho, C M; Manning, G B
1976-09-01
We present a total system error evaluation of random error, based on a propagation-of-error analysis of the expression for the calculation of enzyme activity. A simple expression is derived that contains terms for photometric error, timing uncertainty, temperature-control error, sample and reagent volume errors, and pathlength error. This error expression was developed in general to provide a simple means of evaluating the magnitude of random error in an analytical system, and in particular to provide an error evaluation protocol for the assessment of the error components in a prototype Miniature Centrifugal Analyzer system. Individual system components of error are measured. These measured error components are combined in the error expression to predict performance. Enzyme activity measurements are made to correlate with the projected error data. In conclusion, it is demonstrated that this is one method for permitting the clinical chemist and the instrument manufacturer to establish reasonable error limits. PMID:954193
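The propagation-of-error combination described above can be sketched as follows; the component relative SDs below are illustrative assumptions, not the paper's measurements:

```python
# Sketch: for an activity computed as a product/quotient of independent
# terms, the relative variances add, so the total relative SD is the
# root-sum-square of the component relative SDs.
import math

components = {                 # assumed relative SD of each error source
    "photometric": 0.005,
    "timing": 0.002,
    "temperature": 0.004,
    "volumes": 0.006,
    "pathlength": 0.001,
}

total_rel_sd = math.sqrt(sum(v * v for v in components.values()))
# e.g. a 100 U/L activity then carries an SD of 100 * total_rel_sd.
```

Measuring each component separately and combining them this way is what lets the predicted performance be compared against observed enzyme activity scatter.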
Zandbergen, P.A.; Hart, T.C.; Lenzer, K.E.; Camponovo, M.E.
2012-01-01
The quality of geocoding has received substantial attention in recent years. A synthesis of published studies shows that the positional errors of street geocoding are somewhat unique relative to those of other types of spatial data: 1) the magnitude of error varies strongly across urban-rural gradients; 2) the direction of error is not uniform, but strongly associated with the properties of local street segments; 3) the distribution of errors does not follow a normal distribution, but is highly skewed and characterized by a substantial number of very large error values; and 4) the magnitude of error is spatially autocorrelated and is related to properties of the reference data. This makes it difficult to employ analytic approaches or Monte Carlo simulations for error propagation modeling because these rely on generalized statistical characteristics. The current paper describes an alternative empirical approach to error propagation modeling for geocoded data and illustrates its implementation using three different case-studies of geocoded individual-level datasets. The first case-study consists of determining the land cover categories associated with geocoded addresses using a point-in-raster overlay. The second case-study consists of a local hotspot characterization using kernel density analysis of geocoded addresses. The third case-study consists of a spatial data aggregation using enumeration areas of varying spatial resolution. For each case-study a high quality reference scenario based on address points forms the basis for the analysis, which is then compared to the result of various street geocoding techniques. Results show that the unique nature of the positional error of street geocoding introduces substantial noise in the result of spatial analysis, including a substantial amount of bias for some analysis scenarios. This confirms findings from earlier studies, but expands these to a wider range of analytical techniques. PMID:22469492
Simulation of MAD Cow Disease Propagation
NASA Astrophysics Data System (ADS)
Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.
Computer simulation of the dynamics of BSE disease is presented. Both vertical (to offspring) and horizontal (to neighbors) mechanisms of disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000 with the initial population randomly distributed on the net. The disease may be introduced either with the initial population or by spontaneous development of BSE in an individual, at a small frequency. The main results show a critical probability of BSE transmission above which the disease is present in the population. This value is sensitive to possible spatial clustering of the population and also depends on the mechanism responsible for the disease onset, evolution and propagation. A threshold birth rate below which the population goes extinct is seen. Above this threshold the population is disease-free at equilibrium until another birth rate value is reached at which the disease becomes present in the population. For the typical model parameters used in the simulation, which may correspond to the mad cow disease, we are close to the BSE-free case.
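A lattice game of this general kind can be sketched on a much smaller grid. The spread and spontaneous-onset probabilities below are illustrative placeholders, not the paper's parameters, and the boundaries are taken as periodic:

```python
import random

def simulate_bse(n=50, steps=40, p_spread=0.3, p_spont=0.001, seed=1):
    """Minimal sketch of horizontal disease spread on an n x n lattice
    with periodic boundaries; one initially sick individual at the
    center, plus spontaneous onset at a small frequency."""
    random.seed(seed)
    sick = [[False] * n for _ in range(n)]
    sick[n // 2][n // 2] = True
    for _ in range(steps):
        new = [row[:] for row in sick]
        for i in range(n):
            for j in range(n):
                if sick[i][j]:
                    # horizontal transmission to the four neighbors
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ii, jj = (i + di) % n, (j + dj) % n
                        if not sick[ii][jj] and random.random() < p_spread:
                            new[ii][jj] = True
                elif random.random() < p_spont:
                    new[i][j] = True  # spontaneous onset
        sick = new
    return sum(map(sum, sick))  # number of sick individuals

infected = simulate_bse()
```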
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control-theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight handling-quality simulation.
Channel Error Propagation In Predictor Adaptive Differential Pulse Code Modulation (DPCM) Coders
NASA Astrophysics Data System (ADS)
Devarajan, Venkat; Rao, K. R.
1980-11-01
New adaptive differential pulse code modulation (ADPCM) coders with adaptive prediction are proposed and compared with existing non-adaptive DPCM coders for processing composite National Television System Committee (NTSC) television signals. Comparisons are based on quantitative criteria as well as subjective evaluation of the processed still frames. The performance of the proposed predictors is shown to be independent of well-designed quantizers and better than existing predictors in such critical regions of the pictures as edges and contours. Test data consist of four color images with varying levels of activity, color and detail. The adaptive predictors, however, are sensitive to channel errors. Propagation of transmission noise is dependent on the type of prediction and on the location of the noise, i.e., whether in a uniform region or in an active region. The transmission error propagation for different predictors is investigated. By introducing leak in the predictor output and/or predictor function it is shown that this propagation can be significantly reduced. The combination predictors not only attenuate and/or terminate the channel error propagation but also improve predictor performance based on quantitative evaluation such as essential peak value and mean square error between the original and reconstructed images.
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
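The Taylor-series model behind such error equations can be sketched numerically: estimate each sensitivity coefficient by a finite difference and combine the independent input uncertainties in quadrature. The pressure-coefficient example and all numbers below are hypothetical illustrations, not the report's derived equations:

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order Taylor-series error propagation: numerically estimate
    the sensitivity coefficients (partial derivatives) of f and combine
    independent input uncertainties in quadrature."""
    f0 = f(*values)
    var = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        step = h * max(abs(v), 1.0)
        pert = list(values)
        pert[i] = v + step
        dfdx = (f(*pert) - f0) / step  # sensitivity coefficient
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# Illustrative example: pressure coefficient Cp = (p - p_inf) / q_inf with
# q_inf = 0.7 * p_inf * M**2 (perfect-gas dynamic pressure). The pressures,
# Mach number, and uncertainties are made up for demonstration.
def cp(p, p_inf, mach):
    return (p - p_inf) / (0.7 * p_inf * mach ** 2)

sigma_cp = propagate(cp, (120.0, 100.0, 2.0), (0.5, 0.5, 0.01))
```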
Simulation of sound propagation over porous barriers of arbitrary shapes.
Ke, Guoyi; Zheng, Z C
2015-01-01
A time-domain solver using an immersed boundary method is investigated for simulating sound propagation over porous and rigid barriers of arbitrary shapes. In this study, acoustic propagation in the air from an impulse source over the ground is considered as a model problem. The linearized Euler equations are solved for sound propagation in the air and the Zwikker-Kosten equations for propagation in barriers as well as in the ground. In comparison to the analytical solutions, the numerical scheme is validated for the cases of a single rigid barrier with different shapes and for two rigid triangular barriers. Sound propagations around barriers with different porous materials are then simulated and discussed. The results show that the simulation is able to capture the sound propagation behaviors accurately around both rigid and porous barriers. PMID:25618061
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
Spatio-temporal precipitation error propagation in runoff modelling: a case study in central Sweden
NASA Astrophysics Data System (ADS)
Olsson, J.
2006-07-01
The propagation of spatio-temporal errors in precipitation estimates to runoff errors in the output from the conceptual hydrological HBV model was investigated. The study region was the Gimån catchment in central Sweden, and the study period the year 2002. Five precipitation sources were considered: an NWP model (H22), weather radar (RAD), precipitation gauges (PTH), and two versions of a mesoscale analysis system (M11, M22). The mesoscale climate analysis M11 was used as the baseline estimate of precipitation and runoff, against which seasonal precipitation and runoff biases were defined. The main precipitation biases were a systematic overestimation of precipitation by H22, in particular during winter and early spring, and a pronounced local overestimation by RAD during autumn in the western part of the catchment. These overestimations in some cases exceeded 50% in terms of seasonal subcatchment relative accumulated volume bias, but generally the bias was within ±20%. The precipitation data from the different sources were used to drive the HBV model, set up and calibrated for two stations in Gimån, both for continuous simulation during 2002 and for forecasting of the spring flood peak. In summer, autumn and winter all sources agreed well. In spring, H22 overestimated the accumulated runoff volume by ~50% and peak discharge by almost 100%, owing to both overestimated snow depth and precipitation during the spring flood. PTH overestimated spring runoff volumes by ~15% owing to overestimated winter precipitation. The results demonstrate how biases in precipitation estimates may exhibit substantial space-time variability, and may become either magnified or reduced when applied for hydrological purposes, depending on both temporal and spatial variations in the catchment. Thus, the uncertainty in precipitation estimates should preferably be specified as a function of both time and space.
Effects of Error Experience When Learning to Simulate Hypernasality
ERIC Educational Resources Information Center
Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.
2013-01-01
Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theory and data analysis of which were documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
Saarelma, Jukka; Botts, Jonathan; Hamilton, Brian; Savioja, Lauri
2016-04-01
Finite-difference time-domain (FDTD) simulation has been a popular area of research in room acoustics due to its capability to simulate wave phenomena in a wide bandwidth directly in the time-domain. A downside of the method is that it introduces a direction and frequency dependent error to the simulated sound field due to the non-linear dispersion relation of the discrete system. In this study, the perceptual threshold of the dispersion error is measured in three-dimensional FDTD schemes as a function of simulation distance. Dispersion error is evaluated for three different explicit, non-staggered FDTD schemes using the numerical wavenumber in the direction of the worst-case error of each scheme. It is found that the thresholds for the different schemes do not vary significantly when the phase velocity error level is fixed. The thresholds are found to vary significantly between the different sound samples. The measured threshold for the audibility of dispersion error at the probability level of 82% correct discrimination for three-alternative forced choice is found to be 9.1 m of propagation in a free field, that leads to a maximum group delay error of 1.8 ms at 20 kHz with the chosen phase velocity error level of 2%. PMID:27106330
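The axial (worst-case for this scheme) phase velocity error of the standard seven-point leapfrog FDTD scheme follows directly from its dispersion relation, sin²(ωΔt/2) = λ² sin²(kΔx/2). A sketch, assuming the usual 3-D Courant limit λ = 1/√3; the grid spacing and sound speed are illustrative choices, not the study's simulation settings:

```python
import math

def slf_phase_velocity_ratio(f, c=343.0, dx=0.01):
    """Relative phase velocity (v_numerical / c) of the standard seven-point
    leapfrog scheme along an axial direction, at the 3-D Courant limit
    lambda = 1/sqrt(3), from sin^2(w*dt/2) = lambda^2 * sin^2(k*dx/2)."""
    lam = 1.0 / math.sqrt(3.0)
    dt = lam * dx / c
    w = 2.0 * math.pi * f
    s = math.sin(w * dt / 2.0) / lam       # sin(k*dx/2) of the numerical wave
    k_num = (2.0 / dx) * math.asin(s)      # numerical wavenumber
    return (w / c) / k_num                 # k_exact / k_num = v_num / c

ratio_5k = slf_phase_velocity_ratio(5000.0)  # < 1: numerical wave is slower
```

At low frequency the ratio approaches one; it drops as the wavelength nears the grid resolution, which is the dispersion error whose audibility the study measures.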
Atomic simulation of fatigue crack propagation in Ni3Al
NASA Astrophysics Data System (ADS)
Ma, Lei; Xiao, Shifang; Deng, Huiqiu; Hu, Wangyu
2015-03-01
The fatigue crack propagation behavior of Ni3Al was studied using molecular dynamics simulation at room temperature. The simulation results showed that the deformation mechanisms and the crack propagation path were significantly influenced by the orientation of the initial crack. The formation of slip bands around the crack tip was investigated for various crack orientations, indicating that the slip bands were able to hinder the initiation and propagation of cracks. In addition, the crack growth rate was calculated using the Paris equation, and the results revealed that the crack growth rate increased with increasing stress intensity factor range.
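The Paris equation referenced above relates crack growth per cycle to the stress intensity factor range, da/dN = C (ΔK)^m. A one-line sketch; the coefficients below are placeholders, not values fitted to the Ni3Al simulations:

```python
def paris_growth_rate(delta_k, c=1e-12, m=3.0):
    """Paris law: crack growth per load cycle, da/dN = C * (delta_K)**m.
    The coefficients c and m here are illustrative, not fitted values."""
    return c * delta_k ** m
```

With m = 3, doubling ΔK increases the growth rate eightfold, which is the qualitative trend the abstract reports.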
Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents
Porter, Evelyn; Barcega, Besh; Kim, Tommy Y.
2014-01-01
Introduction The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of a prescribing medication error. We compared univariate analysis of each variable to medication error rate and performed a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. Results We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7% – 39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping greater than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio of 0.09 (95% CI 0.01 – 0.64). Conclusion Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees. PMID:25035756
Error-Based Simulation for Error-Awareness in Learning Mechanics: An Evaluation
ERIC Educational Resources Information Center
Horiguchi, Tomoya; Imai, Isao; Toumoto, Takahito; Hirashima, Tsukasa
2014-01-01
Error-based simulation (EBS) has been developed to generate phenomena by using students' erroneous ideas and also offers promise for promoting students' awareness of errors. In this paper, we report the evaluation of EBS used in learning "normal reaction" in a junior high school. An EBS class, where students learned the concept…
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
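The error dynamics described can be illustrated with a one-dimensional stand-in for the pressure Poisson equation: solve it twice, once with a clean source term and once with Gaussian noise added (mimicking PIV measurement error), and compare the solutions. The grid size, noise level, and plain Jacobi solver are all illustrative choices, not the paper's analysis:

```python
import math
import random

def solve_poisson_1d(f, iters=6000):
    """Jacobi iteration for p'' = f on (0, 1) with p(0) = p(1) = 0,
    on a uniform interior grid of len(f) points."""
    n = len(f)
    h = 1.0 / (n + 1)
    p = [0.0] * n
    for _ in range(iters):
        p = [0.5 * ((p[i - 1] if i > 0 else 0.0) +
                    (p[i + 1] if i < n - 1 else 0.0) - h * h * f[i])
             for i in range(n)]
    return p

random.seed(0)
n = 29
f_true = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
f_noisy = [v + random.gauss(0.0, 0.05) for v in f_true]  # noisy source term
p_true = solve_poisson_1d(f_true)
p_noisy = solve_poisson_1d(f_noisy)
err = max(abs(a - b) for a, b in zip(p_true, p_noisy))
```

The propagated pressure error stays bounded by the source-term noise times a constant depending on the domain (for the unit interval, at most max|noise|/8), echoing the abstract's point that the error bound depends on domain dimensions and boundary conditions.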
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
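A minimal sketch of this kind of simulation: place events of fixed duration at random within an observation period, then compare the true fraction of time occupied with the estimate from momentary time sampling (observing only at the end of each interval). All parameters below are illustrative:

```python
import random

def simulate_sampling(period=600.0, interval=10.0, event_dur=3.0,
                      n_events=20, seed=2):
    """Return (true fraction, momentary-time-sampling estimate, error)
    for randomly placed events of fixed duration."""
    random.seed(seed)
    starts = sorted(random.uniform(0, period - event_dur)
                    for _ in range(n_events))
    events = [(s, s + event_dur) for s in starts]

    def occurring(t):
        return any(a <= t < b for a, b in events)

    # Union length of (possibly overlapping) events = true cumulative time.
    true_time, end = 0.0, 0.0
    for a, b in events:
        if a > end:
            true_time += b - a
            end = b
        elif b > end:
            true_time += b - end
            end = b
    true_frac = true_time / period

    # Momentary time sampling: observe only at the end of each interval.
    checks = [occurring(k * interval)
              for k in range(1, int(period / interval) + 1)]
    mts_frac = sum(checks) / len(checks)
    return true_frac, mts_frac, mts_frac - true_frac
```

Repeating this over many seeds and parameter combinations yields the kind of error tables the study reports; partial- and whole-interval recording could be sketched analogously.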
An efficient error-propagation-based reduction method for large chemical kinetic mechanisms
Pepiot-Desjardins, P.; Pitsch, H.
2008-07-15
Production rates obtained from a detailed chemical mechanism are analyzed in order to quantify the coupling between the various species and reactions involved. These interactions can be represented by a directed relation graph. A geometric error propagation strategy applied to this graph accurately identifies the dependencies of specified targets and creates a set of increasingly simplified kinetic schemes containing only the chemical paths deemed the most important for the targets. An integrity check is performed concurrently with the reduction process to avoid truncated chemical paths and mass accumulation in intermediate species. The quality of a given skeletal model is assessed through the magnitude of the errors introduced in the target predictions. The applied error evaluation is variable-dependent and unambiguous for unsteady problems. The technique yields overall monotonically increasing errors, and the smallest skeletal mechanism that satisfies a user-defined error tolerance over a selected domain of applicability is readily obtained. An additional module based on life-time analysis identifies a set of species that can be modeled accurately by quasi-steady state relations. An application of the reduction procedure is presented for autoignition using a large iso-octane mechanism. The whole process is automatic, is fast, has moderate CPU and memory requirements, and compares favorably to other existing techniques. (author)
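The geometric error propagation on a directed relation graph can be sketched as a best-path search: edge coefficients multiply along each path from a target, and species whose propagated coefficient to every target falls below a tolerance are dropped. The toy mechanism and coefficients below are made up for illustration:

```python
def reachable_error(graph, targets, tol):
    """Directed-relation-graph reduction with geometric (multiplicative)
    error propagation: keep species whose best path-product coefficient
    from any target is at least `tol`."""
    keep = set()
    for t in targets:
        best = {t: 1.0}
        stack = [t]
        while stack:
            a = stack.pop()
            for b, r in graph.get(a, {}).items():
                val = best[a] * r  # product of coefficients along the path
                if val > best.get(b, 0.0):
                    best[b] = val
                    stack.append(b)
        keep |= {s for s, v in best.items() if v >= tol}
    return keep

# Toy mechanism: direct interaction coefficients between species
# (names and values are invented for this sketch).
g = {"fuel": {"O2": 0.9, "rad": 0.5},
     "rad": {"prod": 0.4},
     "O2": {"prod": 0.1}}
kept = reachable_error(g, ["fuel"], tol=0.2)
```

Sweeping the tolerance upward produces the set of increasingly simplified skeletal mechanisms the abstract describes.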
NASA Technical Reports Server (NTRS)
James, R.; Brownlow, J. D.
1985-01-01
A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.
Simulation of Radar Rainfall Fields: A Random Error Model
NASA Astrophysics Data System (ADS)
Aghakouchak, A.; Habib, E.; Bardossy, A.
2008-12-01
Precipitation is a major input in hydrological and meteorological models. It is believed that uncertainties in the input data will propagate into modeled hydrologic processes. Stochastically generated rainfall data are used as input to hydrological and meteorological models to assess model uncertainties and climate variability in water resources systems. The superposition of random errors from different sources is one of the main factors in the uncertainty of radar estimates. One way to express these uncertainties is to stochastically generate random error fields and impose them on radar measurements in order to obtain an ensemble of radar rainfall estimates. In the method introduced here, the random error consists of two components: a purely random component and a component dependent on the indicator variable. Parameters of the error model are estimated using a heteroscedastic maximum likelihood model in order to account for variance heterogeneity in radar rainfall error estimates. When reflectivity values are considered, the exponent and multiplicative factor of the Z-R relationship are estimated simultaneously with the model parameters. The presented model performs better than previous approaches, which generally result in unaccounted heteroscedasticity in the error fields and thus in the radar ensemble.
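A two-component error structure of this general kind can be sketched by imposing a multiplicative (value-dependent) and an additive (purely random) perturbation on an observed field to generate an ensemble. The error magnitudes below are illustrative, not estimates from the abstract's heteroscedastic maximum likelihood model:

```python
import random

def rainfall_ensemble(radar, n_members=10, sigma_add=0.1,
                      sigma_mult=0.2, seed=3):
    """Generate an ensemble of plausible rainfall fields by imposing a
    value-dependent (multiplicative) and a purely random (additive)
    Gaussian error on each radar value; negatives are clipped to zero."""
    random.seed(seed)
    members = []
    for _ in range(n_members):
        field = [max(0.0, r * (1.0 + random.gauss(0.0, sigma_mult))
                       + random.gauss(0.0, sigma_add))
                 for r in radar]
        members.append(field)
    return members

obs = [0.0, 1.2, 5.4, 2.0, 0.3]  # hypothetical radar rainfall values (mm/h)
ens = rainfall_ensemble(obs)
```

The spread across ensemble members at each point then expresses the radar estimate's uncertainty, larger where the observed value is larger.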
Systematic error analysis of rotating coil using computer simulation
Li, Wei-chuan; Coles, M.
1993-04-01
This report describes a study of the systematic and random measurement uncertainties of magnetic multipoles which are due to construction errors, rotational speed variation, and electronic noise in a digitally bucked tangential coil assembly with dipole bucking windings. The sensitivities of the systematic multipole uncertainty to construction errors are estimated analytically and using a computer simulation program.
Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C
2014-03-01
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
NASA Astrophysics Data System (ADS)
Sarkar, B.; Bhunia, C. T.; Maulik, U.
2012-06-01
Advanced encryption standard (AES) is a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available. One is the redundancy-based technique and the other is the bit-based parity technique. The first has the significant advantage over the second of correcting any error with certainty, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that would speed up the process of reliable encryption and hence of secured communication.
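A redundancy-based technique of the general kind discussed can be sketched as transmitting each ciphertext block several times and taking a bytewise majority vote before decryption, so that a channel error confined to one copy cannot propagate into the decrypted stream. This is a generic illustration, not the authors' modified scheme:

```python
def majority_vote(copies):
    """Bytewise majority vote over three received copies of a ciphertext
    block: each output bit is the majority of the three corresponding
    input bits, so a single corrupted copy is corrected."""
    a, b, c = copies
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

block = bytes(range(16))                          # one 16-byte AES block
damaged = bytes([block[0] ^ 0xFF]) + block[1:]    # one copy hit by the channel
recovered = majority_vote([block, damaged, block])
```

The trade-off the abstract highlights is visible here: triple redundancy guarantees correction of any single-copy error but triples the transmission overhead.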
Digital simulation of continuous error models with application to an instrument landing system error
NASA Technical Reports Server (NTRS)
Merrick, R. B.; Smith, G. L.
1972-01-01
A digital simulation of the continuous error of the localizer beam of a conventional instrument landing system is discussed. The digital simulation was developed during the analysis of space shuttle navigation capabilities. A discrete mathematical model for use on a digital computer is described. The model generates an output random sequence which is equivalent, for simulation purposes, to the desired random process. The model is a system of difference equations driven by a zero-mean Gaussian random sequence.
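A difference equation driven by a zero-mean Gaussian random sequence, as described, is in its simplest form a discrete first-order Gauss-Markov process. A sketch with an illustrative time constant and noise level (not the instrument-landing-system error parameters):

```python
import math
import random

def gauss_markov(n, tau=2.0, dt=0.1, sigma=1.0, seed=4):
    """Discrete first-order Gauss-Markov sequence x[k+1] = phi*x[k] + w[k],
    with w a zero-mean Gaussian sequence scaled so that the steady-state
    standard deviation of x equals `sigma`."""
    random.seed(seed)
    phi = math.exp(-dt / tau)                # one-step correlation
    q = sigma * math.sqrt(1.0 - phi ** 2)    # driving-noise std
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, q)
        out.append(x)
    return out

seq = gauss_markov(5000)
```

Higher-order error spectra are handled the same way, with a system of such difference equations in place of the single recursion.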
The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre-and Post-Test Designs
ERIC Educational Resources Information Center
Gorard, Stephen
2013-01-01
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre-and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
Monte Carlo Simulations of Light Propagation in Apples
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...
A simulation of high energy cosmic ray propagation 2
NASA Technical Reports Server (NTRS)
Honda, M.; Kamata, K.; Kifune, T.; Matsubara, Y.; Mori, M.; Nishijima, K.
1985-01-01
Cosmic ray propagation in the Galactic arm is simulated. The Galactic magnetic fields are known to follow the so-called Galactic arms as their main structure, with turbulence on a scale of about 30 pc. The distribution of cosmic rays in the Galactic arm is studied. The escape time and the possible anisotropies caused by the arm structure are discussed.
Abundance recovery error analysis using simulated AVIRIS data
NASA Technical Reports Server (NTRS)
Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.
1992-01-01
Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed-form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario which is modeled as being derived from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
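The contrast between the two approaches can be sketched for a linear mixing model, where least-squares abundance errors have the closed-form covariance sigma^2 (E^T E)^(-1). The endmember spectra and noise level below are random illustrations, not AVIRIS data:

```python
import numpy as np

# Closed-form covariance analysis vs Monte Carlo for linear unmixing.
rng = np.random.default_rng(0)
bands, members = 50, 3
E = rng.uniform(0.1, 1.0, size=(bands, members))   # endmember spectra
a_true = np.array([0.5, 0.3, 0.2])                 # true abundances
sigma = 0.01                                       # measurement noise std

# Covariance analysis: Cov(a_hat) = sigma^2 (E^T E)^-1, no trials needed.
cov_closed = sigma**2 * np.linalg.inv(E.T @ E)

# Monte Carlo: thousands of noisy trials estimate the same covariance.
trials = 5000
est = np.empty((trials, members))
for k in range(trials):
    m = E @ a_true + rng.normal(0.0, sigma, bands)  # noisy spectrum
    est[k] = np.linalg.lstsq(E, m, rcond=None)[0]
cov_mc = np.cov(est.T)
```

The two covariance estimates agree, but the closed form costs one matrix inversion instead of thousands of unmixing runs.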
Propagation of radar rainfall uncertainty in urban flood simulations
NASA Astrophysics Data System (ADS)
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatio-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure to purely stochastic fields. A
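The core of such an ensemble generator, imposing a prescribed correlation structure on purely stochastic fields, can be sketched in one dimension with a Cholesky factor of an assumed exponential covariance model. All parameters below are illustrative; the REAL-style generator calibrates its correlation model from radar-gauge residuals:

```python
import numpy as np

# Ensemble perturbation sketch: white noise acquires an imposed spatial
# correlation structure via the Cholesky factor of a covariance model.
rng = np.random.default_rng(42)
n, corr_len = 20, 5.0                 # grid points, correlation length
x = np.arange(n, dtype=float)
dist = np.abs(x[:, None] - x[None, :])
C = np.exp(-dist / corr_len)          # assumed exponential correlation
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # jitter for stability

radar_field = np.full(n, 2.0)         # unperturbed rain rate, mm/h
# One ensemble member = radar field + correlated perturbation field.
member = radar_field + L @ rng.standard_normal(n)
```

Because L @ L.T reproduces C, the perturbations have exactly the imposed covariance in expectation; temporal correlation is imposed the same way along the time axis.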
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues considerably reduce the effective transmission data rate (throughput) in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error performance in wireless channels. This mechanism eliminates the drawbacks stated above by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading channels and compared to those of the conventional advanced encryption standard (AES).
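A minimal Hamming(7,4) encoder/decoder, the kind of code the scheme applies to the small encrypted portion, can be sketched as follows. The bit ordering (parity bits at positions 1, 2 and 4) is one common convention, not necessarily the paper's:

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword, corrects any 1-bit error.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 = clean
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]   # recover the 4 data bits
```

Applying such a code only to the encrypted fraction of the frame adds redundancy proportional to that fraction, which is why the overall bit-rate penalty stays small.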
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
environmental control. We simulate the performance of the stitching algorithm dealing with surface error and misalignment of the ACF, and noise suppression, which provides guidelines to optomechanical design of the stitching test system.
Disentangling timing and amplitude errors in streamflow simulations
NASA Astrophysics Data System (ADS)
Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin
2016-09-01
This article introduces an improvement of the Series Distance (SD) approach for better discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow from periods of rise and recession in hydrological events. Within these periods, it determines the distance between two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs; a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart; and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and for the rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.
Simulation of nonlinear ultrasound wave propagation in Fourier domain
NASA Astrophysics Data System (ADS)
Varray, F.; Basset, O.; Cachard, C.
2015-10-01
Nonlinear ultrasound field distortion occurs in all biological media and remains of great interest in all nonlinear imaging strategies, such as harmonic or contrast agent imaging. Among the various methods that compute this propagation, the angular spectrum method is the fastest in terms of computation time, but its harmonics calculation is less accurate than that of other strategies. In this work, a new formulation based on a slowly varying envelope approximation is proposed to evaluate the full nonlinear spectrum distortion during propagation. This tool is compared to a previously published angular method; the resulting pressure fields are very close, with a maximum error below 2 dB, which validates the proposed strategy. In terms of computation time, the proposed tool is as fast as the previous one, but computes the full spectrum at once.
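The linear backbone of angular spectrum methods, advancing a lateral field profile in the Fourier domain with the exact diffraction kernel, can be sketched as below. Nonlinear methods like the one above add harmonic-generation terms on top of this step; the frequency, medium, and source parameters here are illustrative:

```python
import numpy as np

# Linear angular-spectrum propagation of a 1-D lateral pressure profile.
n, dx = 256, 1e-4             # lateral samples and spacing [m]
c0, f = 1500.0, 2e6           # sound speed [m/s], frequency [Hz]
k = 2 * np.pi * f / c0        # wavenumber
kx = 2 * np.pi * np.fft.fftfreq(n, dx)
# Complex sqrt: propagating components get real kz, evanescent ones decay.
kz = np.sqrt((k**2 - kx**2).astype(complex))

def propagate(field, dz):
    """Advance the profile by dz along the propagation axis."""
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))

x = (np.arange(n) - n / 2) * dx
src = np.exp(-((x / 2e-3) ** 2))   # Gaussian source profile, 2 mm wide
out = propagate(src, 0.01)         # one 1 cm step
```

For a smooth source with negligible evanescent content the step conserves energy exactly, which is a useful sanity check on any angular-spectrum implementation.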
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances
NASA Astrophysics Data System (ADS)
Vermeesch, P.
2015-12-01
Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method: 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J[40Ar/39Ar - 298.56 × 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ²(t) = [J²σ²(R) + R²σ²(J)] / [λ(1 + RJ)]², which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single and multi collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking
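The effect of the missing covariance term can be sketched numerically. For t = log(1 + JR)/λ, first-order propagation with the cross term reads σ²(t) = (∂t/∂R)²σ²(R) + (∂t/∂J)²σ²(J) + 2(∂t/∂R)(∂t/∂J)cov(R,J). All numerical values below are illustrative, not measured data:

```python
import math

# Age and uncertainty for t = ln(1 + J*R) / lam, including the R-J
# covariance term that conventional protocols drop (values illustrative).
lam = 5.543e-10          # 40K decay constant [1/yr]
J, sig_J = 0.01, 1e-5    # irradiation parameter and its uncertainty
R, sig_R = 10.0, 0.05    # 40Ar*/39Ar ratio and its uncertainty
cov_RJ = 0.8 * sig_R * sig_J   # assumed correlation of 0.8

t = math.log(1.0 + J * R) / lam
dt_dR = J / (lam * (1.0 + J * R))
dt_dJ = R / (lam * (1.0 + J * R))
var_t = (dt_dR**2 * sig_R**2 + dt_dJ**2 * sig_J**2
         + 2.0 * dt_dR * dt_dJ * cov_RJ)
sig_t = math.sqrt(var_t)
```

With a positive correlation the full uncertainty is noticeably larger than the zero-covariance formula predicts, which is exactly the bias Ar-Ar_Redux's logratio bookkeeping avoids.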
Statistical error in particle simulations of low Mach number flows
Hadjiconstantinou, N G; Garcia, A L
2000-11-13
We present predictions for the statistical error due to finite sampling in the presence of thermal fluctuations in molecular simulation algorithms. The expressions are derived using equilibrium statistical mechanics. The results show that the number of samples needed to adequately resolve the flowfield scales as the inverse square of the Mach number. Agreement of the theory with direct Monte Carlo simulations shows that the use of equilibrium theory is justified.
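The inverse-square Mach number scaling can be sketched directly: with thermal fluctuations of fixed magnitude, the relative error of a sample-mean velocity estimate goes as 1/(Ma √N), so the sample count for a fixed relative error goes as 1/Ma². Units here are normalized so the thermal speed is 1; the constant factors of the paper's equilibrium-statistical-mechanics result are omitted:

```python
import math
import random

def samples_needed(mach, rel_err):
    # N ~ 1 / (Ma * E)^2: thermal noise of fixed (unit) magnitude vs a
    # mean flow velocity that shrinks with the Mach number.
    return math.ceil(1.0 / (mach * rel_err) ** 2)

n_fast = samples_needed(0.2, 0.05)   # Ma = 0.2, 5% relative error target
n_slow = samples_needed(0.1, 0.05)   # halving Ma quadruples the cost

# Empirical check: estimate the mean flow from thermally noisy samples.
rng = random.Random(0)
mach = 0.1
n = samples_needed(mach, 0.05)
mean_u = sum(mach + rng.gauss(0.0, 1.0) for _ in range(n)) / n
```

This is why low Mach number (nearly incompressible) flows are so expensive for particle methods such as DSMC: the signal shrinks while the thermal noise floor does not.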
Discreteness noise versus force errors in N-body simulations
NASA Technical Reports Server (NTRS)
Hernquist, Lars; Hut, Piet; Makino, Jun
1993-01-01
A low accuracy in the force calculation per time step of a few percent for each particle pair is sufficient for collisionless N-body simulations. Higher accuracy is made meaningless by the dominant discreteness noise in the form of two-body relaxation, which can be reduced only by increasing the number of particles. Since an N-body simulation is a Monte Carlo procedure in which each particle-particle force is essentially random, i.e., carries an error of about 1000 percent, the only requirement is a systematic averaging-out of these intrinsic errors. We illustrate these assertions with two specific examples in which individual pairwise forces are deliberately allowed to carry significant errors: tree-codes on supercomputers and algorithms on special-purpose machines with low-precision hardware.
Propagation of radiation in fluctuating multiscale plasmas. II. Kinetic simulations
Pal Singh, Kunwar; Robinson, P. A.; Cairns, Iver H.; Tyshetskiy, Yu.
2012-11-15
A numerical algorithm is developed and tested that implements the kinetic treatment of electromagnetic radiation propagating through plasmas whose properties have small scale fluctuations, which was developed in a companion paper. This method incorporates the effects of refraction, damping, mode structure, and other aspects of large-scale propagation of electromagnetic waves on the distribution function of quanta in position and wave vector, with small-scale effects of nonuniformities, including scattering and mode conversion approximated as causing drift and diffusion in wave vector. Numerical solution of the kinetic equation yields the distribution function of radiation quanta in space, time, and wave vector. Simulations verify the convergence, accuracy, and speed of the methods used to treat each term in the equation. The simulations also illustrate the main physical effects and place the results in a form that can be used in future applications.
Communication Systems Simulator with Error Correcting Codes Using MATLAB
ERIC Educational Resources Information Center
Gomez, C.; Gonzalez, J. E.; Pardo, J. M.
2003-01-01
In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2015-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
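The staged propagation scheme, feeding the output distribution of one stage in as the input distribution of the next, can be sketched with toy stand-ins for the pipeline stages. The stage functions, error magnitudes, and sample count below are illustrative, not the OpenSim models of the study:

```python
import random

# Staged Monte Carlo: stage-1 output samples drive stage 2, so input
# uncertainty propagates through the whole pipeline.
rng = random.Random(7)
N = 20_000

def stage1(marker_error):          # toy "inverse kinematics"
    return 30.0 + marker_error     # joint angle, degrees

def stage2(angle):                 # toy "inverse dynamics"
    return 0.5 * angle             # joint moment, N*m

angles = [stage1(rng.gauss(0.0, 2.0)) for _ in range(N)]
moments = [stage2(a) for a in angles]   # stage-1 distribution feeds stage 2

def bounds(samples, lo=0.05, hi=0.95):
    s = sorted(samples)
    return s[int(lo * len(s))], s[int(hi * len(s))]

lo_m, hi_m = bounds(moments)   # 5-95% confidence bounds on the moment
```

Reading off the 5-95% empirical quantiles at each stage is exactly how per-time-point confidence bounds are assembled across the gait cycle.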
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
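Latin Hypercube Sampling, the sampling scheme REPTool builds on, can be sketched in a few lines: each of n equal strata of [0, 1) receives exactly one sample per variable, and the strata are shuffled independently across variables (REPTool's own implementation details may differ):

```python
import random

# Minimal Latin Hypercube Sampling over the unit hypercube.
def lhs(n, dims, seed=0):
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)          # independent stratum order per variable
        # one uniform draw inside each stratum [s/n, (s+1)/n)
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))          # n points, each with `dims` coordinates

pts = lhs(10, 2)
```

Compared with plain Monte Carlo, the stratification guarantees coverage of each input's full range with far fewer model runs, which is what makes LHS attractive for expensive raster operations.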
Simulation of laser propagation in a turbulent atmosphere.
Frehlich, R
2000-01-20
The split-step Fourier-transform algorithm for numerical simulation of wave propagation in a turbulent atmosphere is refined to correctly include the effects of large-scale phase fluctuations that are important for imaging problems and many beam-wave problems such as focused laser beams and beam spreading. The results of the improved algorithm are similar to the results of the traditional algorithm for the performance of coherent Doppler lidar and for plane-wave intensity statistics because the effects of large-scale turbulence are less important. The series solution for coherent Doppler lidar performance converges slowly to the results from simulation. PMID:18337906
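One cycle of the split-step Fourier method, diffract with the free-space transfer function, then apply a random phase screen, can be sketched as below. The screen uses a simple power-law spectral shaping as a stand-in for a full Kolmogorov model with inner/outer scales, and all parameters are illustrative:

```python
import numpy as np

# One split-step cycle for wave propagation through turbulence.
rng = np.random.default_rng(3)
n, dx, wl = 128, 5e-3, 1.55e-6   # grid, spacing [m], wavelength [m]
fx = np.fft.fftfreq(n, dx)
fx2 = fx[:, None] ** 2 + fx[None, :] ** 2

def phase_screen(strength=0.5):
    # Shape white noise by a power-law spectrum (simplified Kolmogorov),
    # zeroing the f = 0 singularity that large-scale refinements address.
    amp = np.zeros_like(fx2)
    mask = fx2 > 0
    amp[mask] = fx2[mask] ** (-11.0 / 12.0)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(amp * noise).real
    return strength * screen / screen.std()   # scale to target phase rms

def split_step(field, dz):
    kernel = np.exp(-1j * np.pi * wl * dz * fx2)  # Fresnel transfer function
    field = np.fft.ifft2(np.fft.fft2(field) * kernel)
    return field * np.exp(1j * phase_screen())    # turbulence phase kick

u0 = np.ones((n, n), dtype=complex)   # unit-amplitude plane wave
u1 = split_step(u0, 100.0)            # one 100 m propagation step
```

Both the diffraction kernel and the phase screen are unimodular, so each step conserves the total intensity; the refinement the paper describes concerns how the low-frequency (large-scale) part of the screen spectrum is filled in.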
1987-09-30
Version 00 The REFERDOU system can be used to calculate the response function of a NE-213 scintillation detector for energies up to 100 MeV, to interpolate and spread (Gaussian) the response function, and unfold the measured spectrum of neutrons while propagating errors from the response functions to the unfolded spectrum.
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed, including: (1) soft-decision Viterbi decoding; (2) node synchronization for the soft-decision Viterbi decoder; (3) insertion/deletion error programs; (4) a convolutional encoder; (5) programs to investigate new convolutional codes; (6) a pseudo-noise sequence generator; (7) a soft-decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov chain channel modeling; (10) a percent-complete indicator when a program is executed; (11) header documentation; and (12) a help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links, including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE-decompressed data. The Markov chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders, and many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. RFI sources with several duty cycles exist on the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one, which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
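The standard Markov chain model for a channel with memory is the two-state Gilbert-Elliott model, which can be sketched as follows. The transition and per-state error probabilities below are illustrative, not CLEAN's calibrated values:

```python
import random

# Two-state Gilbert-Elliott channel: a "good" state with a low error
# rate and a "bad" (burst) state with a high one, switched by a
# Markov chain (all probabilities are illustrative).
def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.1,
                    err_good=1e-4, err_bad=0.2, seed=5):
    rng = random.Random(seed)
    state, errors = "good", []
    for _ in range(n_bits):
        if state == "good":
            if rng.random() < p_gb:
                state = "bad"
        elif rng.random() < p_bg:
            state = "good"
        p_err = err_good if state == "good" else err_bad
        errors.append(1 if rng.random() < p_err else 0)
    return errors

errs = gilbert_elliott(100_000)
ber = sum(errs) / len(errs)   # errors arrive in bursts, not uniformly
```

The stationary bad-state occupancy is p_gb/(p_gb + p_bg), so the long-run bit-error rate here is about 0.91·err_good + 0.09·err_bad, but concentrated in bursts, which is precisely the behavior memoryless channel models miss.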
Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices
Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee
2015-01-01
In transmitting video data securely over Video Sensor Networks (VSNs), since mobile handheld devices have limited resources in terms of processor clock speed and battery size, it is necessary to develop an efficient method to encrypt video data to meet the increasing demand for secure connections. Selective encryption methods can reduce the amount of computation needed while satisfying high-level security requirements. This is achieved by selecting an important part of the video data and encrypting it. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed new selective encryption method exploits the error propagation property in an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame encryption, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068
Numerical simulation of shock wave propagation in flows
NASA Astrophysics Data System (ADS)
Rénier, Mathieu; Marchiano, Régis; Gaudard, Eric; Gallin, Louis-Jonardan; Coulouvrat, François
2012-09-01
Acoustical shock waves propagate through flows in many situations. The sonic boom produced by a supersonic aircraft influenced by winds, or the so-called Buzz-Saw-Noise produced by turbo-engine fan blades rotating at supersonic speeds, are two examples of this phenomenon. In this work, an original method called FLHOWARD, an acronym for FLow and Heterogeneous One-Way Approximation for Resolution of Diffraction, is presented. It relies on a scalar nonlinear wave equation which takes into account propagation in a privileged direction (one-way approach), together with diffraction, flow, heterogeneous and nonlinear effects. Theoretical comparison of the dispersion relations between this equation and parabolic equations (standard or wide angle) shows that the approach is more precise than the parabolic approach because there is no restriction on the angle of propagation. A numerical procedure based on the standard split-step technique is used: the nonlinear wave equation is split into simpler equations, each solved with an analytical solution where possible and a finite-difference scheme otherwise. The advancement along the propagation direction is done with an implicit scheme. The validity of the numerical procedure is assessed by comparison with analytical solutions of Lilley's equation in waveguides for uniform or shear flows in the linear regime. Attention is paid to the advantages and drawbacks of the method. Finally, the numerical code is used to simulate the propagation of a sonic boom through a piece of atmosphere with flows and heterogeneities. The effects of the various parameters are analysed.
Hybrid simulation of wave propagation in the Io plasma torus
NASA Astrophysics Data System (ADS)
Stauffer, B. H.; Delamere, P. A.; Damiano, P. A.
2015-12-01
The transmission of waves between Jupiter and Io is an excellent case study of magnetosphere/ionosphere (MI) coupling because the power generated by the interaction at Io and the auroral power emitted at Jupiter can be reasonably estimated. Wave formation begins with mass loading as Io passes through the plasma torus. A ring beam distribution of pickup ions and perturbation of the local flow by the conducting satellite generate electromagnetic ion cyclotron waves and Alfven waves. We investigate wave propagation through the torus and to higher latitudes using a hybrid plasma simulation with a physically realistic density gradient, assessing the transmission of Poynting flux and wave dispersion. We also analyze the propagation of kinetic Alfven waves through a density gradient in two dimensions.
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables, and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran
NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0
Computer: PC running LINUX with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program.
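CADNA itself overloads Fortran operators, but the probabilistic idea behind it (CESTAC-style random rounding) can be sketched in a few lines of Python. Everything below is an illustrative toy, not CADNA's API: each operation's result is randomly perturbed at the last-bit level, and the spread across replicate runs estimates how many decimal digits survive round-off.

```python
import math
import random

random.seed(0)

def stochastic_samples(compute, n=8):
    """Run `compute` n times; `perturb` injects a random last-bit-scale
    perturbation after each operation, mimicking random rounding."""
    def perturb(x):
        return x * (1.0 + random.choice((-1.0, 1.0)) * random.random() * 2.0 ** -52)
    return [compute(perturb) for _ in range(n)]

def common_digits(samples):
    """Estimate the number of decimal digits shared by the replicates."""
    mean = sum(samples) / len(samples)
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))
    if std == 0.0:
        return 15.0                     # replicates agree to full precision
    if mean == 0.0:
        return 0.0
    return max(0.0, math.log10(abs(mean) / std))

# Catastrophic cancellation: the true value is 2.0, but almost no digits
# of the floating-point result are significant.
def bad_sum(perturb):
    acc = 0.0
    for x in (1e16, 1.0, -1e16, 1.0):
        acc = perturb(acc + x)
    return acc

digits = common_digits(stochastic_samples(bad_sum))
```

A stochastic type, in essence, carries such replicate values through the whole computation instead of re-running the program.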
Starlight emergence angle error analysis of star simulator
NASA Astrophysics Data System (ADS)
Zhang, Jian; Zhang, Guo-yu
2015-10-01
With the continuous development of key star sensor technologies, the precision of star simulators must be further improved, since it directly affects the accuracy of star sensor laboratory calibration. To improve the accuracy of the star simulator, a theoretical accuracy analysis model needs to be proposed. This model can be established from the ideal imaging model of the star simulator. Analysis of the model shows that the starlight emergence angle deviation is primarily affected by star position deviation, principal point position deviation, focal length deviation, distortion, and object plane tilt. Based on these factors, a comprehensive deviation model is established, and the formulas of each individual deviation model and of the comprehensive model are derived. By analyzing the properties of the individual and comprehensive deviation models, the characteristics of each factor and the weight relationships among them are obtained. From the analysis of the comprehensive deviation model, reasonable design indexes can be specified, taking into account the requirements of the star simulator optical system and the achievable precision of machining and alignment. This error analysis of the starlight emergence angle is therefore significant for determining and demonstrating the design indexes of the star simulator, and for analyzing and compensating its errors, thereby improving its accuracy and establishing a theoretical basis for further improving the starlight angle precision of the star simulator.
Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation
NASA Astrophysics Data System (ADS)
Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla
2014-07-01
Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method, programmed as a MATLAB code called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society for Rock Mechanics (ISRM) scanline field mapping methodology. It then evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass, using a complex recursive method to evaluate the transmission and reflection coefficients for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed the characterization of the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and the rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in rapidly estimating the Young's modulus of the equivalent medium for wave propagation analysis.
A simulation of high energy cosmic ray propagation 1
NASA Technical Reports Server (NTRS)
Honda, M.; Kifune, T.; Matsubara, Y.; Mori, M.; Nishijima, K.; Teshima, M.
1985-01-01
High-energy cosmic-ray propagation in the energy region 10^14.5 - 10^18 eV is simulated in interstellar conditions. In conclusion, the diffusion process in turbulent magnetic fields is classified into several regimes according to the ratio of the gyro-radius to the scale of turbulence. When the ratio is larger than 10^-0.5, an analysis assuming point scattering can be applied, with a mean free path proportional to E^2. However, when the ratio is smaller than 10^-0.5, a more complicated analysis or simulation is needed. Assuming that the turbulence scale of the Galactic magnetic field is 10-30 pc and the mean magnetic field strength is 3 microgauss, the energy of a cosmic ray with a corresponding gyro-radius is about 10^16.5 eV.
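The quoted closing figure can be sanity-checked with the standard gyro-radius formula for an ultrarelativistic proton, r = E/(qBc); this is a back-of-the-envelope sketch with textbook constants, not the paper's simulation:

```python
# Larmor (gyro-)radius of an ultrarelativistic proton: r = E / (q * B * c)
q = 1.602176634e-19    # elementary charge [C]
c = 2.99792458e8       # speed of light [m/s]
pc = 3.0857e16         # parsec [m]

E_eV = 10 ** 16.5      # energy quoted in the abstract
B = 3e-10              # 3 microgauss in tesla

r_m = (E_eV * q) / (q * B * c)   # the charge cancels; kept for clarity
r_pc = r_m / pc                  # ~11 pc, inside the 10-30 pc range
```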
Numerical simulation of premixed flame propagation in a closed tube
NASA Astrophysics Data System (ADS)
Kuzuu, Kazuto; Ishii, Katsuya; Kuwahara, Kunio
1996-08-01
Premixed flame propagation of a methane-air mixture in a closed tube is estimated through a direct numerical simulation of the three-dimensional unsteady Navier-Stokes equations coupled with chemical reaction. In order to deal with a combusting flow, an extended version of the MAC method, which can be applied to a compressible flow with strong density variation, is employed as the numerical method. The chemical reaction is assumed to be an irreversible single-step reaction between methane and oxygen. The chemical species are CH4, O2, N2, CO2, and H2O. In this simulation, we reproduce the formation of a tulip flame in a closed tube during flame propagation. Furthermore, we estimate not only the two-dimensional shape but also the three-dimensional structure of the flame and of the flame-induced vortices, which cannot be observed in experiments. The agreement between the calculated results and the experimental data is satisfactory, and we compare the phenomenon near the side wall with that in the corner of the tube.
Monte Carlo simulation of light propagation in the adult brain
NASA Astrophysics Data System (ADS)
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter
2004-06-01
When near-infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) with a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the light absorption and dispersion coefficients of the material in each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was furthermore increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extra-cerebral contamination are included.
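As a rough illustration of the Monte Carlo machinery involved (not the authors' voxel-based code), the path length that underlies the differential pathlength factor can be estimated with a minimal photon walk: exponential free paths set by a scattering coefficient, and absorption handled as a survival weight. The coefficients below are invented round numbers, not tissue values:

```python
import math
import random

random.seed(2)

def photon_path(mu_s=10.0, mu_a=0.1, w_cut=1e-3, max_steps=5000):
    """Total path length of one photon, tracked until its survival weight
    falls below w_cut. A full MC would also track position and direction;
    this toy only accumulates path length."""
    path, weight = 0.0, 1.0
    for _ in range(max_steps):
        step = -math.log(1.0 - random.random()) / mu_s   # exponential free path
        path += step
        weight *= math.exp(-mu_a * step)                 # attenuate, don't kill
        if weight < w_cut:
            break
    return path

paths = [photon_path() for _ in range(2000)]
mean_path = sum(paths) / len(paths)   # ~ln(1/w_cut)/mu_a, plus step overshoot
```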
Unraveling the uncertainty and error propagation in the vertical flux Martin curve
NASA Astrophysics Data System (ADS)
Olli, Kalle
2015-06-01
Analyzing the vertical particle flux and particle retention in the upper twilight zone has commonly been accomplished by fitting a power function to the data. Measuring the vertical particle flux in the upper twilight zone, where most of the re-mineralization occurs, is a complex endeavor. Here I use field data and simulations to show how uncertainty in the particle flux measurements propagates into the vertical flux attenuation model parameters. Further, I analyze how the number of sampling depths and variations in the vertical sampling locations influence the model performance and parameter stability. The arguments provide a simple framework for optimizing the sampling scheme when vertical flux attenuation profiles are measured in the field, either by using an array of sediment traps or by 234Th methodology. A compromise between effort and quality of results is to sample at least six depths: the upper sampling depth as close to the base of the euphotic layer as feasible, the vertical sampling depths slightly aggregated toward the upper aphotic zone where most of the vertical flux attenuation takes place, and the lower end of the sampling range extending as deep as practicable in the twilight zone.
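The propagation of flux-measurement noise into the power-law (Martin curve) exponent b can be sketched with a small Monte Carlo experiment; the flux profile, noise level, and depths below are illustrative assumptions, with the six depths loosely following the sampling advice above:

```python
import math
import random

random.seed(0)

def fit_martin(depths, fluxes, z0=100.0):
    """Least-squares fit of log F = log F0 - b * log(z/z0) (Martin curve)."""
    xs = [math.log(z / z0) for z in depths]
    ys = [math.log(f) for f in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    b = -slope
    F0 = math.exp(my + b * mx)
    return F0, b

true_b, F0 = 0.86, 100.0                  # invented "true" flux profile
depths = [100, 150, 200, 300, 500, 1000]  # six depths, upper one near euphotic base
bs = []
for _ in range(500):
    # 20% lognormal measurement noise on each trap flux
    noisy = [F0 * (z / 100.0) ** -true_b * math.exp(random.gauss(0.0, 0.2))
             for z in depths]
    bs.append(fit_martin(depths, noisy)[1])

mean_b = sum(bs) / len(bs)
spread = math.sqrt(sum((b - mean_b) ** 2 for b in bs) / len(bs))
```

The spread of the fitted b across replicates is the propagated parameter uncertainty; repeating the experiment with fewer or poorly placed depths widens it.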
NASA Astrophysics Data System (ADS)
Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose
2015-09-01
The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels, and the bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the characterized configuration differs from the instrument configuration in flight, given the harsh space environment and the stresses of the launch phase. The retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and incorrect characterization and correction of spectral shift and smile. These effects produce inaccurate atmospherically corrected data and propagate into the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of ISRF knowledge error and spectral calibration on Level-1 products and its propagation to Level-2 retrieved chlorophyll fluorescence has been analyzed. A spectral recalibration scheme implemented at Level-2 reduces the errors in Level-1 products so that the error in retrieved fluorescence within the oxygen absorption bands stays below 10%, enhancing the quality of the retrieved products. The work presented here shows how the minimization of spectral calibration errors requires an effort both in laboratory characterization and in the implementation of specific algorithms at Level-2.
Error analysis of a ratio pyrometer by numerical simulation
Gathers, G.R. )
1992-01-01
A numerical method has been devised to evaluate measurement errors for a three-channel ratio pyrometer as a function of temperature. The pyrometer is simulated by computer codes, which can be used to explore the behavior of various designs. The influence of the various components in the system can be evaluated. General conclusions can be drawn about what makes a good pyrometer, and an existing pyrometer was evaluated, to predict its behavior as a function of temperature. The results show which combination of two channels gives the best precision. 13 refs., 12 figs.
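For a ratio pyrometer, the temperature follows from the ratio of two channel radiances; under the Wien approximation the inversion is closed-form, which makes the error sensitivity easy to probe numerically. The channel wavelengths below are hypothetical, not those of the evaluated instrument:

```python
import math

C2 = 1.4388e-2  # second radiation constant [m K]

def wien_ratio(T, lam1, lam2):
    """Ratio of Wien-approximation spectral radiances L(lam1)/L(lam2)."""
    return (lam2 / lam1) ** 5 * math.exp((C2 / T) * (1.0 / lam2 - 1.0 / lam1))

def temperature_from_ratio(R, lam1, lam2):
    """Closed-form inversion of the Wien two-colour ratio for temperature."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (math.log(R) - 5.0 * math.log(lam2 / lam1))

lam1, lam2 = 0.65e-6, 0.90e-6      # hypothetical channel wavelengths [m]
T_true = 2000.0
R = wien_ratio(T_true, lam1, lam2)
T_rec = temperature_from_ratio(R, lam1, lam2)

# Sensitivity: a 1% error in the measured ratio shifts the recovered
# temperature by a few kelvin at this temperature and channel spacing.
T_err = temperature_from_ratio(1.01 * R, lam1, lam2)
```

Sweeping T_true and the channel pair in this way is the essence of evaluating which two-channel combination gives the best precision.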
Error analysis of a ratio pyrometer by numerical simulation
Gathers, G.R.
1990-05-01
A numerical method has been devised to evaluate measurement errors for a three-channel ratio pyrometer as a function of temperature. The pyrometer is simulated by computer codes, which can be used to explore the behavior of various designs. The influence of the various components in the system can be evaluated. General conclusions can be drawn about what makes a good pyrometer, and an existing pyrometer was evaluated, to predict its behavior as a function of temperature. The results show which combination of two channels gives the best precision. 12 refs., 12 figs.
Supporting the Development of Soft-Error Resilient Message Passing Applications using Simulation
Engelmann, Christian; Naughton III, Thomas J
2016-01-01
Radiation-induced bit flip faults are of particular concern in extreme-scale high-performance computing systems. This paper presents a simulation-based tool that enables the development of soft-error resilient message passing applications by permitting the investigation of their correctness and performance under various fault conditions. The documented extensions to the Extreme-scale Simulator (xSim) enable the injection of bit flip faults at specific injection location(s) and fault activation time(s), while supporting a significant degree of configurability of the fault type. Experiments show that the simulation overhead with the new feature is ~2,325% for serial execution and ~1,730% at 128 MPI processes, both with very fine-grain fault injection. Fault injection experiments demonstrate the usefulness of the new feature by injecting bit flips in the input and output matrices of a matrix-matrix multiply application, revealing the vulnerability of data structures, masking, and error propagation. xSim is the first simulation-based MPI performance tool that supports both the injection of process failures and bit flip faults.
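The kind of experiment described, flipping one bit in an input matrix and watching where the corruption lands, can be reproduced in miniature without xSim; this sketch uses plain Python floats rather than MPI:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0..63) of an IEEE-754 double."""
    (i,) = struct.unpack('<Q', struct.pack('<d', x))
    (y,) = struct.unpack('<d', struct.pack('<Q', i ^ (1 << bit)))
    return y

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
clean = matmul(A, B)

# Inject a single upset into A[0][0]: flipping the lowest exponent bit
# turns 1.0 into 0.5. The corruption propagates to the whole output row.
A[0][0] = flip_bit(A[0][0], 52)
faulty = matmul(A, B)
corrupted = [(i, j) for i in range(2) for j in range(2)
             if faulty[i][j] != clean[i][j]]
```

Flipping a low mantissa bit instead would demonstrate masking: the output differences can vanish entirely in rounding.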
NASA Astrophysics Data System (ADS)
Jia, Hao; Chen, Bin; Li, Dong; Zhang, Yong
2015-02-01
To adapt to complex tissue structures, laser propagation in a two-layered skin model is simulated to compare voxel-based Monte Carlo (VMC) and tetrahedron-based MC (TMC) methods with a geometry-based MC (GMC) method. In GMC, the interface is defined mathematically without any discretization. GMC is the most accurate method but is not applicable to complicated domains. The implementation of VMC is simple because of its structured voxels, but unavoidable errors are expected because of the zigzag polygonal interface. Compared with GMC and VMC, TMC provides a balance between accuracy and flexibility through its tetrahedral cells. In the present TMC, body-fitted tetrahedra are generated in the different tissues. No interface tetrahedral cells exist, thereby avoiding the photon reflection error that occurs in the interface cells of VMC. By introducing a distance threshold, the error caused by confusing the optical parameters of neighboring cells when photons are incident along a cell boundary can be avoided. The results show that the energy deposition error of TMC in the interfacial region is one-tenth to one-fourth of that of VMC, yielding more accurate computations of photon reflection, refraction, and energy deposition. The results for multilayered and n-shaped vessels indicate that a laser with a 1064-nm wavelength should be introduced to clean deep-buried vessels.
Propagation of landslide inventory errors on data driven landslide susceptibility models
NASA Astrophysics Data System (ADS)
Henriques, C. S.; Zezere, J. L.; Neves, M.; Garcia, R. A. C.; Oliveira, S. C.; Piedade, A.
2009-04-01
of landslide inventory #1 by a senior geomorphologist. This second phase of photo and morphologic interpretation (pre-validation) allowed the selection of 204 probable slope movements from the first landslide inventory. Landslide inventory #3 was obtained by field verification of the total set of probable landslide zones (408 points), performed by 6 geomorphologists. This inventory contains 193 validated slope movements, and includes 101 "new landslides" that had not been recognized by orthophotomap interpretation. Additionally, the field work enabled the cartographic delimitation of the slope movement depletion and accumulation zones, and the definition of landslide type. Landslide susceptibility was assessed using the three landslide inventories with a single predictive model (logistic regression) and the same set of landslide predisposing factors, to allow comparison of results. The uncertainty associated with landslide inventory errors and its propagation into landslide susceptibility results are evaluated and compared by the computation of success-rate and prediction-rate curves. The error derived from landslide inventorying is quantified by assessing the degree of overlap of the susceptible areas obtained from the different prediction models.
Simulations of ultra-high-energy cosmic rays propagation
Kalashev, O. E.; Kido, E.
2015-05-15
We compare two techniques for simulating the propagation of ultra-high-energy cosmic rays (UHECR) in intergalactic space: the Monte Carlo approach and a method based on solving transport equations in one dimension. For the former, we adopt the publicly available tool CRPropa, and for the latter we use the code TransportCR, which has been developed by the first author, used in a number of applications, and is made available online with the publication of this paper. While the CRPropa code is more universal, the transport equation solver has the advantage of a roughly 100 times higher calculation speed. We conclude that the methods give practically identical results for proton or neutron primaries if some accuracy improvements are introduced to the CRPropa code.
Numerical Simulation of Shock Wave Propagation in Fractured Cortical Bone
NASA Astrophysics Data System (ADS)
Padilla, Frédéric; Cleveland, Robin
2009-04-01
Shock waves (SW) are considered a promising method to treat bone non-unions, but the associated mechanisms of action are not well understood. In this study, numerical simulations are used to quantify the stresses induced by SWs in cortical bone tissue. We use a 3D FDTD code to solve the linear lossless equations that describe wave propagation in solids and fluids. A 3D model of a fractured rat femur was obtained from micro-CT data with a resolution of 32 μm. The bone was subjected to a plane SW pulse with a peak positive pressure of 40 MPa and a peak negative pressure of -8 MPa. During the simulations the principal tensile stress and maximum shear stress were tracked throughout the bone. It was found that the simulated stresses in a plane transverse to the bone axis may reach values higher than the tensile and shear strength of the bone tissue (around 50 MPa). These results suggest that the stresses induced by the SW may be large enough to initiate local micro-fractures, which may in turn trigger the start of bone healing for the case of a non-union.
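A full 3D FDTD bone model is beyond a snippet, but the core leapfrog update for the linear wave equation, with a sound-speed jump standing in for a fluid/bone interface, can be sketched in 1D; all grid sizes, speeds, and the pulse below are invented for illustration:

```python
import math

# 1-D leapfrog update for u_tt = c(x)^2 u_xx, with a sound-speed jump at
# x = 200 standing in for a fluid/bone interface (all values invented).
nx, dx = 400, 1.0
c = [1.0 if i < 200 else 2.0 for i in range(nx)]
dt = 0.4 * dx / max(c)                       # CFL-stable time step

u_prev = [math.exp(-0.01 * (i - 150) ** 2) for i in range(nx)]  # initial pulse
u = u_prev[:]                                # zero initial velocity
for _ in range(400):
    u_next = [0.0] * nx                      # fixed (u = 0) ends
    for i in range(1, nx - 1):
        lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2
        u_next[i] = 2.0 * u[i] - u_prev[i] + (c[i] * dt) ** 2 * lap
    u_prev, u = u, u_next

transmitted = max(abs(v) for v in u[200:])   # pulse amplitude past the interface
```

The interface partially reflects and partially transmits the pulse, which is the same bookkeeping a 3D stress-tracking code performs cell by cell.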
Handling error propagation in sequential data assimilation using an evolutionary strategy
NASA Astrophysics Data System (ADS)
Bai, Yulong; Li, Xin; Huang, Chunlin
2013-07-01
An evolutionary strategy-based error parameterization method that searches for the most suitable error adjustment factors was developed to obtain better assimilation results. Numerical experiments were designed using classical nonlinear models (the Lorenz-63 model and the Lorenz-96 model). The crossover and mutation error adjustment factors of the evolutionary strategy were investigated with respect to four aspects: the initial conditions of the Lorenz model, ensemble sizes, observation covariance, and observation intervals. The search for error adjustment factors is usually performed using trial-and-error methods; to address this difficulty, a new data assimilation system coupled with genetic algorithms was developed. The method was tested in simplified model frameworks, and the results are encouraging. The evolutionary strategy-based error handling methods performed robustly under both perfect and imperfect model scenarios in the Lorenz-96 model. However, the application of the methodology to more complex atmospheric or land surface models remains to be tested.
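As a toy stand-in for such a search over error adjustment factors (the paper couples an evolutionary strategy to the assimilation system itself; this is merely a generic (1+1) strategy on a made-up objective):

```python
import random

random.seed(1)

def one_plus_one_es(objective, x0, sigma=0.5, iters=200):
    """(1+1) evolutionary strategy with a crude 1/5-success step-size rule."""
    x, fx = x0, objective(x0)
    for _ in range(iters):
        cand = x + random.gauss(0.0, sigma)
        fc = objective(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.22        # expand the step after a success
        else:
            sigma *= 0.95        # contract it after a failure
    return x, fx

# Hypothetical stand-in for tuning an error adjustment (inflation) factor:
# assume the assimilation error is minimised at an unknown factor of 1.6.
best_x, best_f = one_plus_one_es(lambda a: (a - 1.6) ** 2 + 0.1, x0=1.0)
```

In the real system the objective would be an assimilation skill score, which is exactly why a derivative-free search like this is attractive.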
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future missions, and some current ones, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher-order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis performed using EUVE state vectors showed that the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km, depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit, at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that optimizes the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would
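The J2 term that such on-board models do retain can be written down directly; this sketch evaluates the standard first-order oblateness acceleration and its size relative to central gravity. The constants are standard published values, but the 600 km, 45-degree orbit point is invented for illustration:

```python
import math

MU = 3.986004418e14   # Earth's GM [m^3 s^-2]
RE = 6378137.0        # Earth equatorial radius [m]
J2 = 1.08262668e-3    # leading zonal harmonic

def j2_acceleration(x, y, z):
    """First-order oblateness (J2) perturbing acceleration in ECI axes."""
    r2 = x * x + y * y + z * z
    r = math.sqrt(r2)
    k = 1.5 * J2 * MU * RE * RE / (r2 * r2 * r)   # 3/2 * J2 * mu * Re^2 / r^5
    f = 5.0 * z * z / r2
    return (-k * x * (1.0 - f), -k * y * (1.0 - f), -k * z * (3.0 - f))

# Illustrative point: 600 km altitude at 45 degrees latitude
r = RE + 600e3
x, y, z = r * math.cos(math.radians(45.0)), 0.0, r * math.sin(math.radians(45.0))
aj2 = j2_acceleration(x, y, z)
ratio = math.sqrt(sum(a * a for a in aj2)) / (MU / r ** 2)  # ~1e-3 of gravity
```

The omitted higher-order terms are a few orders of magnitude smaller still, which is why their effect only becomes visible over days of propagation.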
Exploring discharge prescribing errors and their propagation post-discharge: an observational study.
Riordan, Ciara O'; Delaney, Tim; Grimes, Tamasine
2016-10-01
Background Discharge prescribing error is common. Little is known about whether it persists post-discharge. Objective To explore the relationship between discharge prescribing error and post-discharge medication error. Setting This was a prospective observational study (March-May 2013) at an adult academic hospital in Ireland. Method Patients using three or more chronic medications pre-admission, with a clinical pharmacist-documented gold-standard pre-admission medication list, having a chronic medication stopped or started in hospital, and discharged to home were included. Within 10-14 days after discharge a gold-standard discharge medication list was prepared and compared to the discharge prescription to identify differences. Patients were telephoned to identify actual medication use. Community pharmacists, general practitioners and hospital prescribers were contacted to corroborate actual and intended medication use. Post-discharge medication errors were identified and their relationship to discharge prescribing error was explored. Main outcome measures Incidence, type, and potential severity of post-discharge medication error, and the relationship to discharge prescribing. Results Some 36 (43 %) of 83 patients experienced post-discharge medication error(s), the majority of whom (n = 31, 86 %) were at risk of moderate harm. Most (58 of 66) errors were discharge prescribing errors that persisted post-discharge. Unintentional prescription of an intentionally stopped medication; error in the dose, frequency or formulation; and unintentional omission of an active medication are the error types most likely to persist after discharge. Conclusion There is a need to implement discharge medication reconciliation to support medication optimisation post-hospitalisation.
Computational fluid dynamics simulation of sound propagation through a blade row.
Zhao, Lei; Qiao, Weiyang; Ji, Liang
2012-10-01
The propagation of sound waves through a blade row is investigated numerically. A wave splitting method in a two-dimensional duct with arbitrary mean flow is presented, based on which the pressure amplitude of different wave modes can be extracted at an axial plane. The propagation of a sound wave through a flat-plate blade row has been simulated by solving the unsteady Reynolds-averaged Navier-Stokes equations (URANS). The transmission and reflection coefficients obtained by Computational Fluid Dynamics (CFD) are compared with semi-analytical results. It is found that the low-order URANS scheme causes large errors if the sound pressure level is lower than -100 dB (taking as reference pressure the product of density, mean flow velocity, and speed of sound). The CFD code has sufficient precision for solving the interaction of a sound wave and a blade row, provided the boundary reflections have no substantial influence. Finally, the effects of flow Mach number, blade thickness, and blade turning angle on sound propagation are studied.
Technical Note: Simulation of 4DCT tumor motion measurement errors
Dou, Tai H.; Thomas, David H.; O’Connell, Dylan; Bradley, Jeffrey D.; Lamb, James M.; Low, Daniel A.
2015-01-01
Purpose: To determine whether and by how much commercial 4DCT protocols under- and overestimate tumor breathing motion. Methods: 1D simulations were conducted that modeled a 16-slice CT scanner and tumors moving proportionally to breathing amplitude. External breathing surrogate traces of at least 5-min duration from 50 patients were used. Breathing trace amplitudes were converted to motion by relating the nominal tumor motion to the 90th percentile breathing amplitude, reflecting motion defined by the more recent 5DCT approach. Based on clinical low-pitch helical CT acquisition, the CT detector moved according to its velocity while the tumor moved according to the breathing trace. When the CT scanner overlapped the tumor, the overlapping slices were identified as having imaged the tumor. This process was repeated starting at successive 0.1 s time bins in the breathing trace until there was insufficient breathing trace to complete the simulation. The tumor size was subtracted from the distance between the most superior and inferior tumor positions to determine the measured tumor motion for that specific simulation. The effect of scanning parameter variation was evaluated using two commercial 4DCT protocols with different pitch values. Because clinical 4DCT scan sessions would yield a single tumor motion displacement measurement per patient, errors in the tumor motion measurement were considered systematic. The means of the largest 5% and smallest 5% of the measured motions were selected to identify over- and underdetermined motion amplitudes, respectively. The process was repeated for tumor motions of 1-4 cm in 1 cm increments and for tumor sizes of 1-4 cm in 1 cm increments. Results: In the examined patient cohort, simulation using a pitch of 0.06 showed that 30% of the patients exhibited a 5% chance of mean breathing amplitude overestimations of 47%, while 30% showed a 5% chance of mean breathing amplitude underestimations of 36%; with a separate simulation
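The underestimation mechanism can be illustrated with a much smaller toy than the paper's simulation: if the scanner only "sees" the tumour during a finite acquisition window, the measured peak-to-peak motion depends on where in the breathing cycle that window falls. The trace shape, window length, and period below are invented:

```python
import math

def breathing(t, period=4.0, amp=1.0):
    """Idealised periodic breathing trace, 0..amp [cm]."""
    return amp * math.sin(2.0 * math.pi * t / period) ** 2

def measured_motion(t_start, window=1.0, dt=0.01):
    """Peak-to-peak tumour excursion visible inside one acquisition window."""
    pos = [breathing(t_start + i * dt) for i in range(int(window / dt))]
    return max(pos) - min(pos)

true_motion = 1.0                      # full peak-to-peak of the model trace
samples = [measured_motion(t0 / 10.0) for t0 in range(40)]  # sweep start phase
best = max(samples)                    # window that catches the full excursion
worst = min(samples)                   # window centred on a motion extremum
```

With this trace the worst-placed window reports only half the true amplitude; the paper's version adds the detector geometry, real surrogate traces, and tumour size to the same phase-dependence idea.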
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. The formula provides a new protocol family, the private father protocol, under the resource inequality framework, which includes private classical communication without assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information.
Numerical simulation of broadband vortex terahertz beams propagation
NASA Astrophysics Data System (ADS)
Semenova, V. A.; Kulya, M. S.; Bespalov, V. G.
2016-08-01
Orbital angular momentum (OAM) represents a new informational degree of freedom for data encoding and multiplexing in fiber and free-space communications. OAM-carrying beams (also called vortex beams) have been successfully used to increase the capacity of optical, millimetre-wave and radio frequency communication systems. Investigating the OAM potential of next-generation high-speed terahertz communications is therefore also of interest, given the ever-growing demand for capacity in telecommunications. Here we present a simulation-based study of broadband terahertz vortex beams generated by a spiral phase plate (SPP) and propagating in a non-dispersive medium. An algorithm based on scalar diffraction theory was used to obtain the spatial amplitude and phase distributions of the vortex beam in the frequency range from 0.1 to 3 THz at distances of 20-80 mm from the SPP. The simulation results show that amplitude and phase distributions free of unwanted modulation occur in wavelength ranges centred on wavelengths of which the SPP optical thickness is an integer multiple. This may allow the creation of a high-capacity near-field communication link that combines OAM and wavelength-division multiplexing.
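A minimal sketch of why an SPP behaves cleanly only at discrete wavelengths: under a simple thin-plate model (an assumption made here for illustration; material dispersion is neglected), the imparted topological charge scales inversely with wavelength, so it is an integer only when the design wavelength is an integer multiple of the probe wavelength:

```python
import math

def spp_phase(phi, wavelength, design_wavelength, charge=1):
    """Azimuthal phase imparted by a spiral phase plate whose height ramp
    gives exactly charge * 2*pi of phase per turn at the design
    wavelength (thin-plate model; plate material dispersion neglected)."""
    return charge * (design_wavelength / wavelength) * phi

def effective_charge(wavelength, design_wavelength, charge=1):
    """Topological charge seen at `wavelength`: phase per full turn / 2*pi."""
    return spp_phase(2 * math.pi, wavelength, design_wavelength, charge) / (2 * math.pi)

# A plate cut for a 0.6 mm wavelength carries an integer charge only where
# the design wavelength is an integer multiple of the probe wavelength:
print(effective_charge(0.6e-3, 0.6e-3))   # ~1.0: clean vortex
print(effective_charge(0.3e-3, 0.6e-3))   # ~2.0: clean vortex, doubled charge
print(effective_charge(0.4e-3, 0.6e-3))   # ~1.5: fractional charge, distorted vortex
```

Fractional effective charge is what produces the unwanted modulation between the favourable wavelength ranges.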
Simulating atmospheric free-space optical propagation: rainfall attenuation
NASA Astrophysics Data System (ADS)
Achour, Maha
2002-04-01
With recent advances and interest in Free-Space Optics (FSO) for commercial deployments, more attention has been placed on FSO weather effects and the availability of global weather databases. The Meteorological Visual Range (visibility) is considered one of the main weather parameters needed to estimate FSO attenuation due to haze, fog and low clouds. Proper understanding of the visibility measurements conducted over the years is essential. Unfortunately, such information is missing from most databases, leaving FSO players no choice but to use the standard visibility equation based on 2% contrast and other assumptions about the source luminance and its background. Another challenge is that visibility is measured at the visual wavelength of 550 nm. Extrapolating the measured attenuations to longer infrared wavelengths is not trivial and requires extensive experimentation. Scattering of electromagnetic waves by spherical droplets of different sizes is considered to simulate FSO scattering effects. This paper serves as an introduction to a series of publications on the simulation of FSO atmospheric propagation. This first part focuses on attenuation due to rainfall. Additional weather parameters, such as rainfall rate, temperature and relative humidity, are used to build the rain model. Comparison with previously published experimental measurements is performed to validate the model. The scattering cross section due to rain is derived from the density of the different raindrop sizes, and the raindrop fall velocity is derived from the overall rainfall rate. Absorption due to the presence of water vapor is computed using the temperature and relative humidity measurements.
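For quick link-budget estimates, rain attenuation in FSO work is often summarized by a power-law fit rather than the full scattering integrals; the sketch below uses illustrative coefficients commonly quoted for FSO links, whereas the paper itself derives the attenuation from drop-size physics:

```python
def rain_specific_attenuation(rain_rate_mm_per_h, k=1.076, alpha=0.67):
    """Specific attenuation (dB/km) from the common power-law fit
    gamma = k * R**alpha.  The default coefficients are illustrative
    values often quoted for FSO rain attenuation; real designs fit k
    and alpha to local drop-size distributions, as the paper does."""
    return k * rain_rate_mm_per_h ** alpha

def rain_path_loss_db(rain_rate_mm_per_h, path_km):
    """Total rain-induced loss over a link of length path_km."""
    return rain_specific_attenuation(rain_rate_mm_per_h) * path_km
```

The power law captures the headline behaviour (loss grows sublinearly with rain rate); the physical model in the paper explains where k and alpha come from.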
NASA Astrophysics Data System (ADS)
Privé, N. C.; Errico, R. M.; Tai, K.-S.
2013-06-01
The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.
Evaluation of color error and noise on simulated images
NASA Astrophysics Data System (ADS)
Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle
2010-01-01
The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted Quantum Efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation and signal-to-noise ratio (SNR). After this color correction optimization step, a Graphical User Interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known, a multi-spectral image described by the reflectance spectrum of each pixel, or an image taken at a high light level. A validation of the results has been performed with ST sensors under development. Finally, we present two applications: one based on the trade-off between color saturation and noise when optimizing the CCM, and the other on demosaicking SNR trade-offs.
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Consequently, two quantitative indices, the GVE (group velocity error) and the MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. To apply the proposed method to the selection of an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. The proper element size for different element types and the proper time step for different time integration schemes are then selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
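The two indices can be sketched from their stated definitions; the exact normalization and the mapping from arrival lag to group velocity error in the paper may differ, so this is a plausible reconstruction rather than the paper's code:

```python
import numpy as np

def cross_corr(sim, ref):
    """Normalized cross-correlation between equal-length 1-D signals."""
    s = (sim - sim.mean()) / (sim.std() * len(sim))
    r = (ref - ref.mean()) / ref.std()
    return np.correlate(s, r, mode="full")

def maccc(sim, ref):
    """Maximum absolute cross-correlation coefficient: close to 1 when the
    simulated waveform matches the reference shape (shape accuracy)."""
    return float(np.max(np.abs(cross_corr(sim, ref))))

def arrival_time_error(sim, ref, dt):
    """Lag of the correlation peak, in time units: a proxy for the
    position error from which a group velocity error can be derived."""
    cc = cross_corr(sim, ref)
    lag = int(np.argmax(np.abs(cc))) - (len(ref) - 1)
    return lag * dt
```

A simulated wave packet that arrives late but keeps its shape would show MACCC near 1 with a nonzero arrival-time error, separating the two error kinds the abstract identifies.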
Matsushima, Kyoji; Shimobaba, Tomoyoshi
2009-10-26
A novel method is proposed for simulating free-space propagation. This method is an improvement of the angular spectrum method (AS). The AS does not include any approximation of the propagation distance, because the formula thereof is derived directly from the Rayleigh-Sommerfeld equation. However, the AS is not an all-round method, because it produces severe numerical errors due to a sampling problem of the transfer function even in Fresnel regions. The proposed method resolves this problem by limiting the bandwidth of the propagation field and also expands the region in which exact fields can be calculated by the AS. A discussion on the validity of limiting the bandwidth is also presented.
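A 1-D sketch of the band-limited angular spectrum idea, assuming the limiting frequency follows from requiring the sampled transfer-function phase not to alias (the 2-D treatment and exact constants are in the paper):

```python
import numpy as np

def band_limited_asm(field, wavelength, dx, z):
    """Propagate a sampled 1-D complex field by distance z with the
    angular spectrum method, zeroing the transfer function beyond the
    frequency at which its sampled phase starts to alias.  Because the
    limit never exceeds 1/wavelength, evanescent components are also
    suppressed."""
    n = field.size
    fx = np.fft.fftfreq(n, d=dx)
    df = 1.0 / (n * dx)                      # transfer-function sampling
    f_limit = 1.0 / (wavelength * np.sqrt((2.0 * df * z) ** 2 + 1.0))
    kz = 2.0 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - fx**2))
    H = np.exp(1j * kz * z)
    H[np.abs(fx) > f_limit] = 0.0            # suppress aliased frequencies
    return np.fft.ifft(np.fft.fft(field) * H)
```

For fields whose spectrum fits inside the limit, propagation forward and back is an identity up to rounding, while aliased high-frequency content is removed instead of wrapping into spurious fringes.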
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
The numerical robustness of four generally applicable, recursive least-squares estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practical, interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.
Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation
NASA Astrophysics Data System (ADS)
Schiavazzi, Daniele; Marsden, Alison
2015-11-01
Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs, and complement clinical data collection while minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
NASA Technical Reports Server (NTRS)
Snow, L. S.; Kuhn, A. E.
1975-01-01
Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.
Petrov, Nikolay V; Pavlov, Pavel V; Malov, A N
2013-06-30
Using the equations of scalar diffraction theory we consider the formation of an optical vortex on a diffractive optical element. The algorithms are proposed for simulating the processes of propagation of spiral wavefronts in free space and their reflections from surfaces with different roughness parameters. The given approach is illustrated by the results of numerical simulations.
Mekid, Samir; Vacharanukul, Ketsaya
2006-01-01
To achieve dynamic error compensation in CNC machine tools, a non-contact laser probe capable of dimensional measurement of a workpiece while it is being machined has been developed and is presented in this paper. The measurements are automatically fed back to the machine controller for intelligent error compensation. Based on a well-resolved laser Doppler technique and real-time data acquisition, the probe delivers a very promising dimensional accuracy of a few microns over a range of 100 mm. The developed optical measuring apparatus employs a differential laser Doppler arrangement allowing acquisition of information from the workpiece surface. In addition, the measurements are traceable to standards of frequency, allowing higher precision.
First order error propagation of the procrustes method for 3D attitude estimation.
Dorst, Leo
2005-02-01
The well-known Procrustes method determines the optimal rigid body motion that registers two point clouds by minimizing the square distances of the residuals. In this paper, we perform the first order error analysis of this method for the 3D case, fully specifying how directional noise in the point clouds affects the estimated parameters of the rigid body motion. These results are much more specific than the error bounds which have been established in numerical analysis. We provide an intuitive understanding of the outcome to facilitate direct use in applications.
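For reference, the estimator whose noise sensitivity is being analysed is the standard SVD solution of the rigid Procrustes problem; the sketch below shows the estimator only, not the first-order error analysis itself:

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Optimal rotation R and translation t minimizing
    sum_i || R @ P[i] + t - Q[i] ||**2 for paired 3-D point clouds
    (rows are corresponding points).  This is the classical SVD
    (Kabsch/Procrustes) solution."""
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t
```

The error analysis in the paper asks how directional noise added to P and Q perturbs the R and t returned by exactly this kind of estimator.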
Error Characteristics of Two Grid Refinement Approaches in Aquaplanet Simulations: MPAS-A and WRF
Hagos, Samson M.; Leung, Lai-Yung R.; Rauscher, Sara; Ringler, Todd
2013-09-01
This study compares the error characteristics associated with two grid refinement approaches, global variable resolution and nesting, for high resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales-Atmosphere (MPAS-A), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context. For MPAS-A, simulations have been performed with a quasi-uniform resolution global domain at coarse (1°) and high (0.25°) resolution, and with a variable resolution domain with a high resolution region at 0.25° configured inside a coarse resolution global domain at 1° resolution. Similarly, WRF has been configured to run on a coarse (1°) and high (0.25°) tropical channel domain as well as on a nested domain with a high resolution region at 0.25° nested two-way inside the coarse resolution (1°) tropical channel. The variable resolution or nested simulations are compared against the high resolution simulations. Both models respond to increased resolution with enhanced precipitation and a significant reduction in the ratio of convective to non-convective precipitation. The limited area grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. Within the high resolution limited area, the zonal distribution of precipitation is affected by advection in MPAS-A and by the nesting strategy in WRF. In both models, 20-day Kelvin waves propagate through the high-resolution domains fairly unaffected by the change in resolution (and the presence of a boundary in WRF), but increased resolution strengthens eastward propagating inertio-gravity waves.
Ainscow, E K; Brand, M D
1998-09-21
The errors associated with the experimental application of metabolic control analysis are difficult to assess. In this paper, we give examples where Monte Carlo simulations of published experimental data are used in error analysis. Data were simulated according to the mean and error obtained from experimental measurements, and the simulated data were used to calculate control coefficients. Repeating the simulation 500 times allowed an estimate to be made of the error implicit in the calculated control coefficients. In the first example, state 4 respiration of isolated mitochondria, Monte Carlo simulations based on the system elasticities were performed. The simulations gave error estimates similar to the values reported in the original paper and those derived from a sensitivity analysis of the elasticities, demonstrating the validity of the method. In the second example, state 3 respiration of isolated mitochondria, Monte Carlo simulations were based on measurements of intermediates and fluxes. A key feature of this simulation was that the distribution of the simulated control coefficients did not follow a normal distribution, despite the simulation of the original data being based on normal distributions. Consequently, the error calculated using simulation was greater and more realistic than the error calculated directly by averaging the original results. The Monte Carlo simulations are also shown to be useful in experimental design: the individual data points that should be repeated in order to reduce the error in the control coefficients can be highlighted.
Modeling and Simulation for Realistic Propagation Environments of Communications Signals at SHF Band
NASA Technical Reports Server (NTRS)
Ho, Christian
2005-01-01
In this article, most of the widely accepted radio wave propagation models that have proven to be accurate in practice, as well as numerically efficient at SHF band, are reviewed. Weather and terrain data along the signal paths can be input in order to more accurately simulate the propagation environment under particular weather and terrain conditions. Radio signal degradation and communications impairment severity are investigated through the realistic radio propagation channel simulator. Three types of simulation approaches to predicting signal behavior are classified: deterministic, stochastic and attenuation map. The performance of the simulation can be evaluated under operating conditions for the test ranges of interest. Demonstration tests of a real-time propagation channel simulator show the capabilities and limitations of the simulation tool and its underlying models.
Statistical error in simulations of Poisson processes: Example of diffusion in solids
NASA Astrophysics Data System (ADS)
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in the ion conductivity obtained from such simulations. The error expression is not restricted to any particular computational method, but is valid for the simulation of Poisson processes in general. The analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
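The underlying statistical fact is easy to check empirically: for a Poisson process, the relative statistical error of a count with mean N is 1/sqrt(N). A small self-contained check (not the paper's conductivity expression):

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    """Poisson-distributed event count (Knuth's multiplication method;
    adequate for the moderate means used here)."""
    L, k, prod = math.exp(-lam), 0, rng.random()
    while prod > L:
        k += 1
        prod *= rng.random()
    return k

def relative_error_check(lam=30.0, runs=3000, seed=7):
    """Compare the empirical relative spread of simulated event counts
    with the analytical 1/sqrt(N) statistical error of a Poisson process."""
    rng = random.Random(seed)
    counts = [poisson_sample(lam, rng) for _ in range(runs)]
    empirical = statistics.stdev(counts) / statistics.mean(counts)
    return empirical, 1.0 / math.sqrt(lam)
```

When a long simulation records few diffusion events, this scaling is exactly why the resulting conductivity estimate carries a large statistical error.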
Shi, Xianbo; Reininger, Ruben; Sanchez Del Rio, Manuel; Assoufid, Lahsen
2014-07-01
A new method for beamline simulation combining ray-tracing and wavefront propagation is described. The `Hybrid Method' computes diffraction effects when the beam is clipped by an aperture or mirror length and can also simulate the effect of figure errors in the optical elements when diffraction is present. The effect of different spatial frequencies of figure errors on the image is compared with SHADOW results pointing to the limitations of the latter. The code has been benchmarked against the multi-electron version of SRW in one dimension to show its validity in the case of fully, partially and non-coherent beams. The results demonstrate that the code is considerably faster than the multi-electron version of SRW and is therefore a useful tool for beamline design and optimization.
Previs, Stephen F; Herath, Kithsiri; Castro-Perez, Jose; Mahsut, Ablatt; Zhou, Haihong; McLaren, David G; Shah, Vinit; Rohm, Rory J; Stout, Steven J; Zhong, Wendy; Wang, Sheng-Ping; Johns, Douglas G; Hubbard, Brian K; Cleary, Michele A; Roddy, Thomas P
2015-01-01
Stable isotope tracers are widely used to quantify metabolic rates, and yet a limited number of studies have considered the impact of analytical error on estimates of flux. For example, when estimating the contribution of de novo lipogenesis, one typically measures a minimum of four isotope ratios, i.e., the precursor and product labeling pre- and posttracer administration. This seemingly simple problem has 1 correct solution and 80 erroneous outcomes. In this report, we outline a methodology for evaluating the effect of error propagation on apparent physiological endpoints. We demonstrate examples of how to evaluate the influence of analytical error in case studies concerning lipid and protein synthesis; we have focused on (2)H2O as a tracer and contrast different mass spectrometry platforms including GC-quadrupole-MS, GC-pyrolysis-IRMS, LC-quadrupole-MS, and high-resolution FT-ICR-MS. The method outlined herein can be used to determine how to minimize variations in the apparent biology by altering the dose and/or the type of tracer. Likewise, one can facilitate biological studies by estimating the reduction in the noise of an outcome that is expected for a given increase in the number of replicate injections. PMID:26358910
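One planning rule mentioned above, estimating the noise reduction bought by replicate injections, follows from the 1/sqrt(n) scaling of the standard error; a minimal sketch (illustrative only, not the paper's full propagation machinery):

```python
import math

def replicates_needed(current_sd, target_sd, current_n=1):
    """Replicate injections shrink the standard error of an isotope-ratio
    measurement roughly as 1/sqrt(n).  Given the standard error observed
    with current_n replicates, return the number of replicates needed to
    reach a target standard error (idealized: assumes purely random,
    independent analytical error)."""
    return math.ceil(current_n * (current_sd / target_sd) ** 2)
```

For example, halving the analytical noise costs a fourfold increase in replicate injections, which is the kind of trade-off the abstract suggests weighing against changes in tracer dose or type.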
Error propagation in the numerical solutions of the differential equations of orbital mechanics
NASA Technical Reports Server (NTRS)
Bond, V. R.
1982-01-01
The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell formulation, the Encke formulation, and the Encke formulation with an independent variable related to the eccentric anomaly all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast, an element formulation has zero eigenvalues and is numerically stable.
NASA Astrophysics Data System (ADS)
Hossain, F.; Anagnostou, E. N.; Wang, D.
2004-05-01
A comprehensive Satellite Rainfall Error Model (SREM-2D) is developed that models the two-dimensional space-time error structure of satellite rain retrievals. The error structure is decomposed into the following components: (1) the sensor's detection structure for rain and no rain; (2) the sensor's spatial structure of detection for rain and no rain; (3) the sensor's spatial structure for rainfall retrieval; and (4) the sensor's temporal structure for the mean-field retrieval error. On the basis of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) data, parameters of the error structure for Passive Microwave (PM) and Infrared (IR) sensors are derived over the Southern United States. The utility of SREM-2D is demonstrated by coupling SREM-2D with the Community Land Model (CLM) over a 40,000 km2 area in Oklahoma. SREM-2D is found to be an elegant and valuable tool for formulating scientific questions related to understanding the propagation of satellite rainfall error in land surface simulations.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
A posteriori error control in numerical simulations of semiconductor nanodevices
NASA Astrophysics Data System (ADS)
Chen, Ren-Chuen; Li, Chun-Hsien; Liu, Jinn-Liang
2016-10-01
A posteriori error estimation and control methods are proposed for a quantum corrected energy balance (QCEB) model that describes electron and hole flows in semiconductor nanodevices under the influence of electrical, diffusive, thermal, and quantum effects. The error estimation is based on the maximum norm a posteriori error estimate developed by Kopteva (2008) for singularly perturbed semilinear reaction-diffusion problems. The error estimate results in three error estimators called the first-, second-, and third-order estimators to guide the refinement process. The second-order estimator is shown to be most effective for adaptive mesh refinement. The QCEB model is scaled to a dimensionless coupled system of seven singularly perturbed semilinear PDEs with various perturbation parameters so that the estimator can be applied to each PDE on equal footing. It is found that the estimator suitable for controlling the approximation error of one PDE (one physical variable) may not be suitable for another PDE, indicating that different parameters account for different boundary or interior layer regions as illustrated by two different semiconductor devices, namely, a diode and a MOSFET. A hybrid approach to automatically choosing different PDEs for calculating the estimator in the adaptive mesh refinement process is shown to be able to control the errors of all PDEs uniformly.
Killeen, P R; Taylor, T J
2000-07-01
The performance of fallible counters is investigated in the context of pacemaker-counter models of interval timing. Failure to reliably transmit signals from one stage of a counter to the next generates periodicity in mean and variance of counts registered, with means power functions of input and standard deviations approximately proportional to the means (Weber's law). The transition diagrams and matrices of the counter are self-similar: Their eigenvalues have a fractal form and closely approximate Julia sets. The distributions of counts registered and of hitting times approximate Weibull densities, which provide the foundation for a signal-detection model of discrimination. Different schemes for weighting the values of each stage may be established by conditioning. As higher order stages of a cascade come on-line the veridicality of lower order stages degrades, leading to scale-invariance in error. The capacity of a counter is more likely to be limited by fallible transmission between stages than by a paucity of stages. Probabilities of successful transmission between stages of a binary counter around 0.98 yield predictions consistent with performance in temporal discrimination and production and with channel capacities for identification of unidimensional stimuli.
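The fallible-counter idea can be simulated directly: a binary ripple counter whose inter-stage carries are transmitted only with probability p. The stage count and the read-out rule below are illustrative choices, not the paper's exact model:

```python
import random

def fallible_count(n_events, n_stages=8, p=0.98, rng=None):
    """Register n_events on a binary ripple counter in which a carry is
    handed from one stage to the next only with probability p.  A lost
    carry leaves all higher-order stages untouched, so the registered
    value systematically under-counts the true total."""
    rng = rng or random.Random(0)
    stages = [0] * n_stages
    for _ in range(n_events):
        carry, i = True, 0
        while carry and i < n_stages:
            if i > 0 and rng.random() > p:   # carry lost between stages
                break
            stages[i] ^= 1                   # toggle this stage
            carry = stages[i] == 0           # a 1 -> 0 toggle emits a carry
            i += 1
    return sum(bit << k for k, bit in enumerate(stages))

# A perfect counter (p = 1) registers the true count; fallible stages
# under-count, with spread that grows with the mean (Weber-like behavior).
```

Because a lost carry discards a larger value the higher the stage at which it fails, the variance of the registered count scales with the count itself, which is the Weber's law behaviour discussed in the abstract.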
SimProp: a simulation code for ultra high energy cosmic ray propagation
Aloisio, R.; Grillo, A.F.; Boncioli, D.; Petrera, S.; Salamida, F. E-mail: denise.boncioli@roma2.infn.it E-mail: petrera@aquila.infn.it
2012-10-01
A new Monte Carlo simulation code for the propagation of Ultra High Energy Cosmic Rays is presented. The results of this simulation scheme are tested by comparison with results of another Monte Carlo computation as well as with the results obtained by directly solving the kinetic equation for the propagation of Ultra High Energy Cosmic Rays. A short comparison with the latest flux published by the Pierre Auger collaboration is also presented.
NASA Astrophysics Data System (ADS)
Nguyen-Dinh, Maxime; Gainville, Olaf; Lardjane, Nicolas
2015-10-01
We present new results for blast wave propagation from the strong shock regime to the weak shock limit. For this purpose, we analyse the blast wave propagation using both Direct Numerical Simulation and an acoustic asymptotic model. This approach allows a full numerical study of a realistic pyrotechnic site, taking into account the main physical effects. We also compare simulation results with first measurements. This study is part of the French ANR-Prolonge project (ANR-12-ASTR-0026).
Revised error propagation of 40Ar/39Ar data, including covariances
NASA Astrophysics Data System (ADS)
Vermeesch, Pieter
2015-12-01
The main advantage of the 40Ar/39Ar method over conventional K-Ar dating is that it does not depend on any absolute abundance or concentration measurements, but only uses the relative ratios between five isotopes of the same element -argon- which can be measured with great precision on a noble gas mass spectrometer. The relative abundances of the argon isotopes are subject to a constant sum constraint, which imposes a covariant structure on the data: the relative amount of any of the five isotopes can always be obtained from that of the other four. Thus, the 40Ar/39Ar method is a classic example of a 'compositional data problem'. In addition to the constant sum constraint, covariances are introduced by a host of other processes, including data acquisition, blank correction, detector calibration, mass fractionation, decay correction, interference correction, atmospheric argon correction, interpolation of the irradiation parameter, and age calculation. The myriad of correlated errors arising during the data reduction are best handled by casting the 40Ar/39Ar data reduction protocol in a matrix form. The completely revised workflow presented in this paper is implemented in a new software platform, Ar-Ar_Redux, which takes raw mass spectrometer data as input and generates accurate 40Ar/39Ar ages and their (co-)variances as output. Ar-Ar_Redux accounts for all sources of analytical uncertainty, including those associated with decay constants and the air ratio. Knowing the covariance matrix of the ages removes the need to consider 'internal' and 'external' uncertainties separately when calculating (weighted) mean ages. Ar-Ar_Redux is built on the same principles as its sibling program in the U-Pb community (U-Pb_Redux), thus improving the intercomparability of the two methods with tangible benefits to the accuracy of the geologic time scale. The program can be downloaded free of charge from
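The matrix form that carries correlated uncertainties through a data-reduction chain is first-order covariance propagation, cov(y) = J cov(x) J^T; a generic sketch with an invented two-signal ratio (this illustrates the mechanism only, not the actual Ar/Ar reduction implemented in Ar-Ar_Redux):

```python
import numpy as np

def propagate_covariance(jacobian, cov_in):
    """First-order error propagation in matrix form: for y = f(x),
    cov(y) = J cov(x) J^T, where J is the Jacobian of f at the means.
    Casting a reduction protocol this way carries all correlations from
    step to step instead of treating errors as independent."""
    J = np.asarray(jacobian, dtype=float)
    S = np.asarray(cov_in, dtype=float)
    return J @ S @ J.T

# Invented example: the variance of a ratio r = a/b of two correlated
# signals, with a made-up covariance matrix.
a, b = 40.0, 39.0
cov_ab = np.array([[0.04, 0.01],
                   [0.01, 0.09]])
J = np.array([[1.0 / b, -a / b**2]])   # [dr/da, dr/db] at the means
var_r = propagate_covariance(J, cov_ab)[0, 0]
```

Chaining one such Jacobian per correction step (blank, fractionation, decay, interference, ...) yields the full covariance matrix of the final ages, which is what removes the need to track "internal" and "external" uncertainties separately.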
Numerical simulation of impurity propagation in sea channels
NASA Astrophysics Data System (ADS)
Cherniy, Dmitro; Dovgiy, Stanislav; Gourjii, Alexandre
2009-11-01
The building of the dike (2003) in the Kerch channel (between the Black and Azov seas) from the Taman peninsula is an example of technological influence on the fluid flow and hydrological conditions in the channel. A twofold increase in the flow velocity in the fairway region results in the appearance of dangerous tendencies in the hydrology of the Kerch channel. The flow near the coastal edges generates large-scale vortices, which move along the channel. The shipwreck (November 11, 2007) of the tanker ``Volganeft-139'' in the Kerch channel resulted in an ecological catastrophe in the region: more than 1300 tons of petroleum appeared on the sea surface. The intensive vortices formed here involve part of the impurity region in their own motion. The boundary of the impurity region is deformed, stretched, and covers the central part of the channel. The adaptation of the vortex singularity method to impurity propagation in the Kerch channel and the analysis of the pollution propagation are the main goals of this report.
Spectral-Element Simulations of Wave Propagation in Porous Media
NASA Astrophysics Data System (ADS)
Morency, C.; Tromp, J.
2007-12-01
Biot theory has been extensively used in the petroleum industry, where seismic surveys are performed to determine the physical properties of reservoir rocks. The theory is also of broad general interest when a physical understanding of the coupling between solid and fluid phases is desired. One fundamental result of Biot theory is the prediction of a second compressional wave, which attenuates rapidly, often referred to as "type II" or "Biot's slow compressional wave", in addition to the classical fast compressional and shear waves. The mathematical formulation of wave propagation in porous media developed by Biot is based upon the principle of virtual work, ignoring processes at the microscopic level. Moreover, even though the Biot formulations are claimed to be valid for non-uniform porosity, gradients in porosity are not explicitly incorporated in the original theory. More recent studies have focused on averaging techniques to derive the macroscopic porous medium equations from the microscale, and have attempted to derive an expression for the change in porosity, but there is still room to clarify such an expression and to properly integrate the effects of gradients in porosity. We aim to present a straightforward derivation of the main equations describing wave propagation in porous media, with a particular emphasis on the effects of gradients in porosity. We also present a two-dimensional numerical implementation of these equations using a spectral-element method. Finally, we have performed different benchmarks to validate our method, involving acoustic-poroelastic wave interaction and wave propagation in heterogeneous porous media.
NASA Technical Reports Server (NTRS)
Boville, Byron A.; Baumhefner, David P.
1990-01-01
Using the NCAR community climate model, Version I, the forecast error growth and the climate drift resulting from the omission of the upper stratosphere are investigated. In the experiment, the control simulation is a seasonal integration of a general circulation model of medium horizontal resolution with 30 levels extending from the surface to the upper mesosphere, while the main experiment uses an identical model, except that only the bottom 15 levels (below 10 mb) are retained. It is shown that both random and systematic errors develop rapidly in the lower stratosphere, with some local propagation into the troposphere in the 10-30-day time range. The random error growth rate in the troposphere in the case of the altered upper boundary was found to be slightly faster than that for the initial-condition uncertainty alone. However, this is not likely to make a significant impact in operational forecast models, because the initial-condition uncertainty is very large.
Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2011-01-01
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
End-to-End Network Simulation Using a Site-Specific Radio Wave Propagation Model
Djouadi, Seddik M; Kuruganti, Phani Teja; Nutaro, James J
2013-01-01
The performance of systems that rely on a wireless network depends on the propagation environment in which that network operates. To predict how these systems and their supporting networks will perform, simulations must take into consideration the propagation environment and how it affects the performance of the wireless network. Network simulators typically use empirical models of the propagation environment. However, these models are not intended for, and cannot be used for, predicting how a wireless system will perform in a specific location, e.g., in the center of a particular city or the interior of a specific manufacturing facility. In this paper, we demonstrate how a site-specific propagation model and the NS3 simulator can be used to predict the end-to-end performance of a wireless network.
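The empirical models the authors criticize can be illustrated by the common log-distance path-loss formula, PL(d) = PL(d0) + 10·n·log10(d/d0), whose site-independent exponent n is exactly why such models cannot capture a specific environment. The reference loss and exponent below are assumed values, not parameters from the paper.

```python
import math

def log_distance_path_loss_db(d, d0=1.0, pl0_db=40.0, n=3.0):
    """Empirical log-distance model PL(d) = PL(d0) + 10*n*log10(d/d0).
    pl0_db and the path-loss exponent n are assumed, site-independent
    fit parameters -- the limitation discussed above."""
    return pl0_db + 10.0 * n * math.log10(d / d0)

def received_power_dbm(tx_dbm, d, **kwargs):
    """Link budget: received power = transmit power - path loss."""
    return tx_dbm - log_distance_path_loss_db(d, **kwargs)
```

A site-specific model replaces the single exponent n with ray-traced or full-wave predictions for the actual geometry, which is the substitution the paper makes.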
Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments
Kuruganti, Phani Teja
2007-01-01
As network centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater, and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site-specific predictions of radio channel behavior. In this paper an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.
Investigation of Radar Propagation in Buildings: A 10 Billion Element Cartesian-Mesh FETD Simulation
Stowell, M L; Fasenfest, B J; White, D A
2008-01-14
In this paper, large-scale full-wave simulations are performed to investigate radar wave propagation inside buildings. In principle, a radar system combined with sophisticated numerical methods for inverse problems can be used to determine the internal structure of a building. The composition of the walls (cinder block, re-bar) may affect the propagation of the radar waves in a complicated manner. In order to provide a benchmark solution of radar propagation in buildings, including the effects of typical cinder block and re-bar, we performed large-scale full-wave simulations using a Finite Element Time Domain (FETD) method. This particular FETD implementation is tuned for the special case of an orthogonal Cartesian mesh and hence resembles FDTD in accuracy and efficiency. The method was implemented on a general-purpose massively parallel computer. In this paper we briefly describe the radar propagation problem, the FETD implementation, and we present results of simulations that used over 10 billion elements.
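The Cartesian-mesh FETD described above resembles FDTD, whose leapfrog update is easy to sketch in one dimension. The following toy (normalized units, Courant number 1, hard sinusoidal source, reflecting ends) only illustrates the update structure; it is not the authors' massively parallel 3-D solver.

```python
import numpy as np

def fdtd_1d(nx=200, nt=300, source_pos=100):
    """1-D leapfrog (Yee) update for Ez/Hy in vacuum, normalized so the
    Courant number is 1; hard sinusoidal source; reflecting (PEC) ends."""
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)
    for n in range(nt):
        hy += np.diff(ez)                 # H update (dt, dx absorbed into units)
        ez[1:-1] += np.diff(hy)           # E update on interior nodes
        ez[source_pos] = np.sin(0.1 * n)  # hard source
    return ez

field = fdtd_1d()
```

Each time step only touches nearest neighbors, which is what makes this family of schemes so amenable to the domain decomposition used in the 10-billion-element run.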
ITER Test Blanket Module Error Field Simulation Experiments
NASA Astrophysics Data System (ADS)
Schaffer, M. J.
2010-11-01
Recent experiments at DIII-D used an active-coil mock-up to investigate effects of magnetic error fields similar to those expected from two ferromagnetic Test Blanket Modules (TBMs) in one ITER equatorial port. The largest and most prevalent observed effect was plasma toroidal rotation slowing across the entire radial profile, up to 60% in H-mode when the mock-up local ripple at the plasma was ~4 times the local ripple expected in front of ITER TBMs. Analysis showed the slowing to be consistent with non-resonant braking by the mock-up field. There was no evidence of strong electromagnetic braking by resonant harmonics. These results are consistent with the near absence of resonant helical harmonics in the TBM field. Global particle and energy confinement in H-mode decreased by <20% for the maximum mock-up ripple, but <5% at the local ripple expected in ITER. These confinement reductions may be linked with the large velocity reductions. TBM field effects were small in L-mode but increased with plasma beta. The L-H power threshold was unaffected within error bars. The mock-up field increased plasma sensitivity to mode locking by a known n=1 test field (n = toroidal harmonic number). In H-mode the increased locking sensitivity was from TBM torque slowing plasma rotation. At low beta, locked mode tolerance was fully recovered by re-optimizing the conventional DIII-D "I-coils" empirical compensation of n=1 errors in the presence of the TBM mock-up field. Empirical error compensation in H-mode should be addressed in future experiments. Global loss of injected neutral beam fast ions was within error bars, but 1 MeV fusion triton loss may have increased. The many DIII-D mock-up results provide important benchmarks for models needed to predict effects of TBMs in ITER.
Propagation speed of combustion and invasion waves in stochastic simulations with competitive mixing
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.; Pope, S. B.
2012-08-01
We consider the propagation speeds of steady waves simulated by particles with stochastic motions, properties and mixing (Pope particles). Conventional conservative mixing is replaced by competitive mixing simulating invasion processes or conditions in turbulent premixed flames under the flamelet regime. The effects of finite correlation times for particle velocity are considered and wave propagation speeds are determined for different limiting regimes. The results are validated by stochastic simulations. If the correlation time is short, the model corresponds to the KPP-Fisher equation, which is conventionally used to simulate invasion processes. If the parameters of the simulations are properly selected, the model under consideration is shown to be consistent with existing experimental evidence for propagation speeds of turbulent premixed flames.
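The short-correlation-time limit mentioned above is the KPP-Fisher equation u_t = D u_xx + r u(1 − u), whose pulled-front speed is c = 2√(rD). A finite-difference sketch (grid and time-step values are assumptions, not taken from the paper) can recover this speed numerically:

```python
import numpy as np

def kpp_front_speed(D=1.0, r=1.0, dx=0.5, dt=0.05, nx=800, nt=1500):
    """Integrate u_t = D*u_xx + r*u*(1-u) from a step initial condition
    and estimate the front speed from the motion of the u = 0.5 level;
    pulled-front theory predicts c = 2*sqrt(r*D)."""
    u = np.zeros(nx)
    u[:50] = 1.0

    def step(u, nsteps):
        for _ in range(nsteps):
            lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
            lap[0] = lap[-1] = 0.0   # crude boundary treatment
            u = u + dt * (D * lap + r * u * (1.0 - u))
        return u

    def front(u):
        return np.argmax(u < 0.5) * dx   # first grid point below 0.5

    u = step(u, nt)          # discard the initial transient
    x1 = front(u)
    u = step(u, nt)
    x2 = front(u)
    return (x2 - x1) / (nt * dt)

speed = kpp_front_speed()    # should be close to 2*sqrt(r*D) = 2
```

The slow approach of the measured speed to the theoretical minimum speed is a known property of pulled fronts, so the estimate sits slightly below 2.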
Coherent-wave Monte Carlo method for simulating light propagation in tissue
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, only allows simulation of light propagation averaged over the ensemble of turbid medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.
Accumulation of errors in numerical simulations of chemically reacting gas dynamics
NASA Astrophysics Data System (ADS)
Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.
2015-12-01
The aim of the present study is to investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, as was the accumulation of errors for simulations employing different strategies of computation.
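Uncorrelated stochastic errors accumulated over time steps behave like a random walk, so the RMS accumulated error grows roughly as √n with the number of steps. A Monte Carlo sketch (the per-step error magnitude is an assumed value; this is not the authors' gas-dynamics code) illustrates the scaling:

```python
import random

def rms_accumulated_error(n_steps, n_trials=2000, eps=1e-6, seed=1):
    """RMS of the sum of n_steps independent uniform(-eps, eps) errors,
    estimated over n_trials runs; for uncorrelated per-step errors the
    RMS total grows like eps * sqrt(n_steps / 3)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        s = sum(rng.uniform(-eps, eps) for _ in range(n_steps))
        total += s * s
    return (total / n_trials) ** 0.5

ratio = rms_accumulated_error(400) / rms_accumulated_error(100)  # ~ sqrt(4) = 2
```

Refining the grid raises the step count and hence the accumulated stochastic error even as the deterministic truncation error falls, which is the trade-off the study probes.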
Whistler propagation in ionospheric density ducts: Simulations and DEMETER observations
NASA Astrophysics Data System (ADS)
Woodroffe, J. R.; Streltsov, A. V.; Vartanyan, A.; Milikh, G. M.
2013-11-01
On 16 October 2009, the Detection of Electromagnetic Emissions Transmitted from Earthquake Regions (DEMETER) satellite observed VLF whistler wave activity coincident with an ionospheric heating experiment conducted at HAARP. At the same time, density measurements by DEMETER indicate the presence of multiple field-aligned enhancements. Using an electron MHD model, we show that the distribution of VLF power observed by DEMETER is consistent with the propagation of whistlers from the heating region inside the observed density enhancements. We also discuss other interesting features of this event, including coupling of the lower hybrid and whistler modes, whistler trapping in artificial density ducts, and the interference of whistler waves from two adjacent ducts.
Evaluating the Effect of Global Positioning System (GPS) Satellite Clock Error via GPS Simulation
NASA Astrophysics Data System (ADS)
Sathyamoorthy, Dinesh; Shafii, Shalini; Amin, Zainal Fitry M.; Jusoh, Asmariah; Zainun Ali, Siti
2016-06-01
This study is aimed at evaluating the effect of Global Positioning System (GPS) satellite clock error using GPS simulation. Two test conditions are used. Case 1: all the GPS satellites have clock errors within the normal range of 0 to 7 ns, corresponding to a pseudorange error range of 0 to 2.1 m; Case 2: one GPS satellite suffers from critical failure, resulting in a clock error in the pseudorange of up to 1 km. It is found that an increase of GPS satellite clock error causes an increase of average positional error, due to the increase of pseudorange error in the GPS satellite signals, which results in increasing error in the coordinates computed by the GPS receiver. Varying average positional error patterns are observed for each of the readings. This is because the GPS satellite constellation is dynamic, causing varying GPS satellite geometry over location and time, so that GPS accuracy is location and time dependent. For Case 1, in general, the highest average positional error values are observed for readings with the highest PDOP values, while the lowest average positional error values are observed for readings with the lowest PDOP values. For Case 2, no correlation is observed between the average positional error values and PDOP, indicating that the error generated is random.
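The clock-to-pseudorange mapping quoted above (7 ns corresponding to about 2.1 m) is just range = c × Δt: each nanosecond of satellite clock offset displaces the measured pseudorange by about 0.3 m.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def pseudorange_error_m(clock_error_ns):
    """Ranging error caused by a satellite clock offset: range = c * dt,
    i.e. ~0.3 m of pseudorange error per nanosecond of clock error."""
    return C * clock_error_ns * 1e-9
```

The same relation run backwards shows that the Case 2 failure (1 km of pseudorange error) corresponds to a clock offset of a few microseconds.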
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
NASA Astrophysics Data System (ADS)
Paćko, P.; Bielak, T.; Spencer, A. B.; Staszewski, W. J.; Uhl, T.; Worden, K.
2012-07-01
This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphical processing unit (GPU) and computer unified device architecture (CUDA), available in low-cost graphical cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example. Other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance of very large models. The wave propagation modelling presented in the paper can be used in many practical applications of science and engineering.
Ghanem, Roger G.; Doostan, Alireza
2006-09-01
This paper investigates the predictive accuracy of stochastic models. In particular, a formulation is presented for the impact of data limitations associated with the calibration of parameters for these models, on their overall predictive accuracy. In the course of this development, a new method for the characterization of stochastic processes from corresponding experimental observations is obtained. Specifically, polynomial chaos representations of these processes are estimated that are consistent, in some useful sense, with the data. The estimated polynomial chaos coefficients are themselves characterized as random variables with known probability density function, thus permitting the analysis of the dependence of their values on further experimental evidence. Moreover, the error in these coefficients, associated with limited data, is propagated through a physical system characterized by a stochastic partial differential equation (SPDE). This formalism permits the rational allocation of resources in view of studying the possibility of validating a particular predictive model. A Bayesian inference scheme is relied upon as the logic for parameter estimation, with its computational engine provided by a Metropolis-Hastings Markov chain Monte Carlo procedure.
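The computational engine named above, a Metropolis-Hastings Markov chain Monte Carlo procedure, can be sketched generically as a random-walk sampler over a log-posterior. The toy target below (a standard normal for a single coefficient) and the proposal step size are illustrative assumptions, not the paper's SPDE posterior.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples=5000, step=0.5, seed=3):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random() + 1e-300) < lpp - lp:
            x, lp = xp, lpp        # accept the proposal
        samples.append(x)          # otherwise keep the current state
    return samples

# Toy posterior for a single chaos coefficient: standard normal
samples = metropolis_hastings(lambda c: -0.5 * c * c, 0.0)
```

Because only log-posterior differences are needed, the normalizing constant of the posterior never has to be computed, which is what makes this sampler attractive for Bayesian calibration.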
Simulation-based reasoning about the physical propagation of fault effects
NASA Technical Reports Server (NTRS)
Feyock, Stefan; Li, Dalu
1990-01-01
The research described deals with the effects of faults on complex physical systems, with particular emphasis on aircraft and spacecraft systems. Given that a malfunction has occurred and been diagnosed, the goal is to determine how that fault will propagate to other subsystems, and what the effects will be on vehicle functionality. In particular, the use of qualitative spatial simulation to determine the physical propagation of fault effects in 3-D space is described.
MHD simulation of a propagation of loop-like and bubble-like magnetic clouds
NASA Technical Reports Server (NTRS)
Vandas, M.; Fischer, S.; Pelant, P.; Dryer, M.; Smith, Z.; Detman, T.
1995-01-01
Propagation and evolution of magnetic clouds in the ambient solar wind flow is studied self-consistently using ideal MHD equations in three dimensions. Magnetic clouds as ideal force-free objects (cylinders or spheres) are ejected near the Sun and followed beyond the Earth's orbit. We investigate the influence of various initial parameters like the injection velocity, magnetic field strength, magnetic helicity, orientation of the clouds' axis, etc., on their propagation and evolution. We demonstrate that the injection velocity and magnetic field strength have a major influence on propagation. Simulation results are compared with analytical solutions of magnetic cloud evolution.
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6% respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound with maximum relative errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.
Perfect crystal propagator for physical optics simulations with Synchrotron Radiation Workshop
NASA Astrophysics Data System (ADS)
Sutter, John P.; Chubar, Oleg; Suvorov, Alexey
2014-09-01
Until now, a treatment of dynamical diffraction from perfect crystals has been missing in the "Synchrotron Radiation Workshop" (SRW) wavefront propagation computer code despite the widespread use of crystals on X-ray synchrotron beamlines. Now a special "Propagator" module for calculating dynamical diffraction from a perfect crystal in the Bragg case has been written in C++, integrated into the SRW C/C++ library and made available for simulations using the Python interface of SRW. The propagator performs local processing of the frequency-domain electric field in the angular representation. A 2-D Fast Fourier Transform is used for changing the field representation from/to the coordinate representation before and after applying the crystal propagator. This ensures seamless integration of the new propagator with the existing functionalities of the SRW package, allows compatibility with existing propagators for other optical elements, and enables the simulation of complex beamlines transporting partially coherent X-rays. The code has been benchmarked by comparison with predictions made by plane-wave and spherical-wave dynamical diffraction theory. Test simulations for a selection of X-ray synchrotron beamlines are also shown.
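The coordinate/angular switching described above is the core of angular-spectrum propagation: 2-D FFT to the angular representation, multiplication by a transfer function, inverse FFT back. The sketch below implements only free-space propagation with this pattern (no crystal reflectivity), with assumed grid and wavelength values; it is not the SRW code itself.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Free-space propagation by the angular spectrum method: 2-D FFT to
    the angular representation, multiply by exp(i*kz*dz), inverse FFT
    back to the coordinate representation."""
    ny, nx = field.shape
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    H = np.exp(1j * kz * dz) * (kz2 > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Assumed sampling: 64x64 grid, 1 um pixels, 0.1 nm (X-ray) wavelength
n, dx, wl = 64, 1e-6, 1e-10
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
gauss = np.exp(-(X**2 + Y**2) / (2 * (8 * dx) ** 2)).astype(complex)
out = angular_spectrum_propagate(gauss, wl, dx, 0.1)
```

A crystal propagator like SRW's slots into this pattern by replacing the free-space transfer function with the angle-dependent complex reflectivity from dynamical diffraction theory.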
Batista, R. Alves; Vliet, A. van; Boncioli, D.; Di Matteo, A.; Walz, D.
2015-10-01
The results of simulations of extragalactic propagation of ultra-high energy cosmic rays (UHECRs) have intrinsic uncertainties due to poorly known physical quantities and approximations used in the codes. We quantify the uncertainties in the simulated UHECR spectrum and composition due to different models of extragalactic background light (EBL), different photodisintegration setups, approximations concerning photopion production and the use of different simulation codes. We discuss the results for several representative source scenarios with proton, nitrogen or iron at injection. For this purpose we used SimProp and CRPropa, two publicly available codes for Monte Carlo simulations of UHECR propagation. CRPropa is a detailed and extensive simulation code, while SimProp aims to achieve acceptable results using a simpler code. We show that especially the choices for the EBL model and the photodisintegration setup can have a considerable impact on the simulated UHECR spectrum and composition.
FDTD Simulation on Terahertz Waves Propagation Through a Dusty Plasma
NASA Astrophysics Data System (ADS)
Wang, Maoyan; Zhang, Meng; Li, Guiping; Jiang, Baojun; Zhang, Xiaochuan; Xu, Jun
2016-08-01
The frequency dependent permittivity for dusty plasmas is provided by introducing the charging response factor and charge relaxation rate of airborne particles. The field equations that describe the characteristics of Terahertz (THz) wave propagation in a dusty plasma sheath are derived and discretized on the basis of the auxiliary differential equation (ADE) in the finite difference time domain (FDTD) method. Compared with numerical solutions in the reference, the accuracy of the ADE FDTD method is validated. The reflection property of the metal Aluminum interlayer of the sheath at THz frequencies is discussed. The effects of the thickness, effective collision frequency, airborne particle density, and charge relaxation rate of airborne particles on the electromagnetic properties of Terahertz waves passing through a dusty plasma slab are investigated. Finally, some potential applications of Terahertz waves in information and communication are analyzed. This work was supported by National Natural Science Foundation of China (Nos. 41104097, 11504252, 61201007, 41304119), the Fundamental Research Funds for the Central Universities (Nos. ZYGX2015J039, ZYGX2015J041), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20120185120012)
SIMULATION OF SHOCK WAVE PROPAGATION AND DAMAGE IN GEOLOGIC MATERIALS
Lomov, I; Vorobiev, O; Antoun, T H
2004-09-17
A new thermodynamically consistent material model for large deformation has been developed. It describes quasistatic loading of limestone as well as high-rate phenomena. This constitutive model has been implemented into an Eulerian shock wave code with adaptive mesh refinement. This approach was successfully used to reproduce static triaxial compression tests and to simulate experiments of blast loading and damage of limestone. Results compare favorably with experimentally available wave profiles from spherically-symmetric explosion in rock samples.
CFD simulation of vented explosion and turbulent flame propagation
NASA Astrophysics Data System (ADS)
Tulach, Aleš; Mynarz, Miroslav; Kozubková, Milada
2015-05-01
The very rapid physical and chemical processes during an explosion place high demands on both the quality and the number of detection devices. CFD numerical simulations are suitable instruments for more detailed determination of explosion parameters. The paper deals with mathematical modelling of vented explosion and turbulent flame spread using ANSYS Fluent software. The paper is focused on verification of the preciseness of the calculations by comparing calculated data with results obtained from experiments performed in the explosion chamber.
Theory and simulations of electrostatic field error transport
Dubin, Daniel H. E.
2008-07-15
Asymmetries in applied electromagnetic fields cause plasma loss (or compression) in stellarators, tokamaks, and non-neutral plasmas. Here, this transport is studied using idealized simulations that follow guiding centers in given fields, neglecting collective effects on the plasma evolution, but including collisions at rate ν. For simplicity the magnetic field is assumed to be uniform; transport is due to asymmetries in applied electrostatic fields. Also, the Fokker-Planck equation describing the particle distribution is solved, and the predicted transport is found to agree with the simulations. Banana, plateau, and fluid regimes are identified and observed in the simulations. When separate trapped particle populations are created by application of an axisymmetric squeeze potential, enhanced transport regimes are observed, scaling as √ν when ν < ω0 < ωB and as 1/ν when ω0 < ν < ωB (where ω0 and ωB are the rotation and axial bounce frequencies, respectively). These regimes are similar to those predicted for neoclassical transport in stellarators.
Simulation of ultrasonic wave propagation in welds using ray-based methods
NASA Astrophysics Data System (ADS)
Gardahaut, A.; Jezzine, K.; Cassereau, D.; Leymarie, N.
2014-04-01
Austenitic or bimetallic welds are particularly difficult to control due to their anisotropic and inhomogeneous properties. In this paper, we present a ray-based method to simulate the propagation of ultrasonic waves in such structures, taking into account their internal properties. This method is applied to a smooth representation of the grain orientation in the weld. The propagation model consists in solving the eikonal and transport equations in an inhomogeneous anisotropic medium. Simulation results are presented and compared to finite-element results for a distribution of grain orientation expressed in closed form.
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
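The Monte Carlo core that such skin models build on can be sketched as a photon random walk: exponential free paths with attenuation μt = μa + μs, absorption with probability μa/μt, otherwise scattering (isotropic here). This toy omits the paper's 3-D skin geometry, multispectral handling, and anisotropic phase function; the coefficients are assumptions.

```python
import random

def mean_scatters_before_absorption(mu_a, mu_s, n_photons=5000, seed=7):
    """Monte Carlo photon walk in an infinite homogeneous medium with
    isotropic scattering: exponential free paths with attenuation
    mu_t = mu_a + mu_s; at each interaction the photon is absorbed with
    probability mu_a/mu_t, otherwise it scatters.  The mean number of
    scatterings before absorption should approach mu_s/mu_a."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    p_abs = mu_a / mu_t
    total = 0
    for _ in range(n_photons):
        while rng.random() >= p_abs:   # survived this interaction: scatter
            rng.expovariate(mu_t)      # next free path (length unused here)
            total += 1
    return total / n_photons

mean_scat = mean_scatters_before_absorption(1.0, 4.0)   # expect ~ mu_s/mu_a = 4
```

Extending this walk with 3-D positions, boundaries, and a Henyey-Greenstein phase function yields the class of tissue models the paper compares against.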
Simulations of Wave Propagation in the Jovian Atmosphere after SL9 Impact Events
NASA Astrophysics Data System (ADS)
Pond, Jarrad W.; Palotai, C.; Korycansky, D.; Harrington, J.
2013-10-01
Our previous numerical investigations into Jovian impacts, including the Shoemaker-Levy 9 (SL9) event (Korycansky et al. 2006 ApJ 646. 642; Palotai et al. 2011 ApJ 731. 3), the 2009 bolide (Pond et al. 2012 ApJ 745. 113), and the ephemeral flashes caused by smaller impactors in 2010 and 2012 (Hueso et al. 2013; Submitted to A&A), have covered only up to approximately 3 to 30 seconds after impact. Here, we present further SL9 impact simulations extending to minutes after collision with Jupiter's atmosphere, with a focus on the propagation of shock waves generated as a result of the impact events. Using a similar yet more efficient remapping method than previously presented (Pond et al. 2012; DPS 2012), we move our simulation results onto a larger computational grid, conserving quantities with minimal error. The Jovian atmosphere is extended as needed to accommodate the evolution of the features of the impact event. We restart the simulation, allowing the impact event to continue to progress to greater spatial extents and for longer times, but at lower resolutions. This remap-restart process can be implemented multiple times to achieve the spatial and temporal scales needed to investigate the observable effects of waves generated by the deposition of energy and momentum into the Jovian atmosphere by an SL9-like impactor. As before, we use the three-dimensional, parallel hydrodynamics code ZEUS-MP 2 (Hayes et al. 2006 ApJ.SS. 165. 188) to conduct our simulations. Wave characteristics are tracked throughout these simulations. Of particular interest are the wave speeds and wave positions in the atmosphere as a function of time. These properties are compared to the characteristics of the HST rings to see if shock wave behavior within one hour of impact is consistent with waves observed at one hour post-impact and beyond (Hammel et al. 1995 Science 267. 1288). This research was supported by National Science Foundation Grant AST-1109729 and NASA Planetary Atmospheres Program Grant
Two-dimensional simulation of optical wave propagation through atmospheric turbulence.
Hyde, Milo W; Basu, Santasri; Schmidt, Jason D
2015-01-15
A methodology for the two-dimensional simulation of optical wave propagation through atmospheric turbulence is presented. The derivations of common statistical field moments in two dimensions, required for performing and validating simulations, are presented and compared with their traditional three-dimensional counterparts. Wave optics simulations are performed to validate the two-dimensional moments and to demonstrate the utility of performing two-dimensional wave optics simulations so that the results may be scaled to those of computationally prohibitive 3D scenarios. Discussions of the benefits and limitations of two-dimensional atmospheric turbulence simulations are provided throughout.
A Compact Code for Simulations of Quantum Error Correction in Classical Computers
Nyman, Peter
2009-03-10
This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information processing. We give examples of implementations of several error-correction codes. These implementations are made in a general quantum simulation language on a classical computer, in Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error-correction schemes in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods, giving us a clear, uncomplicated language for implementing the algorithms.
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the NASA Global Modeling and Assimilation Office (GMAO). A global numerical weather prediction model, the Goddard Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification using self-analysis; significant underestimation of forecast errors is seen with self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
ASTRA Simulation Results of RF Propagation in Plasma Medium
NASA Astrophysics Data System (ADS)
Goodwin, Joshua; Oneal, Brandon; Smith, Aaron; Sen, Sudip
2015-04-01
Transport barriers in toroidal plasmas play a major role in achieving the required confinement for reactor grade plasmas. They are formed by different mechanisms, but most of them are associated with a zonal flow which suppresses turbulence. A different way of producing a barrier has been recently proposed which uses the ponderomotive force of RF waves to reduce the fluctuations due to drift waves, but without inducing any plasma rotation. Using this mechanism, a transport coefficient is derived which is a function of RF power, and it is incorporated in transport simulations performed for the Brazilian tokamak TCABR, as a possible test bed for the theoretical model. The formation of a transport barrier is demonstrated at the position of the RF wave resonant absorption surface, having the typical pedestal-like temperature profile.
Computer simulation of crack propagation in ductile materials under biaxial dynamic loads
Chen, Y.M.
1980-07-29
The finite-difference computer program HEMP is used to simulate crack propagation in two-dimensional ductile materials under truly dynamic biaxial loads. A cumulative strain-damage criterion for the initiation of ductile fracture is used. To simulate crack propagation numerically, the method of equivalent free-surface boundary conditions and the method of artificial velocity are used in the computation. Centrally cracked rectangular aluminum bars subjected to constant-velocity biaxial loads at the edges are considered. Tensile and compressive loads in the direction of the crack length are found, respectively, to increase and decrease directional instability in crack propagation, where directional instability is characterized by branching or bifurcation.
Time-Sliced Thawed Gaussian Propagation Method for Simulations of Quantum Dynamics.
Kong, Xiangmeng; Markmann, Andreas; Batista, Victor S
2016-05-19
A rigorous method for simulations of quantum dynamics is introduced on the basis of concatenation of semiclassical thawed Gaussian propagation steps. The time-evolving state is represented as a linear superposition of closely overlapping Gaussians that evolve in time according to their characteristic equations of motion, integrated by fourth-order Runge-Kutta or velocity Verlet. The expansion coefficients of the initial superposition are updated after each semiclassical propagation period by implementing the Husimi Transform analytically in the basis of closely overlapping Gaussians. An advantage of the resulting time-sliced thawed Gaussian (TSTG) method is that it allows for full-quantum dynamics propagation without any kind of multidimensional integral calculation, or inversion of overlap matrices. The accuracy of the TSTG method is demonstrated as applied to simulations of quantum tunneling, showing quantitative agreement with benchmark calculations based on the split-operator Fourier transform method. PMID:26845486
GPU simulation of nonlinear propagation of dual band ultrasound pulse complexes
Kvam, Johannes; Angelsen, Bjørn A. J.; Elster, Anne C.
2015-10-28
In a new method of ultrasound imaging, called SURF imaging, dual-band pulse complexes composed of overlapping low-frequency (LF) and high-frequency (HF) pulses are transmitted, where the frequency ratio LF:HF is ∼1:20 and the relative bandwidth of both pulses is ∼50-70%. The LF pulse length is hence ∼20 times the HF pulse length. The LF pulse is used to nonlinearly manipulate the material elasticity observed by the co-propagating HF pulse. This produces nonlinear interaction effects that give more information on the propagation of the pulse complex. Due to the large difference in frequency and pulse length between the LF and HF pulses, we have developed a dual-level simulation in which the LF pulse propagation is first simulated independently of the HF pulse, using a temporal sampling frequency matched to the LF pulse. A separate equation is developed for the HF pulse, in which the presimulated LF pulse modifies the propagation velocity. The equations are adapted to parallel processing on a GPU, where a nonlinear simulation of a typical 10 MHz HF beam down to 40 mm completes in ∼2 s on a standard GPU. This simulation is hence very useful for studying the manipulation effect of the LF pulse on the HF pulse.
Simulation study of wakefield generation by two color laser pulses propagating in homogeneous plasma
Kumar Mishra, Rohit; Saroch, Akanksha; Jha, Pallavi
2013-09-15
This paper deals with a two-dimensional simulation of electric wakefields generated by two color laser pulses propagating in homogeneous plasma, using the VORPAL simulation code. The laser pulses are assumed to have a frequency difference equal to the plasma frequency. Simulation studies are performed for two similarly as well as oppositely polarized laser pulses, and the respective amplitudes of the generated longitudinal wakefields for the two cases are compared. Enhancement of the wake amplitude for the latter case is reported. This simulation study validates the analytical results presented by Jha et al. [Phys. Plasmas 20, 053102 (2013)].
PROPAGATING WAVE PHENOMENA DETECTED IN OBSERVATIONS AND SIMULATIONS OF THE LOWER SOLAR ATMOSPHERE
Jess, D. B.; Shelyag, S.; Mathioudakis, M.; Keys, P. H.; Keenan, F. P.; Christian, D. J.
2012-02-20
We present high-cadence observations and simulations of the solar photosphere, obtained using the Rapid Oscillations in the Solar Atmosphere imaging system and the MuRAM magnetohydrodynamic (MHD) code, respectively. Each data set demonstrates a wealth of magnetoacoustic oscillatory behavior, visible as periodic intensity fluctuations with periods in the range 110-600 s. Almost no propagating waves with periods less than 140 s and 110 s are detected in the observational and simulated data sets, respectively. High concentrations of power are found in highly magnetized regions, such as magnetic bright points (MBPs) and intergranular lanes. Radiative diagnostics of the photospheric simulations replicate our observational results, confirming that the current breed of MHD simulations is able to accurately represent the lower solar atmosphere. All observed oscillations are generated as a result of naturally occurring magnetoconvective processes, with no specific input driver present. Using contribution functions extracted from our numerical simulations, we estimate minimum G-band and 4170 Å continuum formation heights of 100 km and 25 km, respectively. Detected magnetoacoustic oscillations exhibit a dominant phase delay of -8° between the G-band and 4170 Å continuum observations, suggesting the presence of upwardly propagating waves. Most MBPs (73% from observations and 96% from simulations) display upwardly propagating wave phenomena, suggesting that the abundant oscillatory behavior detected higher in the solar atmosphere may be traced back to magnetoconvective processes occurring in the upper layers of the Sun's convection zone.
Characterizing the propagation of gravity waves in 3D nonlinear simulations of solar-like stars
NASA Astrophysics Data System (ADS)
Alvan, L.; Strugarek, A.; Brun, A. S.; Mathis, S.; Garcia, R. A.
2015-09-01
Context. The revolution of helio- and asteroseismology provides access to the detailed properties of stellar interiors by studying the stars' oscillation modes. Among them, gravity (g) modes are formed by constructive interference between progressive internal gravity waves (IGWs) propagating in stellar radiative zones. Our new 3D nonlinear simulations of the interior of a solar-like star allow us to study the excitation, propagation, and dissipation of these waves. Aims: The aim of this article is to clarify our understanding of the behavior of IGWs in a 3D radiative zone and to provide a clear overview of their properties. Methods: We use a method of frequency filtering that reveals the path of individual gravity waves of different frequencies in the radiative zone. Results: We are able to identify the region of propagation of different waves in 2D and 3D, to compare them to linear ray-tracing theory, and to distinguish between propagative and standing waves (g-modes). We also show that the energy carried by waves is distributed in different planes in the sphere, depending on their azimuthal wavenumber. Conclusions: We are able to isolate individual IGWs from a complex spectrum and to study their propagation in space and time. In particular, we highlight the necessity of studying the propagation of waves in 3D spherical geometry, since the distribution of their energy is not equipartitioned in the sphere.
Simulating underwater plasma sound sources to evaluate focusing performance and analyze errors
NASA Astrophysics Data System (ADS)
Ma, Tian; Huang, Jian-Guo; Lei, Kai-Zhuo; Chen, Jian-Feng; Zhang, Qun-Fei
2010-03-01
Focused underwater plasma sound sources are being applied in more and more fields. Focusing performance is one of the most important factors determining the transmission distance and peak values of the pulsed sound waves. The sound source's components and focusing mechanism were analyzed. A model was built in 3D Max, and wave strength was measured on the simulation platform. Error analysis was fully integrated into the model so that the effects of processing and installation errors on sound focusing performance could be studied. Practical ways to limit these errors are proposed. The results of the error analysis should guide the design, machining, placement, debugging, and application of underwater plasma sound sources.
SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors
Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I
2014-06-01
Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6 MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (a rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair, along with 560 positive tests (with error) using randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
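The gamma-based detection test above can be sketched in code. The following is an illustrative toy (our construction, not the VPD implementation: the grid size, the Gaussian fluence, the +30% error patch, and the global normalization are all invented assumptions), computing the 3%/3mm gamma pass fraction for an error-free and an errored image pair:

```python
import math

def gamma_pass(ref, meas, spacing=1.0, dd=0.03, dta=3.0):
    """Fraction of pixels with gamma < 1 (3%/3mm, global normalization).
    The search is restricted to the dta radius: reference points farther
    away cannot yield gamma < 1, so the pass fraction is unchanged."""
    ny, nx = len(ref), len(ref[0])
    norm = max(max(row) for row in ref)       # global dose normalization
    w = int(dta / spacing)                    # search half-window in pixels
    npass = 0
    for j in range(ny):
        for i in range(nx):
            ok = False
            for jj in range(max(0, j - w), min(ny, j + w + 1)):
                for ii in range(max(0, i - w), min(nx, i + w + 1)):
                    dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing ** 2 / dta ** 2
                    dose2 = ((meas[j][i] - ref[jj][ii]) / (dd * norm)) ** 2
                    if dist2 + dose2 < 1.0:   # some nearby point agrees
                        ok = True
                        break
                if ok:
                    break
            npass += ok
    return npass / (ny * nx)

# Smooth "predicted fluence" and a measured copy with an inserted error patch
ref = [[math.exp(-((j - 20) ** 2 + (i - 20) ** 2) / 200.0) for i in range(40)]
       for j in range(40)]
meas = [row[:] for row in ref]
for j in range(15, 20):
    for i in range(15, 20):
        meas[j][i] *= 1.30                    # simulated +30% intensity error

pass_clean = gamma_pass(ref, ref)             # negative test (no error)
pass_err = gamma_pass(ref, meas)              # positive test (with error)
```

Classifying the image as errored when the pass fraction falls below a threshold τ, and sweeping τ over the negative and positive tests, yields the ROC curve described in the abstract.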
Computer simulation of light pulse propagation for communication through thick clouds.
Bucher, E A
1973-10-01
This paper reports computer simulations of light pulse propagation through clouds. The amount and distribution of multipath time spreading was found to be independent of the detailed shape of the scattering function for sufficiently thick clouds. Moreover, the amount of multipath spreading for many scattering functions and cloud thicknesses can be predicted from a common set of data. Spatial spreading of the exit-spot diameter was found to saturate as a cloud of a given physical thickness became optically thicker and thicker. We observed that the propagation parameters for sufficiently thin clouds were dependent both on the cloud parameters and on the scattering function.
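The multipath time spreading described above arises because scattered photons travel farther than the direct path through the cloud. A minimal random-walk sketch (our construction; real cloud phase functions are strongly forward-peaked, whereas this toy scatters isotropically) illustrates the excess path length of transmitted photons:

```python
import math
import random

def path_through_slab(thickness, mfp, rng):
    """Walk one photon through a scattering slab (isotropic scattering,
    no absorption). Returns total path length if transmitted, else None."""
    z, mu, path = 0.0, 1.0, 0.0                  # depth, direction cosine, path
    while True:
        step = -mfp * math.log(1.0 - rng.random())   # exponential free path
        z += mu * step
        path += step
        if z >= thickness:                       # transmitted out the far side
            return path - (z - thickness) / mu   # clip the overshoot
        if z <= 0.0:
            return None                          # backscattered out of entry face
        mu = rng.uniform(-1.0, 1.0)              # isotropic re-scatter (in cosine)

rng = random.Random(1)
thickness, mfp = 10.0, 1.0                       # a cloud 10 optical depths thick
paths = []
while len(paths) < 500:
    p = path_through_slab(thickness, mfp, rng)
    if p is not None:
        paths.append(p)

# Multipath delay is proportional to the excess path beyond the direct one
mean_excess = sum(paths) / len(paths) - thickness
```

Dividing the excess path lengths by the speed of light gives the distribution of arrival-time delays whose shape the paper characterizes.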
Numerical simulation of interaction of few-cycle pulses counter-propagating in the optical fiber
NASA Astrophysics Data System (ADS)
Konev, L. S.; Shpolyanskiy, Yu A.
2016-08-01
The interaction of few-cycle pulses counter-propagating in an optical fiber is studied numerically via solution of equations for bi-directional fields that are equivalent to the full scalar wave equation. We simulate how a three-cycle optical pulse from a Ti:Sa laser is amplified as it propagates through the field of a higher-intensity second-harmonic pulse in a telecommunication-type single-mode optical fiber. Generation of the sixth harmonic is also observed under the considered conditions.
Hashemiyan, Z; Packo, P; Staszewski, W J; Uhl, T
2016-01-01
Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808
Sampling data for OSSEs. [simulating errors for WINDSAT Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Hoffman, Ross
1988-01-01
For the sake of realism, an OSSE should incorporate at least some of the high-frequency, small-scale phenomena that are suppressed by atmospheric models; these phenomena should be present in the realistic atmosphere sampled by all observing sensor systems whose data are being used. Errors are presently generated for an OSSE in a way that encompasses representational errors, sampling, geophysical local bias, random error, and sensor filtering.
Time-Space Decoupled Explicit Method for Fast Numerical Simulation of Tsunami Propagation
NASA Astrophysics Data System (ADS)
Guo, Anxin; Xiao, Shengchao; Li, Hui
2015-02-01
This study presents a novel explicit numerical scheme for simulating tsunami propagation using the exact solution of the wave equations. The objective of this study is to develop a fast and stable numerical scheme by decoupling the wave equation in both the time and space domains. First, the finite-difference scheme of the shallow-water equations for tsunami simulation is briefly introduced. The time-space decoupled explicit method based on the exact solution of the wave equation is given for the simulation of tsunami propagation without frequency-dispersion effects. Then, to consider wave dispersion, a second-order accurate numerical scheme to solve the shallow-water equations, which mimics physical frequency dispersion with numerical dispersion, is derived. Lastly, the computational efficiency and accuracy of the two types of numerical schemes are investigated using the 2004 Indonesian tsunami and the solution of the Boussinesq equation for a tsunami with a Gaussian hump over both uniform and varying water depths. The simulation results indicate that the proposed numerical scheme achieves fast and stable tsunami propagation simulation while maintaining accuracy.
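As a minimal illustration of an explicit shallow-water scheme of the kind discussed above (a linearized 1D forward-backward sketch over uniform depth, our construction rather than the authors' dispersive scheme), a Gaussian sea-surface hump splits into two pulses traveling at the long-wave speed:

```python
import math

g, H = 9.81, 4000.0              # gravity [m/s^2], uniform ocean depth [m]
nx, dx = 400, 5000.0             # grid points, spacing [m]
c = math.sqrt(g * H)             # long-wave speed, ~198 m/s
dt = 0.5 * dx / c                # CFL-limited explicit timestep

# Initial condition: Gaussian hump of surface elevation, fluid at rest
eta = [math.exp(-(((i - 100) * dx) / 50e3) ** 2) for i in range(nx)]
u = [0.0] * nx

for _ in range(100):
    # forward-backward update: velocity from the surface slope, ...
    for i in range(1, nx):
        u[i] -= dt * g * (eta[i] - eta[i - 1]) / dx
    # ... then elevation from the divergence of the updated flux
    for i in range(nx - 1):
        eta[i] -= dt * H * (u[i + 1] - u[i]) / dx

peak_cell = max(range(nx), key=lambda i: eta[i])
```

After 100 steps the initial hump has split into two half-amplitude pulses propagating away from the source, the basic behavior any tsunami scheme must reproduce before dispersion is added.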
Radio wave propagation in arch-shaped tunnels: Measurements and simulations by asymptotic methods
NASA Astrophysics Data System (ADS)
Masson, E.; Combeau, P.; Cocheril, Y.; Berbineau, M.; Aveneau, L.; Vauzelle, R.
2010-01-01
Several wireless communication systems are being developed for communication between train and ground and between trains in the railway and mass-transit domains, driven by operational needs for security and comfort. To deploy these systems in specific environments such as tunnels (straight or curved, with rectangular or arch-shaped cross sections), specific propagation models have to be developed. Radio wave propagation in straight arch-shaped tunnels is modeled using asymptotic methods, such as ray tracing and ray launching, combined with a tessellation of the arched cross section. A method of interpolating the facets' normals was implemented to minimize the error introduced by the tessellation. The results are validated by comparison with the literature and with measurements.
Sampling errors in free energy simulations of small molecules in lipid bilayers.
Neale, Chris; Pomès, Régis
2016-10-01
Free energy simulations are a powerful tool for evaluating the interactions of molecular solutes with lipid bilayers as mimetics of cellular membranes. However, these simulations are frequently hindered by systematic sampling errors. This review highlights recent progress in computing free energy profiles for inserting molecular solutes into lipid bilayers. Particular emphasis is placed on a systematic analysis of the free energy profiles, identifying the sources of sampling errors that reduce computational efficiency, and highlighting methodological advances that may alleviate sampling deficiencies. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg.
NASA Technical Reports Server (NTRS)
Matda, Y.; Crawford, F. W.
1974-01-01
An economical low-noise plasma simulation model is applied to a series of problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of a magnetic field. The model is described and tested, first in the absence of an applied signal, and then with a small-amplitude perturbation, to establish the low-noise features and to verify the theoretical linear dispersion relation at wave energy levels as low as 10⁻⁶ of the plasma thermal energy. The method is then used to study propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories. The additional phenomena of sideband instability and satellite growth, stimulated by large-amplitude wave propagation and the resulting particle trapping, are described.
NASA Astrophysics Data System (ADS)
Mira, J.; Solana, P.; Bolado, R.
2003-03-01
In many physical processes there is uncertainty in the parameters that define the process, and this input uncertainty is propagated through the equations of the process to its output. Experimental design is essential to quantify the uncertainty of the input parameters. If the process is simulated by a computer code, propagation of uncertainties is carried out through the Monte Carlo method by sampling from the input parameter distribution and running the code for each sample. It is then important to obtain information about the way in which the parameters influence the output of the process. This is useful in deciding how to sample the input space when propagating uncertainties and on which parameters experimental effort should be concentrated. Here, we use dimensional and similarity analyses to reduce the dimension of the input variable space with no loss of information, and profit from this reduction when propagating uncertainties by Monte Carlo. Using dimensional analysis, the output is expressed in terms of the inputs through a series of dimensionless numbers; a dimension reduction is achieved because there are fewer dimensionless numbers than original parameters. To minimize the uncertainty of the estimation of the output, propagation of uncertainties should be carried out by sampling in the space of the dimensionless numbers and not in the space of the original parameters. The purpose of this paper is an application of propagation of uncertainties to a code that simulates the interaction of metal drilling with a laser beam, where there is uncertainty in the absorbed intensity of the beam and the density of the medium. By sampling in the reduced input space, a substantial variance reduction is achieved for the estimators of the mean, variance, and distribution function of the output. Moreover, the output is found to depend on the intensity and the density only through their quotient.
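The variance-reduction idea can be caricatured in a few lines (entirely our construction; the paper's code models laser drilling of metal). Assume the output depends on intensity I and density ρ only through the quotient q = I/ρ, here taken lognormal; because the reduced input space is one-dimensional, equal-probability stratification of q becomes trivial and sharply reduces estimator variance compared with crude sampling:

```python
import math
import random
import statistics
from statistics import NormalDist

def output(q):
    """Hypothetical dimensionless response curve of the process."""
    return q / (1.0 + q)

nd = NormalDist()                 # standard normal, used to sample lognormal q
rng = random.Random(42)
N = 100                           # model runs per uncertainty estimate

def crude_mean():
    # plain Monte Carlo: independent draws of q = I/rho
    return sum(output(math.exp(nd.inv_cdf(rng.random()))) for _ in range(N)) / N

def stratified_mean():
    # one draw per equal-probability stratum of q; feasible precisely
    # because the reduced input space is one-dimensional
    return sum(output(math.exp(nd.inv_cdf((i + rng.random()) / N)))
               for i in range(N)) / N

# Compare the spread of the two estimators over repeated experiments
crude_var = statistics.variance([crude_mean() for _ in range(200)])
strat_var = statistics.variance([stratified_mean() for _ in range(200)])
```

The stratified estimator's variance is far below the crude one's, mirroring the variance reduction the paper reports for sampling in the space of dimensionless numbers.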
Impacts of Characteristics of Errors in Radar Rainfall Estimates for Rainfall-Runoff Simulation
NASA Astrophysics Data System (ADS)
KO, D.; PARK, T.; Lee, T. S.; Shin, J. Y.; Lee, D.
2015-12-01
For flood prediction, weather radar is commonly employed to measure the amount of precipitation and its spatial distribution. However, rainfall estimated from radar contains uncertainty caused by errors such as beam blockage and ground clutter. Although previous studies have focused on removing errors from radar data, it is crucial to evaluate the runoff volumes that are influenced by those errors. Furthermore, the resolution of the rainfall fields modeled in previous studies of rainfall uncertainty analysis or distributed hydrological simulation is too coarse for practical application. Therefore, in the current study, we tested the effects of radar rainfall errors on rainfall-runoff with a high-resolution approach, called the spatial error model (SEM), based on the synthetic generation of random and cross-correlated radar errors. A number of events in the Nam River dam region were tested to investigate the peak discharge from the basin as a function of error variance. The results indicate that spatially dependent errors bring much higher variations in peak discharge than independent random errors. To further investigate the effect of the magnitude of cross-correlation between radar errors, different magnitudes of spatial cross-correlation were employed in the rainfall-runoff simulation. The results demonstrate that stronger correlation leads to higher variation of peak discharge, and vice versa. We conclude that the error structure in radar rainfall estimates significantly affects the predicted runoff peak. Therefore, efforts must consider not only removing the radar rainfall error itself but also weakening the cross-correlation structure of radar errors in order to forecast flood events more accurately. Acknowledgements This research was supported by a grant from a Strategic Research Project (Development of Flood Warning and Snowfall Estimation Platform Using Hydrological Radars), which
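The contrast between independent and cross-correlated radar-error fields, central to the SEM experiment above, can be sketched as follows (our toy construction: the 6x6 grid, the exponential correlation model, and the basin-mean proxy for runoff response are all invented):

```python
import math
import random
import statistics

def cholesky(A):
    """Plain Cholesky factorization of a symmetric positive-definite matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

cells = [(i, j) for i in range(6) for j in range(6)]          # 6x6 radar grid
corr_len = 3.0                                                # cells
C = [[math.exp(-math.dist(a, b) / corr_len) for b in cells] for a in cells]
L = cholesky(C)
n = len(cells)
rng = random.Random(7)

def error_field(correlated):
    """One realization of a unit-variance Gaussian radar-error field."""
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    if not correlated:
        return z
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

def basin_mean(f):          # basin-averaged error, a crude proxy for runoff response
    return sum(f) / n

var_indep = statistics.pvariance([basin_mean(error_field(False)) for _ in range(400)])
var_corr = statistics.pvariance([basin_mean(error_field(True)) for _ in range(400)])
```

Because correlated cell errors do not average out over the basin, the basin-mean variance is far larger in the correlated case, the same mechanism by which the paper's dependent errors inflate peak-discharge variability.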
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature of most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation, but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners, since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first-order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
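The adaptive-timestep idea for first-order operator splitting can be illustrated on a toy two-compartment reaction-diffusion ODE (our construction, far simpler than the RDME/DFSP setting): each substep is solved exactly, a full Lie step is compared against two half steps to estimate the local splitting error, and the timestep is refined or coarsened accordingly.

```python
import math

a, d = 3.0, 10.0                        # reaction and exchange rate constants

def react(y, dt):
    # exact substep for the nonlinear decay dy/dt = -a*y^2
    return [yi / (1.0 + a * yi * dt) for yi in y]

def diffuse(y, dt):
    # exact substep for exchange between two compartments (mean conserved)
    m = 0.5 * (y[0] + y[1])
    f = math.exp(-2.0 * d * dt)
    return [m + (y[0] - m) * f, m + (y[1] - m) * f]

def lie_step(y, dt):                    # first-order (Lie) operator splitting
    return diffuse(react(y, dt), dt)

def adaptive_split(y, t_end, dt, tol):
    t, steps = 0.0, 0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = lie_step(y, dt)
        half = lie_step(lie_step(y, 0.5 * dt), 0.5 * dt)
        err = max(abs(f - h) for f, h in zip(full, half))   # local error estimate
        if err > tol:
            dt *= 0.5                   # reject the step and refine
            continue
        y, t, steps = half, t + dt, steps + 1
        if err < 0.25 * tol:
            dt *= 2.0                   # error well under tolerance: coarsen
    return y, steps

y_end, steps = adaptive_split([1.0, 0.0], 1.0, 0.25, 1e-4)
```

The step-doubling comparison plays the role of the paper's local error estimate: it frees the practitioner from guessing a timestep, since the solver finds one that keeps the splitting error near the tolerance.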
Quantitative analyses of spectral measurement error based on Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin
2015-03-01
The spectral measurement error is controlled by the resolution and sensitivity of the spectroscopic instrument and by the instability of the surrounding environment. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating reference point measurement as an example, there is unavoidably a deviation between the measuring position and the theoretical position due to various influencing factors. To determine the error caused by the positioning accuracy of the measuring device, an MC simulation was carried out at a wavelength of 1310 nm for a 2% Intralipid solution, with 10¹⁰ photons and a ring sampling interval of 1 μm. The data from the MC simulation are analyzed with the thinning and calculating method (TCM) proposed in this talk. The results indicate that TCM can be used to quantitatively analyze the spectral measurement error caused by positioning inaccuracy.
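The positioning-error analysis can be caricatured without a full photon-transport code: jitter the measuring position in a toy signal-falloff model and compare the Monte Carlo spread with a first-order propagation estimate. The falloff model, the attenuation value, and the 1 μm jitter below are invented assumptions, not the talk's actual simulation:

```python
import math
import random
import statistics

mu_eff = 0.9                     # hypothetical effective attenuation [1/mm]

def signal(r_mm):
    # toy diffuse-reflectance falloff with source-detector distance
    return math.exp(-mu_eff * r_mm) / r_mm ** 2

r0, sigma = 2.0, 0.001           # nominal position [mm], 1 um positioning std
rng = random.Random(3)

# Monte Carlo: jitter the position, collect the resulting signal spread
samples = [signal(rng.gauss(r0, sigma)) for _ in range(20000)]
mc_std = statistics.pstdev(samples)

# First-order propagation for comparison: sigma_S ~ |dS/dr| * sigma_r
dSdr = (signal(r0 + 1e-6) - signal(r0 - 1e-6)) / 2e-6
lin_std = abs(dSdr) * sigma
```

For jitter this small the two estimates agree closely; the MC approach, like the one in the talk, remains valid when the response is too nonlinear for the linearized formula.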
Mézière, Fabien; Muller, Marie; Dobigny, Blandine; Bossy, Emmanuel; Derode, Arnaud
2013-02-01
Ultrasound propagation in clusters of elliptic (two-dimensional) or ellipsoidal (three-dimensional) scatterers randomly distributed in a fluid is investigated numerically. The essential motivation for the present work is to gain a better understanding of ultrasound propagation in trabecular bone. Bone microstructure exhibits structural anisotropy and multiple wave scattering. Some phenomena remain partially unexplained, such as the propagation of two longitudinal waves. The objective of this study was to shed more light on the occurrence of these two waves, using finite-difference simulations on a model medium simpler than bone. Slabs of anisotropic, scattering media were randomly generated. The coherent wave was obtained through spatial and ensemble-averaging of the transmitted wavefields. When varying relevant medium parameters, four of them appeared to play a significant role for the observation of two waves: (i) the solid fraction, (ii) the direction of propagation relatively to the scatterers orientation, (iii) the ability of scatterers to support shear waves, and (iv) a continuity of the solid matrix along the propagation. These observations are consistent with the hypothesis that fast waves are guided by the locally plate/bar-like solid matrix. If confirmed, this interpretation could significantly help developing approaches for a better understanding of trabecular bone micro-architecture using ultrasound.
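The coherent wave extraction described above is plain ensemble averaging: the deterministic part of the transmitted field survives the average, while realization-dependent scattered contributions cancel. A toy sketch with synthetic wavefields (not the paper's finite-difference output; amplitudes and frequencies are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_t = 200, 1024
t = np.linspace(0.0, 1.0, n_t)

# hypothetical transmitted fields: a deterministic pulse (the coherent
# part) plus a scattered contribution whose phase changes from one
# disorder realization to the next
coherent = np.exp(-((t - 0.3) / 0.02) ** 2) * np.cos(2 * np.pi * 40 * t)
fields = np.array([
    coherent + 0.5 * np.cos(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))
    for _ in range(n_real)
])

mean_field = fields.mean(axis=0)   # ensemble average -> coherent wave
residual = np.abs(mean_field - coherent).max()
print(residual)                    # incoherent part averages toward zero
```

The incoherent residual shrinks like 1/sqrt(N) with the number of realizations, which is why both spatial and ensemble averaging are used in the paper.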
Xiao, Xifeng; Voelz, David G; Toselli, Italo; Korotkova, Olga
2016-05-20
Experimental and theoretical work has shown that atmospheric turbulence can exhibit "non-Kolmogorov" behavior including anisotropy and modifications of the classically accepted spatial power spectral slope, -11/3. In typical horizontal scenarios, atmospheric anisotropy implies that the variations in the refractive index are more spatially correlated in both horizontal directions than in the vertical. In this work, we extend Gaussian beam theory for propagation through Kolmogorov turbulence to the case of anisotropic turbulence along the horizontal direction. We also study the effects of different spatial power spectral slopes on the beam propagation. A description is developed for the average beam intensity profile, and the results for a range of scenarios are demonstrated for the first time with a wave optics simulation and a spatial light modulator-based laboratory benchtop counterpart. The theoretical, simulation, and benchtop intensity profiles show good agreement and illustrate that an elliptically shaped beam profile can develop upon propagation. For stronger turbulent fluctuation regimes and larger anisotropies, the theory predicts a slightly more elliptical form of the beam than is generated by the simulation or benchtop setup. The theory also predicts that without an outer scale limit, the beam width becomes unbounded as the power spectral slope index α approaches a maximum value of 4. This behavior is not seen in the simulation or benchtop results because the numerical phase screens used for these studies do not model the unbounded wavefront tilt component implied in the analytic theory.
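One common way to realize such anisotropic, non-Kolmogorov statistics numerically is an FFT-based phase screen whose power-law spectrum has a free slope α and anisotropy factors; a minimal sketch (the spectral normalization is schematic, not the von Kármán constants used in the paper, and the parameter values are invented):

```python
import numpy as np

def aniso_spectrum(kx, ky, alpha=11/3, mu_x=3.0, mu_y=1.0, r0=0.1):
    """Power-law phase spectrum ~ ((mu_x kx)^2 + (mu_y ky)^2)^(-alpha/2).
    alpha = 11/3 with mu_x = mu_y = 1 recovers the Kolmogorov shape;
    mu_x > mu_y stretches the refractive-index correlation along x."""
    k2 = (mu_x * kx) ** 2 + (mu_y * ky) ** 2
    k2[k2 == 0] = np.inf              # suppress the unphysical piston term
    return r0 ** (-5 / 3) * k2 ** (-alpha / 2)

def phase_screen(n=256, dx=0.01, alpha=11/3, mu_x=3.0, mu_y=1.0, seed=0):
    """FFT-based random screen: filter complex white noise by the
    square root of the spectrum and transform back."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k)
    amp = np.sqrt(aniso_spectrum(kx, ky, alpha, mu_x, mu_y))
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.real(np.fft.ifft2(amp * noise)) * n

screen = phase_screen()
```

With mu_x > mu_y the screen is smoother along x than along y, which is the mechanism by which an initially circular beam develops the elliptical profile reported above.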
PUQ: A code for non-intrusive uncertainty propagation in computer simulations
NASA Astrophysics Data System (ADS)
Hunt, Martin; Haley, Benjamin; McLennan, Michael; Koslowski, Marisol; Murthy, Jayathi; Strachan, Alejandro
2015-09-01
We present a software package for the non-intrusive propagation of uncertainties in input parameters through computer simulation codes or mathematical models and associated analysis; we demonstrate its use to drive micromechanical simulations using a phase field approach to dislocation dynamics. The PRISM uncertainty quantification framework (PUQ) offers several methods to sample the distribution of input variables and to obtain surrogate models (or response functions) that relate the uncertain inputs with the quantities of interest (QoIs); the surrogate models are ultimately used to propagate uncertainties. PUQ requires minimal changes in the simulation code, just those required to annotate the QoI(s) for its analysis. Collocation methods include Monte Carlo, Latin Hypercube and Smolyak sparse grids and surrogate models can be obtained in terms of radial basis functions and via generalized polynomial chaos. PUQ uses the method of elementary effects for sensitivity analysis in Smolyak runs. The code is available for download and also available for cloud computing in nanoHUB. PUQ orchestrates runs of the nanoPLASTICITY tool at nanoHUB where users can propagate uncertainties in dislocation dynamics simulations using simply a web browser, without downloading or installing any software.
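The sampling-plus-surrogate workflow that PUQ automates can be sketched in a few lines; here a toy model stands in for the simulation code and a least-squares quadratic for the surrogate (PUQ's actual gPC/RBF machinery is richer, and all names and values below are illustrative assumptions):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One draw per 1/n stratum of [0, 1] in each column, then shuffled."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

def model(x):
    """Stand-in for the expensive simulation code (hypothetical)."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

def features(x):
    """Full quadratic basis in two inputs for the surrogate fit."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(1)
x_train = latin_hypercube(50, 2, rng)      # design of experiments
coef, *_ = np.linalg.lstsq(features(x_train), model(x_train), rcond=None)

# propagate input uncertainty through the cheap surrogate, not the code
x_mc = rng.random((100_000, 2))
y_mc = features(x_mc) @ coef
print(y_mc.mean(), y_mc.std())
```

The expensive code is called only 50 times; the hundred-thousand-sample uncertainty propagation runs entirely on the surrogate.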
NASA Astrophysics Data System (ADS)
Wu, Di M.; Zhao, S. S.; Lu, Jun Q.; Hu, Xin-Hua
2000-06-01
In Monte Carlo simulations of light propagation in biological tissues, photons propagating in the media are described as classical particles that are scattered and absorbed randomly, and their paths are tracked individually. To obtain statistically significant results, however, a large number of photons is needed, and the calculations are time consuming and sometimes infeasible with existing computing resources, especially when inhomogeneous boundary conditions are considered. To overcome this difficulty, we have implemented a parallel computing technique in our Monte Carlo simulations, an approach that is well justified because individual photon histories are independent. Utilizing PVM (Parallel Virtual Machine, a parallel computing software package), parallel codes in both C and Fortran have been developed on the massively parallel Cray T3E computer and on a local PC network running Unix/Sun Solaris. Our results show that parallel computing can significantly reduce the running time and make efficient use of low-cost personal computers. In this report, we present a numerical study of light propagation in a slab phantom of skin tissue using the parallel computing technique.
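Because photon histories are independent, parallelization amounts to giving each worker its own batch and its own random seed; a toy sketch (infinite homogeneous medium, invented coefficients; threads keep the sketch portable, whereas real speedups need processes or message passing, as in the paper's PVM codes):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def photon_batch(n_photons, mu_a, mu_s, seed):
    """Toy Monte Carlo: track photons in an infinite homogeneous medium
    and return the mean path length travelled before absorption."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s                          # total interaction coeff.
    total = 0.0
    for _ in range(n_photons):
        path = 0.0
        while True:
            path += rng.exponential(1.0 / mu_t)  # free path to next event
            if rng.random() < mu_a / mu_t:       # absorption vs scattering
                break
        total += path
    return total / n_photons

# one independent seed per worker keeps the random streams uncorrelated
seeds = [10, 11, 12, 13]
with ThreadPoolExecutor(max_workers=4) as pool:
    batch_means = list(pool.map(lambda s: photon_batch(2000, 0.1, 10.0, s),
                                seeds))
mean_path = float(np.mean(batch_means))   # theory: 1/mu_a = 10 for this toy
print(mean_path)
```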
NASA Astrophysics Data System (ADS)
Liu, Kai; Ming, Hai; Lu, Yonghua; Bai, Ming; Xie, Jiangping
2001-02-01
The optical characteristics and light wave propagation of various fiber probes, solid immersion lens (SIL) systems, and Super-RENS for near-field optical recording are numerically simulated using the 3D finite-difference time-domain (3D-FDTD) method. The aperture metal-coated probe has a near-field spot size smaller than that of the bare-glass fiber probe, which means a higher data density in near-field optical recording. The entirely metal-coated probe is shown to have an extremely small near-field spot size of about 10 nm, but its output electromagnetic wave decays to nearly zero within a few nanometers of propagation. The propagating and evanescent waves in different SIL systems are numerically simulated; the spot sizes differ because of the different polarizations. With a TbFe substrate, the spot size remains constant as the observation distance z increases, but the propagating, evanescent, and total energy decay more rapidly than in the SIL system without a TbFe substrate.
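The FDTD update at the heart of such solvers is easiest to see in its 1-D analogue; this sketch shows the staggered leapfrog update of E and H in normalized units (grid sizes, source, and Courant number are arbitrary choices for illustration, not the paper's 3-D setup):

```python
import numpy as np

# 1-D Yee leapfrog in vacuum: the same staggered E/H update that
# 3D-FDTD solvers apply componentwise
n, steps = 400, 200
ez = np.zeros(n)        # E on integer grid points (ez[0], ez[-1]: PEC walls)
hy = np.zeros(n - 1)    # H on the staggered half-grid
S = 0.5                 # Courant number c*dt/dx (stable for S <= 1)

for step in range(steps):
    hy += S * np.diff(ez)                         # H update from curl E
    ez[1:-1] += S * np.diff(hy)                   # E update from curl H
    ez[50] += np.exp(-((step - 30) / 10.0) ** 2)  # soft Gaussian source
```

After 200 steps the right-going pulse has moved S*(200-30) ≈ 85 cells from the source, which is the kind of propagation bookkeeping the 3-D solver performs for every field component.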
NASA Astrophysics Data System (ADS)
Ishmuratov, I. K.; Baibekov, E. I.
2015-12-01
We investigate the possibility to restore transient nutations of electron spin centers embedded in the solid using specific composite pulse sequences developed previously for the application in nuclear magnetic resonance spectroscopy. We treat two types of systematic errors simultaneously: (i) rotation angle errors related to the spatial distribution of microwave field amplitude in the sample volume, and (ii) off-resonance errors related to the spectral distribution of Larmor precession frequencies of the electron spin centers. Our direct simulations of the transient signal in erbium- and chromium-doped CaWO4 crystal samples with and without error corrections show that the application of the selected composite pulse sequences can substantially increase the lifetime of Rabi oscillations. Finally, we discuss the applicability limitations of the studied pulse sequences for the use in solid-state electron paramagnetic resonance spectroscopy.
Watanabe, Y.; Abe, S.
2014-06-15
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. The nuclear reaction models implemented in the PHITS code are validated by comparison with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high-energy component of secondary cosmic-ray neutrons, from 10 MeV up to several hundred MeV, is the most significant source of soft errors regardless of design rule.
A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation
NASA Astrophysics Data System (ADS)
Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei
2015-10-01
In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, including synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optical axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; the CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optical axis drift of the laser was analyzed and measured, and did not exceed 0.06 mrad. The measurement results indicate that the total error and drift of the measurement methodology is less than 0.275 mrad. The methodology can satisfy measurements of the dynamic angle of sight at higher precision and larger scale.
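The quoted 0.275 mrad total is consistent with combining the three angular components in quadrature (the abstract does not state the combination rule, so root-sum-square is an assumption here):

```python
import math

# angular error components reported above, in mrad
components = {
    "CCD measurement error and drift": 0.26,
    "laser spot error on the diffuse plane": 0.065,
    "optical axis drift of the laser": 0.06,
}
total = math.sqrt(sum(v ** 2 for v in components.values()))
print(f"combined error = {total:.3f} mrad")   # → 0.275 mrad
```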
Using Simulation to Address Hierarchy-Related Errors in Medical Practice
Calhoun, Aaron William; Boone, Megan C; Porter, Melissa B; Miller, Karen H
2014-01-01
Objective: Hierarchy, the unavoidable authority gradients that exist within and between clinical disciplines, can lead to significant patient harm in high-risk situations if not mitigated. High-fidelity simulation is a powerful means of addressing this issue in a reproducible manner, but participant psychological safety must be assured. Our institution experienced a hierarchy-related medication error that we subsequently addressed using simulation. The purpose of this article is to discuss the implementation and outcome of these simulations. Methods: Script and simulation flowcharts were developed to replicate the case. Each session included the use of faculty misdirection to precipitate the error. Care was taken to assure psychological safety via carefully conducted briefing and debriefing periods. Case outcomes were assessed using the validated Team Performance During Simulated Crises Instrument. Gap analysis was used to quantify team self-insight. Session content was analyzed via video review. Results: Five sessions were conducted (3 in the pediatric intensive care unit and 2 in the Pediatric Emergency Department). The team was unsuccessful at addressing the error in 4 (80%) of 5 cases. Trends toward lower communication scores (3.4/5 vs 2.3/5), as well as poor team self-assessment of communicative ability, were noted in unsuccessful sessions. Learners had a positive impression of the case. Conclusions: Simulation is a useful means to replicate hierarchy error in an educational environment. This methodology was viewed positively by learner teams, suggesting that psychological safety was maintained. Teams that did not address the error successfully may have impaired self-assessment ability in the communication skill domain. PMID:24867545
Measurement and simulation of clock errors from resource-constrained embedded systems
NASA Astrophysics Data System (ADS)
Collett, M. A.; Matthews, C. E.; Esward, T. J.; Whibberley, P. B.
2010-07-01
Resource-constrained embedded systems such as wireless sensor networks are becoming increasingly sought-after in a range of critical sensing applications. Hardware for such systems is typically developed as a general tool, intended for research and flexibility. These systems often have unexpected limitations and sources of error when being implemented for specific applications. We investigate via measurement and simulation the output of the onboard clock of a Crossbow MICAz testbed, comprising a quartz oscillator accessed via a combination of hardware and software. We show that the clock output available to the user suffers a number of instabilities and errors. Using a simple software simulation of the system based on a series of nested loops, we identify the source of each component of the error, finding that there is a 7.5 × 10⁻⁶ probability that a given oscillation from the governing crystal will be miscounted, resulting in frequency jitter over a 60 µHz range.
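The effect of such a per-tick miscount probability can be illustrated with a counting-window Monte Carlo; the nominal frequency below is an assumed MICAz crystal value, and this toy model deliberately does not reproduce the 60 µHz figure, which depends on the actual hardware/software counter chain:

```python
import numpy as np

# each crystal oscillation is independently miscounted with probability
# p_miss; over a counting window this biases and jitters the apparent
# frequency (fractional bias ~ -p_miss, binomial jitter around it)
rng = np.random.default_rng(0)
f_nominal = 7.3728e6                     # Hz (assumed crystal frequency)
p_miss = 7.5e-6                          # miscount probability (from paper)

ticks = int(f_nominal)                   # one-second counting windows
counts = rng.binomial(ticks, 1.0 - p_miss, size=1000)
frac_offset = counts / f_nominal - 1.0   # apparent fractional frequency error
print(frac_offset.mean(), frac_offset.std())
```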
Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2010-01-01
The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.
Gordon, J. J.; Crimaldi, A. J.; Hagan, M.; Moore, J.; Siebers, J. V.
2007-01-15
This work evaluates: (i) the size of random and systematic setup errors that can be absorbed by 5 mm clinical target volume (CTV) to planning target volume (PTV) margins in prostate intensity modulated radiation therapy (IMRT); (ii) agreement between simulation results and published margin recipes; and (iii) whether shifting contours with respect to a static dose distribution accurately predicts dose coverage due to setup errors. In 27 IMRT treatment plans created with 5 mm CTV-to-PTV margins, random setup errors with standard deviations (SDs) of 1.5, 3, 5 and 10 mm were simulated by fluence convolution. Systematic errors with identical SDs were simulated using two methods: (a) shifting the isocenter and recomputing dose (isocenter shift), and (b) shifting patient contours with respect to the static dose distribution (contour shift). Maximum tolerated setup errors were evaluated such that 90% of plans had target coverage equal to the planned PTV coverage. For coverage criteria consistent with published margin formulas, plans with 5 mm margins were found to absorb combined random and systematic SDs ≈ 3 mm. Published recipes require margins of 8-10 mm for 3 mm SDs. For the prostate IMRT cases presented here a 5 mm margin would suffice, indicating that published recipes may be pessimistic. We found significant errors in individual plan doses given by the contour shift method. However, dose population plots (DPPs) given by the contour shift method agreed with the isocenter shift method for all structures except the nodal CTV and small bowel. For the nodal CTV, contour shift DPP differences were due to the structure moving outside the patient. Small bowel DPP errors were an artifact of large relative differences at low doses. Estimating individual plan doses by shifting contours with respect to a static dose distribution is not recommended. However, approximating DPPs is acceptable, provided care is taken with structures such as the nodal CTV which lie close
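The fluence-convolution treatment of random setup error amounts to blurring the planned dose with a Gaussian of the setup SD; a minimal 1-D sketch (idealized flat dose, hypothetical field and margin geometry, not the paper's 3-D plans):

```python
import numpy as np

x = np.arange(-50.0, 50.0, 0.5)               # position (mm)
dose = ((x > -25) & (x < 25)).astype(float)   # idealized static dose profile

def blur(dose, sigma_mm, dx=0.5):
    """Convolve the static dose with a Gaussian of the random setup SD."""
    s = sigma_mm / dx
    half = 4 * int(np.ceil(s))
    k = np.arange(-half, half + 1)
    g = np.exp(-0.5 * (k / s) ** 2)
    return np.convolve(dose, g / g.sum(), mode="same")

blurred = blur(dose, 3.0)                     # 3 mm random setup SD
# with a 5 mm margin, the CTV edge sits 5 mm inside the field edge
ctv_edge = np.argmin(np.abs(x - 20.0))
print(blurred[ctv_edge])                      # still close to full dose
```

The CTV edge retains most of the planned dose because it sits well inside the blurred penumbra, which is the mechanism by which a 5 mm margin absorbs ~3 mm setup SDs.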
Effects of Crew Resource Management Training on Medical Errors in a Simulated Prehospital Setting
ERIC Educational Resources Information Center
Carhart, Elliot D.
2012-01-01
This applied dissertation investigated the effect of crew resource management (CRM) training on medical errors in a simulated prehospital setting. Specific areas addressed by this program included situational awareness, decision making, task management, teamwork, and communication. This study is believed to be the first investigation of CRM…
NASA Astrophysics Data System (ADS)
Intriligator, D. S.; Sun, W.; Detman, T. R.; Dryer, M.; Intriligator, J.; Deehr, C. S.; Webber, W. R.; Gloeckler, G.; Miller, W. D.
2015-12-01
Large solar events can have severe adverse global impacts at Earth. These solar events can also propagate throughout the heliosphere and into the interstellar medium. We focus on the July 2012 and Halloween 2003 solar events. We simulate these events starting from the vicinity of the Sun at 2.5 Rs. We compare our three-dimensional (3D) time-dependent simulations to available spacecraft (s/c) observations at 1 AU and beyond. Based on comparisons of the predictions from our simulations with in-situ measurements, we find that the effects of these large solar events can be observed in the outer heliosphere, the heliosheath, and even into the interstellar medium. We use two simulation models. The HAFSS (HAF Source Surface) model is a kinematic model. HHMS-PI (Hybrid Heliospheric Modeling System with Pickup protons) is a numerical magnetohydrodynamic solar wind (SW) simulation model. Both HHMS-PI and HAFSS are ideally suited for these analyses since, starting at 2.5 Rs from the Sun, they model the slowly evolving background SW and the impulsive, time-dependent events associated with solar activity. Our models naturally reproduce dynamic 3D spatially asymmetric effects observed throughout the heliosphere. Pre-existing SW background conditions have a strong influence on the propagation of shock waves from solar events. Time-dependence is a crucial aspect of interpreting s/c data. We show comparisons of our simulation results with STEREO A, ACE, Ulysses, and Voyager s/c observations.
Wang, Fei; Toselli, Italo; Korotkova, Olga
2016-02-10
An optical system consisting of a laser source and two independent consecutive phase-only spatial light modulators (SLMs) is shown to accurately simulate a generated random beam (first SLM) after interaction with a stationary random medium (second SLM). To illustrate the range of possibilities, a recently introduced class of random optical frames is examined on propagation in free space and several weak turbulent channels with Kolmogorov and non-Kolmogorov statistics.
A computer simulation study of type III radio burst propagation through the solar corona
NASA Astrophysics Data System (ADS)
Itkina, M. A.; Levin, B. N.
1992-01-01
Type III solar radio burst propagation through large-scale coronal structure is numerically simulated. It is shown that radio wave refraction in an overdense streamer results in an increase of the apparent radial distance of the type III fundamental source and produces a broadening of the radiation polar diagram in agreement with the observations. It is also verified that the well-known fine frequency structure of type IIIb emission can be due to the fibrous character of the streamer.
NASA Astrophysics Data System (ADS)
Rauter, N.; Lammering, R.
2015-04-01
In order to detect micro-structural damage accurately, new methods are currently being developed. A promising tool is the generation of higher harmonic wave modes caused by nonlinear Lamb wave propagation in plate-like structures. Because the amplitudes involved are very small, a cumulative effect is used. To get a better overview of this inspection method, numerical simulations are essential. Previous studies developed the analytical description of this phenomenon, based on the five-constant nonlinear elastic theory, and the analytical solution has been confirmed by numerical simulations. In this work, the nonlinear cumulative wave propagation is first simulated and analyzed considering micro-structural cracks in thin, linear elastic, isotropic plates. It is shown that there is a cumulative effect for the S1-S2 mode pair, and the sensitivity of the relative acoustic nonlinearity parameter to such damage is validated. An influence of crack size and orientation on the nonlinear wave propagation behavior is also observed. In a second step, the micro-structural cracks are replaced by a nonlinear material model: instead of the five-constant nonlinear elastic theory, hyperelastic material models implemented in commonly used FEM software are used to simulate the cumulative higher harmonic generation. The cumulative effect as well as the different nonlinear behavior of the S1-S2 and S2-S4 mode pairs are reproduced with these hyperelastic material models. It is shown that both numerical simulations, taking into account micro-structural cracks on the one hand and nonlinear material on the other, lead to comparable results. Furthermore, in comparison with the five-constant nonlinear elastic theory, the use of well-established hyperelastic material models such as Neo-Hooke and Mooney-Rivlin is a suitable alternative for simulating the cumulative higher harmonic generation.
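The relative acoustic nonlinearity parameter tracked in such studies is typically estimated from the FFT amplitudes of the fundamental and second harmonic, A2/(A1² x); a synthetic sketch (invented waveform with a second harmonic growing linearly with distance, standing in for the FEM output):

```python
import numpy as np

fs, f0 = 10e6, 0.2e6                 # sample rate and fundamental (Hz)
t = np.arange(0, 200e-6, 1 / fs)     # 2000 samples, 40 whole periods

def received(x, a1=1.0, beta=0.01):
    """Synthetic signal whose second harmonic grows linearly with the
    propagation distance x, mimicking the cumulative effect."""
    a2 = beta * a1 ** 2 * x
    return (a1 * np.sin(2 * np.pi * f0 * t)
            + a2 * np.sin(2 * np.pi * 2 * f0 * t))

def harmonic_amp(sig, f):
    """Amplitude of the spectral line nearest f."""
    spec = np.abs(np.fft.rfft(sig)) / (len(sig) / 2)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

for x in (10.0, 20.0, 40.0):
    s = received(x)
    beta_rel = harmonic_amp(s, 2 * f0) / (harmonic_amp(s, f0) ** 2 * x)
    print(x, beta_rel)               # constant in x for a cumulative effect
```

A distance-independent estimate is the signature of the cumulative effect; deviations from constancy are what reveal damage or a non-synchronous mode pair.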
Experimental study on propagation of fault slip along a simulated rock fault
NASA Astrophysics Data System (ADS)
Mizoguchi, K.
2015-12-01
Around pre-existing geological faults in the crust, we often observe an off-fault damage zone in which there are many fractures on various scales, from ~mm to ~m, whose density typically increases with proximity to the fault. One of the fracture formation processes is considered to be dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks with slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surface simulates a fault 35 cm in length and 1 cm in width. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied to the fault plane at an angle of 60° to the loading direction. When σ is 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded at a 1 MHz sampling rate with 16-bit resolution. When σ = 4.8 kN is applied, we observed fault slip events in which slip nucleates spontaneously in a subsection of the fault and propagates over the whole fault. However, the propagation speed is about 1.2 km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic rupture events. More effort is needed to reproduce earthquake-like slip events in the experiments. This work is supported by JSPS KAKENHI (26870912).
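A propagation speed like the quoted ~1.2 km/s would typically be extracted from the gauge array as the slope of position versus first-arrival time; a sketch with a hypothetical gauge layout and invented onset picks (real picks would come from the 1 MHz strain records):

```python
import numpy as np

# hypothetical gauge positions along the 35 cm fault
gauge_pos = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # m
# hypothetical picked onset times, consistent with ~1.2 km/s plus a
# common trigger offset
onset = gauge_pos / 1200.0 + 1.0e-3                          # s

speed, _ = np.polyfit(onset, gauge_pos, 1)   # slope of position vs time
print(f"rupture speed = {speed:.0f} m/s")    # → 1200 m/s
```

Least-squares fitting over all gauges, rather than a two-point difference, suppresses individual picking errors.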
Monte Carlo simulation for light propagation in 3D tooth model
NASA Astrophysics Data System (ADS)
Fu, Yongji; Jacques, Steven L.
2011-03-01
Monte Carlo (MC) simulation was implemented in a three-dimensional tooth model to simulate light propagation in the tooth for antibiotic photodynamic therapy and other laser therapies. The goal of this research is to estimate the light energy deposition in the target region of the tooth, given the light source information, the tooth optical properties, and the tooth structure. Two use cases are presented to demonstrate the practical application of this model: one compares the dose distributions of an isotropic point source and a narrow beam, and the other compares different incident points for the same light source. This model will help clinicians design PDT treatments in the tooth.
NASA Astrophysics Data System (ADS)
Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros
2012-10-01
We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
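In its simplest limit, the transitional MCMC machinery reduces to a plain Metropolis sampler over force-field parameters; a toy sketch in which a synthetic quantity of interest stands in for the MD runs (all values and the one-parameter model are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.5, 0.1, size=20)   # synthetic observations of a QoI

def log_post(eps):
    """Log-posterior for one force-field parameter under a flat positive
    prior and Gaussian observation noise (sigma = 0.1)."""
    if eps <= 0.0:
        return -np.inf
    return -0.5 * np.sum((data - eps) ** 2) / 0.1 ** 2

chain, eps = [], 1.0
for _ in range(20000):
    prop = eps + 0.05 * rng.normal()            # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(eps):
        eps = prop                               # Metropolis accept
    chain.append(eps)

post = np.array(chain[5000:])                    # discard burn-in
print(post.mean(), post.std())
```

In the paper, each `log_post` evaluation is an expensive MD run, which is exactly why the transitional sampler, the parallel scheduling, and the adaptive surrogates matter.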
Simulation of System Error Tolerances of a High Current Transport Experiment for Heavy-Ion Fusion
NASA Astrophysics Data System (ADS)
Lund, Steven M.; Bangerter, Roger O.; Friedman, Alex; Grote, Dave P.; Seidl, Peter A.
2000-10-01
A driver-scale, intense ion beam transport experiment (HCX) is being designed to test issues for Heavy Ion Fusion (HIF) [1]. Here we present detailed Particle-in-Cell simulations of HCX to parametrically explore how various system errors can impact machine performance. The simulations are transverse and include the full 3D fields of the quadrupole focusing magnets, spreads in axial momentum, conducting pipe boundary conditions, etc. System imperfections such as applied focusing field errors (magnet strength, field nonlinearities, etc.), alignment errors (magnet offsets and rotations), beam envelope mismatches to the focusing lattice, induced beam image charges, and beam distribution errors (beam nonuniformities, collective modes, and other distortions) are all analyzed in turn and in combination. The influence of these errors on the degradation of beam quality (emittance growth), halo production, and loss of beam control is evaluated. Evaluations of practical machine apertures and centroid steering corrections that can mitigate particle loss and degradation of beam quality are carried out. 1. P.A. Seidl, L.E. Ahle, R.O. Bangerter, V.P. Karpenko, S.M. Lund, A. Faltens, R.M. Franks, D.B. Shuman, and H.K. Springer, Design of a Proof-of-Principle High Current Transport Experiment for Heavy-Ion Fusion, these proceedings.
Tupper, Judith B; Pearson, Karen B; Meinersmann, Krista M; Dvorak, Jean
2013-06-01
Continuing education for health care workers is an important mechanism for maintaining patient safety and high-quality health care. Interdisciplinary continuing education that incorporates simulation can be an effective teaching strategy for improving patient safety. Health care professionals who attended a recent Patient Safety Academy had the opportunity to experience firsthand a simulated situation that included many potential patient safety errors. This high-fidelity activity combined the best practice components of a simulation and a collaborative experience that promoted interdisciplinary communication and learning. Participants were challenged to see, learn, and experience "ah-ha" moments of insight as a basis for error reduction and quality improvement. This innovative interdisciplinary educational training method can be offered in place of traditional lecture or online instruction in any facility, hospital, nursing home, or community care setting.
NASA Technical Reports Server (NTRS)
Goldberg, Louis F.
1992-01-01
Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.
Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators
NASA Technical Reports Server (NTRS)
Curtis, H. B.
1976-01-01
Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short-arc xenon lamp AM0 sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. The three types of solar cells studied were a silicon cell, a cadmium sulfide cell, and a gallium arsenide cell.
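The short-circuit-current error from a reference-cell/test-cell/simulator combination is conventionally captured by the spectral mismatch factor built from four overlap integrals; a sketch with invented spectra and responses (the formula is the standard one, the curves are toys):

```python
import numpy as np

def mismatch_factor(e_sun, e_sim, sr_ref, sr_test):
    """Spectral mismatch factor M: dividing the test cell's simulator
    reading by M corrects it to its sunlight value, provided the
    reference cell was used to set the simulator irradiance level."""
    I = lambda e, sr: float(np.sum(e * sr))   # uniform wavelength grid
    return ((I(e_sim, sr_test) * I(e_sun, sr_ref))
            / (I(e_sun, sr_test) * I(e_sim, sr_ref)))

wl = np.linspace(400.0, 1100.0, 200)              # nm
e_sun = np.exp(-((wl - 550.0) / 250.0) ** 2)      # toy sunlight spectrum
e_sim = np.exp(-((wl - 700.0) / 300.0) ** 2)      # toy redder simulator
sr_si = np.clip((wl - 400.0) / 600.0, 0.0, 1.0)   # toy silicon-like response
sr_test = np.where(wl < 900.0, 1.0, 0.0)          # toy test-cell response

print(mismatch_factor(e_sun, e_sim, sr_si, sr_test))
```

When the test and reference responses are identical, M is exactly 1, regardless of how badly the simulator spectrum deviates from sunlight; the error studied in this report is precisely the departure of M from 1.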
NASA Astrophysics Data System (ADS)
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: (1) sensor sub-system errors, (2) terrain influences, and (3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude, or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at the scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future
Confirmation of standard error analysis techniques applied to EXAFS using simulations
Booth, Corwin H; Hu, Yung-Jin
2009-12-14
Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance matrix representation of real parameter errors in typical fitting routines. Despite major advances in EXAFS analysis and in understanding all potential uncertainties, these methods are not routinely applied by all EXAFS users. Consequently, reported parameter errors are not reliable in many EXAFS studies in the literature. This situation has made many EXAFS practitioners leery of conventional error analysis applied to EXAFS data. However, conventional error analysis, if properly applied, can teach us more about our data, and even about the power and limitations of the EXAFS technique. Here, we describe the proper application of conventional error analysis to r-space fitting to EXAFS data. Using simulations, we demonstrate the veracity of this analysis by, for instance, showing that the number of independent data points from Stern's rule is balanced by the degrees of freedom obtained from a χ² statistical analysis. By applying such analysis to real data, we determine the quantitative effect of systematic errors. In short, this study is intended to remind the EXAFS community about the role of fundamental noise distributions in interpreting our final results.
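The Stern's-rule and χ² bookkeeping mentioned above is easy to make concrete. A minimal sketch, with function names and example fit ranges chosen for illustration (not taken from the paper):

```python
import math

def stern_n_idp(k_min, k_max, r_min, r_max):
    """Stern's rule: number of independent data points in an EXAFS fit
    over the k-range [k_min, k_max] and r-range [r_min, r_max]."""
    return 2.0 * (k_max - k_min) * (r_max - r_min) / math.pi + 2.0

def reduced_chi2(chi2, n_idp, n_params):
    """Reduced chi-squared: chi^2 divided by the degrees of freedom
    nu = N_idp - p, where p is the number of varied fit parameters."""
    return chi2 / (n_idp - n_params)

print(stern_n_idp(3.0, 12.0, 1.0, 3.0))   # about 13.5 independent points
```

For a fit over k = 3-12 Å⁻¹ and r = 1-3 Å, Stern's rule gives roughly 13.5 independent points, so varying more than about 13 parameters leaves no degrees of freedom for the χ² analysis.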
One-way approximation for the simulation of weak shock wave propagation in atmospheric flows.
Gallin, Louis-Jonardan; Rénier, Mathieu; Gaudard, Eric; Farges, Thomas; Marchiano, Régis; Coulouvrat, François
2014-05-01
A numerical scheme is developed to simulate the propagation of weak acoustic shock waves in the atmosphere with no absorption. It generalizes the method previously developed for a heterogeneous medium [Dagrau, Rénier, Marchiano, and Coulouvrat, J. Acoust. Soc. Am. 130, 20-32 (2011)] to the case of a moving medium. It is based on an approximate scalar wave equation for potential, rewritten in a moving time frame, and separated into three parts: (i) the linear wave equation in a homogeneous and quiescent medium, (ii) the effects of atmospheric winds and of density and speed of sound heterogeneities, and (iii) nonlinearities. Each effect is then solved separately by an adapted method: angular spectrum for the wave equation, finite differences for the flow and heterogeneity corrections, and analytical method in time domain for nonlinearities. To keep a one-way formulation, only forward propagating waves are kept in the angular spectrum part, while a wide-angle parabolic approximation is performed on the correction terms. The numerical process is validated in the case of guided modal propagation with a shear flow. It is then applied to the case of blast wave propagation within a boundary layer flow over a flat and rigid ground. PMID:24815240
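The three-way splitting described above (linear propagation handled spectrally, corrections and nonlinearities handled separately) can be illustrated on a much simpler 1-D model. The sketch below applies Lie splitting to viscous Burgers' equation, advancing diffusion exactly in Fourier space and the nonlinear term explicitly in physical space; the equation, parameters, and scheme are illustrative and are not the authors' method.

```python
import numpy as np

# Split-step (Lie splitting) for viscous Burgers u_t + u u_x = nu u_xx:
# a 1-D analogue of solving the linear part spectrally and the
# nonlinear part separately in the time domain.
N, L, nu, dt, steps = 256, 2 * np.pi, 0.05, 1e-3, 2000
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # integer wavenumbers
u = np.sin(x)

for _ in range(steps):
    # (i) linear diffusion step, exact in Fourier space
    u = np.fft.ifft(np.fft.fft(u) * np.exp(-nu * k**2 * dt)).real
    # (ii) nonlinear step, explicit in physical space (conservative form)
    flux_hat = np.fft.fft(0.5 * u**2)
    u = u - dt * np.fft.ifft(1j * k * flux_hat).real

print(np.abs(u).max())   # stays below 1: viscosity damps the initial sine
```

The splitting keeps each sub-step simple and stable; the paper's scheme adds the wind/heterogeneity corrections as a third, finite-difference step.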
Acoustic pulse propagation in an urban environment using a three-dimensional numerical simulation.
Mehra, Ravish; Raghuvanshi, Nikunj; Chandak, Anish; Albert, Donald G; Wilson, D Keith; Manocha, Dinesh
2014-06-01
Acoustic pulse propagation in outdoor urban environments is a physically complex phenomenon due to the predominance of reflection, diffraction, and scattering. This is especially true in non-line-of-sight cases, where edge diffraction and high-order scattering are major components of acoustic energy transport. Past work by Albert and Liu [J. Acoust. Soc. Am. 127, 1335-1346 (2010)] has shown that many of these effects can be captured using a two-dimensional finite-difference time-domain method, which was compared to the measured data recorded in an army training village. In this paper, a full three-dimensional analysis of acoustic pulse propagation is presented. This analysis is enabled by the adaptive rectangular decomposition method by Raghuvanshi, Narain and Lin [IEEE Trans. Visual. Comput. Graphics 15, 789-801 (2009)], which models sound propagation in the same scene in three dimensions. The simulation was run at a much higher usable bandwidth (nearly 450 Hz) and took only a few minutes on a desktop computer. It is shown that a three-dimensional solution provides better agreement with measured data than two-dimensional modeling, especially in cases where propagation over rooftops is important. In general, the predicted acoustic responses match well with measured results for the source/sensor locations. PMID:24907788
Testing the Propagating Fluctuations Model with a Long, Global Accretion Disk Simulation
NASA Astrophysics Data System (ADS)
Hogg, J. Drew; Reynolds, Christopher S.
2016-07-01
The broadband variability of many accreting systems displays characteristic structures: log-normal flux distributions, root-mean-square (rms)-flux relations, and long inter-band lags. These characteristics are usually interpreted as inward propagating fluctuations of the mass accretion rate in an accretion disk driven by stochasticity of the angular momentum transport mechanism. We present the first analysis of propagating fluctuations in a long-duration, high-resolution, global three-dimensional magnetohydrodynamic (MHD) simulation of a geometrically thin (h/r ≈ 0.1) accretion disk around a black hole. While the dynamical-timescale turbulent fluctuations in the Maxwell stresses are too rapid to drive radially coherent fluctuations in the accretion rate, we find that the low-frequency quasi-periodic dynamo action introduces low-frequency fluctuations in the Maxwell stresses, which then drive the propagating fluctuations. Examining both the mass accretion rate and emission proxies, we recover log-normality, linear rms-flux relations, and radial coherence that would produce inter-band lags. Hence, we successfully relate and connect the phenomenology of propagating fluctuations to modern MHD accretion disk theory.
Davidchack, Ruslan L.
2010-12-10
We investigate the influence of numerical discretization errors on computed averages in a molecular dynamics simulation of TIP4P liquid water at 300 K coupled to different deterministic (Nosé-Hoover and Nosé-Poincaré) and stochastic (Langevin) thermostats. We propose a couple of simple practical approaches to estimating such errors and taking them into account when computing the averages. We show that it is possible to obtain accurate measurements of various system quantities using step sizes of up to 70% of the stability threshold of the integrator, which for the system of TIP4P liquid water at 300 K corresponds to a step size of about 7 fs.
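One simple way to estimate such discretization errors, in the spirit of the practical approaches mentioned above, is to measure an average at several step sizes and extrapolate to zero step size assuming an O(dt²) bias. A toy sketch on a 1-D harmonic oscillator (not the TIP4P system; all numbers are illustrative):

```python
def avg_energy(dt, steps=200000):
    """Velocity-Verlet trajectory of a 1-D harmonic oscillator (m = k = 1).
    Returns the time-averaged total energy; its bias relative to the
    exact value 0.5 scales as O(dt^2)."""
    x, v, acc = 1.0, 0.0, -1.0
    total = 0.0
    for _ in range(steps):
        v += 0.5 * dt * acc       # half kick
        x += dt * v               # drift
        acc = -x                  # new force
        v += 0.5 * dt * acc       # half kick
        total += 0.5 * v * v + 0.5 * x * x
    return total / steps

e1, e2 = avg_energy(0.2), avg_energy(0.1)
e0 = e2 + (e2 - e1) / 3.0   # Richardson extrapolation assuming O(dt^2) bias
```

Halving the step size reduces the bias roughly fourfold, and the extrapolated value e0 removes the leading dt² term, which is the basic idea behind correcting averages for step-size effects.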
The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.
2006-01-01
Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors and solutions have remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
Accelerating Simulation of Seismic Wave Propagation by Multi-GPUs (Invited)
NASA Astrophysics Data System (ADS)
Okamoto, T.; Takenaka, H.; Nakamura, T.; Aoki, T.
2010-12-01
Simulation of seismic wave propagation is essential in modern seismology: the effects of irregular topography of the surface, internal discontinuities and heterogeneity on the seismic waveforms must be precisely modeled in order to probe the Earth's and other planets' interiors, to study the earthquake sources, and to evaluate the strong ground motions due to earthquakes. Devices with high computing performance are necessary because in large scale simulations more than one billion grid points are required. GPU (Graphics Processing Unit) is a remarkable device for its many-core architecture with more than one hundred processing units, and its high memory bandwidth. Now GPU delivers extremely high computing performance (more than one tera-flops in single-precision arithmetic) at a reduced power and cost compared to conventional CPUs. The simulation of seismic wave propagation is a memory intensive problem which involves large amount of data transfer between the memory and the arithmetic units while the number of arithmetic calculations is relatively small. Therefore the simulation should benefit from the high memory bandwidth of the GPU. Thus several approaches to adopt GPU to the simulation of seismic wave propagation have emerged (e.g., Komatitsch et al., 2009; Micikevicius, 2009; Michea and Komatitsch, 2010; Aoi et al., SSJ 2009, JPGU 2010; Okamoto et al., SSJ 2009, SACSIS 2010). In this paper we describe our approach to accelerate the simulation of seismic wave propagation based on the finite-difference method (FDM) by adopting multi-GPU computing. The finite-difference scheme we use is the three-dimensional, velocity-stress staggered grid scheme (e.g., Graves 1996; Moczo et al., 2007) for a heterogeneous medium with perfect elasticity (incorporation of anelasticity is underway). We use the GPUs (NVIDIA S1070, 1.44 GHz) installed in the TSUBAME grid cluster in the Global Scientific Information and Computing Center, Tokyo Institute of Technology and NVIDIA
NASA Astrophysics Data System (ADS)
Fedioun, Ivan; Lardjane, Nicolas; Gökalp, Iskender
2001-12-01
Some recent studies on the effects of truncation and aliasing errors on the large eddy simulation (LES) of turbulent flows via the concept of modified wave number are revisited. It is shown that all the results obtained for nonlinear partial differential equations projected and advanced in time in spectral space are not straightforwardly applicable to physical space calculations due to the nonequivalence by Fourier transform of spectral aliasing errors and numerical errors on a set of grid points in physical space. The consequences of spectral static aliasing errors on a set of grid points are analyzed in one dimension of space for quadratic products and their derivatives. The dynamical process that results through time stepping is illustrated on the Burgers equation. A method based on midpoint interpolation is proposed to remove in physical space the static grid point errors involved in divergence forms. It is compared to the sharp filtering technique on finer grids suggested by previous authors. Global performances resulting from combination of static aliasing errors and truncation errors are then discussed for all classical forms of the convective terms in Navier-Stokes equations. Some analytical results previously obtained on the relative magnitude of subgrid scale terms and numerical errors are confirmed with 3D realistic random fields. The physical space dynamical behavior and the stability of typical associations of numerical schemes and forms of nonlinear terms are finally evaluated on the LES of self-decaying homogeneous isotropic turbulence. It is shown that the convective form (if conservative properties are not strictly required) associated with highly resolving compact finite difference schemes provides the best compromise, which is nearly equivalent to dealiased pseudo-spectral calculations.
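The static aliasing of quadratic products on a set of grid points can be demonstrated in a few lines: a product mode beyond the Nyquist limit folds back onto a resolved mode, and evaluating the product on a finer grid (as in the sharp-filtering technique mentioned above) removes the error. An illustrative sketch; the grid size and mode numbers are arbitrary choices:

```python
import numpy as np

N = 16
x = 2 * np.pi * np.arange(N) / N
k1, k2 = 6, 7                          # product mode k1 + k2 = 13 > N/2
u, v = np.cos(k1 * x), np.cos(k2 * x)

# cos(k1 x) cos(k2 x) = 0.5 cos((k1+k2)x) + 0.5 cos((k1-k2)x);
# on the grid, mode 13 is unresolvable and aliases onto mode N - 13 = 3
spec = np.fft.rfft(u * v) / N          # |spec[k]| = half the cosine amplitude

# de-aliasing by evaluating the product on a 2N grid, then truncating
x2 = 2 * np.pi * np.arange(2 * N) / (2 * N)
spec2 = np.fft.rfft(np.cos(k1 * x2) * np.cos(k2 * x2)) / (2 * N)
print(np.round(np.abs(spec), 3))             # spurious energy at mode 3
print(np.round(np.abs(spec2[:N // 2 + 1]), 3))  # mode 3 now clean
```

On the coarse grid the unresolvable sum mode deposits its full amplitude at the aliased wavenumber, which is exactly the static grid-point error the paper's midpoint-interpolation method targets.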
NASA Astrophysics Data System (ADS)
Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian
2016-06-01
Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modeling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5% and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10% in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1% at periods greater than 30 s in most oceanic regions, but the error is up to 2% for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.
NASA Astrophysics Data System (ADS)
Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian
2016-08-01
Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modelling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5 per cent and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10 per cent in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1 per cent at periods greater than 30 s in most oceanic regions, but the error is up to 2 per cent for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2008-01-01
An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis results and the benchmark results were compared and good agreement could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
NASA Astrophysics Data System (ADS)
Klinger, David; Kraitl, Jens; Ewald, Hartmut
2013-02-01
Simulations of light propagation in biological tissues are a useful method in detector development for tissue spectroscopy. In practice most attention is paid to the adequate description of tissue structures and the ray trace procedure. The surrounding light source geometry, such as output window, reflector and casing, is neglected. Instead, the description of the light source is usually reduced to incident beam paths. This also applies to detectors and further surrounding tissue-connected sensor geometry. This paper discusses the influence of a complex and realistic description of the light source and detector geometry with the ray tracing software ASAP (Breault Research Organization). Additionally, simulations include the light distribution curve with respect to light propagation through the tissue model. It was observed that the implementation of the geometric elements of the light source and the detector has a direct influence on the propagation paths, average photon penetration depth, average photon path length and detected photon energy. The results show the importance of the inclusion of realistic geometric structures for various light source, tissue and sensor scenarios, especially for reflectance measurements. In reality the tissue-surrounding sensor geometry has a substantial impact on surface and subsurface reflectance and transmittance due to the fact that a certain number of photons are prevented from leaving the tissue model. Further improvement allows a determination of optimal materials and geometry for the light source and sensors to increase the number of light-tissue interactions by the incident photons.
Simulation of Lamb wave propagation for the characterization of complex structures.
Agostini, Valentina; Delsanto, Pier Paolo; Genesio, Ivan; Olivero, Dimitri
2003-04-01
Reliable numerical simulation techniques represent a very valuable tool for analysis. For this purpose we investigated the applicability of the local interaction simulation approach (LISA) to the study of the propagation of Lamb waves in complex structures. The LISA allows very fast and flexible simulations, especially in conjunction with parallel processing, and it is particularly useful for complex (heterogeneous, anisotropic, attenuative, and/or nonlinear) media. We present simulations performed on a glass fiber reinforced plate, initially undamaged and then with a hole passing through its thickness (passing-by hole). In order to validate the method, the results are compared with experimental data. Then we analyze the interaction of Lamb waves with notches, delaminations, and complex structures. In the first case the discontinuity due to a notch generates mode conversion, which may be used to predict the defect shape and size. In the case of a single delamination, the most striking "signature" is a time-shift delay, which may be observed in the temporal evolution of the signal recorded by a receiver. We also present some results obtained on a geometrically complex structure. Due to the inherent discontinuities, a wealth of propagation mechanisms are observed, which can be exploited for the purpose of quantitative nondestructive evaluation (NDE).
The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation over Oceans
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Suarez, Max J.; Bacmeister, Julio T.; Chen, Baode; Takacs, Lawrence L.
2006-01-01
This study provides explanations for some of the experimental findings of Chao (2000) and Chao and Chen (2001) concerning the mechanisms responsible for the ITCZ in an aqua-planet model. These explanations are then applied to explain the origin of some of the systematic errors in the GCM simulation of ITCZ precipitation over oceans. The ITCZ systematic errors are highly sensitive to model physics and by extension model horizontal resolution. The findings in this study along with those of Chao (2000) and Chao and Chen (2001, 2004) contribute to building a theoretical foundation for ITCZ study. A few possible methods of alleviating the systematic errors in the GCM simulation of the ITCZ are discussed. This study uses a recent version of the Global Modeling and Assimilation Office's Goddard Earth Observing System (GEOS-5) GCM.
Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading
NASA Technical Reports Server (NTRS)
Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.
1998-01-01
The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
Simulation of Crack Propagation in Engine Rotating Components Under Variable Amplitude Loading
NASA Technical Reports Server (NTRS)
Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.
1999-01-01
The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
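The mission-mix sampling described above (a random mission sequence with Poisson-distributed repeat counts) can be sketched as follows; the mission labels, mean occurrence rate, and helper name are illustrative, not taken from the report:

```python
import math
import random

def sample_mission_history(missions, mean_repeats, n_blocks, rng=random):
    """Build one random mission-mix history: draw missions in a random
    sequence and repeat each a Poisson-distributed number of times with
    the given mean occurrence rate."""
    history = []
    threshold = math.exp(-mean_repeats)
    for _ in range(n_blocks):
        mission = rng.choice(missions)
        # Poisson sample via Knuth's multiplication method
        k, p = 0, rng.random()
        while p > threshold:
            k += 1
            p *= rng.random()
        history.extend([mission] * k)
    return history

history = sample_mission_history(["training", "transport", "combat"], 3.0, 50)
```

Repeating this sampling many times yields the multiple random realizations from which the cumulative distribution of crack propagation lives is built.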
An Atomistic Simulation of Crack Propagation in a Nickel Single Crystal
NASA Technical Reports Server (NTRS)
Karimi, Majid
2002-01-01
The main objective of this paper is to determine mechanisms of crack propagation in a nickel single crystal. The motivation for selecting nickel as a case study is that we believe its physical properties are very close to those of nickel-base superalloys. We aim to identify some generic trends that lead a single-crystalline material to failure. We believe that the results obtained here would be of interest to experimentalists in guiding them to a more optimized experimental strategy. Dynamic crack propagation experiments are very difficult to perform. We are partially motivated to fill the gap by generating simulation results in lieu of experimental ones for cases where the experiment cannot be done or the data are not available.
Numerical simulations of large earthquakes: Dynamic rupture propagation on heterogeneous faults
Harris, R.A.
2004-01-01
Our current conceptions of earthquake rupture dynamics, especially for large earthquakes, require knowledge of the geometry of the faults involved in the rupture, the material properties of the rocks surrounding the faults, the initial state of stress on the faults, and a constitutive formulation that determines when the faults can slip. In numerical simulations each of these factors appears to play a significant role in rupture propagation, at the kilometer length scale. Observational evidence of the earth indicates that at least the first three of the elements, geometry, material, and stress, can vary over many scale dimensions. Future research on earthquake rupture dynamics needs to consider at which length scales these features are significant in affecting rupture propagation. © Birkhäuser Verlag, Basel, 2004.
Simulation of the trans-oceanic tsunami propagation due to the 1883 Krakatau volcanic eruption
NASA Astrophysics Data System (ADS)
Choi, B. H.; Pelinovsky, E.; Kim, K. O.; Lee, J. S.
The 1883 Krakatau volcanic eruption generated a destructive tsunami higher than 40 m on the Indonesian coast, where more than 36 000 lives were lost. Sea level oscillations related to this event have been reported at significant distances from the source in the Indian, Atlantic and Pacific Oceans. Evidence of many manifestations of the Krakatau tsunami was a subject of intense discussion, and it was suggested that some of them were not related to the direct propagation of the tsunami waves from the Krakatau volcanic eruption. The present paper analyzes the hydrodynamic part of the Krakatau event in detail. The worldwide propagation of the tsunami waves generated by the Krakatau volcanic eruption is studied numerically using two conventional models: the ray tracing method and a two-dimensional linear shallow-water model. The results of the numerical simulations are compared with available data of the tsunami registration.
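The linear shallow-water model used for such trans-oceanic propagation studies can be illustrated with a minimal 1-D staggered-grid scheme; the depth, grid spacing, and initial hump below are illustrative values, not the Krakatau source:

```python
import math

# 1-D linear shallow-water propagation on a staggered grid,
# the kind of scheme used for long-wave (tsunami) modelling.
g, h = 9.81, 4000.0            # gravity [m/s^2], uniform ocean depth [m]
N, dx = 400, 10000.0           # 4000 km basin, 10 km cells
c = math.sqrt(g * h)           # long-wave speed, about 198 m/s
dt = 0.5 * dx / c              # CFL-stable time step (Courant number 0.5)

# initial sea-surface hump at mid-basin; velocities live on cell faces
eta = [math.exp(-(((i - 200) * dx) / 8e4) ** 2) for i in range(N)]
u = [0.0] * (N + 1)

for _ in range(150):           # the hump splits into two waves moving at c
    for i in range(1, N):      # momentum: du/dt = -g * d(eta)/dx
        u[i] -= dt * g * (eta[i] - eta[i - 1]) / dx
    for i in range(N):         # continuity: d(eta)/dt = -h * du/dx
        eta[i] -= dt * h * (u[i + 1] - u[i]) / dx
```

After 150 steps the two half-amplitude waves have each traveled 75 cells from the source, showing why deep-ocean tsunami arrival times are governed by the simple speed sqrt(g·h).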
Frequency-domain bridging multiscale method for wave propagation simulations in damaged structures
NASA Astrophysics Data System (ADS)
Casadei, F.; Ruzzene, M.
2010-03-01
Efficient numerical models are essential for the simulation of the interaction of propagating waves with localized defects. Classical finite elements may be computationally time-consuming, especially when detailed discretizations are needed around damage regions. A multi-scale approach is proposed here to bridge a fine-scale mesh defined on a limited region around the defect and a coarse-scale discretization of the entire domain. This "bridging" method is formulated in the frequency domain in order to further reduce the computational cost and provide a general framework valid for different types of structures. Numerical results presented for propagating elastic waves in 1D and 2D damaged waveguides illustrate the proposed technique and its advantages.
3D dynamic simulation of crack propagation in extracorporeal shock wave lithotripsy
NASA Astrophysics Data System (ADS)
Wijerathne, M. L. L.; Hori, Muneo; Sakaguchi, Hide; Oguni, Kenji
2010-06-01
Some experimental observations of shock wave lithotripsy (SWL), including 3D dynamic crack propagation, are simulated with the aim of reproducing the fragmentation of kidney stones by SWL. Extracorporeal shock wave lithotripsy (ESWL) fragments kidney stones by focusing an ultrasonic pressure pulse onto the stones. 3D models with fine discretization are used to accurately capture the high-amplitude shear shock waves. For solving the resulting large-scale dynamic crack propagation problem, PDS-FEM is used; it provides numerically efficient failure treatments. With a distributed-memory parallel code of PDS-FEM, experimentally observed 3D photoelastic images of transient stress waves and crack patterns in cylindrical samples are successfully reproduced. The numerical crack patterns are in good quantitative agreement with the experimental ones. The results show that the high-amplitude shear waves induced in the solid by the lithotripter-generated shock wave play a dominant role in stone fragmentation.
Simulation of quasi-static hydraulic fracture propagation in porous media with XFEM
NASA Astrophysics Data System (ADS)
Juan-Lien Ramirez, Alina; Neuweiler, Insa; Löhnert, Stefan
2015-04-01
Hydraulic fracturing is the injection of a fracking fluid at high pressure into the underground. Its goal is to create and expand fracture networks to increase the rock permeability. It is a technique used, for example, for oil and gas recovery and for geothermal energy extraction, since higher rock permeability improves production. Many physical processes take place during fracking: rock deformation and fluid flow within the fractures, as well as into and through the porous rock. All these processes are strongly coupled, which makes their numerical simulation rather challenging. We present a 2D numerical model that simulates the quasi-static hydraulic propagation of an embedded fracture in a poroelastic, fully saturated material. Fluid flow within the porous rock is described by Darcy's law, and the flow within the fracture is approximated by a parallel-plate model. Additionally, the effect of leak-off is taken into consideration. The solid component of the porous medium is assumed to be linear elastic, and the propagation criteria are given by the energy release rate and the stress intensity factors [1]. The numerical method used for spatial discretization is the eXtended Finite Element Method (XFEM) [2]. It is based on the standard Finite Element Method but introduces additional degrees of freedom and enrichment functions to describe discontinuities locally in a system. Through them, the geometry of the discontinuity (e.g. a fracture) becomes independent of the mesh, allowing it to move freely through the domain without a mesh-adapting step. With this numerical model we are able to simulate hydraulic fracture propagation with different initial fracture geometries and material parameters. Results from these simulations will also be presented. References [1] D. Gross and T. Seelig. Fracture Mechanics with an Introduction to Micromechanics. Springer, 2nd edition, (2011) [2] T. Belytschko and T. Black. Elastic crack growth in finite elements with minimal
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; Touma, Danielle; Ruby Leung, L.
2016-09-19
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has been no benchmark effort to decipher the origin of the undesired yet virtually invariable failure of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of the seasonal precipitation distribution, and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and in atmospheric latent heating over the slopes of the Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver of the errors exhibited over South Asia. Ultimately, these results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs, and the importance of land-atmosphere interactions in the development and maintenance of SAM.
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
NASA Astrophysics Data System (ADS)
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; Touma, Danielle; Ruby Leung, L.
2016-09-01
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has been no benchmark effort to decipher the origin of the undesired yet virtually invariable failure of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of the seasonal precipitation distribution, and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and in atmospheric latent heating over the slopes of the Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver of the errors exhibited over South Asia. These results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs, and the importance of land-atmosphere interactions in the development and maintenance of SAM.
Wave propagation simulation in normal and infarcted myocardium: computational and modelling issues.
Maglaveras, N; Van Capelle, F J; De Bakker, J M
1998-01-01
Simulation of propagating action potentials (PAP) in normal and abnormal myocardium is used to understand the mechanisms responsible for eliciting dangerous arrhythmias. One- and two-dimensional models dealing with PAP properties are reviewed in this paper from both computational and mathematical aspects. These models are used to link theoretical and experimental results. The discontinuous nature of the PAP is demonstrated through the combination of experimental and theoretically derived results. In particular, it can be shown that for increased intracellular coupling resistance the PAP upstroke phase properties (Vmax, dV/dtmax and tau_foot) change considerably, and in some cases non-monotonically, with increased coupling resistance. It is shown that tau_foot is a parameter that is very sensitive to the cell's distance from the stimulus site, the stimulus strength and the coupling resistance. In particular, it can be shown that in a one-dimensional structure the tau_foot value can increase dramatically for lower coupling resistance values near the stimulus site and can subsequently be reduced at distances larger than five resting length constants from the stimulus site. The tau_foot variability is reduced with increased coupling resistance, rendering the lower coupling resistance structures, under abnormal excitation sequences, more vulnerable to conduction block and arrhythmias. Using the theory of discontinuous propagation of the PAP in the myocardium, it is demonstrated that for specific abnormal situations, such as infarcted tissue, one- and two-dimensional models can reliably simulate propagation characteristics and explain complex phenomena such as propagation at bifurcation sites and mechanisms of block and re-entry. In conclusion, it is shown that applied mathematics and informatics can help elucidate electrophysiologically complex mechanisms such as arrhythmias and conduction disturbances in the myocardium.
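As a generic illustration of simulated action-potential propagation along a one-dimensional fiber, the sketch below uses the FitzHugh-Nagumo caricature rather than the authors' ionic membrane model; the diffusion coefficient D stands in for intercellular coupling (lower coupling resistance corresponds to larger D), and all parameter values are illustrative assumptions:

```python
import numpy as np

def cable_step(v, w, dt, dx, D=1.0, a=0.1, eps=0.01):
    """One explicit step of a 1-D FitzHugh-Nagumo cable:
    dv/dt = D v_xx + v(1-v)(v-a) - w,   dw/dt = eps (v - 0.5 w).
    The Laplacian is only applied to interior nodes for simplicity."""
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx ** 2
    v_new = v + dt * (D * lap + v * (1.0 - v) * (v - a) - w)
    w_new = w + dt * eps * (v - 0.5 * w)
    return v_new, w_new

n = 400
v, w = np.zeros(n), np.zeros(n)
v[:10] = 1.0                          # suprathreshold stimulus at the left end
for _ in range(4000):                 # integrate to t = 200 (dt = 0.05)
    v, w = cable_step(v, w, dt=0.05, dx=0.5)
```

A pulse detaches from the stimulated end and travels rightward; reducing D (i.e. increasing coupling resistance) slows the front, which is the qualitative behavior the review attributes to discontinuous propagation.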
NASA Astrophysics Data System (ADS)
Lu, S.; Lu, Q.; Lin, Y.; Wang, X.; Ge, Y.; Wang, R.; Zhou, M.; Fu, H.; Huang, C.; Wu, M.; Wang, S.
2015-12-01
Dipolarization fronts (DFs) as earthward propagating flux ropes (FRs) in the Earth's magnetotail are presented and investigated with a three-dimensional (3-D) global hybrid simulation for the first time. In the simulation, several small-scale earthward propagating FRs are found to be formed by multiple X-line reconnection in the near-tail. During their earthward propagation, the magnetic field Bz of the FRs becomes highly asymmetric due to the imbalance of the reconnection rates between the multiple X-lines. At the later stage, when the FRs approach the near-Earth dipole-like region, the anti-reconnection between the southward/negative Bz of the FRs and the northward geomagnetic field leads to the erosion of the southward magnetic flux of the FRs, which further aggravates the Bz asymmetry. Eventually, the FRs merge into the near-Earth region through the anti-reconnection. These earthward propagating FRs can fully reproduce the observational features of the DFs, e.g., a sharp enhancement of Bz preceded by a smaller amplitude Bz dip, an earthward flow enhancement, the presence of the electric field components in the normal and dawn-dusk directions, and ion energization. Our results show that the earthward propagating FRs can be used to explain the DFs observed in the magnetotail. The thickness of the DFs is on the order of several ion inertial lengths, and the electric field normal to the front is found to be dominated by the Hall physics. During the earthward propagation from the near-tail to the near-Earth region, the speed of the FR/DFs increases from ~150 km/s to ~1000 km/s. The FR/DFs can be tilted in the GSM xy plane with respect to the y (dawn-dusk) axis and only extend several R_E in this direction. Moreover, the structure and evolution of the FR/DFs are non-uniform in the dawn-dusk direction, which indicates that the DFs are essentially 3-D.
Elias, John J.; Kelly, Michael J.; Smith, Kathryn E.; Gall, Kenneth A.; Farr, Jack
2016-01-01
Background: Medial patellofemoral ligament (MPFL) reconstruction is performed to prevent recurrent instability, but errors in femoral fixation can elevate graft tension. Hypothesis: Errors related to femoral fixation will overconstrain the patella and increase medial patellofemoral pressures. Study Design: Controlled laboratory study. Methods: Five knees with patellar instability were represented with computational models. Kinematics during knee extension were characterized from computational reconstruction of motion performed within a dynamic computed tomography (CT) scanner. Multibody dynamic simulation of knee extension, with discrete element analysis used to quantify contact pressures, was performed for the preoperative condition and after MPFL reconstruction. A standard femoral attachment and graft resting length were set for each knee. The resting length was decreased by 2 mm, and the femoral attachment was shifted 5 mm posteriorly. The simulated errors were also combined. Root-mean-square errors were quantified for the comparison of preoperative patellar lateral shift and tilt between computationally reconstructed motion and dynamic simulation. Simulation output was compared between the preoperative and MPFL reconstruction conditions with repeated-measures Friedman tests and Dunnett comparisons against a control, which was the standard MPFL condition, with statistical significance set at P < .05. Results: Root-mean-square errors for simulated patellar tilt and shift were 5.8° and 3.3 mm, respectively. Patellar lateral tracking for the preoperative condition was significantly larger near full extension compared with the standard MPFL reconstruction (mean differences of 8 mm and 13° for shift and tilt, respectively, at 0°), and lateral tracking was significantly smaller for a posterior femoral attachment (mean differences of 3 mm and 4° for shift and tilt, respectively, at 0°). The maximum medial pressure was also larger for the short graft with a
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
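The matrix-multiplication check described above can be illustrated with a row-checksum scheme in the spirit of Huang-Abraham ABFT. This is a hedged sketch, not the flight code: the function name is ours, and the injected fault simply adds an offset to one element of the product:

```python
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    """Matrix multiply with a row-checksum consistency test, in the spirit
    of Huang-Abraham ABFT (an illustrative sketch, not the flight code)."""
    C = A @ B
    ref = A @ B.sum(axis=1)              # row checksums via an independent path
    bad = np.abs(C.sum(axis=1) - ref) > tol * (1.0 + np.abs(ref))
    return C, bad

rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
C, bad = abft_matmul(A, B)               # clean multiply: nothing flagged
C[3, 5] += 1.0                           # inject a fault, mimicking a bit flip
ref = A @ B.sum(axis=1)
flagged = np.abs(C.sum(axis=1) - ref) > 1e-8 * (1.0 + np.abs(ref))
```

Because the checksum is computed along a path independent of the protected product, a single corrupted element disturbs exactly one row checksum, which localizes the error at negligible extra cost.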
Global particle simulation of lower hybrid wave propagation and mode conversion in tokamaks
Bao, J.; Lin, Z.; Kuley, A.
2015-12-10
Particle-in-cell simulation of lower hybrid (LH) waves in core plasmas is presented with a realistic electron-to-ion mass ratio in toroidal geometry. Because LH waves mainly interact with electrons to drive the current, ion dynamics are described by cold fluid equations for simplicity, while electron dynamics are described by drift kinetic equations. This model can be considered a new method for studying LH waves in tokamak plasmas, with advantages for nonlinear simulations. The mode conversion between slow and fast waves is observed in the simulation when the accessibility condition is not satisfied, which is consistent with theory. Poloidal spectrum upshift and broadening effects are observed during LH wave propagation in the toroidal geometry.
Simulation of neutrino and charged particle production and propagation in the atmosphere
Derome, L.
2006-11-15
A precise evaluation of secondary particle production and propagation in the atmosphere is very important for atmospheric neutrino oscillation studies. The issue is addressed with the extension of a previously developed full 3-dimensional Monte Carlo simulation of particle generation and transport in the atmosphere to compute the flux of secondary protons, muons, and neutrinos. Recent balloon-borne experiments have performed a set of accurate flux measurements for different particle species at different altitudes in the atmosphere, which can be used to test the calculations of atmospheric neutrino production and constrain the underlying hadronic models. The simulation results are reported and compared with the latest flux measurements. It is shown that the level of precision reached by these experiments could be used to constrain the nuclear models used in the simulation. The implications of these results for the atmospheric neutrino flux calculation are discussed.
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
Canestrari, Niccolo; Chubar, Oleg; Reininger, Ruben
2014-09-01
X-ray beamlines in modern synchrotron radiation sources make extensive use of grazing-incidence reflective optics, in particular Kirkpatrick-Baez elliptical mirror systems. These systems can focus the incoming X-rays down to nanometer-scale spot sizes while maintaining relatively large acceptance apertures and high flux in the focused radiation spots. In low-emittance storage rings and in free-electron lasers such systems are used with partially or even nearly fully coherent X-ray beams and often target diffraction-limited resolution. Therefore, their accurate simulation and modeling has to be performed within the framework of wave optics. Here the implementation and benchmarking of a wave-optics method for the simulation of grazing-incidence mirrors based on the local stationary-phase approximation or, in other words, the local propagation of the radiation electric field along geometrical rays, is described. The proposed method is CPU-efficient and fully compatible with the numerical methods of Fourier optics. It has been implemented in the Synchrotron Radiation Workshop (SRW) computer code and extensively tested against the geometrical ray-tracing code SHADOW. The test simulations have been performed for cases without and with diffraction at mirror apertures, including cases where the grazing-incidence mirrors can hardly be approximated by ideal lenses. Good agreement between the SRW and SHADOW simulation results is observed in the cases without diffraction. The differences between the simulation results obtained by the two codes in diffraction-dominated cases for illumination with fully or partially coherent radiation are analyzed and interpreted. The application of the new method for the simulation of wavefront propagation through a high-resolution X-ray microspectroscopy beamline at the National Synchrotron Light Source II (Brookhaven National Laboratory, USA) is demonstrated.
NASA Astrophysics Data System (ADS)
Hu, Tao; Ma, Li
2010-09-01
An internal wave observation experiment was performed south of Hai-Nan Island in the South China Sea in July 2004. Three vertical thermistor arrays were moored to estimate the internal wave propagation direction and velocity. A nonlinear internal wave packet was observed in this experiment. It appeared at flood tide in the early morning hours. Computation indicated that the nonlinear internal wave packet's velocity was 0.54 m/s and its propagation direction was northwest. From its propagation direction, we estimate that the nonlinear internal wave packet was generated near the Xi-Sha Islands. The dnoidal model of the KdV (Korteweg-de Vries) equation was used to simulate the waveform of this nonlinear internal wave. Measured data show that the crest interval of the nonlinear internal waves shortened as they propagated. In the last section of this paper we simulate a nonlinear internal wave packet's effect on sound propagation and analyze the mode coupling caused by the nonlinear internal wave packet.
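The dnoidal (cnoidal) solutions of the KdV equation reduce, in the solitary-wave limit, to the familiar sech² profile. The sketch below uses that limit with surface-wave scalings as a stand-in; the amplitude, depth, and function name are our assumptions, not the authors' internal-wave parameters:

```python
import numpy as np

def kdv_soliton(x, t, a, h, g=9.81):
    """Solitary-wave (sech^2) limit of the KdV cnoidal/dnoidal family,
    written with surface-wave scalings:
    eta = a sech^2[k (x - c t)],  k = sqrt(3a/(4h^3)),  c = sqrt(gh)(1 + a/(2h))."""
    c = np.sqrt(g * h) * (1.0 + a / (2.0 * h))
    k = np.sqrt(3.0 * a / (4.0 * h ** 3))
    return a / np.cosh(k * (x - c * t)) ** 2, c

x = np.linspace(-5000.0, 5000.0, 2001)           # 10 km transect, 5 m spacing
eta0, c = kdv_soliton(x, 0.0, a=2.0, h=100.0)    # 2 m amplitude, 100 m depth
eta1, _ = kdv_soliton(x, 60.0, a=2.0, h=100.0)   # same shape, advected by c*t
```

In this family, larger-amplitude waves are both narrower and faster, a nonlinear steepening effect qualitatively consistent with the observed shortening of crest intervals during propagation.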
NASA Astrophysics Data System (ADS)
Hackstein, S.; Vazza, F.; Brüggen, M.; Sigl, G.; Dundovic, A.
2016-11-01
We use the CRPROPA code to simulate the propagation of ultrahigh energy cosmic rays (with energy ≥ 10^18 eV and pure proton composition) through extragalactic magnetic fields that have been simulated with the cosmological ENZO code. We test both primordial and astrophysical magnetogenesis scenarios in order to investigate the impact of different magnetic field strengths in clusters, filaments and voids on the deflection of cosmic rays propagating across cosmological distances. We also study the effect of different source distributions of cosmic rays around simulated Milky Way-like observers. Our analysis shows that the arrival spectra and anisotropy of events are rather insensitive to the distribution of extragalactic magnetic fields, while they are more affected by the clustering of sources within a ~50 Mpc distance of the observers. Finally, we find that in order to reproduce the observed degree of isotropy of cosmic rays at ~EeV energies, the average magnetic field in cosmic voids must be ~0.1 nG, providing limits on the strength of primordial seed fields.
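A back-of-envelope scale behind such void-field limits is the relativistic Larmor radius r_L = E/(qBc): when r_L is comparable to or smaller than the distance traversed, deflections accumulate and isotropize arrival directions. The numbers below are illustrative, not values quoted in the paper:

```python
import math

E_eV = 1.0e18                      # cosmic-ray proton energy (eV)
B_G = 0.1e-9                       # void magnetic field, 0.1 nG, in gauss
e = 1.602176634e-19                # elementary charge (C)
c = 2.99792458e8                   # speed of light (m/s)
MPC = 3.0857e22                    # metres per megaparsec

E_J = E_eV * e                     # energy in joules
B_T = B_G * 1.0e-4                 # gauss -> tesla
r_larmor = E_J / (e * B_T * c)     # relativistic Larmor radius, r = E/(qBc)
r_mpc = r_larmor / MPC             # ~11 Mpc for these numbers
```

At EeV energies in a 0.1 nG void field the Larmor radius is of order 10 Mpc, smaller than the ~50 Mpc source-clustering scale discussed above, so substantial bending of trajectories is plausible at these energies.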
Fast acceleration of 2D wave propagation simulations using modern computational accelerators.
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x over the sequential implementation and 30x over a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieving parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
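Assuming the decision variable is Gaussian with the measured mean and standard deviation, the BER follows from the Gaussian tail probability. A minimal sketch (threshold at zero, single decision variable; the function name is ours):

```python
import math

def ber_from_stats(mean, std):
    """Bit error rate for a binary decision with Gaussian noise: the chance
    that noise of standard deviation `std` crosses a decision threshold a
    distance `mean` away, BER = Q(mean/std) = erfc(mean/(std*sqrt(2)))/2."""
    return 0.5 * math.erfc(mean / (std * math.sqrt(2.0)))

ber = ber_from_stats(1.0, 0.25)    # a 4-sigma margin gives a BER near 3e-5
```

The same expression explains why a modest reduction in noise (a larger mean-to-sigma ratio extracted from the S-parameters) produces an exponentially lower BER.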
NASA Astrophysics Data System (ADS)
Hong, Y.; Moradkhani, H.; Hsu, K.; Sorooshian, S.
2004-12-01
A general framework to quantify the error associated with satellite-based precipitation estimates at various spatial and temporal scales is presented. In addition, the impact of using such precipitation data as input to a hydrologic rainfall-runoff model is examined. The uncertainty in the satellite-based precipitation estimates, as a function of space (A), time (T), sampling frequency (Δt), and the spatio-temporal average of precipitation estimates (R), is determined using two years of high-resolution PERSIANN-CCS* precipitation data over the southwestern U.S. Parameter sensitivity analysis is conducted on 5° x 5° latitude-longitude grids for 16 selected areas. The eventual goal of this latter step is to obtain a generalization of the error function. The influence of spatio-temporal precipitation errors on hydrologic response is examined using a Monte Carlo approach. In this approach, an ensemble of precipitation data is generated as forcing to the hydrologic model, and the resulting uncertainty in the forecasted streamflow is estimated. The applicability and usefulness of this procedure is demonstrated for the Leaf River Basin, located north of Collins, Mississippi. It is shown that the current strategy offers a more realistic uncertainty assessment of precipitation estimates and the corresponding streamflow forecasts. *Hong, Y., K. Hsu, S. Sorooshian, and X. Gao, 2004: Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network--Cloud Classification System, Journal of Applied Meteorology, in press.
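The Monte Carlo step can be sketched as follows: perturb the precipitation input with a multiplicative (here lognormal, mean-one) error model, drive a toy rainfall-runoff model with each member, and read the streamflow uncertainty off the ensemble spread. The error model and the linear-reservoir model below are illustrative assumptions, not the PERSIANN-CCS error structure or the hydrologic model used in the study:

```python
import numpy as np

def linear_reservoir(precip, k=0.1):
    """Toy rainfall-runoff model: each step the storage gains the rain and
    releases a fraction k as streamflow (a stand-in hydrologic model)."""
    s, q = 0.0, []
    for p in precip:
        s += p
        out = k * s
        s -= out
        q.append(out)
    return np.asarray(q)

rng = np.random.default_rng(42)
p_obs = rng.gamma(0.5, 4.0, size=100)       # synthetic daily rain (mm)
sigma = 0.3                                 # assumed relative error level
ensemble = np.array([
    linear_reservoir(p_obs * rng.lognormal(-sigma**2 / 2.0, sigma, size=100))
    for _ in range(200)
])                                          # 200 perturbed-forcing runs
spread = ensemble.std(axis=0)               # streamflow uncertainty per day
```

The mean-one lognormal perturbation keeps the forcing unbiased on average, so the ensemble spread isolates how precipitation error alone propagates into the streamflow forecast.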
Simulation of charge exchange plasma propagation near an ion thruster propelled spacecraft
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Kaufman, H. R.; Winder, D. R.
1981-01-01
A model describing the charge exchange plasma and its propagation is discussed, along with a computer code based on the model. The geometry of an idealized spacecraft having an ion thruster is outlined, with attention given to the assumptions used in modeling the ion beam. Also presented is the distribution function describing charge exchange production. The barometric equation is used in relating the variation in plasma potential to the variation in plasma density. The numerical methods and approximations employed in the calculations are discussed, and comparisons are made between the computer simulation and experimental data. An analytical solution of a simple configuration is also used in verifying the model.
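The barometric relation mentioned above ties the plasma-potential variation to the density variation of an isothermal electron gas. A minimal sketch (function name and units are our assumptions):

```python
import math

def barometric_potential(n, n0, te_ev):
    """Barometric relation for isothermal electrons: the plasma potential
    tracks the log of density,  phi - phi0 = Te ln(n/n0); with Te in eV the
    result is directly in volts."""
    return te_ev * math.log(n / n0)

dphi = barometric_potential(0.01, 1.0, 2.0)   # 100x density drop at Te = 2 eV
```

Each e-fold drop in charge-exchange plasma density lowers the potential by one electron temperature, which is why the potential falls only gently even as the plume density decays steeply away from the thruster.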
Source altitude for experiments to simulate space-to-earth laser propagation.
NASA Technical Reports Server (NTRS)
Minott, P. O.
1973-01-01
The bias in scintillation measurements caused by the proximity of a spherical-wave source to the turbulence region of the atmosphere is predicted, and the laser-source altitude required for meaningful experiments simulating space-to-earth laser propagation is estimated. It is concluded that the source should be located at two or more times the maximum altitude of the tropopause to ensure that all measurements are not biased by more than 25%. Thus the vehicle used for experiments of this type should be capable of reaching a minimum altitude of 32 km.
NASA Astrophysics Data System (ADS)
Achour, Maha
2002-12-01
One of the biggest challenges facing Free-Space Optics deployment is proper understanding of optical signal propagation in different atmospheric conditions. In an earlier study by the author (30), attenuation by rain was analyzed and successfully modeled for infrared signal transmission. In this paper, we focus on attenuation due to scattering by haze, fog and low-cloud droplets using the original Mie scattering theory. Relying on published experimental results on infrared propagation, electromagnetic wave scattering by spherical droplets, atmospheric physics and thermodynamics, UlmTech developed a computer-based platform, Simulight, which simulates infrared signal (750 nm-12 μm) propagation in haze, fog, low clouds, rain and clear weather. Optical signals are scattered by fog droplets during transmission in the forward direction, preventing the receiver from detecting the minimum required power. Weather databases describe foggy conditions by measuring the visibility parameter, which is, in general, defined as the maximum distance at which the visible 550 nm signal can distinguish a target object from its background at 2% contrast. Extrapolating optical signal attenuation beyond 550 nm using only visibility is not as straightforward as implied by the Kruse equation, which is unfortunately widely used. We conclude that it is essential to understand atmospheric droplet sizes and their distributions based on measured attenuations to effectively estimate infrared attenuation. We focus on three types of popular fogs: Evolving, Stable and Selective.
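For reference, the widely used Kruse extrapolation that the paper argues against can be written down in a few lines (empirical coefficients as commonly quoted; treat this as illustrative, not as Simulight's model):

```python
def kruse_attenuation(visibility_km, wavelength_nm):
    """Kruse estimate of atmospheric attenuation (dB/km) from visibility,
    alpha = (3.91/V) * (lambda/550 nm)^(-q), with the size-distribution
    exponent q chosen by visibility class. The paper argues this
    visibility-only extrapolation is unreliable in fog."""
    v = visibility_km
    if v > 50.0:
        q = 1.6                            # very clear air
    elif v > 6.0:
        q = 1.3                            # average visibility
    else:
        q = 0.585 * v ** (1.0 / 3.0)       # haze / low visibility
    return (3.91 / v) * (wavelength_nm / 550.0) ** (-q)
```

Because q > 0, Kruse always predicts less attenuation at longer infrared wavelengths than at 550 nm; the paper's point is that in fog the droplet-size distribution, not visibility alone, controls whether that advantage actually exists.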
Prabhakar, Ramachandran; Rath, Goura K.; Julka, Pramod K.; Ganesh, Tharmar; Haresh, K.P.; Joshi, Rakesh C.; Senthamizhchelvan, S.; Thulkar, Sanjay; Pant, G.S.
2008-04-01
Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied, and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structures and acceptable tumor control probability is determined. Twelve patients were selected for this study after breast conservation surgery, wherein 8 patients were right-sided and 4 were left-sided breast. Tangential fields were placed on the 3-dimensional computed tomography (3D-CT) dataset by the isocentric technique, and the dose to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted by 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for PTV according to ICRU 50 guidelines: mean doses to PTV, IL, CLL, heart, CLB, liver, and percentage of lung volume that received a dose of 20 Gy or more (V20); percentage of heart volume that received a dose of 30 Gy or more (V30); and volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of different isocenter shifts in all 3 directions showed that the isocentric shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, followed by the lateral direction. The setup error in isocenter should be strictly kept below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.
Stochastic simulations suggest that HIV-1 survives close to its error threshold.
Tripathi, Kushal; Balagam, Rajesh; Vishnoi, Nisheeth K; Dixit, Narendra M
2012-01-01
The use of mutagenic drugs to drive HIV-1 past its error threshold presents a novel intervention strategy, suggested by the quasispecies theory, that may be less susceptible to failure via viral mutation-induced emergence of drug resistance than current strategies. The error threshold of HIV-1, μc, however, is not known. Application of the quasispecies theory to determine μc poses significant challenges: whereas the quasispecies theory considers the asexual reproduction of an infinitely large population of haploid individuals, HIV-1 is diploid, undergoes recombination, and is estimated to have a small effective population size in vivo. We performed population genetics-based stochastic simulations of the within-host evolution of HIV-1 and estimated the structure of the HIV-1 quasispecies and μc. We found that with small mutation rates, the quasispecies was dominated by genomes with few mutations. Upon increasing the mutation rate, a sharp error catastrophe occurred where the quasispecies became delocalized in sequence space. Using parameter values that quantitatively captured data of viral diversification in HIV-1 patients, we estimated μc to be 7 x 10(exp -5) to 1 x 10(exp -4) substitutions/site/replication, approximately 2-6-fold higher than the natural mutation rate of HIV-1, suggesting that HIV-1 survives close to its error threshold and may be readily susceptible to mutagenic drugs. The latter estimate was weakly dependent on the within-host effective population size of HIV-1. With large population sizes and in the absence of recombination, our simulations converged to the quasispecies theory, bridging the gap between quasispecies theory and population genetics-based approaches to describing HIV-1 evolution. Further, μc increased with the recombination rate, rendering HIV-1 less susceptible to error catastrophe and thus elucidating an added benefit of recombination to HIV-1. Our estimate of μc may serve as a quantitative guideline for the use of mutagenic drugs against
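A stripped-down version of such a stochastic simulation can reproduce the qualitative error-threshold behavior described above. The sketch below is a haploid Wright-Fisher model with irreversible deleterious mutation and invented parameter values (genome length, selection coefficient, population size); it is not the authors' model, which additionally treats diploidy and recombination:

```python
import numpy as np

def master_frequency(mu, L=50, N=1000, s=0.1, generations=200, seed=1):
    """Toy Wright-Fisher simulation of an error threshold.

    Each genome carries k deleterious mutations; fitness = (1 - s)**k.
    New mutations per genome per generation are Poisson with mean mu*L.
    Returns the final frequency of mutation-free ('master') genomes.
    """
    rng = np.random.default_rng(seed)
    k = np.zeros(N, dtype=int)                    # mutation counts per individual
    for _ in range(generations):
        w = (1.0 - s) ** k                        # relative fitness
        parents = rng.choice(N, size=N, p=w / w.sum())   # selection + drift
        k = k[parents] + rng.poisson(mu * L, size=N)     # irreversible mutation
    return np.mean(k == 0)

low, high = master_frequency(1e-4), master_frequency(5e-2)
print(low, high)   # master class persists at low mu, is lost at high mu
```

Sweeping mu between these extremes locates the threshold at which the master class is lost, the analogue of the μc estimated in the paper.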
NASA Astrophysics Data System (ADS)
Hu, C.; Liu, X.; Shi, Y.
2015-12-01
Fold-and-thrust belts and accretionary wedges develop along compressive plate boundaries, both in the hinterland and the foreland. Under long-term compressive tectonic loading, a series of ramps will initiate and propagate along the wedge. How do the ramps initiate? What are the timing and spacing intervals between the ramps? How many patterns of ramp propagation are there? These questions are fundamental to the study of ramp initiation and propagation. Previous studies have approached them with three different methods: critical Coulomb wedge theory, analogue sandbox models, and numerical simulation. In this paper, we set up a 2-D elastic-plastic finite element model with a frictional contact plane to simulate the initiation and propagation of the ramps. In this model, the material in the upper wedge is homogeneous, and the effects of gravity and long-term tectonic loading are included. The model is simple, but the simulated results are instructive: they indicate that the cohesion of the upper wedge and the dip angle of the detachment plane have strong effects on the initiation and propagation of ramps. There are three different patterns of ramp initiation and propagation for different values of the cohesion. This differs from previous analogue sandbox models and numerical simulations, in which there is usually only one pattern of ramp initiation and propagation, and is consistent with geological surveys of ramp formation in accretionary wedges. This study provides insight into the mechanisms of ramp initiation and propagation in the Tibetan Plateau and central Taiwan.
Monte Carlo Simulation Study of Local Critical Dimension Error on Mask and Wafer
NASA Astrophysics Data System (ADS)
Ahn, Byoung-sup; Park, Joon-Soo; Choi, Seong-Woon; Sohn, Jung-Min
2004-06-01
Sub-100 nm lithography has recently been realized in the IC industry. Resolution enhancement techniques (RET) and optical proximity effect correction (OPC) require more complicated mask patterns. It is therefore very important to simulate and calculate the mask error enhancement factor (MEEF) and critical dimension (CD) variations on the mask and wafer correctly with an optical simulation tool before manufacturing. However, the MEEF and CD error predicted by the in-house optical simulation tool, Topo, are larger than the experimental results. There are several possible reasons for this. One is neglect of the vector properties of light: at higher numerical aperture (NA), vector properties such as polarization should be taken into account when calculating the printed image on the wafer. Another is neglect of the local CD error caused by neighboring patterns. The second issue has been studied here using the Monte Carlo (MC) method, a commonly used statistical method. We assume that all of the contributing factors follow a normal distribution with a certain standard deviation; this assumption is sufficient for studying local CD error by the MC method. When the local CD variation on the mask for the 110 nm design rule is 3 nm at 3σ, the MC method predicts a CD variation of approximately 2.0 nm on the wafer. This result is fairly comparable with the experimental one, for which the local MEEF is about 2.7. We obtain another, global MEEF value of around 4.1 when the mask CD deviates from the target CD by ±12 nm at 3σ. This study shows that the MC method gives results close to experiment when the local MEEF is greater than 2.5.
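Under a linear MEEF model the abstract's numbers are mutually consistent: a 3 nm mask CD variation at 3σ, a local MEEF of 2.7, and an assumed 4x projection reduction give roughly 2.0 nm on the wafer. A minimal Monte Carlo sketch of that calculation (the 4x reduction ratio is our assumption, not stated in the abstract):

```python
import numpy as np

rng = np.random.default_rng(42)

MASK_3SIGMA = 3.0   # nm, local mask CD variation at 3-sigma
MEEF = 2.7          # local mask error enhancement factor
REDUCTION = 4.0     # assumed 4x projection reduction ratio

# Sample normally distributed local mask CD errors (the normality
# assumption mirrors the abstract's).
mask_err = rng.normal(0.0, MASK_3SIGMA / 3.0, size=1_000_000)

# Linear MEEF model: wafer CD error = MEEF * mask CD error / reduction.
wafer_err = MEEF * mask_err / REDUCTION

wafer_3sigma = 3.0 * wafer_err.std()
print(f"wafer CD 3-sigma ~ {wafer_3sigma:.2f} nm")   # ~2.0 nm, as in the abstract
```

In the linear case the MC result simply recovers MEEF * 3σ_mask / reduction; the value of the MC machinery comes from adding correlated neighborhood effects, as the paper does.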
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
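The Sobol' method mentioned above attributes output variance to individual inputs. A minimal sketch of the first-order pick-freeze estimator on a toy two-input "forcing error" model (the toy model and its coefficients are invented for illustration; this is not the Utah Energy Balance setup):

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """Pick-freeze (Saltelli 2010) estimator of first-order Sobol' indices.

    f maps an (n, d) array of inputs in [0, 1) to n scalar outputs.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # resample only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var  # first-order estimator
    return S

# Toy forcing-error model: a large bias term plus a smaller random-error term.
def toy_model(x):
    bias, noise = 3.0 * (x[:, 0] - 0.5), 1.0 * (x[:, 1] - 0.5)
    return bias + noise

S = sobol_first_order(toy_model, d=2)
print(S)   # bias dominates: S ~ [0.9, 0.1]
```

For an additive model the indices are just the variance shares (here 0.75/0.833 and 0.083/0.833); the estimator's value is that it needs no such decomposition for a real, nonlinear model.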
Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M
2016-01-01
Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect the molecular mechanisms of disease subtypes and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). We therefore developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated the data expected from the study of plasma from patients with lower urinary tract dysfunction with the aptamer proteomics assay SOMAscan (SomaLogic Inc, Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, and 80 in aging. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better for the simulated data than the other two methods and enabled classification with misclassification error below 5% in the simulated cohort of 100 patients based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
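The kind of in-silico experiment described can be sketched with a small numpy-only k-means on simulated two-subtype data. The dimensions below are reduced for speed, and the cohort size, informative-protein count, and effect size loosely follow the abstract but are otherwise arbitrary:

```python
import numpy as np

def kmeans(X, k, iters=50, restarts=10, seed=0):
    """Plain Lloyd's algorithm with random restarts (lowest inertia wins)."""
    rng = np.random.default_rng(seed)
    best_labels, best_inertia = None, np.inf
    for _ in range(restarts):
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(0)
        inertia = ((X - centers[labels]) ** 2).sum()
        if inertia < best_inertia:
            best_labels, best_inertia = labels, inertia
    return best_labels

rng = np.random.default_rng(1)
n, informative, noise, effect = 100, 40, 160, 1.5
truth = np.repeat([0, 1], n // 2)                  # two hypothetical disease subtypes
X = rng.normal(size=(n, informative + noise))
X[truth == 1, :informative] += effect              # shift the informative proteins only

pred = kmeans(X, k=2)
# Cluster labels are arbitrary, so score the better of the two relabelings.
err = min(np.mean(pred != truth), np.mean(pred == truth))
print(f"misclassification error = {err:.1%}")
```

Repeating this over grids of cohort size and effect size, as the paper does at full scale, yields the sample-size curves that hypothesis-testing power analysis cannot provide for clustering.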
Error-related EEG potentials generated during simulated brain-computer interaction.
Ferrez, Pierre W; del R Millan, José
2008-03-01
Brain-computer interfaces (BCIs) are prone to errors in the recognition of the subject's intent. An elegant approach to improving the accuracy of BCIs consists of a verification procedure directly based on the presence of error-related potentials (ErrP) in the electroencephalogram (EEG) recorded right after the occurrence of an error. Several studies show the presence of ErrP in typical choice reaction tasks. However, in the context of a BCI, the central question is: "Are ErrP also elicited when the error is made by the interface during the recognition of the subject's intent?" We have thus explored whether ErrP also follow a feedback indicating incorrect responses of a simulated BCI interface. Five healthy volunteer subjects participated in a new human-robot interaction experiment, the results of which seem to confirm the previously reported presence of a new kind of ErrP. However, in order to exploit these ErrP, we need to detect them in each single trial using a short window following the feedback associated with the response of the BCI. We achieved average recognition rates for correct and erroneous single trials of 83.5% and 79.2%, respectively, using a classifier built with data recorded up to three months earlier.
A background error covariance model of significant wave height employing Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Guo, Yanyou; Hou, Yijun; Zhang, Chunmei; Yang, Jie
2012-09-01
The quality of background error statistics is one of the key components for successful assimilation of observations in a numerical model. The background error covariance (BEC) of ocean waves is generally estimated under the assumption that it is stationary over a period of time and uniform over a domain. However, error statistics are in fact functions of the physical processes governing the meteorological situation and vary with the wave conditions. In this paper, we simulated the BEC of the significant wave height (SWH) employing Monte Carlo methods. An interesting result is that the BEC varies consistently with the mean wave direction (MWD). In the model domain, the BEC of the SWH decreases significantly when the MWD changes abruptly. A new BEC model of the SWH, based on the correlation between the BEC and the MWD, was then developed. A case study of regional data assimilation was performed, in which SWH observations from buoy 22001 were used to assess the SWH hindcast. The results show that the new BEC model benefits wave prediction and allows reasonable approximations of anisotropic and inhomogeneous errors.
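The core of a Monte Carlo BEC estimate is forming a sample covariance over an ensemble of perturbed fields. A minimal 1-D sketch (the domain, correlation model, and error amplitude below are invented; the paper's model additionally conditions the BEC on mean wave direction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D domain with a smoothly varying "true" SWH field (m).
x = np.linspace(0.0, 1.0, 40)
truth = 2.0 + np.sin(2 * np.pi * x)

# Monte Carlo ensemble: spatially correlated perturbations stand in
# for forecast error (exponential correlation, invented length scale).
n_ens, length_scale, err_amp = 500, 0.15, 0.3
dist = np.abs(x[:, None] - x[None, :])
corr = np.exp(-dist / length_scale)
chol = np.linalg.cholesky(corr + 1e-10 * np.eye(len(x)))
ensemble = truth + err_amp * (chol @ rng.normal(size=(len(x), n_ens))).T

# Sample background error covariance from ensemble deviations.
dev = ensemble - ensemble.mean(axis=0)
B = dev.T @ dev / (n_ens - 1)
print(B.shape, B[0, 0])   # diagonal recovers the ~0.09 m^2 error variance
```

Stratifying such ensembles by wave condition (here, by MWD) is what turns this static estimate into the flow-dependent BEC model the abstract describes.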
Lg-wave simulation in heterogeneous crusts with surface topography using screen propagators
NASA Astrophysics Data System (ADS)
Wu, Xian-Yun; Wu, Ru-Shan
2001-09-01
We develop a numerical simulation method that can efficiently model the combined effects of large-scale structural variations and small-scale heterogeneities (e.g. random media) on Lg-wave propagation at far regional distances. The approach is based on the generalized screen propagator (GSP) method, which has previously been used to simulate SH Lg waves in complex crustal waveguides. In this paper, we extend the GSP method to treat complex crustal models with irregular or rough topography by incorporating surface flattening transformation into the method. The transformation converts surface perturbations into modified volume perturbations. In this way the range-dependent boundary condition becomes a stress release boundary condition on a flat surface in the new coordinate system where the half-space GSP can be applied. To demonstrate the accuracy and efficiency of the extended GSP method, synthetic seismograms are generated for various crustal waveguides, including uniform crusts, a Gaussian hill half-space, and crustal models with mild and moderately rough surfaces. The results are compared with those generated by the exact boundary element method. It is shown that the screen method is efficient for modelling the effect of surface topography on Lg waves. The comparison of synthetic seismograms generated by the screen method and the traditional parabolic equation method shows that the screen method can handle wider-angle waves as well as rougher topography than the parabolic equation method. Finally, we apply the method to complex crustal waveguides with both small-scale heterogeneities (random media) and random rough surfaces for Lg propagation to far regional distances. The influence of random heterogeneities and rough surfaces on Lg attenuation is significant.
Geant4 Application for Simulating the Propagation of Cosmic Rays through the Earth's Magnetosphere
NASA Astrophysics Data System (ADS)
Desorgher, L.; Flueckiger, E.O.; Buetikofer, R.; Moser, M.R.
2003-07-01
We have developed a Geant4 application to simulate the propagation of cosmic rays through the Earth's magnetosphere. The application computes the motion of charged particles through advanced magnetospheric magnetic field models such as the Tsyganenko 2001 model. It allows the user to determine cosmic ray cutoff rigidities and asymptotic directions of incidence for user-defined observing positions, directions, and times. By using the new generation of Tsyganenko models, we can analyse the variation of cutoff rigidities and asymptotic directions during magnetic storms as a function of the Dst index and of the solar wind dynamic pressure. The paper describes the application, in particular its visualisation potential, and simulation results. Acknowledgments. This work was supported by the Swiss National Science Foundation, grant 20-67092.01, and by QINETIQ contract CU009-0000028872 in the frame of the ESA/ESTEC SEPTIMESS project.
Titze, Ingo R; Palaparthi, Anil; Smith, Simeon L
2014-12-01
Time-domain computer simulation of sound production in airways is a widely used tool, both for research and synthetic speech production technology. Speed of computation is generally the rationale for one-dimensional approaches to sound propagation and radiation. Transmission line and wave-reflection (scattering) algorithms are used to produce formant frequencies and bandwidths for arbitrarily shaped airways. Some benchmark graphs and tables are provided for formant frequencies and bandwidth calculations based on specific mathematical terms in the one-dimensional Navier-Stokes equation. Some rules are provided here for temporal and spatial discretization in terms of desired accuracy and stability of the solution. Kinetic losses, which have been difficult to quantify in frequency-domain simulations, are quantified here on the basis of the measurements of Scherer, Torkaman, Kucinschi, and Afjeh [(2010). J. Acoust. Soc. Am. 128(2), 828-838].
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Using the RSM together with Monte Carlo simulation (MCS) reduces the amount of calculation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, and better correlation between simulation and test is thus achieved. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
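The surrogate-plus-sampling idea can be sketched in a few lines: fit a polynomial response surface to a handful of "expensive" model runs, then run the Monte Carlo on the cheap surrogate. Everything below (the one-DOF frequency model, the parameter range and distribution) is invented for illustration and is not the paper's GARTEUR setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(k):
    """Stand-in for a costly FE analysis: natural frequency of a 1-DOF oscillator (Hz)."""
    return np.sqrt(k) / (2.0 * np.pi)

def scale(k):
    """Center and scale stiffness; improves conditioning of the quartic fit."""
    return (k - 1000.0) / 200.0

# Design of experiments over the stiffness range, then a quartic RSM fit.
k_design = np.linspace(800.0, 1200.0, 20)              # N/m (invented range)
rsm = np.poly1d(np.polyfit(scale(k_design), expensive_model(k_design), deg=4))

# Monte Carlo on the cheap surrogate instead of the full model:
# 200,000 samples cost almost nothing once the RSM is fitted.
k_samples = rng.normal(1000.0, 50.0, size=200_000)
f_mc = rsm(scale(k_samples))
print(f"mean = {f_mc.mean():.4f} Hz, std = {f_mc.std():.4f} Hz")
```

In the paper's inverse setting, an optimizer adjusts the assumed parameter mean and covariance until such surrogate-propagated statistics match the test statistics; the sketch shows only the cheap forward propagation that makes that loop affordable.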
DTI quality control assessment via error estimation from Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.
2013-03-01
Diffusion Tensor Imaging (DTI) is currently the state-of-the-art method for characterizing the microscopic tissue structure of white matter in the normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts that mandate stringent Quality Control (QC) schemes to eliminate lower-quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations, leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data set is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte Carlo (MC) simulation based method for the assessment of the resulting tensor properties. This allows for a consistent, error-based threshold definition for rejecting or accepting the DWI-QC data. Specifically, we propose the estimation of two error metrics related to the directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can differ substantially in magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.
NASA Astrophysics Data System (ADS)
Pechereau, Francois; Jansky, Jaroslav; Bourdon, Anne
2012-10-01
In recent years, experimental studies on flue gas treatment have demonstrated the efficiency of plasma assisted catalysis for the treatment of a wide range of pollutants at a low energetic cost. In plasma reactors, usual catalyst supports are pellets, monoliths or porous media, and then atmospheric pressure discharges have to interact with many obstacles and to propagate in microcavities and pores. As a first step to better understand atmospheric pressure discharge dynamics in these complex geometries, in this work, we have carried out numerical simulations using a 2D-axisymmetric fluid model for a point-to-plane discharge with a dielectric plane obstacle placed in the path of the discharge. First, we have simulated the discharge ignition at the point electrode, its propagation in the gap and its impact and expansion on the dielectric plane. Depending on the applied voltage, the dielectric plane geometry and permittivity, we have identified conditions for the reignition of a second discharge behind the plane obstacle. These conditions will be discussed and compared with recent experimental results on the same configuration.
Hydrodynamics simulations of 2 (omega) laser propagation in underdense gasbag plasmas
Meezan, N B; Divol, L; Marinak, M M; Kerbel, G D; Suter, L J; Stevenson, R M; Slark, G E; Oades, K
2004-04-05
Recent 2{omega} laser propagation and stimulated Raman backscatter (SRS) experiments performed on the Helen laser have been analyzed using the radiation-hydrodynamics code hydra. These experiments utilized two diagnostics sensitive to the hydrodynamics of gasbag targets: a fast x-ray framing camera (FXI) and an SRS streak spectrometer. With a newly implemented nonlocal thermal transport model, hydra is able to reproduce many features seen in the FXI images and the SRS streak spectra. Experimental and simulated side-on FXI images suggest that propagation can be explained by classical laser absorption and the resulting hydrodynamics. Synthetic SRS spectra generated from the hydra results reproduce the details of the experimental SRS streak spectra. Most features in the synthetic spectra can be explained solely by axial density and temperature gradients. The total SRS backscatter increases with initial gasbag fill density up to {approx} 0.08 times the critical density, then decreases. Images from a near-backscatter camera (NBI) show that severe beam spray is not responsible for the trend in total backscatter. Filamentation does not appear to be a significant factor in gasbag hydrodynamics. The simulation and analysis techniques established here can be used in upcoming experimental campaigns on the Omega laser facility and the National Ignition Facility.
A phase screen model for simulating numerically the propagation of a laser beam in rain
Lukin, I P; Rychkov, D S; Falits, A V; Lai, Kin S; Liu, Min R
2009-09-30
A method based on the generalisation of the phase screen method for a continuous random medium is proposed for numerically simulating the propagation of laser radiation in a turbulent atmosphere with precipitation. In the phase screen model for the discrete component of a heterogeneous 'air-rain droplet' medium, the amplitude screen describing the scattering of an optical field by discrete particles of the medium is replaced by an equivalent phase screen whose correlation function of effective dielectric constant fluctuations has a spectrum similar to that of the discrete scattering component - water droplets in air. The 'turbulent' phase screen is constructed on the basis of the Kolmogorov model, while the 'rain' screen model utilises the exponential distribution of the number of rain drops with respect to their radii as a function of the rain intensity. The results of the numerical simulation are compared with the known theoretical estimates for a large-scale discrete scattering medium. (propagation of laser radiation in matter)
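For the 'turbulent' component, a common recipe is the FFT-based Kolmogorov phase screen: shape complex white noise by the square root of the phase power spectrum and inverse-transform. The sketch below uses one of several normalization conventions found in the literature and omits the subharmonic (low-frequency) correction; it is not the authors' code:

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=0):
    """FFT-based Kolmogorov phase screen (radians) on an n x n grid.

    dx: grid spacing [m]; r0: Fried parameter [m]. Low spatial
    frequencies are under-sampled by this basic method (no subharmonics).
    """
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                           # frequency grid spacing [1/m]
    fx = np.fft.fftfreq(n, d=dx)
    f = np.hypot(fx[:, None], fx[None, :])
    f[0, 0] = np.inf                              # suppress the undefined piston term
    # Kolmogorov phase PSD: 0.023 r0^(-5/3) f^(-11/3)
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    cn = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(psd / 2.0) * df
    return np.fft.ifft2(cn).real * n * n

screen = kolmogorov_phase_screen(n=256, dx=0.01, r0=0.1)
print(screen.shape, screen.std())
```

The paper's contribution is the analogous 'rain' screen, in which the dielectric-constant fluctuation spectrum of the droplet ensemble replaces the Kolmogorov PSD in the same shaping step.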
Hydrodynamics simulations of 2{omega} laser propagation in underdense gasbag plasmas
Meezan, N.B.; Divol, L.; Marinak, M.M.; Kerbel, G.D.; Suter, L.J.; Stevenson, R.M.; Slark, G.E.; Oades, K.
2004-12-01
Recent 2{omega} laser propagation and stimulated Raman backscatter (SRS) experiments performed on the Helen laser have been analyzed using the radiation-hydrodynamics code HYDRA [M. M. Marinak, G. D. Kerbel, N. A. Gentile, O. Jones, D. Munro, S. Pollaine, T. R. Dittrich, and S. W. Haan, Phys. Plasmas 8, 2275 (2001)]. These experiments utilized two diagnostics sensitive to the hydrodynamics of gasbag targets: a fast x-ray framing camera (FXI) and a SRS streak spectrometer. With a newly implemented nonlocal thermal transport model, HYDRA is able to reproduce many features seen in the FXI images and the SRS streak spectra. Experimental and simulated side-on FXI images suggest that propagation can be explained by classical laser absorption and the resulting hydrodynamics. Synthetic SRS spectra generated from the HYDRA results reproduce the details of the experimental SRS streak spectra. Most features in the synthetic spectra can be explained solely by axial density and temperature gradients. The total SRS backscatter increases with initial gasbag fill density up to {approx_equal}0.08 times the critical density, then decreases. Data from a near-backscatter imaging camera show that severe beam spray is not responsible for the trend in total backscatter. Filamentation does not appear to be a significant factor in gasbag hydrodynamics. The simulation and analysis techniques established here can be used in ongoing experimental campaigns on the Omega laser facility and the National Ignition Facility.
NASA Astrophysics Data System (ADS)
Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George
2016-04-01
Precipitation is arguably one of the most important forcing variables that drive terrestrial water cycle processes. The process of precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid), and depends on several other factors (aerosols, orography, etc.), which make estimation and modeling of this process a particularly challenging task. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to investigate and assess the impact of precipitation uncertainty in hydrologic simulations in order to be able to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN), and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. Comparative analysis is based on using the SAFRAN simulations as reference and is carried out at different spatial (0.5deg or regional average) and temporal (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of the Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii
A simulator study of the interaction of pilot workload with errors, vigilance, and decisions
NASA Technical Reports Server (NTRS)
Smith, H. P. R.
1979-01-01
A full mission simulation of a civil air transport scenario that had two levels of workload was used to observe the actions of the crews and the basic aircraft parameters and to record heart rates. The results showed that the number of errors was very variable among crews but the mean increased in the higher workload case. The increase in errors was not related to rise in heart rate but was associated with vigilance times as well as the days since the last flight. The recorded data also made it possible to investigate decision time and decision order. These also varied among crews and seemed related to the ability of captains to manage the resources available to them on the flight deck.
Enhancement of the NMSU Channel Error Simulator to Provide User-Selectable Link Delays
NASA Technical Reports Server (NTRS)
Horan, Stephen; Wang, Ru-Hai
2000-01-01
This is the third in a continuing series of reports describing the development of the Space-to-Ground Link Simulator (SGLS) to be used for testing data transfers under simulated space channel conditions. The SGLS is based upon Virtual Instrument (VI) software techniques for managing the error generation, link data rate configuration, and, now, selection of the link delay value. In this report we detail the changes that needed to be made to the SGLS VI configuration to permit link delays to be added to the basic error generation and link data rate control capabilities. This was accomplished by modifying the rate-splitting VIs to include a buffer to hold the incoming data for the duration selected by the user, emulating the channel link delay. In sample tests of this configuration, the TCP/IP(sub ftp) service and the SCPS(sub fp) service were used to transmit 10-KB data files using both symmetric (both forward and return links set to 115200 bps) and unsymmetric (forward link set at 2400 bps and return link set at 115200 bps) link configurations. Transmission times were recorded at bit error rates of 0 through 10(exp -5) to give an indication of the link performance. In these tests, we noted separately the protocol setup time needed to initiate the file transfer and the variation in the actual file transfer time caused by channel errors. Both protocols showed similar performance to that seen earlier for the symmetric and unsymmetric channels. This time, the results also showed that the delays in establishing the file transfer protocol could double the transmission time and need to be accounted for in mission planning. Both protocols also had difficulty transmitting large data files over large link delays. In these tests, there was no clear favorite between TCP/IP(sub ftp) and SCPS(sub fp). Based upon these tests, further testing is recommended to extend the results to different file transfer configurations.
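The buffering scheme described, holding incoming data until the configured link delay has elapsed, can be sketched as a timestamped FIFO (the class and its interface are hypothetical, not the SGLS VI implementation):

```python
from collections import deque

class DelayLine:
    """Buffer that releases each item only after a fixed one-way link delay.

    A minimal sketch of a rate-splitting delay buffer: items are
    timestamped on entry and become available once the configured delay
    has elapsed. Assumes push() is called with non-decreasing `now`.
    """

    def __init__(self, delay_s):
        self.delay_s = delay_s
        self._queue = deque()            # (release_time, item) pairs, in order

    def push(self, item, now):
        self._queue.append((now + self.delay_s, item))

    def pop_ready(self, now):
        out = []
        while self._queue and self._queue[0][0] <= now:
            out.append(self._queue.popleft()[1])
        return out

link = DelayLine(delay_s=1.25)           # e.g. a relay-satellite one-way delay
link.push(b"frame-0", now=0.0)
link.push(b"frame-1", now=0.5)
print(link.pop_ready(now=1.0))   # []  nothing has aged past the delay yet
print(link.pop_ready(now=1.3))   # [b'frame-0']
```

Placing one such buffer on each direction of the link, with independent delay values, reproduces the symmetric and unsymmetric configurations used in the tests.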
NASA Astrophysics Data System (ADS)
Celik, Cihangir
-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing the 10B content (20% of natural boron) in the borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B(n,alpha)7Li reaction products. Both of the particles produced are capable of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner to the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert memory-equipped intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers into the semiconductor memory architecture. This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with the neutron flux and memory supply voltage.
Measurement
Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.
2007-01-01
When kinematic GPS processing software is used to estimate the trajectory of an aircraft, vertical height errors of decimeters can occur unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography, because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.
Yuan, X; Borup, D; Wiskin, J; Berggren, M; Johnson, S A
1999-01-01
We present a method to incorporate relaxation-dominated attenuation into the finite-difference time-domain (FDTD) simulation of acoustic wave propagation in complex media. A dispersive perfectly matched layer (DPML) boundary condition, suitable for boundary matching to such a dispersive medium, is also proposed to truncate the FDTD simulation domain. The numerical simulation of a Ricker wavelet propagating in a dispersive medium described by a second-order Debye model shows that the wavelet is attenuated in amplitude and expanded in time in the course of propagation, as required by the Kramers-Kronig relations. The numerical results are also compared to the exact solution, showing that the dispersive FDTD method is accurate and that the DPML boundary condition effectively damps reflected waves. The method presented here is applicable to the simulation of ultrasonic instrumentation for medical imaging and other nondestructive testing problems with frequency-dependent, attenuating media.
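The core velocity-pressure FDTD update can be sketched in one dimension; this is a lossless staggered-grid scheme without the Debye relaxation or DPML of the paper, and all parameter values are illustrative:

```python
import numpy as np

def ricker(t, f0):
    """Ricker wavelet with peak frequency f0."""
    a = (np.pi * f0 * (t - 1.0 / f0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def fdtd_1d(nx=400, nt=900, c=1500.0, rho=1000.0, dx=0.5e-3, f0=2e5):
    """Lossless 1-D staggered-grid velocity-pressure FDTD with a soft source."""
    dt = 0.4 * dx / c                      # CFL-stable time step
    p = np.zeros(nx)                       # pressure at integer grid points
    v = np.zeros(nx + 1)                   # velocity at half grid points
    for n in range(nt):
        # leapfrog update: velocity from pressure gradient, then pressure
        v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
        p -= rho * c**2 * dt / dx * (v[1:] - v[:-1])
        p[nx // 4] += ricker(n * dt, f0)   # soft source injection
    return p

p = fdtd_1d()
```

A dispersive medium would add an auxiliary differential equation for the Debye relaxation term in the pressure update.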
NASA Astrophysics Data System (ADS)
Murshid, Syed H.; Chakravarty, Abhijit
2011-06-01
Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for individual channels of such a system are also presented.
The Effects of Computational Modeling Errors on the Estimation of Statistical Mechanical Variables.
Faver, John C; Yang, Wei; Merz, Kenneth M
2012-10-01
Computational models used in the estimation of thermodynamic quantities of large chemical systems often require approximate energy models that rely on parameterization and cancellation of errors to yield agreement with experimental measurements. In this work, we show how energy function errors propagate when computing statistical mechanics-derived thermodynamic quantities. Assuming that each microstate included in a statistical ensemble has a measurable amount of error in its calculated energy, we derive low-order expressions for the propagation of these errors in free energy, average energy, and entropy. Through gedanken experiments we show the expected behavior of these error propagation formulas on hypothetical energy surfaces. For very large microstate energy errors, these low-order formulas disagree with estimates from Monte Carlo simulations of error propagation. Hence, such simulations of error propagation may be required when using poor potential energy functions. Propagated systematic errors predicted by these methods can be removed from computed quantities, while propagated random errors yield uncertainty estimates. Importantly, we find that end-point free energy methods maximize random errors and that local sampling of potential energy wells decreases random error significantly. Hence, end-point methods should be avoided in energy computations and should be replaced by methods that incorporate local sampling. The techniques described herein will be used in future work involving the calculation of free energies of biomolecular processes, where error corrections are expected to yield improved agreement with experiment.
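A toy version of the Monte Carlo error-propagation experiment described above: perturb a handful of hypothetical microstate energies with random errors and observe both the systematic shift and the random spread propagated into the free energy (all energies and error magnitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 0.6  # roughly room temperature in kcal/mol

# Hypothetical microstate energies of a small ensemble (kcal/mol).
E = np.array([0.0, 0.4, 0.9, 1.5, 2.2])

def free_energy(E):
    # A = -kT ln Z with Z = sum_i exp(-E_i / kT)
    return -kT * np.log(np.sum(np.exp(-E / kT)))

# Monte Carlo propagation: perturb each microstate energy with a random
# error of standard deviation sigma and observe the distribution of A.
sigma = 0.3
A_samples = np.array([free_energy(E + rng.normal(0.0, sigma, E.size))
                      for _ in range(20000)])
A0 = free_energy(E)
# The mean shift is a propagated systematic error (free energy is biased
# low because exp(-E/kT) is convex); the spread is the random error.
print(A0, A_samples.mean() - A0, A_samples.std())
```

The Boltzmann weighting means errors in low-energy microstates dominate the propagated uncertainty, consistent with the paper's observation that local sampling of potential energy wells reduces random error.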
Parallel 3D Simulation of Seismic Wave Propagation in the Structure of Nobi Plain, Central Japan
NASA Astrophysics Data System (ADS)
Kotani, A.; Furumura, T.; Hirahara, K.
2003-12-01
We performed large-scale parallel simulations of seismic wave propagation to understand the complex wave behavior in the 3D basin structure of the Nobi Plain, one of the most densely populated areas of central Japan. Many large earthquakes occurred in this area in the past, such as the 1891 Nobi earthquake (M8.0), the 1944 Tonankai earthquake (M7.9) and the 1945 Mikawa earthquake (M6.8). In order to mitigate the potential disasters of future earthquakes, the 3D subsurface structure of the Nobi Plain has recently been investigated by local governments. We referred to this model together with Bouguer anomaly data to construct a detailed 3D basin structure model for the Nobi Plain, and conducted computer simulations of ground motions. We first evaluated the ground motions for two small earthquakes (M4~5); one occurred just beneath the basin edge to the west, and the other to the south. The ground motions from these earthquakes were well recorded by the strong motion networks K-net and KiK-net and by seismic intensity instruments operated by local governments. We compare the observed seismograms with simulations to validate the 3D model. For the 3D simulation we sliced the 3D model into a number of layers assigned to many processors for concurrent computing. The equations of motion are solved using a high-order (32nd) staggered-grid FDM in the horizontal directions and a conventional (4th-order) FDM in the vertical direction, with MPI inter-processor communication between neighboring regions. The simulation model is 128 km by 128 km by 43 km, discretized at a variable grid size of 62.5-125 m in the horizontal directions and 31.25-62.5 m in the vertical direction. We assigned a minimum shear-wave velocity of Vs=0.4 km/s at the top of the sedimentary basin. The seismic sources for the small events are approximated by double-couple point sources, and we simulate the seismic wave propagation up to a maximum frequency of 2 Hz. We used the Earth Simulator (JAMSTEC, Yokohama Inst) to conduct such
NASA Astrophysics Data System (ADS)
Langton, Christian; Church, Luke
2002-05-01
Cancellous bone consists of a porous open-celled framework of trabeculae interspersed with marrow. Although the measurement of broadband ultrasound attenuation (BUA) has been shown to be sensitive to osteoporotic changes, the exact dependence on material and structural parameters has not been elucidated. A 3-D computer simulation of ultrasound propagation through cancellous bone has been developed, based upon simple reflective behavior at the multitude of trabecular/marrow interfaces. A cancellous bone framework is initially described by an array of bone and marrow elements. An ultrasound pulse is launched along each row of the model with partial reflection occurring at each bone/marrow interface. If a reverse direction wave hits an interface, a further forward (echo) wave is created, with phase inversion implemented if appropriate. This process is monitored for each wave within each row. The effective received signal is created by summing the time domain data, thus simulating detection by a phase-sensitive ultrasound transducer, as incorporated in clinical systems. The simulation has been validated on a hexagonal honeycomb design of variable mesh size, first against a commercial computer simulation solution (Wave 2000 Pro), and second, via experimental measurement of physical replicas produced by stereolithography.
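The partial-reflection rule applied at each bone/marrow interface can be sketched as follows (impedance values are approximate; the published simulation also tracks the echo waves and phase inversion, which this direct-wave sketch omits):

```python
Z_BONE, Z_MARROW = 7.8e6, 1.5e6   # approximate acoustic impedances in rayl

def interface_coeffs(z1, z2):
    """Pressure reflection and transmission coefficients at a z1 -> z2 interface."""
    r = (z2 - z1) / (z2 + z1)      # negative r implies phase inversion
    return r, 1.0 + r

def first_arrival_amplitude(layers):
    """Amplitude of the direct (unreflected) wave after crossing every
    impedance change along a single row of bone/marrow elements."""
    amp = 1.0
    for z1, z2 in zip(layers, layers[1:]):
        if z1 != z2:
            _, t = interface_coeffs(z1, z2)
            amp *= t
    return amp

# One row of the model: marrow / bone / marrow / bone / marrow.
row = [Z_MARROW, Z_BONE, Z_MARROW, Z_BONE, Z_MARROW]
amp = first_arrival_amplitude(row)
print(amp)  # each bone layer costs a factor (1 - r^2) in direct-wave amplitude
```

Summing the direct wave with all delayed echo waves, as the full simulation does, reproduces the phase-sensitive detection of clinical BUA systems.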
Propagation of variability in railway dynamic simulations: application to virtual homologation
NASA Astrophysics Data System (ADS)
Funfschilling, Christine; Perrin, Guillaume; Kraft, Sönke
2012-01-01
Railway dynamic simulations are increasingly used to predict and analyse the behaviour of the vehicle and of the track during their whole life cycle. Up to now, however, no simulation has been used in the certification procedure, even though the expected benefits are important: cheaper and shorter procedures, more objectivity, better knowledge of the behaviour around critical situations. Deterministic simulations are nevertheless too poor to represent the whole physics of the track/vehicle system, which contains several sources of variability: variability of the mechanical parameters of a train among a class of vehicles (mass, stiffness and damping of the different suspensions), variability of the contact parameters (friction coefficient, wheel and rail profiles) and variability of the track design and quality. This variability plays an important role in the safety and the ride quality, and thus in the certification criteria. When using simulation for certification purposes, it therefore seems crucial to take into account the variability of the different inputs. The main goal of this article is thus to propose a method to introduce variability into railway dynamics. A four-step method is described, namely: the definition of the stochastic problem, the modelling of the input variability, the propagation, and the analysis of the output. Each step is illustrated with railway examples.
NASA Astrophysics Data System (ADS)
Lisinetskaya, Polina G.; Röhr, Merle I. S.; Mitrić, Roland
2016-06-01
We present a theoretical approach for the simulation of the electric field and exciton propagation in ordered arrays constructed of molecular-sized noble metal clusters bound to organic polymer templates. In order to describe the electronic coupling between individual constituents of the nanostructure we use the ab initio parameterized transition charge method which is more accurate than the usual dipole-dipole coupling. The electronic population dynamics in the nanostructure under an external laser pulse excitation is simulated by numerical integration of the time-dependent Schrödinger equation employing the fully coupled Hamiltonian. The solution of the TDSE gives rise to time-dependent partial point charges for each subunit of the nanostructure, and the spatio-temporal electric field distribution is evaluated by means of classical electrodynamics methods. The time-dependent partial charges are determined based on the stationary partial and transition charges obtained in the framework of the TDDFT. In order to treat large plasmonic nanostructures constructed of many constituents, the approximate self-consistent iterative approach presented in (Lisinetskaya and Mitrić in Phys Rev B 89:035433, 2014) is modified to include the transition-charge-based interaction. The developed methods are used to study the optical response and exciton dynamics of Ag3+ and porphyrin-Ag4 dimers. Subsequently, the spatio-temporal electric field distribution in a ring constructed of ten porphyrin-Ag4 subunits under the action of circularly polarized laser pulse is simulated. The presented methodology provides a theoretical basis for the investigation of coupled light-exciton propagation in nanoarchitectures built from molecular size metal nanoclusters in which quantum confinement effects are important.
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data
2013-01-01
Background Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Methods Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003–2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban log_e(daily 1-hour maximum NO2). Results When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background log_e(NO2) and 38% for rural log_e(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural log_e(NO2) but more marked for urban log_e(NO2
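The attenuation caused by classical additive measurement error can be sketched in a simplified linear (rather than Poisson) setting; the exposure distribution, error size, and effect size below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200000
true_beta = 0.05

x = rng.normal(30.0, 8.0, n)              # "true" daily pollutant exposure
y = true_beta * x + rng.normal(0.0, 1.0, n)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

# Classical (additive, independent) error attenuates the slope by the
# reliability ratio var(x) / (var(x) + var(u)); here var(u) = var(x),
# so the estimated effect is roughly halved.
u = rng.normal(0.0, 8.0, n)               # measurement error
slope_err = ols_slope(x + u, y)
print(ols_slope(x, y), slope_err)
```

Berkson-type error (as when a regional average is assigned to all grid-squares) behaves differently, producing little bias in the slope but inflated standard errors, which is why the paper's monitor-versus-model comparison is not symmetric.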
Cancès, Eric; Castella, François; Chartier, Philippe; Faou, Erwan; Le Bris, Claude; Legoll, Frédéric; Turinici, Gabriel
2004-12-01
We introduce high-order formulas for the computation of statistical averages based on the long-time simulation of molecular dynamics trajectories. In some cases, this allows us to significantly improve the convergence rate of time averages toward ensemble averages. We provide some numerical examples that show the efficiency of our scheme. When trajectories are approximated using symplectic integration schemes (such as velocity Verlet), we give some error bounds that allow one to fix the parameters of the computation in order to reach a given desired accuracy in the most efficient manner. PMID:15549912
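A minimal velocity Verlet sketch showing a long-time trajectory average converging to the corresponding ensemble value for a harmonic oscillator (illustrative only; the paper's high-order averaging formulas are not implemented here):

```python
import numpy as np

def velocity_verlet(q0, p0, force, dt, n_steps, mass=1.0):
    """Symplectic velocity Verlet integration; returns the trajectory of q."""
    q, p = q0, p0
    traj = np.empty(n_steps)
    f = force(q)
    for i in range(n_steps):
        p += 0.5 * dt * f        # half kick
        q += dt * p / mass       # drift
        f = force(q)
        p += 0.5 * dt * f        # half kick
        traj[i] = q
    return traj

# Harmonic oscillator with q(0)=1, p(0)=0: the exact long-time average of
# q^2 is 1/2, and the symplectic integrator reproduces it to O(dt^2).
traj = velocity_verlet(q0=1.0, p0=0.0, force=lambda q: -q,
                       dt=0.05, n_steps=200000)
print(np.mean(traj**2))
```

Because velocity Verlet conserves a shadow Hamiltonian, the time average converges without energy drift; the paper's error bounds quantify how dt and the trajectory length trade off for a target accuracy.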
NASA Astrophysics Data System (ADS)
Plotnikov, M. Yu.; Shkarupa, E. V.
2015-11-01
Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables for the calculation of the macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computations and is applicable for any degree of probabilistic dependence of the sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested using the classical Fourier problem and the problem of supersonic flow of rarefied gas through a permeable obstacle.
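Why dependent samples matter can be illustrated with a batch-means error estimate on correlated surrogate data (an AR(1) process standing in for successive DSMC samples of a macroparameter; this is not the combined approach of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) surrogate for correlated steady-state samples.
n, phi = 100000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

def naive_stderr(x):
    # Assumes independent samples: badly underestimates for correlated data.
    return x.std(ddof=1) / np.sqrt(len(x))

def batch_means_stderr(x, n_batches=50):
    # Average over long batches so the batch means are nearly independent.
    b = len(x) // n_batches
    means = x[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(n_batches)

print(naive_stderr(x), batch_means_stderr(x))
```

For phi = 0.9 the integrated autocorrelation time is about 19 samples, so the naive estimate is too small by roughly a factor of four; any honest DSMC error estimate must account for this dependence.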
NASA Astrophysics Data System (ADS)
Dietze, M.; Lebauer, D.; Moorcroft, P. R.; Richardson, A. D.; Wang, D.
2009-12-01
Data-model integration plays a critical role in assessing and improving our capacity to predict the dynamics of the terrestrial carbon cycle. Likewise, the ability to attach quantitative statements of uncertainty around model forecasts is crucial for model assessment and interpretation and for setting field research priorities. Bayesian methods have garnered recent attention for these applications, especially for problems with multiple data constraints, but the Markov chain Monte Carlo (MCMC) methods usually employed can be computationally prohibitive for large data sets and slow models. We describe an alternative method, Bayesian model emulation, that can approximate the full joint posterior density, is more amenable to parallelization, and provides an estimate of parameter sensitivity as a byproduct. We report on the application of these methods to the parameterization of the Ecosystem Demography model v2.1, an age- and size-structured terrestrial biosphere model. Results focus on the parameterization of the model at two flux tower sites, one in the northern hardwood forest of New Hampshire and the second at a biofuel crop field trial in Illinois. Analysis of both sites involved multiple data constraints, the specification of both model and data uncertainties, and the inclusion of informative priors constructed from a meta-analysis of the primary literature. The model is well-constrained at both sites, with particular improvement in parameters controlling below-ground processes and allocation, which had poor prior constraint. Observation error for NEE is highest during the growing season while model error, by contrast, is highest in the winter due to sensitivity of the model to soil freezing. Model fit is sensitive to the weighting of different data sources, in particular if the data sources are in disagreement (e.g. nighttime NEE and soil respiration). Statistically accounting for the high degree of temporal autocorrelation in eddy
Salomons, Erik M; Lohman, Walter J A; Zhou, Han
2016-01-01
Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
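A minimal D1Q3 lattice Boltzmann (BGK) sketch of an acoustic pulse in one dimension, illustrating that the LBM propagates sound at the lattice speed of sound cs = 1/√3 (all parameters illustrative; the article's simulations are multidimensional and include ground and barriers):

```python
import numpy as np

# Minimal D1Q3 lattice Boltzmann (BGK) model of an acoustic pulse.
nx, nt, tau = 400, 150, 0.6
w = np.array([2/3, 1/6, 1/6])   # lattice weights
c = np.array([0, 1, -1])        # lattice velocities
cs2 = 1/3                       # lattice speed of sound squared

# Small density pulse on a uniform background; start at rest equilibrium.
rho = 1.0 + 0.01 * np.exp(-0.01 * (np.arange(nx) - nx//2)**2)
f = np.array([w[i] * rho for i in range(3)])

for _ in range(nt):
    rho = f.sum(axis=0)
    u = (f[1] - f[2]) / rho
    for i in range(3):
        cu = c[i] * u
        feq = w[i] * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - u**2/(2*cs2))
        f[i] += -(f[i] - feq) / tau      # BGK collision (tau sets viscosity)
    f[1] = np.roll(f[1], 1)              # streaming of right-movers
    f[2] = np.roll(f[2], -1)             # streaming of left-movers

rho = f.sum(axis=0)
peak = np.argmax(rho[:nx//2])            # position of the left-going pulse
print(peak)
```

The relaxation time tau sets the kinematic viscosity, nu = cs2 (tau - 1/2), which is the origin of the excess numerical dissipation the article works around; reducing tau toward 1/2 or refining the lattice reduces it, as the article proposes.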
NASA Astrophysics Data System (ADS)
Aoki, Masanori; Baba, Yoshihiro; Rakov, Vladimir A.
2015-08-01
We have computed lightning electromagnetic pulses (LEMPs), including the azimuthal magnetic field Hφ, vertical electric field Ez, and horizontal (radial) electric field Eh that propagated over 5 to 200 km of flat lossy ground, using the finite difference time domain (FDTD) method in the 2-D cylindrical coordinate system. This is the first systematic full-wave study of LEMP propagation effects based on a realistic return-stroke model and including the complete return-stroke frequency range. Influences of the return-stroke wavefront speed (ranging from c/2 to c, where c is the speed of light), current risetime (ranging from 0.5 to 5 µs), and ground conductivity (ranging from 0.1 mS/m to ∞) on Hφ, Ez, and Eh have been investigated. Also, the FDTD-computed waveforms of Eh have been compared with the corresponding ones computed using the Cooray-Rubinstein formula. Peaks of Hφ, Ez, and Eh are nearly proportional to the return-stroke wavefront speed. The peak of Eh decreases with increasing current risetime, while those of Hφ and Ez are only slightly influenced by it. The peaks of Hφ and Ez are essentially independent of the ground conductivity at a distance of 5 km. Beyond this distance, they appreciably decrease relative to the perfectly conducting ground case, and the decrease is stronger for lower ground conductivity values. The peak of Eh increases with decreasing ground conductivity. The computed Eh/Ez is consistent with measurements of Thomson et al. (1988). The observed decrease of Ez peak and increase of Ez risetime due to propagation over 200 km of Florida soil are reasonably well reproduced by the FDTD simulation with ground conductivity of 1 mS/m.
NASA Astrophysics Data System (ADS)
Petrov, P.; Newman, G. A.
2010-12-01
-Fourier domain we developed a 3D code for full-wave field simulation in elastic media which takes into account the nonlinearity introduced by free-surface effects. Our approach is based on the velocity-stress formulation. In contrast to the conventional formulation, we defined the material properties such as density and Lame constants not at nodal points but within cells. This second-order finite-difference method, formulated on a cell-based grid, generates numerical solutions compatible with analytical ones within the error range determined by dispersion analysis. Our simulator will be embedded in an inversion scheme for joint seismic-electromagnetic imaging. It also offers possibilities for preconditioning the seismic wave propagation problems in the frequency domain. References: Shin, C. & Cha, Y. (2009), Waveform inversion in the Laplace-Fourier domain, Geophys. J. Int. 177(3), 1067-1079. Shin, C. & Cha, Y. H. (2008), Waveform inversion in the Laplace domain, Geophys. J. Int. 173(3), 922-931. Commer, M. & Newman, G. (2008), New advances in three-dimensional controlled-source electromagnetic inversion, Geophys. J. Int. 172(2), 513-535. Newman, G. A., Commer, M. & Carazzone, J. J. (2010), Imaging CSEM data in the presence of electrical anisotropy, Geophysics, in press.
NASA Astrophysics Data System (ADS)
Green, Marilynn P.; Wang, S. S. Peter
2002-11-01
Mobile location is one of the fastest growing areas for the development of new technologies, services and applications. This paper describes the channel models that were developed as a basis of discussion to assist the Technical Subcommittee T1P1.5 in its consideration of various mobile location technologies for emergency applications (1997 - 1998) for presentation to the U.S. Federal Communications Commission (FCC). It also presents the PCS 1900 extension to this model, which is based on the COST-231 extended Hata model and a review of the original Okumura graphical interpretation of signal propagation characteristics in different environments. Based on a wide array of published (and non-publicly disclosed) empirical data, the signal propagation models described in this paper were all obtained by consensus of a group of inter-company participants in order to facilitate direct comparison between simulations of different handset-based and network-based location methods prior to their standardization for emergency E-911 applications by the FCC. Since that time, this model has become a de facto standard for assessing the positioning accuracy of different location technologies using GSM mobile terminals. In this paper, the radio environment is described to the level of detail that is necessary to replicate it in a software environment.
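The COST-231 extended Hata model mentioned above has a standard closed form for the median urban path loss; a sketch is below (the T1P1.5 consensus model adds further environment-specific corrections not shown here):

```python
import math

def cost231_hata(f_mhz, d_km, h_base_m, h_mobile_m, metropolitan=False):
    """Median path loss (dB) from the COST-231 extension of the Hata model.
    Validity range: f = 1500-2000 MHz, d = 1-20 km, base antenna 30-200 m,
    mobile antenna 1-10 m."""
    # Mobile antenna correction for small/medium cities.
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
         - (1.56 * math.log10(f_mhz) - 0.8)
    c = 3.0 if metropolitan else 0.0   # metropolitan-centre correction
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km) + c)

# PCS 1900 example: 1900 MHz, 2 km, 50 m base antenna, 1.5 m handset.
loss = cost231_hata(f_mhz=1900.0, d_km=2.0, h_base_m=50.0, h_mobile_m=1.5)
print(round(loss, 1))
```

In a location-accuracy simulation this median loss is combined with lognormal shadowing and fast fading to generate the received signal strengths at each candidate base station.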
Open Boundary Particle-in-Cell Simulation of Dipolarization Front Propagation
NASA Technical Reports Server (NTRS)
Klimas, Alex; Hwang, Kyoung-Joo; Vinas, Adolfo F.; Goldstein, Melvyn L.
2014-01-01
First results are presented from an ongoing open boundary 2-1/2D particle-in-cell simulation study of dipolarization front (DF) propagation in Earth's magnetotail. At this stage, the study is focused on the compression, or pileup, region preceding the DF current sheet. We find that the earthward acceleration of the plasma in this region is in general agreement with a recent DF force balance model. A gyrophase-bunched reflected ion population at the leading edge of the pileup region is reflected by a normal electric field in the pileup region itself, rather than through an interaction with the current sheet. We discuss plasma wave activity at the leading edge of the pileup region that may be driven by gradients, or by reflected ions, or both; the mode has not been identified. The waves oscillate near but above the ion cyclotron frequency, with wavelengths of several ion inertial lengths. We show that the waves oscillate primarily in the perpendicular magnetic field components, do not propagate along the background magnetic field, are right-hand elliptically (close to circularly) polarized, exist in a region of high electron and ion beta, and are stationary in the plasma frame moving earthward. We discuss the possibility that the waves are present in plasma sheet data but have not, thus far, been discovered.
NASA Astrophysics Data System (ADS)
Taghavifar, Hadi; Khalilarya, Shahram; Jafarmadar, Samad; Taghavifar, Hamid
2016-08-01
A multidimensional computational fluid dynamics code was developed and integrated with a probability density function combustion model to give a detailed account of the multiphase fluid flow. The vapor phase within the injector domain is treated with a Reynolds-averaged Navier-Stokes technique. A new parameter is proposed which is an index of plane-cut spray propagation and takes into account the two parameters of spray penetration length and cone angle at the same time. It was found that the spray propagation index (SPI) tends to increase at lower r/d ratios, although the spray penetration tends to decrease. The results for SPI obtained with the empirical correlation of Hay and Jones were compared with the simulation computation as a function of the respective r/d ratio. Based on the results of this study, the spray distribution on the plane area has a proportional correlation with heat release amount and NOx emission mass fraction, and with soot concentration reduction. Higher cavitation is attributed to the sharp edge of the nozzle entrance, yielding better liquid jet disintegration and smaller spray droplets, which reduces the soot mass fraction of the late combustion process. In order to gain better insight into the cavitation phenomenon, the turbulence magnitude in the nozzle and combustion chamber was acquired and depicted along with the spray velocity.
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, Z. Y.; Wang, X. H.; Li, D.; Yang, A. J.; Liu, D. X.; Rong, M. Z.; Chen, H. L.; Kong, M. G.
2015-11-01
Cold atmospheric-pressure plasmas have the potential to be used for endoscope sterilization. In this study, a long quartz tube was used as the simulated endoscope channel, and an array of electrodes was wrapped one by one along the tube. Plasmas were generated in the inner channel of the tube, and their propagation characteristics in He+O2 feedstock gases were studied as a function of the oxygen concentration. It is found that each of the plasmas originates at the edge of an instantaneous cathode, and then propagates bidirectionally. Interestingly, a plasma head with bright spots is formed in the hollow instantaneous cathode and moves towards its center part, and a plasma tail expands through the electrode gap and then forms a swallow tail in the instantaneous anode. The plasmas are in good axisymmetry when [O2] ≤ 0.3%, but not for [O2] ≥ 1%, and even behave in a stochastic manner when [O2] = 3%. The antibacterial agents are charged species and reactive oxygen species, so their wall fluxes represent the "plasma dosage" for the sterilization. Such fluxes mainly act on the inner wall in the hollow electrode rather than in the electrode gap, and they reach maximum efficiency when the oxygen concentration is around 0.3%. It is estimated that one can reduce the electrode gap and enlarge the electrode width to achieve a more homogeneous and efficient antibacterial effect, which has benefits for sterilization applications.
Monte Carlo simulations of converging laser beam propagating in turbid media with parallel computing
NASA Astrophysics Data System (ADS)
Wu, Di; Lu, Jun Q.; Hu, Xin H.; Zhao, S. S.
1999-11-01
Due to its flexibility and simplicity, the Monte Carlo method is often used to study light propagation in turbid media, where photons are treated like classical particles being scattered and absorbed randomly according to radiative transfer theory. However, because a large number of photons is needed to produce statistically significant results, this type of calculation requires large computing resources. To overcome this difficulty, we implemented parallel computing techniques in our Monte Carlo simulations. The algorithm is based on the fact that the classical particles are uncorrelated, so the trajectories of multiple photons can be tracked simultaneously. When a focused beam of light is incident on the medium, the incident photons are divided into groups according to the processors available on a parallel machine, and the calculations are carried out in parallel. Utilizing PVM (Parallel Virtual Machine, a parallel computing software package), parallel programs in both C and FORTRAN were developed on the massively parallel Cray T3E computer at the North Carolina Supercomputer Center and on a local PC-cluster network running UNIX/Sun Solaris. The parallel performance of our codes has been excellent on both the Cray T3E and the PC clusters. In this paper, we present results on a focused laser beam propagating through a highly scattering, diluted intralipid solution. The dependence of the spatial distribution of light near the focal point on the concentration of the intralipid solution is studied and its significance is discussed.
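The parallelization idea described above — uncorrelated photon histories split across workers, each with its own random stream — can be sketched in modern Python with `multiprocessing` in place of PVM. The slab geometry, optical coefficients, and the simplified weight treatment below are illustrative assumptions, not the paper's actual code:

```python
import math
import multiprocessing as mp
import random

def run_photon_batch(args):
    """Trace one batch of independent photons through a scattering slab.

    Photon histories are uncorrelated, so each worker only needs its own
    random seed.  Geometry and optics (a 1-D slab, isotropic scattering,
    implicit absorption via a photon weight) are simplifying assumptions.
    """
    n_photons, seed, mu_s, mu_a, thickness = args
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    transmitted_weight = 0.0
    for _ in range(n_photons):
        z, uz, weight = 0.0, 1.0, 1.0        # depth, direction cosine, weight
        while 0.0 <= z <= thickness and weight > 1e-4:
            step = -math.log(1.0 - rng.random()) / mu_t  # sampled free path
            z += uz * step
            weight *= mu_s / mu_t                        # implicit absorption
            uz = 2.0 * rng.random() - 1.0                # isotropic rescattering
        if z > thickness:
            transmitted_weight += weight
    return transmitted_weight

if __name__ == "__main__":
    # Four independent batches, one per worker process.
    batches = [(5000, seed, 10.0, 0.1, 0.2) for seed in range(4)]
    with mp.Pool(4) as pool:
        total = sum(pool.map(run_photon_batch, batches))
    print("slab transmittance ~", round(total / 20000, 3))
```

Because each batch carries a distinct seed, the four workers generate disjoint photon histories and their partial results can simply be summed, just as the C/FORTRAN codes did under PVM.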
NASA Astrophysics Data System (ADS)
Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.
2012-05-01
In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environmental conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to continuously ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While the reactor is in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations can directly impact the accuracy of target location by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. In this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of the travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities that are similar to those provided by a deterministic method, such as the ray method.
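The travel-time sampling idea can be illustrated with a minimal sketch. All numerical values here — sound speed, target range, jitter level, and the Gaussian form of the fluctuations — are assumptions for illustration, not taken from the study:

```python
import random

# The analytically known moments of the travel time feed a random
# generator; the induced spread in the inferred target range is then
# read directly off the samples.
C_SODIUM = 2500.0            # m/s, nominal sound speed in liquid sodium
TARGET_RANGE = 0.5           # m, assumed true target distance

mean_t = 2.0 * TARGET_RANGE / C_SODIUM   # two-way time of flight
sigma_t = 0.002 * mean_t                 # assumed jitter from inhomogeneities

rng = random.Random(0)
ranges = [C_SODIUM * rng.gauss(mean_t, sigma_t) / 2.0 for _ in range(10000)]
bias = sum(ranges) / len(ranges) - TARGET_RANGE
rms = (sum((r - TARGET_RANGE) ** 2 for r in ranges) / len(ranges)) ** 0.5
print(f"range bias ~ {bias * 1e6:.1f} um, rms error ~ {rms * 1e6:.1f} um")
```

With these assumed numbers, a 0.2% travel-time jitter maps to a millimeter-scale spread in the located position, which is the kind of locating-accuracy degradation the stochastic model is built to quantify.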
Simulation of wave propagation in boreholes and radial profiling of formation elastic parameters
NASA Astrophysics Data System (ADS)
Chi, Shihong
Modern acoustic logging tools measure in-situ elastic wave velocities of rock formations. These velocities provide ground truth for time-depth conversions in seismic exploration. They are also widely used to quantify the mechanical strength of formations for applications such as wellbore stability analysis and sand production prevention. Despite continued improvements in acoustic logging technology and interpretation methods that take advantage of full waveform data, acoustic logs processed with current industry standard methods often remain influenced by formation damage and mud-filtrate invasion. This dissertation develops an efficient and accurate algorithm for the numerical simulation of wave propagation in fluid-filled boreholes in the presence of complex, near-wellbore damaged zones. The algorithm is based on the generalized reflection and transmission matrices method. Assessment of mud-filtrate invasion effects on borehole acoustic measurements is performed through simulation of time-lapse logging in the presence of complex radial invasion zones. The validity of log corrections performed with the Biot-Gassmann fluid substitution model is assessed by comparing the velocities estimated from array waveform data simulated for homogeneous and radially heterogeneous formations that sustain mud-filtrate invasion. The proposed inversion algorithm uses array waveform data to estimate radial profiles of formation elastic parameters. These elastic parameters can be used to construct more realistic near-wellbore petrophysical models for applications in seismic exploration, geo-mechanics, and production. Frequency-domain, normalized amplitude and phase information contained in array waveform data are input to the nonlinear Gauss-Newton inversion algorithm. Validation of both numerical simulation and inversion is performed against previously published results based on the Thomson-Haskell method and travel time tomography, respectively. This exercise indicates that the
NASA Astrophysics Data System (ADS)
Colli, Matteo; Lanza, Luca Giovanni; Rasmussen, Roy; Mireille Thériault, Julie
2014-05-01
Among the different environmental sources of error for ground-based solid precipitation measurements, wind is chiefly responsible for a large reduction in catching performance. This is due to the aerodynamic response of the gauge, which disturbs the originally undisturbed airflow and deforms the snowflake trajectories. Composite gauge/wind-shield measuring configurations improve the collection efficiency (CE) at low wind speeds (Uw), but the performance achievable under severe airflow velocities and the role of turbulence still have to be explained. This work aims to assess the wind-induced errors of a Geonor T-200B vibrating-wire gauge equipped with a single Alter shield. This is a common measuring system for solid precipitation, and it constitutes the R3 reference system in the ongoing WMO Solid Precipitation Intercomparison Experiment (SPICE). The analysis is carried out by adopting advanced Computational Fluid Dynamics (CFD) tools for the numerical simulation of the turbulent airflow in the proximity of the catching section of the gauge. The airflow patterns were computed by running both time-dependent (Large Eddy Simulation) and time-independent (Reynolds-Averaged Navier-Stokes) simulations on the Yellowstone high-performance computing system of the National Center for Atmospheric Research. The evaluation of CE under different Uw conditions was obtained by running a Lagrangian model for the calculation of the snowflake trajectories building on the simulated airflow patterns. Particular attention has been paid to the sensitivity of the trajectories to different snow particle sizes and water contents (corresponding to dry and wet snow). The results are illustrated in comparative form between the different methodologies adopted and existing in-field CE evaluations based on double-shield reference gauges.
Steepening of parallel propagating hydromagnetic waves into magnetic pulsations - A simulation study
NASA Technical Reports Server (NTRS)
Akimoto, K.; Winske, D.; Onsager, T. G.; Thomsen, M. F.; Gary, S. P.
1991-01-01
The steepening mechanism of parallel-propagating low-frequency MHD-like waves observed upstream of the earth's quasi-parallel bow shock has been investigated by means of electromagnetic hybrid simulations. It is shown that an ion beam, through the resonant electromagnetic ion/ion instability, excites large-amplitude waves, which consequently pitch-angle scatter, decelerate, and eventually magnetically trap beam ions in the regions where the wave amplitudes are largest. As a result, the beam ions become bunched in both space and gyrophase. As these higher-density, nongyrotropic beam segments form, the hydromagnetic waves rapidly steepen, resulting in magnetic pulsations with properties generally in agreement with observations. This steepening process operates on the scale of the linear growth time of the resonant ion/ion instability. Many of the pulsations generated by this mechanism are left-hand polarized in the spacecraft frame.
Chen, Qiang; Chen, Bin
2012-10-01
In this paper, a hybrid electrodynamics and kinetics numerical model based on the finite-difference time-domain (FDTD) method and the lattice Boltzmann method is presented for electromagnetic wave propagation in weakly ionized hydrogen plasmas. In this framework, a multicomponent Bhatnagar-Gross-Krook collision model considering both elastic and Coulomb collisions and a multicomponent force model based on the Guo model are introduced, which provide a fine-grained description of the interaction between an electromagnetic wave and a weakly ionized plasma. Cubic spline interpolation and a mean filtering technique are separately introduced to handle the multiscale problem and to smooth physical quantities polluted by numerical noise. Several simulations have been carried out to validate the model. The numerical results are consistent with a simplified analytical model, demonstrating that this model can successfully obtain satisfactory numerical solutions.
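As a minimal illustration of the electrodynamics half of such a hybrid scheme, the following sketch advances a 1-D Yee-grid FDTD update in vacuum at the magic (Courant) time step; the lattice-Boltzmann plasma coupling and the collision models of the paper are omitted, and the grid size and source are arbitrary:

```python
import math

# 1-D Yee-grid FDTD in vacuum, normalized units, Courant number 1.
# Staggered E and H fields are leapfrogged in time; a soft source
# injects a modulated Gaussian pulse at one cell.
N, STEPS = 400, 300
ez = [0.0] * N
hy = [0.0] * N

for t in range(STEPS):
    for i in range(N - 1):
        hy[i] += ez[i + 1] - ez[i]           # update H from the curl of E
    for i in range(1, N):
        ez[i] += hy[i] - hy[i - 1]           # update E from the curl of H
    ez[50] += math.sin(0.1 * t) * math.exp(-((t - 60) / 20.0) ** 2)  # soft source

peak = max(abs(e) for e in ez)
print("peak |Ez| after propagation:", round(peak, 3))
```

At the magic time step this 1-D scheme propagates the pulse exactly one cell per step, which makes it a convenient baseline before any plasma-current coupling is added.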
Simulation of Vibrational Spectra of Large Molecules by Arbitrary Time Propagation.
Kubelka, Jan; Bouř, Petr
2009-01-13
Modern ab initio and multiscale methods enable the simulation of vibrational properties of very large molecules. Within the harmonic approximation, the traditional generation of the spectra based on force field diagonalization can become inefficient due to excessive demands on computer time and memory. The present study proposes to avoid matrix diagonalization completely by generating the spectral shapes directly. For infrared absorption (IR) and vibrational circular dichroism (VCD), electric and magnetic dipole moments are propagated in a fictitious time and spectral intensities are obtained by Fourier transformation. The algorithm scales quasi-linearly, and for model polypeptide molecules the method was found numerically stable and faithfully reproduced exact transition frequencies and relative intensities.
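The core trick — recovering transition frequencies from a Fourier transform of a propagated signal instead of diagonalizing a force-constant matrix — can be demonstrated on a toy time trace. The two "transition" frequencies and the damping below are arbitrary stand-ins for a propagated dipole moment:

```python
import cmath
import math

# Toy stand-in for a propagated dipole moment: a damped sum of cosines
# at two assumed frequencies (arbitrary units).  Fourier transforming
# the time trace yields peaks at those frequencies without any matrix
# diagonalization.
freqs = [0.8, 1.7]
dt, n = 0.05, 2048
signal = [sum(math.cos(w * k * dt) for w in freqs) * math.exp(-0.002 * k)
          for k in range(n)]

def spectral_amplitude(sig, step, w):
    """Discrete Fourier amplitude of the signal at angular frequency w."""
    return abs(sum(s * cmath.exp(-1j * w * k * step)
                   for k, s in enumerate(sig))) * step

grid = [0.5 + 0.01 * i for i in range(160)]          # scan w = 0.5 .. 2.09
amps = [spectral_amplitude(signal, dt, w) for w in grid]
peak_w = grid[amps.index(max(amps))]
print("strongest peak near w =", round(peak_w, 2))
```

The spectrum peaks at the input frequencies, and the cost scales with the trace length rather than with the cube of the number of modes, which is the quasi-linear scaling the abstract refers to.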
Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J; Koole, Sander L
2016-03-23
The error-related negativity (ERN or Ne) is a negative event-related brain potential that peaks about 20-100 ms after people make an incorrect response in choice reaction time tasks. Prior research has shown that the ERN may be enhanced by situational and dispositional factors that promote intrinsic motivation. Building on and extending this work, the authors hypothesized that simulated interpersonal touch may increase task engagement and thereby increase ERN amplitude. To test this notion, 20 participants performed a Go/No-Go task while holding a teddy bear or a same-sized cardboard box. As expected, the ERN was significantly larger when participants held a teddy bear rather than a cardboard box. This effect was most pronounced for people high (rather than low) in trait intrinsic motivation, who may depend more on intrinsically motivating task cues to maintain task engagement. These findings highlight the potential benefits of simulated interpersonal touch in stimulating attention to errors, especially among people who are intrinsically motivated.
Prôa, Miguel; O'Higgins, Paul; Monteiro, Leandro R
2013-01-01
Studies of evolutionary divergence using quantitative genetic methods are centered on the additive genetic variance-covariance matrix (G) of correlated traits. However, estimating G properly requires large samples and complicated experimental designs. Multivariate tests for neutral evolution commonly replace average G by the pooled phenotypic within-group variance-covariance matrix (W) for evolutionary inferences, but this approach has been criticized due to the lack of exact proportionality between genetic and phenotypic matrices. In this study, we examined the consequence, in terms of type I error rates, of replacing average G by W in a test of neutral evolution that measures the regression slope between among-population variances and within-population eigenvalues (the Ackermann and Cheverud [AC] test) using a simulation approach to generate random observations under genetic drift. Our results indicate that the type I error rates for the genetic drift test are acceptable when using W instead of average G when the matrix correlation between the ancestral G and P is higher than 0.6, the average character heritability is above 0.7, and the matrices share principal components. For less-similar G and P matrices, the type I error rates would still be acceptable if the ratio between the number of generations since divergence and the effective population size (t/N(e)) is smaller than 0.01 (large populations that diverged recently). When G is not known in real data, a simulation approach to estimate expected slopes for the AC test under genetic drift is discussed.
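The logic of the AC regression test can be sketched on simulated drift data. The eigenvalues, the t/N(e) value, and the diagonal-G simplification below are illustrative assumptions; under pure drift the expected log-log slope is 1:

```python
import math
import random

# Sketch of the Ackermann-Cheverud drift test: under pure drift the
# among-population variances along the within-population principal axes
# are proportional to the within eigenvalues, so the log-log regression
# slope is expected to be 1.  Traits here are independent (diagonal G),
# so the principal axes coincide with the traits -- a deliberate
# simplification.
rng = random.Random(1)
eigvals = [4.0, 2.0, 1.0, 0.5]   # within-population variances (proxy for G)
t_over_ne = 0.05                 # generations since divergence / pop. size
npop = 200

among = []
for lam in eigvals:
    means = [rng.gauss(0.0, math.sqrt(lam * t_over_ne)) for _ in range(npop)]
    m = sum(means) / npop
    among.append(sum((x - m) ** 2 for x in means) / (npop - 1))

xs = [math.log(l) for l in eigvals]
ys = [math.log(v) for v in among]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(f"regression slope ~ {slope:.2f} (drift expectation: 1)")
```

A slope significantly different from 1 would lead the test to reject neutral divergence; the type I error question in the abstract is how often this rejection happens spuriously when W stands in for G.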
Simulation study on light propagation in an anisotropic turbulence field of entrainment zone.
Yuan, Renmin; Sun, Jianning; Luo, Tao; Wu, Xuping; Wang, Chen; Fu, Yunfei
2014-06-01
The convective atmospheric boundary layer was modeled in a water tank. In the entrainment zone (EZ), at the top of the convective boundary layer (CBL), the turbulence is anisotropic. An anisotropy coefficient was introduced in the presented anisotropic turbulence model. A laser beam was directed horizontally through the EZ modeled in the water tank. The image of the two-dimensional (2D) light intensity fluctuations was formed on a receiving plate perpendicular to the light path and was recorded by a CCD. The spatial spectra of both horizontal and vertical light intensity fluctuations were analyzed. The results indicate that the light intensity fluctuations in the EZ exhibit strongly anisotropic characteristics. Numerical simulation shows a linear relationship between the anisotropy coefficients and the ratio of the horizontal to vertical fluctuation spectra peak wavelengths. Using the measured temperature fluctuations along the light path at different heights, together with the relationship between temperature and refractive index, the one-dimensional (1D) refractive index fluctuation spectra were derived. The anisotropy coefficients were estimated from the 2D light intensity fluctuation spectra modeled by the water tank. The turbulence parameters can then be obtained using the 1D refractive index fluctuation spectra and the corresponding anisotropy coefficients. These parameters were used in a numerical simulation of light propagation, whose results show that this approach can reproduce the anisotropic features of light intensity fluctuations in the EZ modeled by the water tank experiment.
Computational Simulation of Damage Propagation in Three-Dimensional Woven Composites
NASA Technical Reports Server (NTRS)
Huang, Dade; Minnetyan, Levon
2005-01-01
Three dimensional (3D) woven composites have demonstrated multi-directional properties and improved transverse strength, impact resistance, and shear characteristics. The objective of this research is to develop a new model for predicting the elastic constants, hygrothermal effects, thermomechanical response, and stress limits of 3D woven composites; and to develop a computational tool to facilitate the evaluation of 3D woven composite structures with regard to damage tolerance and durability. Fiber orientations of weave and braid patterns are defined with reference to composite structural coordinates. Orthotropic ply properties and stress limits computed via micromechanics are transformed to composite structural coordinates and integrated to obtain the 3D properties. The various stages of degradation, from damage initiation to collapse of structures, in the 3D woven structures are simulated for the first time. Three dimensional woven composite specimens with various woven patterns under different loading conditions, such as tension, compression, bending, and shear are simulated in the validation process of this research. Damage initiation, growth, accumulation, and propagation to fracture are included in these simulations.
López, Rodrigo A.; Muñoz, Víctor; Viñas, Adolfo F.; Valdivia, Juan A.
2015-09-15
We use a particle-in-cell simulation to study the propagation of localized structures in a magnetized electron-positron plasma with relativistic finite temperature. We use as initial condition for the simulation an envelope soliton solution of the nonlinear Schrödinger equation, derived from the relativistic two fluid equations in the strongly magnetized limit. This envelope soliton turns out not to be a stable solution for the simulation and splits in two localized structures propagating in opposite directions. However, these two localized structures exhibit a soliton-like behavior, as they keep their profile after they collide with each other due to the periodic boundary conditions. We also observe the formation of localized structures in the evolution of a spatially uniform circularly polarized Alfvén wave. In both cases, the localized structures propagate with an amplitude independent velocity.
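For reference, a bright-soliton envelope of the focusing nonlinear Schrödinger equation — the kind of profile used as the initial condition here — can be sampled as follows. This uses the generic normalization i u_t + (1/2) u_xx + |u|^2 u = 0, not the specific coefficients derived from the relativistic two-fluid equations:

```python
import math

def nls_bright_soliton(x, amplitude=0.5):
    """Bright-soliton envelope |u(x)| = a * sech(a*x) of the focusing NLS
    equation i u_t + (1/2) u_xx + |u|^2 u = 0; the full solution carries
    the phase factor exp(i a**2 t / 2).  Generic normalization only."""
    a = amplitude
    return a / math.cosh(a * x)

# Sampled initial condition on a uniform grid, as a PIC setup would use.
xs = [-20.0 + 0.1 * i for i in range(400)]
profile = [nls_bright_soliton(x) for x in xs]
print("peak amplitude:", round(max(profile), 3))
```

Substituting u = a sech(ax) exp(i a^2 t / 2) into the equation and using (sech y)'' = sech y - 2 sech^3 y confirms the profile is an exact solution of this normalized NLS, even though (as the abstract reports) it need not survive as a stable structure in the full kinetic simulation.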
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background: Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective: The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods: The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results: Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions: Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at
Yılmaz, Bülent; Çiftçi, Emre
2013-06-01
Extracorporeal Shock Wave Lithotripsy (ESWL) is based on disintegration of kidney stones by delivering high-energy shock waves that are created outside the body and transmitted through the skin and body tissues. Nowadays, high-energy shock waves are also used in orthopedic operations and are being investigated for use in the treatment of myocardial infarction and cancer. Because of these new application areas, novel lithotriptor designs are needed for different kinds of treatment strategies. In this study, our aim was to develop a versatile computer simulation environment that would give device designers working on the various medical applications of the shock wave principle a substantial amount of flexibility while testing the effects of new parameters such as reflector size, material properties of the medium, water temperature, and different clinical scenarios. For this purpose, we created a finite-difference time-domain (FDTD)-based computational model in which most of the physical system parameters were defined as inputs and/or as variables in the simulations. We constructed a realistic computational model of a commercial electrohydraulic lithotriptor and optimized our simulation program using results obtained by the manufacturer in an experimental setup. We then compared the simulation results with results from an experimental setup in which the oxygen level in water was varied. Finally, we studied the effects of changing input parameters such as ellipsoid size and material, temperature change in the wave propagation media, and shock wave source point misalignment. The simulation results were consistent with the experimental results and the expected effects of variation in the physical parameters of the system. The results of this study encourage further investigation and provide adequate evidence that numerical modeling of a shock wave therapy system is feasible and can provide a practical means to test novel ideas in new device design procedures.
Jha, Pallavi; Kumar Verma, Nirmal
2014-06-15
A one-dimensional numerical model for studying terahertz radiation generation by intense laser pulses propagating, in the extraordinary mode, through magnetized plasma has been presented. The direction of the static external magnetic field is perpendicular to the polarization as well as propagation direction of the laser pulse. A transverse electromagnetic wave with frequency in the terahertz range is generated due to the presence of the magnetic field. Further, two-dimensional simulations using XOOPIC code show that the THz fields generated in plasma are transmitted into vacuum. The fields obtained via simulation study are found to be compatible with those obtained from the numerical model.
NASA Astrophysics Data System (ADS)
Alves Batista, Rafael; Dundovic, Andrej; Erdmann, Martin; Kampert, Karl-Heinz; Kuempel, Daniel; Müller, Gero; Sigl, Guenter; van Vliet, Arjen; Walz, David; Winchen, Tobias
2016-05-01
We present the simulation framework CRPropa version 3, designed for efficient development of astrophysical predictions for ultra-high energy particles. Users can assemble modules for the most relevant propagation effects in galactic and extragalactic space, include their own physics modules with new features, and receive as output primary and secondary cosmic messengers including nuclei, neutrinos and photons. Extending the propagation physics contained in the previous CRPropa version, the new version facilitates high-performance computing and comprises new physical features such as an interface for galactic propagation using lensing techniques, an improved photonuclear interaction calculation, and propagation in time-dependent environments to take into account cosmic evolution effects in anisotropy studies and variable sources. First applications using highlighted features are presented as well.
NASA Astrophysics Data System (ADS)
Suvorov, Alexey; Cai, Yong Q.; Sutter, John P.; Chubar, Oleg
2014-09-01
Until now, simulation of perfect-crystal optics has not been available in the "Synchrotron Radiation Workshop" (SRW) wave-optics computer code, hindering the accurate modelling of synchrotron radiation beamlines containing optical components with multiple-crystal arrangements, such as double-crystal monochromators and high-energy-resolution monochromators. A new module has been developed for SRW for calculating dynamical diffraction from a perfect crystal in the Bragg case. We demonstrate its successful application to the modelling of partially coherent undulator radiation propagating through the Inelastic X-ray Scattering (IXS) beamline of the National Synchrotron Light Source II (NSLS-II) at Brookhaven National Laboratory. The IXS beamline contains a double-crystal and a multiple-crystal high-energy-resolution monochromator, as well as complex optics such as compound refractive lenses and Kirkpatrick-Baez mirrors for X-ray beam transport and shaping, which makes it an excellent case for benchmarking the new functionalities of the updated SRW code. As a photon-hungry experimental technique, this case study for the IXS beamline is particularly valuable as it provides an accurate evaluation of the photon flux at the sample position, using the most advanced simulation methods and taking into account the parameters of the electron beam, the details of the undulator source, and the crystal optics.
Difference in Simulated Low-Frequency Sound Propagation in the Various Species of Baleen Whale
NASA Astrophysics Data System (ADS)
Tsuchiya, Toshio; Naoi, Jun; Futa, Koji; Kikuchi, Toshiaki
2004-05-01
Whales found in the north Pacific are known to migrate over several thousand kilometers, from the Alaskan coast, where they feed heartily during the summer, to low-latitude waters, where they breed during the winter. It is therefore assumed that whales use the "deep sound channel" for their long-distance communication. The main objective of this study is to clarify the behaviors of baleen whales from the standpoint of acoustical oceanography. Hence, the authors investigated the possibility of long-distance communication in various species of baleen whales by simulating the long-distance propagation of their sound transmissions, applying mode theory to actual sound speed profiles at the species' transmission frequencies. As a result, the possibility of long-distance communication among blue whales using the deep sound channel was indicated. It was also indicated that communication among fin whales and blue whales can be made possible by coming close to shore slopes such as those of the Island of Hawaii.
Numerical Simulation of Stoneley Surface Wave Propagating Along Elastic-Elastic Interface
NASA Astrophysics Data System (ADS)
Korneev, V. A.; Zuev, M. A.; Petrov, P.; Magomedov, M.
2014-12-01
There are seven waves in the dynamic theory of elasticity that are named after their discoverers. In 1885, Lord Rayleigh published a paper describing a wave capable of propagating along the free surface of an elastic half-space. In 1911, Love considered pure shear motion in a model of an elastic layer bounded by an elastic half-space. In 1917, Lamb discovered symmetric and asymmetric waves propagating in an isolated elastic plate. Stoneley (1924) found that a surface wave can propagate along an interface between two elastic half-spaces for some parameter combinations, and Scholte then showed in 1942 that, in a model where one of the half-spaces is fluid, the surface wave can exist for any parameters. The sixth wave is named after Biot (1956), and it describes a slow diffusive wave in fluid-saturated poroelastic media. Finally, in 1962 Krauklis found a dispersive fluid wave in a system of a fluid layer bounded by two elastic half-spaces. Remarkably, all but one of the named waves were found and predicted theoretically, as the results of mathematical and physical approaches to exploring Nature, and were later confirmed in experiments and used in various scientific and practical applications. The only wave that until now has been observed neither numerically nor experimentally is the Stoneley wave. A likely reason lies in the rather restricted combinations of material parameters for which this wave exists. Indeed, the ratio R of shear velocities in a model must lie inside the interval (0.8742 - 1), and the ratio of the Stoneley wave velocity to the largest shear wave velocity must lie in the interval (0.8742 - R). To fill the gap, we performed a 2D finite-difference simulation for a model consisting of polystyrene (with velocities Vp1 = 2350 m/s, Vs1 = 1190 m/s, and density Rho1 = 1.06 g/cm3) and gold (with velocities Vp2 = 3240 m/s, Vs2 = 1200 m/s, and density Rho2 = 19.7 g/cm3). A corresponding root of the dispersion equation was found with the help of an original
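The existence window quoted above is easy to check for a candidate material pair. The sketch below tests only the necessary shear-velocity-ratio condition; the full dispersion equation, which also involves the densities and P-wave speeds, is not solved here:

```python
def stoneley_ratio_ok(vs1, vs2, lo=0.8742, hi=1.0):
    """Check whether the shear-velocity ratio of two solids falls in the
    window quoted for Stoneley-wave existence.  This is a necessary
    condition only; the full dispersion equation imposes further
    constraints involving densities and P-wave velocities."""
    r = min(vs1, vs2) / max(vs1, vs2)
    return lo < r < hi, r

# Polystyrene vs gold shear velocities from the abstract, in m/s.
ok, r = stoneley_ratio_ok(1190.0, 1200.0)
print(f"shear-velocity ratio = {r:.4f}, inside window: {ok}")
```

For this pair the ratio is about 0.99, comfortably inside (0.8742, 1), which is precisely why this material combination was chosen for the simulation.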
Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD
Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.; Cyr, Eric C.; Wildey, Timothy Michael
2014-01-01
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.
Bizzarri, A.; Dunham, Eric M.; Spudich, P.
2010-01-01
We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. (2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a Mach pulse causes approximately an ω−1.7 high frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation
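The spectral-falloff measurement can be mimicked on synthetic data: build a displacement record whose Fourier amplitude spectrum decays as ω^-1.7 (the slope the simulations attribute to Mach-pulse ground displacement) and read the slope back from the discrete spectrum. Frequencies are placed on exact DFT bins, so the recovered slope is clean; all record parameters are arbitrary:

```python
import cmath
import math

# Synthetic displacement record: a superposition of cosines whose
# amplitudes follow w**-1.7, sampled so every component sits on an
# exact DFT bin (no spectral leakage).
n, dt = 2048, 0.01
ws = [2.0 * math.pi * k / (n * dt) for k in range(1, 80)]
disp = [sum(w ** -1.7 * math.cos(w * j * dt) for w in ws) for j in range(n)]

def fas(sig, w):
    """Fourier amplitude spectrum of sig at angular frequency w."""
    return abs(sum(s * cmath.exp(-1j * w * j * dt)
                   for j, s in enumerate(sig))) * dt

# Log-log slope between two well-separated bin frequencies.
w_lo, w_hi = ws[4], ws[40]
slope = math.log(fas(disp, w_hi) / fas(disp, w_lo)) / math.log(w_hi / w_lo)
print(f"measured log-log slope ~ {slope:.2f}")
```

The same two-point (or least-squares) slope estimate applied to the FAS of simulated ground-displacement records is what distinguishes an ω^-1.7 Mach-pulse falloff from the spectra of stations without Mach pulses.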
NASA Astrophysics Data System (ADS)
Spudich, P.; Bizzarri, A.; Dunham, E. M.
2009-12-01
We study how heterogeneous rupture propagation affects the coherence of shear- and Rayleigh-Mach wave fronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved due to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear-wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008): 1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. 2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a self-similar (k^-1) initial shear stress distribution causes an ω^-1.7 high frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we
NASA Astrophysics Data System (ADS)
Bizzarri, A.; Dunham, Eric M.; Spudich, P.
2010-08-01
We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. (2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a Mach pulse causes approximately an ω-1.7 high frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation of
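The ω^-1.7 high-frequency falloff of the Fourier amplitude spectrum (FAS) quoted in the records above can be illustrated numerically: compute the FAS of a ground-motion time history and fit its log-log slope over a frequency band. A minimal sketch; the synthetic trace and the fitting band below are placeholders, not the papers' data:

```python
import numpy as np

def fas(signal, dt):
    """One-sided Fourier amplitude spectrum of a time history."""
    freqs = np.fft.rfftfreq(len(signal), dt)
    amps = np.abs(np.fft.rfft(signal)) * dt
    return freqs, amps

def fit_falloff(freqs, amps, fmin, fmax):
    """Log-log slope p of FAS ~ f^p over the band [fmin, fmax]."""
    band = (freqs >= fmin) & (freqs <= fmax)
    p, _ = np.polyfit(np.log(freqs[band]), np.log(amps[band]), 1)
    return p

# Synthetic check: build a trace whose spectrum falls off as f^-1.7
dt, n = 0.01, 4096
f = np.fft.rfftfreq(n, dt)
spec = np.zeros_like(f, dtype=complex)
spec[1:] = f[1:] ** -1.7                    # zero-phase f^-1.7 spectrum
trace = np.fft.irfft(spec, n)
freqs, amps = fas(trace, dt)
print(round(float(fit_falloff(freqs, amps, 1.0, 20.0)), 2))  # -1.7
```

The fitted slope recovers the imposed exponent; on real records the band must sit above the corner frequency and below the noise floor.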
Guo, Min; Abbott, Derek; Lu, Minhua; Liu, Huafeng
2016-03-01
Shear wave propagation speed has been regarded as an attractive indicator for quantitatively measuring the intrinsic mechanical properties of soft tissues. While most existing techniques use acoustic radiation force (ARF) excitation with a focal spot region based on linear array transducers, we instead employ a special ARF with a focal line region and apply it to viscoelastic materials to create shear waves. First, a two-dimensional capacitive micromachined ultrasonic transducer with 64 × 128 fully controllable elements is realised and simulated to generate this special ARF. Then three-dimensional finite element models are developed to simulate the resulting shear wave propagation through tissue phantom materials. Three different phantoms are explored in our simulation study: (a) an isotropic viscoelastic medium, (b) a medium containing a cylindrical inclusion, and (c) a transverse isotropic viscoelastic medium. For each phantom, the ARF creates a quasi-plane shear wave which has a preferential propagation direction perpendicular to the focal line excitation. The propagation of the quasi-plane shear wave is investigated and then used to reconstruct shear moduli sequentially after the estimation of shear wave speed. In the phantom with a transverse isotropic viscoelastic medium, the anisotropy results in maximum speed parallel to the fiber direction and minimum speed perpendicular to the fiber direction. The simulation results show that the line excitation extends the displacement field to obtain a large imaging field in comparison with spot excitation, and demonstrate its potential usage in measuring the mechanical properties of anisotropic tissues. PMID:26768475
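The reconstruction chain described above (shear wave speed first, then modulus) can be sketched with a standard time-of-flight estimate: fit lateral position against wave arrival time, then use mu = rho * c^2 for a nearly incompressible medium. The data points below are synthetic illustrations, not the paper's simulation output:

```python
import numpy as np

def shear_wave_speed(positions_mm, arrival_times_ms):
    """Shear wave speed (m/s) from a linear fit of lateral position vs.
    arrival time (a time-of-flight estimate; mm/ms equals m/s)."""
    speed, _ = np.polyfit(arrival_times_ms, positions_mm, 1)
    return speed

def shear_modulus(speed_m_s, density_kg_m3=1000.0):
    """For a nearly incompressible elastic medium, mu = rho * c_s^2 (Pa)."""
    return density_kg_m3 * speed_m_s ** 2

x = np.array([2.0, 4.0, 6.0, 8.0])   # lateral positions, mm
t = x / 2.0                          # synthetic arrivals for c = 2 m/s
c = shear_wave_speed(x, t)
print(round(float(c), 2), round(float(shear_modulus(c)), 1))  # 2.0 4000.0
```

In an anisotropic phantom the same fit, applied along different propagation directions, yields the direction-dependent speeds the abstract describes.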
NASA Astrophysics Data System (ADS)
Lukin, I. P.; Rychkov, D. S.; Falits, A. V.; Lai, Kin S.; Liu, Min R.
2009-09-01
The method based on the generalisation of the phase screen method for a continuous random medium is proposed for simulating numerically the propagation of laser radiation in a turbulent atmosphere with precipitation. In the phase screen model for a discrete component of a heterogeneous 'air-rain droplet' medium, the amplitude screen describing the scattering of an optical field by discrete particles of the medium is replaced by an equivalent phase screen with a spectrum of the correlation function of the effective dielectric constant fluctuations that is similar to the spectrum of the discrete scattering component, i.e., water droplets in air. The 'turbulent' phase screen is constructed on the basis of the Kolmogorov model, while the 'rain' screen model utilises the exponential distribution of the number of rain drops with respect to their radii as a function of the rain intensity. The results of the numerical simulation are compared with the known theoretical estimates for a large-scale discrete scattering medium.
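A Kolmogorov 'turbulent' phase screen of the kind described above is commonly synthesised by FFT filtering of white Gaussian noise with the Kolmogorov spectrum. A minimal sketch; normalisation conventions vary between authors, and the grid parameters are illustrative:

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=0):
    """n x n Kolmogorov phase screen (radians) from FFT filtering of white
    Gaussian noise with Phi(k) ~ 0.023 r0^(-5/3) k^(-11/3); r0 is the
    Fried parameter. (Normalisation conventions vary between authors.)"""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, dx)                 # spatial frequencies, 1/m
    fx, fy = np.meshgrid(f, f)
    k = 2.0 * np.pi * np.sqrt(fx**2 + fy**2)  # angular wavenumber, rad/m
    k[0, 0] = np.inf                          # drop the undefined piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    df = 1.0 / (n * dx)                       # frequency-grid spacing
    return np.fft.ifft2(noise * np.sqrt(psd) * df).real * n * n

phz = kolmogorov_phase_screen(256, 0.01, r0=0.1)
print(phz.shape)   # (256, 256)
```

The 'rain' screen of the paper would replace the Kolmogorov PSD with a spectrum derived from the droplet-size distribution; the filtering machinery is unchanged.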
NASA Astrophysics Data System (ADS)
Freitas, Ana C. V.; Frederiksen, Jorgen S.; O'Kane, Terence J.; Ambrizzi, Tércio
2016-09-01
Ensemble simulations, using both coupled ocean-atmosphere (AOGCM) and atmosphere only (AGCM) general circulation models, are employed to examine the austral winter response of the Hadley circulation (HC) and stationary Rossby wave propagation (SRW) to a warming climate. Changes in the strength and width of the HC are firstly examined in a set of runs with idealized sea surface temperature (SST) perturbations as boundary conditions in the AGCM. Strong and weak SST gradient experiments (SG and WG, respectively) simulate changes in the HC intensity, whereas narrow (5°S-5°N) and wide (30°S-30°N) SST warming experiments simulate changes in the HC width. To examine the combined impact of changes in the strength and width of the HC upon SRW propagation two AOGCM simulations using different scenarios of increasing carbon dioxide (CO2) concentrations are employed. We show that, in contrast to a wide SST warming, the atmospheric simulations with a narrow SST warming produce stronger and very zonally extended Rossby wave sources, leading to stronger and eastward shifted troughs and ridges. Simulations with SST anomalies, either in narrow or wide latitude bands only modify the intensity of the troughs and ridges. SST anomalies outside the narrow latitude band of 5°S-5°N do not significantly affect the spatial pattern of SRW propagation. AOGCM simulations with 1 %/year increasing CO2 concentrations or 4 times preindustrial CO2 levels reveal very similar SRW responses to the atmospheric only simulations with anomalously wider SST warming. Our results suggest that in a warmer climate, the changes in the strength and width of the HC act in concert to significantly alter SRW sources and propagation characteristics.
NASA Astrophysics Data System (ADS)
Kowalewski, Markus; Mukamel, Shaul
2015-07-01
Femtosecond Stimulated Raman Spectroscopy (FSRS) signals that monitor the excited state conical intersections dynamics of acrolein are simulated. An effective time dependent Hamiltonian for two C—H vibrational marker bands is constructed on the fly using a local mode expansion combined with a semi-classical surface hopping simulation protocol. The signals are obtained by a direct forward and backward propagation of the vibrational wave function on a numerical grid. Earlier work is extended to fully incorporate the anharmonicities and intermode couplings.
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression is derived for the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam propagating through weak to moderate oceanic turbulence; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and, hence, on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
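The BER evaluation from a log-normal intensity PDF follows a standard pattern: average the conditional Gaussian error rate over the fading distribution, for example by Gauss-Hermite quadrature. A sketch under assumed on-off-keyed signalling with unit mean intensity; this is the textbook form, not necessarily the paper's exact expression:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from math import erfc, log, sqrt, pi

def ber_lognormal(q_factor, scint_index, n_nodes=40):
    """Average BER of an on-off-keyed link over log-normal intensity
    fading, via Gauss-Hermite quadrature. q_factor is the SNR Q-factor
    at the mean intensity <I> = 1; scint_index is sigma_I^2."""
    sigma2 = log(1.0 + scint_index)      # log-irradiance variance
    mu = -0.5 * sigma2                   # enforces <I> = 1
    x, w = hermgauss(n_nodes)            # quadrature nodes and weights
    intensities = np.exp(sqrt(2.0 * sigma2) * x + mu)
    return sum(wi * 0.5 * erfc(q_factor * Ii / sqrt(2.0))
               for wi, Ii in zip(w, intensities)) / sqrt(pi)

print(ber_lognormal(3.0, 1e-12))  # ~Q(3) = 1.35e-3 (negligible fading)
print(ber_lognormal(3.0, 0.5))    # larger: scintillation degrades the BER
```

Feeding in the scintillation index computed from the oceanic-turbulence spectrum then reproduces the qualitative trend reported: larger sigma_I^2 gives a larger BER at fixed SNR.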
Alastruey, Jordi; Khir, Ashraf W; Matthys, Koen S; Segers, Patrick; Sherwin, Spencer J; Verdonck, Pascal R; Parker, Kim H; Peiró, Joaquim
2011-08-11
The accuracy of the nonlinear one-dimensional (1-D) equations of pressure and flow wave propagation in Voigt-type visco-elastic arteries was tested against measurements in a well-defined experimental 1:1 replica of the 37 largest conduit arteries in the human systemic circulation. The parameters required by the numerical algorithm were directly measured in the in vitro setup and no data fitting was involved. The inclusion of wall visco-elasticity in the numerical model reduced the underdamped high-frequency oscillations obtained using a purely elastic tube law, especially in peripheral vessels, which was previously reported in this paper [Matthys et al., 2007. Pulse wave propagation in a model human arterial network: Assessment of 1-D numerical simulations against in vitro measurements. J. Biomech. 40, 3476-3486]. In comparison to the purely elastic model, visco-elasticity significantly reduced the average relative root-mean-square errors between numerical and experimental waveforms over the 70 locations measured in the in vitro model: from 3.0% to 2.5% (p<0.012) for pressure and from 15.7% to 10.8% (p<0.002) for the flow rate. In the frequency domain, average relative errors between numerical and experimental amplitudes from the 5th to the 20th harmonic decreased from 0.7% to 0.5% (p<0.107) for pressure and from 7.0% to 3.3% (p<10(-6)) for the flow rate. These results provide additional support for the use of 1-D reduced modelling to accurately simulate clinically relevant problems at a reasonable computational cost.
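The relative root-mean-square errors used above to score the 1-D model against the in vitro waveforms can be computed in a few lines. Normalising by the RMS of the measurement is an assumption here; the paper may use a different normalisation:

```python
import numpy as np

def relative_rmse(simulated, measured):
    """Relative root-mean-square error (%) between two waveforms,
    normalised by the RMS of the measurement (one common convention)."""
    s, m = np.asarray(simulated, float), np.asarray(measured, float)
    return 100.0 * np.sqrt(np.mean((s - m) ** 2) / np.mean(m ** 2))

t = np.linspace(0.0, 1.0, 500)
p_meas = 80.0 + 20.0 * np.sin(2.0 * np.pi * t)   # "measured" pressure, mmHg
p_sim = 1.02 * p_meas                            # simulation off by +2%
print(round(float(relative_rmse(p_sim, p_meas)), 2))  # 2.0
```

Applied per measurement site and averaged over the 70 locations, this is the kind of summary statistic quoted in the abstract (e.g. 3.0% vs 2.5% for pressure).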
Numerical simulation of turbulent stratified flame propagation in a closed vessel
NASA Astrophysics Data System (ADS)
Gruselle, Catherine; Lartigue, Ghislain; Pepiot, Perrine; Moureau, Vincent; D'Angelo, Yves
2012-11-01
Reducing pollutant emissions while keeping a high combustion efficiency and a low fuel consumption is an important challenge for both gas turbines (GT) and internal combustion engines (ICE). To fulfill these new constraints, stratified combustion may constitute an efficient strategy. A tabulated chemistry approach based on FPI combined with a low-Mach number method is applied in the analysis of a turbulent propane-air flame with equivalence ratio (ER) stratification, which has been studied experimentally by Balusamy [S. Balusamy, Ph.D Thesis, INSA-Rouen (2010)]. Flame topology, along with flame velocity statistics, is well reproduced in the simulation, even if time-history effects are not accounted for in the tabulated approach. However, these effects may become significant when exhaust gas recirculation (EGR) is introduced. To better quantify them, both ER- and EGR-stratified two-dimensional flames are simulated using finite-rate chemistry and a semi-detailed mechanism for propane oxidation. The numerical implementation is first investigated in terms of efficiency and accuracy, with a focus on splitting errors. The resulting flames are then analyzed to investigate potential extensions of the FPI technique to EGR stratification.
Sources of error in CEMRA-based CFD simulations of the common carotid artery
NASA Astrophysics Data System (ADS)
Khan, Muhammad Owais; Wasserman, Bruce A.; Steinman, David A.
2013-03-01
Magnetic resonance imaging is often used as a source for reconstructing vascular anatomy for the purpose of computational fluid dynamics (CFD) analysis. We recently observed large discrepancies in such "image-based" CFD models of the normal common carotid artery (CCA) derived from contrast enhanced MR angiography (CEMRA), when compared to phase contrast MR imaging (PCMRI) of the same subjects. A novel quantitative comparison of velocity profile shape of N=20 cases revealed an average 25% overestimation of velocities by CFD, attributed to a corresponding underestimation of lumen area in the CEMRA-derived geometries. We hypothesized that this was due to blurring of edges in the images caused by dilution of contrast agent during the relatively long elliptic centric CEMRA acquisitions, and confirmed this with MRI simulations. Rescaling of CFD models to account for the lumen underestimation improved agreement with the velocity levels seen in the corresponding PCMRI images, but discrepancies in velocity profile shape remained, with CFD tending to over-predict velocity profile skewing. CFD simulations incorporating realistic inlet velocity profiles and non-Newtonian rheology had a negligible effect on velocity profile skewing, suggesting a role for other sources of error or modeling assumptions. In summary, our findings suggest that caution should be exercised when using elliptic-centric CEMRA data as a basis for image-based CFD modeling, and emphasize the importance of comparing image-based CFD models against in vivo data whenever possible.
Fitzpatrick, Gianna M.; Wells, R. Glenn
2006-08-15
Heart disease is a leading killer in Canada and positron emission tomography (PET) provides clinicians with in vivo metabolic information for diagnosing heart disease. Transmission data are usually acquired with {sup 68}Ge, although the advent of PET/CT scanners has made computed tomography (CT) an alternative option. The fast data acquisition of CT compared to PET may cause potential misregistration problems, leading to inaccurate attenuation correction (AC). Using Monte Carlo simulations and an anthropomorphic dynamic computer phantom, this study determines the magnitude and location of respiratory-induced errors in radioactivity uptake measured in cardiac PET/CT. A homogeneous tracer distribution in the heart was considered. The AC was based on (1) a time-averaged attenuation map (2) CT maps from a single phase of the respiratory cycle, and (3) CT maps phase matched to the emission data. Circumferential profiles of the heart uptake were compared and differences of up to 24% were found between the single-phase CT-AC method and the true phantom values. Simulation results were supported by a PET/CT canine study which showed differences of up to 10% in the heart uptake in the lung-heart boundary region when comparing {sup 68}Ge- to CT-based AC with the CT map acquired at end inhalation.
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) Unit-Memory Convolutional Encoder module (UMCEncd); (2) Hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMC's, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMC's was driven, in part, by the desire to investigate high-rate convolutional codes which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMC's were found which are good candidates for inner codes. Besides the further developments of the simulator, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.
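A unit-memory convolutional code produces each n-bit output block from the current and the single previous k-bit input block. A toy sketch of such an encoder; the generator matrices below are arbitrary illustrations, not the codes tabulated in the report:

```python
import numpy as np

def umc_encode(bits, G0, G1, k):
    """Unit-memory convolutional encoder over GF(2): each output block
    depends on the current and the single previous k-bit input block,
    v_i = u_i G0 + u_(i-1) G1 (mod 2)."""
    u_prev = np.zeros(k, dtype=int)
    out = []
    for i in range(0, len(bits), k):
        u = np.asarray(bits[i:i + k], dtype=int)
        out.extend(int(b) for b in (u @ G0 + u_prev @ G1) % 2)
        u_prev = u
    return out

# Toy rate-2/3 UMC (k = 2 bits in, n = 3 bits out per block)
G0 = np.array([[1, 0, 1], [0, 1, 1]])
G1 = np.array([[1, 1, 0], [0, 1, 1]])
print(umc_encode([1, 0, 1, 1], G0, G1, k=2))  # [1, 0, 1, 0, 0, 0]
```

High-rate codes of this family, as the report notes, are attractive inner codes for concatenated schemes because the single block of memory keeps the decoder trellis small.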
NASA Astrophysics Data System (ADS)
Mossoulina, O. A.; Kirilenko, M. S.; Khonina, S. N.
2016-08-01
We use radial Fractional Fourier transform to model vortex laser beams propagation in optical waveguides with parabolic dependence of the refractive index. To overcome calculation difficulties at distances proportional to a quarter of the period we use varied calculation step. Numerical results for vortex modes superposition propagation in a parabolic optical fiber show that the transverse beam structure can be changed significantly during the propagation. To provide stable transverse distribution input scale modes should be in accordance with fiber parameters.
Parallax error in long-axial field-of-view PET scanners—a simulation study
NASA Astrophysics Data System (ADS)
Schmall, Jeffrey P.; Karp, Joel S.; Werner, Matt; Surti, Suleman
2016-07-01
There is a growing interest in the design and construction of a PET scanner with a very long axial extent. One critical design challenge is the impact of the long axial extent on the scanner spatial resolution properties. In this work, we characterize the effect of parallax error in PET system designs having an axial field-of-view (FOV) of 198 cm (total-body PET scanner) using fully-3D Monte Carlo simulations. Two different scintillation materials were studied: LSO and LaBr3. The crystal size in both cases was 4 × 4 × 20 mm3. Several different depth-of-interaction (DOI) encoding techniques were investigated to characterize the improvement in spatial resolution when using a DOI capable detector. To measure spatial resolution we simulated point sources in a warm background in the center of the imaging FOV, where the effects of axial parallax are largest, and at several positions radially offset from the center. Using a line-of-response based ordered-subset expectation maximization reconstruction algorithm we found that the axial resolution in an LSO scanner degrades from 4.8 mm to 5.7 mm (full width at half max) at the center of the imaging FOV when extending the axial acceptance angle (α) from ±12° (corresponding to an axial FOV of 18 cm) to the maximum of ±67°—a similar result was obtained with LaBr3, in which the axial resolution degraded from 5.3 mm to 6.1 mm. For comparison we also measured the degradation due to radial parallax error in the transverse imaging FOV; the transverse resolution, averaging radial and tangential directions, of an LSO scanner was degraded from 4.9 mm to 7.7 mm, for a measurement at the center of the scanner compared to a measurement with a radial offset of 23 cm. Simulations of a DOI detector design improved the spatial resolution in all dimensions. The axial resolution in the LSO-based scanner, with α = ± 67°, was improved from 5.7 mm to 5.0 mm by
Booher, Stephen R.; Bacon, Larry Donald
2006-02-01
is only evaluated along a 2-D path in the vertical orientation. This precludes modeling propagation in the urban canyons of metropolitan areas, where horizontal paths are dominant. It also precludes modeling exterior to interior propagation. In view of the apparent inadequacy of urban propagation within mission level models, as evidenced by EADSIM, the study also attempts to address possible solutions to the problem. Correction of the sparsing techniques in both TIREM and SEKE models is recommended. Both SEKE and TIREM are optimized for DTED level 1 data, sparsed at 3 arc seconds resolution. This led to significant errors when map data was sparsed at higher or lower resolution. TIREM's errors would be significantly reduced if the 999 point array limit was eliminated. This would permit using interval sizes equal to the map resolution for larger areas. This same problem could be fixed in SEKE by changing the interval spacing from a fixed 3 arc second resolution ({approx}93 meters) to an interval which is set at the map resolution. Additionally, the cell elevation interpolation method which TIREM uses is inappropriate for the man-made structures encountered in urban environments. Turning this method of determining height off, or providing a selectable switch is desired. In the near term, it appears that further research into ray-tracing models is appropriate. Codes such as RF-ProTEC, which can be dynamically linked to mission level models such as EADSIM, can provide the higher fidelity propagation calculations required, and still permit the dynamic interactions required of the mission level model. Additional research should also be conducted on the best methods of representing man-made structures to determine whether codes other than ray-trace can be used.
Low-cost simulation of guided wave propagation in notched plate-like structures
NASA Astrophysics Data System (ADS)
Glushkov, E.; Glushkova, N.; Eremin, A.; Giurgiutiu, V.
2015-09-01
The paper deals with the development of low-cost tools for fast computer simulation of guided wave propagation and diffraction in plate-like structures of variable thickness. It is focused on notched surface irregularities, which are the basic model for corrosion damages. Their detection and identification by means of active ultrasonic structural health monitoring technologies assumes the use of guided waves generated and sensed by piezoelectric wafer active sensors as well as the use of laser Doppler vibrometry for surface wave scanning and visualization. To create a theoretical basis for these technologies, analytically based computer models of various complexity have been developed. The simplest models based on the Euler-Bernoulli beam and Kirchhoff plate equations have exhibited a sufficiently wide frequency range of reasonable coincidence with the results obtained within more complex integral equation based models. Being practically inexpensive, they allow one to carry out a fast parametric analysis revealing characteristic features of wave patterns that can be then made more exact using more complex models. In particular, the effect of resonance wave energy transmission through deep notches has been revealed within the plate model and then validated by the integral equation based calculations and experimental measurements.
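The Kirchhoff plate model mentioned above yields a simple dispersion relation for the low-frequency flexural (A0-like) guided wave, which is what makes such models cheap for fast parametric studies. A sketch with textbook aluminium properties; the specific numbers are illustrative, not the paper's:

```python
import numpy as np

def flexural_phase_speed(freq_hz, h, E, rho, nu):
    """Phase speed of the low-frequency flexural (A0-like) wave in a
    Kirchhoff plate: D k^4 = rho h w^2  =>  c = sqrt(w) (D/(rho h))^(1/4)."""
    D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))   # plate bending stiffness
    w = 2.0 * np.pi * freq_hz
    return np.sqrt(w) * (D / (rho * h)) ** 0.25

# 1 mm aluminium plate at 50 and 100 kHz (a typical SHM guided-wave regime)
c50 = flexural_phase_speed(50e3, 1e-3, E=70e9, rho=2700.0, nu=0.33)
c100 = flexural_phase_speed(100e3, 1e-3, E=70e9, rho=2700.0, nu=0.33)
print(round(float(c50), 1), round(float(c100), 1))  # dispersive: c ~ sqrt(f)
```

A notch reduces the local thickness h, hence the local bending stiffness and wave speed, which is the mechanism behind the resonance transmission effects the paper analyses.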
Benchmark of numerical tools simulating beam propagation and secondary particles in ITER NBI
NASA Astrophysics Data System (ADS)
Sartori, E.; Veltri, P.; Dlougach, E.; Hemsworth, R.; Serianni, G.; Singh, M.
2015-04-01
Injection of high energy beams of neutral particles is a method for plasma heating in fusion devices. The ITER injector, and its prototype MITICA (Megavolt ITER Injector and Concept Advancement), are large extrapolations from existing devices: therefore numerical modeling is needed to set thermo-mechanical requirements for all beam-facing components. As the power and charge deposition originates from several sources (primary beam, co-accelerated electrons, and secondary production by beam-gas, beam-surface, and electron-surface interaction), the beam propagation along the beam line is simulated by comprehensive 3D models. This paper presents a comparative study between two codes: BTR has been used for several years in the design of the ITER HNB/DNB components; SAMANTHA code was independently developed and includes additional phenomena, such as secondary particles generated by collision of beam particles with the background gas. The code comparison is valuable in the perspective of the upcoming experimental operations, in order to prepare a reliable numerical support to the interpretation of experimental measurements in the beam test facilities. The power density map calculated on the Electrostatic Residual Ion Dump (ERID) is the chosen benchmark, as it depends on the electric and magnetic fields as well as on the evolution of the beam species via interaction with the gas. Finally the paper shows additional results provided by SAMANTHA, like the secondary electrons produced by volume processes accelerated by the ERID fringe-field towards the Cryopumps.
Estimation of crosstalk in LED fNIRS by photon propagation Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Iwano, Takayuki; Umeyama, Shinji
2015-12-01
fNIRS (functional near-infrared spectroscopy) can measure brain activity non-invasively and has advantages such as low cost and portability. While conventional fNIRS has used laser light, LED-based fNIRS has recently become common. Using LEDs, fNIRS equipment can be more inexpensive and more portable. LED light, however, has a wider illumination spectrum than laser light, which may change the crosstalk between the calculated concentration changes of oxygenated and deoxygenated hemoglobin. The crosstalk is caused by differences in light path length in the head tissues depending on the wavelengths used. We conducted Monte Carlo simulations of photon propagation in the tissue layers of the head (scalp, skull, CSF, gray matter, and white matter) to estimate the light path length in each layer. Based on the estimated path lengths, the crosstalk in fNIRS using LED light was calculated. Our results showed that LED light increases the crosstalk more than laser light does when certain combinations of wavelengths are adopted. Even in such cases, the crosstalk increase from using LED light can be effectively suppressed by replacing the extinction coefficients used in the hemoglobin calculation with their weighted averages over the illumination spectrum.
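The crosstalk mechanism described above can be quantified with the modified Beer-Lambert model: if the inversion assumes extinction coefficients and path lengths that differ from the true ones, the recovered concentrations mix HbO and HbR. A sketch in which every numerical value is hypothetical, chosen only to make the mixing visible:

```python
import numpy as np

def crosstalk_matrix(eps_assumed, eps_true, L_assumed, L_true):
    """Crosstalk in a two-wavelength modified Beer-Lambert inversion.
    Rows are wavelengths, columns (HbO, HbR) extinction coefficients,
    L the partial path lengths. Recovered = C @ true concentrations,
    so off-diagonal elements of C are the HbO<->HbR crosstalk."""
    A_true = np.diag(L_true) @ np.asarray(eps_true, float)
    A_assumed = np.diag(L_assumed) @ np.asarray(eps_assumed, float)
    return np.linalg.inv(A_assumed) @ A_true

# Hypothetical numbers: same extinction table at both wavelengths, but the
# simulated "true" path lengths differ from those assumed in the inversion.
eps = np.array([[0.7, 1.6],    # 780 nm: (HbO, HbR), arbitrary units
                [1.2, 0.8]])   # 830 nm
C = crosstalk_matrix(eps, eps, L_assumed=[1.0, 1.0], L_true=[1.1, 0.95])
print(np.round(C, 3))          # off-diagonal terms quantify the crosstalk
```

The suppression strategy in the abstract corresponds to choosing `eps_assumed` as the spectrum-weighted average of the LED's extinction coefficients, which moves C back toward the identity matrix.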
Propagation of Electrical Excitation in a Ring of Cardiac Cells: A Computer Simulation Study
NASA Technical Reports Server (NTRS)
Kogan, B. Y.; Karplus, W. J.; Karpoukhin, M. G.; Roizen, I. M.; Chudin, E.; Qu, Z.
1996-01-01
The propagation of electrical excitation in a ring of cells described by the Noble, Beeler-Reuter (BR), Luo-Rudy I (LR I), and third-order simplified (TOS) mathematical models is studied using computer simulation. For each of the models it is shown that after the transition from steady-state circulation to quasi-periodicity, achieved by shortening the ring length (RL), the action potential duration (APD) restitution curve becomes a double-valued function and is located below the original (that of an isolated cell) APD restitution curve. The distributions of APD and diastolic interval (DI) along the ring for the entire range of RL corresponding to quasi-periodic oscillations remain periodic, with a period slightly different from two RLs. The 'S' shape of the original APD restitution curve determines the appearance of the second steady-state circulation region for short RLs. For all the models and the wide variety of their original APD restitution curves, no transition from quasi-periodicity to chaos was observed.
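The restitution dynamics described above can be mimicked by a one-dimensional map: the next action potential duration is a function of the preceding diastolic interval, with the cycle length fixed by the ring. A sketch; the single-exponential restitution curve and its parameters are a generic stand-in, not the Noble/BR/LR-I kinetics of the study:

```python
import numpy as np

def iterate_restitution(f_rest, cycle_length, apd0, n_beats=200):
    """Iterate APD_(n+1) = f(DI_n) with DI_n = CL - APD_n, mimicking a
    pulse circulating on a ring at fixed cycle length CL (ms)."""
    apd, history = apd0, []
    for _ in range(n_beats):
        di = cycle_length - apd
        if di <= 0:                 # conduction block: wave meets its tail
            break
        apd = f_rest(di)
        history.append(apd)
    return history

# Generic single-exponential restitution curve (parameters hypothetical)
f = lambda di: 200.0 * (1.0 - np.exp(-di / 60.0))

long_ring = iterate_restitution(f, cycle_length=350.0, apd0=150.0)
short_ring = iterate_restitution(f, cycle_length=200.0, apd0=150.0)
print(round(long_ring[-1], 1))                     # settles to a steady APD
print(round(float(np.std(short_ring[-20:])), 1))   # oscillates: slope of f > 1
```

The long ring converges to steady circulation; the short ring sustains APD oscillations, the map-level analogue of the quasi-periodic regime in the abstract.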
Modelling the propagation of terahertz radiation through a tissue simulating phantom.
Walker, Gillian C; Berry, Elizabeth; Smye, Stephen W; Zinov'ev, Nick N; Fitzgerald, Anthony J; Miles, Robert E; Chamberlain, Martyn; Smith, Michael A
2004-05-21
Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole-Cole model was used to predict the variation with frequency of the physical properties of the sample and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties, including the scattering characteristics, of the absorbing media is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
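The Cole-Cole model used above to predict the frequency variation of the sample's properties has a simple closed form that is easy to evaluate across the THz band. A sketch with illustrative, water-like parameter values (not the fitted values from the paper):

```python
import numpy as np

def cole_cole(freq_hz, eps_inf, delta_eps, tau, alpha):
    """Cole-Cole complex relative permittivity:
    eps(w) = eps_inf + delta_eps / (1 + (1j w tau)^(1 - alpha));
    alpha = 0 recovers the single-relaxation Debye model."""
    w = 2.0 * np.pi * np.asarray(freq_hz, float)
    return eps_inf + delta_eps / (1.0 + (1j * w * tau) ** (1.0 - alpha))

# Ballpark water-like parameters at THz frequencies (illustrative, not fitted)
f = np.linspace(0.1e12, 3.0e12, 5)
eps = cole_cole(f, eps_inf=3.5, delta_eps=75.0, tau=8.3e-12, alpha=0.02)
n_complex = np.sqrt(eps)              # complex refractive index
print(np.round(n_complex.real, 2))    # real index decreases with frequency
```

Both the thin-film and Monte Carlo models in the paper consume exactly this kind of frequency-dependent permittivity (converted to refractive index and absorption coefficient).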
Dagrau, Franck; Rénier, Mathieu; Marchiano, Régis; Coulouvrat, François
2011-07-01
Numerical simulation of nonlinear acoustics and shock waves in a weakly heterogeneous and lossless medium is considered. The wave equation is formulated so as to separate homogeneous diffraction, heterogeneous effects, and nonlinearities. A numerical method called heterogeneous one-way approximation for resolution of diffraction (HOWARD) is developed, which solves the homogeneous part of the equation in the spectral domain (both in time and space) through a one-way approximation neglecting backscattering. A second-order parabolic approximation is performed, but only on the small, heterogeneous part. The resulting equation is therefore more precise than the usual standard or wide-angle parabolic approximation. It has the same dispersion equation as the exact wave equation for all forward propagating waves, including evanescent waves. Finally, nonlinear terms are treated through an analytical, shock-fitting method. Several validation tests are performed through comparisons with analytical solutions in the linear case and outputs of the standard or wide-angle parabolic approximation in the nonlinear case. Numerical convergence tests and physical analysis are finally performed in the fully heterogeneous and nonlinear case of shock wave focusing through an acoustical lens.
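The homogeneous one-way spectral step at the core of such methods can be sketched with an angular-spectrum propagator. This minimal example handles only the homogeneous, linear part of the problem (the heterogeneous and nonlinear terms, which HOWARD treats separately, are omitted):

```python
import numpy as np

def one_way_step(field, dx, dz, k0):
    """Advance a 1-D transverse field by dz with the exact one-way
    (angular-spectrum) propagator exp(i*kz*dz), kz = sqrt(k0^2 - kx^2).
    Evanescent components (|kx| > k0) decay rather than propagate."""
    kx = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dx)
    kz = np.sqrt((k0**2 - kx**2).astype(complex))  # imaginary for evanescent waves
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))
```

Because the propagator is exact for all forward-travelling components, a plane wave accumulates exactly the phase k0*dz per step with no amplitude error, unlike a parabolic approximation.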
Benchmark of numerical tools simulating beam propagation and secondary particles in ITER NBI
Sartori, E.; Veltri, P.; Serianni, G.; Dlougach, E.; Hemsworth, R.; Singh, M.
2015-04-08
Injection of high energy beams of neutral particles is a method for plasma heating in fusion devices. The ITER injector, and its prototype MITICA (Megavolt ITER Injector and Concept Advancement), are large extrapolations from existing devices: therefore numerical modeling is needed to set thermo-mechanical requirements for all beam-facing components. As the power and charge deposition originates from several sources (primary beam, co-accelerated electrons, and secondary production by beam-gas, beam-surface, and electron-surface interaction), the beam propagation along the beam line is simulated by comprehensive 3D models. This paper presents a comparative study between two codes: BTR has been used for several years in the design of the ITER HNB/DNB components; the SAMANTHA code was independently developed and includes additional phenomena, such as secondary particles generated by collision of beam particles with the background gas. The code comparison is valuable in view of the upcoming experimental operations, in order to prepare reliable numerical support for the interpretation of experimental measurements in the beam test facilities. The power density map calculated on the Electrostatic Residual Ion Dump (ERID) is the chosen benchmark, as it depends on the electric and magnetic fields as well as on the evolution of the beam species via interaction with the gas. Finally, the paper shows additional results provided by SAMANTHA, such as the secondary electrons produced by volume processes and accelerated by the ERID fringe-field towards the Cryopumps.
Finite-difference staggered grids in GPUs for anisotropic elastic wave propagation simulation
NASA Astrophysics Data System (ADS)
Rubio, Felix; Hanzich, Mauricio; Farrés, Albert; de la Puente, Josep; María Cela, José
2014-09-01
The 3D elastic wave equations can be used to simulate the physics of waves traveling through the Earth more precisely than acoustic approximations. However, this improvement in quality has a counterpart in the cost of the numerical scheme. A possible strategy to mitigate that expense is using specialized, high-performing architectures such as GPUs. Nevertheless, porting and optimizing a code for such a platform require a deep understanding of both the underlying hardware architecture and the algorithm at hand. Furthermore, for very large problems, multiple GPUs must work concurrently, which adds yet another layer of complexity to the codes. In this work, we have tackled the problem of porting and optimizing a 3D elastic wave propagation engine which supports both standard- and fully-staggered grids to multi-GPU clusters. At the single GPU level, we have proposed and evaluated many optimization strategies and adopted the best performing ones for our final code. At the distributed memory level, a domain decomposition approach has been used which allows for good scalability thanks to using asynchronous communications and I/O.
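The velocity-stress staggered-grid update underlying such engines can be illustrated in one dimension (the production code described above is 3D, multi-GPU, and supports fully-staggered grids; this NumPy sketch shows only the grid-staggering idea):

```python
import numpy as np

def staggered_step(v, s, rho, mu, dt, dx):
    """One leapfrog update of the 1-D velocity-stress elastic system:
    stresses s[i] live at half-integer nodes i+1/2, velocities v[i] at
    integer nodes, so all spatial differences are naturally centered."""
    s[:-1] += dt * mu * (v[1:] - v[:-1]) / dx    # stress from velocity gradient
    v[1:] += dt * (s[1:] - s[:-1]) / (rho * dx)  # velocity from stress gradient
    return v, s

def split_pulse_demo(n=400, steps=100, dx=0.1, dt=0.05):
    """An initial velocity pulse with zero stress splits into two
    half-amplitude waves travelling at c = sqrt(mu/rho) = 1 (CFL = 0.5)."""
    x = np.arange(n) * dx
    v = np.exp(-((x - 20.0) ** 2))   # Gaussian velocity pulse
    s = np.zeros(n)
    for _ in range(steps):
        staggered_step(v, s, rho=1.0, mu=1.0, dt=dt, dx=dx)
    return v
```

The same stencil structure, with halo exchanges at subdomain boundaries, is what makes the domain-decomposition approach with asynchronous communication scale across GPUs.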
FEM simulation of oxidation induced stresses with a coupled crack propagation in a TBC model system
NASA Astrophysics Data System (ADS)
Seiler, P.; Bäker, M.; Rösler, J.
2010-06-01
Plasma sprayed thermal barrier coating systems are used on top of highly stressed components, e.g. on gas turbine blades, to protect the underlying substrate from the high surrounding temperatures. A typical coating system consists of the bond-coat (BC), the thermal barrier coating (TBC), and the thermally grown oxide (TGO) between the BC and the TBC. This study examines the failure mechanisms which are caused by the diffusion of oxygen through the TBC and the resulting growth of the TGO. To study the behaviour of the complex failure mechanisms in thermal barrier coatings, a simplified model system is used to reduce the number of system parameters. The model system consists of a bond-coat material (fast creeping Fecralloy or slow creeping MA956) as the substrate, with a Y2O3 partially stabilised plasma sprayed zirconium oxide TBC on top and a TGO between the two layers. Alongside the experimental studies, an FEM simulation was developed to calculate the stress distribution inside the simplified coating system [1]. The simulation permits the identification of compression and tension areas which are established by the growth of the oxide layer. Furthermore, a 2-dimensional finite element model of crack propagation was developed in which the crack direction is calculated by using short trial cracks in different directions. The direction of the crack in the model system is defined as the crack direction with the maximum energy release rate [2,3]. The simulated stress distributions and the obtained crack path provide an insight into the possible failure mechanisms in the coating and allow conclusions to be drawn for optimising real thermal barrier coating systems. The simulated growth stresses of the TGO show that a slow creeping BC may reduce lifetime. This is caused by stress concentration and cracks under the TGO. A slow creeping BC, on the other hand, reduces the stresses in the TBC. The different failure mechanisms emphasise the existence of a lifetime optimum which depends on
A PIC-MCC code for simulation of streamer propagation in air
Chanrion, O.; Neubert, T.
2008-07-20
~3 times the breakdown field. At higher altitudes, the background electric field must be relatively larger to create a similar field in a streamer tip because of the increased influence of photoionization. It is shown that the role of photoionization increases with altitude and the effect is to decrease the space charge fields and increase the streamer propagation velocity. Finally, effects of electrons in the runaway regime on negative streamer dynamics are presented. It is shown that the energetic electrons create enhanced ionization in front of negative streamers. The simulations suggest that the thermal runaway mechanism may operate at lower altitudes and be associated with lightning and thundercloud electrification, while the mechanism is unlikely to be important in sprite generation at higher altitudes in the mesosphere.
Chubar, O.; Berman, L.; Chu, Y.S.; Fluerasu, A.; Hulbert, S.; Idir, M.; Kaznatcheev, K.; Shapiro, D.; Baltser, J.
2012-04-04
Partially-coherent wavefront propagation calculations have proven to be feasible and very beneficial in the design of beamlines for 3rd and 4th generation Synchrotron Radiation (SR) sources. These types of calculations use the framework of classical electrodynamics for the description, on the same accuracy level, of the emission by relativistic electrons moving in magnetic fields of accelerators, and the propagation of the emitted radiation wavefronts through beamline optical elements. This enables accurate prediction of performance characteristics for beamlines exploiting high SR brightness and/or high spectral flux. Detailed analysis of radiation degree of coherence, offered by the partially-coherent wavefront propagation method, is of paramount importance for modern storage-ring based SR sources, which, thanks to extremely small sub-nanometer-level electron beam emittances, produce substantial portions of coherent flux in X-ray spectral range. We describe the general approach to partially-coherent SR wavefront propagation simulations and present examples of such simulations performed using 'Synchrotron Radiation Workshop' (SRW) code for the parameters of hard X-ray undulator based beamlines at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. These examples illustrate general characteristics of partially-coherent undulator radiation beams in low-emittance SR sources, and demonstrate advantages of applying high-accuracy physical-optics simulations to the optimization and performance prediction of X-ray optical beamlines in these new sources.
Simulations of the magnet misalignments, field errors and orbit correction for the SLC north arc
Kheifets, S.; Chao, A.; Jaeger, J.; Shoaee, H.
1983-11-01
Given the intensity of linac bunches and their repetition rate, the desired SLC luminosity of 1.0 × 10^30 cm^-2 sec^-1 requires focusing the interaction bunches to a spot size in the micrometer (μm) range. The lattice that achieves this goal is obtained by careful design of both the arcs and the final focus systems. For the micrometer range of the beam spot size both the second order geometric and chromatic aberrations may be completely destructive. The concept of the second order achromat proved to be extremely important in this respect and the arcs are built essentially as a sequence of such achromats. Between the end of the linac and the interaction point (IP) there are three special sections in addition to the regular structure: a matching section (MS) designed for matching the phase space from the linac to the arcs, a reverse bend section (RB) which provides the matching when the sign of the curvature is reversed in the arc, and the final focus system (FFS). The second order calculations are done by the program TURTLE. Using the TURTLE histogram in the x-y plane and assuming an identical histogram for the south arc, the corresponding 'luminosity' L is found. The simulation of the misalignment and error effects has to be done simultaneously with the design and simulation of the orbit correction scheme. Even after the orbit is corrected and the beam can be transmitted through the vacuum chamber, the focusing of the beam to the desired size at the IP remains a serious potential problem. It is found, as will be elaborated later, that even for the best achieved orbit correction, additional corrections of the dispersion function and possibly the transfer matrix are needed. This report describes a few of the presently conceived correction schemes and summarizes some results of computer simulations done for the SLC north arc. 8 references, 12 figures, 6 tables.
Nakahata, K; Sugahara, H; Barth, M; Köhler, B; Schubert, F
2016-04-01
When modeling ultrasonic wave propagation in metals, it is important to introduce mesoscopic crystalline structures because the anisotropy of the crystal structure and the heterogeneity of grains disturb ultrasonic waves. In this paper, a three-dimensional (3D) polycrystalline structure generated by multiphase-field modeling was introduced to ultrasonic simulation for nondestructive testing. 3D finite-element simulations of ultrasonic waves were validated and compared with visualization results obtained from laser Doppler vibrometer measurements. The simulation results and measurements showed good agreement with respect to the velocity and front shape of the pressure wave, as well as multiple scattering due to grains. This paper discussed the applicability of a transversely isotropic approach to ultrasonic wave propagation in a polycrystalline metal with columnar structures. PMID:26773789
NASA Astrophysics Data System (ADS)
Luo, Cong; Friederich, Wolfgang
2016-04-01
Realistic shallow seismic wave propagation simulation is an important tool for studying induced seismicity (e.g., during geothermal energy development). However, for a long time a significant problem has constrained computational seismologists from conveniently performing successful simulations: pre-processing. Conventional pre-processing has often turned out to be inefficient and not robust because of the miscellaneous operations involved, its considerable complexity, and the insufficiency of available tools. An integrated web-based platform for shallow seismic wave propagation simulation has been built. It aims to provide a user-friendly pre-processing solution and cloud-based simulation abilities. The main features of the platform for the user include: a revised digital elevation model (DEM) retrieval and processing mechanism; generation of multi-layered 3D shallow Earth model geometry (the computational domain) with user-specified surface topography based on the DEM; visualization of the geometry before the simulation; a pipeline from geometry to fully customizable hexahedral element mesh generation; customization and running of the simulation on our HPC; and post-processing and retrieval of the results over the cloud. Regarding the computational aspect, the widely accepted specfem3D is currently chosen as the computational package; packages using different types of elements can be integrated as well in the future. According to our trial simulation experiments, this web-based platform has produced accurate waveforms while significantly simplifying and enhancing the pre-processing and improving the simulation success rate.
NASA Technical Reports Server (NTRS)
Rudraraju, Siva Shankar; Garikipati, Krishna; Waas, Anthony M.; Bednarcyk, Brett A.
2013-01-01
The phenomenon of crack propagation is among the predominant modes of failure in many natural and engineering structures, often leading to severe loss of structural integrity and catastrophic failure. Thus, the ability to understand and a priori simulate the evolution of this failure mode has been one of the cornerstones of applied mechanics and structural engineering and is broadly referred to as "fracture mechanics." The work reported herein focuses on extending this understanding, in the context of through-thickness crack propagation in cohesive materials, through the development of a continuum-level multiscale numerical framework, which represents cracks as displacement discontinuities across a surface of zero measure. This report presents the relevant theory, mathematical framework, numerical modeling, and experimental investigations of through-thickness crack propagation in fiber-reinforced composites using the Variational Multiscale Cohesive Method (VMCM) developed by the authors.
Guillon, Grégoire; Zeng, Tao; Roy, Pierre-Nicholas
2013-11-14
In this paper, we extend the previously introduced Post-Quantization Constraints (PQC) procedure [G. Guillon, T. Zeng, and P.-N. Roy, J. Chem. Phys. 138, 184101 (2013)] to construct approximate propagators and energy estimators for different rigid body systems, namely, the spherical, symmetric, and asymmetric tops. These propagators are for use in Path Integral simulations. A thorough discussion of the underlying geometrical concepts is given. Furthermore, a detailed analysis of the convergence properties of the density as well as the energy estimators towards their exact counterparts is presented along with illustrative numerical examples. The Post-Quantization Constraints approach can yield converged results and is a practical alternative to so-called sum over states techniques, where one has to expand the propagator as a sum over a complete set of rotational stationary states [as in E. G. Noya, C. Vega, and C. McBride, J. Chem. Phys. 134, 054117 (2011)] because of its modest memory requirements.
Roon, David A.; Waits, L.P.; Kendall, K.C.
2005-01-01
Non-invasive genetic sampling (NGS) is becoming a popular tool for population estimation. However, multiple NGS studies have demonstrated that polymerase chain reaction (PCR) genotyping errors can bias demographic estimates. These errors can be detected by comprehensive data filters such as the multiple-tubes approach, but this approach is expensive and time consuming as it requires three to eight PCR replicates per locus. Thus, researchers have attempted to correct PCR errors in NGS datasets using non-comprehensive error checking methods, but these approaches have not been evaluated for reliability. We simulated NGS studies with and without PCR error and 'filtered' datasets using non-comprehensive approaches derived from published studies and calculated mark-recapture estimates using CAPTURE. In the absence of data-filtering, simulated error resulted in serious inflations in CAPTURE estimates; some estimates exceeded N by ~200%. When data filters were used, CAPTURE estimate reliability varied with the per-locus error rate (E). At E = 0.01, CAPTURE estimates from filtered data displayed < 5% deviance from error-free estimates. When E was 0.05 or 0.09, some CAPTURE estimates from filtered data displayed biases in excess of 10%. Biases were positive at high sampling intensities; negative biases were observed at low sampling intensities. We caution researchers against using non-comprehensive data filters in NGS studies, unless they can achieve baseline per-locus error rates below 0.05 and, ideally, near 0.01. However, we suggest that data filters can be combined with careful technique and thoughtful NGS study design to yield accurate demographic information. © 2005 The Zoological Society of London.
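The inflation mechanism can be sketched with a toy two-session mark-recapture simulation in which a genotyping error turns a sample into an unmatchable "ghost" genotype. This is a deliberately simplified stand-in for the multi-locus errors and CAPTURE models of the study; the estimator, population size, capture probability, and error rate below are all illustrative:

```python
import random

def chapman(n1, n2, m2):
    """Chapman-corrected Lincoln-Petersen abundance estimator."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def mean_estimate(N=100, p=0.4, err=0.0, n_reps=200, seed=7):
    """Mean two-session abundance estimate when each captured sample is
    misgenotyped with probability err, creating an unmatchable ghost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_reps):
        ghost = N                      # ghost ids start above the real population
        sessions = []
        for _ in range(2):
            ids = set()
            for ind in range(N):
                if rng.random() < p:           # individual captured this session
                    if rng.random() < err:     # PCR error -> ghost genotype
                        ids.add(ghost)
                        ghost += 1
                    else:
                        ids.add(ind)
            sessions.append(ids)
        s1, s2 = sessions
        total += chapman(len(s1), len(s2), len(s1 & s2))
    return total / n_reps
```

Ghosts inflate the estimate because they raise the apparent number of unique individuals while never producing a recapture match.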
Simulation of ultra-high energy photon propagation with PRESHOWER 2.0
NASA Astrophysics Data System (ADS)
Homola, P.; Engel, R.; Pysz, A.; Wilczyński, H.
2013-05-01
In this paper we describe a new release of the PRESHOWER program, a tool for Monte Carlo simulation of propagation of ultra-high energy photons in the magnetic field of the Earth. The PRESHOWER program is designed to calculate magnetic pair production and bremsstrahlung and should be used together with other programs to simulate extensive air showers induced by photons. The main new features of the PRESHOWER code include a much faster algorithm applied in the procedures of simulating the processes of gamma conversion and bremsstrahlung, an update of the geomagnetic field model, and a minor correction. The new simulation procedure increases the flexibility of the code so that it can also be applied to other magnetic field configurations such as, for example, those encountered in the vicinity of the sun or neutron stars.
Program summary
Program title: PRESHOWER 2.0
Catalog identifier: ADWG_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3968
No. of bytes in distributed program, including test data, etc.: 37198
Distribution format: tar.gz
Programming language: C, FORTRAN 77.
Computer: Intel-Pentium based PC.
Operating system: Linux or Unix.
RAM: < 100 kB
Classification: 1.1.
Does the new version supersede the previous version?: Yes
Catalog identifier of previous version: ADWG_v1_0
Journal reference of previous version: Comput. Phys. Comm. 173 (2005) 71
Nature of problem: Simulation of a cascade of particles initiated by UHE photon in magnetic field.
Solution method: The primary photon is tracked until its conversion into an e+ e- pair. If conversion occurs, each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons).
Reasons for
NASA Astrophysics Data System (ADS)
Düben, Peter D.; Dolaptchiev, Stamen I.
2015-08-01
Inexact hardware can reduce computational cost, due to a reduced energy demand and an increase in performance, and can therefore allow higher-resolution simulations of the atmosphere within the same budget for computation. We investigate the use of emulated inexact hardware for a model of the randomly forced 1D Burgers equation with stochastic sub-grid-scale parametrisation. Results show that numerical precision can be reduced to only 12 bits in the significand of floating-point numbers—instead of 52 bits for double precision—with no serious degradation in results for all diagnostics considered. Simulations that use inexact hardware on a grid with higher spatial resolution show results that are significantly better compared to simulations in double precision on a coarser grid at similar estimated computing cost. In the second half of the paper, we compare the forcing due to rounding errors to the stochastic forcing of the stochastic parametrisation scheme that is used to represent sub-grid-scale variability in the standard model setup. We argue that stochastic forcings of stochastic parametrisation schemes can provide a first guess for the upper limit of the magnitude of rounding errors of inexact hardware that can be tolerated by model simulations and suggest that rounding errors can be hidden in the distribution of the stochastic forcing. We present an idealised model setup that replaces the expensive stochastic forcing of the stochastic parametrisation scheme with an engineered rounding error forcing and provides results of similar quality. The engineered rounding error forcing can be used to create a forecast ensemble of similar spread compared to an ensemble based on the stochastic forcing. We conclude that rounding errors are not necessarily degrading the quality of model simulations. Instead, they can be beneficial for the representation of sub-grid-scale variability.
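Inexact hardware of this kind is typically emulated in software by rounding the significand of every floating-point result to a reduced number of bits. A minimal sketch of such an emulator (the bit counts are parameters; the study above used 12 significand bits):

```python
import math

def round_significand(x, bits):
    """Round x to `bits` significand bits (52 would be full double
    precision), emulating reduced-precision hardware in software."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)             # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)
```

Applying this function to every arithmetic result of a model step injects a rounding-error forcing whose magnitude is bounded by the relative error 2^-bits, which is the quantity the paper compares against the stochastic sub-grid forcing.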
Ward, Michael J.; Self, Wesley H.; Froehle, Craig M.
2015-01-01
Objectives To estimate how data errors in electronic health records (EHR) can affect the accuracy of common emergency department (ED) operational performance metrics. Methods Using a 3-month, 7,348-visit dataset of electronic timestamps from a suburban academic ED as a baseline, Monte Carlo simulation was used to introduce four types of data errors (substitution, missing, random, and systematic bias) at three frequency levels (2%, 4%, and 7%). Three commonly used ED operational metrics (arrival to clinician evaluation, disposition decision to exit for admitted patients, and ED length of stay for admitted patients) were calculated and the proportion of ED visits that achieved each performance goal was determined. Results Even small data errors have measurable effects on a clinical organization's ability to accurately determine whether it is meeting its operational performance goals. Systematic substitution errors, increased frequency of errors, and the use of shorter-duration metrics resulted in a lower proportion of ED visits reported as meeting the associated performance objectives. However, the presence of other error types mitigated somewhat the effect of the systematic substitution error. Longer time-duration metrics were found to be less sensitive to data errors than shorter time-duration metrics. Conclusions Infrequent and small-magnitude data errors in EHR timestamps can compromise a clinical organization's ability to determine accurately if it is meeting performance goals. By understanding the types and frequencies of data errors in an organization's EHR, organizational leaders can use data-management best practices to better measure true performance and enhance operational decision-making. PMID:26291051
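The error-injection step of such a Monte Carlo study can be sketched as follows. The error type shown (a systematic positive bias substituted into a random fraction of records) is one of the four types named above, but the bias magnitude, durations, and goal are illustrative, not the paper's exact procedure:

```python
import random

def fraction_meeting_goal(durations_min, goal_min):
    """Proportion of visits whose duration meets the performance goal."""
    return sum(d <= goal_min for d in durations_min) / len(durations_min)

def inject_errors(durations_min, err_rate, rng, bias_min=30.0):
    """Substitute a systematic positive bias into a fraction err_rate of
    timestamp-derived durations (illustrative error model)."""
    return [d + bias_min if rng.random() < err_rate else d
            for d in durations_min]
```

Because the injected bias only lengthens durations, the reported fraction of visits meeting a goal can only fall, which is the direction of the distortion the study measures for systematic substitution errors.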
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed by using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
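A generic simulated annealing search of the kind used to tune the two recursion parameters can be sketched as follows. The quadratic test objective stands in for the paper's regression-model objective, and all step, temperature, and cooling settings are illustrative:

```python
import math
import random

def anneal(objective, x0, step=0.5, t0=1.0, cooling=0.95, n_iter=3000, seed=3):
    """Minimise objective over a parameter vector by simulated annealing:
    Gaussian moves, Boltzmann acceptance of uphill steps, geometric cooling."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = objective(cand)
        # always accept downhill moves; uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

Early on the high temperature lets the search escape local optima; as t shrinks the process becomes a greedy descent around the best basin found.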
Data on simulated interpersonal touch, individual differences and the error-related negativity
Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J.; Koole, Sander L.
2016-01-01
The dataset includes data from the electroencephalogram study reported in our paper: ‘Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity’ (doi:10.1016/j.neulet.2016.01.044) (Tjew-A-Sin et al., 2016) [1]. The data was collected at the psychology laboratories at the Vrije Universiteit Amsterdam in 2012 among a Dutch-speaking student sample. The dataset consists of the measures described in the paper, as well as additional (exploratory) measures including the Five-Factor Personality Inventory, the Connectedness to Nature Scale, the Rosenberg Self-esteem Scale and a scale measuring life stress. The data can be used for replication purposes, meta-analyses, and exploratory analyses, as well as cross-cultural comparisons of touch and/or ERN effects. The authors also welcome collaborative research based on re-analyses of the data. The data described is available at a data repository called the DANS archive: http://persistent-identifier.nl/?identifier=urn:nbn:nl:ui:13-tzbk-gg. PMID:27158644
A finite element beam propagation method for simulation of liquid crystal devices.
Vanbrabant, Pieter J M; Beeckman, Jeroen; Neyts, Kristiaan; James, Richard; Fernandez, F Anibal
2009-06-22
An efficient full-vectorial finite element beam propagation method is presented that uses higher order vector elements to calculate the wide angle propagation of an optical field through inhomogeneous, anisotropic optical materials such as liquid crystals. The full dielectric permittivity tensor is considered in solving Maxwell's equations. The wide applicability of the method is illustrated with different examples: the propagation of a laser beam in a uniaxial medium, the tunability of a directional coupler based on liquid crystals and the near-field diffraction of a plane wave in a structure containing micrometer scale variations in the transverse refractive index, similar to the pixels of a spatial light modulator.
3D numerical simulation of laser-generated Lamb waves propagation in 2D acoustic black holes
NASA Astrophysics Data System (ADS)
Yan, Shiling; Lomonosov, Alexey M.; Shen, Zhonghua; Han, Bing
2015-05-01
Acoustic black holes have been widely used in damping structural vibration. In this work, Lamb waves are utilized to evaluate such structures. A three-dimensional numerical model of an acoustic black hole with a parabolic profile was established. The propagation of laser-generated Lamb waves in two-dimensional acoustic black holes was numerically simulated using the finite element method. The results indicated that the incident wave was clearly trapped by the structure.
Minkkinen, Pentti O; Esbensen, Kim H
2009-10-19
Sampling errors can be divided into two classes, incorrect sampling and correct sampling errors. Incorrect sampling errors arise from incorrectly designed sampling equipment or procedures. Correct sampling errors are due to the heterogeneity of the material in sampling targets. Excluding the incorrect sampling errors, which can all be eliminated in practice although informed and diligent work is often needed, five factors dominate sampling variance: two factors related to material heterogeneity (analyte concentration; distributional heterogeneity) and three factors related to the sampling process itself (sample type, sample size, sampling modus). Due to highly significant interactions, a comprehensive appreciation of their combined effects is far from trivial and has in fact never been illustrated in detail. Heterogeneous materials can be well characterized by the two first factors, while all essential sampling process characteristics can be summarized by combinations of the latter three. We here present simulations based on an experimental design that varies all five factors. Within the framework of the Theory of Sampling, the empirical Total Sampling Error is a function of the fundamental sampling error and the grouping and segregation error interacting with a specific sampling process. We here illustrate absolute and relative sampling variance levels resulting from a wide array of simulated repeated samplings and express the effects by pertinent lot mean estimates and associated Root Mean Squared Errors/sampling variances, covering specific combinations of materials' heterogeneity and typical sampling procedures as used in current science, technology and industry. Factors, levels and interactions are varied within limits selected to match realistic materials and sampling situations that mimic, e.g., sampling for genetically modified organisms; sampling of geological drill cores; sampling during off-loading 3-dimensional lots (shiploads, railroad cars, truckloads
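The effect of sample size, one of the three sampling-process factors above, can be illustrated with a toy composite-sampling simulation on a strongly heterogeneous lot. The lot composition and increment counts are illustrative, and increments are drawn independently, so grouping/segregation effects are not represented:

```python
import random

def sampling_variance(lot, n_increments, n_reps=2000, seed=11):
    """Empirical variance of composite-sample means: each composite is the
    mean of n_increments increments drawn at random (with replacement)."""
    rng = random.Random(seed)
    est = [sum(rng.choice(lot) for _ in range(n_increments)) / n_increments
           for _ in range(n_reps)]
    m = sum(est) / n_reps
    return sum((e - m) ** 2 for e in est) / (n_reps - 1)
```

For independent increments the variance falls as 1/n_increments; in the full Theory of Sampling framework, distributional heterogeneity and the sampling modus modify this baseline behaviour.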
NASA Astrophysics Data System (ADS)
Bergmann-Wolf, I.; Dobslaw, H.; Mayer-Gürr, T.
2015-12-01
A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency (Dobslaw et al., 2015) is now available for the years 1995-2006. The data-set contains realizations of (i) errors at large spatial scales assessed individually for periods between 10-30, 3-10, and 1-3 days, the S1 atmospheric tide, and sub-diurnal periods; (ii) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (iii) errors due to physical processes not represented in currently available de-aliasing products. The error magnitudes for each of the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. In order to demonstrate the plausibility of the error magnitudes chosen, we perform a variance component estimation based on daily GRACE normal equations from the ITSG-Grace2014 global gravity field series recently published by the University of Graz. All 12 years of the error model are used to calculate empirical error variance-covariance matrices describing the systematic dependencies of the errors both in time and in space individually for five continental and four oceanic regions, and daily GRACE normal equations are subsequently employed to obtain pre-factors for each of those matrices. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, errors prepared for the updated ESM are found to be largely consistent with noise of a similar stochastic character contained in present-day GRACE solutions. Differences and similarities identified for all of the nine regions considered will be discussed in detail during the presentation. Dobslaw, H., I. Bergmann-Wolf, R. Dill, E. Forootan, V. Klemann, J. Kusche, and I. Sasgen (2015), The updated ESA Earth System Model for future gravity mission simulation studies, J. Geod., doi:10.1007/s00190-014-0787-8.
NASA Astrophysics Data System (ADS)
Mazzanti, P.; Bozzano, F.
2009-11-01
Coastal and subaqueous landslides can be very dangerous phenomena, since, unlike their completely subaerial counterparts, they carry the additional risk of induced tsunamis. Numerical modelling of landslide propagation is a key step in forecasting the consequences of landslides. In this paper, a novel approach named Equivalent Fluid/Equivalent Medium (EFEM) has been developed. It adapts common numerical models and software originally designed for subaerial landslides to simulate the propagation of combined subaerial-subaqueous and completely subaqueous landslides. Drag and buoyancy forces, the loss of energy at the landslide-water impact, and peculiar mechanisms like hydroplaning can be suitably simulated by this approach; furthermore, the change in properties of the landslide's mass at the transition from the subaerial to the submerged environment can be taken into account. The approach has been tested by modelling two documented coastal landslides (a debris flow and a rock slide at Lake Albano) using the DAN-W code. The results achieved from the back-analyses demonstrate the efficacy of the approach in simulating the propagation of different types of coastal landslides.
NASA Astrophysics Data System (ADS)
Sciacchitano, Andrea; Wieneke, Bernhard
2016-08-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
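The scaling of random uncertainty with the effective number of independent samples can be sketched numerically. The sketch below is purely illustrative: the record length, time step, and integral time scale are invented, and `mean_uncertainty` is not part of any PIV software.

```python
import numpy as np

def mean_uncertainty(u, dt, tau_int):
    """Random uncertainty of the time-averaged velocity.

    N_eff = N * dt / (2 * tau_int) approximates the effective number of
    independent samples when successive snapshots are correlated over an
    integral time scale tau_int.
    """
    n_eff = max(1.0, len(u) * dt / (2.0 * tau_int))
    return np.std(u, ddof=1) / np.sqrt(n_eff)

rng = np.random.default_rng(0)
u = 5.0 + 0.3 * rng.standard_normal(10_000)   # synthetic velocity record, m/s
# With tau_int = dt / 2 the samples count as fully independent (N_eff = N).
sigma_mean = mean_uncertainty(u, dt=1e-3, tau_int=0.5e-3)
```

For correlated records (larger tau_int), N_eff shrinks and the estimated uncertainty of the mean grows accordingly, which is the square-root dependence noted in the abstract.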
Simulation of a plane wavefront propagating in cardiac tissue using a cellular automata model.
Barbosa, Carlos R Hall
2003-12-21
We present a detailed description of a cellular automata model for the propagation of action potential in a planar cardiac tissue, which is very fast and easy to use. The model incorporates anisotropy in the electrical conductivity and a spatial variation of the refractory time. The transmembrane potential distribution is directly derived from the cell states, and the intracellular and extracellular potential distributions are calculated for the particular case of a plane wavefront. Once the potential distributions are known, the associated current densities are calculated by Ohm's law, and the magnetic field is determined at a plane parallel to the cardiac tissue by applying the law of Biot and Savart. The results obtained for propagation speed and for magnetic field amplitude with the cellular automata model are compared with values predicted by the bidomain formulation, for various angles between wavefront propagation and fibre direction, characterizing excellent agreement between the models.
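A minimal, hypothetical version of such an automaton (a Greenberg-Hastings-type model that ignores the anisotropy, spatially varying refractory time, and magnetic field computation described above) can illustrate how a plane wavefront propagates across the cell grid:

```python
import numpy as np

REFRACTORY = 4  # recovery steps; illustrative, not fitted to cardiac data

def step(grid):
    """One update of a Greenberg-Hastings-style excitable-media automaton.

    States: 0 = resting, 1 = excited, 2..REFRACTORY+1 = refractory.
    """
    new = grid.copy()
    new[(grid >= 1) & (grid <= REFRACTORY)] += 1   # advance through recovery
    new[grid == REFRACTORY + 1] = 0                # back to rest
    excited = grid == 1
    nb = np.zeros_like(excited)
    nb[1:, :] |= excited[:-1, :]                   # non-periodic 4-neighbourhood
    nb[:-1, :] |= excited[1:, :]
    nb[:, 1:] |= excited[:, :-1]
    nb[:, :-1] |= excited[:, 1:]
    new[(grid == 0) & nb] = 1                      # resting cells fire
    return new

grid = np.zeros((20, 20), dtype=int)
grid[:, 0] = 1                     # plane wavefront initiated at the left edge
for _ in range(10):
    grid = step(grid)
front_col = np.argmax(grid == 1, axis=1)   # excited column in each row
```

In this isotropic toy version the front advances exactly one cell per step; anisotropic conductivity would be introduced by making the firing rule direction-dependent.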
Awdishu, Linda; Namba, Jennifer
2016-01-01
Objective. To evaluate first-year pharmacy students' ability to identify medication errors involving the top 100 prescription medications. Design. In the first quarter of a 3-quarter pharmacy self-care course, a didactic lecture on the most common prescribing and dispensing errors was presented to first-year pharmacy students (P1) in preparation for a prescription review simulation done individually and as a group. In the following quarter, they were given a formal prescription review workshop before a second simulation involving individual and group review of a different set of prescriptions. Students were evaluated based on the number of correctly checked prescriptions and a self-assessment of their confidence in reviewing prescriptions. Assessment. All 63 P1 students completed the prescription review simulations. The individual scores did not significantly change, but group scores improved from 79 (16.2%) in the fall quarter to 98.6 (4.7%) in the winter quarter. Students perceived improvement of their prescription-checking skills, specifically in their ability to fill a prescription on their own, identify prescribing and dispensing errors, and perform pharmaceutical calculations. Conclusion. A prescription review module consisting of a didactic lecture, workshop, and simulation-based methods to teach prescription analysis was successful at improving first-year pharmacy students' knowledge, confidence, and application of these skills. PMID:27402989
NASA Technical Reports Server (NTRS)
Turon, Albert; Costa, Josep; Camanho, Pedro P.; Davila, Carlos G.
2006-01-01
A damage model for the simulation of delamination propagation under high-cycle fatigue loading is proposed. The basis for the formulation is a cohesive law that links fracture and damage mechanics to establish the evolution of the damage variable in terms of the crack growth rate dA/dN. The damage state is obtained as a function of the loading conditions as well as the experimentally-determined coefficients of the Paris Law crack propagation rates for the material. It is shown that by using the constitutive fatigue damage model in a structural analysis, experimental results can be reproduced without the need of additional model-specific curve-fitting parameters.
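The link between Paris-law crack growth rates and accumulated load cycles can be illustrated with a simple cycle-block integration. The coefficients, stress range, and geometry factor below are invented placeholders, not the experimentally determined values used in the paper:

```python
# Forward-Euler integration of a Paris-type growth law da/dN = C * (dK)^m
# for a through-crack under a constant stress range. All constants are
# illustrative, with units assumed as m/cycle and MPa*m^0.5.
C, m = 1e-11, 3.0        # Paris coefficients (assumed)
sigma = 100.0            # stress range, MPa
Y = 1.12                 # geometry factor
PI = 3.141592653589793

a = 0.001                # initial crack length, m
cycles = 0
while a < 0.01:          # grow the crack from 1 mm to 10 mm
    dK = Y * sigma * (PI * a) ** 0.5   # stress intensity factor range
    a += C * dK ** m * 1000            # advance in blocks of 1000 cycles
    cycles += 1000
```

A cohesive fatigue model performs the analogous bookkeeping at the constitutive level, evolving a damage variable so that the structural analysis reproduces dA/dN without extra curve-fitting parameters.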
Böcklin, Christoph; Baumann, Dirk; Fröhlich, Jürg
2014-02-14
A novel way to attain three-dimensional fluence rate maps from Monte Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity, and it is directly derived from the radiance by integrating over all directions. In contrast to the usual approach, which calculates the fluence rate from the absorbed photon power, here the fluence rate is calculated directly from the photon packet trajectories. The voxel-based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
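A toy one-dimensional version of a track-length (path-length) fluence estimator can make the idea concrete. The optical coefficients and grid are arbitrary, and the whole-step tally into the starting voxel is a deliberate simplification of the per-voxel traversal an actual 3-D algorithm would perform:

```python
import numpy as np

# Track-length estimator of fluence: each step deposits its traversed
# length (times the packet weight) in the voxel where the step starts.
# This works even when absorption is zero, unlike an absorbed-power
# estimator, which tallies nothing in non-absorbing media.
rng = np.random.default_rng(1)
mu_s, mu_a = 10.0, 0.0                 # scattering / absorption, 1/cm
n_photons, n_vox, dz = 5000, 50, 0.02  # 5000 packets on a 1 cm deep grid

fluence = np.zeros(n_vox)
for _ in range(n_photons):
    z, direction, weight = 0.0, 1.0, 1.0
    while 0.0 <= z < n_vox * dz:
        step = rng.exponential(1.0 / mu_s)
        fluence[int(z / dz)] += weight * step  # whole-step tally (simplified)
        z += direction * step
        weight *= np.exp(-mu_a * step)         # no-op here since mu_a = 0
        direction = rng.choice((-1.0, 1.0))    # isotropic 1-D scattering
fluence /= n_photons * dz                      # per photon, per unit length
```

Even with zero absorption the estimator returns a nonzero fluence profile that decays away from the source, which is the property the abstract highlights.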
Meldi, M.; Sagaut, P.; Salvetti, M. V.
2012-03-15
A stochastic approach based on generalized polynomial chaos (gPC) is used to quantify the error in large-eddy simulation (LES) of a spatially evolving mixing layer flow and its sensitivity to different simulation parameters, viz., the grid stretching in the streamwise and lateral directions and the subgrid-scale (SGS) Smagorinsky model constant (C{sub S}). The error is evaluated with respect to the results of a highly resolved LES and for different quantities of interest, namely, the mean streamwise velocity, the momentum thickness, and the shear stress. A typical feature of the considered spatially evolving flow is the progressive transition from a laminar regime, highly dependent on the inlet conditions, to a fully developed turbulent one. Therefore, the computational domain is divided in two different zones (inlet dependent and fully turbulent) and the gPC error analysis is carried out for these two zones separately. An optimization of the parameters is also carried out for both these zones. For all the considered quantities, the results point out that the error is mainly governed by the value of the C{sub S} constant. At the end of the inlet-dependent zone, a strong coupling between the normal stretching ratio and the C{sub S} value is observed. The error sensitivity to the parameter values is significantly larger in the inlet-dependent upstream region; however, low-error values can be obtained in this region for all the considered physical quantities by an ad hoc tuning of the parameters. Conversely, in the turbulent regime the error is globally lower and less sensitive to the parameter variations, but it is more difficult to find a set of parameter values leading to optimal results for all the analyzed physical quantities. A similar analysis is also carried out for the dynamic Smagorinsky model, by varying the grid stretching ratios. Comparing the databases generated with the different subgrid-scale models, it is possible to observe that the error cost
NASA Astrophysics Data System (ADS)
Gibson, J. P.; Gates, J. B.; Nasta, P.
2014-12-01
Groundwater in irrigated regions is impacted by timing and rates of deep drainage. Because field monitoring of deep drainage is often cost-prohibitive, numerical soil water models are frequently the main method of estimation. Unfortunately, few studies have quantified the relative importance of likely error sources. In this study, three potential error sources are considered within a Monte Carlo framework: water retention parameters, rooting depth, and irrigation practice. Error distributions for water retention parameters were determined by 1) laboratory hydraulic measurements and 2) pedotransfer functions. Error distributions for rooting depth were developed from literature values. Three irrigation scheduling regimes were considered: one representing pre-scheduled irrigation ignoring preceding rainfall, one representing pre-scheduled irrigation that was altered based on preceding rainfall, and one representing algorithmic irrigation scheduling informed by profile matric potential sensors. This approach was applied to an experimental site in Nebraska with silt loam soils and irrigated corn for 2002-2012. Results are based on six Monte Carlo simulations, each consisting of 1000 Hydrus 1D simulations at daily timesteps, facilitated by parallelization on a 12-node computing cluster. Results indicate greater sensitivity to irrigation regime than to hydraulic or vegetation parameters (median values for prescheduled irrigation, prescheduled irrigation altered by rainfall, and algorithmic irrigation were 310, 100, and 110 mm/yr, respectively). Error ranges were up to 700% higher for pedotransfer functions than for laboratory-measured hydraulic functions. Deep drainage was negatively correlated with alpha and maximum root zone depth and, for some scenarios, positively correlated with n. The relative importance of error sources differed amongst the irrigation scenarios because of nonlinearities amongst parameter values, profile wetness, and deep drainage. Compared to pre
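The Monte Carlo parameter-sampling idea can be sketched with a placeholder drainage model. The stub below is NOT Hydrus-1D; it and all of its distributions are invented, built only to reproduce the qualitative correlation signs reported above:

```python
import numpy as np

# Draw uncertain hydraulic and vegetation parameters, then push each draw
# through a drainage model to obtain an ensemble of annual drainage values.
rng = np.random.default_rng(3)
n = 1000

alpha = rng.lognormal(np.log(0.01), 0.3, n)  # van Genuchten alpha, 1/cm (assumed)
n_vg = rng.normal(1.4, 0.05, n)              # van Genuchten n (assumed)
root = rng.uniform(0.8, 1.5, n)              # max rooting depth, m (assumed)

def drainage_stub(alpha, n_vg, root):
    # Placeholder model: drainage decreases with alpha and rooting depth
    # and increases with n, matching the signs reported in the abstract.
    return 300.0 * np.exp(-20.0 * alpha) * (n_vg / 1.4) / root

drain = drainage_stub(alpha, n_vg, root)     # mm/yr per parameter draw
```

In a real study the stub would be replaced by a full variably saturated flow simulation per draw, which is what makes cluster parallelization necessary.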
Li, Han; Lin, Kexin; Shahmirzadi, Danial
2016-01-01
This study aims to quantify the effects of geometry and stiffness of aneurysms on the pulse wave velocity (PWV) and propagation in fluid–solid interaction (FSI) simulations of arterial pulsatile flow. Spatiotemporal maps of both the wall displacement and fluid velocity were generated in order to obtain the pulse wave propagation through fluid and solid media, and to examine the interactions between the two waves. The results indicate that the presence of abdominal aortic aneurysm (AAA) sac and variations in the sac modulus affect the propagation of the pulse waves both qualitatively (e.g., patterns of change of forward and reflected waves) and quantitatively (e.g., decreasing PWV within the sac and its increase beyond the sac as the sac stiffness increases). The sac region is particularly identified on the spatiotemporal maps with a region of disruption in the wave propagation with multiple short-traveling forward/reflected waves, which is caused by the change in boundary conditions within the saccular region. The change in sac stiffness, however, is more pronounced on the wall displacement spatiotemporal maps compared to those of fluid velocity. We conclude that the existence of the sac can be identified based on the solid and fluid pulse waves, while the sac properties can also be estimated. This study demonstrates the initial findings in numerical simulations of FSI dynamics during arterial pulsations that can be used as reference for experimental and in vivo studies. Future studies are needed to demonstrate the feasibility of the method in identifying very mild sacs, which cannot be detected from medical imaging, where the material property degradation exists under early disease initiation. PMID:27478394
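Extracting a PWV estimate from waveforms recorded at two axial sites can be sketched via a cross-correlation transit-time method (one common alternative to foot-to-foot tracking). The Gaussian pulse, 5 cm site spacing, and 5 m/s target below are synthetic and purely illustrative:

```python
import numpy as np

# Cross-correlation estimate of pulse wave velocity from wall-displacement
# waveforms at two axial sites separated by a known distance.
fs, dist, pwv_true = 10_000.0, 0.05, 5.0        # Hz, m, m/s (all assumed)
t = np.arange(0, 0.2, 1.0 / fs)
pulse = lambda t0: np.exp(-((t - t0) / 0.01) ** 2)   # synthetic Gaussian pulse

w1 = pulse(0.05)                     # proximal site
w2 = pulse(0.05 + dist / pwv_true)   # distal site, delayed by dist / pwv

lags = np.arange(-len(t) + 1, len(t))
delay = lags[np.argmax(np.correlate(w2, w1, mode="full"))] / fs
pwv_est = dist / delay               # transit-time estimate of PWV
```

On spatiotemporal maps like those in the study, the same delay appears as the slope of the propagating wavefront, so the correlation approach and a slope fit should agree for a clean forward wave.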
NASA Astrophysics Data System (ADS)
Dobslaw, Henryk; Bergmann-Wolf, Inga; Forootan, Ehsan; Dahle, Christoph; Mayer-Gürr, Torsten; Kusche, Jürgen; Flechtner, Frank
2016-05-01
A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency is now available over the period 1995-2006. The dataset contains realizations of (1) errors at large spatial scales assessed individually for periods 10-30, 3-10, and 1-3 days, the S1 atmospheric tide, and sub-diurnal periods; (2) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (3) errors due to physical processes not represented in currently available de-aliasing products. The model is provided in two separate sets of Stokes coefficients to allow for a flexible re-scaling of the overall error level to account for potential future improvements in atmosphere and ocean mass variability models. Error magnitudes for the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, those error estimates are approximately confirmed from a variance component estimation based on GRACE daily normal equations. Future mission performance simulations based on the updated Earth System Model and the realistically perturbed de-aliasing model indicate that for GRACE-type missions only moderate reductions of de-aliasing errors can be expected from a second satellite pair in a shifted polar orbit. Substantially more accurate global gravity fields are obtained when a second pair of satellites in a moderately inclined orbit is added, which largely stabilizes the global gravity field solutions due to its rotated sampling sensitivity.
Watkins, W.R.; Zegel, F.H.; Triplett, M.J.
1990-01-01
Various papers on the characterization, propagation, and simulation of IR scenes are presented. Individual topics addressed include: total radiant exitance measurements, absolute measurement of diffuse and specular reflectance using an FTIR spectrometer with an integrating sphere, fundamental limits in temperature estimation, incorporating the BRDF into an IR scene-generation system, characterizing IR dynamic response for foliage backgrounds, modeling sea surface effects in FLIR performance codes, automated imaging IR seeker performance evaluation system, generation of signature databases with fast codes, background measurements using the NPS-IRST system. Also discussed are: naval ocean IR background analysis, camouflage simulation and effectiveness assessment for the individual soldier, discussion of IR scene generators, multiwavelength Scophony IR scene projector, LBIR target generator and calibrator for preflight seeker tests, dual-mode hardware-in-the-loop simulation facility, and development of an IR blackbody source based on a gravity-type heat pipe and study of its characteristics.
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Tsung, W.-J.; Coy, J. J.
1985-01-01
A method is proposed for the generation of Gleason spiral bevel gears that provides the following properties of meshing and contact: (1) the contact normal keeps its original direction within the neighborhood of the main contact point; (2) the contact ellipse moves along the gear tooth surface; and (3) the kinematical errors caused by Gleason's method of cutting are almost zero. Computer programs for the simulation of meshing and bearing contact are developed.
Adjoint-field errors in high fidelity compressible turbulence simulations for sound control
NASA Astrophysics Data System (ADS)
Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan
2013-11-01
A consistent discrete adjoint for high-fidelity discretization of the three-dimensional Navier-Stokes equations is used to quantify the error in the sensitivity gradient predicted by the continuous adjoint method, and examine the aeroacoustic flow-control problem for free-shear-flow turbulence. A particular quadrature scheme for approximating the cost functional makes our discrete adjoint formulation for a fourth-order Runge-Kutta scheme with high-order finite differences practical and efficient. The continuous adjoint-based sensitivity gradient is shown to be inconsistent due to discretization truncation errors, grid stretching and filtering near boundaries. These errors cannot be eliminated by increasing the spatial or temporal resolution since chaotic interactions lead them to become O(1) at the time of control actuation. Although this is a known behavior for chaotic systems, its effect on noise control is much harder to anticipate, especially given the different resolution needs of different parts of the turbulence and acoustic spectra. A comparison of energy spectra of the adjoint pressure fields shows significant error in the continuous adjoint at all wavenumbers, even though they are well-resolved. The effect of this error on the noise control mechanism is analyzed.
Uncertainty propagation in an ecosystem nutrient budget.
Lehrter, John C; Cebrian, Just
2010-03-01
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated uncertainty for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of freedom. New aspects include the combined use of Monte Carlo simulations with classical error propagation methods, uncertainty analyses for GIS computations, and uncertainty propagation involving literature and subjective estimates of terms used in the budget calculations. The methods employed are broadly applicable to the mathematical operations employed in ecological studies involving step-by-step calculations, scaling procedures, and calculations of variables from direct measurements and/or literature estimates. Propagation of the standard error and the degrees of freedom allowed for calculation of the uncertainty intervals around every term in the budget. For scientists and environmental managers, the methods developed herein provide a relatively simple framework to propagate and assess the contributions of uncertainty in directly measured and literature estimated variables to calculated variables. Application of these methods to environmental data used in scientific reporting and environmental management will improve the interpretation of data and simplify the estimation of risk associated with decisions based on ecological studies.
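The combination of classical quadrature-based propagation with a Monte Carlo cross-check can be sketched for a single budget term. The input means and standard errors below are invented, not values from the Gulf of Mexico budget:

```python
import numpy as np

# A budget term computed as a difference of two measured terms: analytical
# error propagation, verified against a Monte Carlo simulation.
load_mean, load_se = 120.0, 8.0      # e.g. nutrient input, t/yr (assumed)
export_mean, export_se = 45.0, 5.0   # e.g. nutrient output, t/yr (assumed)

net_mean = load_mean - export_mean
net_se = np.hypot(load_se, export_se)   # SEs of a difference add in quadrature

# Monte Carlo check assuming independent, normally distributed errors.
rng = np.random.default_rng(2)
net_mc = (rng.normal(load_mean, load_se, 100_000)
          - rng.normal(export_mean, export_se, 100_000))
```

For nonlinear steps (products, ratios, scaling by GIS-derived areas) the analytical formulas become first-order approximations, which is where the Monte Carlo comparison earns its keep.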
Mathematical simulation of the origination and propagation of crown fires in averaged formulation
NASA Astrophysics Data System (ADS)
Perminov, V. A.
2015-02-01
Processes of origination and propagation of crown fires are studied theoretically. The forest is treated as a multiphase, multicomponent porous reacting medium. The Reynolds equations for turbulent flow are solved numerically, taking chemical reactions into account. The control volume method is used to obtain the discrete analog. As a result of numerical computations, the distributions of velocity fields, temperature, oxygen concentration, volatile pyrolysis and combustion products, and volume fractions of the condensed phase at different instants are obtained. The model makes it possible to obtain dynamic contours of propagation of crown fires, which depend on the properties and states of the forest canopy (reserves and type of combustible materials, moisture content, inhomogeneities in woodland, velocity and direction of wind, etc.).
NASA Astrophysics Data System (ADS)
Abdessalem, K. B.; Sahtout, W.; Flaud, P.; Gazah, H.; Fakhfakh, Z.
2007-11-01
The literature shows a lack of work on non-invasive methods for computing the propagation coefficient γ, a complex number related to dynamic vascular properties. Its imaginary part is inversely related to the wave speed C through the relationship C = ω/Im(γ), while its real part a, called the attenuation, represents the loss of pulse energy per unit length. In this work an expression is derived for the propagation coefficient of a pulsatile flow through a viscoelastic vessel. The effects of the physical and geometrical parameters of the tube are then studied. In particular, the effects of increasing the reflection coefficient on the determination of the propagation coefficient are investigated in a first step. In a second step, we simulate a variation of tube length under physiological conditions. The method developed here is based on the knowledge of instantaneous velocity and radius values at only two sites. It takes into account the presence of a reflection site of unknown reflection coefficient, located at the distal end of the vessel. The values of wave speed and attenuation obtained with this method are in good agreement with the theory. This method has the advantage of being usable for small portions of the arterial tree.
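For the idealized case of a single forward-travelling harmonic wave with no reflection (simpler than the two-site method with an unknown distal reflection described above), the propagation coefficient follows directly from the complex amplitude ratio between two sites. All numbers below are invented for illustration:

```python
import numpy as np

# Two-site estimate of gamma for a wave q(z) = A * exp(-gamma * z).
w = 2 * np.pi * 1.5                 # angular frequency, rad/s (~90 beats/min)
gamma_true = 0.5 + 1j * w / 8.0     # attenuation 0.5 /m, wave speed 8 m/s
z1, z2 = 0.0, 0.1                   # measurement sites, m

q1 = np.exp(-gamma_true * z1)       # complex wave amplitudes at the two sites
q2 = np.exp(-gamma_true * z2)

gamma_est = np.log(q1 / q2) / (z2 - z1)   # complex propagation coefficient
attenuation = gamma_est.real              # a, loss per unit length
wave_speed = w / gamma_est.imag           # C = omega / Im(gamma)
```

A distal reflection superposes a backward wave on q(z), which is exactly why the paper's method must estimate the unknown reflection coefficient rather than use this simple log-ratio.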
Simulation of EMIC growth and propagation within the plasmaspheric plume density irregularities
NASA Astrophysics Data System (ADS)
de Soria-Santacruz Pich, M.; Spasojevic, M.
2012-12-01
In situ data from the Magnetospheric Plasma Analyzer (MPA) instruments onboard the LANL spacecraft are used to study the growth and propagation of electromagnetic ion cyclotron (EMIC) waves in the presence of cold plasma irregularities in the plasmaspheric plume. The data correspond to the 9 June 2001 event, a period of moderate geomagnetic activity with highly irregular density structure within the plume as measured by the MPA instrument at geosynchronous orbit. Theory and observations suggest that EMIC waves are responsible for energetic proton precipitation, which is stronger during geomagnetically disturbed intervals. These waves propagate below the proton gyrofrequency, and they appear in three frequency bands due to the presence of heavy ions, which strongly modify wave propagation characteristics. These waves are generated by ion cyclotron instability of ring current ions, whose temperature anisotropy provides the free energy required for wave growth. Growth maximizes for field-aligned propagation near the equatorial plane where the magnetic field gradient is small. Although the wave's group velocity typically stays aligned with the geomagnetic field direction, wave-normal vectors tend to become oblique due to the curvature and gradient of the field. On the other hand, radial density gradients have the capability of guiding the waves and competing against the magnetic field effect, thus favoring wave growth conditions. In addition, enhanced cold plasma density reduces the proton resonant energy where higher fluxes are available for resonance, hence explaining why wave growth is favored at higher L-shell regions where the ratio of plasma to cyclotron frequency is larger. The Stanford VLF 3D Raytracer is used together with path-integrated linear growth calculations to study the amplification and propagation characteristics of EMIC waves within the plasmaspheric plume formed during the 9 June 2001 event. Cold multi-ion plasma is assumed for raytracing
NASA Astrophysics Data System (ADS)
Luquet, David; Marchiano, Régis; Coulouvrat, François
2015-10-01
Many situations involve the propagation of acoustical shock waves through flows. Natural sources such as lightning, volcano explosions, or meteoroid atmospheric entries, emit loud, low frequency, and impulsive sound that is influenced by atmospheric wind and turbulence. The sonic boom produced by a supersonic aircraft and explosion noises are examples of intense anthropogenic sources in the atmosphere. The Buzz-Saw-Noise produced by turbo-engine fan blades rotating at supersonic speed also propagates in a fast flow within the engine nacelle. Simulating these situations is challenging, given the 3D nature of the problem, the long range propagation distances relative to the central wavelength, the strongly nonlinear behavior of shocks associated with a wide-band spectrum, and finally the key role of the flow motion. With this in view, the so-called FLHOWARD (acronym for FLow and Heterogeneous One-Way Approximation for Resolution of Diffraction) method is presented with three-dimensional applications. A scalar nonlinear wave equation is established in the framework of atmospheric applications, assuming weak heterogeneities and a slow wind. It takes into account diffraction, absorption and relaxation properties of the atmosphere, quadratic nonlinearities including weak shock waves, heterogeneities of the medium in sound speed and density, and presence of a flow (assuming a mean stratified wind and 3D turbulent flow fluctuations of smaller amplitude). This equation is solved in the framework of the one-way method. A split-step technique allows the splitting of the non-linear wave equation into simpler equations, each corresponding to a physical effect. Each sub-equation is solved using an analytical method if possible, and finite-differences otherwise. Nonlinear effects are solved in the time domain, and others in the frequency domain. Homogeneous diffraction is handled by means of the angular spectrum method. Ground is assumed perfectly flat and rigid. Due to the 3D
García-Grajales, Julián A.; Rucabado, Gabriel; García-Dopico, Antonio; Peña, José-María; Jérusalem, Antoine
2015-01-01
With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally only characterized by purely mechanistic criteria, functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has been rarely explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of the simulated cells grows. The solvers implemented in Neurite (explicit and implicit) were therefore parallelized using graphics processing units in order to reduce the burden of the simulation costs of large scale scenarios. Cable Theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite mechanical behavior within its surrounding medium was adopted as a link between electrophysiology and mechanics. This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented
García-Grajales, Julián A; Rucabado, Gabriel; García-Dopico, Antonio; Peña, José-María; Jérusalem, Antoine
Error Analysis and Trajectory Correction Maneuvers of Lunar Transfer Orbit
NASA Astrophysics Data System (ADS)
Zhao, Yu-hui; Hou, Xi-yun; Liu, Lin
2013-10-01
For a returnable lunar probe, this paper studies the characteristics of both the Earth-Moon transfer orbit and the return orbit. On the basis of the error propagation matrix, a linear equation for estimating the first midcourse trajectory correction maneuver (TCM) is derived. Numerical simulations are performed, and the features of error propagation in the lunar transfer orbit are presented. The advantages, disadvantages, and applications of two TCM strategies are discussed, and the computation of the second TCM of the return orbit is also simulated under the conditions at the reentry time.
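A linear TCM estimate of this kind can be sketched in two dimensions: given the position-to-position and velocity-to-position blocks of the error propagation (state transition) matrix, choose the velocity correction that nulls the predicted final position error. The matrices and numbers below are toy assumptions, not the paper's dynamics:

```python
def inv2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def first_tcm(phi_rr, phi_rv, dr0, dv0):
    """Linear first midcourse correction: pick dv so that the predicted
    final position error phi_rr@dr0 + phi_rv@(dv0 + dv) vanishes."""
    dr_f = [a + b for a, b in zip(matvec(phi_rr, dr0), matvec(phi_rv, dv0))]
    return [-x for x in matvec(inv2(phi_rv), dr_f)]
```

In practice the blocks phi_rr and phi_rv come from integrating the variational equations along the nominal transfer orbit.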
NASA Astrophysics Data System (ADS)
Dhanya, M.; Chandrasekar, A.
2016-02-01
The background error covariance structure influences a variational data assimilation system immensely. The simulation of a weather phenomenon like monsoon depression can hence be influenced by the background correlation information used in the analysis formulation. The Weather Research and Forecasting Model Data assimilation (WRFDA) system includes an option for formulating multivariate background correlations for its three-dimensional variational (3DVar) system (cv6 option). The impact of using such a formulation in the simulation of three monsoon depressions over India is investigated in this study. Analysis and forecast fields generated using this option are compared with those obtained using the default formulation for regional background error correlations (cv5) in WRFDA and with a base run without any assimilation. The model rainfall forecasts are compared with rainfall observations from the Tropical Rainfall Measuring Mission (TRMM) and the other model forecast fields are compared with a high-resolution analysis as well as with European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis. The results of the study indicate that inclusion of additional correlation information in background error statistics has a moderate impact on the vertical profiles of relative humidity, moisture convergence, horizontal divergence and the temperature structure at the depression centre at the analysis time of the cv5/cv6 sensitivity experiments. Moderate improvements are seen in two of the three depressions investigated in this study. An improved thermodynamic and moisture structure at the initial time is expected to provide for improved rainfall simulation. The results of the study indicate that the skill scores of accumulated rainfall are somewhat better for the cv6 option as compared to the cv5 option for at least two of the three depression cases studied, especially at the higher threshold levels. Considering the importance of utilising improved
NASA Astrophysics Data System (ADS)
Lausch, A.; Jensen, N. K. G.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.
2014-03-01
Purpose: To investigate the effects of registration error (RE) on parametric response map (PRM) analysis of pre and post-radiotherapy (RT) functional images. Methods: Arterial blood flow maps (ABF) were generated from the CT-perfusion scans of 5 patients with hepatocellular carcinoma. ABF values within each patient map were modified to produce seven new ABF maps simulating 7 distinct post-RT functional change scenarios. Ground truth PRMs were generated for each patient by comparing the simulated and original ABF maps. Each simulated ABF map was then deformed by different magnitudes of realistic respiratory motion in order to simulate RE. PRMs were generated for each of the deformed maps and then compared to the ground truth PRMs to produce estimates of RE-induced misclassification. Main findings: The percentage of voxels misclassified as decreasing, no change, and increasing increased with RE. For all patients, increasing RE was observed to increase the number of high post-RT ABF voxels associated with low pre-RT ABF voxels and vice versa. An average tumour RE of 3 mm resulted in tumour voxel misclassification rates of 18-45%. Conclusions: RE-induced misclassification posed challenges for PRM analysis in the liver, where registration accuracy tends to be lower. Quantitative understanding of the sensitivity of the PRM method to registration error is required if PRMs are to be used to guide radiation therapy dose painting techniques.
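The PRM analysis described above reduces to thresholding the voxelwise change and comparing labels between a ground-truth map and a deformed map; a minimal sketch, where the threshold value and function names are hypothetical:

```python
def prm_classify(pre, post, thresh):
    """Voxelwise parametric response map: label each voxel +1 (increase),
    -1 (decrease), or 0 (no change) by whether post - pre exceeds thresh."""
    labels = []
    for a, b in zip(pre, post):
        d = b - a
        labels.append(1 if d > thresh else (-1 if d < -thresh else 0))
    return labels

def misclassification_rate(truth, test):
    """Fraction of voxels whose PRM label differs from ground truth,
    e.g. after simulated registration error deforms the post-RT map."""
    wrong = sum(1 for t, s in zip(truth, test) if t != s)
    return wrong / len(truth)
```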
Loft, Shayne; Chapman, Melissa; Smith, Rebekah E
2016-09-01
In air traffic control (ATC), forgetting to perform deferred actions, known as prospective memory (PM) errors, can have severe consequences. PM demands can also interfere with ongoing tasks (costs). We examined the extent to which PM errors and costs were reduced in simulated ATC by providing extended practice, or by providing external aids combined with extended practice, or by providing external aids combined with instructions that removed perceived memory requirements. Participants accepted/handed-off aircraft and detected conflicts. For the PM task, participants were required to substitute alternative actions for routine actions when accepting aircraft. In Experiment 1, when no aids were provided, PM errors and costs were not reduced by practice. When aids were provided, costs observed early in practice were eliminated with practice, but residual PM errors remained. Experiment 2 provided more limited practice with aids, but instructions that did not frame the PM task as a "memory" task led to high PM accuracy without costs. Attention-allocation policies that participants set based on expected PM demands were modified as individuals were increasingly exposed to reliable aids, or were given instructions that removed perceived memory requirements. These findings have implications for the design of aids for individuals who monitor multi-item dynamic displays.
NASA Astrophysics Data System (ADS)
Biryukov, V. A.; Miryakha, V. A.; Petrov, I. B.; Khokhlov, N. I.
2016-06-01
For wave propagation in heterogeneous media, we compare numerical results produced by grid-characteristic methods on structured rectangular and unstructured triangular meshes and by a discontinuous Galerkin method on unstructured triangular meshes as applied to the linear system of elasticity equations in the context of direct seismic exploration with an anticlinal trap model. It is shown that the resulting synthetic seismograms are in reasonable quantitative agreement. The grid-characteristic method on structured meshes requires more nodes for approximating curved boundaries, but it has a higher computation speed, which makes it preferable for the given class of problems.
NASA Astrophysics Data System (ADS)
Armstrong, Christopher; Hargather, Michael
2014-11-01
Computational simulations of explosions are performed using the hydrocode CTH and analyzed using artificial schlieren imaging. The simulations include one and three-dimensional free-air blasts and a confined geometry. Artificial schlieren images are produced from the density fields calculated via the simulations. The artificial schlieren images are used to simulate traditional and focusing schlieren images of explosions. The artificial schlieren images are compared to actual high-speed schlieren images of similar explosions. Computational streak images are produced to identify time-dependent features in the blast field. The streak images are used to study the interaction between secondary shock waves and the explosive product gas contact surface.
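A common recipe for building such artificial schlieren images is to map the magnitude of the density gradient to image intensity; a minimal pure-Python sketch on a 2-D density array (the mapping and names are illustrative, not CTH's post-processing):

```python
def schlieren_intensity(rho, dx=1.0, dy=1.0):
    """Artificial schlieren: |grad rho| from central differences on the
    interior of a 2-D density field given as a list of rows. Border
    pixels are left at zero for simplicity."""
    ny, nx = len(rho), len(rho[0])
    img = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            gx = (rho[j][i + 1] - rho[j][i - 1]) / (2.0 * dx)
            gy = (rho[j + 1][i] - rho[j - 1][i]) / (2.0 * dy)
            img[j][i] = (gx * gx + gy * gy) ** 0.5
    return img
```

A knife-edge (traditional) schlieren system is sensitive to one gradient component only, so for that case one would visualize gx or gy alone rather than the magnitude.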
Manga, Etoungh D; Blasco, Hugues; Da-Costa, Philippe; Drobek, Martin; Ayral, André; Le Clezio, Emmanuel; Despaux, Gilles; Coasne, Benoit; Julbe, Anne
2014-09-01
The present study reports on the development of a characterization method of porous membrane materials which consists of considering their acoustic properties upon gas adsorption. Using acoustic microscopy experiments and atomistic molecular simulations for helium adsorbed in a silicalite-1 zeolite membrane layer, we showed that acoustic wave propagation could be used, in principle, for controlling the membranes operando. Molecular simulations, which were found to fit experimental data, showed that the compressional modulus of the composite system consisting of silicalite-1 with adsorbed He increases linearly with the He adsorbed amount while its shear modulus remains constant in a large range of applied pressures. These results suggest that the longitudinal and Rayleigh wave velocities (VL and VR) depend on the He adsorbed amount whereas the transverse wave velocity VT remains constant.
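The link between the adsorption-dependent moduli and the measured wave speeds follows the standard isotropic elasticity relations; a small sketch using generic formulas and a common approximation for the Rayleigh speed (not the paper's own code):

```python
import math

def wave_velocities(K, G, rho):
    """Longitudinal, transverse, and (approximate) Rayleigh wave speeds
    for an isotropic solid with bulk modulus K, shear modulus G,
    and density rho."""
    vL = math.sqrt((K + 4.0 * G / 3.0) / rho)
    vT = math.sqrt(G / rho)
    nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))   # Poisson's ratio
    vR = vT * (0.862 + 1.14 * nu) / (1.0 + nu)          # Viktorov approximation
    return vL, vT, vR
```

Consistent with the abstract: if the compressional (bulk) modulus grows with adsorbed He while the shear modulus stays constant, vL and vR change but vT does not.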
Richter, Martin; Fingerhut, Benjamin P
2016-07-12
We present an algorithm for the simulation of nonlinear 2D spectra of molecular systems in the UV-vis spectral region from atomistic molecular dynamics trajectories subject to nonadiabatic relaxation. We combine the nonlinear exciton propagation (NEP) protocol, which relies on a quasiparticle approach, with the surface hopping methodology to account for quantum-classical feedback during the dynamics. Phenomena such as dynamic Stokes shift due to nuclear relaxation, spectral diffusion, and population transfer among electronic states are thus naturally included and benchmarked on a model of two electronic states coupled to a harmonic coordinate and a classical heat bath. The capabilities of the algorithm are further demonstrated for the bichromophore diphenylmethane, which is described in a fully microscopic fashion including all 69 classical nuclear degrees of freedom. We demonstrate that simulated 2D signals are especially sensitive to the applied theoretical approximations (i.e., choice of active space in the CASSCF method) even where the population dynamics appears comparable.
Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media
NASA Astrophysics Data System (ADS)
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-01
This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
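The FFT-based coupling step amounts to trigonometric interpolation of coarse-grid values onto the fine grid, which by construction carries no spatial frequencies above the coarse-grid Nyquist limit, which is exactly the filtering property the authors use for stability. A naive DFT sketch (an O(n^2) stand-in for the FFT, for clarity; assumes a periodic signal with no energy in the Nyquist bin):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def fourier_interpolate(x, m):
    """Interpolate a periodic sequence x (length n) onto a finer grid of
    length m by evaluating its trigonometric polynomial; frequencies
    above the coarse Nyquist are absent by construction. For even n the
    Nyquist bin is kept one-sided (assumed to carry no energy)."""
    n = len(x)
    X = dft(x)
    out = []
    for j in range(m):
        s = 0.0 + 0.0j
        for k in range(n):
            freq = k if k <= n // 2 else k - n   # signed frequency
            s += X[k] * cmath.exp(2j * cmath.pi * freq * j / m)
        out.append((s / n).real)
    return out
```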
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques to compute the reliability of very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
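The core of such a Monte Carlo reliability solver, stripped of the variance-reduction machinery, is direct sampling of Weibull component lifetimes; the 2-out-of-3 example system and all names below are illustrative assumptions, not HARP's model:

```python
import math
import random

def weibull_lifetime(rng, scale, shape):
    """Sample a Weibull time-to-failure via the inverse-CDF transform."""
    return scale * (-math.log(rng.random())) ** (1.0 / shape)

def unreliability_2oo3(mission_time, scale, shape, n_trials, seed=0):
    """Direct Monte Carlo estimate of the probability that a 2-out-of-3
    system has at least 2 failed components by mission_time."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        failed = sum(1 for _ in range(3)
                     if weibull_lifetime(rng, scale, shape) < mission_time)
        if failed >= 2:
            failures += 1
    return failures / n_trials
```

For highly reliable systems almost every direct trial survives, so the estimator's relative error explodes; this is precisely why HARP-style codes add variance reduction on top of this baseline.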
NASA Technical Reports Server (NTRS)
Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.
1980-01-01
The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.
Vayron, Romain; Nguyen, Vu-Hieu; Bosc, Romain; Naili, Salah; Haïat, Guillaume
2015-10-01
Dental implant stability, which is an important parameter for the surgical outcome, can now be assessed using quantitative ultrasound. However, the acoustical propagation in dental implants remains poorly understood. The objective of this numerical study was to understand the propagation phenomena of ultrasonic waves in cylindrically shaped prototype dental implants and to investigate the sensitivity of the ultrasonic response to the surrounding bone quantity and quality. The 10-MHz ultrasonic response of the implant was calculated using an axisymmetric 3D finite element model, which was validated by comparison with results obtained experimentally and using a 2D finite difference numerical model. The results show that the implant ultrasonic response changes significantly when a liquid layer is located at the implant interface compared to the case of an interface fully bonded with bone tissue. A dedicated model based on experimental measurements was developed in order to account for the evolution of the bone biomechanical properties at the implant interface. The effect of a gradient of material properties on the implant ultrasonic response is determined. Based on the reproducibility of the measurement, the results indicate that the device should be sensitive to the effects of a healing duration of less than one week. In all cases, the amplitude of the implant response is shown to decrease when the dental implant primary and secondary stability increase, which is consistent with the experimental results. This study paves the way for the development of a quantitative ultrasound method to evaluate dental implant stability.
ELIASSI,MEHDI; GLASS JR.,ROBERT J.
2000-03-08
The authors consider the ability of the numerical solution of Richards equation to model gravity-driven fingers. Although gravity-driven fingers can be easily simulated using a partial downwind averaging method, they find the fingers are purely artificial, generated by the combined effects of truncation error induced oscillations and capillary hysteresis. Since Richards equation can only yield a monotonic solution for standard constitutive relations and constant flux boundary conditions, it is not the valid governing equation to model gravity-driven fingers, and therefore is also suspect for unsaturated flow in initially dry, highly nonlinear, and hysteretic media where these fingers occur. However, analysis of truncation error at the wetting front for the partial downwind method suggests the required mathematical behavior of a more comprehensive and physically based modeling approach for this region of parameter space.
NASA Astrophysics Data System (ADS)
Matsue, Kazuma; Arakawa, Masahiko; Yasui, Minami; Matsumoto, Rie; Tsujido, Sayaka; Takano, Shota; Hasegawa, Sunao
2015-08-01
Introduction: Recent spacecraft surveys clarified that asteroid surfaces are covered with regolith made of boulders and pebbles, such as that found on the asteroid Itokawa. It was also found that surface morphologies formed on the regolith layer of asteroids were modified. For example, high-resolution images of the asteroid Eros revealed evidence of downslope movement of the regolith layer, which could cause the degradation and erasure of small impact craters. One possible process to explain these observations is regolith layer collapse caused by seismic vibration after projectile impacts. The impact-induced seismic wave might be an important physical process affecting the morphology change of the regolith layer on asteroid surfaces. Therefore, it is important to know the relationship between the impact energy and the impact-induced seismic wave. In this study, we carried out impact cratering experiments in order to observe the seismic wave propagating through the target far from the impact crater. Experimental method: Impact cratering experiments were conducted using a single-stage vertical gas gun at Kobe University and a two-stage vertical gas gun at ISAS. We used quartz sand with a particle diameter of 500 μm and a bulk density of 1.48 g/cm3. The projectiles were a polycarbonate ball with a diameter of 4.75 mm, and aluminum, titanium, zirconia, stainless steel, copper, and tungsten carbide projectiles with a diameter of 2 mm. These projectiles were launched at impact velocities from 0.2 to 7 km/s. The target was set in a vacuum chamber evacuated below 10 Pa. We measured the seismic wave by using a piezoelectric uniaxial accelerometer. Result: The impact-induced seismic wave was measured to show a large single peak and found to attenuate with propagation distance. The maximum acceleration of the seismic wave was found to correlate well with the normalized distance x/R, where x is the propagation distance
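Attenuation data of this kind are usually summarized by a power-law fit of peak acceleration against normalized distance, i.e. a straight-line fit in log-log space; a small least-squares sketch (variable names hypothetical):

```python
import math

def fit_power_law(distances, peaks):
    """Fit peaks = C * distances**n by linear least squares on
    (log d, log a); returns (C, n). n < 0 indicates attenuation
    with distance."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(p) for p in peaks]
    k = len(xs)
    mx = sum(xs) / k
    my = sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), slope
```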
Vasserman, I N; Matveenko, V P; Shardakov, I N; Shestakov, A P
2015-01-01
The propagation of an excitation wave in an inhomogeneous anisotropic finite element model of cardiac muscle is investigated. In this model, the inhomogeneity stands for the rotation of anisotropy axes through the wall thickness and results from the fibrous-laminar structure of cardiac muscle tissue. Conductivity of the cardiac muscle is described using a monodomain model, and the Aliev-Panfilov equations are used as the relationships between the transmembrane current and transmembrane potential. Numerical simulation is performed by applying a splitting algorithm, in which the solution of the nonlinear boundary value problem is reduced to a sequence of simple ordinary differential equations and linear partial differential equations. The simulation is carried out for a rectangular block of cardiac tissue, the minimal size of which is considered to be the thickness of the heart wall. Two types of distribution of the fiber orientation angle are discussed. The first case corresponds to the left ventricle of a dog. The endocardium and epicardium fibers are generally oriented in the meridional direction. The angle of fiber orientation varies smoothly through the wall thickness, making a half-turn. A circular layer, in which the fibers are oriented in the circumferential direction, is located deep in the cardiac wall. The results of calculations show that for this case the wave form strongly depends on the place of initial excitation. For endocardial and epicardial initial excitation, one sees earlier wave front propagation in the endocardium and epicardium, respectively. For intramural initial excitation, the wave front propagates simultaneously in the endocardium and epicardium, but there is a wave front lag in the middle of the wall. The second case refers to the right ventricle of a swine, in which the endocardium and epicardium fibers are typically oriented in the circumferential direction, whereas the subepicardium fibers
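The Aliev-Panfilov kinetics used here as the transmembrane current model can be illustrated for a single cell with an explicit Euler sketch; the parameter values are standard literature choices (dimensionless units), and the function name is my own:

```python
def aliev_panfilov(u0, v0=0.0, dt=0.01, steps=5000,
                   k=8.0, a=0.15, eps0=0.002, mu1=0.2, mu2=0.3):
    """Explicit Euler integration of the two-variable Aliev-Panfilov model:
        du/dt = k*u*(u - a)*(1 - u) - u*v
        dv/dt = (eps0 + mu1*v/(mu2 + u)) * (-v - k*u*(u - a - 1))
    Returns the trajectory of the action-potential variable u."""
    u, v = u0, v0
    traj = [u]
    for _ in range(steps):
        du = k * u * (u - a) * (1.0 - u) - u * v
        dv = (eps0 + mu1 * v / (mu2 + u)) * (-v - k * u * (u - a - 1.0))
        u += dt * du
        v += dt * dv
        traj.append(u)
    return traj
```

The cubic reaction term makes the cell excitable: an initial u above the threshold a fires a full upstroke toward u near 1, while a subthreshold stimulus simply decays. In the monodomain model these kinetics are coupled at every node to an anisotropic diffusion term.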
Bend propagation in flagella. I. Derivation of equations of motion and their simulation.
Hines, M; Blum, J J
1978-07-01
A set of nonlinear differential equations describing flagellar motion in an external viscous medium is derived. Because of the local nature of these equations and the use of a Crank-Nicolson-type forward time step, which is stable for large Δt, numerical solution of these equations on a digital computer is relatively fast. Stable bend initiation and propagation, without internal viscous resistance, is demonstrated for a flagellum containing a linear elastic bending resistance and an elastic shear resistance that depends on sliding. The elastic shear resistance is derived from a plausible structural model of the radial link system. The active shear force for the dynein system is specified by a history-dependent functional of curvature characterized by the parameters m0, a proportionality constant between the maximum active shear moment and curvature, and tau, a relaxation time which essentially determines the delay between curvature and active moment.
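The unconditional stability that permits a large Δt can be seen in the Crank-Nicolson update for the simplest linear relaxation dy/dt = -y/τ; a generic sketch, not the flagellar equations themselves:

```python
def crank_nicolson_decay(y0, tau, dt, steps):
    """Crank-Nicolson step for dy/dt = -y/tau:
        (y_new - y) / dt = -(y_new + y) / (2 * tau)
    =>  y_new = y * (1 - dt/(2*tau)) / (1 + dt/(2*tau)).
    The amplification factor has magnitude < 1 for any dt > 0, so the
    scheme never blows up, even for very large time steps."""
    g = (1.0 - dt / (2.0 * tau)) / (1.0 + dt / (2.0 * tau))
    y = y0
    for _ in range(steps):
        y *= g
    return y
```

An explicit (forward Euler) step would instead use the factor 1 - dt/tau, which diverges as soon as dt exceeds 2*tau.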
NASA Technical Reports Server (NTRS)
Gupta, Vipul; Hochhalter, Jacob; Yamakov, Vesselin; Scott, Willard; Spear, Ashley; Smith, Stephen; Glaessgen, Edward
2013-01-01
A systematic study of crack tip interaction with grain boundaries is critical for improvement of multiscale modeling of microstructurally-sensitive fatigue crack propagation and for the computationally-assisted design of more durable materials. In this study, single, bi- and large-grain multi-crystal specimens of an aluminum-copper alloy are fabricated, characterized using electron backscattered diffraction (EBSD), and deformed under tensile loading and nano-indentation. 2D image correlation (IC) in an environmental scanning electron microscope (ESEM) is used to measure displacements near crack tips, grain boundaries and within grain interiors. The role of grain boundaries on slip transfer is examined using nano-indentation in combination with high-resolution EBSD. The use of detailed IC and EBSD-based experiments are discussed as they relate to crystal-plasticity finite element (CPFE) model calibration and validation.
Numerical simulation of acoustic holography with propagator adaptation. Application to a 3D disc
NASA Astrophysics Data System (ADS)
Martin, Vincent; Le Bourdon, Thibault; Pasqual, Alexander Mattioli
2011-08-01
Acoustical holography can be used to identify the vibration velocity of an extended vibrating body. Such an inverse problem relies on the radiated acoustic pressure measured by a microphone array and on a priori knowledge of the way the body radiates sound. Any perturbation of the radiation model leads to a perturbation of the velocity identified by the inversion process. Thus, to obtain the source vibration velocity with good precision, it is useful to also identify an appropriate propagation model. Here, this identification, or adaptation, procedure rests on a geometrical interpretation of acoustic holography in the objective space (here the radiated pressure space equipped with the L2-norm) and on a genetic algorithm. The propagator adaptation adds information to the holographic process, so it is not a regularisation method: regularisation approximates the inverse of the model but does not affect the model itself. Moreover, regularisations act in the variables space, here the velocity space. It is shown that an adapted model significantly decreases the quantity of regularisation needed to obtain a well-reconstructed velocity, and that model adaptation significantly improves the acoustical holography results. In the presence of perturbations on the radiated pressure, some indications are given as to whether or not it is worthwhile to adapt the model, again on the basis of the geometrical interpretation of holography in the objective space. As a numerical example, a disc whose vibration velocity on one of its sides is identified by acoustic holography is presented. On an industrial scale, this problem occurs due to the noise radiated by car wheels. The assessment of holographic results has not yet been rigorously performed in such situations due to the complexity of the wheel environment, made up of the car body, road and rolling conditions.
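The regularisation discussed above is typically Tikhonov-type: instead of inverting the propagator G in p = G v directly, one solves the damped normal equations (G^T G + lam*I) v = G^T p. A toy 2x2 sketch with hypothetical propagator values (real-valued for simplicity, whereas acoustic propagators are generally complex):

```python
def tikhonov_solve_2x2(G, p, lam):
    """Solve (G^T G + lam*I) v = G^T p for a 2x2 propagator G, i.e. a
    Tikhonov-regularised inversion of the holographic problem p = G v."""
    # Normal-equation matrix A = G^T G + lam * I
    A = [[G[0][0] ** 2 + G[1][0] ** 2 + lam,
          G[0][0] * G[0][1] + G[1][0] * G[1][1]],
         [G[0][0] * G[0][1] + G[1][0] * G[1][1],
          G[0][1] ** 2 + G[1][1] ** 2 + lam]]
    # Right-hand side b = G^T p
    b = [G[0][0] * p[0] + G[1][0] * p[1],
         G[0][1] * p[0] + G[1][1] * p[1]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]
```

Larger lam shrinks the identified velocity toward zero, which is why an adapted propagator, needing less regularisation, yields a better-reconstructed velocity.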
ITER test blanket module error field simulation experiments at DIII-D
NASA Astrophysics Data System (ADS)
Schaffer, M. J.; Snipes, J. A.; Gohil, P.; de Vries, P.; Evans, T. E.; Fenstermacher, M. E.; Gao, X.; Garofalo, A. M.; Gates, D. A.; Greenfield, C. M.; Heidbrink, W. W.; Kramer, G. J.; La Haye, R. J.; Liu, S.; Loarte, A.; Nave, M. F. F.; Osborne, T. H.; Oyama, N.; Park, J.-K.; Ramasubramanian, N.; Reimerdes, H.; Saibene, G.; Salmi, A.; Shinohara, K.; Spong, D. A.; Solomon, W. M.; Tala, T.; Zhu, Y. B.; Boedo, J. A.; Chuyanov, V.; Doyle, E. J.; Jakubowski, M.; Jhang, H.; Nazikian, R. M.; Pustovitov, V. D.; Schmitz, O.; Srinivasan, R.; Taylor, T. S.; Wade, M. R.; You, K.-I.; Zeng, L.; DIII-D Team
2011-10-01
Experiments at DIII-D investigated the effects of magnetic error fields similar to those expected from proposed ITER test blanket modules (TBMs) containing ferromagnetic material. Studied were effects on: plasma rotation and locking, confinement, L-H transition, the H-mode pedestal, edge localized modes (ELMs) and ELM suppression by resonant magnetic perturbations, energetic particle losses, and more. The experiments used a purpose-built three-coil mock-up of two magnetized ITER TBMs in one ITER equatorial port. The largest effect was a reduction in plasma toroidal rotation velocity v across the entire radial profile by as much as Δv/v ~ 60% via non-resonant braking. Changes to global Δn/n, Δβ/β and ΔH98/H98 were ~3 times smaller. These effects are stronger at higher β. Other effects were smaller. The TBM field increased sensitivity to locking by an applied known n = 1 test field in both L- and H-mode plasmas. Locked mode tolerance was completely restored in L-mode by re-adjusting the DIII-D n = 1 error field compensation system. Numerical modelling by IPEC reproduces the rotation braking and locking semi-quantitatively, and identifies plasma amplification of a few n = 1 Fourier harmonics as the main cause of braking. IPEC predicts that TBM braking in H-mode may be reduced by n = 1 control. Although extrapolation from DIII-D to ITER is still an open issue, these experiments suggest that a TBM-like error field will produce only a few potentially troublesome problems, and that they might be made acceptably small.
ITER Test Blanket Module Error Field Simulation Experiments at DIII-D
Schaffer, M. J.; Testa, D.; Snipes, J. A.; Gohil, P.; De Vries, P.; Evans, T. E.; Fenstermacher, M. E.; Gao, X.; Garofalo, A.; Gates, D. A.; Greenfield, C. M.; Heidbrink, W.; La Haye, R.; Liu, S.; Loarte, A.; Nave, M. F. F.; Oyama, N.; Osakabe, M.; Park, J. K.; Ramasubramanian, N.; Reimerdes, H.; Saibene, G.; Salmi, A.; Shinohara, K.; Spong, Donald A.; Solomon, W. M.; Tala, T.; Zhu, Y. B.; Zhai, K.; Boedo, J.; Chuyanov, V.; Doyle, E. J.; Jakubowski, M. W.; Jhang, H.; Nazikian, Raffi; Pustovitov, V. D.; Schmitz, O.; Sanchez, Raul; Srinivasan, R.; Taylor, T. S.; Wade, M.; You, K. I.; Zeng, L.
2011-01-01
Experiments at DIII-D investigated the effects of magnetic error fields similar to those expected from proposed ITER test blanket modules (TBMs) containing ferromagnetic material. Studied were effects on: plasma rotation and locking, confinement, L-H transition, the H-mode pedestal, edge localized modes (ELMs) and ELM suppression by resonant magnetic perturbations, energetic particle losses, and more. The experiments used a purpose-built three-coil mock-up of two magnetized ITER TBMs in one ITER equatorial port. The largest effect was a reduction in plasma toroidal rotation velocity v across the entire radial profile by as much as Δv/v ~ 60% via non-resonant braking. Changes to global Δn/n, Δβ/β and ΔH98/H98 were ~3 times smaller. These effects are stronger at higher β. Other effects were smaller. The TBM field increased sensitivity to locking by an applied known n = 1 test field in both L- and H-mode plasmas. Locked mode tolerance was completely restored in L-mode by re-adjusting the DIII-D n = 1 error field compensation system. Numerical modelling by IPEC reproduces the rotation braking and locking semi-quantitatively, and identifies plasma amplification of a few n = 1 Fourier harmonics as the main cause of braking. IPEC predicts that TBM braking in H-mode may be reduced by n = 1 control. Although extrapolation from DIII-D to ITER is still an open issue, these experiments suggest that a TBM-like error field will produce only a few potentially troublesome problems, and that they might be made acceptably small.
NASA Astrophysics Data System (ADS)
Shiota, D.; Kataoka, R.
2016-02-01
Coronal mass ejections (CMEs) are the most important drivers of various types of space weather disturbance. Here we report a newly developed magnetohydrodynamic (MHD) simulation of the solar wind, including a series of multiple CMEs with internal spheromak-type magnetic fields. First, the polarity of the spheromak magnetic field is determined automatically according to the Hale-Nicholson law and the chirality law of Bothmer and Schwenn. The MHD simulation is therefore capable of predicting the time profile of the southward interplanetary magnetic field at the Earth, in relation to the passage of a magnetic cloud within a CME. This profile is the most important parameter for space weather forecasts of magnetic storms. In order to evaluate the current ability of our simulation, we demonstrate a test case: the propagation and interaction process of multiple CMEs associated with the highly complex active region NOAA 10486 in October to November 2003, and present the result of a simulation of the solar wind parameters at the Earth during the 2003 Halloween storms. We succeeded in reproducing the arrival at the Earth's position of a large amount of southward magnetic flux, which is capable of causing an intense magnetic storm. We find that the observed complex time profile of the solar wind parameters at the Earth could be reasonably well understood by the interaction of a few specific CMEs.
NASA Astrophysics Data System (ADS)
Cha, Dong-Hyun; Lee, Dong-Kyou
2009-07-01
In this study, the systematic errors in regional climate simulation of the 28-year summer monsoon over East Asia and the western North Pacific (WNP) and the impact of the spectral nudging technique (SNT) on the reduction of these systematic errors are investigated. The experiment in which the SNT is not applied (the CTL run) has large systematic errors in seasonal mean climatology, such as overestimated precipitation, a weakened subtropical high, and enhanced low-level southwesterlies over the subtropical WNP, while the experiment using the SNT (the SP run) yields considerably smaller systematic errors. In the CTL run, the systematic error of simulated precipitation over the ocean increases significantly after mid-June, since the CTL run cannot reproduce the principal intraseasonal variation of summer monsoon precipitation. The SP run can appropriately capture the spatial distribution as well as the temporal variation of the principal empirical orthogonal function mode, and therefore the systematic error over the ocean does not increase after mid-June. The systematic error of simulated precipitation over the subtropical WNP in the CTL run results from an unreasonable positive feedback between precipitation and surface latent heat flux induced by the warm sea surface temperature anomaly. Since the SNT acts to weaken this positive feedback by improving the monsoon circulations, the SP run can considerably reduce the systematic errors of simulated precipitation as well as atmospheric fields over the subtropical WNP region.
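The spectral nudging technique described above can be sketched in one dimension: only the largest-scale Fourier modes of the regional field are relaxed toward the driving field, leaving small-scale detail free to evolve. This is a minimal illustration, not the CTL/SP model code; the grid size, wavenumber cutoff `n_waves` and nudging coefficient `alpha` are hypothetical.

```python
import numpy as np

def spectral_nudge(model, driving, n_waves=3, alpha=0.1):
    """Relax only the largest-scale Fourier modes of `model` toward
    `driving`; wavenumbers >= n_waves (the regional detail) are untouched.
    `n_waves` and `alpha` are illustrative parameters."""
    fm = np.fft.rfft(model)
    fd = np.fft.rfft(driving)
    fm[:n_waves] += alpha * (fd[:n_waves] - fm[:n_waves])
    return np.fft.irfft(fm, n=model.size)

# Toy 1-D fields: the "model" has a biased large scale plus local detail
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
driving = np.sin(x)                              # large-scale driving field
model = 0.5 * np.sin(x) + 0.2 * np.sin(20 * x)   # biased + small-scale detail

for _ in range(100):
    model = spectral_nudge(model, driving)

# Large scale converges to the driving field; the detail survives
large_scale_error = abs(np.fft.rfft(model)[1] - np.fft.rfft(driving)[1])
detail_amplitude = abs(np.fft.rfft(model)[20])
```

After repeated nudging the wavenumber-1 mode matches the driving field while the wavenumber-20 detail is untouched, which is the behaviour the SP run exploits: large-scale circulation is constrained without suppressing regional features.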
Pugh, Thomas J.; Amos, Richard A.; John Baptiste, Sandra; Choi, Seungtaek; Nhu Nguyen, Quyhn; Ronald Zhu, X.; Palmer, Matthew B.; Lee, Andrew K.
2013-10-01
To evaluate the dosimetric consequences of rotational and translational alignment errors in patients receiving intensity-modulated proton therapy with multifield optimization (MFO-IMPT) for prostate cancer. Ten control patients with localized prostate cancer underwent treatment planning for MFO-IMPT. Rotational and translational errors were simulated along each of 3 axes: anterior-posterior (A-P), superior-inferior (S-I), and left-right. Clinical target-volume (CTV) coverage remained high with all alignment errors simulated. Perturbations in rectum and bladder doses were minimal for rotational errors and larger for translational errors. Rectum V45 and V70 increased most with A-P misalignment, whereas bladder V45 and V70 changed most with S-I misalignment. The bladder and rectum V45 and V70 remained acceptable even with extreme alignment errors. Even with S-I and A-P translational errors of up to 5 mm, the dosimetric profile of MFO-IMPT remained favorable. MFO-IMPT for localized prostate cancer results in robust coverage of the CTV without clinically meaningful dose perturbations to normal tissue, despite extreme rotational and translational alignment errors.
NASA Technical Reports Server (NTRS)
Sylvester, W. B.
1984-01-01
A series of SEASAT repeat orbits over a sequence of best low-center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the ±3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error suggests that by utilizing the ±3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect which is manifested as a result of the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.
NASA Astrophysics Data System (ADS)
Hirthe, Eugenia M.; Graf, Thomas
2012-12-01
The automatic non-iterative second-order time-stepping scheme based on the temporal truncation error proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative time-stepping schemes with adaptive truncation error control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented into the code of the HydroGeoSphere model. This time-stepping scheme is applied for the first time to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.], and to the solutal [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91] problem of free convection in fractured-porous media. Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation error to a user-defined tolerance by controlling the time-step size. The non-iterative second-order time-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems including porous and fractured-porous media.
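A non-iterative truncation-error-based step controller of the kind adopted from Kavetski et al. can be sketched on a scalar ODE: estimates of two different orders give a local error estimate, which sets the next step size. This is a generic sketch, not the HydroGeoSphere implementation; the tolerance, safety factor and test problem are illustrative.

```python
import math

def adaptive_step(f, t, y, dt, tol=1e-6, safety=0.9):
    """One non-iterative step with truncation-error-based dt control.

    A forward-Euler (first-order) and a Heun (second-order) estimate are
    compared; their difference approximates the local truncation error,
    which the controller holds near `tol` by rescaling dt.  No step
    rejection or iteration is performed (hence "non-iterative")."""
    k1 = f(t, y)
    y_euler = y + dt * k1
    k2 = f(t + dt, y_euler)
    y_heun = y + 0.5 * dt * (k1 + k2)
    err = abs(y_heun - y_euler)                       # local error estimate
    dt_next = dt * safety * (tol / max(err, 1e-15)) ** 0.5
    return y_heun, dt_next

# Integrate dy/dt = -y from y(0) = 1 to t = 1; exact answer is exp(-1)
t, y, dt = 0.0, 1.0, 1e-3
while t < 1.0:
    dt = min(dt, 1.0 - t)                             # land exactly on t = 1
    y, dt_next = adaptive_step(lambda s, v: -v, t, y, dt)
    t += dt
    dt = dt_next
```

The controller grows the step where the solution is smooth and shrinks it where the truncation error would exceed the tolerance, which is exactly the mechanism the paper uses to bound temporal error in the variable-density flow simulations.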
NASA Astrophysics Data System (ADS)
Gui, Y. L.; Zhao, Z. Y.; Zhou, H. Y.; Wu, W.
2016-10-01
In this paper, a cohesive fracture model is applied to model P-wave propagation through fractured rock mass using a hybrid continuum-discrete element method, i.e. the Universal Distinct Element Code (UDEC). First, the cohesive fracture model, together with the background of UDEC, is presented. The cohesive fracture model considers progressive failure of a rock fracture rather than abrupt damage, by simultaneously taking into account the elastic, plastic and damage mechanisms as well as a modified failure function. Then, a series of laboratory tests from the literature on P-wave propagation through rock mass containing a single fracture and two parallel fractures are introduced, and the numerical models used to simulate these laboratory tests are described. After that, all the laboratory tests are simulated and presented. The results show that the proposed model, particularly the cohesive fracture model, can capture very well the wave propagation characteristics in rock mass with non-welded and welded fractures, with and without filling materials. In addition, to assess the influence of the fracture on wave propagation, filling materials with different particle sizes and different fracture thicknesses are discussed. Both factors are found to be crucial for wave attenuation. The simulations also show that the frequency content of the transmitted wave is lowered after propagation through fractures. Finally, the developed numerical scheme is applied to two-dimensional wave propagation in the rock mass.
NASA Astrophysics Data System (ADS)
Stephens, C. R.; Shepson, P.; Liao, J.; Huey, L. G.; Apel, E. C.; Cantrell, C. A.; Flocke, F. M.; Fried, A.; Hall, S. R.; Hornbrook, R. S.; Knapp, D. J.; Mauldin, L.; Montzka, D.; Sive, B. C.; Ullman, K.; Weibring, P.; Weinheimer, A. J.
2012-12-01
The springtime depletion of tropospheric ozone in the Arctic is believed to be caused by active halogen photochemistry resulting from halogen atom precursors present on snow, ice, or aerosol surfaces. The role of bromine in driving ozone depletion events (ODEs) has been generally accepted from numerous field studies that have observed high concentrations of BrO and filterable bromide during this time. The presence of chlorine in the Arctic has been recognized, but much less is known about the role of chlorine radicals in ozone depletion chemistry. Iodine monoxide has yet to be successfully detected in the High Arctic, although there have been indications of active iodine chemistry through observed enhancements in filterable iodide and probable detection of IO. Despite decades of research, significant uncertainty remains regarding the chemical mechanisms associated with the bromine-catalyzed depletion of ozone, as well as the complex interactions that occur in the polar boundary layer due to halogen chemistry. We developed a 0-D, multiphase, photochemical model to investigate the chemistry of bromine, chlorine and iodine relating to the occurrence of ODEs. Our model is highly constrained to time-varying observations of O3, Cl2, Br2, OVOCs, and VOCs from the 2009 Ocean-Atmosphere-Sea Ice-Snowpack (OASIS) campaign in Barrow, Alaska. We investigated a 7-day period in late March to determine the contribution of Br, Cl, and potential contribution of I to ozone depletion and the interactions occurring between these three halogens under the chemical conditions observed. We find that while Br accounts for the majority of ozone depletion, iodine is more efficient on a per molecule basis and that both chlorine and iodine serve to enhance the Br-induced depletion of ozone through synergistic effects. Though Cl does not directly contribute significantly to ozone depletion, chlorine impacts bromine chemistry through ClO and RO2, which in turn impact BrOx propagation, and by
Numerical simulation of wave propagation in a realistic model of the human external ear.
Fadaei, Mohaddeseh; Abouali, Omid; Emdad, Homayoun; Faramarzi, Mohammad; Ahmadi, Goodarz
2015-01-01
In this study, a numerical investigation is performed to evaluate the propagation of high-pressure sinusoidal and blast waves around and inside a human external ear. A series of computed tomography images is used to reconstruct a realistic three-dimensional (3D) model of a human ear canal and the auricle. The airflow field is then computed by solving the governing differential equations in the time domain using computational fluid dynamics software. An unsteady algorithm is used to obtain the high-pressure wave propagation throughout the ear canal, validated against the available analytical and numerical data in the literature. The effects of frequency, wave shape, and the auricle on pressure distribution are then evaluated and discussed. The results clearly indicate that frequency plays a key role in the pressure distribution within the ear canal. At 4 kHz the pressure magnitude is much more amplified within the ear canal than at 2 or 6 kHz, for the incident wave angle of 90° investigated in this study, a finding attributable to the '4-kHz notch' in patients with noise-induced hearing loss. The pressure distribution patterns in the ear canal are very similar for a sinusoidal pressure waveform with a frequency of 2 kHz and for a blast wave. The ratio of the peak pressure at the eardrum to that at the canal entrance increases from about 8% to 30% as the peak pressure of the blast wave increases from 5 to 100 kPa. Furthermore, incorporating the auricle into the ear canal model increases centerline pressure magnitudes by about 50% for sinusoidal waves and 7% for blast waves throughout the ear canal, without any significant effect on the pressure distribution pattern along the canal.
NASA Technical Reports Server (NTRS)
Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.
2004-01-01
One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the
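The contrast between the univariate and multivariate OI updates can be illustrated in a toy two-variable (temperature, salinity) example, with the cross-covariance estimated from a synthetic ensemble standing in for the Monte-Carlo ensemble of model integrations; all numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble of model states [temperature, salinity] at one point,
# with correlated T/S errors (the ensemble provides the cross-covariance)
n_ens = 500
temp = rng.normal(20.0, 1.0, n_ens)
salt = 35.0 + 0.5 * (temp - 20.0) + rng.normal(0.0, 0.1, n_ens)
X = np.stack([temp, salt])                 # state ensemble, shape (2, n_ens)

B = np.cov(X)                              # multivariate background covariance
H = np.array([[1.0, 0.0]])                 # observe temperature only
R = np.array([[0.2 ** 2]])                 # observation-error variance

xb = X.mean(axis=1)                        # background state (ensemble mean)
y = np.array([21.0])                       # one temperature observation

# Multivariate OI update: xa = xb + K (y - H xb)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + (K @ (y - H @ xb)).ravel()

# Univariate OI: discard the cross-covariances -> salinity never updated
B_uni = np.diag(np.diag(B))
K_uni = B_uni @ H.T @ np.linalg.inv(H @ B_uni @ H.T + R)
xa_uni = xb + (K_uni @ (y - H @ xb)).ravel()
```

With a warm innovation and positively correlated T/S errors, the multivariate gain pulls salinity up along with temperature, while the univariate gain leaves salinity exactly unchanged, which is the detrimental behaviour the abstract attributes to UOI.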
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
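The comparison described can be reproduced with a bare-bones interval class standing in for a full library such as INTLAB; the function and input uncertainties below are made-up examples.

```python
class Interval:
    """Minimal interval arithmetic type (a sketch, not INTLAB)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    @property
    def half_width(self):
        return 0.5 * (self.hi - self.lo)

# f(x, y) = x*y + x with x = 2 +/- 0.01 and y = 3 +/- 0.02 (made-up inputs)
x = Interval(2.0 - 0.01, 2.0 + 0.01)
y = Interval(3.0 - 0.02, 3.0 + 0.02)
fi = x * y + x                       # guaranteed enclosure of f

# Standard first-order error propagation: df = |df/dx| dx + |df/dy| dy,
# with df/dx = y + 1 = 4 and df/dy = x = 2 at the midpoint
df_linear = 4.0 * 0.01 + 2.0 * 0.02
```

For this simple expression the interval half-width and the linearized error estimate agree closely, but the interval result needs no derivatives at all, which is the labour saving the abstract points to for complicated formulas.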
Chen, Jacqueline H.; Hawkes, Evatt R.; Sankaran, Ramanan; Mason, Scott D.; Im, Hong G.
2006-04-15
The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry with a view to providing better understanding and modeling of combustion processes in homogeneous charge compression-ignition engines. Numerical diagnostics are developed to analyze the mode of combustion and the dependence of overall ignition progress on initial mixture conditions. The roles of dissipation of heat and mass are divided conceptually into transport within ignition fronts and passive scalar dissipation, which modifies the statistics of the preignition temperature field. Transport within ignition fronts is analyzed by monitoring the propagation speed of ignition fronts using the displacement speed of a scalar that tracks the location of maximum heat release rate. The prevalence of deflagrative versus spontaneous ignition front propagation is found to depend on the local temperature gradient, and may be identified by the ratio of the instantaneous front speed to the laminar deflagration speed. The significance of passive scalar mixing is examined using a mixing timescale based on enthalpy fluctuations. Finally, the predictions of the multizone modeling strategy are compared with the DNS, and the results are explained using the diagnostics developed. (author)
Razzaq, Misbah; Ahmad, Jamil
2015-01-01
Internet worms are analogous to biological viruses, since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm, or to understand how to regulate a prevailing one, compartmental models are commonly used to examine the patterns and mechanisms of worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. In this study we therefore suggest a framework based on Petri nets and model checking through which these models can be rigorously examined and validated. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through model checking, we gained insight into the functionality of compartmental models. The model-checking results agree well with the simulation results, which fully supports the proposed framework. PMID:26713449
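The compartmental idea can be sketched with the deterministic mean-field version of an SEIR worm model (the paper analyses the stochastic Petri-net/CTMC formulation; this is the corresponding ODE approximation). The rates, population size and the crude quarantine-like cut in the infection rate are all hypothetical.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One forward-Euler step of the mean-field SEIR worm model.

    beta: infection rate, sigma: E->I latency rate, gamma: recovery rate
    (all illustrative values, not fitted to any real worm)."""
    n = s + e + i + r
    new_inf = beta * s * i / n
    ds = -new_inf
    de = new_inf - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr

# Quarantine-like intervention sketched by cutting beta after t = 20
s, e, i, r = 9999.0, 0.0, 1.0, 0.0
dt = 0.01
peak_i = 0.0
for step in range(int(100 / dt)):
    t = step * dt
    beta = 0.5 if t < 20 else 0.1          # hypothetical pre/post rates
    s, e, i, r = seir_step(s, e, i, r, beta, sigma=0.3, gamma=0.2, dt=dt)
    peak_i = max(peak_i, i)
```

The total population is conserved by construction, and cutting the infection rate early keeps the outbreak small, qualitatively the effect the quarantine strategy is shown to have in the stochastic analysis.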
Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David
2013-09-01
The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods. PMID:24103929
NASA Astrophysics Data System (ADS)
Bartsotas, Nikolaos S.; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Kallos, George
2015-04-01
Mountainous regions account for a significant part of the Earth's surface. Such areas are persistently affected by heavy precipitation episodes, which induce flash floods and landslides. Because in-situ observations are inadequate, remote sensing rainfall estimates have become central to the analysis of these events; in many mountainous regions worldwide they serve as the only available data source. However, well-known issues of remote sensing techniques over mountainous areas, such as the strong underestimation of precipitation associated with low-level orographic enhancement, limit the way these estimates can accommodate operational needs. Even locations that fall within the range of weather radars suffer from strong biases in precipitation estimates due to terrain blockage and vertical rainfall profile issues. A novel approach towards reducing the error in quantitative precipitation estimates lies in utilizing high-resolution numerical simulations to derive error correction functions for the corresponding satellite precipitation data. The correction functions examined consist of 1) mean field bias adjustment and 2) pdf matching, two procedures that are simple and have been widely used in gauge-based adjustment techniques. For the needs of this study, more than 15 selected storms over the mountainous Upper Adige region of Northern Italy were simulated at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS), benefiting from the explicit cloud microphysical scheme, the prognostic treatment of natural pollutants such as dust and sea salt, and the detailed SRTM90 topography implemented in the model. The proposed error correction approach is applied to three quasi-global and widely used satellite precipitation datasets (CMORPH, TRMM 3B42 V7 and PERSIANN), and the evaluation of the error model is based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge / 70 km2
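The two correction functions named above can be sketched on synthetic rain-rate samples: mean field bias adjustment rescales the satellite field so its mean matches the reference, while pdf matching maps each satellite value through a quantile-quantile (empirical CDF) transform. The distributions and bias factor below are invented for illustration.

```python
import numpy as np

def mean_field_bias(sat, ref):
    """Multiplicative mean-field bias factor from a reference dataset."""
    return ref.mean() / sat.mean()

def pdf_match(sat, ref):
    """Map each satellite value onto the reference distribution by
    matching empirical CDFs (quantile-quantile correction)."""
    qs = np.linspace(0.0, 1.0, 101)
    sat_q = np.quantile(sat, qs)
    ref_q = np.quantile(ref, qs)
    return np.interp(sat, sat_q, ref_q)

rng = np.random.default_rng(1)
ref = rng.gamma(2.0, 5.0, 2000)             # "model/gauge" rain intensities
sat = 0.5 * ref + rng.normal(0.0, 1.0, 2000)  # underestimating, noisy satellite
sat = np.clip(sat, 0.0, None)

corrected = sat * mean_field_bias(sat, ref)   # fixes the mean only
matched = pdf_match(sat, ref)                 # fixes the whole distribution
```

Bias adjustment guarantees the corrected mean equals the reference mean, but only pdf matching also pulls the heavy upper tail, which matters for the orographic-underestimation problem the abstract describes.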
NASA Astrophysics Data System (ADS)
Joyce, M.; Marcos, B.; Baertschiger, T.
2009-04-01
The effects of discreteness arising from the use of the N-body method on the accuracy of simulations of cosmological structure formation are not currently well understood. In the first part of this paper, we discuss the essential question of how the relevant parameters introduced by this discretization should be extrapolated in convergence studies if the goal is to recover the Vlasov-Poisson limit. In the second part of the paper, we study numerically, and with analytical methods developed recently by us, the central issue of how finite particle density affects the precision of results above the force-smoothing scale. In particular, we focus on the precision of results for the power spectrum at wavenumbers around and above the Nyquist wavenumber, in simulations in which the force resolution is taken to be smaller than the initial interparticle spacing. Using simulations of identical theoretical initial conditions sampled on four different `pre-initial' configurations (three different Bravais lattices and a glass), we obtain a lower bound on the real discreteness error. With the guidance of our analytical results, which match extremely well this measured dispersion into the weakly non-linear regime, and of further controlled tests for dependences on the relevant discreteness parameters, we establish with confidence that the measured dispersion is not contaminated either by finite box size effects or by subtle numerical effects. Our results notably show that, at wavenumbers below the Nyquist wavenumber, the dispersion increases monotonically in time throughout the simulation, while the same is true above the Nyquist wavenumber once non-linearity sets in. For normalizations typical of cosmological simulations, we find lower bounds on errors at the Nyquist wavenumber of the order of 1 per cent, and larger above this scale. Our main conclusion is that the only way this error may be reduced below these levels at these physical scales, and indeed convergence to the
Flatte; Gerber
2000-06-01
We have simulated optical propagation through atmospheric turbulence in which the spectrum near the inner scale follows that of Hill and Clifford [J. Opt. Soc. Am. 68, 892 (1978)] and the turbulence strength puts the propagation into the asymptotic strong-fluctuation regime. Analytic predictions for this regime have the form of power laws as a function of β0², the irradiance variance predicted by weak-fluctuation (Rytov) theory, and l0, the inner scale. The simulations indeed show power laws for both spherical-wave and plane-wave initial conditions, but the power-law indices are dramatically different from the analytic predictions. Let σI² − 1 = a(β0²/βc²)^(−b)(l0/Rf)^c, where we take the reference value of β0² to be βc² = 60.6, because this is the center of our simulation region. For zero inner scale (for which c = 0), the analytic prediction is b = 0.4 and a = 0.17 (0.37) for a plane (spherical) wave. Our simulations for a plane wave give a = 0.234 ± 0.007 and b = 0.50 ± 0.07, and for a spherical wave they give a = 0.58 ± 0.01 and b = 0.65 ± 0.05. For finite inner scale the analytic prediction is b = 1/6, c = 7/18 and a = 0.76 (2.07) for a plane (spherical) wave. We find that to a reasonable approximation the behavior with β0² and l0 indeed factorizes as predicted, and each part behaves like a power law. However, our simulations for a plane wave give a = 0.57 ± 0.03, b = 0.33 ± 0.03, and c = 0.45 ± 0.06. For spherical waves we find a = 3.3 ± 0.3, b = 0.45 ± 0.05, and c = 0.8 ± 0.1.
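The power-law index and amplitude in a relation of this form are typically extracted by linear regression in log-log coordinates. The sketch below fabricates scintillation data with the plane-wave zero-inner-scale values quoted above plus a small scatter, then recovers them; the sample points and noise level are invented.

```python
import numpy as np

# Synthetic data obeying sigma_I^2 - 1 = a * (beta0^2 / beta_c^2)^(-b)
a_true, b_true, beta_c2 = 0.234, 0.50, 60.6
beta2 = np.geomspace(20.0, 200.0, 15)
sigma2_minus_1 = a_true * (beta2 / beta_c2) ** (-b_true)
# multiplicative 2% scatter standing in for simulation noise
sigma2_minus_1 *= np.exp(np.random.default_rng(2).normal(0.0, 0.02, 15))

# Log-log linear regression: slope = -b, intercept = ln(a)
coef = np.polyfit(np.log(beta2 / beta_c2), np.log(sigma2_minus_1), 1)
b_fit, a_fit = -coef[0], np.exp(coef[1])
```

Because the reference value βc² is chosen at the center of the simulated range, the slope and intercept estimates are nearly uncorrelated, which is presumably why the paper normalizes by βc² = 60.6.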
Simulation Of Broadband Seismic Wave Propagation In A Deep Mine in Sudbury Ontario Canada
NASA Astrophysics Data System (ADS)
Saleh, R.; Chen, H.; Milkereit, B.; Liu, Q.
2014-12-01
In an active underground mine, the amplitudes and travel times of seismic events are critical parameters that have to be determined at various locations. These parameters are useful for better understanding the process of spatial and temporal stress distribution in a mine. In this study, variations of travel time and amplitude of seismic waves derived from conventional constant-velocity models are compared to those derived from a 3D variable velocity model. The results show a significant variation in seismic energy distribution at the mine due to the presence of very strong elastic contrasts, and the observed complexity of the propagated seismic waves requires the use of a variable velocity model. An active deep mine located in Sudbury, Ontario, Canada hosted this study. Dense 3D arrays of geophones, distributed around the ore bodies, have been monitoring controlled production blasts and microseismic events since the mine started production. It is shown here that the conventional empirical method used to calculate peak particle velocities and accelerations (PPVs/PPAs) tends to underestimate the intensity of seismic waves in stopes or areas close to blast sites. This could be corrected if a more realistic model were implemented. Comparing the travel time information from events recorded over the past few years shows temporal changes in the mine velocity model as mining progressed; updating the velocity model of the mine is thus needed if better accuracy of event location is required. In this study, a 2D/3D finite difference modeling method is used.
Numerical Simulation for sonic boom propagation through an Inhomogeneous atmosphere with winds
NASA Astrophysics Data System (ADS)
Yamamoto, Masafumi; Hashimoto, Atsushi; Takahashi, Takashi; Kamakura, Tomoo; Sakai, Takeharu
2012-09-01
Noise annoyance due to sonic boom is one of the serious problems for the development of next-generation supersonic transport. To decrease this sonic boom noise, design and analysis techniques are being developed at the Japan Aerospace Exploration Agency (JAXA). To predict the sonic boom on the ground accurately, we have developed a numerical code (Xnoise) using the augmented Burgers equation combined with ray tracing. In this method, the effects of nonlinearity, geometrical spreading, inhomogeneity of the atmosphere, thermoviscous attenuation, molecular vibration relaxation and winds are taken into account. This method gives an estimate of the rise times of ground signatures without resorting to weak shock theory and area balancing techniques. The nonlinear term is evaluated by a finite difference scheme, and an explicit updating methodology is adopted in the ray-path calculation. The augmented Burgers equation is numerically solved using the operator splitting method entirely in the time domain. For the effects of nonlinearity, geometrical spreading, and atmospheric inhomogeneity, the result obtained with the augmented Burgers equation agrees well with that obtained with the waveform parameter method (Thomas' method). For the effects of absorption and dispersion, the calculation based on the augmented Burgers equation is verified by comparison with a detailed one-dimensional CFD analysis. Moreover, we show calculations that account for the effect of winds on the propagation of a sonic boom. Validation of the model is future work.
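The operator-splitting idea can be sketched on the plain viscous Burgers equation u_t + u u_x = ν u_xx: each time step applies the nonlinear steepening term, then the thermoviscous absorption term. The full augmented equation adds geometrical spreading, relaxation and stratification factors, each handled as a further split substep; the grid, viscosity and time step below are illustrative, and the schemes (conservative upwind plus explicit diffusion) are generic, not Xnoise's.

```python
import numpy as np

nx, L = 400, 2.0 * np.pi
dx = L / nx
x = np.arange(nx) * dx
nu, dt, nsteps = 0.05, 1e-3, 1000        # nu*dt/dx^2 ~ 0.2 < 0.5: stable

u = np.sin(x)                            # initial waveform (periodic)
for _ in range(nsteps):
    # 1) nonlinear steepening step: conservative upwind form of (u^2/2)_x
    f = 0.5 * u ** 2
    dfdx = np.where(u > 0,
                    (f - np.roll(f, 1)) / dx,      # backward difference
                    (np.roll(f, -1) - f) / dx)     # forward difference
    u = u - dt * dfdx
    # 2) absorption step: explicit second-difference diffusion
    u = u + dt * nu * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
```

Splitting lets each physical effect use its own natural discretization, which is why the augmented Burgers equation can be marched entirely in the time domain; the sine wave here steepens toward a shock whose rise time is set by the absorption term rather than by area balancing.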
We developed and applied a spatially-explicit, eco-hydrologic model to examine how a landscape disturbance affects hydrologic processes, ecosystem cycling of C and N, and ecosystem structure. We simulated how the pattern and magnitude of tree removal in a catchment influences fo...
NASA Astrophysics Data System (ADS)
Morency, C.; Tromp, J.
2008-12-01
The mathematical formulation of wave propagation in porous media developed by Biot is based upon the principle of virtual work, ignoring processes at the microscopic level, and does not explicitly incorporate gradients in porosity. Based on recent studies focusing on averaging techniques, we derive the macroscopic porous medium equations from the microscale, with a particular emphasis on the effects of gradients in porosity. In doing so, we are able to naturally determine two key terms in the momentum equations and constitutive relationships that directly translate the coupling between the solid and fluid phases, namely a drag force and an interfacial strain tensor. Gradients in porosity arise in both terms. One remarkable result is that when we rewrite this set of equations in terms of the well-known Biot variables (us, w), terms involving gradients in porosity are naturally accommodated by gradients involving w, the fluid motion relative to the solid, and Biot's formulation is recovered, i.e., it remains valid in the presence of porosity gradients. We have developed a numerical implementation of the Biot equations for two-dimensional problems based upon the spectral-element method (SEM) in the time domain. The SEM is a high-order variational method, which has the advantage of accommodating complex geometries like a finite-element method, while keeping the exponential convergence rate of (pseudo)spectral methods. As in the elastic and acoustic cases, poroelastic wave propagation based upon the SEM involves a diagonal mass matrix, which leads to explicit time integration schemes that are well-suited to simulations on parallel computers. Effects associated with physical dispersion and attenuation and frequency-dependent viscous resistance are addressed by using a memory variable approach. Various benchmarks involving poroelastic wave propagation in the high- and low-frequency regimes, and acoustic-poroelastic and poroelastic-poroelastic discontinuities have been
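The remark that a diagonal mass matrix permits explicit time integration can be illustrated with a toy 1-D scalar wave equation on linear elements (not the poroelastic SEM itself): with M diagonal, the "solve" at each step is an elementwise divide. The lumped mass, CFL factor, and fixed-end boundaries below are illustrative assumptions.

```python
import numpy as np

# Why a diagonal (lumped) mass matrix enables explicit stepping:
# u_{n+1} = 2u_n - u_{n-1} + dt^2 * M^{-1} (f - K u), with M^{-1} trivial.

n, L, c = 200, 1.0, 1.0
h = L / n
x = np.linspace(0.0, L, n + 1)

M = np.full(n + 1, h)                 # lumped (diagonal) mass matrix
M[0] = M[-1] = h / 2                  # half-mass at the end nodes

def stiffness_apply(u):
    """Apply K u for linear elements: the (c^2/h) tridiagonal stencil."""
    Ku = np.zeros_like(u)
    Ku[1:-1] = c**2 / h * (2 * u[1:-1] - u[:-2] - u[2:])
    return Ku

dt = 0.5 * h / c                      # CFL-limited explicit step
u = np.exp(-((x - 0.5) / 0.05) ** 2)  # initial displacement pulse
u_prev = u.copy()                     # zero initial velocity

for _ in range(400):
    a = -stiffness_apply(u) / M       # diagonal "solve": elementwise divide
    u, u_prev = 2 * u - u_prev + dt**2 * a, u
    u[0] = u[-1] = 0.0                # fixed ends
```

No linear system is assembled or factored anywhere, which is what makes the scheme embarrassingly parallel, as the abstract notes.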
NASA Astrophysics Data System (ADS)
Ávila-Carrera, R.; Sánchez-Sesma, F. J.; Spurlin, James H.; Valle-Molina, C.; Rodríguez-Castellanos, A.
2014-09-01
An analytic formulation to understand the scattering, diffraction and attenuation of elastic waves in the neighborhood of fluid-filled wells is presented. An important, and not widely exploited, technique to carefully investigate wave propagation in exploration wells is the logging of sonic waveforms. Fundamental decisions and production planning in petroleum reservoirs are made by interpretation of such recordings. Nowadays, geophysicists and engineers face problems related to acquisition and interpretation under the complex conditions associated with conducting open-hole measurements. A crucial problem that directly affects the response of sonic logs is the eccentricity of the measuring tool with respect to the center of the borehole. Even with the employment of centralizers, this simple variation dramatically changes the physical conditions of wave propagation around the well. Recent works in the numerical field reported advanced studies in modeling and simulation of acoustic wave propagation around wells, including complex heterogeneities and anisotropy. However, no analytical efforts have been made to formally understand wireline sonic logging measurements acquired with borehole-eccentered tools. In this paper, Graf's addition theorem was used to describe monopole sources in terms of solutions of the wave equation. The formulation was developed from the three-dimensional discrete wave-number method in the frequency domain. The cylindrical Bessel functions of the third kind and order zero were re-derived to obtain a simplified set of equations projected onto a bi-dimensional plane-space for displacements and stresses. This new and condensed analytic formulation allows the straightforward calculation of all converted modes and their visualization in the time domain via Fourier synthesis. The main aim was to obtain spectral surfaces of transfer functions and synthetic seismograms that might be useful to understand the wave motion produced by the
NASA Astrophysics Data System (ADS)
Haan, S. W.; Herrmann, M. C.; Salmonson, J. D.; Amendt, P. A.; Callahan, D. A.; Dittrich, T. R.; Edwards, M. J.; Jones, O. S.; Marinak, M. M.; Munro, D. H.; Pollaine, S. M.; Spears, B. K.; Suter, L. J.
2007-08-01
Targets intended to produce ignition on NIF are being simulated and the simulations are used to set specifications for target fabrication and other program elements. Recent design work has focused on designs that assume only 1.0 MJ of laser energy instead of the previous 1.6 MJ. To perform with less laser energy, the hohlraum has been redesigned to be more efficient than previously, and the capsules are slightly smaller. Three hohlraum designs are being examined: gas fill, SiO2 foam fill, and SiO2 lined. All have a cocktail wall, and shields mounted between the capsule and the laser entrance holes. Two capsule designs are being considered. One has a graded doped Be(Cu) ablator, and the other a graded doped CH(Ge) ablator. Both can perform acceptably with recently demonstrated ice layer quality, and with recently demonstrated outer surface roughness. Complete tables of specifications are being prepared for both targets, to be completed this fiscal year. All the specifications are being rolled together into an error budget indicating adequate margin for ignition with the new designs. The dominant source of error is hohlraum asymmetry at intermediate modes 4-8, indicating the importance of experimental techniques to measure and control this asymmetry.
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
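The core idea, Monte Carlo perturbation of boundary coordinates propagated to polygon area, can be sketched as follows. This toy version applies independent vertex noise to a single polygon; the paper's model additionally handles spatial dependence of errors along boundary segments, which this sketch does not attempt.

```python
import math
import random

# Hedged sketch: propagate coordinate error to polygon area by Monte Carlo
# perturbation of boundary vertices (independent noise only; illustrative).

def shoelace_area(pts):
    """Unsigned polygon area via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

square = [(0, 0), (100, 0), (100, 100), (0, 100)]  # true area 10000
sigma = 1.0                                         # coordinate error std dev (assumed)

random.seed(1)
areas = []
for _ in range(2000):
    jittered = [(px + random.gauss(0, sigma), py + random.gauss(0, sigma))
                for px, py in square]
    areas.append(shoelace_area(jittered))

mean_a = sum(areas) / len(areas)
std_a = math.sqrt(sum((a - mean_a) ** 2 for a in areas) / (len(areas) - 1))
```

With independent vertex noise the area estimate is essentially unbiased, and the spread grows with boundary length, consistent with the abstract's point that error generation depends on boundary geometry.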
NASA Astrophysics Data System (ADS)
Gavrilov, Nikolai M.; Kshevetskii, Sergey P.
2014-12-01
Three-dimensional nonlinear breaking acoustic-gravity waves (AGWs) propagating from the Earth's surface to the upper atmosphere are simulated numerically. Horizontally moving periodic structures of vertical velocity at the Earth's surface are used as AGW sources in the model. The 3D algorithm for solving the hydrodynamic equations uses finite-difference analogues of the basic conservation laws. This approach allows us to select physically correct generalized wave solutions of the hydrodynamic equations. The numerical simulation covers altitudes from the ground up to 500 km. Vertical profiles of the mean temperature, density, molecular viscosity, and thermal conductivity are specified from standard models of the atmosphere. Within a few minutes of activating the surface wave forcing, atmospheric waves can propagate to altitudes above 100 km. Surfaces of constant phase are quasi-vertical at first, and then become inclined to the horizon below about 100 km after some transition time interval. Vertical wavelengths decrease with time and tend to theoretically predicted values after times longer than several periods of the wave forcing. The decrease in vertical wavelengths and increase in AGW amplitudes can lead to wave instabilities, accelerations of the mean flow, and wave-induced jet streams at altitudes above 100 km. AGWs may transport amplitude modulation of atmospheric wave sources in horizontal directions up to very high levels. Low wave amplitudes at the beginning of the transition processes after activation of atmospheric wave sources could be an additional reason for slower amplitude growth with height compared to the nondissipative exponential growth predicted for stationary linear AGWs. Production of wave-induced mean jets and their superposition with nonlinear unstable dissipative AGWs can produce strong narrow peaks of horizontal speed in the upper atmosphere. This may increase the role of transient nonstationary waves in effective energy transport and variations of
NASA Astrophysics Data System (ADS)
Costantino, L.; Heinrich, P.; Mzé, N.; Hauchecorne, A.
2015-09-01
In this work we perform numerical simulations of convective gravity waves (GWs), using the WRF (Weather Research and Forecasting) model. We first run an idealized, simplified and highly resolved simulation with the model top at 80 km. Below 60 km of altitude, a vertical grid spacing smaller than 1 km is expected to reliably resolve the effects of GW breaking. An eastward linear wind shear interacts with the GW field generated by a single convective thunderstorm. After 70 min of integration time, averaging within a radius of 300 km from the storm centre, results show that wave breaking in the upper stratosphere is largely dominated by saturation effects, driving an average drag force up to -41 m s-1 day-1. In the lower stratosphere, the mean wave drag is positive and equal to 4.4 m s-1 day-1. In a second step, realistic WRF simulations are compared with lidar measurements from the NDACC network (Network for the Detection of Atmospheric Composition Changes) of gravity wave potential energy (Ep) over OHP (Haute-Provence Observatory, southern France). Using a vertical grid spacing smaller than 1 km below 50 km of altitude, WRF seems to reliably reproduce the effects of GW dynamics and capture qualitative aspects of wave momentum and energy propagation and transfer to the background mean flow. Averaging within a radius of 120 km from the storm centre, the resulting drag force for the study case (2 h storm) is negative in the upper (-1 m s-1 day-1) and positive in the lower stratosphere (0.23 m s-1 day-1). Vertical structures of the simulated potential energy profiles are found to be in good agreement with those measured by lidar. Ep is mostly conserved with altitude in August while, in October, Ep decreases in the upper stratosphere and grows again in the lower mesosphere. On the other hand, the magnitude of the simulated wave energy is clearly underestimated relative to lidar data, by a factor of about 3-4.
Turbulence Scales, Rise Times, Caustics, and the Simulation of Sonic Boom Propagation
NASA Technical Reports Server (NTRS)
Pierce, Allan D.
1996-01-01
The general topic of atmospheric turbulence effects on sonic boom propagation is addressed, with special emphasis on taking proper and efficient account of the contributions of the portion of the turbulence that is associated with extremely high wavenumber components. The recent work reported by Bart Lipkens in his doctoral thesis is reexamined to determine whether the good agreement between his measured rise times and the 1971 theory of the author is fortuitous. It is argued that Lipkens's estimate of the distance to the first caustic was a gross overestimate because of the use of a sound speed correlation function shaped like a Gaussian curve. In particular, it is argued that the expected distance to the first caustic varies with the kinematic viscosity nu, the energy epsilon dissipated per unit mass per unit time, and the sound speed c as: d(sub first caustic) = nu(exp 7/12) c(exp 2/3)/epsilon(exp 5/12) (nu x epsilon/c(exp 4))(exp a), where the exponent a is greater than -7/12 and can be argued to be either 0 or 1/24. In any event, the surprising aspect of the relationship is that it actually goes to zero as the viscosity goes to zero with epsilon held constant. It is argued that the apparent overabundance of caustics can be grossly reduced by a general computational and analytical perspective that partitions the turbulence into two parts, divided by a wavenumber k(sub c). Wavenumbers higher than k(sub c) correspond to small-scale turbulence, and the associated turbulence can be taken into account by a renormalization of the ambient sound speed so that the result has a small frequency dependence that results from a spatial averaging over the smaller-scale turbulent fluctuations. Selection of k(sub c) can be made so large that only a very small number of caustics are encountered if one adopts the premise that the frequency dispersion of pulses is caused by that part of the turbulence spectrum which lies in the inertial range originally predicted by Kolmogoroff. The
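The quoted scaling can be evaluated numerically to see its behavior as nu goes to zero. The input values below are rough atmospheric magnitudes chosen only for illustration, and the relation as quoted carries no prefactor, so absolute distances are not meaningful here, only the trends.

```python
# Numeric evaluation of the caustic-distance scaling quoted above:
# d_first = nu^(7/12) * c^(2/3) / eps^(5/12) * (nu*eps/c^4)^a
# nu: kinematic viscosity, eps: dissipation rate, c: sound speed.
# Values below are rough illustrative magnitudes, and the missing
# dimensionless prefactor means only relative behavior is meaningful.

def d_first_caustic(nu, eps, c, a):
    return nu ** (7 / 12) * c ** (2 / 3) / eps ** (5 / 12) * (nu * eps / c**4) ** a

nu, eps, c = 1.5e-5, 1e-3, 340.0     # air viscosity [m^2/s], eps [W/kg], c [m/s]

d0 = d_first_caustic(nu, eps, c, a=0.0)
d1 = d_first_caustic(nu, eps, c, a=1 / 24)

# The surprising limit noted in the abstract: the scaling vanishes as
# nu -> 0 with eps and c held fixed.
d_small = d_first_caustic(nu * 1e-6, eps, c, a=0.0)
```

Both candidate exponents (a = 0 and a = 1/24) preserve the key qualitative feature: the predicted distance shrinks to zero with the viscosity.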
Molecular-Level Simulations of Shock Generation and Propagation in Soda-Lime Glass
NASA Astrophysics Data System (ADS)
Grujicic, M.; Bell, W. C.; Pandurangan, B.; Cheeseman, B. A.; Fountzoulas, C.; Patel, P.
2012-08-01
A non-equilibrium molecular dynamics method is employed to study the mechanical response of soda-lime glass (a material commonly used in transparent armor applications) when subjected to the loading conditions associated with the generation and propagation of planar shock waves. Specific attention is given to the identification and characterization of various (inelastic-deformation and energy-dissipation) molecular-level phenomena and processes taking place at, or in the vicinity of, the shock front. The results obtained revealed that the shock loading causes a 2-4% (shock strength-dependent) density increase. In addition, an increase in the average coordination number of the silicon atoms is observed along with the creation of smaller Si-O rings. These processes are associated with substantial energy absorption and dissipation and are believed to greatly influence the blast/ballistic impact mitigation potential of soda-lime glass. The present work was also aimed at the determination of the shock Hugoniot (i.e., a set of axial stress vs. density/specific-volume vs. internal energy vs. particle velocity vs. temperature) material states obtained in soda-lime glass after the passage of a shock wave of a given strength (as quantified by the shock speed). The availability of a shock Hugoniot is critical for construction of a high deformation-rate, large-strain, high pressure material model which can be used within a continuum-level computational analysis to capture the response of a soda-lime glass based laminated transparent armor structure (e.g., a military vehicle windshield, door window, etc.) to blast/ballistic impact loading.
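A shock Hugoniot of the kind described can be organized, for illustration, with the Rankine-Hugoniot jump conditions closed by a linear shock-speed/particle-velocity fit Us = c0 + s*up. The fit coefficients below are generic placeholders, not the paper's MD-derived soda-lime-glass values.

```python
# Hedged sketch: Hugoniot states from mass/momentum/energy jump conditions
# plus a linear Us-up closure. Parameters are placeholders, not the paper's.

rho0 = 2500.0          # initial density [kg/m^3] (assumed)
c0, s = 3500.0, 1.3    # Us-up fit intercept [m/s] and slope (assumed)

def hugoniot_state(up):
    """Shock speed, axial stress, density, and specific-energy jump behind the shock."""
    Us = c0 + s * up                         # shock speed from the linear fit
    P = rho0 * Us * up                       # momentum jump (ambient P0 ~ 0)
    rho = rho0 * Us / (Us - up)              # mass conservation across the front
    de = 0.5 * P * (1.0 / rho0 - 1.0 / rho)  # energy jump; equals up**2/2 when P0 = 0
    return Us, P, rho, de

Us, P, rho, de = hugoniot_state(1000.0)      # 1 km/s particle velocity
```

Tabulating such states over a range of up is exactly the "axial stress vs. density vs. internal energy vs. particle velocity" locus the abstract refers to; a continuum code then interpolates on it.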
NASA Astrophysics Data System (ADS)
Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan
2015-10-01
Modelling glacial lake outburst floods (GLOFs), or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source-to-impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs with different spatial resolution. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, results derived from the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those derived from a finer-resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) DEM. Overall, we contend that the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
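The stochastic first-pass idea as described, perturb the DEM, route a least-cost path per realization, and accumulate per-cell frequencies into a relative-risk continuum, can be sketched on a toy grid. The cost function, noise level, and grid below are assumptions; this is not the published MC-LCP implementation.

```python
import heapq
import random

# Hedged sketch of a Monte Carlo least-cost-path workflow: many DEM error
# realizations, one Dijkstra routing each, visit counts as relative hazard.

random.seed(0)
R, C = 20, 20
dem = [[r * 1.0 + 0.1 * c for c in range(C)] for r in range(R)]  # toy sloping DEM [m]

def least_cost_path(z, start, goal):
    """Dijkstra over 4-neighbours; the step cost penalises uphill moves."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < R and 0 <= nc < C:
                nd = d + 1.0 + max(0.0, z[nr][nc] - z[r][c])  # uphill penalty
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path

visits = [[0] * C for _ in range(R)]
for _ in range(50):                      # Monte Carlo DEM realizations
    zpert = [[z + random.gauss(0, 0.5) for z in row] for row in dem]
    for r, c in least_cost_path(zpert, (R - 1, 0), (0, C - 1)):
        visits[r][c] += 1
```

Cells visited by many realizations are robustly on the routing corridor; cells visited by few realizations express the DEM-driven uncertainty the abstract argues should be communicated.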
Simulation of nonlinear propagation of biomedical ultrasound using PZFlex and the KZK Texas code
NASA Astrophysics Data System (ADS)
Qiao, Shan; Jackson, Edward; Coussios, Constantin-C.; Cleveland, Robin
2015-10-01
In biomedical ultrasound, nonlinear acoustics can be important in both diagnostic and therapeutic applications, and robust simulation tools are needed both in the design process and for day-to-day use such as treatment planning. For most biomedical applications the ultrasound sources generate focused sound beams of finite amplitude. The KZK equation is a common model, as it accounts for nonlinearity, absorption and paraxial diffraction, and a number of solvers are available, primarily developed by research groups. We compare the predictions of the KZK Texas code (a finite-difference time-domain algorithm) to an FEM-based commercial software package, PZFlex. PZFlex solves the continuity and momentum conservation equations with a correction for nonlinearity in the equation of state, incorporated using an incrementally linear, 2nd-order accurate, explicit algorithm in the time domain. Nonlinear ultrasound beams from two transducers driven at 1 MHz and 3.3 MHz respectively were simulated by both the KZK Texas code and PZFlex, and the pressure field was also measured by a fibre-optic hydrophone to validate the models. Further simulations were carried out over a wide range of frequencies. The comparisons showed good agreement at the fundamental frequency for PZFlex, the KZK Texas code and the experiments. For the harmonic components, the KZK Texas code was in good agreement with measurements, but PZFlex underestimated the amplitude: by 32% for the 2nd harmonic and 66% for the 3rd harmonic. The underestimation of harmonics by PZFlex became more significant as the fundamental frequency increased. Furthermore, non-physical oscillations occurred in the axial profile of the harmonics in the PZFlex results when the amplitudes were relatively low. These results suggest that careful benchmarking of nonlinear simulations is important.
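A back-of-envelope check consistent with the reported trend (stronger harmonic generation at higher drive frequency) is the plane-wave shock-formation distance xbar = rho*c^3/(beta*omega*p0), which shrinks as frequency rises. The water-like parameters and source amplitude below are assumptions for illustration, not the paper's transducer settings.

```python
import math

# Hedged estimate: plane-wave shock-formation distance in a water-like medium.
# Shorter xbar at higher frequency means nonlinearity (harmonics) matters sooner.

rho, c, beta = 1000.0, 1500.0, 3.5     # density, sound speed, nonlinearity coeff.
p0 = 1e6                                # 1 MPa source pressure amplitude (assumed)

def shock_distance(f_hz):
    return rho * c**3 / (beta * 2 * math.pi * f_hz * p0)

x1 = shock_distance(1.0e6)   # roughly 0.15 m at 1 MHz for these values
x2 = shock_distance(3.3e6)   # proportionally shorter at 3.3 MHz
```

Since xbar scales as 1/f, the 3.3 MHz beam accumulates nonlinear distortion over a 3.3x shorter distance, which is why numerical errors in the harmonics become more consequential at the higher frequency.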
NASA Technical Reports Server (NTRS)
Rosch, E.
1975-01-01
The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates are associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables were clarified under a more complex situation involving simulated flight. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.
NASA Astrophysics Data System (ADS)
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.
2016-05-01
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Ultimately, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.
Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.
1999-05-06
Understanding of single- and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum-error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.
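One reason flow and transport are more error-sensitive than the raw aperture field: under the local cubic law, fracture transmissivity scales as aperture cubed, so small relative aperture errors are amplified roughly threefold in the flux. A hedged numerical illustration (synthetic apertures and error level are assumptions, not the paper's measured fields):

```python
import random

# Hedged illustration of cubic-law error amplification: compare mean relative
# error in measured aperture vs. in aperture-cubed (proportional to flux).

random.seed(2)
b_true = [1.0 + 0.2 * random.random() for _ in range(5000)]   # apertures [mm] (synthetic)
sigma = 0.02                                                  # measurement error std dev
b_meas = [b + random.gauss(0, sigma) for b in b_true]

ap_err = sum(abs(m - t) / t for t, m in zip(b_true, b_meas)) / len(b_true)
q_err = sum(abs(m**3 - t**3) / t**3 for t, m in zip(b_true, b_meas)) / len(b_true)
amplification = q_err / ap_err    # close to 3 for small relative errors
```

The factor of about 3 follows from d(b^3)/b^3 = 3 db/b for small perturbations; transport, which depends on the spatial structure of the velocity field and not just its mean, is more sensitive still, consistent with the abstract.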
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of the solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacterial growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács-model) and one based on parabolic (the more recently presented Plósz-model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from the settler model structure to the biokinetic model, the impact of the SST model as a sub-model in a plant-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer
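The convection-dispersion settler class discussed above can be sketched as a minimal explicit 1-D solver, dX/dt = -d(v_s(X) X)/dz + D d2X/dz2 with a Vesilind settling velocity v_s = v0*exp(-r_h*X). This is a generic illustration with assumed parameters and simplified boundaries (gravity flux leaves through the bottom only), not the Takács or Plósz model as calibrated in the paper.

```python
import numpy as np

# Hedged sketch of a 1-D convection-dispersion settler column.
# Index 0 = top of the settler, index n-1 = bottom; solids settle downward.

n, H = 50, 4.0                 # layers, settler depth [m]
dz = H / n
v0, rh = 1.9e-3, 0.4           # Vesilind parameters [m/s], [m3/kg] (assumed)
D = 1e-4                       # explicit dispersion coefficient [m2/s] (assumed)
dt = 4.0                       # explicit time step [s], well inside CFL limits

X = np.full(n, 3.0)            # initial solids concentration [kg/m3]
for _ in range(500):
    vs = v0 * np.exp(-rh * X)                 # hindered settling velocity
    flux = vs * X                             # downward gravity flux [kg/m2/s]
    dflux = np.empty(n)
    dflux[0] = flux[0] / dz                   # nothing settles in from above
    dflux[1:] = (flux[1:] - flux[:-1]) / dz   # upwind divergence of the flux
    Xp = np.concatenate(([X[0]], X, [X[-1]])) # reflected ends: no diffusive flux
    lap = (Xp[2:] - 2 * Xp[1:-1] + Xp[:-2]) / dz**2
    X = np.clip(X + dt * (-dflux + D * lap), 0.0, None)
```

Solids accumulate toward the bottom (a crude sludge blanket) while mass leaves only through the bottom flux; the "numerical dispersion" contrast the abstract draws would appear here as the difference between the explicit D term and the smearing introduced by the upwind difference itself.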
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.; Hewson, John C.; Kumar, Pritvi Raj; Ling, Julia; Najm, Habib N.; Ruiz, Anthony; Safta, Cosmin; Sargsyan, Khachik; Stewart, Alessia; Wagner, Gregory
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
NASA Technical Reports Server (NTRS)
Vo, Q. D.
1984-01-01
A program which was written to simulate Real Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10{sup -6} for corresponding concatenated systems. A (6,6/30) FUM code, 6-bit Reed-Solomon code combination was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to simulate the symbol-error probability of these codes.
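The Monte Carlo ingredient of such a study, estimating an error rate over a 3-bit quantized AWGN channel, can be sketched for plain uncoded BPSK. The convolutional decoder itself is not reproduced here, and the amplitude convention and quantizer range are assumptions for illustration.

```python
import math
import random

# Hedged sketch: Monte Carlo bit-error rate of uncoded BPSK through a
# 3-bit (8-level) uniform quantizer on an AWGN channel with unit-variance noise.

random.seed(3)

def ber_bpsk_quantized(snr_db, n_bits=200000):
    amp = math.sqrt(2.0 * 10 ** (snr_db / 10.0))   # sqrt(2 Eb/N0) with sigma = 1
    errors = 0
    for _ in range(n_bits):
        r = amp + random.gauss(0.0, 1.0)           # received sample for a "+1" bit
        q = max(-4, min(3, math.floor(r)))         # 3-bit quantizer, levels -4..3
        if q < 0:                                  # hard decision on the sign level
            errors += 1
    return errors / n_bits

ber4 = ber_bpsk_quantized(4.0)   # theory for unquantized BPSK: Q(2.24), about 1.25e-2
```

A decoder simulation like the one described layers the code's trellis metrics on top of exactly this kind of quantized-channel sampling loop.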
Solovchuk, Maxim; Sheu, Tony W H; Thiriet, Marc
2013-11-01
This study investigates the influence of blood flow on temperature distribution during high-intensity focused ultrasound (HIFU) ablation of liver tumors. A three-dimensional acoustic-thermal-hydrodynamic coupling model is developed to compute the temperature field in the hepatic cancerous region. The model is based on the nonlinear Westervelt equation and on bioheat equations for the perfused tissue and blood flow domains. The nonlinear Navier-Stokes equations are employed to describe the flow in large blood vessels. The effect of acoustic streaming is also taken into account in the present HIFU simulation study. A simulation of the Westervelt equation requires a prohibitively large amount of computer resources; therefore, a sixth-order accurate acoustic scheme on a three-point stencil was developed for effectively solving the nonlinear wave equation. Results show that a focused ultrasound beam with a peak intensity of 2470 W/cm(2) can induce acoustic streaming velocities up to 75 cm/s in a vessel with a diameter of 3 mm. The predicted temperature difference between the cases with and without the acoustic streaming effect is 13.5 °C, or 81%, on the blood vessel wall for the vein. Tumor necrosis was studied in a region close to major vessels. The theoretical feasibility of safely necrotizing tumors close to major hepatic arteries and veins was shown. PMID:24180802
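The bioheat part of such a model can be illustrated with a 1-D explicit Pennes-type solver, rho*c*dT/dt = k*d2T/dx2 - w_b*c_b*(T - T_a) + Q, where the perfusion term is the simplest stand-in for the vascular cooling the paper resolves explicitly. The tissue properties, perfusion rate, and focal source below are textbook-order assumptions, not the paper's calibrated liver model.

```python
import numpy as np

# Hedged 1-D Pennes bioheat sketch with a focal HIFU-like heat source.
# All parameter values are generic order-of-magnitude assumptions.

rho, cp, k = 1050.0, 3600.0, 0.5      # tissue density, heat capacity, conductivity
wb, cb, Ta = 2.0, 3800.0, 37.0        # perfusion [kg/m3/s], blood cp, arterial T [°C]

n, L = 101, 0.02                      # grid over 2 cm of tissue
dx = L / (n - 1)
x = np.linspace(0.0, L, n)
Q = 5e6 * np.exp(-((x - L / 2) / 1e-3) ** 2)   # focal volumetric heating [W/m3] (assumed)

T = np.full(n, 37.0)
alpha = k / (rho * cp)
dt = 0.25 * dx**2 / alpha             # conservative explicit step

for _ in range(2000):                 # about 2.5 minutes of sonication
    lap = np.zeros(n)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (alpha * lap - wb * cb * (T - Ta) / (rho * cp) + Q / (rho * cp))
    T[0] = T[-1] = 37.0               # body-temperature boundaries
```

Raising the effective perfusion near a simulated vessel wall lowers the local peak temperature, which is the qualitative mechanism behind the 13.5 °C difference the full 3-D model reports.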
Electromagnetic Simulations of Ground-Penetrating Radar Propagation near Lunar Pits and Lava Tubes
NASA Technical Reports Server (NTRS)
Zimmerman, M. I.; Carter, L. M.; Farrell, W. M.; Bleacher, J. E.; Petro, N. E.
2013-01-01
Placing an Orion capsule at the Earth-Moon L2 point (EML2) would potentially enable telerobotic operation of a rover on the lunar surface. The Human Exploration Virtual Institute (HEVI) is proposing that rover operations be carried out near one of the recently discovered lunar pits, which may provide radiation shielding for long duration human stays as well as a cross-disciplinary, science-rich target for nearer-term telerobotic exploration. Ground penetrating radar (GPR) instrumentation included onboard a rover has the potential to reveal many details of underground geologic structures near a pit, as well as characteristics of the pit itself. In the present work we employ the full-wave electromagnetic code MEEP to simulate such GPR reflections from a lunar pit and other subsurface features including lava tubes. These simulations will feed forward to mission concepts requiring knowledge of where to hide from harmful radiation and other environmental hazards such as plasma charging and extreme diurnal temperatures.
Brokaw, Charles J
2014-04-01
Experimental observations on cyclic splitting and bending by a flagellar doublet pair are modeled using forces obtained from a model for dynein mechanochemistry, based on ideas introduced by Andrew Huxley and Terrill Hill and extended previously for modeling flagellar movements. The new feature is elastic attachment of dynein to the A doublet, which allows movement perpendicular to the A doublet and provides adhesive force that can strain attached dyneins. This additional strain influences the kinetics of dynein attachment and detachment. Computations using this dynein model demonstrate that very simple and realistic ideas about dynein mechanochemistry are sufficient for explaining the separation and reattachment seen experimentally with flagellar doublet pairs. Additional simulations were performed after adding a "super-adhesion" elasticity. This elastic component is intended to mimic interdoublet connections, normally present in an intact axoneme, that would prevent visible splitting but allow sufficient separation to cause dynein detachment and cessation of shear force generation. This is the situation envisioned by Lindemann's "geometric clutch" hypothesis for control of dynein function in flagella and cilia. The simulations show abrupt disengagement of the "clutch" at one end of a bend, and abrupt reengagement of the "clutch" at the other end of a bend, ensuring that active sliding is only operating where it will cause bend propagation from base to tip.
NASA Astrophysics Data System (ADS)
Hohmann, Martin; Devrient, Martin; Klämpfl, Florian; Roth, Stephan; Schmidt, Michael
Laser transmission welding is a well-known joining technology for thermoplastics. Because of the demands of lightweight, cost-effective, and green production, injection molded parts nowadays usually have to be welded. These parts are made of semi-crystalline thermoplastics filled to a high degree with glass fibers, which leads to higher absorption and more scattering within the upper joining partner and has a negative influence on the welding process. Here a ray tracing model capable of considering every single glass fiber is introduced; hence, spatially non-uniformly distributed glass fibers can be taken into account. The model is therefore able to calculate in detail the welding laser intensity distribution after transmission through the upper joining partner. Data gained by numerical simulation are compared to data obtained by laser radiation scattering experiments, and the observed deviation is quantified and discussed.
FRANC2D: A two-dimensional crack propagation simulator. Version 2.7: User's guide
NASA Technical Reports Server (NTRS)
Wawrzynek, Paul; Ingraffea, Anthony
1994-01-01
FRANC 2D (FRacture ANalysis Code, 2 Dimensions) is a menu driven, interactive finite element computer code that performs fracture mechanics analyses of 2-D structures. The code has an automatic mesh generator for triangular and quadrilateral elements. FRANC2D calculates the stress intensity factor using linear elastic fracture mechanics and evaluates crack extension using several methods that may be selected by the user. The code features a mesh refinement and adaptive mesh generation capability that is automatically developed according to the predicted crack extension direction and length. The code also has unique features that permit the analysis of layered structures with load transfer through simulated mechanical fasteners or bonded joints. The code was written for UNIX workstations with X-windows graphics and may be executed on the following computers: DEC DecStation 3000 and 5000 series, IBM RS/6000 series, Hewlett-Packard 9000/700 series, SUN Sparc stations, and most Silicon Graphics models.
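FRANC2D evaluates crack extension with user-selectable methods; as a generic illustration of how a stress-intensity-driven fatigue life is integrated (not FRANC2D's own algorithm), a Paris-law calculation can be sketched as follows. The material constants and geometry factor are hypothetical.

```python
import math

def paris_law_life(a0, af, C, m, delta_sigma, Y=1.0, steps=10_000):
    """Cycles to grow a crack from a0 to af under da/dN = C*(dK)^m,
    with stress intensity range dK = Y * delta_sigma * sqrt(pi * a).
    Units must be consistent (here MPa and metres, C matched to MPa*sqrt(m))."""
    da = (af - a0) / steps
    cycles = 0.0
    a = a0
    for _ in range(steps):
        a_mid = a + 0.5 * da                          # midpoint rule
        dK = Y * delta_sigma * math.sqrt(math.pi * a_mid)
        cycles += da / (C * dK ** m)
        a += da
    return cycles

# Hypothetical values: C = 1e-11, m = 3, stress range 100 MPa,
# crack growing from 1 mm to 10 mm.
N = paris_law_life(a0=0.001, af=0.010, C=1e-11, m=3.0, delta_sigma=100.0)
print(f"predicted life: {N:.0f} cycles")
```

For m = 3 the integral also has a closed form, which makes a convenient cross-check of the numerical quadrature.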
Simulation of ultra-high energy photon propagation in the geomagnetic field
NASA Astrophysics Data System (ADS)
Homola, P.; Góra, D.; Heck, D.; Klages, H.; Pękala, J.; Risse, M.; Wilczyńska, B.; Wilczyński, H.
2005-12-01
The identification of primary photons or specifying stringent limits on the photon flux is of major importance for understanding the origin of ultra-high energy (UHE) cosmic rays. UHE photons can initiate particle cascades in the geomagnetic field, which leads to significant changes in the subsequent atmospheric shower development. We present a Monte Carlo program allowing detailed studies of conversion and cascading of UHE photons in the geomagnetic field. The program, named PRESHOWER, can be used either as an independent tool or together with a shower simulation code. With the stand-alone version of the code it is possible to investigate various properties of the particle cascade induced by UHE photons interacting in the Earth's magnetic field before entering the Earth's atmosphere. Combining this program with an extensive air shower simulation code such as CORSIKA offers the possibility of investigating signatures of photon-initiated showers. In particular, features can be studied that help to discern such showers from the ones induced by hadrons. As an illustration, calculations for the conditions of the southern part of the Pierre Auger Observatory are presented. Catalogue identifier: ADWG Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG Program obtainable: CPC Program Library, Queen's University of Belfast, N. Ireland Computer on which the program has been thoroughly tested: Intel-Pentium based PC Operating system: Linux, DEC-Unix Programming language used: C, FORTRAN 77 Memory required to execute with typical data: <100 kB No. of bits in a word: 32 Has the code been vectorized?: no Number of lines in distributed program, including test data, etc.: 2567 Number of bytes in distributed program, including test data, etc.: 25 690 Distribution format: tar.gz Other procedures used in PRESHOWER: IGRF [N.A. Tsyganenko, National Space Science Data Center, NASA GSFC, Greenbelt, MD 20771, USA, http://nssdc.gsfc.nasa.gov/space/model/magnetos/data-based/geopack.html], bessik
Simulations in evolution. II. Relative fitness and the propagation of mutants.
Testa, Bernard; Bojarski, Andrzej J
2009-03-01
In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms which propel biological evolution. Our previous article presented a histogram model [1] consisting of populations of individuals whose number changed under the influence of variation and/or fitness, the total population remaining constant. Individuals are classified into bins, and the content of each bin is calculated generation after generation by an Excel spreadsheet. Here, we apply the histogram model to a stable population with fitness F(1)=1.00 in which one or two fitter mutants emerge. In a first scenario, a single mutant emerged in the population whose fitness was greater than 1.00. The simulations ended when the original population was reduced to a single individual. The histogram model was validated by excellent agreement between its predictions and those of a classical continuous function (Eqn. 1) which predicts the number of generations needed for a favorable mutation to spread throughout a population. But in contrast to Eqn. 1, our histogram model is adaptable to more complex scenarios, as demonstrated here. In the second and third scenarios, the original population was present at time zero together with two mutants which differed from the original population by two higher and distinct fitness values. In the fourth scenario, the large original population was present at time zero together with one fitter mutant. After a number of generations, when the mutant offspring had multiplied, a second mutant was introduced whose fitness was even greater. The histogram model also allows Shannon entropy (SE) to be monitored continuously as the information content of the total population decreases or increases. The results of these simulations illustrate, in a graphically didactic manner, the influence of natural selection, operating through relative fitness, in the emergence and dominance of a fitter mutant.
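The deterministic core of such a simulation, a fitter mutant's fraction growing generation by generation at constant population size, can be sketched as below, together with the Shannon entropy of the two-type population. The replicator-style update and the fitness value 1.10 are illustrative assumptions; the paper's bin-based spreadsheet implementation is not reproduced here.

```python
import math

def generations_to_dominance(pop_size=10_000, mutant_fitness=1.10):
    """Deterministic selection: one mutant of relative fitness F spreads
    through a constant-size population of baseline fitness 1.00.
    Returns generations until the mutant fraction reaches (N-1)/N."""
    p = 1.0 / pop_size                        # initial mutant fraction
    target = (pop_size - 1.0) / pop_size
    gens = 0
    while p < target:
        w_bar = p * mutant_fitness + (1.0 - p)   # mean population fitness
        p = p * mutant_fitness / w_bar           # selection update
        gens += 1
    return gens

def shannon_entropy(p):
    """Information content (bits) of a two-type population."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

# A 10% fitness advantage takes roughly 200 generations to sweep a
# population of 10,000 -- most of that time is spent near the extremes.
print(generations_to_dominance())
```

Under this update the odds ratio of mutant to original multiplies by exactly F each generation, which is why the sweep time scales as log(N)/log(F).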
Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.
2014-04-05
In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
Katrinia M. Groth; Curtis L. Smith; Laura P. Swiler
2014-08-01
In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
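A minimal sketch of the kind of Bayesian update described, treating a SPAR-H-style HEP as the mean of a Beta prior and updating it with simulator failure counts, is given below. The prior strength and the failure/trial numbers are hypothetical, not Halden data, and the paper's full methodology is not reduced to this single conjugate step.

```python
def update_hep(prior_mean, prior_strength, failures, trials):
    """Conjugate Beta-Binomial update of a human error probability (HEP).

    The prior HEP (e.g. a SPAR-H assignment) is encoded as a Beta
    distribution with mean prior_mean and pseudo-sample-size
    prior_strength; observed simulator data then shift the posterior."""
    alpha = prior_mean * prior_strength
    beta = (1.0 - prior_mean) * prior_strength
    post_alpha = alpha + failures
    post_beta = beta + (trials - failures)
    return post_alpha / (post_alpha + post_beta)   # posterior mean HEP

# Hypothetical numbers: a prior HEP of 0.01 worth 20 pseudo-observations,
# then 3 crew failures observed in 50 simulator trials.
posterior = update_hep(prior_mean=0.01, prior_strength=20.0,
                       failures=3, trials=50)
print(f"posterior mean HEP ~ {posterior:.4f}")
```

Even this sparse sample pulls the estimate well above the prior, which is the point the abstract makes about the value of simulator data.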
Wang, S.; Chen, Z. Y.; Wang, X. H. Li, D.; Yang, A. J.; Liu, D. X.; Rong, M. Z.; Chen, H. L.; Kong, M. G.
2015-11-28
Cold atmospheric-pressure plasmas have the potential to be used for endoscope sterilization. In this study, a long quartz tube was used as the simulated endoscope channel, and an array of electrodes was wrapped one by one along the tube. Plasmas were generated in the inner channel of the tube, and their propagation characteristics in He+O{sub 2} feedstock gases were studied as a function of the oxygen concentration. It is found that each of the plasmas originates at the edge of an instantaneous cathode and then propagates bidirectionally. Interestingly, a plasma head with bright spots is formed in the hollow instantaneous cathode and moves towards its center part, and a plasma tail expands through the electrode gap and then forms a swallow tail in the instantaneous anode. The plasmas are in good axisymmetry when [O{sub 2}] ≤ 0.3%, but not for [O{sub 2}] ≥ 1%, and they even behave in a stochastic manner when [O{sub 2}] = 3%. The antibacterial agents are charged species and reactive oxygen species, so their wall fluxes represent the “plasma dosage” for the sterilization. Such fluxes mainly act on the inner wall in the hollow electrode rather than in the electrode gap, and they reach maximum efficiency when the oxygen concentration is around 0.3%. It is suggested that reducing the electrode gap and enlarging the electrode width could achieve a more homogeneous and efficient antibacterial effect, which would benefit sterilization applications.
Kim, Jihoon; Moridis, George J.
2013-10-01
We developed a hydraulic fracturing simulator, namely the T+M simulator, by coupling a flow simulator to a geomechanics code. Modeling of the vertical fracture development involves continuous updating of the boundary conditions and of the data connectivity, based on the finite element method for geomechanics. The T+M simulator can model the initial fracture development during the hydraulic fracturing operations, after which the domain description changes from a single continuum to double or multiple continua in order to rigorously model both flow and geomechanics for fracture-rock matrix systems. The T+M simulator provides two-way coupling between fluid-heat flow and geomechanics, accounting for thermoporomechanics, treats nonlinear permeability and geomechanical moduli explicitly, and dynamically tracks changes in the fracture(s) and in the pore volume. It also fully accounts for leak-off in all directions during hydraulic fracturing. We first validate the T+M simulator, matching numerical solutions with the analytical solutions for poromechanical effects, static fractures, and fracture propagation. Numerical simulations of various planar-fracture-propagation cases then show that shear failure can limit the vertical propagation of tensile fractures because of leak-off into the reservoirs. Slow injection causes more leak-off than fast injection when the same amount of fluid is injected. Changes in the initial total stress and the contribution of shear effective stress to tensile failure can also affect the formation of the fractured areas, and the geomechanical responses remain well-posed.
NASA Astrophysics Data System (ADS)
Liu, Chun; Yin, Hongwei; Zhu, Lili
2012-12-01
TrishearCreator is a platform-independent web program written in Flash that enables fold modeling, numerical simulation of trishear fault-propagation folding, strain analysis, etc. In the program, various types of original strata, such as folds and inclined strata, can easily be constructed by adjusting shape parameters. In the simulation of trishear fault-propagation folding, growth strata and strain ellipses are calculated and displayed simultaneously. The web-based program is easy to use: model parameters are changed by simple mouse actions, which have the advantage of speed and simplicity, and the program gives an instant visual appreciation of the effect of changing the parameters that are used to construct the initial configuration of the model and the fault-propagation folding. These data can be exported to a text file and shared with other geologists to replay the kinematic evolution of structures using the program.
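The kinematics behind such trishear simulations can be illustrated with a symmetric, linear-in-angle velocity field in the style of Zehnder and Allmendinger; the exact field used by TrishearCreator is not stated in the abstract, so this is a generic sketch. The vertical component follows from requiring area conservation inside the triangular zone.

```python
import math

def trishear_velocity(x, y, s, phi):
    """Particle velocity in a symmetric linear trishear zone.

    Fault tip at the origin, fault trace along +x. The hanging wall
    translates at (s, 0), the footwall is fixed, and inside the
    triangular zone |y| < x*tan(phi) the horizontal velocity varies
    linearly with the angular coordinate zeta, while the vertical
    component is chosen so the field is divergence-free."""
    t = math.tan(phi)
    zeta = y / (x * t)            # -1 at footwall edge, +1 at hanging-wall edge
    if zeta >= 1.0:
        return (s, 0.0)           # rigid hanging wall
    if zeta <= -1.0:
        return (0.0, 0.0)         # rigid footwall
    vx = 0.5 * s * (zeta + 1.0)
    vy = 0.25 * s * t * (zeta * zeta - 1.0)   # from incompressibility
    return (vx, vy)

# March one particle through the zone with small pseudo-time steps:
x, y = 2.0, 0.2
for _ in range(100):
    vx, vy = trishear_velocity(x, y, s=1.0, phi=0.35)
    x += 0.01 * vx
    y += 0.01 * vy
```

In a full model the fault tip also propagates, so the zone is re-centered on the updated tip position at every step; growth strata are produced by adding new particle rows at the surface between steps.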
NASA Astrophysics Data System (ADS)
Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk
2015-11-01
The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, spatial information may be used inadequately in gamma evaluation for patient-specific IMRT QA. The influence of the phantom-alignment error on gamma evaluation can be demonstrated experimentally by using the gamma passing rate and the gamma value, but such experimental methods have a limitation regarding intrinsic verification of the influence of the phantom set-up error, because measuring the phantom-alignment error accurately by experiment is impossible. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point for the mathematical modeling of these effects on the gamma. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose difference at the TP were classified to verify the degree of dose reflection at the TP. The 2D and 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP, in order to investigate the effect of the set-up error on the gamma value. According to the results for gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
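Gamma evaluation itself combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a brute-force 2D version on tiny equal-size grids, with illustrative 3%/3 mm criteria, can be sketched as follows. This is the generic gamma formula, not the paper's TP-based error model.

```python
import math

def gamma_index(ref, ev, spacing, dose_tol, dist_tol):
    """Brute-force global gamma evaluation on two equal-size 2D dose
    grids: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over all
    evaluated points. dose_tol is an absolute dose criterion and
    dist_tol a distance in the same units as spacing."""
    ny, nx = len(ref), len(ref[0])
    out = []
    for i in range(ny):
        row = []
        for j in range(nx):
            best = float("inf")
            for k in range(ny):
                for l in range(nx):
                    dd = (ev[k][l] - ref[i][j]) / dose_tol
                    dta = spacing * math.hypot(k - i, l - j) / dist_tol
                    best = min(best, math.hypot(dd, dta))
            row.append(best)
        out.append(row)
    return out

ref = [[1.0, 1.0], [1.0, 1.0]]
ev = [[1.02, 1.0], [1.0, 1.0]]        # a 2% dose error at one point
g = gamma_index(ref, ev, spacing=1.0, dose_tol=0.03, dist_tol=3.0)
passing = sum(v <= 1.0 for row in g for v in row) / 4.0
```

A point passes when its minimized gamma is at most 1; the 3D version extends the same search over a third grid axis, which is why it can find closer agreeing dose points than the 2D search.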
Modeling and Numerical Simulation of Microwave Pulse Propagation in Air Breakdown Environment
NASA Technical Reports Server (NTRS)
Kuo, S. P.; Kim, J.
1991-01-01
Numerical simulation is used to investigate the extent of the electron density at a distant altitude location which can be generated by a high-power ground-transmitted microwave pulse. This is done by varying the power, width, shape, and carrier frequency of the pulse. The results show that once the breakdown threshold field is exceeded in the region below the desired altitude location, electron density starts to build up in that region through cascading breakdown. The generated plasma attenuates the pulse energy (tail erosion) and thus deteriorates the energy transmission to the destined altitude. The electron density saturates at a level limited by the pulse width and the tail erosion process. As the pulse continues to travel upward, though the breakdown threshold field of the background air decreases, the pulse energy (width) is reduced more severely by the tail erosion process. Thus, the electron density grows more quickly at the higher altitude, but saturates at a lower level. Consequently, the maximum electron density produced by a single pulse at 50 km altitude, for instance, is limited to a value below 10(exp 6) cm(exp -3). Three different approaches are examined to determine if the ionization at the destined location can be improved: a repetitive pulse approach, a focused pulse approach, and two intersecting beams. Only the intersecting beam approach is found to be practical for generating the desired density level.
Drakaki, E; Makropoulou, M; Serafetinides, A A
2008-07-01
In dermatology, in vivo spectral fluorescence measurements of human skin can serve as a valuable supplement to standard non-invasive techniques for diagnosing various skin diseases. However, quantitative analysis of the fluorescence spectra is complicated by the fact that skin is a complex, multi-layered, and inhomogeneous organ with varied optical properties and biophysical characteristics. In this work, we recorded, in vitro, the laser-induced fluorescence emission signals of healthy porcine skin; the pig is considered one of the most common animal models for investigations related to medical diagnostics of human cutaneous tissues. Differences were observed in the form and intensity of the fluorescence signal of the porcine skin, which can be attributed to the different concentrations of the native fluorophores and the variable physical and biological conditions of the skin tissue. As the light transport in the tissue target directly influences the absorption and fluorescence emission signals, we performed a Monte Carlo simulation of the light distribution in a five-layer model of human skin tissue with a pulsed ultraviolet laser beam.
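The Monte Carlo light-transport method referred to can be illustrated, far more simply than a five-layer skin model, by a single-slab photon random walk with exponential free paths, absorption handled by weight reduction, and isotropic scattering in place of a Henyey-Greenstein phase function. All coefficients below are illustrative, not skin optical properties from the study.

```python
import math
import random

def mc_photon_slab(n_photons=20_000, mu_a=1.0, mu_s=10.0,
                   thickness=0.1, seed=7):
    """Minimal 1-D Monte Carlo of light transport in one homogeneous
    slab. Tracks depth z and the z direction cosine only; returns the
    fractions of launched energy absorbed, transmitted, and reflected."""
    random.seed(seed)
    mu_t = mu_a + mu_s
    absorbed = transmitted = reflected = 0.0
    for _ in range(n_photons):
        z, cz, w = 0.0, 1.0, 1.0          # depth, direction cosine, weight
        while True:
            z += -math.log(1.0 - random.random()) / mu_t * cz
            if z < 0.0:
                reflected += w
                break
            if z > thickness:
                transmitted += w
                break
            absorbed += w * mu_a / mu_t    # deposit the absorbed fraction
            w *= mu_s / mu_t
            if w < 1e-4:                   # terminate low-weight photons
                absorbed += w
                break
            cz = 2.0 * random.random() - 1.0   # isotropic re-direction
    total = absorbed + transmitted + reflected
    return (absorbed / total, transmitted / total, reflected / total)

a, t, r = mc_photon_slab()
print(f"absorbed {a:.2f}, transmitted {t:.2f}, reflected {r:.2f}")
```

A layered-tissue code extends this walk with per-layer coefficients, Fresnel reflection at interfaces, and an anisotropic phase function; fluorescence is then simulated by re-emitting absorbed energy at longer wavelengths.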
Memory for child sexual abuse information: simulated memory error and individual differences.
McWilliams, Kelly; Goodman, Gail S; Lyons, Kristen E; Newton, Jeremy; Avila-Mora, Elizabeth
2014-01-01
Building on the simulated-amnesia work of Christianson and Bylin (Applied Cognitive Psychology, 13, 495-511, 1999), the present research introduces a new paradigm for the scientific study of memory of childhood sexual abuse information. In Session 1, participants mentally took the part of an abuse victim as they read an account of the sexual assault of a 7-year-old. After reading the narrative, participants were randomly assigned to one of four experimental conditions: They (1) rehearsed the story truthfully (truth group), (2) left out the abuse details of the story (omission group), (3) lied about the abuse details to indicate that no abuse had occurred (commission group), or (4) did not recall the story during Session 1 (no-rehearsal group). One week later, participants returned for Session 2 and were asked to truthfully recall the narrative. The results indicated that, relative to truthful recall, untruthful recall or no rehearsal at Session 1 adversely affected memory performance at Session 2. However, untruthful recall resulted in better memory than did no rehearsal. Moreover, gender, PTSD symptoms, depression, adult attachment, and sexual abuse history significantly predicted memory for the childhood sexual abuse scenario. Implications for theory and application are discussed. PMID:23835600
NASA Technical Reports Server (NTRS)
Davis, Donald D.; Bryant, Janet L.; Tedrow, Lara; Liu, Ying; Selgrade, Katherine A.; Downey, Heather J.
2005-01-01
This report describes results of a study conducted for NASA-Langley Research Center. This study is part of a program of research conducted for NASA-LARC that has focused on identifying the influence of national culture on the performance of flight crews. We first reviewed the literature devoted to models of teamwork and team performance, crew resource management, error management, and cross-cultural psychology. Davis (1999) reported the results of this review and presented a model that depicted how national culture could influence teamwork and performance in flight crews. The second study in this research program examined accident investigations of foreign airlines in the United States conducted by the National Transportation Safety Board (NTSB). The ability of cross-cultural values to explain national differences in flight outcomes was examined. Cultural values were found to covary in a predicted way with national differences, but the absence of necessary data in the NTSB reports and limitations in the research method that was used prevented a clear understanding of the causal impact of cultural values. Moreover, individual differences such as personality traits were not examined in this study. Davis and Kuang (2001) report results of this second study. The research summarized in the current report extends this previous research by directly assessing cultural and individual differences among students from the United States and China who were trained to fly in a flight simulator using desktop computer workstations. The research design used in this study allowed delineation of the impact of national origin, cultural values, personality traits, cognitive style, shared mental model, and task workload on teamwork, error management and flight outcomes. We briefly review the literature that documents the importance of teamwork and error management and its impact on flight crew performance. We next examine teamwork and crew resource management training designed to improve
NASA Astrophysics Data System (ADS)
Costantino, Lorenzo; Heinrich, Philippe; Mzé, Nahoudha; Hauchecorne, Alain
2016-04-01
In this work we perform numerical simulations of convective gravity waves (GWs), using the WRF (Weather Research and Forecasting) model. We first run an idealized, simplified and highly resolved simulation with the model top at 80 km. Below 60 km of altitude, a vertical grid spacing smaller than 1 km is expected to reliably resolve the effects of GW breaking. An eastward linear wind shear interacts with the GW field generated by a single convective thunderstorm. After 70 min of integration time, averaging within a radius of 300 km from the storm centre, results show that wave breaking in the upper stratosphere is largely dominated by saturation effects, driving an average drag force of up to -41 m s^-1 day^-1. In the lower stratosphere, the mean wave drag is positive and equal to 4.4 m s^-1 day^-1. In a second step, realistic WRF simulations are compared with lidar measurements of gravity wave potential energy (Ep) from the NDACC network (Network for the Detection of Atmospheric Composition Change) over OHP (Haute-Provence Observatory, southern France). Using a vertical grid spacing smaller than 1 km below 50 km of altitude, WRF seems to reliably reproduce the effects of GW dynamics and capture qualitative aspects of wave momentum and energy propagation and transfer to the background mean flow. Averaging within a radius of 120 km from the storm centre, the resulting drag force for the study case (2 h storm) is negative in the upper (-1 m s^-1 day^-1) and positive in the lower stratosphere (0.23 m s^-1 day^-1). Vertical structures of simulated potential energy profiles are found to be in good agreement with those measured by lidar. Ep is mostly conserved with altitude in August while, in October, Ep decreases in the upper stratosphere to grow again in the lower mesosphere. On the other hand, the magnitude of the simulated wave energy is clearly underestimated with respect to lidar data, by about 3-4 times. Keywords: Meteorology and atmospheric dynamics (mesoscale meteorology middle
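The lidar-derived quantity being compared, gravity wave potential energy per unit mass, is commonly computed from relative temperature perturbations via Ep = (1/2)(g/N)^2 <(T'/T)^2>; a minimal calculation with illustrative values (not OHP data) is:

```python
def potential_energy_density(t_prime_over_t, g=9.81, n_bv=0.02):
    """Gravity-wave potential energy per unit mass,
    Ep = (1/2) * (g/N)^2 * <(T'/T)^2>, from a series of relative
    temperature perturbations; n_bv is the Brunt-Vaisala frequency
    in s^-1 (0.02 is a typical stratospheric value)."""
    mean_sq = sum(x * x for x in t_prime_over_t) / len(t_prime_over_t)
    return 0.5 * (g / n_bv) ** 2 * mean_sq

# A 1% rms relative temperature perturbation:
ep = potential_energy_density([0.01, -0.01, 0.01, -0.01])
print(f"Ep ~ {ep:.1f} J/kg")   # about 12 J/kg
```

In practice the perturbations T' come from subtracting a fitted background profile from each lidar (or model) temperature profile before averaging, and N is evaluated from that background.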
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Tsay, Chung-Biau
1987-01-01
The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.
NASA Astrophysics Data System (ADS)
Hazza, Muataz Hazza F. Al; Adesta, Erry Y. T.; Riza, Muhammad
2013-12-01
High speed milling has many advantages, such as a higher removal rate and high productivity. However, higher cutting speeds increase the flank wear rate and thus reduce the cutting tool life. Therefore, estimating and predicting the flank wear length at early stages reduces the risk of unacceptable tooling cost. This research presents a neural network model for predicting and simulating the flank wear in the CNC end milling process. A set of sparse experiments on finish end milling of AISI H13 at a hardness of 48 HRC was conducted to measure the flank wear length, and the measured data were used to train the developed neural network model. An artificial neural network (ANN) was applied to predict the flank wear length. The neural network contains twenty hidden layers with a feed forward back
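As a toy stand-in for the ANN described (which is far larger), a one-hidden-layer feed-forward network trained by plain backpropagation on hypothetical normalized (cutting time, flank wear) pairs can be sketched as below; the architecture, learning rate, and data are all illustrative assumptions.

```python
import math
import random

def train_flank_wear_model(data, hidden=8, lr=0.05, epochs=3000, seed=3):
    """One-hidden-layer tanh network trained by stochastic gradient
    descent on squared error, mapping one input to one output."""
    random.seed(seed)
    w1 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
            pred = sum(w2[i] * h[i] for i in range(hidden)) + b2
            err = pred - y                    # d(loss)/d(pred) for 0.5*err^2
            for i in range(hidden):
                grad_z = err * w2[i] * (1.0 - h[i] ** 2)  # backprop via tanh'
                w2[i] -= lr * err * h[i]
                w1[i] -= lr * grad_z * x
                b1[i] -= lr * grad_z
            b2 -= lr * err

    def predict(x):
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
        return sum(w2[i] * h[i] for i in range(hidden)) + b2

    return predict

# Hypothetical normalized (cutting time, flank wear length) pairs:
samples = [(0.0, 0.05), (0.25, 0.12), (0.5, 0.18), (0.75, 0.22), (1.0, 0.30)]
model = train_flank_wear_model(samples)
```

After training, the returned closure interpolates the wear curve; a real study would hold out test cuts and report prediction error against measured flank wear rather than training fit.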