Error propagation in first-principles kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Matera, Sebastian
2017-04-01
First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculations. Using the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis, with which we address the error propagation in these multiscale models. We find that we can still identify the most important atomistic factors for reactivity, although the errors in the simulation results are sizable. The presented approach might also be applied in hierarchical model construction or computational catalyst screening.
Error propagation in a digital avionic processor: A simulation-based study
NASA Technical Reports Server (NTRS)
Lomelino, D.; Iyer, R. K.
1986-01-01
An experimental analysis to study error propagation from the gate to the chip level is described. The target system is the CPU in the Bendix BDX-930, an avionic miniprocessor. Error activity data for the study was collected via a gate-level simulation. A family of distributions to characterize the error propagation, both within the chip and at the pins, was then generated. Based on these distributions, measures of error propagation and severity were defined. The analysis quantifies the dependency of the measured error propagation on the location of the fault and the type of instruction/microinstruction executed.
Simulation of radar rainfall errors and their propagation into rainfall-runoff processes
NASA Astrophysics Data System (ADS)
Aghakouchak, A.; Habib, E.
2008-05-01
Radar rainfall data provide higher spatial and temporal resolution than rain gauge measurements. However, radar data obtained from reflectivity patterns are subject to various errors, such as errors in the Z-R relationship, the vertical profile of reflectivity, and spatial and temporal sampling. Characterization of such uncertainties in radar data and their effects on hydrologic simulations (e.g., streamflow estimation) is a challenging issue. This study aims to analyze radar rainfall error characteristics empirically to gain information on the properties of the random error, its representativeness, and its temporal and spatial dependency. To empirically analyze error characteristics, high-resolution and accurate rain gauge measurements are required. The Goodwin Creek watershed, located in the northern part of Mississippi, was selected for this study due to the availability of a dense rain gauge network. A total of 30 rain gauge measurement stations within the Goodwin Creek watershed and NWS Level II radar reflectivity data from the WSR-88D Memphis radar station, with a temporal resolution of 5 min and a spatial resolution of 1 km2, are used in this study. Comparisons of radar data and rain gauge measurements are used to estimate the overall bias and the statistical characteristics and spatio-temporal dependency of radar rainfall error fields. This information is then used to simulate realizations of radar error patterns with multiple correlated variables using the Monte Carlo method and the Cholesky decomposition. The generated error fields are then imposed on radar rainfall fields to obtain statistical realizations of input rainfall fields. Each simulated realization is then fed as input to a distributed, physically based hydrological model, resulting in an ensemble of predicted runoff hydrographs. The study analyzes the propagation of radar errors into the simulation of different rainfall-runoff processes such as streamflow, soil moisture, infiltration, and overland flooding.
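The Cholesky step described above can be sketched as follows; this is a minimal 1-D illustration, not the study's implementation, and the grid size, correlation length, error magnitude, and rainfall values are invented for the example:

```python
import numpy as np

# Generate a spatially correlated radar-error field by drawing correlated
# Gaussian variables through a Cholesky factor of the error covariance.

rng = np.random.default_rng(0)

# Hypothetical 1-D transect of 50 radar pixels, 1 km apart.
x = np.arange(50.0)

# Assumed exponential spatial correlation with a 10 km decorrelation length.
corr = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
sigma = 0.3                      # assumed relative error standard deviation
cov = sigma**2 * corr

L = np.linalg.cholesky(cov)      # cov = L @ L.T

# One realization of a correlated error field: e = L z, with z ~ N(0, I).
z = rng.standard_normal(50)
error_field = L @ z

# Impose the error on a radar rainfall field (multiplicative error model).
radar_rain = np.full(50, 5.0)    # mm/h, illustrative
perturbed = radar_rain * (1.0 + error_field)
```

Repeating the last four lines yields an ensemble of perturbed rainfall fields that can each be fed to a hydrological model.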
Error Propagation in a System Model
NASA Technical Reports Server (NTRS)
Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)
2015-01-01
Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
NLO error propagation exercise: statistical results
Pack, D.J.; Downing, D.J.
1985-09-01
Error propagation is the extrapolation and cumulation of uncertainty (variance) over total amounts of special nuclear material, for example, uranium or ²³⁵U, present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ²³⁵U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio, from April 1 to July 1, 1983, in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation over uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ²³⁵U inventory differences. Further, error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
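The Taylor-series variance approximation mentioned above reduces, for a product of independently measured quantities, to adding relative variances. A minimal sketch with invented numbers (not values from the exercise), cross-checked by Monte Carlo:

```python
import numpy as np

# First-order Taylor-series variance propagation for a 235U mass computed
# as a product of measured quantities:
#   m = W * c * e   (weight, uranium concentration, 235U enrichment).
# For independent errors, relative variances add:
#   (sigma_m / m)^2 ~= (sigma_W/W)^2 + (sigma_c/c)^2 + (sigma_e/e)^2.

W, c, e = 1000.0, 0.70, 0.03          # kg, fraction, fraction (illustrative)
sW, sc, se = 2.0, 0.007, 0.0003       # assumed 1-sigma measurement errors

m = W * c * e
rel_var = (sW / W)**2 + (sc / c)**2 + (se / e)**2
sigma_m = m * np.sqrt(rel_var)

# Cross-check by Monte Carlo simulation of the same measurement model.
rng = np.random.default_rng(1)
n = 200_000
samples = (rng.normal(W, sW, n)
           * rng.normal(c, sc, n)
           * rng.normal(e, se, n))
```

Because the relative errors are small, the first-order approximation and the simulated standard deviation agree closely here; larger relative errors would require higher-order terms.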
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
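A minimal numerical sketch of this built-in procedure, using NumPy's `polyfit` with `cov=True` on synthetic data (the fit model and noise level are invented for illustration):

```python
import numpy as np

# The SEs of the fitted parameters are the square roots of the diagonal
# elements of the covariance matrix returned by the least-squares fit.

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)   # true slope 2, intercept 1

coeffs, cov = np.polyfit(x, y, 1, cov=True)
se = np.sqrt(np.diag(cov))       # standard errors of slope and intercept

# Propagated error of a derived target quantity y0 = a*x0 + b, using the
# full covariance matrix (including the slope-intercept covariance term):
x0 = 5.0
J = np.array([x0, 1.0])          # gradient of y0 with respect to (a, b)
var_y0 = J @ cov @ J             # variance of the derived quantity
```

Defining the target quantity as a fit parameter itself, as the abstract suggests, lets the LS machinery produce the propagated error directly without forming J by hand.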
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
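The covariance-propagation step can be sketched as follows; this is a toy 2-state model (altitude, vertical velocity) with invented numbers, not the STEP program's full trajectory state:

```python
import numpy as np

# Propagating a trajectory-error covariance matrix in time with a
# state-transition matrix Phi:  Sigma(t) = Phi @ Sigma(0) @ Phi.T.

dt = 10.0                                  # seconds, illustrative
Phi = np.array([[1.0, dt],                 # altitude += velocity * dt
                [0.0, 1.0]])

# Burnout error covariance, e.g. estimated from ~50 observed flight
# error sets via Monte Carlo (values are illustrative).
Sigma0 = np.array([[100.0, 5.0],
                   [5.0,   1.0]])          # m^2, m^2/s, m^2/s^2

Sigma_t = Phi @ Sigma0 @ Phi.T             # error covariance after dt
```

The altitude variance grows from 100 to 300 m² over the step because the velocity uncertainty and the altitude-velocity covariance both feed into it.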
Truncation and Accumulated Errors in Wave Propagation
NASA Astrophysics Data System (ADS)
Chiang, Yi-Ling F.
1988-12-01
The approximation of the truncation and accumulated errors in the numerical solution of a linear initial-value partial differential equation problem can be established by using a semidiscretized scheme. This error approximation is observed as a lower bound to the errors of a finite difference scheme. By introducing a modified von Neumann solution, this error approximation is applicable to problems with variable coefficients. To seek an in-depth understanding of this newly established error approximation, numerical experiments were performed to solve the hyperbolic equation ∂U/∂t = -C₁(x)C₂(t) ∂U/∂x, with both continuous and discontinuous initial conditions. We studied three cases: (1) C₁(x) = C₀ and C₂(t) = 1; (2) C₁(x) = C₀ and C₂(t) = t; and (3) C₁(x) = 1 + (x/a)² and C₂(t) = C₀. Our results show that the errors are problem dependent and are functions of the propagating wave speed. This suggests a need to derive problem-oriented schemes rather than the equation-oriented schemes as is commonly done. Furthermore, in a wave-propagation problem, measurement of the error by the maximum norm is not particularly informative when the wave speed is incorrect.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
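The CCSDS 16-bit CRC uses the CCITT generator polynomial x¹⁶ + x¹² + x⁵ + 1 (0x1021) with an all-ones initial register. A minimal bit-by-bit sketch, suitable for simulation rather than flight software:

```python
def crc16_ccsds(data: bytes) -> int:
    """CRC-16 as used by CCSDS for error detection:
    polynomial 0x1021, initial value 0xFFFF, no bit reflection."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Illustrative use: append the CRC to a frame, then detect a bit error.
frame = b"example telemetry frame payload"
checksum = crc16_ccsds(frame)
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit
```

A table-driven version is faster; the loop above is the clearest statement of the algorithm.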
Observation error propagation on video meteor orbit determination
NASA Astrophysics Data System (ADS)
SonotaCo
2016-04-01
A new radiant direction error computation method for SonotaCo Network meteor observation data was tested. It uses the single-station observation error obtained by reference star measurement and trajectory linearity measurement on each video as its source error value, and propagates this to the radiant and orbit parameter errors via Monte Carlo simulation. The resulting error values on a sample data set showed a reasonable error distribution that makes accuracy-based selection feasible. A sample set of selected orbits obtained by this method revealed a sharper concentration of shower meteor radiants than we have ever seen before. The simultaneously observed meteor data sets published by the SonotaCo Network will be revised to include this error value on each record and will be publicly available, along with the computation program, in the near future.
Propagation error minimization method for multiple structural displacement monitoring system
NASA Astrophysics Data System (ADS)
Jeon, Haemin; Shin, Jae-Uk; Myung, Hyun
2013-04-01
In the previous study, a visually servoed paired structured light system (ViSP), which is composed of two sides facing each other, each with one or two lasers, a 2-DOF manipulator, a camera, and a screen, has been proposed. The lasers project their parallel beams to the screen on the opposite side, and the 6-DOF relative displacement between the two sides is estimated by calculating the positions of the projected laser beams and the rotation angles of the manipulators. To apply the system to massive civil structures such as long-span bridges or high-rise buildings, the whole area should be divided into multiple partitions and a ViSP module placed in each partition in a cascaded manner. In other words, the movement of the entire structure can be monitored by multiplying the estimated displacements from multiple ViSP modules. In the multiplication, however, a major problem arises: the displacement estimation error is propagated through the multiple modules. To solve the problem, a propagation error minimization method (PEMM), which uses a Newton-Raphson formulation inspired by the error back-propagation algorithm, is proposed. In this method, the propagation error at the last module is calculated and then the estimated displacement from the ViSP at each partition is updated in reverse order by using the proposed PEMM, which minimizes the propagation error. To verify the performance of the proposed method, various simulations and experimental tests have been performed. The results show that the propagation error is significantly reduced after applying PEMM.
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., the density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
VPSim: Variance propagation by simulation
Burr, T.; Coulter, C.A.; Prommel, J.
1997-12-01
One of the fundamental concepts in a materials control and accountability system for nuclear safeguards is the materials balance (MB). All transfers into and out of a material balance area are measured, as are the beginning and ending inventories. The resulting MB measures the material loss, MB = T_in + I_B − T_out − I_E. To interpret the MB, the authors must estimate its measurement error standard deviation, σ_MB. When feasible, they use a method usually known as propagation of variance (POV) to estimate σ_MB. The application of POV for estimating the measurement error variance of an MB is straightforward but tedious. By applying POV to individual measurement error standard deviations they can estimate σ_MB (or, more generally, the variance-covariance matrix, Σ, of a sequence of MBs). This report describes a new computer program (VPSim) that uses simulation to estimate the Σ matrix of a sequence of MBs. Given the proper input data, VPSim calculates the MB and σ_MB, or calculates a sequence of n MBs and the associated n-by-n covariance matrix, Σ. The covariance matrix, Σ, contains the variance of each MB in the diagonal entries and the covariance between pairs of MBs in the off-diagonal entries.
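The simulation approach can be sketched in a few lines; the transfer and inventory values and their error standard deviations below are illustrative, not VPSim inputs:

```python
import numpy as np

# Estimate sigma_MB by simulation, in the spirit of VPSim:
#   MB = T_in + I_B - T_out - I_E,
# with each term carrying an independent Gaussian measurement error.

rng = np.random.default_rng(3)
true = {"T_in": 500.0, "I_B": 200.0, "T_out": 480.0, "I_E": 220.0}  # kg
sd = {"T_in": 2.0, "I_B": 1.0, "T_out": 2.0, "I_E": 1.0}            # 1-sigma

n = 100_000
mb = (rng.normal(true["T_in"], sd["T_in"], n)
      + rng.normal(true["I_B"], sd["I_B"], n)
      - rng.normal(true["T_out"], sd["T_out"], n)
      - rng.normal(true["I_E"], sd["I_E"], n))

sigma_mb_sim = mb.std()

# Analytic POV result for comparison: variances of the four terms add.
sigma_mb_pov = np.sqrt(sum(s**2 for s in sd.values()))
```

For this simple uncorrelated case the simulated and POV values coincide; the advantage of simulation is that correlated errors across a sequence of MBs (the off-diagonal entries of Σ) are handled just as easily.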
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
Detecting and preventing error propagation via competitive learning.
Silva, Thiago Christiano; Zhao, Liang
2013-05-01
Semisupervised learning is a machine learning approach which is able to employ both labeled and unlabeled samples in the training process. It is an important mechanism for autonomous systems due to its ability to exploit already acquired information while exploring new knowledge in the learning space. In these cases, the reliability of the labels is a crucial factor, because mislabeled samples may propagate wrong labels to a portion of, or even the entire, data set. This paper addresses the error propagation problem originated by these mislabeled samples by presenting a mechanism embedded in a network-based (graph-based) semisupervised learning method. The procedure is based on a combined random-preferential walk of particles in a network constructed from the input data set. Particles of the same class cooperate among themselves, while particles of different classes compete with each other to propagate class labels to the whole network. Computer simulations conducted on synthetic and real-world data sets reveal the effectiveness of the model.
Learning representations by back-propagating errors
NASA Astrophysics Data System (ADS)
Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J.
1986-10-01
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal `hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
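The weight-adjustment rule is the chain rule applied layer by layer. A minimal sketch for one hidden layer, verified against a finite-difference gradient (network size and data are arbitrary; this is an illustration of the procedure, not the paper's experiments):

```python
import numpy as np

# One-hidden-layer network with sigmoid hidden units and linear outputs;
# back-propagate the output error to obtain the weight gradients.

rng = np.random.default_rng(4)
x = rng.standard_normal(3)          # input vector
t = rng.standard_normal(2)          # desired output vector
W1 = rng.standard_normal((4, 3))    # input -> hidden weights
W2 = rng.standard_normal((2, 4))    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(W1, W2):
    h = sigmoid(W1 @ x)             # hidden activations
    y = W2 @ h                      # linear output units
    return 0.5 * np.sum((y - t)**2)

# Forward pass.
h = sigmoid(W1 @ x)
y = W2 @ h

# Backward pass: propagate the error back through the layers.
delta_y = y - t                               # dE/dy
grad_W2 = np.outer(delta_y, h)                # dE/dW2
delta_h = (W2.T @ delta_y) * h * (1.0 - h)    # dE/dh * sigmoid'
grad_W1 = np.outer(delta_h, x)                # dE/dW1
```

A gradient-descent step `W -= lr * grad_W` with these gradients is exactly the repeated weight adjustment the abstract describes.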
Error analysis using organizational simulation.
Fridsma, D. B.
2000-01-01
Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885
Uncertainty and error in computational simulations
Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.
1997-10-01
The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining, and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, and examples are given for the fields of computational fluid dynamics and heat transfer. The authors argue that a clear distinction should be made between the uncertainty and error that can arise in each of these phases. The present definitions of uncertainty and error are inadequate and, therefore, comprehensive definitions for these terms are proposed. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, the authors discuss a coupled-physics example simulation.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
Characterizing error propagation in quantum circuits: the Isotropic Index
NASA Astrophysics Data System (ADS)
Fonseca de Oliveira, André L.; Buksman, Efrain; Cohn, Ilan; García López de Lacalle, Jesús
2017-02-01
This paper presents a novel index in order to characterize error propagation in quantum circuits by separating the resultant mixed error state in two components: an isotropic component that quantifies the lack of information, and a disalignment component that represents the shift between the current state and the original pure quantum state. The Isotropic Triangle, a graphical representation that fits naturally with the proposed index, is also introduced. Finally, some examples with the analysis of well-known quantum algorithms degradation are given.
Error Analysis and Propagation in Metabolomics Data Analysis.
Moseley, Hunter N B
2013-01-01
Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
Inductively Coupled Plasma Mass Spectrometry Uranium Error Propagation
Hickman, D P; Maclean, S; Shepley, D; Shaw, R K
2001-07-01
The Hazards Control Department at Lawrence Livermore National Laboratory (LLNL) uses Inductively Coupled Plasma Mass Spectrometer (ICP/MS) technology to analyze uranium in urine. The ICP/MS used by the Hazards Control Department is a Perkin-Elmer Elan 6000 ICP/MS. The Department of Energy Laboratory Accreditation Program requires that the total error be assessed for bioassay measurements. A previous evaluation of the errors associated with the ICP/MS measurement of uranium demonstrated a ±9.6% error in the range of 0.01 to 0.02 µg/l. However, the propagation of total error for concentrations above and below this range has heretofore been undetermined. This document is an evaluation of the errors associated with the current LLNL ICP/MS method for an expanded range of uranium concentrations.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for the simulation of wave propagation in three-dimensional random media, assuming narrow-angle scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
NASA Astrophysics Data System (ADS)
Jeon, H.; Shin, J. U.; Myung, H.
2013-04-01
A visually servoed paired structured light system (ViSP) has been found to be useful in estimating 6-DOF relative displacement. The system is composed of two screens facing each other, each with one or two lasers, a 2-DOF manipulator, and a camera. The displacement between the two sides is estimated by observing the positions of the projected laser beams and the rotation angles of the manipulators. To apply the system to massive structures, the whole area should be partitioned and a ViSP module placed in each partition in a cascaded manner. The estimated displacement between adjoining ViSPs is combined with the next partition so that the entire movement of the structure can be estimated. With multiple ViSPs, however, a major problem arises: the error is propagated through the partitions. Therefore, a displacement estimation error back-propagation (DEEP) method, which uses a Newton-Raphson or gradient descent formulation inspired by the error back-propagation algorithm, is proposed. In this method, the estimated displacement from each ViSP is updated using the error back-propagated from a fixed position. To validate the performance of the proposed method, various simulations and experiments have been performed. The results show that the proposed method significantly reduces the propagation error throughout the multiple modules.
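The back-propagation idea can be sketched in one dimension; this is a drastic simplification of the 6-DOF ViSP geometry, with invented displacements, intended only to show how a residual at a fixed reference is pushed back through cascaded modules by gradient descent:

```python
import numpy as np

# 1-D toy model: the structure's total displacement is the sum of the
# per-partition estimates, and the residual at a fixed reference point
# is back-propagated to update every module's estimate.

true_disp = np.array([1.0, -0.5, 0.3, 0.2])             # per-partition truth
est = true_disp + np.array([0.05, -0.03, 0.04, -0.02])  # noisy estimates

total_ref = true_disp.sum()    # total displacement known at the fixed end

# Gradient descent on E = 0.5 * (sum(est) - total_ref)**2.
# dE/dest_i = residual for every module, so each update subtracts an
# equal share of the propagated residual from each estimate.
for _ in range(100):
    residual = est.sum() - total_ref
    est -= 0.1 * residual
```

After convergence the cascaded estimates are consistent with the fixed reference while each stays close to its original measurement.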
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing only the global state and the fuzzy error.
High-order Taylor series expansion methods for error propagation in geographic information systems
NASA Astrophysics Data System (ADS)
Xue, Jie; Leung, Yee; Ma, Jiang-Hong
2015-04-01
The quality of modeling results in GIS operations depends on how well we can track error propagating from inputs to outputs. Monte Carlo simulation, moment design, and Taylor series expansion have been employed to study error propagation over the years. Among them, first-order Taylor series expansion is popular because error propagation can be studied analytically. Because most operations in GIS are nonlinear, first-order Taylor series expansion generally cannot meet practical needs, and higher-order approximation is thus necessary. In this paper, we employ Taylor series expansion methods of different orders to investigate error propagation when the random error vectors are normally and independently or dependently distributed. We also extend these methods to situations involving multi-dimensional output vectors. We employ these methods to examine the length measurement of linear segments, the perimeter of polygons, and the intersection of two line segments, which are basic GIS operations. Simulation experiments indicate that the fifth-order Taylor series expansion method is the most accurate, compared with the first-order and third-order methods. Compared with the third-order expansion, however, it improves accuracy only slightly, at the expense of substantially increasing the number of partial derivatives that need to be calculated. Striking a balance between accuracy and complexity, the third-order Taylor series expansion method appears to be a more appropriate choice for practical applications.
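Why first-order expansion falls short for nonlinear operations can be seen in the simplest case, f(x) = x² with Gaussian input, where the second-order expansion is exact; the numbers below are illustrative, and the example is not one of the paper's GIS operations:

```python
import numpy as np

# For x ~ N(mu, sigma^2) and f(x) = x**2:
#   exact:        Var[f] = 4*mu**2*sigma**2 + 2*sigma**4
#   first-order:  Var[f] ~= f'(mu)**2 * sigma**2 = 4*mu**2*sigma**2
#   second-order: adds 0.5 * f''(mu)**2 * sigma**4 = 2*sigma**4 (exact here).

mu, sigma = 1.0, 0.5

var_first = (2.0 * mu)**2 * sigma**2                 # = 1.0
var_second = var_first + 0.5 * (2.0)**2 * sigma**4   # = 1.125

# Monte Carlo reference.
rng = np.random.default_rng(5)
x = rng.normal(mu, sigma, 500_000)
var_mc = np.var(x**2)
```

The simulation agrees with the second-order value and exposes the 12.5% variance that the first-order method misses at this noise level; for higher-order GIS operations the same gap motivates the third- and fifth-order expansions studied in the paper.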
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to be the baseline error propagation analysis to which Earth-based and lunar-based radiometric data are added, in order to compare the different architecture schemes and quantify the benefits of an integrated approach for handling lunar surface mobility applications near the lunar South Pole or on the lunar farside.
Dose calibration optimization and error propagation in polymer gel dosimetry
NASA Astrophysics Data System (ADS)
Jirasek, A.; Hilts, M.
2014-02-01
This study reports on the relative precision, relative error, and dose differences observed when using a new full-image calibration technique in NIPAM-based x-ray CT polymer gel dosimetry. The effects of calibration parameters (e.g. gradient thresholding, dose bin size, calibration fit function, and spatial remeshing) on subsequent errors in calibrated gel images are reported. It is found that gradient thresholding, dose bin size, and fit function all play a primary role in affecting errors in calibrated images. Spatial remeshing induces minimal reductions or increases in errors in calibrated images. This study also reports on a full error propagation throughout the CT gel image pre-processing and calibration procedure thus giving, for the first time, a realistic view of the errors incurred in calibrated CT polymer gel dosimetry. While the work is based on CT polymer gel dosimetry, the formalism is valid for and easily extended to MRI or optical CT dosimetry protocols. Hence, the procedures developed within the work are generally applicable to calibration of polymer gel dosimeters.
On the error propagation of semi-Lagrange and Fourier methods for advection problems.
Einkemmer, Lukas; Ostermann, Alexander
2015-02-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley-Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme.
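The two growth rates described above (linear versus square root in the number of time steps) have a simple random-walk analogue: a systematic per-step error accumulates linearly, while independent zero-mean per-step errors accumulate like sqrt(n). The sketch below illustrates only this scaling, not the modified Cooley-Tukey algorithm itself; the step size eps is an arbitrary stand-in for round-off:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 10_000
eps = 1e-16  # per-step error magnitude, an arbitrary stand-in for round-off

# Systematic per-step errors accumulate linearly in the number of steps.
biased = np.cumsum(np.full(n_steps, eps))

# Independent zero-mean per-step errors behave like a random walk, so the
# accumulated error typically grows like sqrt(n) instead.
random_walk = np.cumsum(rng.choice([-eps, eps], size=n_steps))

print(abs(biased[-1]) / eps)       # = n_steps (up to rounding)
print(abs(random_walk[-1]) / eps)  # typically O(sqrt(n_steps))
```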
Cross Section Sensitivity and Propagated Errors in HZE Exposures
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.
2005-01-01
It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials result not so much from stopping such particles but by changing their physical character in interaction with shielding material nuclei forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross-sections is essential to correct specification of shield design and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and database. We examine the Boltzmann transport equation which is used to calculate dose equivalent during solar minimum, with units (cSv/yr), associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions and heavy ions. We investigate the sensitivity of dose equivalent calculations due to errors in nuclear fragmentation cross-sections. We do this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross-sections. Numerical differentiation with respect to the cross-sections will be evaluated in a broad class of materials including polyethylene, aluminum and copper. We will identify the most important cross-sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
Relationships between GPS-signal propagation errors and EISCAT observations
NASA Astrophysics Data System (ADS)
Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.
1996-12-01
When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic range -20° ≤
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies w_{r} in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = w_{r}/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
NASA Astrophysics Data System (ADS)
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
Molecular dynamics simulation of propagating cracks
NASA Technical Reports Server (NTRS)
Mullins, M.
1982-01-01
Steady state crack propagation is investigated numerically using a model consisting of 236 free atoms in two (010) planes of bcc alpha iron. The continuum region is modeled using the finite element method with 175 nodes and 288 elements. The model shows clear (010) plane fracture to the edge of the discrete region at moderate loads. Analysis of the results obtained indicates that models of this type can provide realistic simulation of steady state crack propagation.
Simulation of guided wave propagation near numerical Brillouin zones
NASA Astrophysics Data System (ADS)
Kijanka, Piotr; Staszewski, Wieslaw J.; Packo, Pawel
2016-04-01
The attractive properties of guided waves provide unique potential for characterizing incipient damage, particularly in plate-like structures. Among other properties, guided waves can propagate over long distances and can be used to monitor hidden structural features and components. On the other hand, guided propagation brings substantial challenges for data analysis. Signal processing techniques are therefore frequently supported by numerical simulations. When numerical models are employed, however, additional sources of error are introduced, and these can play a significant role in the design and development of a wave-based monitoring strategy. Hence, this paper presents an investigation of numerical models for guided wave generation, propagation and sensing. A numerical dispersion analysis for guided waves in plates, based on the LISA approach, is presented and discussed. Both dispersion and modal amplitude characteristics are analysed. It is shown that wave propagation in a numerical model resembles propagation in a periodic medium. Consequently, Lamb wave propagation close to the numerical Brillouin zone is investigated and characterized.
Simulation of action potential propagation in plants.
Sukhov, Vladimir; Nerush, Vladimir; Orlova, Lyubov; Vodeneev, Vladimir
2011-12-21
Action potential is considered to be one of the primary responses of a plant to the action of various environmental factors. Understanding plant action potential propagation mechanisms requires both experimental investigation and simulation; however, a detailed mathematical model of plant electrical signal transmission has been absent. Here, a mathematical model of action potential propagation in plants has been worked out. The model is a two-dimensional system of excitable cells, each of which is electrically coupled with its four neighbors. Ion diffusion between excitable cell apoplast areas is also taken into account. The action potential generation in a single cell is described on the basis of our previous model. The model simulates active and passive signal transmission well. It has been used to analyze theoretically the influence of cell-to-cell electrical conductivity and H(+)-ATPase activity on signal transmission in plants. An increase in cell-to-cell electrical conductivity is shown to increase the length constant, the action potential propagation velocity and the temperature threshold, while the membrane potential threshold changes only weakly. Growth of H(+)-ATPase activity is found to increase the temperature and membrane potential thresholds and to reduce the length constant and the action potential propagation velocity.
Propagation of atmospheric density errors to satellite orbits
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Warren, H. P.; Segerman, A. M.; Byers, J. M.; Picone, J. M.
2017-01-01
We develop and test approximate analytic expressions relating time-dependent atmospheric density errors to errors in the mean motion and mean anomaly orbital elements. The mean motion and mean anomaly errors are proportional to the first and second integrals, respectively, of the density error. This means that the mean anomaly (and hence the in-track position) error variance grows with time as t3 for a white noise density error process and as t5 for a Brownian motion density error process. Our approximate expressions are accurate over a wide range of orbital configurations, provided the perigee altitude change is less than ∼0.2 atmospheric scale heights. For orbit prediction, density forecasts are driven in large part by forecasts of solar extreme ultraviolet (EUV) irradiance; we show that errors in EUV ten-day forecasts (and consequently in the density forecasts) approximately follow a Brownian motion process.
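The t^3 variance growth for a white-noise density error can be checked with a small ensemble experiment: white noise integrated once stands in for the mean motion error, and integrated twice for the mean anomaly error, whose variance at time 2t should be about 8 times its variance at t. A sketch (arbitrary units, not tied to any specific orbit):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 2000, 1024, 1.0

# White-noise density error: independent N(0, 1) at each time step.
w = rng.normal(size=(n_paths, n_steps))

# Mean motion error ~ first integral of the density error,
# mean anomaly error ~ second integral.
first = np.cumsum(w, axis=1) * dt
second = np.cumsum(first, axis=1) * dt

# t^3 scaling predicts an ensemble-variance ratio of about 2^3 = 8
# between the final time and the half-way time.
t_half, t_full = n_steps // 2 - 1, n_steps - 1
ratio = second[:, t_full].var() / second[:, t_half].var()
print(ratio)
```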
Simulations of Seismic Wave Propagation on Mars
NASA Astrophysics Data System (ADS)
Bozdağ, Ebru; Ruan, Youyi; Metthez, Nathan; Khan, Amir; Leng, Kuangdai; van Driel, Martin; Wieczorek, Mark; Rivoldini, Attilio; Larmat, Carène S.; Giardini, Domenico; Tromp, Jeroen; Lognonné, Philippe; Banerdt, Bruce W.
2017-03-01
We present global and regional synthetic seismograms computed for 1D and 3D Mars models based on the spectral-element method. For global simulations, we implemented a radially-symmetric Mars model with a 110 km thick crust (Sohl and Spohn in J. Geophys. Res., Planets 102(E1):1613-1635, 1997). For this 1D model, we successfully benchmarked the 3D seismic wave propagation solver SPECFEM3D_GLOBE (Komatitsch and Tromp in Geophys. J. Int. 149(2):390-412, 2002a; 150(1):303-318, 2002b) against the 2D axisymmetric wave propagation solver AxiSEM (Nissen-Meyer et al. in Solid Earth 5(1):425-445, 2014) at periods down to 10 s. We also present higher-resolution body-wave simulations with AxiSEM down to 1 s in a model with a more complex 1D crust, revealing wave propagation effects that would have been difficult to interpret based on ray theory. For 3D global simulations based on SPECFEM3D_GLOBE, we superimposed 3D crustal thickness variations capturing the distinct crustal dichotomy between Mars' northern and southern hemispheres, as well as topography, ellipticity, gravity, and rotation. The global simulations clearly indicate that the 3D crust speeds up body waves compared to the reference 1D model, whereas it significantly changes surface waveforms and their dispersive character depending on its thickness. We also perform regional simulations with the solver SES3D (Fichtner et al. Geophys. J. Int. 179:1703-1725, 2009) based on 3D crustal models derived from surface composition, thereby addressing the effects of various distinct crustal features down to 2 s. The regional simulations confirm the strong effects of crustal variations on waveforms. We conclude that the numerical tools are ready for examining more scenarios, including various other seismic models and sources.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated
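Correlated error terms of the kind described above can be generated by factoring the target covariance with a Cholesky decomposition. The sketch below uses the quoted kriging SDs for temperature, humidity, and wind, but the cross-correlation values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Kriging standard deviations quoted in the abstract: temperature (deg C),
# relative humidity (%), wind speed (m/s).
sd = np.array([2.6, 8.7, 0.38])

# Cross-correlations of the interpolation errors (invented for illustration).
corr = np.array([[ 1.0, -0.5,  0.2],
                 [-0.5,  1.0, -0.1],
                 [ 0.2, -0.1,  1.0]])

# Build the covariance matrix and factor it; multiplying independent
# standard-normal draws by the Cholesky factor yields correlated errors
# with the target covariance.
cov = np.outer(sd, sd) * corr
chol = np.linalg.cholesky(cov)
errors = rng.standard_normal((100_000, 3)) @ chol.T

# The sample covariance of the generated errors recovers the target.
print(np.cov(errors, rowvar=False).round(2))
```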
NASA Technical Reports Server (NTRS)
Weir, Kent A.; Wells, Eugene M.
1990-01-01
The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10 to the 8th ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
Propagation of errors from the sensitivity image in list mode reconstruction
Qi, Jinyi; Huesman, Ronald H.
2003-11-15
List mode image reconstruction is attracting renewed attention. It eliminates the storage of empty sinogram bins. However, a single back projection of all LORs is still necessary for the pre-calculation of a sensitivity image. Since the detection sensitivity is dependent on the object attenuation and detector efficiency, it must be computed for each study. Exact computation of the sensitivity image can be a daunting task for modern scanners with huge numbers of LORs. Thus, some fast approximate calculation may be desirable. In this paper, we theoretically analyze the error propagation from the sensitivity image into the reconstructed image. The theoretical analysis is based on the fixed point condition of the list mode reconstruction. The non-negativity constraint is modeled using the Kuhn-Tucker condition. With certain assumptions and the first order Taylor series approximation, we derive a closed form expression for the error in the reconstructed image as a function of the error in the sensitivity image. The result provides insights on what kind of error might be allowable in the sensitivity image. Computer simulations show that the theoretical results are in good agreement with the measured results.
Error propagation and scaling for tropical forest biomass estimates.
Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando
2004-01-01
The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
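When the four error sources are treated as independent, their relative contributions combine in quadrature, which makes it easy to see how a dominant source (here, hypothetically, the allometric model) controls the total. The numbers below are assumptions for illustration, not the paper's estimates:

```python
import math

# Illustrative relative standard errors (as fractions of AGB) for the four
# sources named in the abstract; the values are assumptions, not the
# paper's estimates.
measurement, allometric, sampling, landscape = 0.02, 0.10, 0.05, 0.07

# Independent sources combine in quadrature, so the largest source
# (the allometric model here) dominates the total relative error.
total = math.sqrt(measurement**2 + allometric**2 + sampling**2 + landscape**2)
print(round(total, 3))  # about 0.133, i.e. ~13% relative error
```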
Investigation of Propagation in Foliage Using Simulation Techniques
2011-12-01
The simulation models provide a rough approximation to radiowave propagation in an actual rainforest environment.
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Truscott, Tadd
2016-11-01
Little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure calculation. Rather than measure experimental error, we analytically investigate error propagation by examining the properties of the pressure Poisson equation directly. Our results provide two contributions to the PIV community. First, we quantify the error bound in the pressure field by illustrating the mathematical roots of why and how errors in PIV-based pressure calculations propagate. Second, we design the "worst case error" for a pressure Poisson solver. In other words, we provide a systematic example where relatively small errors in the experimental data lead to maximum error in the corresponding pressure calculations. The 2D calculation of the worst case error surprisingly leads to the classic Kirchhoff plate problem, connecting the PIV-based pressure calculation, a typical fluid problem, to elastic dynamics. The results can be used to minimize experimental error by avoiding worst-case scenarios. More importantly, they can be used to design synthetic velocity errors for future PIV-pressure challenges, providing the hardest test cases for such examinations.
Error analysis of mixed finite element methods for wave propagation in double negative metamaterials
NASA Astrophysics Data System (ADS)
Li, Jichun
2007-12-01
In this paper, we develop both semi-discrete and fully discrete mixed finite element methods for modeling wave propagation in three-dimensional double negative metamaterials. Optimal error estimates are proved for Nedelec spaces under the assumption of smooth solutions. To the best of our knowledge, this is the first error analysis obtained for Maxwell's equations when metamaterials are involved.
Hoogeveen, R. C.; Martens, E. P.; van der Stelt, P. F.; Berkhout, W. E. R.
2015-01-01
Objective. To investigate if software simulation is practical for quantifying random error (RE) in phantom dosimetry. Materials and Methods. We applied software error simulation to an existing dosimetry study. The specifications and the measurement values of this study were brought into the software (R version 3.0.2) together with the algorithm of the calculation of the effective dose (E). Four sources of RE were specified: (1) the calibration factor; (2) the background radiation correction; (3) the read-out process of the dosimeters; and (4) the fluctuation of the X-ray generator. Results. The amount of RE introduced by these sources was calculated on the basis of the experimental values and the mathematical rules of error propagation. The software repeated the calculations of E multiple times (n = 10,000) while attributing the applicable RE to the experimental values. A distribution of E emerged as a confidence interval around an expected value. Conclusions. Credible confidence intervals around E in phantom dose studies can be calculated by using software modelling of the experiment. With credible confidence intervals, the statistical significance of differences between protocols can be substantiated or rejected. This modelling software can also be used for a power analysis when planning phantom dose experiments. PMID:26881200
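The repeated-calculation scheme described in the abstract can be sketched as follows: each of the four sources of random error is drawn afresh on every iteration and attributed to the measured values before the effective dose is recomputed. All readings, weights, and error magnitudes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dosimeter read-outs (arbitrary dose units) and weights;
# the four relative-error magnitudes stand in for the sources named in
# the abstract and are not the study's values.
readings = np.array([0.82, 1.10, 0.95])
weights = np.array([0.3, 0.5, 0.2])
re_calibration, re_background, re_readout, re_generator = 0.02, 0.01, 0.03, 0.02

n = 10_000
E = np.empty(n)
for i in range(n):
    # Attribute each source of random error to the measured values.
    perturbed = readings * (1 + rng.normal(0, re_readout, readings.size))
    perturbed *= 1 + rng.normal(0, re_calibration)   # shared calibration factor
    perturbed *= 1 + rng.normal(0, re_generator)     # X-ray generator fluctuation
    perturbed -= readings * rng.normal(0, re_background)
    E[i] = np.dot(weights, perturbed)

# The spread of the repeated calculations gives a confidence interval
# around the expected effective dose.
lo, hi = np.percentile(E, [2.5, 97.5])
print(f"E = {E.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```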
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation: SPC; ENSCI/Droplet Measurement Technologies: DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with the addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa.
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
Simulation of MAD Cow Disease Propagation
NASA Astrophysics Data System (ADS)
Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.
A computer simulation of the dynamics of BSE disease is presented. Both vertical (to offspring) and horizontal (to neighbors) mechanisms of disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000, with the initial population randomly distributed on the lattice. The disease may be introduced either with the initial population or by spontaneous development of BSE in an animal, at a small frequency. The main results show a critical probability of BSE transmission above which the disease persists in the population. This value is sensitive to possible spatial clustering of the population, and it also depends on the mechanism responsible for the disease onset, evolution, and propagation. A threshold birth rate below which the population becomes extinct is also seen. Above this threshold the population is disease-free at equilibrium until a second, higher birth rate value is reached, at which the disease becomes present in the population. For the typical model parameters used in the simulation, which may correspond to mad cow disease, we are close to the BSE-free case.
Zandbergen, P A; Hart, T C; Lenzer, K E; Camponovo, M E
2012-04-01
The quality of geocoding has received substantial attention in recent years. A synthesis of published studies shows that the positional errors of street geocoding are somewhat unique relative to those of other types of spatial data: (1) the magnitude of error varies strongly across urban-rural gradients; (2) the direction of error is not uniform, but strongly associated with the properties of local street segments; (3) the distribution of errors does not follow a normal distribution, but is highly skewed and characterized by a substantial number of very large error values; and (4) the magnitude of error is spatially autocorrelated and is related to properties of the reference data. This makes it difficult to employ analytic approaches or Monte Carlo simulations for error propagation modeling because these rely on generalized statistical characteristics. The current paper describes an alternative empirical approach to error propagation modeling for geocoded data and illustrates its implementation using three different case studies of geocoded individual-level datasets. The first case study consists of determining the land cover categories associated with geocoded addresses using a point-in-raster overlay. The second case study consists of a local hotspot characterization using kernel density analysis of geocoded addresses. The third case study consists of a spatial data aggregation using enumeration areas of varying spatial resolution. For each case study a high quality reference scenario based on address points forms the basis for the analysis, which is then compared to the result of various street geocoding techniques. Results show that the unique nature of the positional error of street geocoding introduces substantial noise in the result of spatial analysis, including a substantial amount of bias for some analysis scenarios. This confirms findings from earlier studies, but expands these to a wider range of analytical techniques.
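The empirical approach described above can be sketched as follows: resample positional errors from an observed, skewed error distribution, perturb the geocoded points with a random bearing, and repeat the downstream analysis for each realization. All numbers, the error distribution, and the downstream statistic here are hypothetical illustrations, not the paper's actual data or analyses.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical empirical positional errors (metres), e.g. from comparing
# street geocoding against high-quality address points: skewed, heavy-tailed
observed_errors = rng.lognormal(mean=3.0, sigma=1.0, size=1000)

def empirical_error_propagation(points, n_realizations=200):
    """Perturb each geocoded point with an error magnitude resampled from the
    observed (non-Gaussian) distribution and a random direction, then repeat
    a downstream analysis for each realization."""
    results = []
    for _ in range(n_realizations):
        r = rng.choice(observed_errors, size=len(points))    # resampled magnitudes
        theta = rng.uniform(0.0, 2.0 * np.pi, size=len(points))  # random bearings
        perturbed = points + np.column_stack([r * np.cos(theta),
                                              r * np.sin(theta)])
        results.append(perturbed[:, 0].mean())  # stand-in downstream statistic
    return np.array(results)

pts = rng.uniform(0.0, 1000.0, size=(300, 2))  # hypothetical geocoded points
stat = empirical_error_propagation(pts)
```

The spread of `stat` across realizations is the propagated uncertainty of the downstream result; in the paper the downstream step is a real spatial analysis (overlay, kernel density, aggregation) rather than this toy statistic.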
Prediction and simulation errors in parameter estimation for nonlinear systems
NASA Astrophysics Data System (ADS)
Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.
2010-11-01
This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples, which include polynomial, rational and neural network models, are discussed. Our results, obtained using different model classes, show that, in general, the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of errors-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
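The distinction between the two cost functions can be sketched on a toy polynomial model (the model, parameters, and noise level below are invented for illustration): the prediction-error cost restarts each one-step prediction from measured data, while the simulation-error cost iterates the model on its own output in free run.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(y_prev, theta):
    """One-step map of a simple nonlinear (polynomial) autoregressive model."""
    a, b = theta
    return a * y_prev + b * y_prev**2

# Synthetic data from the "true" system, with small output noise
theta_true = (0.7, -0.1)
y = np.zeros(200)
y[0] = 0.5
for k in range(1, len(y)):
    y[k] = model(y[k - 1], theta_true)
y_meas = y + 0.01 * rng.standard_normal(len(y))

def prediction_error(theta):
    """One-step-ahead residuals: each prediction starts from measured data."""
    e = y_meas[1:] - model(y_meas[:-1], theta)
    return np.sum(e**2)

def simulation_error(theta):
    """Free-run residuals: the model is iterated on its own output."""
    y_sim = np.zeros_like(y_meas)
    y_sim[0] = y_meas[0]
    for k in range(1, len(y_sim)):
        y_sim[k] = model(y_sim[k - 1], theta)
    return np.sum((y_meas - y_sim)**2)

e_pred = prediction_error(theta_true)
e_sim = simulation_error(theta_true)
```

Either cost could be handed to an evolutionary optimiser as in the article; the free-run cost is non-convex in general, which is why derivative-free optimisers are attractive there.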
Numerical simulation of wave propagation in cancellous bone.
Padilla, F; Bossy, E; Haiat, G; Jenson, F; Laugier, P
2006-12-22
A numerical simulation of wave propagation is performed through 31 3D volumes of trabecular bone. These volumes were reconstructed from high-resolution synchrotron microtomography experiments and are used as the input geometry in simulation software developed in our laboratory. The simulation algorithm accounts for propagation in both the saturating fluid and bone, but absorption is not taken into account. We show that 3D simulation predicts phenomena observed experimentally in trabecular bones: linear frequency dependence of attenuation, increase of attenuation and speed of sound with bone volume fraction, negative phase velocity dispersion in most of the specimens, and propagation of fast and slow waves depending on the orientation of the trabecular network relative to the direction of propagation of the ultrasound. Moreover, the predicted attenuation is in very close agreement with the experimental attenuation measured on the same specimens. Coupling numerical simulation with real bone architecture therefore provides a powerful tool to investigate the physics of ultrasound propagation in trabecular structures.
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.
Simulation of sound propagation over porous barriers of arbitrary shapes.
Ke, Guoyi; Zheng, Z C
2015-01-01
A time-domain solver using an immersed boundary method is investigated for simulating sound propagation over porous and rigid barriers of arbitrary shapes. In this study, acoustic propagation in the air from an impulse source over the ground is considered as a model problem. The linearized Euler equations are solved for sound propagation in the air and the Zwikker-Kosten equations for propagation in barriers as well as in the ground. In comparison to the analytical solutions, the numerical scheme is validated for the cases of a single rigid barrier with different shapes and for two rigid triangular barriers. Sound propagations around barriers with different porous materials are then simulated and discussed. The results show that the simulation is able to capture the sound propagation behaviors accurately around both rigid and porous barriers.
Error propagation and metamodeling for a fidelity tradeoff capability in complex systems design
NASA Astrophysics Data System (ADS)
McDonald, Robert A.
Complex man-made systems are ubiquitous in modern technological society. The national air transportation infrastructure and the aircraft that operate within it, the highways stretching coast-to-coast and the vehicles that travel on them, and global communications networks and the computers that make them possible are all complex systems. It is impossible to fully validate a systems analysis or a design process. Systems are too large, complex, and expensive to build test and validation articles. Furthermore, the operating conditions throughout the life cycle of a system are impossible to predict and control for a validation experiment. Error is introduced at every point in a complex systems design process. Every error source propagates through the complex system in the same way information propagates: feedforward, feedback, and coupling are all present with error. As with error propagation through a single analysis, error sources grow and decay when propagated through a complex system. These behaviors are made more complex by the interactions of a complete system. This complication, and the loss of intuition that accompanies it, makes proper error propagation calculations even more important to aid the decision maker. Error allocation and fidelity trade decisions answer questions like: Is the fidelity of a complex systems analysis adequate, or is an improvement needed? If an improvement is needed, how is that improvement best achieved? Where should limited resources be invested for the improvement of fidelity? How does knowledge of the imperfection of a model impact design decisions based on the model and the certainty of the performance of a particular design? In this research, a fidelity trade environment was conceived, formulated, developed, and demonstrated. This development relied on the advancement of enabling techniques including error propagation, metamodeling, and information management. A notional transport aircraft is modeled in the fidelity trade environment.
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
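The Taylor series model above can be sketched generically: the variance of a derived quantity is the sum of squared sensitivity coefficients (partial derivatives) times the input variances. The pressure-coefficient example and every numerical value below are hypothetical, not taken from the report.

```python
import math

def propagate(f, x, sigma, h=1e-6):
    """First-order (Taylor series) error propagation:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2,
    with the sensitivity coefficients df/dx_i estimated by central differences."""
    var = 0.0
    for i in range(len(x)):
        step = h * max(abs(x[i]), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        dfdx = (f(*xp) - f(*xm)) / (2.0 * step)  # sensitivity coefficient
        var += (dfdx * sigma[i]) ** 2
    return math.sqrt(var)

# Hypothetical example: pressure coefficient Cp = (p - p_inf) / q_inf with
# q_inf = (gamma/2) * p_inf * M_inf^2 = 0.7 * p_inf * M_inf^2 for gamma = 1.4
def cp(p, p_inf, m_inf):
    return (p - p_inf) / (0.7 * p_inf * m_inf**2)

# Assumed measurements and 1-sigma uncertainties (Pa, Pa, dimensionless)
sigma_cp = propagate(cp, [90e3, 60e3, 2.0], [200.0, 150.0, 0.01])
```

In the report the sensitivity coefficients are derived analytically rather than by finite differences; the numerical version is just a compact way to check such derivations.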
Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements
NASA Astrophysics Data System (ADS)
Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.
2012-12-01
This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, whose theory and data analysis were documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas
2016-12-01
There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. The
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
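The error-propagation mechanism in the PIV pressure study can be summarized schematically (a sketch of the standard elliptic argument; the paper's precise bounds depend on the boundary conditions, domain dimensions, and flow type):

```latex
% Pressure Poisson equation for incompressible flow:
\[
  \nabla^2 p = -\rho\,\nabla\cdot\bigl[(\mathbf{u}\cdot\nabla)\mathbf{u}\bigr]
             =: f(\mathbf{u}).
\]
% With a PIV velocity error $\epsilon_u$, the computed pressure solves
\[
  \nabla^2 \tilde{p} = f(\mathbf{u}+\epsilon_u),
\]
% so the pressure error $e_p = \tilde{p} - p$ itself satisfies a Poisson
% equation driven purely by the data error,
\[
  \nabla^2 e_p = f(\mathbf{u}+\epsilon_u) - f(\mathbf{u}),
\]
% and elliptic estimates give a bound of the form
\[
  \lVert e_p \rVert \;\le\; C(\Omega,\text{BCs})\,
    \lVert f(\mathbf{u}+\epsilon_u) - f(\mathbf{u}) \rVert,
\]
% where the constant $C$ depends on the domain geometry and the type of
% boundary conditions, consistent with the dependencies reported above.
```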
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database of 120 cases computed for a NACA 0012 airfoil. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
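The simulation described above can be sketched compactly (all durations and counts below are illustrative values, not the study's parameters). Each method scores an interval from the behavior's occupancy within it: momentary time sampling looks only at the interval's end, partial-interval recording scores any occurrence, and whole-interval recording requires occupancy throughout.

```python
import numpy as np

def simulate(n_intervals=60, interval=10.0, event_dur=3.0, n_events=12, seed=1):
    """Score one observation session with three interval sampling methods and
    return their estimates together with the true proportion of time occupied."""
    rng = np.random.default_rng(seed)
    period = n_intervals * interval
    onsets = rng.uniform(0.0, period - event_dur, n_events)

    # Occupancy on a fine time grid gives the "true" cumulative duration
    t = np.linspace(0.0, period, 60001)
    occupied = np.zeros_like(t, dtype=bool)
    for s in onsets:
        occupied |= (t >= s) & (t < s + event_dur)
    true_prop = occupied.mean()

    momentary = partial = whole = 0
    for k in range(n_intervals):
        in_k = occupied[(t >= k * interval) & (t < (k + 1) * interval)]
        momentary += in_k[-1]   # behavior present at (about) the interval's end?
        partial += in_k.any()   # behavior present at any point in the interval?
        whole += in_k.all()     # behavior present throughout the interval?
    n = n_intervals
    return momentary / n, partial / n, whole / n, true_prop

mts, pir, wir, true_prop = simulate()
```

By construction, partial-interval recording overestimates and whole-interval recording underestimates the true proportion, which is one of the systematic biases the study quantifies across parameter combinations.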
Electromagnetic simulations for salinity index error estimation
NASA Astrophysics Data System (ADS)
Wilczek, Andrzej; Szypłowska, Agnieszka; Kafarski, Marcin; Nakonieczna, Anna; Skierucha, Wojciech
2017-01-01
Soil salinity index (SI) is a measure of salt concentration in soil water. The salinity index is calculated as a partial derivative of the soil bulk electrical conductivity (EC) with respect to the bulk dielectric permittivity (DP). The paper focuses on the impact of the different sensitivity zones of the measured EC and DP on the accuracy of salinity index determination. For this purpose, a set of finite difference time domain (FDTD) simulations was prepared. The simulations were carried out on the model of a reflectometric probe consisting of three parallel rods inserted into a modelled material of simulated DP and EC. Combinations of stratified distributions of DP and EC were tested. An experimental verification of the simulation results on selected cases was performed. The results showed that electromagnetic simulations can provide useful data to improve the accuracy of soil SI determination.
NASA Technical Reports Server (NTRS)
James, R.; Brownlow, J. D.
1985-01-01
A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.
On the propagation of uncertainties in radiation belt simulations
NASA Astrophysics Data System (ADS)
Camporeale, Enrico; Shprits, Yuri; Chandorkar, Mandar; Drozdov, Alexander; Wing, Simon
2016-11-01
We present the first study of the uncertainties associated with radiation belt simulations, performed in the standard quasi-linear diffusion framework. In particular, we estimate how uncertainties of some input parameters propagate through the nonlinear simulation, producing a distribution of outputs that can be quite broad. Here we restrict our focus to two-dimensional simulations (in energy and pitch angle space) of parallel-propagating chorus waves only, and we study as stochastic input parameters the geomagnetic index Kp (which characterizes the time dependency of an idealized storm), the latitudinal extent of waves, and the average electron density. We employ a collocation method, thus performing an ensemble of simulations. The results of this work point to the necessity of shifting to a probabilistic interpretation of radiation belt simulation results and suggest that an accurate specification of a time-dependent density model is crucial for modeling the radiation environment.
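The idea of pushing input distributions through a nonlinear model to obtain an output distribution can be sketched with plain Monte Carlo sampling (the paper uses a more efficient collocation scheme; the surrogate model, parameter values, and distributions below are hypothetical, not the quasi-linear diffusion code):

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_flux(kp, density):
    """Stand-in for an expensive radiation-belt run: a scalar output that
    responds nonlinearly to the geomagnetic index Kp and the average
    electron density (a hypothetical surrogate, not the real simulation)."""
    rate = 0.3 * 10.0 ** (0.2 * (kp - 5.0)) / np.sqrt(density / 10.0)
    return 1.0 - np.exp(-rate)

# Uncertain inputs: Kp with unit spread, density log-normally distributed
n = 2000
kp = rng.normal(5.0, 1.0, n)
density = rng.lognormal(np.log(10.0), 0.5, n)

# Ensemble propagation: run the model once per input sample and summarize
# the resulting output distribution instead of a single deterministic value
outputs = toy_flux(kp, density)
summary = (outputs.mean(), outputs.std())
```

The spread of `outputs` is the probabilistic result the abstract argues for; a collocation method reaches a comparable summary with far fewer model evaluations by placing samples at quadrature-like nodes.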
Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C
2014-03-01
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors: discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the event-related brain potential (ERP) technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors.
Laser propagation in simulations of low fill density hohlraums
NASA Astrophysics Data System (ADS)
Meezan, Nathan; Berzak Hopkins, L. F.; Izumi, N.; Divol, L.; Hinkel, D. E.; Ralph, J. E.; Moody, J. D.; Callahan, D. A.
2016-10-01
We present an analysis of laser propagation in simulations of low fill density hohlraums on the National Ignition Facility (NIF). Simulations using the radiation hydrodynamic code
Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.
Pinton, Gianmarco F
2017-03-01
Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking. This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or
The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre- and Post-Test Designs
ERIC Educational Resources Information Center
Gorard, Stephen
2013-01-01
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test, and the post-test-only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
Propagation of radar rainfall uncertainty in urban flood simulations
NASA Astrophysics Data System (ADS)
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by adding a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure on purely stochastic fields. A
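The ensemble-generation step described above (correlated perturbation fields added to the unperturbed radar field, as in the Monte Carlo/Cholesky approach also summarized earlier in this collection) can be sketched as follows. The grid size, exponential correlation model, and correlation length are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch: generate one spatially correlated perturbation field by imposing
# an assumed exponential correlation structure on white noise via Cholesky
# decomposition. Grid size and correlation length are illustrative.
rng = np.random.default_rng(0)
n = 10                                   # 10 x 10 grid of radar pixels (assumed)
xy = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
corr_range = 5.0                         # correlation length in pixels (assumed)
cov = np.exp(-dist / corr_range)         # exponential spatial correlation model
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n * n))  # jitter for stability
white = rng.standard_normal(n * n)       # purely stochastic (white) field
perturbation = (L @ white).reshape(n, n)
# each ensemble member = unperturbed radar field + one such realization
```

Because `L @ L.T` reproduces the target covariance, the generated field inherits the imposed correlation structure; temporal correlation can be added analogously, e.g. with an AR(1) recursion between successive fields.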
Characteristics and dependencies of error in satellite-based flood event simulations
NASA Astrophysics Data System (ADS)
Mei, Yiwen; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Zoccatelli, Davide; Borga, Marco
2016-04-01
The error in satellite-precipitation-driven flood simulations over complex terrain is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied to the matched event pairs and basin-scale event properties (i.e. rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both the systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and for the rain flood events with high runoff coefficients. This event-based analysis of satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.
Simulation of error in optical radar range measurements.
Der, S; Redman, B; Chellappa, R
1997-09-20
We describe a computer simulation of atmospheric and target effects on the accuracy of range measurements using pulsed laser radars with p-i-n or avalanche photodiodes for direct detection. The computer simulation produces simulated images as a function of a wide variety of atmospheric, target, and sensor parameters for laser radars with range accuracies smaller than the pulse width. The simulation allows arbitrary target geometries and simulates speckle, turbulence, and near-field and far-field effects. We compare simulation results with actual range error data collected in field tests.
Mitigating Particle Integration Error in Relativistic Laser-Plasma Simulations
NASA Astrophysics Data System (ADS)
Higuera, Adam; Weichmann, Kathleen; Cowan, Benjamin; Cary, John
2016-10-01
In particle-in-cell simulations of laser wakefield accelerators with a0 greater than unity, errors in particle trajectories produce incorrect beam charges and energies, predicting performance not realized in experiments such as the Texas Petawatt Laser. In order to avoid these errors, the simulation time step must resolve a time scale smaller than the laser period by a factor of a0. If the Yee scheme advances the fields with this time step, the laser wavelength must be over-resolved by a factor of a0 to avoid dispersion errors. Here we present, and demonstrate with Vorpal simulations, a new electromagnetic algorithm, building on previous work, that corrects Yee dispersion for arbitrary sub-CFL time steps, reducing simulation times by a factor of a0.
Generalized phase-shifting algorithms: error analysis and minimization of noise propagation.
Ayubi, Gastón A; Perciante, César D; Di Martino, J Matías; Flores, Jorge L; Ferrari, José A
2016-02-20
Phase shifting is a technique for phase retrieval that requires a series of intensity measurements with certain phase steps. The purpose of the present work is threefold: first we present a new method for generating general phase-shifting algorithms with arbitrarily spaced phase steps. Second, we study the conditions for which the phase-retrieval error due to phase-shift miscalibration can be minimized. Third, we study the phase extraction from interferograms with additive random noise, and deduce the conditions to be satisfied for minimizing the phase-retrieval error. Algorithms with unevenly spaced phase steps are discussed under linear phase-shift errors and additive Gaussian noise, and simulations are presented.
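For arbitrarily spaced phase steps, one standard formulation (a minimal sketch under stated assumptions, not necessarily the authors' generalized algorithm) recovers the phase by linear least squares from the model I_k = A + B·cos(φ + δ_k). The step values and signal parameters below are illustrative:

```python
import numpy as np

# Least-squares phase retrieval with unevenly spaced phase steps.
# Expanding I_k = A + B*cos(phi)*cos(delta_k) - B*sin(phi)*sin(delta_k)
# gives a linear system in (c0, c1, c2) = (A, B*cos(phi), -B*sin(phi)).
deltas = np.array([0.0, 0.9, 2.1, 3.5, 5.0])   # unevenly spaced steps, rad (assumed)
A, B, phi = 2.0, 1.0, 0.7                      # illustrative ground truth
I = A + B * np.cos(phi + deltas)               # noiseless intensity samples

M = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
c0, c1, c2 = np.linalg.lstsq(M, I, rcond=None)[0]
phi_hat = np.arctan2(-c2, c1)                  # recovered phase
print(round(phi_hat, 6))                       # recovers phi = 0.7
```

With additive noise the same least-squares fit minimizes the mean-square intensity residual, which is the setting in which the paper studies noise propagation into the retrieved phase.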
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid-scale transport of trace gases. Analysis of the ensemble spread and the scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation-minus-analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project, which quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios, to assess the major factors governing uncertainty in global and regional trace gas distributions.
Disentangling timing and amplitude errors in streamflow simulations
NASA Astrophysics Data System (ADS)
Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin
2016-09-01
This article introduces an improvement of the Series Distance (SD) approach for the discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow from periods of rise and recession in hydrological events. Within these periods, it determines the distance between two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs; a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart; and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and for rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to
Temperature measurement error simulation of the pure rotational Raman lidar
NASA Astrophysics Data System (ADS)
Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang
2015-11-01
Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes. Lidar has some advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we simulated the temperature measurement errors of a double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of the atmospheric transmittance and the range in the lidar equation, we simulated the temperature measurement errors influenced by the beam-splitting system parameters, such as the center wavelength, the receiving bandwidth and the atmospheric temperature. We analyzed three types of temperature measurement errors in theory and propose several design methods for the beam-splitting system to reduce them. Second, we simulated the temperature measurement error profiles with the full lidar equation. As the lidar power-aperture product is fixed, the main target of our lidar system design is to reduce the statistical and leakage errors.
Propagation of radiation in fluctuating multiscale plasmas. II. Kinetic simulations
Pal Singh, Kunwar; Robinson, P. A.; Cairns, Iver H.; Tyshetskiy, Yu.
2012-11-15
A numerical algorithm is developed and tested that implements the kinetic treatment of electromagnetic radiation propagating through plasmas whose properties have small scale fluctuations, which was developed in a companion paper. This method incorporates the effects of refraction, damping, mode structure, and other aspects of large-scale propagation of electromagnetic waves on the distribution function of quanta in position and wave vector, with small-scale effects of nonuniformities, including scattering and mode conversion approximated as causing drift and diffusion in wave vector. Numerical solution of the kinetic equation yields the distribution function of radiation quanta in space, time, and wave vector. Simulations verify the convergence, accuracy, and speed of the methods used to treat each term in the equation. The simulations also illustrate the main physical effects and place the results in a form that can be used in future applications.
Design Optimization and Simulation of Wave Propagation in Metamaterials
2014-09-24
AFRL-OSR-VA-TR-2014-0232. Freund, Robert; Peraire, Jaime; Nguyen, Cuong; Massachusetts Institute of Technology; grant FA9550-11-1-0141. …cannot be achieved with conventional materials. For instance, metamaterials can be designed to bend electromagnetic waves around an object so that
Visual field test simulation and error in threshold estimation.
Spenceley, S E; Henson, D B
1996-01-01
AIM: To establish, via computer simulation, the effects of patient response variability and staircase starting level upon the accuracy and repeatability of static full threshold visual field tests. METHOD: Patient response variability, defined by the standard deviation of the frequency-of-seeing versus stimulus intensity curve, is varied from 0.5 to 20 dB (in steps of 0.5 dB) with staircase starting levels ranging from 30 dB below to 30 dB above the patient's threshold (in steps of 10 dB). Fifty-two threshold estimates are derived for each condition and the error of each estimate calculated (difference between the true threshold and the threshold estimate derived from the staircase procedure). The mean and standard deviation of the errors are then determined for each condition. The results from a simulated quadrantic defect (response variability set to typical values for a patient with glaucoma) are presented using two different algorithms. The first corresponds with that normally used when performing a full threshold examination, while the second uses results from an earlier simulated full threshold examination for the staircase starting values. RESULTS: The mean error in threshold estimates was found to be biased towards the staircase starting level. The extent of the bias was dependent upon patient response variability. The standard deviation of the error increased both with response variability and staircase starting level. With the routinely used full threshold strategy the quadrantic defect was found to have a large mean error in estimated threshold values and an increase in the standard deviation of the error along the edge of the defect. When results from an earlier full threshold test are used as staircase starting values, this error and increased standard deviation largely disappeared. CONCLUSION: The staircase procedure widely used in threshold perimetry increased the error and the variability of threshold estimates along the edges of defects. Using
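The simulation described above can be sketched as follows: a cumulative-Gaussian frequency-of-seeing curve whose standard deviation is the response variability, probed by a simplified 4-2 dB staircase. This is a hedged sketch, not the study's exact perimetric algorithm, and all numerical values are assumed for illustration:

```python
import math
import random

def p_seen(stim_db, thresh_db, sd):
    """Frequency-of-seeing curve: probability of seeing a stimulus of
    stim_db attenuation; sd is the patient's response variability."""
    return 0.5 * math.erfc((stim_db - thresh_db) / (sd * math.sqrt(2)))

def staircase(start_db, thresh_db, sd, rng):
    """Simplified full-threshold staircase: 4 dB steps until the first
    response reversal, then 2 dB steps until the second reversal;
    the estimate is the last-seen stimulus level."""
    step, level = 4, start_db
    reversals, last_resp, last_seen = 0, None, None
    while reversals < 2:
        s = rng.random() < p_seen(level, thresh_db, sd)
        if s:
            last_seen = level
        if last_resp is not None and s != last_resp:
            reversals += 1
            step = 2
        last_resp = s
        level += step if s else -step   # dimmer if seen, brighter if not
    return last_seen

rng = random.Random(3)
thresh, sd = 30, 2.0                    # true threshold (dB) and variability (assumed)
errors = [staircase(30, thresh, sd, rng) - thresh for _ in range(500)]
print(sum(errors) / len(errors))        # mean threshold-estimate error (dB)
```

Re-running with `start_db` far from `thresh` and larger `sd` reproduces the qualitative finding above: the mean error is pulled toward the starting level, more strongly as response variability grows.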
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances
NASA Astrophysics Data System (ADS)
Vermeesch, P.
2015-12-01
Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method: 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J [40Ar/39Ar − 298.56 × 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ²(t) = [J² σ²(R) + R² σ²(J)] / [λ² (1 + R J)²], which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single- and multi-collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking
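The age equation t = log(1 + JR)/λ and its first-order error propagation can be written out numerically; the sketch below adds the R-J covariance term whose neglect the abstract criticizes. All input values are illustrative assumptions, not data from the paper:

```python
import math

# Age equation t = ln(1 + J*R)/lambda and first-order variance propagation,
# including the covariance term omitted by conventional protocols.
lam = 5.543e-10      # 40K decay constant, per year (commonly used value)
J, R = 0.01, 50.0    # irradiation parameter and 40Ar*/39Ar ratio (assumed)
sJ, sR = 1e-4, 0.5   # 1-sigma uncertainties (assumed)
cov_RJ = 0.0         # conventional protocols effectively assume zero here

t = math.log(1 + J * R) / lam                  # age in years
dtdR = J / (lam * (1 + J * R))                 # partial derivatives
dtdJ = R / (lam * (1 + J * R))
var_t = dtdR**2 * sR**2 + dtdJ**2 * sJ**2 + 2 * dtdR * dtdJ * cov_RJ
sigma_t = math.sqrt(var_t)
print(t / 1e6, sigma_t / 1e6)                  # ~731.5 Myr, ~8.5 Myr (1-sigma)
```

Setting `cov_RJ` to a nonzero value shows directly how a positive R-J correlation inflates (or a negative one deflates) the age uncertainty relative to the zero-covariance formula quoted above.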
Simulation-Based Learning Environment for Assisting Error-Correction
NASA Astrophysics Data System (ADS)
Horiguchi, Tomoya; Hirashima, Tsukasa
In simulation-based learning environments, 'unexpected' phenomena often work as counterexamples which prompt a learner to reconsider the problem. It is important that counterexamples contain sufficient information to lead a learner to correct understanding. This paper proposes a method for creating such counterexamples. Error-Based Simulation (EBS) is used for this purpose, which simulates the erroneous motion in mechanics based on a learner's erroneous equation. Our framework is as follows: (1) to identify the cause of errors by comparing a learner's answer with the problem-solver's correct one, and (2) to visualize the cause of errors through the unnatural motions in the EBS. To perform (1), misconceptions are classified based on a problem-solving model and related to their appearance in a learner's answers (error-identification rules). To perform (2), objects' motions in the EBS are classified and related to the misconceptions they suggest (error-visualization rules). A prototype system is implemented and evaluated through a preliminary test, confirming the usefulness of the framework.
Statistical error in particle simulations of low Mach number flows
Hadjiconstantinou, N G; Garcia, A L
2000-11-13
We present predictions for the statistical error due to finite sampling in the presence of thermal fluctuations in molecular simulation algorithms. The expressions are derived using equilibrium statistical mechanics. The results show that the number of samples needed to adequately resolve the flowfield scales as the inverse square of the Mach number. Agreement of the theory with direct Monte Carlo simulations shows that the use of equilibrium theory is justified.
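The inverse-square scaling can be illustrated with a one-line estimate: for a mean flow of order Ma·c (c the sound speed) and thermal velocity fluctuations of order c, the relative statistical error after N independent samples is roughly 1/(Ma·√N). A hedged sketch, where the O(1) fluctuation prefactor is an assumption:

```python
# Sketch of the scaling result: to reach a fixed relative error in the
# mean flow velocity, the number of independent samples must grow as
# 1/Ma^2, because thermal fluctuations (~ c) dwarf the mean flow (~ Ma*c).
def samples_needed(mach, rel_error=0.01, fluctuation_ratio=1.0):
    # N such that (sigma/mean)/sqrt(N) = rel_error, with sigma ~ c,
    # mean ~ Ma*c, and an assumed O(1) prefactor fluctuation_ratio.
    return (fluctuation_ratio / (mach * rel_error)) ** 2

ratio = samples_needed(0.1) / samples_needed(0.2)
print(ratio)   # halving Ma quadruples the required samples
```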
Li, Hui; Fu, Zhida; Liu, Liying; Lin, Zhili; Deng, Wei; Feng, Lishuang
2017-01-03
An improved temperature-insensitive optical voltage sensor (OVS) with a reciprocal dual-crystal sensing method is proposed. The mechanism of OVS reciprocity degradation is explained by taking into consideration the different temperature fields of the two crystals and the axis errors of the optical components. The key parameters governing the system reciprocity degradation in the dual-crystal sensing unit are investigated in order to optimize the optical sensing model based on Maxwell's electromagnetic theory. The influence of axis-angle errors on the system nonlinearity in the Pockels phase transfer unit is analyzed. Moreover, a novel axis-angle compensation method is proposed to improve the OVS measurement precision according to the simulation results. The experimental results show that the measurement precision of the OVS is better than ±0.2% over the temperature range from -40 °C to +60 °C, which demonstrates the excellent temperature stability of the designed voltage sensing system.
Communication Systems Simulator with Error Correcting Codes Using MATLAB
ERIC Educational Resources Information Center
Gomez, C.; Gonzalez, J. E.; Pardo, J. M.
2003-01-01
In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…
Error and Uncertainty Analysis for Ecological Modeling and Simulation
2001-12-01
…in GIS has been proposed by Openshaw (1992), based on Monte Carlo simulation (the recommended method). As we mentioned above, however, this method is… [References include: …Modelling, 8: 297-311; Openshaw, S., 1992. Learning to live with errors in spatial databases. In: Accuracy of Spatial Databases (Eds. Goodchild, M. & S…]
Numerical Simulation of the Detonation Propagation in Silicon Carbide Shell
NASA Astrophysics Data System (ADS)
Balagansky, Igor; Terechov, Anton
2013-06-01
In recent years it has been shown experimentally that in condensed high explosive (HE) charges placed in a silicon carbide shell whose sound velocity is greater than the detonation velocity in the HE, interesting phenomena may be observed. Depending on the conditions, either an increase or a decrease of the detonation velocity and of the pressure at the detonation front can occur. Distortion of the detonation front, up to the formation of a concave front, is also observed. For a detailed explanation of the physical nature of the phenomenon, we performed a numerical simulation of detonation wave propagation in a Composition B HE charge placed in a silicon carbide shell. The modeling was performed with Ansys Autodyn in 2D axial symmetry on an Eulerian mesh. Special attention was paid to the selection of the parameter values in the Lee-Tarver kinetic equation for the HE and to the choice of constants describing the behavior of the ceramic. For comparison, we also modeled the propagation of detonation in a completely similar assembly with a brass shell. The simulation results agree well with the experimental data. In particular, distortion of the detonation front was observed in the silicon carbide shell. A characteristic feature of the process is the pressure waves propagating in the direction of the axis of symmetry on the back surface of the detonation front.
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
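A minimal sketch of the kind of Latin Hypercube error propagation REPTool performs, using a toy raster model. The model, parameter values, and function names are illustrative assumptions, not REPTool's actual API:

```python
import numpy as np
from statistics import NormalDist

def lhs_normal(n, mean, sd, rng):
    """Latin Hypercube sample of a normal distribution: one draw per
    equal-probability stratum, in shuffled order."""
    u = (np.arange(n) + rng.random(n)) / n        # stratified uniforms in (0, 1)
    rng.shuffle(u)
    nd = NormalDist(mean, sd)
    return np.array([nd.inv_cdf(p) for p in u])

rng = np.random.default_rng(1)
n = 200                                           # number of LHS realizations
precip = 1000.0                                   # raster cell value, mm (assumed)
precip_err = lhs_normal(n, 0.0, 50.0, rng)        # spatially invariant input error
coeff = lhs_normal(n, 0.2, 0.02, rng)             # uncertain model coefficient
recharge = coeff * (precip + precip_err)          # toy geospatial model, per cell
print(recharge.mean(), recharge.std())            # output uncertainty distribution
```

Comparing the output variance with `precip_err` held fixed versus `coeff` held fixed gives a simple version of the Relative Variance Contribution idea mentioned above: it apportions output uncertainty between input-data error and coefficient error.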
2014-09-01
Propagation of Forecast Errors from the Sun to LEO Trajectories: How Does Drag Uncertainty Affect Conjunction Frequency? Emmert, John; Byers, Jeff. …density is in turn strongly controlled by incident ultraviolet radiation from the Sun. Accordingly, modeling and forecasting upper atmospheric density… …trajectories of most objects in low-Earth orbit, and solar variability is the largest source of error in upper atmospheric density forecasts. There is
Robust Simulator for Error-Visualization in Assisting Learning Science
NASA Astrophysics Data System (ADS)
Horiguchi, Tomoya; Hirashima, Tsukasa
Error-based Simulation (EBS) is a framework for helping a learner become aware of his error. It makes a simulation based on his erroneous hypothesis to show what unreasonable phenomena would occur if the hypothesis were correct, which has been proved effective in causing cognitive conflict. In making an EBS, it is necessary (1) to make the simulation by dealing with a set of inconsistent constraints, because erroneous hypotheses often contradict the correct knowledge, and (2) to estimate the 'unreasonableness' of phenomena in the simulation, because it must be recognized as 'unreasonable' by the learner. Since the method used in previous EBS systems was highly domain-dependent, this paper describes a method for making an EBS based on any inconsistent set of simultaneous equations/inequalities by using a TMS (called 'Partial Constraint Analysis (PCA)'). It also describes a set of general heuristics to estimate the 'unreasonableness' of physical phenomena. Using PCA and the heuristics, a prototype EBS system for elementary mechanics and electric circuit problems was implemented, in which a learner is asked to set up the equations of the systems. A preliminary test, in which most of the subjects agreed that the EBSs and explanations made by the prototype were effective in making a learner aware of his error, supported the usefulness of our method.
Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices
Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee
2015-01-01
In transmitting video data securely over Video Sensor Networks (VSNs), since mobile handheld devices have limited resources in terms of processor clock speed and battery size, it is necessary to develop an efficient method to encrypt video data to meet the increasing demand for secure connections. Selective encryption methods can reduce the amount of computation needed while satisfying high-level security requirements. This is achieved by selecting an important part of the video data and encrypting it. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed new selective encryption method exploits the error propagation property in an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame encryption, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. The TDRSS downlink is subject to RFI with several duty cycles. We conclude that the PCI does not improve performance for any of these interferers except possibly one, which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
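Among the listed components, the convolutional encoder is easy to sketch. The rate-1/2, constraint-length-3 code with generators 7/5 (octal) below is a common textbook choice, not necessarily the code implemented in CLEAN:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    # Rate-1/2 convolutional encoder: shift each input bit into a
    # k-bit register and emit one parity bit per generator polynomial.
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out
```

A soft-decision Viterbi decoder such as item (1) would then search the trellis defined by exactly this state-update rule.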
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran (a C++ version of this program is available in the Library as AEGQ_v1_0)
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program.
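The probabilistic estimation CADNA implements (the CESTAC method) can be illustrated in miniature: run the same computation several times under randomly perturbed rounding and take the spread of the results as an estimate of the number of significant digits. The Python sketch below only mimics the idea; CADNA does this with dedicated Fortran stochastic types:

```python
import math, random

def perturb(x, ulp=2**-52):
    # Randomly perturb the last bits of x, mimicking the random rounding
    # of the CESTAC method that CADNA implements.
    return x * (1.0 + random.choice((-1.0, 1.0)) * random.random() * ulp)

def stochastic_sum(values, samples=3):
    # Run the same summation several times under perturbed rounding and
    # estimate the number of significant digits common to the results.
    results = []
    for _ in range(samples):
        s = 0.0
        for v in values:
            s = perturb(s + v)
        results.append(s)
    mean = sum(results) / samples
    spread = max(abs(r - mean) for r in results)
    if spread == 0.0 or mean == 0.0:
        return mean, math.inf
    return mean, -math.log10(spread / abs(mean))

mean, digits = stochastic_sum([0.1] * 10)   # a well-conditioned sum
```

For an ill-conditioned computation (e.g. catastrophic cancellation) the estimated digit count collapses, which is how stochastic arithmetic flags invalid results.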
Correction of Discretization Errors Simulated at Supply Wells.
MacMillan, Gordon J; Schumacher, Jens
2015-01-01
Many hydrogeology problems require predictions of hydraulic heads in a supply well. In most cases, the regional hydraulic response to groundwater withdrawal is best approximated using a numerical model; however, simulated hydraulic heads at supply wells are subject to errors associated with model discretization and well loss. An approach for correcting the simulated head at a pumping node is described here. The approach corrects for errors associated with model discretization and can incorporate the user's knowledge of well loss. It is model independent, can be applied to finite-difference or finite-element models, and allows the numerical model to remain somewhat coarsely discretized and therefore numerically efficient. Because the correction is implemented externally to the numerical model, one important benefit of this approach is that a response-matrix (reduced-model) approach can be supported even when nonlinear well loss is considered.
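A common form of such a discretization correction, shown here as a hedged sketch rather than the authors' exact formulation, combines Peaceman's equivalent well-block radius with the Thiem equation to map the simulated nodal head to a head at the actual well radius:

```python
import math

def corrected_well_head(h_node, Q, T, dx, rw, well_loss=0.0):
    # Correct the simulated head at a pumping node to the head at the
    # well radius rw, using Peaceman's equivalent block radius
    # r_eq = 0.2*dx and the Thiem equation.
    #   h_node    : simulated head at the node [m]
    #   Q         : withdrawal rate [m^3/s] (positive for pumping)
    #   T         : transmissivity [m^2/s]
    #   dx        : cell size [m]
    #   rw        : well radius [m]
    #   well_loss : optional additional drawdown from well loss [m]
    r_eq = 0.2 * dx
    dh = Q / (2.0 * math.pi * T) * math.log(r_eq / rw)
    return h_node - dh - well_loss
```

Because the correction is applied outside the model, the grid can stay coarse; only `dx`, the pumping rate, and local transmissivity are needed, which is what makes the external, model-independent implementation possible.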
On the accurate simulation of tsunami wave propagation
NASA Astrophysics Data System (ADS)
Castro, C. E.; Käser, M.; Toro, E. F.
2009-04-01
A very important part of any tsunami early warning system is the numerical simulation of wave propagation in the open sea and close to geometrically complex coastlines, respecting bathymetric variations. Here we are interested in improving the numerical tools available to accurately simulate tsunami wave propagation on a Mediterranean basin scale. To this end, we need to accomplish several targets: high-order numerical simulation in space and time, preservation of steady-state conditions to avoid spurious oscillations, and description of complex geometries due to bathymetry and coastlines. We use the Arbitrary accuracy DERivatives Riemann problem method together with the Finite Volume method (ADER-FV) over non-structured triangular meshes. The novelty of this work is the improvement of the ADER-FV scheme, introducing the well-balanced property when geometrical sources are considered, for unstructured meshes and arbitrary high-order accuracy. In previous work by Castro and Toro [1], the authors mention that ADER-FV schemes approach the well-balanced condition asymptotically, which was true for the test case considered in [1]. However, new evidence [2] shows that for real-scale problems such as the Mediterranean basin, with realistic bathymetry such as ETOPO-2 [3], this asymptotic behavior is not enough. Under these realistic conditions the standard ADER-FV scheme fails to accurately describe the propagation of gravity waves without being contaminated by spurious oscillations, also known as numerical waves. The main problem is that at the discrete level, i.e. from a numerical point of view, the scheme does not correctly balance the influence of the fluxes and the sources. Numerical schemes that retain this balance are said to satisfy the well-balanced property or the exact C-property. This imbalance diminishes as we refine the spatial discretization or increase the order of the numerical method; however, the computational cost then increases considerably.
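The C-property can be demonstrated on a minimal first-order scheme. The sketch below uses hydrostatic reconstruction (Audusse et al.) with a Rusanov flux, not ADER-FV, to show how a well-balanced discretization preserves the lake-at-rest state h + b = const, u = 0 exactly (boundary cells are left without a wall flux, so only interior cells are checked):

```python
# First-order finite-volume step for the 1-D shallow-water equations over
# bathymetry b(x), with hydrostatic reconstruction and a Rusanov flux.
g = 9.81

def step(h, hu, b, dt, dx):
    hn, hun = h[:], hu[:]
    for i in range(len(h) - 1):
        bmax = max(b[i], b[i + 1])
        hl = max(0.0, h[i] + b[i] - bmax)           # reconstructed depths
        hr = max(0.0, h[i + 1] + b[i + 1] - bmax)
        ul = hu[i] / h[i] if h[i] > 0.0 else 0.0
        ur = hu[i + 1] / h[i + 1] if h[i + 1] > 0.0 else 0.0
        a = max(abs(ul) + (g * hl) ** 0.5, abs(ur) + (g * hr) ** 0.5)
        # Rusanov flux on the reconstructed interface states
        f0 = 0.5 * (hl * ul + hr * ur) - 0.5 * a * (hr - hl)
        f1 = 0.5 * (hl * ul * ul + 0.5 * g * hl * hl
                    + hr * ur * ur + 0.5 * g * hr * hr) \
             - 0.5 * a * (hr * ur - hl * ul)
        # bed-slope corrections that make the scheme well balanced
        hn[i] -= dt / dx * f0
        hun[i] -= dt / dx * (f1 + 0.5 * g * (h[i] ** 2 - hl ** 2))
        hn[i + 1] += dt / dx * f0
        hun[i + 1] += dt / dx * (f1 + 0.5 * g * (h[i + 1] ** 2 - hr ** 2))
    return hn, hun

b = [0.0, 0.0, 0.5, 1.0, 0.5, 0.0, 0.0]   # a submerged bump
H = 2.0
h = [H - bi for bi in b]                   # flat free surface, fluid at rest
hu = [0.0] * len(b)
hn, hun = step(h, hu, b, 0.01, 0.1)
```

A naive pointwise source discretization would generate spurious momentum over the bump from this exact steady state, which is precisely the "numerical waves" problem described above.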
Numerical Simulation of Shock Propagation in Dilute Monodisperse Bubbly Liquids
NASA Astrophysics Data System (ADS)
Cartmell, J. J.; Nadim, A.; Barbone, P. E.
1997-11-01
The MacCormack finite-difference method is used to simulate the propagation and evolution of shock waves in a bubbly liquid. The bubbly liquid is modeled as a continuum which is described by the continuity and Euler equations, but with a non-equilibrium equation of state (EOS) which relates the mixture pressure to the mixture density and its first two material time derivatives. This nonlinear EOS can be derived by assuming the liquid phase to be incompressible and the gas bubbles to be identical and non-interacting. The bubbles are further assumed to translate with the mixture velocity, and their spherical oscillations are taken to be described by the Rayleigh-Plesset equation. In 1-D, the evolution of an initial step function in pressure is followed in time. This produces a shock which propagates towards the low pressure side and a rarefaction front which moves in the opposite direction. The shock forms a steady traveling wave with the oscillatory tail characteristic of bubbly liquids. In 2-D, the focusing of an initially small amplitude wave into a strong shock is simulated.
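The single-bubble dynamics entering the mixture equation of state can be sketched by integrating the Rayleigh-Plesset equation directly. The parameter values below (a 100 µm air bubble subjected to a pressure step to two atmospheres) are illustrative, and viscosity and surface tension are neglected:

```python
# Rayleigh-Plesset equation for a single spherical gas bubble,
#   R*R'' + (3/2)*R'^2 = (p_b - p_inf)/rho,
# with a polytropic gas law p_b = p0*(R0/R)^(3*kappa).
def rp_rhs(R, V, p_inf, R0, p0=101325.0, rho=1000.0, kappa=1.4):
    p_b = p0 * (R0 / R) ** (3.0 * kappa)
    return (p_b - p_inf) / (rho * R) - 1.5 * V * V / R

def integrate(R0=1e-4, p_inf=2.0 * 101325.0, dt=1e-8, steps=200):
    # Classical RK4 on (R, R'); the sudden jump of the far-field
    # pressure to two atmospheres drives the bubble into compression.
    R, V = R0, 0.0
    for _ in range(steps):
        k1R, k1V = V, rp_rhs(R, V, p_inf, R0)
        k2R, k2V = V + 0.5*dt*k1V, rp_rhs(R + 0.5*dt*k1R, V + 0.5*dt*k1V, p_inf, R0)
        k3R, k3V = V + 0.5*dt*k2V, rp_rhs(R + 0.5*dt*k2R, V + 0.5*dt*k2V, p_inf, R0)
        k4R, k4V = V + dt*k3V, rp_rhs(R + dt*k3R, V + dt*k3V, p_inf, R0)
        R += dt * (k1R + 2*k2R + 2*k3R + k4R) / 6.0
        V += dt * (k1V + 2*k2V + 2*k3V + k4V) / 6.0
    return R
```

The damped radial oscillations of each bubble are what produce the oscillatory tail behind the steady shock described in the abstract.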
Computer simulation of short shock pulses propagation in ceramic materials
NASA Astrophysics Data System (ADS)
Skripnyak, Vladimir A.; Skripnyak, Evgenia G.; Zhukova, Tat'yana V.
2001-06-01
The propagation of shock pulses with durations from a microsecond down to several tens of nanoseconds, and the attenuation of their amplitude, are investigated by numerical simulation in single-phase polycrystalline ceramics, sapphire and ruby single crystals, and nanocrystalline ceramic composites. The propagation of shock and unloading waves is governed by the mechanical behavior of the ceramics, which depends on the evolution of their structure. Shear-stress relaxation in structural ceramics can be caused by a set of physical mechanisms at the meso- and micro-scale levels. The model used takes into account the kinetics of inelastic deformation caused by martensitic phase transformation, nucleation and motion of dislocations, nucleation of shear microcracks, etc. The simulation results indicate that inelastic deformation can be negligible in structural elements made of polycrystalline ceramics even when the shock pulse amplitude is higher than the Hugoniot Elastic Limit (HEL), provided the pulse duration is comparable with the relaxation times of the dominant physical mechanisms. Under these conditions the actual spall strength of polycrystalline ceramics is comparable to the theoretical tensile strength of the single crystals. Al2O3 ceramics can deform almost purely elastically, with shear stresses comparable to the theoretical shear strength, if the duration of the pulse loading is no more than some tens of nanoseconds.
Numerical simulation of premixed flame propagation in a closed tube
NASA Astrophysics Data System (ADS)
Kuzuu, Kazuto; Ishii, Katsuya; Kuwahara, Kunio
1996-08-01
Premixed flame propagation of a methane-air mixture in a closed tube is estimated through direct numerical simulation of the three-dimensional unsteady Navier-Stokes equations coupled with chemical reaction. In order to deal with a combusting flow, an extended version of the MAC method, which can be applied to a compressible flow with strong density variation, is employed as the numerical method. The chemical reaction is assumed to be an irreversible single-step reaction between methane and oxygen. The chemical species are CH4, O2, N2, CO2, and H2O. In this simulation, we reproduce the formation of a tulip flame in a closed tube during flame propagation. Furthermore, we estimate not only the two-dimensional shape but also the three-dimensional structure of the flame and flame-induced vortices, which cannot be observed in experiments. The agreement between the calculated results and the experimental data is satisfactory, and we compare the phenomenon near the side wall with that in the corner of the tube.
Monte Carlo simulation of light propagation in the adult brain
NASA Astrophysics Data System (ADS)
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter
2004-06-01
When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) with a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the light absorption and dispersion coefficient of the material in each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extra-cerebral contamination are included.
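The Monte Carlo principle underlying such simulations can be shown in a deliberately simplified setting: a homogeneous slab with exponential free paths and isotropic scattering, far from the voxelized, anatomically realistic head model used in the study:

```python
import math, random

def mc_photon_slab(mu_a, mu_s, d, n=20000, seed=1):
    # Toy photon Monte Carlo in a homogeneous slab of thickness d (cm):
    # sample an exponential free path, then either absorb or scatter
    # isotropically at each interaction. mu_a, mu_s in 1/cm.
    random.seed(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t if mu_t > 0.0 else 0.0
    reflected = absorbed = transmitted = 0
    for _ in range(n):
        z, uz = 0.0, 1.0                 # launch at the surface, inward
        while True:
            z += uz * (-math.log(random.random()) / mu_t)
            if z < 0.0:
                reflected += 1; break
            if z > d:
                transmitted += 1; break
            if random.random() > albedo:
                absorbed += 1; break
            uz = 2.0 * random.random() - 1.0   # isotropic scattering
    return reflected / n, absorbed / n, transmitted / n
```

With no scattering the transmitted fraction follows the Beer-Lambert law exp(-mu_t*d), a useful sanity check; with strong scattering most launched photons diffuse back out of the entry surface, which is why the extra-cerebral layers dominate the detected NIRS signal.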
NASA Astrophysics Data System (ADS)
Liu, C. L.; Kirchengast, G.; Zhang, K. F.; Norman, R.; Li, Y.; Zhang, S. C.; Carter, B.; Fritzer, J.; Schwaerz, M.; Choy, S. L.; Wu, S. Q.; Tan, Z. X.
2013-09-01
Global Navigation Satellite System (GNSS) radio occultation (RO) is an innovative meteorological remote sensing technique for measuring atmospheric parameters such as refractivity, temperature, water vapour and pressure for the improvement of numerical weather prediction (NWP) and global climate monitoring (GCM). GNSS RO has many unique characteristics including global coverage, long-term stability of observations, and high accuracy and high vertical resolution of the derived atmospheric profiles. One of the main error sources in GNSS RO observations that significantly affects the accuracy of the derived atmospheric parameters in the stratosphere is the ionospheric error. In order to mitigate this error, the linear ionospheric correction approach for dual-frequency GNSS RO observations is commonly used. However, the residual ionospheric errors (RIEs) can still be significant, especially when large ionospheric disturbances occur and prevail, such as during periods of active space weather. In this study, the RIEs were investigated under different local time, propagation direction and solar activity conditions, and their effects on RO bending angles were characterised using end-to-end simulations. A three-step simulation study was designed to investigate the characteristics of the RIEs by comparing the bending angles with and without the effects of the RIEs. This research forms an important step forward in improving the accuracy of the atmospheric profiles derived from the GNSS RO technique.
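The standard linear (first-order) dual-frequency correction mentioned above combines the bending angles measured at the two GNSS frequencies so that any ionospheric contribution scaling as 1/f² cancels; the bending-angle values in the test below are synthetic:

```python
# Linear dual-frequency ionospheric correction of RO bending angles:
#   alpha_c = (f1^2*alpha1 - f2^2*alpha2) / (f1^2 - f2^2)
# using the GPS L1/L2 carrier frequencies. What this combination cannot
# remove is exactly the residual (higher-order) ionospheric error that
# the study quantifies.
F1, F2 = 1575.42e6, 1227.60e6   # Hz

def corrected_bending_angle(alpha1, alpha2):
    return (F1**2 * alpha1 - F2**2 * alpha2) / (F1**2 - F2**2)
```

For a synthetic neutral bending angle plus a pure 1/f² ionospheric term, the combination recovers the neutral angle to machine precision; real RIEs arise from the terms this first-order model neglects.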
Li, Hui; Fu, Zhida; Liu, Liying; Lin, Zhili; Deng, Wei; Feng, Lishuang
2017-01-01
An improved temperature-insensitive optical voltage sensor (OVS) with a reciprocal dual-crystal sensing method is proposed. The mechanism of OVS reciprocity degradation is analyzed by taking into consideration the different temperature fields of the two crystals and the axis errors of the optical components. The key parameters governing the system reciprocity degradation in the dual-crystal sensing unit are investigated in order to optimize the optical sensing model based on Maxwell's electromagnetic theory. The influence of axis-angle errors on the system nonlinearity in the Pockels phase transfer unit is analyzed. Moreover, a novel axis-angle compensation method is proposed to improve the OVS measurement precision according to the simulation results. The experimental results show that the measurement precision of the OVS is better than ±0.2% in the temperature range from −40 °C to +60 °C, which demonstrates the excellent temperature stability of the designed voltage sensing system. PMID:28054951
NASA Astrophysics Data System (ADS)
Jia, Hao; Chen, Bin; Li, Dong; Zhang, Yong
2015-02-01
To accommodate complex tissue structures, laser propagation in a two-layered skin model is simulated to compare voxel-based Monte Carlo (VMC) and tetrahedron-based MC (TMC) methods with a geometry-based MC (GMC) method. In GMC, the interface is mathematically defined without any discretization. GMC is the most accurate but is not applicable to complicated domains. The implementation of VMC is simple because of its structured voxels; however, unavoidable errors are expected because of the zigzag polygonal interface. Compared with GMC and VMC, TMC provides a balance between accuracy and flexibility through its tetrahedral cells. In the present TMC, body-fitted tetrahedra are generated in the different tissues. No interface tetrahedral cells exist, thereby avoiding the photon reflection error seen in the interface cells of VMC. By introducing a distance threshold, the error caused by confusing the optical parameters of neighboring cells when photons travel along a cell boundary can be avoided. The results show that the energy deposition error of TMC in the interfacial region is one-tenth to one-fourth that of VMC, yielding more accurate computations of photon reflection, refraction, and energy deposition. The results for multilayered and n-shaped vessels indicate that a laser with a 1064-nm wavelength should be introduced to clean deep-buried vessels.
Unraveling the uncertainty and error propagation in the vertical flux Martin curve
NASA Astrophysics Data System (ADS)
Olli, Kalle
2015-06-01
Analyzing the vertical particle flux and particle retention in the upper twilight zone has commonly been accomplished by fitting a power function to the data. Measuring the vertical particle flux in the upper twilight zone, where most of the re-mineralization occurs, is a complex endeavor. Here I use field data and simulations to show how uncertainty in the particle flux measurements propagates into the vertical flux attenuation model parameters. Further, I analyze how the number of sampling depths and variations in the vertical sampling locations influence the model performance and parameter stability. The arguments provide a simple framework for optimizing the sampling scheme when vertical flux attenuation profiles are measured in the field, either with an array of sediment traps or with the 234Th methodology. A compromise between effort and quality of results is to sample at least six depths: the upper sampling depth as close to the base of the euphotic layer as feasible, the vertical sampling depths slightly aggregated toward the upper aphotic zone where most of the vertical flux attenuation takes place, and the lower end of the sampling range extending as deep as practicable into the twilight zone.
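The fitting and error-propagation procedure can be sketched as a log-log least-squares fit of the Martin curve F(z) = F0·(z/z0)^(-b) plus a Monte Carlo perturbation of the fluxes; the 20% relative error and the sampling depths below are illustrative choices, not the paper's data:

```python
import math, random

def fit_martin_b(depths, fluxes, z0=100.0):
    # Least-squares fit of F(z) = F0*(z/z0)**(-b) in log-log space;
    # returns (F0, b).
    xs = [math.log(z / z0) for z in depths]
    ys = [math.log(f) for f in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

def b_uncertainty(depths, fluxes, rel_err=0.2, trials=500, seed=2):
    # Monte Carlo propagation of measurement error into the attenuation
    # exponent b: perturb each flux multiplicatively and refit.
    random.seed(seed)
    bs = []
    for _ in range(trials):
        noisy = [f * math.exp(random.gauss(0.0, rel_err)) for f in fluxes]
        bs.append(fit_martin_b(depths, noisy)[1])
    m = sum(bs) / len(bs)
    return m, math.sqrt(sum((bb - m) ** 2 for bb in bs) / (len(bs) - 1))

depths = [100, 150, 200, 300, 500, 800]                 # six depths, m
fluxes = [100.0 * (z / 100.0) ** -0.86 for z in depths] # ideal Martin profile
```

Re-running `b_uncertainty` with fewer or shallower sampling depths widens the spread of the fitted exponent, which is exactly the sampling-design effect the study analyzes.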
[Monte Carlo simulation of the divergent beam propagation in a semi-infinite bio-tissue].
Zhang, Lin; Qi, Shengwen
2013-12-01
In order to study light propagation in biological tissue, we analyzed divergent beam propagation in a turbid medium. We set up a Monte Carlo model for simulating divergent beam propagation in a semi-infinite bio-tissue. Using this model, we studied the absorbed photon density for different tissue parameters in the case of a divergent beam entering the tissue. The simulation results revealed the rules of optical propagation in the tissue and further suggested that light-based diagnosis and treatment could be guided by these rules.
Simulations of ultra-high-energy cosmic rays propagation
Kalashev, O. E.; Kido, E.
2015-05-15
We compare two techniques for simulating the propagation of ultra-high-energy cosmic rays (UHECR) in intergalactic space: the Monte Carlo approach and a method based on solving transport equations in one dimension. For the former, we adopt the publicly available tool CRPropa; for the latter, we use the code TransportCR, which was developed by the first author, has been used in a number of applications, and is made available online with the publication of this paper. While the CRPropa code is more universal, the transport equation solver has the advantage of a roughly 100 times higher calculation speed. We conclude that the methods give practically identical results for proton or neutron primaries if some accuracy improvements are introduced to the CRPropa code.
Computer simulation of microwave propagation in heterogeneous and fractal media
NASA Astrophysics Data System (ADS)
Korvin, Gabor; Khachaturov, Ruben V.; Oleschko, Klaudia; Ronquillo, Gerardo; Correa López, María de Jesús; García, Juan-José
2017-03-01
Maxwell's equations (MEs) are the starting point for all calculations involving surface or borehole electromagnetic (EM) methods in the petroleum industry. In well-log analysis, numerical modeling of resistivity and induction tool responses has become an indispensable step of interpretation. We developed a new method to numerically simulate electromagnetic wave propagation through heterogeneous and fractal slabs, taking into account multiple scattering in the direction of normal incidence. In the simulation, the gray-scale image of the porous medium is explored by monochromatic waves. The gray tone of each pixel can be related to the dielectric permittivity of the medium at that point by two different relations (linear dependence, and fractal or power-law dependence). The wave equation is solved in second-order difference approximation, using a modified sweep technique. Examples are shown for simulated EM waves in carbonate rocks imaged at different scales by electron microscopy and optical photography. The method has wide-ranging applications in remote sensing, borehole scanning and Ground Penetrating Radar (GPR) exploration.
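For normal incidence on a 1-D layered medium, multiple scattering can also be captured compactly with 2x2 characteristic (transfer) matrices; this is an alternative sketch of the same physics, not the modified sweep technique the authors use:

```python
import cmath, math

def transfer_matrix_slab(eps_layers, d_layers, wavelength):
    # Normal-incidence reflectance/transmittance of a dielectric stack
    # in vacuum, via 2x2 characteristic matrices (Born & Wolf
    # convention); each pixel column of a gray-scale image could supply
    # one (eps, d) layer.
    k0 = 2.0 * math.pi / wavelength
    M = [[1.0, 0.0], [0.0, 1.0]]
    for eps, d in zip(eps_layers, d_layers):
        n = cmath.sqrt(eps)
        delta = k0 * n * d
        L = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0]*L[0][0] + M[0][1]*L[1][0], M[0][0]*L[0][1] + M[0][1]*L[1][1]],
             [M[1][0]*L[0][0] + M[1][1]*L[1][0], M[1][0]*L[0][1] + M[1][1]*L[1][1]]]
    A, B, C, D = M[0][0], M[0][1], M[1][0], M[1][1]
    r = (A + B - C - D) / (A + B + C + D)   # vacuum on both sides
    t = 2.0 / (A + B + C + D)
    return abs(r) ** 2, abs(t) ** 2
```

For a lossless stack, energy conservation R + T = 1 is a convenient correctness check on any such multiple-scattering solver.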
Simulation of seismic wave propagation for reconnaissance in machined tunnelling
NASA Astrophysics Data System (ADS)
Lambrecht, L.; Friederich, W.
2012-04-01
During machined tunnelling, the components involved interact in a complex chain. For example, the machine influences the surrounding ground during excavation, while supporting measures act on the ground in turn. Furthermore, differing soil conditions influence tool wear, excavation speed, and the safety of the construction site. In order to obtain information about the ground along the tunnel track, one can use seismic imaging. To get a better understanding of seismic wave propagation in a tunnel environment, we perform numerical simulations using the spectral element method (SEM) and the nodal discontinuous Galerkin (NDG) method. In both methods, elements are the basis for discretizing the domain of interest and performing high-order elastodynamic simulations. The SEM is a fast and widely used method, but its biggest drawback is its limitation to hexahedral elements. For complex heterogeneous models with a tunnel included, it is a better choice to use the NDG method, which needs more computation time but can be adapted to tetrahedral elements. Using this technique, we can perform high-resolution simulations of waves initiated by a single force acting either on the front face or on the side face of the tunnel. The aim is to produce waves that travel mainly in the direction of the tunnel track and to extract as much information as possible from the backscattered part of the wave field.
Cereatti, Andrea; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio
2007-01-01
To estimate hip joint angles during selected motor tasks using stereophotogrammetric data, it is necessary to determine the hip joint centre position. The question is whether the errors affecting that determination propagate less to the angle estimates when a three-degrees-of-freedom (DOF) constraint (spherical hinge) is used between femur and pelvis than when the two bones are assumed to be unconstrained (six DOFs). An analytical relationship between the hip joint centre location error and the joint angle error was obtained for the planar case. In the 3-D case, a similar relationship was obtained using a simulation approach based on experimental data. The joint angle patterns showed larger distortion with the constrained approach, especially when wider rotations occur. The range of motion of hip flexion-extension, obtained by simulating different location errors and without taking soft tissue artefacts into account, varied by approximately 7 deg with the constrained approach and by up to 1 deg when calculated with the unconstrained approach. Thus, the unconstrained approach should be preferred even though its estimated three linear DOFs most likely carry no meaningful information.
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead parameter estimation method for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method shows negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
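The EVM figure of merit used to tune the DFBP parameters is straightforward to compute from received and ideal constellation symbols; the symbols in the test below are synthetic:

```python
import math

def evm_percent(received, reference):
    # Error vector magnitude of received complex symbols against the
    # ideal constellation points, in percent, normalized to the RMS
    # power of the reference constellation.
    num = sum(abs(r - s) ** 2 for r, s in zip(received, reference))
    den = sum(abs(s) ** 2 for s in reference)
    return 100.0 * math.sqrt(num / den)
```

A coarse DFBP parameter search then simply sweeps the nonlinearity parameters and keeps the setting that minimizes this single scalar, which is what makes the method low-overhead.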
Supporting the Development of Soft-Error Resilient Message Passing Applications using Simulation
Engelmann, Christian; Naughton III, Thomas J
2016-01-01
Radiation-induced bit flip faults are of particular concern in extreme-scale high-performance computing systems. This paper presents a simulation-based tool that enables the development of soft-error resilient message passing applications by permitting the investigation of their correctness and performance under various fault conditions. The documented extensions to the Extreme-scale Simulator (xSim) enable the injection of bit flip faults at specific injection location(s) and fault activation time(s), while supporting a significant degree of configurability of the fault type. Experiments show that the simulation overhead with the new feature is ~2,325% for serial execution and ~1,730% at 128 MPI processes, both with very fine-grain fault injection. Fault injection experiments demonstrate the usefulness of the new feature by injecting bit flips in the input and output matrices of a matrix-matrix multiply application, revealing the vulnerability of data structures, masking, and error propagation. xSim is the first simulation-based MPI performance tool that supports both the injection of process failures and bit flip faults.
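The mechanics of a bit flip fault and its propagation through a matrix-matrix multiply can be reproduced standalone (xSim injects such faults inside the simulated MPI application; this sketch only illustrates the effect):

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    # Flip one bit of an IEEE-754 double by round-tripping through its
    # 64-bit integer representation.
    (i,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", i ^ (1 << bit)))
    return y

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
clean = matmul(A, B)
A[0][0] = flip_bit(A[0][0], 52)   # least-significant exponent bit: 1.0 -> 0.5
faulty = matmul(A, B)
```

A flip in one input entry contaminates exactly one row of the product while the other row is untouched, a small example of the data-structure vulnerability and error-propagation patterns the xSim experiments map out.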
Computational fluid dynamics simulation of sound propagation through a blade row.
Zhao, Lei; Qiao, Weiyang; Ji, Liang
2012-10-01
The propagation of sound waves through a blade row is investigated numerically. A wave-splitting method in a two-dimensional duct with arbitrary mean flow is presented, with which the pressure amplitude of each wave mode can be extracted at an axial plane. The propagation of a sound wave through a flat-plate blade row has been simulated by solving the unsteady Reynolds-averaged Navier-Stokes equations (URANS). The transmission and reflection coefficients obtained by Computational Fluid Dynamics (CFD) are compared with semi-analytical results. This indicates that the low-order URANS scheme causes large errors if the sound pressure level is lower than -100 dB (with the product of density, mean flow velocity, and speed of sound as the reference pressure). The CFD code has sufficient precision when solving the interaction of sound waves and a blade row, provided the boundary reflections have no substantial influence. Finally, the effects of flow Mach number, blade thickness, and blade turning angle on sound propagation are studied.
Simulation of 3D Global Wave Propagation Through Geodynamic Models
NASA Astrophysics Data System (ADS)
Schuberth, B.; Piazzoni, A.; Bunge, H.; Igel, H.; Steinle-Neumann, G.
2005-12-01
This project aims at a better understanding of the forward problem of global 3D wave propagation. We use the spectral element program "SPECFEM3D" (Komatitsch and Tromp, 2002a,b) with varying input models of seismic velocities derived from mantle convection simulations (Bunge et al., 2002). The purpose of this approach is to obtain seismic velocity models independently of seismological studies. In this way one can test the effects of varying the parameters of the mantle convection models on the seismic wave field. In order to obtain the seismic velocities from the temperature field of the geodynamical simulations we follow a mineral physics approach. Assuming a certain mantle composition (e.g. pyrolite with CMASF composition), we compute the stable phases for each depth (i.e. pressure) and temperature by Gibbs free energy minimization of the system. Elastic moduli and density are calculated from the equations of state of the stable mineral phases. For this we use a mineral physics database derived from calorimetric experiments (enthalpy and entropy of formation, heat capacity) and EOS parameters.
Numerical simulation of broadband vortex terahertz beams propagation
NASA Astrophysics Data System (ADS)
Semenova, V. A.; Kulya, M. S.; Bespalov, V. G.
2016-08-01
Orbital angular momentum (OAM) represents a new informational degree of freedom for data encoding and multiplexing in fiber and free-space communications. OAM-carrying beams (also called vortex beams) have been used successfully to increase the capacity of optical, millimetre-wave and radio-frequency communication systems. The investigation of the OAM potential of new-generation high-speed terahertz communications is also of interest, given the ever-growing demand for capacity in telecommunications. Here we present a simulation-based study of broadband terahertz vortex beams, generated by a spiral phase plate (SPP), propagating in a non-dispersive medium. An algorithm based on scalar diffraction theory was used to obtain the spatial amplitude and phase distributions of the vortex beam in the frequency range from 0.1 to 3 THz at distances of 20-80 mm from the SPP. The simulation results show that amplitude and phase distributions free of unwanted modulation are obtained in wavelength ranges centred on wavelengths that are multiples of the SPP optical thickness. This may allow the creation of a high-capacity near-field communication link combining OAM and wavelength-division multiplexing.
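The phase an SPP imprints and the resonance condition on its optical thickness can be written down directly; the refractive index and step height below are illustrative values, not the paper's device parameters:

```python
import math

def spp_phase(l, x, y):
    # Azimuthal phase exp(i*l*phi) imprinted by an ideal spiral phase
    # plate of topological charge l, reduced to [0, 2*pi).
    return (l * math.atan2(y, x)) % (2.0 * math.pi)

def spp_charge(h_step, n_index, wavelength):
    # Charge produced at a given wavelength by an SPP of total step
    # height h_step: l = h_step*(n - 1)/lambda. Integer charges occur
    # only where the SPP optical thickness is a multiple of the
    # wavelength -- the behaviour noted in the abstract.
    return h_step * (n_index - 1.0) / wavelength
```

For a broadband pulse, `spp_charge` varies continuously across the spectrum, so clean vortex profiles appear only in the wavelength bands around integer charge, which is what restricts the usable channels in an OAM/wavelength-multiplexed link.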
Numerical Simulation of Acoustic Propagation in a Lined Duct
NASA Astrophysics Data System (ADS)
Biringen, S.; Reichert, R. S.; Yu, J.; Zorumski, W. E.
1996-11-01
An inviscid, spatial time-domain numerical simulation is employed to compute acoustic wave propagation in a duct treated with an acoustic liner. The motivation is to assess the effects on sound attenuation of bias flow passed through the liner for application to noise suppression in jet engine nacelles. Physically, the liner is composed of porous sheets with backing air cavities. The mathematical model lumps the liner presence into a continuous empirical source term which modifies the right-hand side of the momentum equations. Thus, liner effects are felt interior to the domain rather than through boundary conditions. This source term determines the time-domain effects of the frequency-domain resistance and reactance of the liner's component sheets. The source term constants are matched to frequency-domain impedance data via a one-dimensional numerical impedance tube simulation. Nonlinear behavior of the liner at high sound pressure levels is included in the form of the source term. Sound pressure levels and axially transmitted power are computed to assess the effect of various magnitudes of bias flow on attenuation.
Analysis of errors occurring in large eddy simulation.
Geurts, Bernard J
2009-07-28
We analyse the effect of second- and fourth-order accurate central finite-volume discretizations on the outcome of large eddy simulations of homogeneous, isotropic, decaying turbulence at an initial Taylor-Reynolds number Re(lambda)=100. We determine the implicit filter that is induced by the spatial discretization and show that a higher order discretization also induces a higher order filter, i.e. a low-pass filter that keeps a wider range of flow scales virtually unchanged. The effectiveness of the implicit filtering is correlated with the optimal refinement strategy as observed in an error-landscape analysis based on Smagorinsky's subfilter model. As a point of reference, a finite-volume method that is second-order accurate for both the convective and the viscous fluxes in the Navier-Stokes equations is used. We observe that changing to a fourth-order accurate convective discretization leads to a higher value of the Smagorinsky coefficient C(S) required to achieve minimal total error at given resolution. Conversely, changing only the viscous flux discretization to fourth-order accuracy implies that optimal simulation results are obtained at lower values of C(S). Finally, a fully fourth-order discretization yields an optimal C(S) that is slightly lower than the reference fully second-order method.
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future, and some current, missions use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher-order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis performed using EUVE state vectors showed that the accuracy of the EUVE four-day OBC propagated ephemerides varied from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were taken from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that optimizes the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than 1 km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would
Hoinaski, Leonardo; Franco, Davide; de Melo Lisboa, Henrique
2017-03-01
Researchers have shown that most dispersion models, including the regulatory models recommended by the Environmental Protection Agency of the United States (AERMOD and CALPUFF), are unable to predict well under complex situations. This article presents a novel evaluation of the propagation of errors in the lateral dispersion coefficient of AERMOD, with emphasis on estimates for averaging times under 10 min. The sources of uncertainty evaluated were the parameterization of lateral dispersion ([Formula: see text]), the standard deviation of lateral wind speed ([Formula: see text]) and the processing of obstacle effects. The model's performance was tested against two field tracer experiments: Round Hill II and Uttenweiler. The results show that error propagation from the estimate of [Formula: see text] directly affects the determination of [Formula: see text], especially under the conditions of the Round Hill II experiment. As averaging times are reduced, errors arise in the parameterization of [Formula: see text], even after observational assimilation of [Formula: see text], exposing errors in the Lagrangian time scale parameterization. The assessment of the model in the presence of obstacles shows that the implementation of a plume rise model enhancement algorithm can improve the performance of AERMOD. However, these improvements are small when the obstacles have a complex geometry, as at Uttenweiler.
Clark, E.L.
1993-08-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
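As an illustrative sketch of the Taylor-series approach, the fragment below evaluates one aerodynamic ratio (the isentropic static-to-total pressure ratio, chosen here as an example; the report tabulates nine such ratios), its absolute and relative sensitivity coefficients with respect to M∞, and the first-order propagated error. Function names and the choice of ratio are illustrative, not taken from the report.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def pressure_ratio(M):
    """Isentropic static-to-total pressure ratio R = p/p0 as a function of
    free-stream Mach number M."""
    a = (GAMMA - 1.0) / 2.0
    return (1.0 + a * M * M) ** (-GAMMA / (GAMMA - 1.0))

def sensitivity(M):
    """Absolute sensitivity coefficient dR/dM, obtained by differentiating
    the ratio analytically."""
    a = (GAMMA - 1.0) / 2.0
    b = GAMMA / (GAMMA - 1.0)
    return -b * 2.0 * a * M * (1.0 + a * M * M) ** (-b - 1.0)

def relative_sensitivity(M):
    """Relative sensitivity coefficient (M/R) dR/dM."""
    return (M / pressure_ratio(M)) * sensitivity(M)

def propagated_error(M, dM):
    """First-order (Taylor-series) error in R due to an error dM in Mach."""
    return abs(sensitivity(M)) * dM
```

For example, at M∞ = 2 the ratio is about 0.128 and a Mach-number error of 0.01 propagates to an error of roughly |∂R/∂M∞|·0.01 in the pressure ratio.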
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracies, are first identified. Consequently, two quantitative indices, the GVE (group velocity error) and MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply this proposed method to select an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size for different element types and the proper time step for different time integration schemes are selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
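A minimal sketch of the two proposed indices, under assumed conventions (pure Python, with an illustrative arrival-time definition for the group-velocity error; the paper's exact definitions are not reproduced here):

```python
import math

def norm_xcorr(sim, ref):
    """Normalized cross-correlation of two equal-length signals,
    evaluated at every integer lag; returns a list of (lag, coefficient)."""
    n = len(sim)
    mu_s = sum(sim) / n
    mu_r = sum(ref) / n
    s = [x - mu_s for x in sim]
    r = [x - mu_r for x in ref]
    es = math.sqrt(sum(x * x for x in s))
    er = math.sqrt(sum(x * x for x in r))
    out = []
    for lag in range(-n + 1, n):
        acc = sum(s[i + lag] * r[i] for i in range(n) if 0 <= i + lag < n)
        out.append((lag, acc / (es * er)))
    return out

def maccc_and_lag(sim, ref):
    """MACCC (maximum absolute cross-correlation coefficient, a shape-error
    index) and the lag at which it occurs (a position-error index)."""
    lag, c = max(norm_xcorr(sim, ref), key=lambda t: abs(t[1]))
    return abs(c), lag

def group_velocity_error(dist, t_ref, lag, dt):
    """Relative group-velocity error implied by the cross-correlation time
    shift lag*dt for a packet travelling distance dist (illustrative
    arrival-time convention)."""
    v_ref = dist / t_ref
    v_sim = dist / (t_ref + lag * dt)
    return abs(v_sim - v_ref) / v_ref
```

For a simulated pulse that is a delayed but otherwise identical copy of the reference, MACCC is close to 1 (shape preserved) while the lag isolates the pure position error.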
Van Niel, Kimberly P; Austin, Mike P
2007-01-01
The effect of digital elevation model (DEM) error on environmental variables, and subsequently on predictive habitat models, has not been explored. Based on an error analysis of a DEM, multiple error realizations of the DEM were created and used to develop both direct and indirect environmental variables for input to predictive habitat models. The study explores the effects of DEM error and the resultant uncertainty of results on typical steps in the modeling procedure for prediction of vegetation species presence/absence. Results indicate that all of these steps and results, including the statistical significance of environmental variables, shapes of species response curves in generalized additive models (GAMs), stepwise model selection, coefficients and standard errors for generalized linear models (GLMs), prediction accuracy (Cohen's kappa and AUC), and spatial extent of predictions, were greatly affected by this type of error. Error in the DEM can affect the reliability of interpretations of model results and level of accuracy in predictions, as well as the spatial extent of the predictions. We suggest that the sensitivity of DEM-derived environmental variables to error in the DEM should be considered before including them in the modeling processes.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel
Evaluation of color error and noise on simulated images
NASA Astrophysics Data System (ADS)
Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle
2010-01-01
The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted Quantum Efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation and signal-to-noise ratio (SNR). After this color correction optimization step, a graphical user interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known; a multi-spectral image, described by the reflectance spectrum of each pixel; or an image taken at a high light level. A validation of the results has been performed with sensors under development at STMicroelectronics. Finally, we present two applications: one based on the trade-off between color saturation and noise when optimizing the CCM, and the other based on demosaicking SNR trade-offs.
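A minimal sketch of the CCM optimization step under an assumed linear model (plain least squares over noiseless color patches; the tool's actual noise-aware optimization and ΔE metric are not reproduced here):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def fit_ccm(sensor, target):
    """Least-squares 3x3 color correction matrix: for each output channel,
    solve the normal equations (S^T S) m = S^T t over all patches."""
    AtA = [[sum(s[i] * s[j] for s in sensor) for j in range(3)] for i in range(3)]
    ccm = []
    for ch in range(3):
        Atb = [sum(s[i] * t[ch] for s, t in zip(sensor, target)) for i in range(3)]
        ccm.append(solve3(AtA, Atb))
    return ccm

def color_error(ccm, s, t):
    """Euclidean color error between the corrected patch CCM*s and target t."""
    corr = [sum(ccm[r][c] * s[c] for c in range(3)) for r in range(3)]
    return sum((a - b) ** 2 for a, b in zip(corr, t)) ** 0.5
```

With noisy patches the same fit trades color accuracy against noise amplification, since large off-diagonal CCM entries boost both saturation and noise.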
Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation
NASA Astrophysics Data System (ADS)
Schiavazzi, Daniele; Marsden, Alison
2015-11-01
Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs, and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
Numerical simulation of propagation of the MHD waves in sunspots
NASA Astrophysics Data System (ADS)
Parchevsky, K.; Kosovichev, A.; Khomenko, E.; Olshevsky, V.; Collados, M.
2010-11-01
We present results of numerical 3D simulations of the propagation of MHD waves in sunspots. We used two self-consistent magnetohydrostatic background models of sunspots. There are two main differences between these models: (i) the topology of the magnetic field and (ii) the dependence of the horizontal profile of the sound speed on depth. The model with a convex shape of the magnetic field lines near the photosphere has non-zero horizontal perturbations of the sound speed down to a depth of 7.5 Mm (deep model). In the model with a concave shape of the magnetic field lines near the photosphere, Δc/c is close to zero everywhere below 2 Mm (shallow model). A strong Alfvén wave is generated at the wave source location in the deep model; this wave is almost unnoticeable in the shallow model. Using a filtering technique, we separated magnetoacoustic and magnetogravity waves. It is shown that inside the sunspot, magnetoacoustic and magnetogravity waves are not spatially separated, unlike the case of the horizontally uniform background model. The sunspot causes anisotropy of the amplitude distribution along the wavefront and changes the shape of the wavefront. The amplitude of the waves is reduced inside the sunspot. This effect is stronger for the magnetogravity waves than for the magnetoacoustic waves, and the shape of the wavefront of the magnetogravity waves is distorted more strongly as well. The deep model produces greater anisotropy for both magnetoacoustic and magnetogravity waves than the shallow model.
Simulation of Magnetic Cloud Erosion and Deformation During Propagation
NASA Astrophysics Data System (ADS)
Manchester, W.; Kozyra, J. U.; Lepri, S. T.; Lavraud, B.; Jackson, B. V.
2013-12-01
We examine a three-dimensional (3-D) numerical magnetohydrodynamic (MHD) simulation describing a very fast interplanetary coronal mass ejection (ICME) propagating from the solar corona to 1 AU. In conjunction with its high speed, the ICME evolves in ways that give it a unique appearance at 1 AU that does not resemble a typical ICME. First, as the ICME decelerates in the solar wind, filament material at the back of the flux rope pushes its way forward through the flux rope. Second, diverging nonradial flows in front of the filament transport the azimuthal flux of the rope to the sides of the ICME. Third, the magnetic flux rope reconnects with the interplanetary magnetic field (IMF). As a consequence of these processes, the flux rope partially unravels and appears to evolve to an entirely open configuration near its nose. At the same time, filament material at the base of the flux rope moves forward and comes into direct contact with the shocked plasma in the CME sheath. We find evidence that such remarkable behavior occurred in a very fast CME that erupted from the Sun on 2005 January 20. In situ observations of this event near 1 AU show very dense, cold material impacting the Earth immediately behind the CME sheath. Charge state analysis shows this dense plasma is filament material, and analysis of SMEI data provides the trajectory of this dense plasma from the Sun. Consistent with the simulation, we find the azimuthal flux (Bz) to be entirely unbalanced, giving the appearance that the flux rope has completely eroded on the anti-sunward side.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations, with magnitudes of applied observation error that vary from zero to twice the estimated realistic error, are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.
NASA Astrophysics Data System (ADS)
Privé, N. C.; Errico, R. M.; Tai, K.-S.
2013-06-01
The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potential δa = e δA_w/(mc²), it then follows that the correlation of the normalized vector-potential errors, ⟨δa_x(z₁) δa_x(z₂)⟩, is given by a double integral, over z′ and z″, of the field-error correlation ⟨δB_w(z′) δB_w(z″)⟩. Throughout the following, terms of order O(z_c/z) are neglected. The y-component of the normalized vector-potential errors is treated similarly.
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
The numerical robustness of four generally applicable, recursive least-squares estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practically interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.
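A minimal sketch of the kind of scheme analyzed (the conventional RLS recursion; the four specific schemes and the round-off propagation analysis itself are not reproduced here):

```python
def rls(xs, ys, dim, lam=1.0, delta=1e4):
    """Conventional recursive least squares.

    theta: parameter estimate; P: inverse information matrix, initialized
    to delta*I. lam is the forgetting factor (1.0 = ordinary least squares).
    """
    theta = [0.0] * dim
    P = [[delta if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for x, y in zip(xs, ys):
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(x[i] * Px[i] for i in range(dim))
        k = [v / denom for v in Px]                      # gain vector
        err = y - sum(theta[i] * x[i] for i in range(dim))
        theta = [theta[i] + k[i] * err for i in range(dim)]
        # Covariance update: the step whose finite-precision behaviour
        # (e.g. loss of symmetry of P) distinguishes the schemes such a
        # round-off study compares.
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return theta
```

On noiseless data from a linear model, the recursion converges to the batch least-squares solution up to the small bias introduced by the finite initial P.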
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2014-05-01
Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed the paradox that in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise the CPD-computed probabilities will decrease. We therefore advise against using the CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95% confidence intervals of the estimates still overlap the true death time interval.
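A minimal sketch of a CPD computation under an assumed Gaussian model for the death time estimate (the concrete density used by Biermann and Potente is not reproduced here; all names and numbers are illustrative):

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cpd(a, b, lo, hi, t_hat, sigma):
    """P(death in [a, b] | death in [lo, hi]) when the temperature-based
    death time estimate is modelled as N(t_hat, sigma^2), truncated to the
    externally known interval [lo, hi] (last seen alive / found dead).
    Times are in hours relative to an arbitrary reference."""
    num = Phi((b - t_hat) / sigma) - Phi((a - t_hat) / sigma)
    den = Phi((hi - t_hat) / sigma) - Phi((lo - t_hat) / sigma)
    return num / den
```

The paradox described above can be reproduced directly: for a no-alibi interval [2.5, 3] at the boundary of a true interval [-3, 3], moving an erroneous estimate t_hat outward past that boundary increases the computed probability rather than decreasing it.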
NASA Astrophysics Data System (ADS)
Arahata, E.; Nikuni, T.
2013-05-01
We study sound propagation in a Bose-condensed gas confined in a highly elongated harmonic trap at finite temperatures. Our analysis is based on the Zaremba-Nikuni-Griffin (ZNG) formalism, which consists of a Gross-Pitaevskii equation for the condensate and a kinetic equation for the thermal cloud. We extend the ZNG formalism to deal with a highly anisotropic trap potential and use it to simulate sound propagation in the two-fluid hydrodynamic regime. We use the trap parameters of the experiment that reported second-sound propagation. Our simulation results show that the propagation of two sound pulses, corresponding to first and second sound, can be observed at intermediate temperatures.
PLASIM: A computer code for simulating charge exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.
1982-01-01
The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, are described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Simulation of reactive nanolaminates using reduced models: II. Normal propagation
Salloum, Maher; Knio, Omar M.
2010-03-15
Transient normal flame propagation in reactive Ni/Al multilayers is analyzed computationally. Two approaches are implemented, based on a generalization of earlier methodology developed for axial propagation, and on an extension of the model reduction formalism introduced in Part I. In both cases, the formulation accommodates non-uniform layering as well as the presence of inert layers. The equations of motion for the reactive system are integrated using a specially tailored integration scheme that combines extended-stability Runge-Kutta-Chebyshev (RKC) integration of the diffusion terms with exact treatment of the chemical source term. The detailed and reduced models are first applied to the analysis of self-propagating fronts in uniformly layered materials. Results indicate that both the front velocities and the ignition threshold are comparable for normal and axial propagation. Attention is then focused on analyzing the effect of a gap composed of inert material on reaction propagation; in particular, the impacts of gap width and thermal conductivity are briefly addressed. Finally, an example is considered illustrating reaction propagation in reactive composites combining regions corresponding to two bilayer widths. This setup is used to analyze the effect of the layering frequency on the velocity of the corresponding reaction fronts. In all cases considered, good agreement is observed between the predictions of the detailed model and the reduced model, which provides further support for adoption of the latter.
NASA Astrophysics Data System (ADS)
Prive, N.; Errico, R. M.; Tai, K.
2012-12-01
A global observing system simulation experiment (OSSE) has been developed at the NASA Global Modeling and Assimilation Office using the Goddard Earth Observing System Model (GEOS-5) forecast model and Gridpoint Statistical Interpolation (GSI) data assimilation. A 13-month integration of the European Centre for Medium-Range Weather Forecasts operational forecast model is used as the Nature Run. Synthetic observations for conventional and radiance data types are interpolated from the Nature Run, with calibrated observation errors added to reproduce realistic statistics of analysis increment and observation innovation. It is found that correlated observation errors are necessary in order to replicate the statistics of analysis increment and observation innovation found with real data. The impact of these observation errors is explored in a series of OSSE experiments in which the magnitude of the applied observation error is varied from zero to double the calibrated values while the observation error covariances of the GSI are held fixed. Increased observation error has a strong effect on the variance of the analysis increment and observation innovation fields, but a much weaker impact on the root mean square (RMS) analysis error. For the 120 hour forecast, only slight degradation of forecast skill in terms of anomaly correlation and RMS forecast error is observed in the midlatitudes, and there is no appreciable impact of observation error on forecast skill in the tropics.
Control and alignment of segmented-mirror telescopes: matrices, modes, and error propagation.
Chanan, Gary; MacMartin, Douglas G; Nelson, Jerry; Mast, Terry
2004-02-20
Starting from the successful Keck telescope design, we construct and analyze the control matrix for the active control system of the primary mirror of a generalized segmented-mirror telescope, with up to 1000 segments and including an alternative sensor geometry to the one used at Keck. In particular we examine the noise propagation of the matrix and its consequences for both seeing-limited and diffraction-limited observations. The associated problem of optical alignment of such a primary mirror is also analyzed in terms of the distinct but related matrices that govern this latter problem.
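The noise-propagation idea can be illustrated with a deliberately simplified, hypothetical 1-D chain of mirror segments (not the actual Keck sensor geometry or control matrix): each edge sensor reads the piston difference between adjacent segments, segment 0 is held fixed, and reconstruction is a cumulative sum, so sensor noise accumulates with distance along the chain.

```python
import math
import random

def propagate(sensor_readings):
    """Reconstruct segment pistons from edge-sensor readings for a 1-D
    chain with segment 0 held fixed: p_k is the sum of the first k readings
    (the explicit inverse of this toy control matrix)."""
    p, out = 0.0, []
    for s in sensor_readings:
        p += s
        out.append(p)
    return out

def noise_multipliers(n, trials=20000, sigma=1.0, seed=1):
    """Monte Carlo estimate of the per-segment noise multiplier
    rms(p_k)/sigma; analytically this is sqrt(k+1) for the chain geometry,
    so noise amplification grows with segment count."""
    rng = random.Random(seed)
    acc = [0.0] * n
    for _ in range(trials):
        p = propagate([rng.gauss(0.0, sigma) for _ in range(n)])
        for k in range(n):
            acc[k] += p[k] * p[k]
    return [math.sqrt(a / trials) / sigma for a in acc]
```

In a real segmented-mirror analysis the same quantity is obtained from the pseudoinverse of the control matrix, but the toy chain already shows the qualitative result: noise multipliers grow with the number of segments between a given segment and the constraint.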
function. Key Words and Phrases: Parametric estimation , exponential families, nonlinear models, nonlinear least squares, neural networks, Monte Carlo simulation, computer intensive statistical methods.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates the spectral coefficients of the modal fields in the waveguides excited by the propagation, using a database of statistical impedance boundary conditions that incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model, based on those statistical parameters, from which predictions of communications capability may be made.
Molecular dynamics simulation of the burning front propagation in PETN
NASA Astrophysics Data System (ADS)
Yanilkin, A. V.; Sergeev, O. V.
2014-05-01
One of the models of detonation development in condensed explosives under shock loading is the concept of "hot spots." According to this model, the reaction initially starts at various defects and inhomogeneities, where energy is localized during shock wave propagation. In such a region the reaction may start, and a heat flux sufficient for the ignition of the adjacent layers of matter may form. If the reaction propagates fast enough, the merging of the burning fronts from several hot spots may lead to detonation. There is therefore interest in determining the burning propagation rate from a hot spot under various conditions. In this work we investigate the propagation of a plane burning front from an initially heated layer in a PETN single crystal using the molecular dynamics method with the reactive force field ReaxFF. The burning rate depends on the direction in the crystal. The kinetics of the chemical transformations is considered. The dependence of the burning front propagation rate along the [100] direction on the external pressure is calculated in the range from normal pressure to 30 GPa; it grows approximately linearly over this range, from 50 m/s to 320 m/s. The results are compared with data from experiments and quantum chemical calculations.
Molecular dynamics simulation of the burning front propagation in PETN
NASA Astrophysics Data System (ADS)
Yanilkin, Alexey; Sergeev, Oleg; Computational materials science Team
2013-06-01
One of the models of detonation development in condensed explosives under shock loading is the concept of "hot spots." According to this model, the reaction initially starts at various defects and inhomogeneities, where energy is localized during shock wave propagation. In such a region an exothermic reaction may start, with heat yield sufficient for the ignition of the adjacent layers of matter. If the reaction propagates fast enough, the merging of the burning fronts from several hot spots may lead to detonation. There is therefore interest in determining the burning propagation rate from a hot spot under various conditions. In this work we investigate the propagation of a plane burning front from an initially heated layer in a PETN single crystal using molecular dynamics with the reactive force field ReaxFF. It is shown that the burning rate depends on the direction in the crystal. The kinetics of the chemical transformations is considered, and the main reaction paths are determined. The dependence of the burning front propagation rate on the external pressure is calculated in the range from normal pressure to 30 GPa; it grows approximately linearly over this range, from 50 m/s to 320 m/s. The results are compared with data from experiments and quantum chemical calculations.
Statistical error in simulations of Poisson processes: Example of diffusion in solids
NASA Astrophysics Data System (ADS)
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in the ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method; it is valid for the simulation of Poisson processes in general. The analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
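The 1/√N character of such statistical errors is easy to demonstrate numerically. The sketch below is not the paper's formula; the rate, observation time, and run count are made up. It estimates the rate of a Poisson process from single-run event counts and compares the spread over many runs with the analytical standard error √(λ/T):

```python
import numpy as np

# Illustrative sketch: for a Poisson process with rate lam observed for time T,
# a single run records N ~ Poisson(lam*T) events, so the estimator
# lam_hat = N/T has standard error sqrt(lam/T) = sqrt(N)/T.
rng = np.random.default_rng(0)
lam, T, runs = 5.0, 100.0, 20000

counts = rng.poisson(lam * T, size=runs)   # events per independent run
lam_hat = counts / T                        # rate estimate from each run

empirical_err = lam_hat.std(ddof=1)         # spread over many runs
analytical_err = np.sqrt(lam / T)           # predicted statistical error

print(empirical_err, analytical_err)
```

With λT = 500 events per run, the empirical spread matches √(λ/T) closely, illustrating why poor event statistics translate directly into conductivity uncertainty.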
Error propagation in the numerical solutions of the differential equations of orbital mechanics
NASA Technical Reports Server (NTRS)
Bond, V. R.
1982-01-01
The relationship between the eigenvalues of the linearized differential equations of orbital mechanics and the stability characteristics of numerical methods is presented. It is shown that the Cowell and Encke formulations, and the Encke formulation with an independent variable related to the eccentric anomaly, all have a real positive eigenvalue when linearized about the initial conditions. The real positive eigenvalue causes an amplification of the error of the solution when used in conjunction with a numerical integration method. In contrast, an element formulation has zero eigenvalues and is numerically stable.
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Sun, Tong; Fan, Junyi; Goodchild, Michael F.; Shi, Wenzhong
2013-04-01
This paper presents a new error band model, the statistical simulation error model, for describing the positional error of line features by incorporating both analytical and simulation methods. In this study, line features include line segments, polylines, and polygons. In existing error models, an infinite number of points on the line segment are considered as the stochastic variables and the error band of a line segment is obtained from the union of all intermediate points on the line segment, while that of a polyline/polygon is obtained from the union of all error bands of the composite line segments. Our proposed error band model, however, regards the entire line feature (line segment/polyline/polygon) as the stochastic variable, instead of the infinite number of points on the line segment. Based solely on the statistical characteristics of the endpoints of the line feature and the predefined confidence level, our proposed error model is created by a simulation method that integrates a population of line segments/polylines/polygons computed from the entire solution set of the error model's defining equation. A comprehensive comparison of the proposed and existing error band models is carried out through both simulated and practical experiments. The experimental results show the following: (1) For line segments, the proposed standard statistically simulated error band matches that of existing error models (for example, the G-band). Further, it is found that a scaled G-band with a specific scale factor (e.g., √(χ₄²(α))) matches the proposed statistically simulated error band with probability (1 - α) × 100%. (2) For polylines and polygons, if we correlate the errors of all the endpoints of the polyline/polygon, there is a marked difference between the proposed statistically simulated error band and existing error bands. The reason for the difference is explained as follows. The existing error model defines the error band of a polyline/polygon as the union of
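The simulation idea of treating the whole line feature as the stochastic variable can be sketched as follows. The endpoint coordinates, error sizes, and the assumption of independent isotropic endpoint errors are all illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical sketch: sample both endpoints of a segment from their error
# distributions (here independent and isotropic) and look at the spread of an
# intermediate point, rather than treating each point separately.
rng = np.random.default_rng(1)
sigma, n = 0.5, 100000
p0, p1 = np.array([0.0, 0.0]), np.array([10.0, 0.0])

e0 = rng.normal(0, sigma, (n, 2))          # endpoint errors
e1 = rng.normal(0, sigma, (n, 2))
t = 0.5                                     # midpoint of the segment
pts = (1 - t) * (p0 + e0) + t * (p1 + e1)   # perturbed intermediate points

# Linear error propagation predicts var = ((1-t)^2 + t^2) * sigma^2
# per coordinate, which the sampled population should reproduce.
print(pts[:, 1].var(), ((1 - t) ** 2 + t ** 2) * sigma ** 2)
```

The sampled variance at the midpoint matches the analytical value, which is the kind of agreement the paper reports between the simulated band and the G-band for single segments.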
Numerical simulation of impurity propagation in sea channels
NASA Astrophysics Data System (ADS)
Cherniy, Dmitro; Dovgiy, Stanislav; Gourjii, Alexandre
2009-11-01
The building of the dike (2003) in the Kerch channel (between the Black and Azov seas) from the Taman peninsula is an example of technological influence on the fluid flow and hydrological conditions in the channel. The flow velocity in the fairway region has doubled, which results in the appearance of dangerous tendencies in the hydrology of the Kerch channel. The flow near the coastal edges generates large-scale vortices, which move along the channel. The shipwreck (November 11, 2007) of the tanker "Volganeft-139" in the Kerch channel resulted in an ecological catastrophe in the region: more than 1300 tons of petroleum appeared on the sea surface. The intensive vortices formed here involve part of the impurity region in their own motion; the boundary of the impurity region is deformed, stretched, and covers the central part of the channel. The adaptation of the vortex singularity method to impurity propagation in the Kerch channel and the analysis of the pollution propagation are the main goals of the report.
Investigation of Radar Propagation in Buildings: A 10 Billion Element Cartesian-Mesh FETD Simulation
Stowell, M L; Fasenfest, B J; White, D A
2008-01-14
In this paper, large-scale full-wave simulations are performed to investigate radar wave propagation inside buildings. In principle, a radar system combined with sophisticated numerical methods for inverse problems can be used to determine the internal structure of a building. The composition of the walls (cinder block, re-bar) may affect the propagation of the radar waves in a complicated manner. In order to provide a benchmark solution of radar propagation in buildings, including the effects of typical cinder block and re-bar, we performed large-scale full-wave simulations using a Finite Element Time Domain (FETD) method. This particular FETD implementation is tuned for the special case of an orthogonal Cartesian mesh and hence resembles FDTD in accuracy and efficiency. The method was implemented on a general-purpose massively parallel computer. In this paper we briefly describe the radar propagation problem and the FETD implementation, and we present results of simulations that used over 10 billion elements.
End-to-End Network Simulation Using a Site-Specific Radio Wave Propagation Model
Djouadi, Seddik M; Kuruganti, Phani Teja; Nutaro, James J
2013-01-01
The performance of systems that rely on a wireless network depends on the propagation environment in which that network operates. To predict how these systems and their supporting networks will perform, simulations must take into consideration the propagation environment and how it affects the performance of the wireless network. Network simulators typically use empirical models of the propagation environment. However, these models are not intended to, and cannot, predict how a wireless system will perform in a specific location, e.g., in the center of a particular city or the interior of a specific manufacturing facility. In this paper, we demonstrate how a site-specific propagation model and the NS3 simulator can be used to predict the end-to-end performance of a wireless network.
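For contrast, a minimal example of the kind of empirical model the abstract refers to is the log-distance path-loss model; the exponent and reference distance below are illustrative values, not NS3 defaults:

```python
import math

# Log-distance path-loss model, a typical empirical model used by network
# simulators: loss at reference distance d0 plus a 10*n*log10(d/d0) term.
# The exponent n = 2.7 is a made-up "urban-ish" value; n = 2 recovers
# free-space path loss. Such a model knows nothing about a specific site.
def path_loss_db(d_m, f_hz=2.4e9, n=2.7, d0=1.0):
    c = 299_792_458.0
    fspl_d0 = 20 * math.log10(4 * math.pi * d0 * f_hz / c)  # free-space loss at d0
    return fspl_d0 + 10 * n * math.log10(d_m / d0)          # log-distance term

print(path_loss_db(10.0), path_loss_db(100.0))
```

A site-specific model replaces the single exponent with ray-traced or measured propagation for the actual geometry, which is precisely what this kind of formula cannot capture.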
Coherent-wave Monte Carlo method for simulating light propagation in tissue
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, can simulate only the propagation of light averaged over the ensemble of turbid medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. The method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and compare it with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.
Revised error propagation of 40Ar/39Ar data, including covariances
NASA Astrophysics Data System (ADS)
Vermeesch, Pieter
2015-12-01
The main advantage of the 40Ar/39Ar method over conventional K-Ar dating is that it does not depend on any absolute abundance or concentration measurements, but only uses the relative ratios between five isotopes of the same element (argon), which can be measured with great precision on a noble gas mass spectrometer. The relative abundances of the argon isotopes are subject to a constant-sum constraint, which imposes a covariant structure on the data: the relative amount of any of the five isotopes can always be obtained from that of the other four. Thus, the 40Ar/39Ar method is a classic example of a 'compositional data problem'. In addition to the constant-sum constraint, covariances are introduced by a host of other processes, including data acquisition, blank correction, detector calibration, mass fractionation, decay correction, interference correction, atmospheric argon correction, interpolation of the irradiation parameter, and age calculation. The myriad of correlated errors arising during the data reduction is best handled by casting the 40Ar/39Ar data reduction protocol in matrix form. The completely revised workflow presented in this paper is implemented in a new software platform, Ar-Ar_Redux, which takes raw mass spectrometer data as input and generates accurate 40Ar/39Ar ages and their (co-)variances as output. Ar-Ar_Redux accounts for all sources of analytical uncertainty, including those associated with decay constants and the air ratio. Knowing the covariance matrix of the ages removes the need to consider 'internal' and 'external' uncertainties separately when calculating (weighted) mean ages. Ar-Ar_Redux is built on the same principles as its sibling program in the U-Pb community (U-Pb_Redux), thus improving the intercomparability of the two methods, with tangible benefits to the accuracy of the geologic time scale. The program can be downloaded free of charge from
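The matrix form advocated here is ordinary linear error propagation, var(f) = J Σ Jᵀ. A generic sketch with made-up isotope signals and covariances, not the Ar-Ar_Redux protocol:

```python
import numpy as np

# Generic matrix-form linear error propagation for a ratio R = x0/x1 of two
# correlated measurements. The signal values and covariance matrix below are
# illustrative, not real argon data.
x = np.array([100.0, 20.0])                  # e.g. two isotope signals
S = np.array([[1.0, 0.05],                   # covariance (correlated errors)
              [0.05, 0.04]])

R = x[0] / x[1]                              # the ratio of interest
J = np.array([1 / x[1], -x[0] / x[1] ** 2])  # Jacobian of R wrt (x0, x1)
var_R = J @ S @ J                            # propagated variance, J S J^T

print(R, var_R)
```

Note the off-diagonal covariance term: ignoring it here would overestimate var(R), which is why a workflow that tracks the full covariance structure matters.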
Whistler propagation in ionospheric density ducts: Simulations and DEMETER observations
NASA Astrophysics Data System (ADS)
Woodroffe, J. R.; Streltsov, A. V.; Vartanyan, A.; Milikh, G. M.
2013-11-01
On 16 October 2009, the Detection of Electromagnetic Emissions Transmitted from Earthquake Regions (DEMETER) satellite observed VLF whistler wave activity coincident with an ionospheric heating experiment conducted at HAARP. At the same time, density measurements by DEMETER indicated the presence of multiple field-aligned enhancements. Using an electron MHD model, we show that the distribution of VLF power observed by DEMETER is consistent with the propagation of whistlers from the heating region inside the observed density enhancements. We also discuss other interesting features of this event, including coupling of the lower hybrid and whistler modes, whistler trapping in artificial density ducts, and the interference of whistler waves from two adjacent ducts.
Fencil, L E; Metz, C E
1990-01-01
We are developing a technique for determination of the three-dimensional (3-D) structure of vascular objects from two radiographic projection images acquired at arbitrary and unknown relative orientations. No separate calibration steps are required with this method, which exploits an inherent redundancy of biplane imaging to extract the imaging geometry as well as the 3-D locations of eight or more object points. The theoretical basis of this technique has been described previously. In this paper, we review the method from the perspective of linear algebra and describe an improvement, not heretofore reported, that reduces the method's sensitivity to experimental error. We then examine the feasibility and inherent accuracy of this approach by computer simulation of biplane imaging experiments. The precision with which 3-D object structure may be retrieved, together with the dependence of precision on the actual imaging geometry and errors in various measured quantities, is studied in detail. Our simulation studies show that the method is not only feasible but potentially accurate, typically determining object-point configurations with root-mean-square (RMS) error on the order of 1 to 2 mm. The method is also quite fast, requiring approximately one second of CPU time on a VAX 11/750 computer (0.6 MIPS).
Simulation of Ductile Crack Propagation for Pipe Structures Using X-FEM
NASA Astrophysics Data System (ADS)
Miura, Naoki; Nagashima, Toshio
The conventional finite element method is routinely used in the flaw evaluation of pipe structures to investigate the fitness-for-service of power plant components; however, it is generally time consuming to build a model of a specific crack configuration. The case of a propagating surface crack is more demanding still, since the crack propagation behavior along the crack front is implicitly affected by the distribution of the crack driving force along the crack front. The authors developed a system for crack propagation analysis using the three-dimensional elastic-plastic extended finite element method. It was applied to simulate ductile propagation of circumferential surface cracks in pipe structures and could realize the simultaneous calculation of the J-integral and the consequent ductile crack propagation. Both the crack extension and the possible change of crack shape were evaluated by the developed system.
Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2011-01-01
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
NASA Astrophysics Data System (ADS)
Mizokami, Naoya; Nakahata, Kazuyuki; Ogi, Keiji; Yamawaki, Hisashi; Shiwa, Mitsuharu
2017-02-01
The use of fiber reinforced plastics (FRPs) as structural components has significantly increased in recent years. FRPs are made of stacks of plies, each of which is reinforced by fibers. When modeling ultrasonic wave propagation in FRPs, it is important to introduce three-dimensional mesoscopic and microscopic structures to account for the anisotropy and heterogeneity caused by fiber orientation and the lay-up of laminates. In this study, a finite element method using an image-based modeling is applied to simulation of ultrasonic wave propagation in a carbon FRP (CFRP). Here, the elastic stiffness of a single ply is determined using a homogenization method, where a CFRP microstructure is incorporated on the basis of a two-scale asymptotic expansion. The wave propagation in a CFRP specimen composed of unidirectionally aligned fibers is calculated, and the simulation results are compared to visualization results obtained for ultrasonic wave propagation using a laser scanning device.
ITER Test Blanket Module Error Field Simulation Experiments
NASA Astrophysics Data System (ADS)
Schaffer, M. J.
2010-11-01
Recent experiments at DIII-D used an active-coil mock-up to investigate effects of magnetic error fields similar to those expected from two ferromagnetic Test Blanket Modules (TBMs) in one ITER equatorial port. The largest and most prevalent observed effect was plasma toroidal rotation slowing across the entire radial profile, up to 60% in H-mode when the mock-up local ripple at the plasma was ~4 times the local ripple expected in front of ITER TBMs. Analysis showed the slowing to be consistent with non-resonant braking by the mock-up field. There was no evidence of strong electromagnetic braking by resonant harmonics. These results are consistent with the near absence of resonant helical harmonics in the TBM field. Global particle and energy confinement in H-mode decreased by <20% for the maximum mock-up ripple, but by <5% at the local ripple expected in ITER. These confinement reductions may be linked with the large velocity reductions. TBM field effects were small in L-mode but increased with plasma beta. The L-H power threshold was unaffected within error bars. The mock-up field increased plasma sensitivity to mode locking by a known n=1 test field (n = toroidal harmonic number). In H-mode the increased locking sensitivity came from the TBM torque slowing plasma rotation. At low beta, locked-mode tolerance was fully recovered by re-optimizing the conventional DIII-D "I-coils" empirical compensation of n=1 errors in the presence of the TBM mock-up field. Empirical error compensation in H-mode should be addressed in future experiments. Global loss of injected neutral beam fast ions was within error bars, but 1 MeV fusion triton loss may have increased. The many DIII-D mock-up results provide important benchmarks for models needed to predict the effects of TBMs in ITER.
Accumulation of errors in numerical simulations of chemically reacting gas dynamics
NASA Astrophysics Data System (ADS)
Smirnov, N. N.; Betelin, V. B.; Nikitin, V. F.; Stamov, L. I.; Altoukhov, D. I.
2015-12-01
The aim of the present study is to investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation or deflagration combustion of gas mixtures in rocket engines. Computational models for parallel computing on supercomputers incorporating CPU and GPU units were tested and verified. The influence of computational grid size on simulation precision and computational speed was investigated, as was the accumulation of errors for simulations employing different computation strategies.
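A toy picture of stochastic error accumulation: if each time step injects an independent error of size ε, the accumulated error grows like ε√n rather than εn. A minimal demonstration (the error model and magnitudes are illustrative, not taken from the study):

```python
import numpy as np

# Toy model of stochastic error accumulation over a long time integration:
# each of nsteps steps injects an independent error of typical size eps.
# Independent errors partially cancel, so the final error behaves like a
# random walk with standard deviation eps * sqrt(nsteps).
rng = np.random.default_rng(2)
eps, nsteps, runs = 1e-6, 10000, 2000

steps = rng.normal(0, eps, (runs, nsteps))   # per-step errors for many runs
accumulated = steps.sum(axis=1)              # final accumulated error per run

print(accumulated.std(), eps * np.sqrt(nsteps))
```

Systematic (correlated) errors, by contrast, would grow linearly with the step count, which is why distinguishing the two regimes matters when verifying long combustion simulations.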
The Simulation of Off-Axis Laser Propagation Using HELEEOS
2006-03-01
Lasers have many different uses and can be found in much of today's new technology. They are used in DVD players, CD players, builders' leveling
Simulation-based reasoning about the physical propagation of fault effects
NASA Technical Reports Server (NTRS)
Feyock, Stefan; Li, Dalu
1990-01-01
The research described deals with the effects of faults on complex physical systems, with particular emphasis on aircraft and spacecraft systems. Given that a malfunction has occurred and been diagnosed, the goal is to determine how that fault will propagate to other subsystems, and what the effects will be on vehicle functionality. In particular, the use of qualitative spatial simulation to determine the physical propagation of fault effects in 3-D space is described.
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular-bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples were utilized as trabecular-bone-mimicking phantoms in ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include the energy-dissipative mechanisms of ultrasonic attenuation; however, as expected, they simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular-bone-mimicking structures.
Batista, R. Alves; Vliet, A. van; Boncioli, D.; Di Matteo, A.; Walz, D.
2015-10-01
The results of simulations of extragalactic propagation of ultra-high energy cosmic rays (UHECRs) have intrinsic uncertainties due to poorly known physical quantities and approximations used in the codes. We quantify the uncertainties in the simulated UHECR spectrum and composition due to different models of extragalactic background light (EBL), different photodisintegration setups, approximations concerning photopion production and the use of different simulation codes. We discuss the results for several representative source scenarios with proton, nitrogen or iron at injection. For this purpose we used SimProp and CRPropa, two publicly available codes for Monte Carlo simulations of UHECR propagation. CRPropa is a detailed and extensive simulation code, while SimProp aims to achieve acceptable results using a simpler code. We show that especially the choices for the EBL model and the photodisintegration setup can have a considerable impact on the simulated UHECR spectrum and composition.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
NASA Astrophysics Data System (ADS)
Ferreira, F.; Gendron, E.; Rousset, G.; Gratadour, D.
2016-07-01
The future European Extremely Large Telescope (E-ELT) adaptive optics (AO) systems will aim at wide-field correction and large sky coverage. Their performance will be improved by using post-processing techniques such as point spread function (PSF) deconvolution. The PSF estimation involves characterization of the different error sources in the AO system. Such error contributors are difficult to estimate, and simulation tools are a good way to do so. Within COMPASS (COMputing Platform for Adaptive opticS Systems), an end-to-end simulation tool using GPU (Graphics Processing Unit) acceleration, we have developed an estimation tool that provides a comprehensive error budget from the outputs of a single simulation run.
FDTD Simulation of Terahertz Wave Propagation Through a Dusty Plasma
NASA Astrophysics Data System (ADS)
Wang, Maoyan; Zhang, Meng; Li, Guiping; Jiang, Baojun; Zhang, Xiaochuan; Xu, Jun
2016-08-01
The frequency-dependent permittivity of dusty plasmas is obtained by introducing the charging response factor and the charge relaxation rate of airborne particles. The field equations that describe the propagation of terahertz (THz) waves in a dusty plasma sheath are derived and discretized on the basis of the auxiliary differential equation (ADE) formulation of the finite-difference time-domain (FDTD) method. The accuracy of the ADE-FDTD method is validated against numerical solutions from the literature. The reflection property of the metal aluminum interlayer of the sheath at THz frequencies is discussed. The effects of the thickness, effective collision frequency, airborne particle density, and charge relaxation rate of the airborne particles on the electromagnetic properties of terahertz waves passing through a dusty plasma slab are investigated. Finally, some potential applications of terahertz waves in information and communication are analyzed. Supported by the National Natural Science Foundation of China (Nos. 41104097, 11504252, 61201007, 41304119), the Fundamental Research Funds for the Central Universities (Nos. ZYGX2015J039, ZYGX2015J041), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20120185120012).
Realization of State-Space Models for Wave Propagation Simulations
2012-01-01
Verification and Performance of Superstable Model: The second FDTD analysis was to verify that a simulation using the superstable model matches one using the original model, so that the turquoise and red lines are all that are visible. The implication is that model-order reduction can serve a useful purpose when
Monte Carlo simulations of intensity profiles for energetic particle propagation
NASA Astrophysics Data System (ADS)
Tautz, R. C.; Bolte, J.; Shalchi, A.
2016-02-01
Aims: Numerical test-particle simulations are a reliable and frequently used tool for testing analytical transport theories and predicting mean-free paths. The comparison between solutions of the diffusion equation and the particle flux is used to critically judge the applicability of diffusion to the stochastic transport of energetic particles in magnetized turbulence. Methods: A Monte Carlo simulation code is extended to allow for the generation of intensity profiles and anisotropy-time profiles. Because of the relatively low number density of computational particles, a kernel function has to be used to describe the spatial extent of each particle. Results: The obtained intensity profiles are interpreted as solutions of the diffusion equation by inserting the diffusion coefficients that have been directly determined from the mean-square displacements. The comparison shows that the time dependence of the diffusion coefficients needs to be considered, in particular the initial ballistic phase and the often subdiffusive perpendicular coefficient. Conclusions: It is argued that the perpendicular component of the distribution function is essential if agreement between the diffusion solution and the simulated flux is to be obtained. In addition, time-dependent diffusion can provide a better description than the classic diffusion equation only after the initial ballistic phase.
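The step of extracting a diffusion coefficient from mean-square displacements and checking it against the diffusion picture can be illustrated with a one-dimensional random walk; all parameters below are made up, and the sketch ignores the ballistic phase and kernel-function machinery discussed above:

```python
import numpy as np

# Extract a diffusion coefficient from particle mean-square displacements.
# For a 1-D unbiased random walk with step length v*dt per step of duration
# dt, diffusion theory gives <x^2> = 2 D t with D = (v*dt)^2 / (2*dt).
rng = np.random.default_rng(3)
n, nsteps, dt, v = 5000, 2000, 1.0, 0.1

steps = rng.choice([-1.0, 1.0], (n, nsteps)) * v * dt  # random step directions
x = steps.cumsum(axis=1)                                # trajectories

msd = (x[:, -1] ** 2).mean()        # mean-square displacement at final time
D = msd / (2 * nsteps * dt)         # invert <x^2> = 2 D t

print(D, (v * dt) ** 2 / (2 * dt))  # measured vs expected D
```

In the full problem the comparison is subtler: the diffusion coefficients are time dependent at early (ballistic) times, which is exactly why the abstract argues that the classic diffusion equation alone is insufficient.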
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
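As a sanity check of the Monte Carlo photon-packet approach, an absorption-only medium should reproduce the Beer-Lambert law. This bare-bones sketch (no scattering, no skin geometry, illustrative coefficients) is far simpler than the model described above:

```python
import numpy as np

# Minimal photon-packet Monte Carlo in a purely absorbing slab: each packet
# travels an exponentially distributed free path before being absorbed, so
# the fraction reaching depth d should follow Beer-Lambert, exp(-mu_a * d).
rng = np.random.default_rng(4)
mu_a, d, n = 1.0, 2.0, 200000          # absorption coeff [1/mm], depth [mm]

free_paths = rng.exponential(1 / mu_a, n)  # distance to absorption event
transmitted = (free_paths > d).mean()      # fraction surviving to depth d

print(transmitted, np.exp(-mu_a * d))
```

Realistic tissue models add scattering (with a phase function), layered optical properties, and the 3D surface geometry, but each added mechanism can be validated against analytical limits like this one.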
Simulations of Wave Propagation in the Jovian Atmosphere after SL9 Impact Events
NASA Astrophysics Data System (ADS)
Pond, Jarrad W.; Palotai, C.; Korycansky, D.; Harrington, J.
2013-10-01
Our previous numerical investigations into Jovian impacts, including the Shoemaker-Levy 9 (SL9) event (Korycansky et al. 2006 ApJ 646, 642; Palotai et al. 2011 ApJ 731, 3), the 2009 bolide (Pond et al. 2012 ApJ 745, 113), and the ephemeral flashes caused by smaller impactors in 2010 and 2012 (Hueso et al. 2013; submitted to A&A), have covered only up to approximately 3 to 30 seconds after impact. Here, we present further SL9 impact simulations extending to minutes after collision with Jupiter's atmosphere, with a focus on the propagation of shock waves generated as a result of the impact events. Using a similar yet more efficient remapping method than previously presented (Pond et al. 2012; DPS 2012), we move our simulation results onto a larger computational grid, conserving quantities with minimal error. The Jovian atmosphere is extended as needed to accommodate the evolution of the features of the impact event. We restart the simulation, allowing the impact event to continue to progress to greater spatial extents and for longer times, but at lower resolutions. This remap-restart process can be implemented multiple times to achieve the spatial and temporal scales needed to investigate the observable effects of waves generated by the deposition of energy and momentum into the Jovian atmosphere by an SL9-like impactor. As before, we use the three-dimensional, parallel hydrodynamics code ZEUS-MP 2 (Hayes et al. 2006 ApJS 165, 188) to conduct our simulations. Wave characteristics are tracked throughout these simulations. Of particular interest are the wave speeds and wave positions in the atmosphere as a function of time. These properties are compared to the characteristics of the HST rings to see whether shock wave behavior within one hour of impact is consistent with the waves observed at one hour post-impact and beyond (Hammel et al. 1995 Science 267, 1288). This research was supported by National Science Foundation Grant AST-1109729 and NASA Planetary Atmospheres Program Grant
Simulation techniques for estimating error in the classification of normal patterns
NASA Technical Reports Server (NTRS)
Whitsitt, S. J.; Landgrebe, D. A.
1974-01-01
Methods of efficiently generating and classifying samples with specified multivariate normal distributions are discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing the error and a separability measure for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
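The relationship between the estimated error and the Chernoff bound can be illustrated for two univariate normal patterns: the Monte Carlo error estimate should fall below the Chernoff (s = 1/2, i.e. Bhattacharyya) bound. The class means, variance, and sample counts are illustrative:

```python
import numpy as np

# Monte Carlo estimate of the Bayes error for two equal-variance normal
# classes with equal priors, compared against the Bhattacharyya bound
# (the Chernoff bound evaluated at s = 1/2):
#   err <= 0.5 * exp(-(mu1 - mu0)^2 / (8 * sigma^2))
rng = np.random.default_rng(5)
mu0, mu1, sigma, n = 0.0, 2.0, 1.0, 200000

x0 = rng.normal(mu0, sigma, n)            # samples from class 0
x1 = rng.normal(mu1, sigma, n)            # samples from class 1
thr = (mu0 + mu1) / 2                     # Bayes threshold for equal priors
err = 0.5 * (x0 > thr).mean() + 0.5 * (x1 < thr).mean()

bhatt = 0.5 * np.exp(-((mu1 - mu0) ** 2) / (8 * sigma ** 2))
print(err, bhatt)
```

As expected, the bound is loose (here roughly twice the simulated error), which is the kind of error-versus-bound relationship the study displays.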
NASA Astrophysics Data System (ADS)
Jiang, Xianan
2017-01-01
As a prominent climate variability mode with widespread influences on global weather extremes, the Madden-Julian Oscillation (MJO) remains poorly represented in the latest generation of general circulation models (GCMs), with a particular challenge in simulating its eastward propagating convective signals. In this study, by analyzing multimodel simulations from a recent global MJO model evaluation project, an effort is made to identify key processes for the eastward propagation of the MJO through analyses of moisture entropy (ME) processes under a "moisture mode" framework for the MJO. The column-integrated horizontal ME advection is found to play a critical role for the eastward propagation of the MJO in both observations and good MJO models, with a primary contribution through advection of the lower tropospheric seasonal mean ME by the MJO anomalous circulations. By contrast, the horizontal ME advection effect for the eastward propagation is greatly underestimated in poor MJO GCMs, due to model deficiencies in simulating both the seasonal mean ME pattern and MJO circulations, leading to a largely stationary MJO mode in these GCMs. These results thus pinpoint an important guidance toward improved representation of the MJO in climate and weather forecast models. While this study mainly focuses on fundamental physics for the MJO propagation over the Indian Ocean, complex influences by the Maritime Continent on the MJO and also ME processes associated with the MJO over the western Pacific warrant further investigations.
Computer simulation of crack propagation in ductile materials under biaxial dynamic loads
Chen, Y.M.
1980-07-29
The finite-difference computer program HEMP is used to simulate the crack-propagation phenomenon in two-dimensional ductile materials under truly dynamic biaxial loads. A cumulative strain-damage criterion for the initiation of ductile fracture is used. To simulate crack propagation numerically, the method of equivalent free-surface boundary conditions and the method of artificial velocity are used in the computation. Centrally cracked rectangular aluminum bars subjected to constant-velocity biaxial loads at the edges are considered. Tensile and compressive loads in the direction of crack length are found, respectively, to increase and decrease directional instability in crack propagation, where the directional instability is characterized by branching or bifurcation.
Freudenthal, Daniel; Pine, Julian M; Jones, Gary; Gobet, Fernand
2015-10-01
One of the most striking features of children's early multi-word speech is their tendency to produce non-finite verb forms in contexts in which a finite verb form is required (Optional Infinitive [OI] errors, Wexler, 1994). MOSAIC is a computational model of language learning that simulates developmental changes in the rate of OI errors across several different languages by learning compound finite constructions from the right edge of the utterance (Freudenthal, Pine, Aguado-Orea, & Gobet, 2007; Freudenthal, Pine, & Gobet, 2006a, 2009). However, MOSAIC currently only simulates the pattern of OI errors in declaratives, and there are important differences in the cross-linguistic patterning of OI errors in declaratives and Wh- questions. In the present study, we describe a new version of MOSAIC that learns from both the right and left edges of the utterance. Our simulations demonstrate that this new version of the model is able to capture the cross-linguistic patterning of OI errors in declaratives in English, Dutch, German and Spanish by learning from declarative input, and the cross-linguistic patterning of OI errors in Wh- questions in English, German and Spanish by learning from interrogative input. These results show that MOSAIC is able to provide an integrated account of the cross-linguistic patterning of OI errors in declaratives and Wh- questions, and provide further support for the view, instantiated in MOSAIC, that OI errors are compound-finite utterances with missing modals or auxiliaries.
Pointing-error simulations of the DSS-13 antenna due to wind disturbances
NASA Technical Reports Server (NTRS)
Gawronski, W.; Bienkiewicz, B.; Hill, R. E.
1992-01-01
Accurate spacecraft tracking by the NASA Deep Space Network (DSN) antennas must be assured during changing weather conditions. Wind disturbances are the main source of tracking errors. The development of a wind-force model and simulations of wind-induced pointing errors of DSN antennas are presented. The antenna model includes the antenna structure, the elevation and azimuth servos, and the tracking controller. Simulation results show that pointing errors due to wind gusts are of the same order as errors due to static wind pressure and that these errors (similar to those of static wind pressure) satisfy the velocity quadratic law. The presented methodology is used for wind-disturbance estimation and for the design of an antenna controller with wind-disturbance rejection properties.
Fully kinetic particle simulations of high pressure streamer propagation
NASA Astrophysics Data System (ADS)
Rose, David; Welch, Dale; Thoma, Carsten; Clark, Robert
2012-10-01
Streamer and leader formation in high pressure devices is a dynamic process involving a hierarchy of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. We have developed 2D and 3D fully electromagnetic, implicit particle-in-cell simulation models of gas breakdown leading to streamer formation under DC and RF fields. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm [D. R. Welch, et al., J. Comp. Phys. 227, 143 (2007)] that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge. These models are being applied to the analysis of high-pressure gas switches [D. V. Rose, et al., Phys. Plasmas 18, 093501 (2011)] and gas-filled RF accelerator cavities [D. V. Rose, et al., Proc. IPAC12, to appear].
GPU simulation of nonlinear propagation of dual band ultrasound pulse complexes
Kvam, Johannes Angelsen, Bjørn A. J.; Elster, Anne C.
2015-10-28
In a new method of ultrasound imaging, called SURF imaging, dual band pulse complexes composed of overlapping low frequency (LF) and high frequency (HF) pulses are transmitted, where the frequency ratio LF:HF ∼ 1 : 20, and the relative bandwidth of both pulses is ∼ 50 − 70%. The LF pulse length is hence ∼ 20 times the HF pulse length. The LF pulse is used to nonlinearly manipulate the material elasticity observed by the co-propagating HF pulse. This produces nonlinear interaction effects that give more information on the propagation of the pulse complex. Due to the large difference in frequency and pulse length between the LF and the HF pulses, we have developed a dual level simulation where the LF pulse propagation is first simulated independent of the HF pulse, using a temporal sampling frequency matched to the LF pulse. A separate equation for the HF pulse is developed, where the presimulated LF pulse modifies the propagation velocity. The equations are adapted to parallel processing in a GPU, where nonlinear simulations of a typical 10 MHz HF beam down to 40 mm complete in ∼2 s on a standard GPU. This simulation is hence very useful for studying the manipulation effect of the LF pulse on the HF pulse.
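The dual-level idea, a pre-simulated LF field that modulates the propagation speed seen by the HF pulse, can be caricatured in one dimension with an upwind advection scheme. The sound speed, modulation depth, and grid below are assumptions for illustration, not the SURF simulator's equations:

```python
import numpy as np

# 1D caricature: the HF envelope u advects at a speed modulated by a
# frozen (here: analytic) LF pressure field, c(x) = c0 * (1 + beta * p_LF(x)).
# c0, beta, and the LF profile are illustrative assumptions.
c0 = 1540.0                  # m/s, nominal sound speed in tissue
beta = 5e-2                  # modulation depth (assumed)
nx, dx = 2000, 1e-5          # grid: 20 mm at 10 um spacing
x = np.arange(nx) * dx

p_lf = np.sin(2 * np.pi * x / (nx * dx))   # frozen LF field
c = c0 * (1 + beta * p_lf)

# One-way advection u_t + c(x) u_x = 0, first-order upwind scheme.
dt = 0.5 * dx / c.max()                    # Courant number <= 0.5 everywhere
u = np.exp(-((x - 0.2 * nx * dx) / (20 * dx)) ** 2)
for _ in range(1500):
    u[1:] -= c[1:] * dt / dx * (u[1:] - u[:-1])
    u[0] = 0.0

peak = x[np.argmax(u)]
print(f"pulse peak has moved to {peak * 1e3:.2f} mm")
```

Because c(x) varies along the path, different parts of the HF pulse accumulate different delays, which is the manipulation effect the abstract describes.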
Simulation study of wakefield generation by two color laser pulses propagating in homogeneous plasma
Kumar Mishra, Rohit; Saroch, Akanksha; Jha, Pallavi
2013-09-15
This paper deals with a two-dimensional simulation of electric wakefields generated by two color laser pulses propagating in homogeneous plasma, using VORPAL simulation code. The laser pulses are assumed to have a frequency difference equal to the plasma frequency. Simulation studies are performed for two similarly as well as oppositely polarized laser pulses and the respective amplitudes of the generated longitudinal wakefields for the two cases are compared. Enhancement of wake amplitude for the latter case is reported. This simulation study validates the analytical results presented by Jha et al.[Phys. Plasmas 20, 053102 (2013)].
NASA Astrophysics Data System (ADS)
Taozheng
2015-08-01
In recent years, owing to the high stability and privacy of vortex beams, the optical vortex has become a research hot spot in atmospheric optical transmission. We numerically investigate the propagation of vector elliptical vortex beams in a turbulent atmosphere. The simulations use random phase screens to model the transport of the vortex beam through atmospheric turbulence, and we study its transmission characteristics (light intensity, phase, polarization, etc.). Our results show that the distortion of the vortex beam during atmospheric transmission is small, which makes the elliptical vortex beam a promising candidate for space communications.
Lill, J V; Broughton, J Q
2000-06-19
The method of Parrinello and Rahman is generalized to include slip in addition to deformation of the simulation cell. Equations of motion are derived, and a microscopic expression for traction is introduced. Lagrangian constraints are imposed so that the combination of deformation and slip conform to the invariant plane shear characteristic of martensites. Simulation of a model transformation demonstrates the nucleation and propagation of a glissile dislocation interface.
Classical simulation of quantum error correction in a Fibonacci anyon code
NASA Astrophysics Data System (ADS)
Burton, Simon; Brell, Courtney G.; Flammia, Steven T.
2017-02-01
Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 ×128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.
A Compact Code for Simulations of Quantum Error Correction in Classical Computers
Nyman, Peter
2009-03-10
This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give some examples of implementations of error-correction codes. These implementations are made in a more general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
Runyon, Matthew K; Kastrup, Christian J; Johnson-Kerner, Bethany L; Ha, Thuong G Van; Ismagilov, Rustem F
2008-03-19
This paper describes microfluidic experiments with human blood plasma and numerical simulations to determine the role of fluid flow in the regulation of propagation of blood clotting. We demonstrate that propagation of clotting can be regulated by different mechanisms depending on the volume-to-surface ratio of a channel. In small channels, propagation of clotting can be prevented by surface-bound inhibitors of clotting present on vessel walls. In large channels, where surface-bound inhibitors are ineffective, propagation of clotting can be prevented by a shear rate above a threshold value, in agreement with predictions of a simple reaction-diffusion mechanism. We also demonstrate that propagation of clotting in a channel with a large volume-to-surface ratio and a shear rate below a threshold shear rate can be slowed by decreasing the production of thrombin, an activator of clotting. These in vitro results make two predictions, which should be experimentally tested in vivo. First, propagation of clotting from superficial veins to deep veins may be regulated by shear rate, which might explain the correlation between superficial thrombosis and the development of deep vein thrombosis (DVT). Second, nontoxic thrombin inhibitors with high binding affinities could be locally administered to prevent recurrent thrombosis after a clot has been removed. In addition, these results demonstrate the utility of simplified mechanisms and microfluidics for generating and testing predictions about the dynamics of complex biochemical networks.
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
NASA Astrophysics Data System (ADS)
Rakesh, V.; Kantharao, B.
2017-03-01
Data assimilation is considered one of the effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing data assimilation methodology. One of the critical components that determine the amount and propagation of observation information into the analysis is the model background error statistics (BES). The objective of this study is to quantify how BES in data assimilation impacts the simulation of heavy rainfall events over a southern state in India, Karnataka. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall is verified against high-resolution rain-gauge observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation has improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one which used global BES. Critical thermodynamic variables conducive to heavy rainfall, such as convective available potential energy, simulated using regional BES are more realistic compared to global BES. It is pointed out that these results have important practical implications in the design of forecast platforms for decision-making during extreme weather events.
Molecular dynamics simulation of effect of hydrogen atoms on crack propagation behavior of α-Fe
NASA Astrophysics Data System (ADS)
Song, H. Y.; Zhang, L.; Xiao, M. X.
2016-12-01
The effect of the hydrogen concentration and hydrogen distribution on the mechanical properties of α-Fe with a pre-existing unilateral crack under tensile loading is investigated by molecular dynamics simulation. The results reveal that the models present good ductility when the front region of crack tip has high local hydrogen concentration. The peak stress of α-Fe decreases with increasing hydrogen concentration. The studies also indicate that for the samples with hydrogen atoms, the crack propagation behavior is independent of the model size and boundaries. In addition, the crack propagation behavior is significantly influenced by the distribution of hydrogen atoms.
Packo, P.; Staszewski, W. J.; Uhl, T.
2016-01-01
Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808
NASA Astrophysics Data System (ADS)
Winey, J. M.; Gupta, Y. M.
2010-05-01
An anisotropic continuum material model was developed to describe the thermomechanical response of unreacted pentaerythritol tetranitrate (PETN) single crystals to shock wave loading. Using this model, which incorporates nonlinear elasticity and crystal plasticity in a thermodynamically consistent tensor formulation, wave propagation simulations were performed to compare to experimental wave profiles [J. J. Dick and J. P. Ritchie, J. Appl. Phys. 76, 2726 (1994)] for PETN crystals under plate impact loading to 1.2 GPa. Our simulations show that for shock propagation along the [100] orientation where deformation across shear planes is sterically unhindered, a dislocation-based model provides a good match to the wave profile data. For shock propagation along the [110] direction, where deformation across shear planes is sterically hindered, a dislocation-based model cannot account for the observed strain-softening behavior. Instead, a shear cracking model was developed, providing good agreement with the data for [110] and [001] shock orientations. These results show that inelastic deformation due to hindered and unhindered shear in PETN occurs through mechanisms that are physically different. In addition, results for shock propagation normal to the (101) crystal plane suggest that the primary slip system identified from quasistatic indentation tests is not activated under shock wave loading. Overall, results from our continuum simulations are consistent with a previously proposed molecular mechanism for shock-induced chemical reaction in PETN in which the formation of polar conformers, due to hindered shear, facilitates the development of ionic reaction pathways.
Wendelberger, James G.
2016-10-31
These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
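The "sum in quadrature" rule for propagation of variance can be illustrated with a minimal sketch; the function name and the component uncertainties below are hypothetical, not the procedure from the slides:

```python
import math

# Propagation of variance for a sum/difference of independent measured
# terms (e.g. an inventory difference): 1-sigma uncertainties add in
# quadrature. The component values below are hypothetical.
def sigma_quadrature(sigmas):
    """Combined 1-sigma uncertainty of a sum of independent terms."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical component uncertainties (kg) for four processed items.
components = [0.3, 0.4, 0.0, 1.2]
print(sigma_quadrature(components))   # ≈ 1.3, dominated by the largest term
```

Note that the combined uncertainty is dominated by the largest component, which is why precision and accuracy of the worst measurement drive the inventory-difference uncertainty.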
NASA Astrophysics Data System (ADS)
Blanco, Joaquín. E.; Nolan, David S.; Tulich, Stefan N.
2016-10-01
Convectively coupled Kelvin waves (CCKWs) represent a significant contribution to the total variability of the Intertropical Convergence Zone (ITCZ). This study analyzes the structure and propagation of CCKWs simulated by the Weather Research and Forecasting (WRF) model using two types of idealized domains. These are the "aquachannel," a flat rectangle on a beta plane with zonally periodic boundary conditions and length equal to the Earth's circumference at the equator, and the "aquapatch," a square domain with zonal extent equal to one third of the aquachannel's length. A series of simulations are performed, including a doubly nested aquapatch, in which convection is solved explicitly along the equator. The model intercomparison is carried out through the use of several techniques such as power spectra, filtering, wave tracking, and compositing, and it is extended to some simulations from the Aquaplanet Experiment (APE). Results show that despite the equatorial superrotation bias produced by the WRF simulations, the CCKWs simulated with this model propagate with similar phase speeds (relative to the low-level mean flow) as the corresponding waves from the APE simulations. Horizontal and vertical structures of the CCKWs simulated with aquachannels are also in overall good agreement with those from aquaplanet simulations and observations, although there is a distortion of the zonal extent of anomalies when the shorter aquapatch is used.
Simulation of picosecond pulse propagation in fibre-based radiation shaping units
NASA Astrophysics Data System (ADS)
Kuptsov, G. V.; Petrov, V. V.; Laptev, A. V.; Petrov, V. A.; Pestryakov, E. V.
2016-09-01
We have performed a numerical simulation of picosecond pulse propagation in a combined stretcher consisting of a segment of a telecommunication fibre and diffraction holographic gratings. The process of supercontinuum generation in a nonlinear photonic-crystal fibre pumped by picosecond pulses is simulated by solving numerically the generalised nonlinear Schrödinger equation; spectral and temporal pulse parameters are determined. Experimental data are in good agreement with simulation results. The obtained results are used to design a high-power femtosecond laser system with a pulse repetition rate of 1 kHz.
Experimental and Computational Models for Simulating Sound Propagation Within the Lungs
Acikgoz, S.; Ozer, M. B.; Mansy, H. A.; Sandler, R. H.
2008-01-01
An acoustic boundary element model is used to simulate sound propagation in the lung parenchyma and surrounding chest wall. It is validated theoretically and numerically and then compared with experimental studies on lung-chest phantom models that simulate the lung pathology of pneumothorax. Studies quantify the effect of the simulated lung pathology on the resulting acoustic field measured at the phantom chest surface. This work is relevant to the development of advanced auscultatory techniques for lung, vascular, and cardiac sounds within the torso that utilize multiple noninvasive sensors to create acoustic images of the sound generation and transmission to identify certain pathologies. PMID:18568101
NASA Technical Reports Server (NTRS)
Matda, Y.; Crawford, F. W.
1974-01-01
An economical low noise plasma simulation model is applied to a series of problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of magnetic field. The model is described and tested, first in the absence of an applied signal, and then with a small amplitude perturbation, to establish the low noise features and to verify the theoretical linear dispersion relation at wave energy levels as low as 10^-6 of the plasma thermal energy. The method is then used to study propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories. The additional phenomena of sideband instability and satellite growth, stimulated by large amplitude wave propagation and the resulting particle trapping, are described.
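For warm electrostatic waves in an unmagnetized Maxwellian plasma, the linear dispersion relation that such tests verify is, to lowest order in thermal corrections, the Bohm-Gross relation ω² = ωₚ²(1 + 3k²λ_D²). A minimal evaluation in normalized units (the wavenumber range is an assumption for illustration):

```python
import numpy as np

# Bohm-Gross dispersion for electrostatic (Langmuir) waves:
#   w^2 = wp^2 * (1 + 3 * k^2 * lambda_D^2)
# Normalized units: w in plasma frequencies, k in inverse Debye lengths.
def bohm_gross(k_lambda_d):
    return np.sqrt(1.0 + 3.0 * np.asarray(k_lambda_d, dtype=float) ** 2)

k = np.linspace(0.05, 0.5, 10)      # assumed range of k * lambda_D
w = bohm_gross(k)
v_phase = w / k                     # phase velocity, units of lambda_D * wp
print(np.round(w, 3))
```

At k·λ_D → 0 the relation reduces to ω = ωₚ, the cold-plasma oscillation, and the thermal correction stiffens the dispersion at shorter wavelengths.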
Design of a predictive targeting error simulator for MRI-guided prostate biopsy.
Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor
2010-02-23
Multi-parametric MRI is a new imaging modality superior in quality to Ultrasound (US) which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the resulting targeting errors of simulated registrations between prostate volumes given user-provided parameters for organ deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and four Intel Core 2 Quad CPUs each with a speed of 2.40 GHz. Preliminary results of our simulation suggest the maximum tolerable segmentation error given the presence of a 5.0 mm wide small tumor is between 4-5 mm. We intend to validate these results via clinical trials as part of our ongoing work.
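The polar-curve-plus-spline idea for injecting contouring errors can be sketched as follows; the organ radius, knot count, and error magnitude are illustrative assumptions, not the simulator's parameters (and SciPy's `CubicSpline` stands in for whatever interpolation routine the simulator uses):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Perturb the radius of an idealized circular contour at a few control
# angles, then interpolate with a periodic cubic spline so the injected
# segmentation error is smooth and the curve stays closed.
rng = np.random.default_rng(42)

r0 = 20.0                                # mm, idealized organ radius (assumed)
knots = np.linspace(0.0, 2 * np.pi, 9)   # 8 control angles + wraparound
dr = rng.normal(0.0, 2.0, size=9)        # mm, random radial contouring errors
dr[-1] = dr[0]                           # periodic closure

spline = CubicSpline(knots, dr, bc_type="periodic")

theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
r = r0 + spline(theta)                   # perturbed contour r(theta)

print(round(float(np.abs(r - r0).max()), 2))  # worst-case radial error, mm
```

Sweeping the error standard deviation in such a scheme is one way to probe the maximum tolerable segmentation error the abstract reports.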
Simulating underwater plasma sound sources to evaluate focusing performance and analyze errors
NASA Astrophysics Data System (ADS)
Ma, Tian; Huang, Jian-Guo; Lei, Kai-Zhuo; Chen, Jian-Feng; Zhang, Qun-Fei
2010-03-01
Focused underwater plasma sound sources are being applied in more and more fields. Focusing performance is one of the most important factors determining transmission distance and peak values of the pulsed sound waves. The sound source’s components and focusing mechanism were all analyzed. A model was built in 3D Max and wave strength was measured on the simulation platform. Error analysis was fully integrated into the model so that effects on sound focusing performance of processing-errors and installation-errors could be studied. Based on what was practical, ways to limit the errors were proposed. The results of the error analysis should guide the design, machining, placement, debugging and application of underwater plasma sound sources.
SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors
Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I
2014-06-01
Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in dicom format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair. Also, 560 positive tests (with error) were performed, with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if percent pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
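The classification-and-ROC step can be sketched with synthetic gamma pass rates; the distributions below are invented stand-ins for the 56 negative and 560 positive tests, not the study's data:

```python
import numpy as np

# Each detection test yields a gamma pass rate (percent of pixels with
# gamma < kappa); an image is flagged as errored when its pass rate falls
# below tau. The pass-rate distributions here are synthetic assumptions.
rng = np.random.default_rng(1)
pass_error_free = np.clip(rng.normal(98.0, 1.0, 56), 0, 100)   # negatives
pass_with_error = np.clip(rng.normal(92.0, 3.0, 560), 0, 100)  # positives

def roc_point(tau):
    tpr = float(np.mean(pass_with_error < tau))  # errored images flagged
    fpr = float(np.mean(pass_error_free < tau))  # clean images mis-flagged
    return fpr, tpr

# Sweeping tau traces out the ROC curve.
for tau in (90.0, 95.0, 97.0):
    fpr, tpr = roc_point(tau)
    print(f"tau={tau}: FPR={fpr:.2f}, TPR={tpr:.2f}")
```

Raising τ catches more true errors at the cost of more false alarms; the "reliably detect" criterion in the abstract corresponds to finding a τ (and κ) with both error rates below 5%.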
Mézière, Fabien; Muller, Marie; Dobigny, Blandine; Bossy, Emmanuel; Derode, Arnaud
2013-02-01
Ultrasound propagation in clusters of elliptic (two-dimensional) or ellipsoidal (three-dimensional) scatterers randomly distributed in a fluid is investigated numerically. The essential motivation for the present work is to gain a better understanding of ultrasound propagation in trabecular bone. Bone microstructure exhibits structural anisotropy and multiple wave scattering. Some phenomena remain partially unexplained, such as the propagation of two longitudinal waves. The objective of this study was to shed more light on the occurrence of these two waves, using finite-difference simulations on a model medium simpler than bone. Slabs of anisotropic, scattering media were randomly generated. The coherent wave was obtained through spatial and ensemble-averaging of the transmitted wavefields. When varying relevant medium parameters, four of them appeared to play a significant role for the observation of two waves: (i) the solid fraction, (ii) the direction of propagation relatively to the scatterers orientation, (iii) the ability of scatterers to support shear waves, and (iv) a continuity of the solid matrix along the propagation. These observations are consistent with the hypothesis that fast waves are guided by the locally plate/bar-like solid matrix. If confirmed, this interpretation could significantly help developing approaches for a better understanding of trabecular bone micro-architecture using ultrasound.
Xiao, Xifeng; Voelz, David G; Toselli, Italo; Korotkova, Olga
2016-05-20
Experimental and theoretical work has shown that atmospheric turbulence can exhibit "non-Kolmogorov" behavior including anisotropy and modifications of the classically accepted spatial power spectral slope, -11/3. In typical horizontal scenarios, atmospheric anisotropy implies that the variations in the refractive index are more spatially correlated in both horizontal directions than in the vertical. In this work, we extend Gaussian beam theory for propagation through Kolmogorov turbulence to the case of anisotropic turbulence along the horizontal direction. We also study the effects of different spatial power spectral slopes on the beam propagation. A description is developed for the average beam intensity profile, and the results for a range of scenarios are demonstrated for the first time with a wave optics simulation and a spatial light modulator-based laboratory benchtop counterpart. The theoretical, simulation, and benchtop intensity profiles show good agreement and illustrate that an elliptically shaped beam profile can develop upon propagation. For stronger turbulent fluctuation regimes and larger anisotropies, the theory predicts a slightly more elliptical form of the beam than is generated by the simulation or benchtop setup. The theory also predicts that without an outer scale limit, the beam width becomes unbounded as the power spectral slope index α approaches a maximum value of 4. This behavior is not seen in the simulation or benchtop results because the numerical phase screens used for these studies do not model the unbounded wavefront tilt component implied in the analytic theory.
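The FFT-based random phase screens such studies use can be sketched by spectrally filtering white noise with a power-law amplitude; α = 11/3 recovers the Kolmogorov slope. The scaling here is arbitrary (normalized to unit variance) rather than a calibrated Cn², and, as the abstract notes, this construction omits the unbounded tilt component:

```python
import numpy as np

# Random phase screen by spectral filtering: white Gaussian noise shaped
# by a power-law amplitude ~ kappa^(-alpha/2). alpha = 11/3 gives the
# Kolmogorov spectral slope; other alpha model "non-Kolmogorov" turbulence.
def phase_screen(n, delta, alpha=11.0 / 3.0, seed=0):
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, delta)
    kx, ky = np.meshgrid(f, f)
    kappa = np.hypot(kx, ky)
    kappa[0, 0] = np.inf                 # zero out the DC (piston) term
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    screen = np.fft.ifft2(noise * kappa ** (-alpha / 2.0)).real
    return screen / screen.std()         # phase normalized to unit variance

scr = phase_screen(256, 1e-2)
print(scr.shape, round(float(scr.std()), 3))
```

Anisotropy of the kind discussed in the abstract can be modeled by stretching kx relative to ky before forming κ, so the refractive-index correlation differs between the horizontal and vertical directions.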
Sophocleous, M.A.
1991-01-01
The hypothesis is explored that groundwater-level rises in the Great Bend Prairie aquifer of Kansas are caused not only by water percolating downward through the soil but also by pressure pulses from stream flooding that propagate in a translatory motion through numerous high-hydraulic-diffusivity buried channels crossing the Great Bend Prairie aquifer in an approximately west-to-east direction. To validate this hypothesis, two transects of wells in north-south and east-west orientations crossing and alongside some paleochannels in the area were instrumented with water-level-recording devices; streamflow data from all area streams were obtained from available stream-gaging stations. A theoretical approach was also developed to conceptualize the stream-aquifer processes numerically. The field data and numerical simulations provided support for the hypothesis. Thus, observation wells located along the shoulders of or in between the inferred paleochannels show little or no fluctuation and no correlation with streamflow, whereas wells located along paleochannels show high water-level fluctuations and good correlation with the streamflows of the stream connected to the observation site by means of the paleochannels. The stream-aquifer numerical simulation results demonstrate that the larger the hydraulic diffusivity of the aquifer, the larger the extent of pressure pulse propagation and the faster the propagation speed. The conceptual simulation results indicate that long-distance propagation of stream floodwaves (of the order of tens of kilometers) through the Great Bend aquifer is indeed feasible with plausible stream and aquifer parameters. The sensitivity analysis results indicate that the extent and speed of pulse propagation are more sensitive to variations of stream roughness (Manning's coefficient) and stream channel slope than to any aquifer parameter. © 1991.
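The diffusivity dependence reported above can be illustrated with a minimal one-dimensional stream-aquifer sketch (not the study's model; all parameter values are assumed): an explicit finite-difference solution of the groundwater diffusion equation, timing the arrival of a boundary pressure pulse at an observation well.

```python
import numpy as np

def arrival_time(D, L=1000.0, nx=100, t_max=1.0e6, thresh=0.01):
    """Time for a unit head pulse applied at a stream (x = 0) to raise the
    head at an observation well at x = L above `thresh`, in a 1D confined
    aquifer with hydraulic diffusivity D (m^2/s). Explicit FTCS scheme."""
    dx = 2.0 * L / nx                 # domain extends to 2L
    dt = 0.2 * dx**2 / D              # well inside the 0.5 stability limit
    h = np.zeros(nx)
    i_obs = nx // 2                   # grid index of x = L
    for n in range(int(t_max / dt)):
        h[0] = 1.0                    # stream stage held high (Dirichlet)
        h[1:-1] += D * dt / dx**2 * (h[2:] - 2.0 * h[1:-1] + h[:-2])
        if h[i_obs] > thresh:
            return (n + 1) * dt
    return np.inf

# a tenfold increase in diffusivity speeds up pulse arrival roughly tenfold
t_fast, t_slow = arrival_time(10.0), arrival_time(1.0)
```

Consistent with the sensitivity results quoted above, the arrival time at a fixed distance scales inversely with the hydraulic diffusivity.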
NASA Astrophysics Data System (ADS)
Jiang, Shan; Sewell, Thomas D.; Thompson, Donald L.
2015-06-01
We are interested in understanding the fundamental processes that occur during propagation of shock waves across the crystal-melt interface in molecular substances. We have carried out molecular dynamics simulations of shock passage from the nitromethane (100)-oriented crystal into the melt and vice versa using the fully flexible, non-reactive Sorescu, Rice, and Thompson force field. A stable interface was established for a temperature near the melting point by using a combination of isobaric-isothermal (NPT) and isochoric-isothermal (NVT) simulations. The equilibrium bulk and interfacial regions were characterized using spatial-temporal distributions of molecular number density, kinetic and potential energy, and C-N bond orientations. Those same properties were calculated as functions of time during shock propagation. As expected, the local temperatures (intermolecular, intramolecular, and total) and stress states differed significantly between the liquid and crystal regions and depending on the direction of shock propagation. Substantial differences in the spatial distribution of shock-induced defect structures in the crystalline region were observed depending on the direction of shock propagation. Research supported by the U.S. Army Research Office.
Sampling data for OSSEs. [simulating errors for WINDSAT Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Hoffman, Ross
1988-01-01
For the sake of realism, an OSSE should incorporate at least some of the high-frequency, small-scale phenomena that are suppressed by atmospheric models; these phenomena should be present in the realistic atmosphere sampled by all observing sensor systems whose data are being used. Errors are presently generated for an OSSE in a way that encompasses representational errors, sampling, geophysical local bias, random error, and sensor filtering.
Sampling errors in free energy simulations of small molecules in lipid bilayers.
Neale, Chris; Pomès, Régis
2016-10-01
Free energy simulations are a powerful tool for evaluating the interactions of molecular solutes with lipid bilayers as mimetics of cellular membranes. However, these simulations are frequently hindered by systematic sampling errors. This review highlights recent progress in computing free energy profiles for inserting molecular solutes into lipid bilayers. Particular emphasis is placed on a systematic analysis of the free energy profiles, identifying the sources of sampling errors that reduce computational efficiency, and highlighting methodological advances that may alleviate sampling deficiencies. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg.
NASA Astrophysics Data System (ADS)
Ohori, Tomohiro; Yoshida, Shuhei; Yamamoto, Manabu
2010-05-01
The rapid progress in computer performance and the widespread use of broadband networks have facilitated the transmission of huge quantities of digital information, increasing the need for high-speed, large-capacity storage devices and leading to studies on holographic data storage (HDS). Compared with laser disks, where the recording density is limited by optical diffraction, HDS provides ultrahigh capacity through multiplex recording and high-speed transfer greater than 1 Gbps; it has excellent potential for the optical memory systems of the future [1]. To develop HDS, a design theory for element technologies such as signal processing, recording materials, and optical systems is required. This study therefore examines techniques for simulating the recording and reproduction processes of HDS. In simulations thus far, the recording medium has usually been approximated as laminated layers of holographic thin films. This method is suitable for systematic evaluation because its computational cost is low and it allows simulation with data in their true form, that is, two-dimensional digital data patterns. However, it is difficult to accurately examine the influence of film thickness with a two-dimensional lamination simulation. In this study, a technique for analyzing thick-film holograms is therefore examined using the beam propagation method. The results of a two-dimensional simulation assuming laminated holographic thin films and a three-dimensional simulation using the beam propagation method are compared for cases where the medium need not be treated as a thick film.
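The beam propagation method referred to above can be sketched as a scalar split-step Fourier scheme (a generic paraxial BPM, not the authors' implementation; the grid, wavelength, and index values are assumed, in micrometre units):

```python
import numpy as np

def bpm_propagate(field, wavelength, dx, dz, nz, delta_n=None, n0=1.0):
    """Scalar split-step (Fourier) beam propagation through a thick medium.
    `field` is a complex 2D array sampled at pitch dx; `delta_n`, if given,
    maps the axial position z to a 2D refractive-index modulation."""
    k0 = 2.0 * np.pi / wavelength
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, dx)
    KX, KY = np.meshgrid(kx, ky)
    # paraxial free-space transfer function for one step dz in index n0
    H = np.exp(-1j * (KX**2 + KY**2) * dz / (2.0 * k0 * n0))
    for iz in range(nz):
        field = np.fft.ifft2(np.fft.fft2(field) * H)                 # diffraction
        if delta_n is not None:
            field = field * np.exp(1j * k0 * delta_n(iz * dz) * dz)  # thin phase screen
    return field

# free-space check: a Gaussian beam propagated 200 steps of 5 um
x = np.arange(128) - 64.0
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (2.0 * 10.0**2)).astype(complex)
u1 = bpm_propagate(u0, wavelength=0.5, dx=1.0, dz=5.0, nz=200)
```

Since each step is a unitary FFT pair times a pure phase, the scheme conserves energy exactly in the absence of absorption, while the beam spreads diffractively.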
Wang, Fei; Toselli, Italo; Korotkova, Olga
2016-02-10
An optical system consisting of a laser source and two independent consecutive phase-only spatial light modulators (SLMs) is shown to accurately simulate a generated random beam (first SLM) after interaction with a stationary random medium (second SLM). To illustrate the range of possibilities, a recently introduced class of random optical frames is examined on propagation in free space and several weak turbulent channels with Kolmogorov and non-Kolmogorov statistics.
A Discussion on the Errors in the Surface Heat Fluxes Simulated by a Coupled GCM.
NASA Astrophysics Data System (ADS)
Yu, Jin-Yi; Mechoso, Carlos R.
1999-02-01
This paper contrasts the sea surface temperature (SST) and surface heat flux errors in the Tropical Pacific simulated by the University of California, Los Angeles, coupled atmosphere-ocean general circulation model (CGCM) and by its atmospheric component (AGCM) using prescribed SSTs. The usefulness of such a comparison is discussed in view of the sensitivities of the coupled system. Off the equator, the CGCM simulates more realistic surface heat fluxes than the AGCM, except in the eastern Pacific south of the equator where the coupled model produces a spurious intertropical convergence zone. The AGCM errors are dominated by excessive latent heat flux, except in the stratus regions along the coasts of California and Peru where errors are dominated by excessive shortwave flux. The CGCM tends to balance the AGCM errors by either correctly decreasing the evaporation at the expense of cold SST biases or erroneously increasing the evaporation at the expense of warm SST biases. At the equator, errors in simulated SSTs are amplified by the feedbacks of the coupled system. Over the western equatorial Pacific, the CGCM produces a cold SST bias that is a manifestation of a spuriously elongated cold tongue. The AGCM produces realistic values of surface heat flux. Over the cold tongue in the eastern equatorial Pacific, the CGCM simulates realistic annual variations in SST. In the simulation, however, the relationship between variations in SST and surface latent heat flux corresponds to a negative feedback, while in the observation it corresponds to a positive feedback. Such an erroneous feature of the CGCM is linked to deficiencies in the simulation of the cross-equatorial component of the surface wind. The reasons for the success in the simulation of SST in the equatorial cold tongue despite the erroneous surface heat flux are examined.
Quantitative analyses of spectral measurement error based on Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin
2015-03-01
Spectral measurement error is governed by the resolution and sensitivity of the spectroscopic instrument and by the instability of the measurement environment. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating reference point measurement as an example, there is unavoidably a deviation between the measuring position and the theoretical position due to various influencing factors. In order to determine the error caused by the positioning accuracy of the measuring device, an MC simulation was carried out at a wavelength of 1310 nm for a 2% Intralipid solution. The MC simulation was performed with 10^10 photons and a ring sampling interval of 1 μm. The simulated data are analyzed on the basis of the thinning and calculating method (TCM) proposed in this talk. The results indicate that TCM can be used to quantitatively analyze the spectral measurement error caused by positioning inaccuracy.
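The effect of positioning inaccuracy can be mimicked with a far cruder Monte Carlo sketch than the photon-transport simulation described above (the exp(-r)/r^2 reflectance falloff and all parameter values here are assumptions for illustration, not the paper's model):

```python
import numpy as np

def positioning_error_stats(r0=2.0e-3, sigma=10e-6, mu_eff=1.0e3,
                            n=100_000, seed=0):
    """Relative signal error caused by Gaussian radial jitter `sigma` (m)
    around a nominal source-detector distance r0, assuming a diffusion-like
    reflectance falloff R(r) ~ exp(-mu_eff * r) / r^2."""
    rng = np.random.default_rng(seed)
    r = r0 + rng.normal(0.0, sigma, n)           # jittered measuring positions
    R = lambda x: np.exp(-mu_eff * x) / x**2
    rel_err = R(r) / R(r0) - 1.0                 # relative signal deviation
    return rel_err.mean(), rel_err.std()

m10, s10 = positioning_error_stats(sigma=10e-6)
m20, s20 = positioning_error_stats(sigma=20e-6)
```

The spread of the signal error grows roughly in proportion to the positioning jitter, which is the dependence such an MC analysis quantifies.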
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2014-06-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if the diffusion constants are large. Furthermore, inherent properties of exact kinetic Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A feature common to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation, but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners, since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via first-order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.
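A minimal version of such timestep adaptivity can be sketched on a toy reaction-diffusion PDE with exact substeps (this is not the RDME or the DFSP method; the error estimator uses step doubling, and the controller constants are assumed):

```python
import numpy as np

def lie_step(u, dt, D, k, ksq):
    """One first-order (Lie) splitting step: exact spectral diffusion
    substep, then the exact solution of the reaction du/dt = -k u^2."""
    u = np.fft.ifft(np.fft.fft(u) * np.exp(-D * ksq * dt)).real
    return u / (1.0 + k * u * dt)

def adaptive_split(u0, t_end, dt0, D=0.01, k=1.0, tol=1e-5, L=1.0):
    n = len(u0)
    ksq = (2.0 * np.pi * np.fft.fftfreq(n, L / n))**2
    u, t, dt = u0.copy(), 0.0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        big = lie_step(u, dt, D, k, ksq)
        half = lie_step(lie_step(u, 0.5 * dt, D, k, ksq), 0.5 * dt, D, k, ksq)
        err = np.max(np.abs(big - half))   # local splitting-error estimate
        if err < tol:                      # accept the more accurate solution
            u, t = half, t + dt
        # the local error of a first-order split scales as dt^2
        dt *= min(2.0, max(0.2, 0.8 * np.sqrt(tol / max(err, 1e-16))))
    return u

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u_final = adaptive_split(1.0 + 0.5 * np.sin(2.0 * np.pi * x), t_end=1.0, dt0=0.5)
```

Rejected steps shrink the timestep until the step-doubling estimate falls below the tolerance, which is the kind of control the abstract argues existing RDME splitting solvers lack.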
Impacts of Characteristics of Errors in Radar Rainfall Estimates for Rainfall-Runoff Simulation
NASA Astrophysics Data System (ADS)
KO, D.; PARK, T.; Lee, T. S.; Shin, J. Y.; Lee, D.
2015-12-01
For flood prediction, weather radar is commonly employed to measure the amount of precipitation and its spatial distribution. However, rainfall estimated from radar contains uncertainty caused by errors such as beam blockage and ground clutter. Although previous studies have focused on removing errors from radar data, it is also crucial to evaluate how runoff volumes are influenced by those errors. Furthermore, the rainfall resolutions used in previous studies of rainfall uncertainty analysis or distributed hydrological simulation are too coarse for practical application. Therefore, in the current study, we tested the effects of radar rainfall errors on rainfall-runoff with a high-resolution approach, called the spatial error model (SEM). Synthetic generation of random and cross-correlated radar errors was employed as the SEM. A number of events in the Nam River dam region were tested to investigate the peak discharge from the basin as a function of error variance. The results indicate that dependent errors produce much larger variations in peak discharge than independent random errors. To further investigate the effect of the magnitude of cross-correlation between radar errors, different magnitudes of spatial cross-correlation were employed in the rainfall-runoff simulation. The results demonstrate that stronger correlation leads to higher variation of peak discharge, and vice versa. We conclude that the error structure in radar rainfall estimates significantly affects the predicted runoff peak. Efforts must therefore consider not only removing the radar rainfall error itself but also weakening the cross-correlation structure of radar errors in order to forecast flood events more accurately. Acknowledgements This research was supported by a grant from a Strategic Research Project (Development of Flood Warning and Snowfall Estimation Platform Using Hydrological Radars), which
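A spatial error model of this kind is commonly built by Cholesky factorization of an assumed covariance; a minimal sketch (exponential covariance and illustrative parameters, not the study's SEM) shows why correlated errors inflate basin-total variability:

```python
import numpy as np

def correlated_errors(coords, sigma=0.3, corr_len=5.0, rng=None):
    """One realization of zero-mean, spatially cross-correlated Gaussian
    errors on the given coordinates, via Cholesky factorization of an
    exponential covariance C_ij = sigma^2 exp(-d_ij / corr_len)."""
    rng = rng or np.random.default_rng(0)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sigma**2 * np.exp(-d / corr_len)
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(coords)))  # jitter for stability
    return L @ rng.standard_normal(len(coords))

# 8x8 grid of radar pixels: compare basin-total error spread for
# correlated vs (nearly) independent error fields
xy = np.array([(i, j) for i in range(8) for j in range(8)], float)
rng = np.random.default_rng(1)
tot_corr = [correlated_errors(xy, corr_len=5.0, rng=rng).sum() for _ in range(300)]
tot_ind = [correlated_errors(xy, corr_len=1e-6, rng=rng).sum() for _ in range(300)]
```

The basin totals of the correlated field vary far more than those of the independent field, mirroring the study's finding that dependent errors produce much larger peak-discharge variation.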
Experimental study on propagation of fault slip along a simulated rock fault
NASA Astrophysics Data System (ADS)
Mizoguchi, K.
2015-12-01
Around pre-existing geological faults in the crust, we often observe an off-fault damage zone containing many fractures at scales from ~mm to ~m, whose density typically increases with proximity to the fault. One of the fracture formation processes is considered to be dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks during slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surface simulates a fault 35 cm in length and 1 cm in width. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied to the fault plane at an angle of 60° to the loading direction. When σ = 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded at a 1 MHz sampling rate with 16-bit resolution. When σ = 4.8 kN is applied, we observed slip events in which slip nucleates spontaneously in a subsection of the fault and propagates over the whole fault. However, the propagation speed is about 1.2 km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic ruptures. Further efforts are needed to reproduce earthquake-like slip events in these experiments. This work is supported by JSPS KAKENHI (26870912).
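The quoted stresses are consistent with standard traction resolution on a plane whose normal lies 30° from the load axis (i.e., a fault plane at 60° to the loading direction), which can be checked directly (a consistency sketch, not part of the experiment):

```python
import math

def resolve_traction(sigma_axial, theta_deg):
    """Normal and shear tractions on a plane whose normal makes the angle
    theta with a uniaxial compression axis (2D Cauchy stress resolution)."""
    th = math.radians(theta_deg)
    return sigma_axial * math.cos(th)**2, sigma_axial * math.sin(th) * math.cos(th)

# pick the axial stress that reproduces the quoted 1.25 MPa normal stress
sigma_axial = 1.25 / math.cos(math.radians(30.0))**2   # ~1.67 MPa
sn, tau = resolve_traction(sigma_axial, 30.0)
```

With that axial stress, the implied shear traction comes out at tau = sn * tan(30°) ≈ 0.72 MPa, matching the value in the abstract.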
NASA Astrophysics Data System (ADS)
Rauter, N.; Lammering, R.
2015-04-01
In order to detect micro-structural damage accurately, new methods are currently being developed. A promising tool is the generation of higher harmonic wave modes caused by nonlinear Lamb wave propagation in plate-like structures. Because of the very small amplitudes, a cumulative effect is used. To better assess this inspection method, numerical simulations are essential. Previous studies have developed the analytical description of this phenomenon, based on the five-constant nonlinear elastic theory, and the analytical solution has been confirmed by numerical simulations. In this work, the nonlinear cumulative wave propagation is first simulated and analyzed considering micro-structural cracks in thin, linear elastic, isotropic plates. It is shown that there is a cumulative effect for the S1-S2 mode pair, and the sensitivity of the relative acoustic nonlinearity parameter to such damage is validated. An influence of crack size and orientation on the nonlinear wave propagation behavior is also observed. In a second step, the micro-structural cracks are replaced by a nonlinear material model. Instead of the five-constant nonlinear elastic theory, hyperelastic material models implemented in commonly used FEM software are used to simulate the cumulative higher harmonic Lamb wave generation. The cumulative effect, as well as the different nonlinear behavior of the S1-S2 and S2-S4 mode pairs, is reproduced with these hyperelastic material models. Both numerical simulations, accounting for micro-structural cracks on the one hand and nonlinear material on the other, lead to comparable results. Compared with the five-constant nonlinear elastic theory, well-established hyperelastic material models such as Neo-Hooke and Mooney-Rivlin are thus a suitable alternative for simulating cumulative higher harmonic generation.
NASA Astrophysics Data System (ADS)
Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros
2012-10-01
We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
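The transitional MCMC algorithm can be sketched for a scalar toy problem (flat prior, Gaussian likelihood; the stage selection targeting a unit weight coefficient of variation and the single Metropolis pass per stage are simplifications, and prior bounds are assumed wide enough to ignore):

```python
import numpy as np

def tmcmc(log_like, prior_sample, n=2000, target_cov=1.0, seed=0):
    """Minimal transitional MCMC: temper the likelihood from beta = 0 to 1,
    choosing each increment so the importance-weight COV ~ target_cov, then
    resample and diversify with one Metropolis pass per stage."""
    rng = np.random.default_rng(seed)
    theta = prior_sample(rng, n)
    ll = log_like(theta)
    beta = 0.0
    while beta < 1.0:
        db = 1.0 - beta                        # try to jump straight to beta = 1
        w = np.exp(db * (ll - ll.max()))
        if w.std() / w.mean() > target_cov:    # too aggressive: bisect the increment
            lo, hi = 0.0, db
            for _ in range(40):
                db = 0.5 * (lo + hi)
                w = np.exp(db * (ll - ll.max()))
                if w.std() / w.mean() > target_cov:
                    hi = db
                else:
                    lo = db
        beta += db
        idx = rng.choice(n, size=n, p=w / w.sum())       # resample by weight
        theta, ll = theta[idx], ll[idx]
        step = 0.5 * theta.std() + 1e-12
        prop = theta + rng.normal(0.0, step, n)          # symmetric proposal
        ll_prop = log_like(prop)
        accept = np.log(rng.random(n)) < beta * (ll_prop - ll)  # flat prior cancels
        theta = np.where(accept, prop, theta)
        ll = np.where(accept, ll_prop, ll)
    return theta

# toy calibration: posterior of a location parameter with a N(2, 0.3) likelihood
post = tmcmc(lambda t: -0.5 * ((t - 2.0) / 0.3)**2,
             lambda rng, n: rng.uniform(-10.0, 10.0, n))
```

Each stage's population is an embarrassingly parallel batch of likelihood evaluations, which is what makes the method attractive for expensive MD models as described above.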
NASA Technical Reports Server (NTRS)
Goldberg, Louis F.
1992-01-01
Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.
Bossy, Emmanuel; Padilla, Frédéric; Peyrin, Françoise; Laugier, Pascal
2005-12-07
Three-dimensional numerical simulations of ultrasound transmission were performed through 31 trabecular bone samples measured by synchrotron microtomography. The synchrotron microtomography provided high resolution 3D mappings of bone structures, which were used as the input geometry in the simulation software developed in our laboratory. While absorption (i.e. the absorption of ultrasound through dissipative mechanisms) was not taken into account in the algorithm, the simulations reproduced major phenomena observed in real through-transmission experiments in trabecular bone. The simulated attenuation (i.e. the decrease of the transmitted ultrasonic energy) varies linearly with frequency in the MHz frequency range. Both the speed of sound (SOS) and the slope of the normalized frequency-dependent attenuation (nBUA) increase with the bone volume fraction. Twenty-five out of the thirty-one samples exhibited negative velocity dispersion. One sample was rotated to align the main orientation of the trabecular structure with the direction of ultrasonic propagation, leading to the observation of a fast and a slow wave. Coupling numerical simulation with real bone architecture therefore provides a powerful tool to investigate the physics of ultrasound propagation in trabecular structures. As an illustration, comparison between results obtained on bone modelled either as a fluid or a solid structure suggested the major role of mode conversion of the incident acoustic wave to shear waves in bone to explain the large contribution of scattering to the overall attenuation.
NASA Astrophysics Data System (ADS)
Ishmuratov, I. K.; Baibekov, E. I.
2016-12-01
We investigate the possibility of restoring the transient nutations of electron spin centers embedded in a solid by using composite pulse sequences originally developed for nuclear magnetic resonance spectroscopy. We treat two types of systematic errors simultaneously: (i) rotation-angle errors related to the spatial distribution of the microwave field amplitude over the sample volume, and (ii) off-resonance errors related to the spectral distribution of the Larmor precession frequencies of the electron spin centers. Our direct simulations of the transient signal in erbium- and chromium-doped CaWO4 crystal samples, with and without error corrections, show that the application of the selected composite pulse sequences can substantially increase the lifetime of Rabi oscillations. Finally, we discuss the limitations on the applicability of the studied pulse sequences in solid-state electron paramagnetic resonance spectroscopy.
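The rotation-angle error type can be illustrated with the classic BB1 composite pulse from NMR (a generic example; the abstract does not specify which sequences were used). A uniform amplitude error scales every rotation angle, and the BB1 correction cancels it to leading order:

```python
import numpy as np

SX = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
SY = np.array([[0.0, -1.0j], [1.0j, 0.0]], dtype=complex)

def pulse(theta, phi):
    """SU(2) rotation by `theta` about an in-plane axis at azimuth `phi`."""
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def infidelity(U, V):
    """Gate infidelity measure, insensitive to global phase."""
    return 1.0 - abs(np.trace(U.conj().T @ V))**2 / 4.0

def naive_pi(eps):
    return pulse(np.pi * (1 + eps), 0.0)

def bb1_pi(eps):
    """BB1-corrected pi pulse; the amplitude error scales every angle."""
    phi = np.arccos(-1.0 / 4.0)        # arccos(-theta / 4 pi) with theta = pi
    s = 1.0 + eps
    return (pulse(np.pi * s, phi) @ pulse(2 * np.pi * s, 3 * phi)
            @ pulse(np.pi * s, phi) @ pulse(np.pi * s, 0.0))

target = pulse(np.pi, 0.0)
err_naive = infidelity(naive_pi(0.1), target)
err_bb1 = infidelity(bb1_pi(0.1), target)
```

For a 10% amplitude (B1-inhomogeneity) error, the composite pulse reduces the infidelity by orders of magnitude relative to a bare pi pulse, which is the mechanism behind the extended Rabi-oscillation lifetimes reported above.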
Evaluation of Interprofessional Team Disclosure of a Medical Error to a Simulated Patient
Kern, Donna H.; Shrader, Sarah P.
2016-01-01
Objective. To evaluate the impact of an Interprofessional Communication Skills Workshop on pharmacy student confidence and proficiency in disclosing medical errors to patients. Pharmacy student behavior was also compared to that of other health professions’ students on the team. Design. Students from up to four different health professions participated in a simulation as part of an interprofessional team. Teams were evaluated with a validated rubric postsimulation on how well they handled the disclosure of an error to the patient. Individually, each student provided anonymous feedback and self-reflected on their abilities via a Likert-scale evaluation tool. A comparison of pharmacy students who completed the workshop (active group) vs all others who did not (control group) was completed and analyzed. Assessment. The majority of students felt they had adequate training related to communication issues that cause medication errors. However, fewer students believed that they knew how to report such an error to a patient or within a health system. Pharmacy students who completed the workshop were significantly more comfortable explicitly stating the error disclosure to a patient and/or caregiver and were more likely to apologize and respond to questions forthrightly (p<0.05). Conclusions. This data affirms the need to devote more time to training students on communicating with patients about the occurrence of medical errors and how to report these errors. Educators should be encouraged to incorporate such training within interprofessional education curricula. PMID:27899834
One-way approximation for the simulation of weak shock wave propagation in atmospheric flows.
Gallin, Louis-Jonardan; Rénier, Mathieu; Gaudard, Eric; Farges, Thomas; Marchiano, Régis; Coulouvrat, François
2014-05-01
A numerical scheme is developed to simulate the propagation of weak acoustic shock waves in the atmosphere with no absorption. It generalizes the method previously developed for a heterogeneous medium [Dagrau, Rénier, Marchiano, and Coulouvrat, J. Acoust. Soc. Am. 130, 20-32 (2011)] to the case of a moving medium. It is based on an approximate scalar wave equation for potential, rewritten in a moving time frame, and separated into three parts: (i) the linear wave equation in a homogeneous and quiescent medium, (ii) the effects of atmospheric winds and of density and speed of sound heterogeneities, and (iii) nonlinearities. Each effect is then solved separately by an adapted method: angular spectrum for the wave equation, finite differences for the flow and heterogeneity corrections, and analytical method in time domain for nonlinearities. To keep a one-way formulation, only forward propagating waves are kept in the angular spectrum part, while a wide-angle parabolic approximation is performed on the correction terms. The numerical process is validated in the case of guided modal propagation with a shear flow. It is then applied to the case of blast wave propagation within a boundary layer flow over a flat and rigid ground.
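The analytical time-domain nonlinear substep mentioned above can be sketched via the lossless Poisson (simple-wave) solution, valid before shock formation (normalized variables; the published scheme additionally handles shocks and the other substeps):

```python
import numpy as np

def nonlinear_step(u, t, sigma):
    """Lossless nonlinear distortion of a waveform u(t) over a normalized
    propagation distance `sigma`: each point is shifted in retarded time in
    proportion to its amplitude, then resampled onto the uniform grid."""
    t_shift = t + sigma * u            # characteristics: crests advance
    if np.any(np.diff(t_shift) <= 0.0):
        raise ValueError("shock formed: mapping is no longer single-valued")
    return np.interp(t, t_shift, u)    # monotonic mapping, safe to invert

t = np.linspace(0.0, 2.0 * np.pi, 1024)
u0 = np.sin(t)
u1 = nonlinear_step(u0, t, 0.5)        # half the shock-formation distance
```

At half the shock-formation distance, the maximum waveform slope grows from 1 toward 1/(1 - sigma) = 2, the expected pre-shock steepening.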
Testing the Propagating Fluctuations Model with a Long, Global Accretion Disk Simulation
NASA Astrophysics Data System (ADS)
Hogg, J. Drew; Reynolds, Christopher S.
2016-07-01
The broadband variability of many accreting systems displays characteristic structures: log-normal flux distributions, root-mean-square (rms)-flux relations, and long inter-band lags. These characteristics are usually interpreted as inward-propagating fluctuations of the mass accretion rate in an accretion disk, driven by stochasticity of the angular momentum transport mechanism. We present the first analysis of propagating fluctuations in a long-duration, high-resolution, global three-dimensional magnetohydrodynamic (MHD) simulation of a geometrically thin (h/r ≈ 0.1) accretion disk around a black hole. While the dynamical-timescale turbulent fluctuations in the Maxwell stresses are too rapid to drive radially coherent fluctuations in the accretion rate, we find that the low-frequency quasi-periodic dynamo action introduces low-frequency fluctuations in the Maxwell stresses, which then drive the propagating fluctuations. Examining both the mass accretion rate and emission proxies, we recover log-normality, linear rms-flux relations, and radial coherence that would produce inter-band lags. Hence, we successfully relate and connect the phenomenology of propagating fluctuations to modern MHD accretion disk theory.
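The log-normality and linear rms-flux relation can be reproduced with the standard toy model of exponentiating a Gaussian red-noise series (a generic illustration of the propagating-fluctuations phenomenology, not the MHD simulation; the spectral slope and normalization are assumed):

```python
import numpy as np

def lognormal_lightcurve(n=2**16, slope=-1.0, lnsig=0.3, seed=1):
    """Exponentiate Gaussian noise with a power-law spectrum f**slope:
    the result has log-normal flux and a linear rms-flux relation."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    amp[1:] = f[1:]**(slope / 2.0)                    # amplitude ~ sqrt(power)
    spec = amp * np.exp(2j * np.pi * rng.random(f.size))  # random phases
    g = np.fft.irfft(spec, n)
    return np.exp(lnsig * g / g.std())

def rms_flux(x, seg=512):
    """Mean flux and rms computed segment by segment."""
    s = x[: x.size // seg * seg].reshape(-1, seg)
    return s.mean(axis=1), s.std(axis=1)

mean_f, rms_f = rms_flux(lognormal_lightcurve())
```

Segments with higher mean flux show higher rms, i.e., a positive rms-flux correlation of the kind recovered from the simulation's accretion rate and emission proxies.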
NASA Astrophysics Data System (ADS)
Shay, M. A.; Drake, J. F.
2009-12-01
In a recent substorm case study using THEMIS data [1], it was inferred that auroral intensification occurred 96 seconds after reconnection onset initiated a substorm in the magnetotail. These conclusions have been the subject of some controversy [2,3]. The time delay between reconnection and auroral intensification requires a propagation speed significantly faster than can be explained by Alfvén waves. Kinetic Alfvén waves, however, can be much faster and could possibly explain the time lag. To test this possibility, we simulate large-scale reconnection events with the kinetic PIC code P3D and examine the disturbances on a magnetic field line as it propagates through a reconnection region. In the regions near the separatrices but relatively far from the x-line, the propagation physics is expected to be governed by the physics of kinetic Alfvén waves. Indeed, we find that the propagation speed of the magnetic disturbance roughly scales with kinetic Alfvén speeds. We also examine the energization of electrons due to this disturbance. Consequences for our understanding of substorms will be discussed. [1] Angelopoulos, V. et al., Science, 321, 931, 2008. [2] Lui, A. T. Y., Science, 324, 1391-b, 2009. [3] Angelopoulos, V. et al., Science, 324, 1391-c, 2009.
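The required propagation speed and the kinetic Alfvén enhancement can be estimated with back-of-the-envelope numbers (the path length and Alfvén speed here are illustrative assumptions, not values from the study; only the 96 s delay comes from the abstract):

```python
import math

R_E = 6.371e6  # Earth radius, m

def required_speed(path_length_re, delay_s):
    """Average speed needed to cover a field-line path in the given delay."""
    return path_length_re * R_E / delay_s

def kaw_enhancement(k_perp_rho_s):
    """Parallel phase-speed enhancement over v_A for a kinetic Alfven wave,
    using the simple dispersion w = k_par v_A sqrt(1 + (k_perp rho_s)^2)."""
    return math.sqrt(1.0 + k_perp_rho_s**2)

# assuming ~30 R_E of field line traversed in 96 s: ~2000 km/s required,
# i.e. roughly twice an assumed v_A ~ 1000 km/s, or k_perp rho_s ~ sqrt(3)
v_req = required_speed(30.0, 96.0)
```

This is the sense in which kinetic Alfvén waves, unlike ordinary Alfvén waves, can plausibly account for the observed time lag.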
Simulation of Propagation and Transformation of THz Bessel Beams with Orbital Angular Momentum
NASA Astrophysics Data System (ADS)
Choporova, Yulia; Knyazev, Boris; Mitkov, Mikhail; Osintseva, Natalya; Pavelyev, Vladimir
Recently, terahertz Bessel beams with orbital angular momentum ("vortex beams") with topological charges l = ±1 and l = ±2 were generated for the first time using radiation of the Novosibirsk free electron laser (NovoFEL) and silicon binary phase axicons (Knyazev et al., Phys. Rev. Lett., vol. 115, Art. 163901, 2015). Such beams are promising for applications in wireless communication and remote sensing. In the present paper, numerical modelling of the generation and transformation of vortex beams based on scalar diffraction theory has been performed. It was shown that Bessel beams with first-ring diameters of 1.7 and 3.2 mm for topological charges ±1 and ±2, respectively, propagate over distances of up to 160 mm without dispersion. Calculations showed that the propagation distance can be increased by reducing the radiation wavelength or by using a telescopic system. In the first case, the propagation distance grows inversely with the wavelength, whereas in the latter case it increases as the square of the ratio of the telescope lens foci. Modelling of the passage of the vortex Bessel beams through a random phase screen and past amplitude obstacles demonstrated the self-healing ability of the beams. Even if an obstacle with a diameter of 10 mm blocks several central rings of the Bessel beam, the beam reconstructs itself after propagating a further length of about 100 mm. The results of the simulations are in good agreement with the experimental data, where the latter exist.
NASA Astrophysics Data System (ADS)
Lamb, Masen; Correia, Carlos; Sauvage, Jean-François; Véran, Jean-Pierre; Andersen, David; Vigan, Arthur; Wizinowich, Peter; van Dam, Marcos; Mugnier, Laurent; Bond, Charlotte
2016-07-01
We propose and apply two methods for estimating phase discontinuities for two realistic scenarios on VLT and Keck. The methods use both phase diversity and a form of image sharpening. For the case of VLT, we simulate the `low wind effect' (LWE), which is responsible for focal-plane errors in low-wind, good-seeing conditions. We successfully estimate the LWE with both methods, and show that applying them independently and in combination yields promising results. We also demonstrate the use of single-image phase diversity in the LWE estimation and show that it too yields promising results. Finally, we simulate segmented piston effects on Keck/NIRC2 images and successfully recover the induced phase errors using single-image phase diversity. We also show that on Keck we can estimate both the segmented piston errors and any Zernike modes associated with the non-common path.
Simulations of laser propagation and ionization in l'OASIS experiments
Dimitrov, D.A.; Bruhwiler, D.L.; Leemans, W.; Esarey, E.; Catravas, P.; Toth, C.; Shadwick, B.; Cary, J.R.; Giacone, R.
2002-06-30
We have conducted particle-in-cell simulations of laser pulse propagation through neutral He, including the effects of tunneling ionization, within the parameter regime of the l'OASIS experiments [1,2] at the Lawrence Berkeley National Laboratory (LBNL). The simulations show the theoretically predicted [3] blue shifting of the laser frequency at the leading edge of the pulse. The observed blue shifting is in good agreement with the experimental data. These results indicate that such computations can be used to accurately simulate a number of important effects related to tunneling ionization for laser-plasma accelerator concepts, such as steepening due to ionization-induced pump depletion, which can seed and enhance instabilities. Our simulations show self-modulation occurring earlier when tunneling ionization is included than for a pre-ionized plasma.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. In recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main programming choices for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
Accelerating Simulation of Seismic Wave Propagation by Multi-GPUs (Invited)
NASA Astrophysics Data System (ADS)
Okamoto, T.; Takenaka, H.; Nakamura, T.; Aoki, T.
2010-12-01
Simulation of seismic wave propagation is essential in modern seismology: the effects of irregular surface topography, internal discontinuities and heterogeneity on the seismic waveforms must be precisely modeled in order to probe the interiors of the Earth and other planets, to study earthquake sources, and to evaluate the strong ground motions due to earthquakes. Devices with high computing performance are necessary because large-scale simulations require more than one billion grid points. The GPU (Graphics Processing Unit) is a remarkable device for its many-core architecture, with more than one hundred processing units, and its high memory bandwidth. GPUs now deliver extremely high computing performance (more than one teraflop in single-precision arithmetic) at reduced power and cost compared to conventional CPUs. The simulation of seismic wave propagation is a memory-intensive problem that involves a large amount of data transfer between the memory and the arithmetic units, while the number of arithmetic calculations is relatively small. Therefore the simulation should benefit from the high memory bandwidth of the GPU. Thus several approaches to adopting GPUs for the simulation of seismic wave propagation have been emerging (e.g., Komatitsch et al., 2009; Micikevicius, 2009; Michea and Komatitsch, 2010; Aoi et al., SSJ 2009, JPGU 2010; Okamoto et al., SSJ 2009, SACSIS 2010). In this paper we describe our approach to accelerating the simulation of seismic wave propagation based on the finite-difference method (FDM) by adopting multi-GPU computing. The finite-difference scheme we use is the three-dimensional, velocity-stress staggered-grid scheme (e.g., Graves, 1996; Moczo et al., 2007) for heterogeneous media with perfect elasticity (incorporation of anelasticity is underway). We use the GPUs (NVIDIA S1070, 1.44 GHz) installed in the TSUBAME grid cluster in the Global Scientific Information and Computing Center, Tokyo Institute of Technology and NVIDIA
Effects of Crew Resource Management Training on Medical Errors in a Simulated Prehospital Setting
ERIC Educational Resources Information Center
Carhart, Elliot D.
2012-01-01
This applied dissertation investigated the effect of crew resource management (CRM) training on medical errors in a simulated prehospital setting. Specific areas addressed by this program included situational awareness, decision making, task management, teamwork, and communication. This study is believed to be the first investigation of CRM…
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2008-01-01
An approach for assessing the delamination propagation simulation capabilities of commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis were compared with the benchmark results, and good agreement could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the expected curved shape. However, the analysis of the SLB specimen yielded a curved front, as expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging, but further assessment on a structural level is required.
NASA Astrophysics Data System (ADS)
Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian
2016-08-01
Numerical solvers of wave equations have been widely used to simulate global seismic waves, including PP waves for modelling the 410/660 km discontinuities and Rayleigh waves for imaging crustal structure. In order to avoid the extra computational cost of ocean water effects, these numerical solvers usually adopt the water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of the water column approximation on the amplitude and phase shift of the PP waves. We also study the effects of the water column approximation on the phase velocity dispersion of the fundamental-mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) the error in PP amplitude and phase shift is less than 5 per cent and 9° at periods greater than 25 s for most oceanic regions, but at periods of 15 s or less, PP is inaccurate by up to 10 per cent in amplitude and a few seconds in time shift for deep oceans; (2) the error in Rayleigh wave phase velocity is less than 1 per cent at periods greater than 30 s in most oceanic regions, but up to 2 per cent for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and needs to be improved at shorter periods.
Simulation of Lamb wave propagation for the characterization of complex structures.
Agostini, Valentina; Delsanto, Pier Paolo; Genesio, Ivan; Olivero, Dimitri
2003-04-01
Reliable numerical simulation techniques represent a very valuable tool for analysis. For this purpose we investigated the applicability of the local interaction simulation approach (LISA) to the study of the propagation of Lamb waves in complex structures. The LISA allows very fast and flexible simulations, especially in conjunction with parallel processing, and it is particularly useful for complex (heterogeneous, anisotropic, attenuative, and/or nonlinear) media. We present simulations performed on a glass fiber reinforced plate, initially undamaged and then with a hole passing through its thickness (passing-by hole). In order to give a validation of the method, the results are compared with experimental data. Then we analyze the interaction of Lamb waves with notches, delaminations, and complex structures. In the first case the discontinuity due to a notch generates mode conversion, which may be used to predict the defect shape and size. In the case of a single delamination, the most striking "signature" is a time-shift delay, which may be observed in the temporal evolution of the signal recorded by a receiver. We also present some results obtained on a geometrically complex structure. Due to the inherent discontinuities, a wealth of propagation mechanisms are observed, which can be exploited for the purpose of quantitative nondestructive evaluation (NDE).
Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors
NASA Astrophysics Data System (ADS)
Yan, Feifei; Chang, Wenge; Li, Xiangyang
2015-12-01
Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of a bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying time and frequency synchronization techniques. Based on the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach incorporating time and frequency synchronization errors is proposed in this paper. Through a 2-D inverse Stolt transform in the 2-D frequency domain and phase compensation in the range-Doppler frequency domain, this method significantly improves the efficiency of scene raw data simulation. Simulation results for point targets and an extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.
Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading
NASA Technical Reports Server (NTRS)
Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.
1998-01-01
The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
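The mission-mix sampling described above (a random mission sequence with Poisson-distributed repeat counts) can be sketched as follows. The function and mission names are hypothetical, and the Poisson draw uses Knuth's classic algorithm:

```python
import math
import random

def simulate_mission_history(mission_types, mean_repeats, n_missions, rng=None):
    """One realization of an engine usage history: each mission in the
    sequence is drawn at random and repeated a Poisson-distributed number
    of times with the given mean occurrence rate (a sketch of the sampling
    scheme described above; names are illustrative)."""
    rng = rng or random.Random()
    threshold = math.exp(-mean_repeats)
    history = []
    for _ in range(n_missions):
        mission = rng.choice(mission_types)
        # Knuth's Poisson sampler (fine for small means)
        k, p = 0, rng.random()
        while p > threshold:
            k += 1
            p *= rng.random()
        history.extend([mission] * k)
    return history

# One realization: 1000 mission slots, mean of 3 repeats each
history = simulate_mission_history(["ferry", "attack", "training"], 3.0,
                                   1000, random.Random(42))
```

Each realization of `history` would then be mapped to stress, temperature, and time points for the fracture mechanics calculation; repeating the draw many times yields the cumulative life distribution the report describes.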
Parametric decay of a parallel propagating monochromatic whistler wave: Particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Ke, Yangguang; Gao, Xinliang; Lu, Quanming; Wang, Shui
2017-01-01
In this paper, by using one-dimensional (1-D) particle-in-cell simulations, we investigate the parametric decay of a parallel propagating monochromatic whistler wave with various wave frequencies and amplitudes. The pump whistler wave can decay into a backscattered daughter whistler wave and an ion acoustic wave, and the decay instability grows more rapidly with the increase of the frequency or amplitude. When the frequency or amplitude is sufficiently large, a multiple decay process may occur, where the daughter whistler wave undergoes a secondary decay into an ion acoustic wave and a forward propagating whistler wave. We also find that during the parametric decay a considerable part of protons can be accelerated along the background magnetic field by the enhanced ion acoustic wave through the Landau resonance. The implication of the parametric decay to the evolution of whistler waves in Earth's magnetosphere is also discussed in the paper.
NASA Astrophysics Data System (ADS)
Gai, F. F.; Pang, B. J.; Guan, G. S.
2009-03-01
In this paper, the SPH method in AUTODYN-2D is used to investigate the characteristics of debris-cloud propagation inside gas-filled pressure vessels under hypervelocity impact. The effect of the equation of state on the debris cloud has been investigated. Numerical simulations were performed to analyze the effect of gas pressure and impact conditions on the propagation of the debris clouds. The results show that increasing the gas pressure can reduce the damage inflicted by the debris cloud on the back wall of the vessel when the pressure is within a certain range. A smaller projectile leads to stronger deceleration of the axial velocity of the debris cloud, and the deceleration increases with impact velocity. The time at which venting begins is related to the "vacuum column" along the impact axis. The paper also studies the effect of impact velocity on the gas shock wave.
An Atomistic Simulation of Crack Propagation in a Nickel Single Crystal
NASA Technical Reports Server (NTRS)
Karimi, Majid
2002-01-01
The main objective of this paper is to determine the mechanisms of crack propagation in a nickel single crystal. Nickel was selected as a case study because we believe its physical properties are very close to those of nickel-base superalloys. We aim to identify generic trends that lead a single-crystalline material to failure. We believe that the results obtained here will be of interest to experimentalists in guiding them toward a more optimized experimental strategy. Dynamic crack propagation experiments are very difficult to perform. We are partially motivated to fill this gap by generating simulation results in lieu of experimental ones for cases where experiments cannot be done or where data are not available.
Simulation of the trans-oceanic tsunami propagation due to the 1883 Krakatau volcanic eruption
NASA Astrophysics Data System (ADS)
Choi, B. H.; Pelinovsky, E.; Kim, K. O.; Lee, J. S.
The 1883 Krakatau volcanic eruption generated a destructive tsunami higher than 40 m on the Indonesian coast, where more than 36 000 lives were lost. Sea level oscillations related to this event have been reported at significant distances from the source in the Indian, Atlantic and Pacific Oceans. The many reported manifestations of the Krakatau tsunami have been the subject of intense discussion, and it has been suggested that some of them are not related to the direct propagation of tsunami waves from the Krakatau volcanic eruption. The present paper analyzes the hydrodynamic part of the Krakatau event in detail. The worldwide propagation of the tsunami waves generated by the Krakatau volcanic eruption is studied numerically using two conventional models: the ray tracing method and a two-dimensional linear shallow-water model. The results of the numerical simulations are compared with available data on the tsunami registration.
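The two-dimensional linear shallow-water model mentioned above reduces, in one dimension, to a pair of coupled equations for surface elevation and velocity that can be advanced on a staggered grid. This is a minimal illustrative sketch under assumed parameters (uniform 4 km depth, arbitrary domain and initial hump), not the authors' trans-oceanic model:

```python
import math

def shallow_water_1d(h=4000.0, L=2.0e6, nx=400, nt=400, g=9.81):
    """Minimal 1-D linear shallow-water solver (staggered forward-backward
    scheme), illustrating the long-wave model used for trans-oceanic tsunami
    propagation. Returns the free surface eta after nt steps."""
    dx = L / nx
    c = math.sqrt(g * h)                 # long-wave speed, ~200 m/s at 4 km
    dt = 0.5 * dx / c                    # CFL-stable time step
    # Gaussian initial hump of unit height, a stand-in for the source
    eta = [math.exp(-((i * dx - L / 4) / 5e4) ** 2) for i in range(nx)]
    u = [0.0] * (nx + 1)                 # velocities on cell faces
    for _ in range(nt):
        for i in range(1, nx):           # momentum: du/dt = -g d(eta)/dx
            u[i] -= dt * g * (eta[i] - eta[i - 1]) / dx
        for i in range(nx):              # continuity: d(eta)/dt = -h du/dx
            eta[i] -= dt * h * (u[i + 1] - u[i]) / dx
        # u[0] = u[nx] = 0: closed (reflective) boundaries
    return eta

surface = shallow_water_1d()
```

The initial hump splits into two half-amplitude waves travelling at sqrt(g h), which at 4 km depth is roughly 200 m/s, the scale that sets trans-oceanic arrival times.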
3D dynamic simulation of crack propagation in extracorporeal shock wave lithotripsy
NASA Astrophysics Data System (ADS)
Wijerathne, M. L. L.; Hori, Muneo; Sakaguchi, Hide; Oguni, Kenji
2010-06-01
Some experimental observations of shock wave lithotripsy (SWL), including 3D dynamic crack propagation, are simulated with the aim of reproducing the fragmentation of kidney stones with SWL. Extracorporeal shock wave lithotripsy (ESWL) is the fragmentation of kidney stones by focusing an ultrasonic pressure pulse onto the stones. 3D models with fine discretization are used to accurately capture the high-amplitude shear shock waves. For solving the resulting large-scale dynamic crack propagation problem, PDS-FEM is used; it provides numerically efficient failure treatments. With a distributed-memory parallel code of PDS-FEM, experimentally observed 3D photoelastic images of transient stress waves and crack patterns in cylindrical samples are successfully reproduced. The numerical crack patterns are in good quantitative agreement with the experimental ones. The results show that the high-amplitude shear waves induced in the solid by the lithotriptor-generated shock wave play a dominant role in stone fragmentation.
Simulation of quasi-static hydraulic fracture propagation in porous media with XFEM
NASA Astrophysics Data System (ADS)
Juan-Lien Ramirez, Alina; Neuweiler, Insa; Löhnert, Stefan
2015-04-01
Hydraulic fracturing is the injection of a fracking fluid at high pressure into the underground. Its goal is to create and expand fracture networks to increase the rock permeability. It is a technique used, for example, for oil and gas recovery and for geothermal energy extraction, since higher rock permeability improves production. Many physical processes take place during fracking: rock deformation, fluid flow within the fractures, as well as flow into and through the porous rock. All these processes are strongly coupled, which makes numerical simulation rather challenging. We present a 2D numerical model that simulates the hydraulic propagation of an embedded fracture quasi-statically in a poroelastic, fully saturated material. Fluid flow within the porous rock is described by Darcy's law, and the flow within the fracture is approximated by a parallel-plate model. Additionally, the effect of leak-off is taken into consideration. The solid component of the porous medium is assumed to be linear elastic, and the propagation criteria are given by the energy release rate and the stress intensity factors [1]. The numerical method used for the spatial discretization is the eXtended Finite Element Method (XFEM) [2]. It is based on the standard Finite Element Method, but introduces additional degrees of freedom and enrichment functions to describe discontinuities locally in a system. Through them, the geometry of the discontinuity (e.g. a fracture) becomes independent of the mesh, allowing it to move freely through the domain without a mesh-adaptation step. With this numerical model we are able to simulate hydraulic fracture propagation with different initial fracture geometries and material parameters. Results from these simulations will also be presented. References [1] D. Gross and T. Seelig. Fracture Mechanics with an Introduction to Micromechanics. Springer, 2nd edition, (2011) [2] T. Belytschko and T. Black. Elastic crack growth in finite elements with minimal
Numerical Simulation of Propagation and Transformation of the MHD Waves in Sunspots
NASA Astrophysics Data System (ADS)
Parchevsky, Konstantin; Zhao, J.; Kosovichev, A.
2010-05-01
Direct numerical simulation of the propagation of MHD waves in a stratified medium in regions with non-uniform magnetic field is very important for understanding the scattering and transformation of waves by sunspots. We present numerical simulations of wave propagation through a sunspot in 3D. We compare results of propagation in two different magnetostatic models of sunspots, referred to as the "deep" and "shallow" models. The "deep" model has a convex shape of the magnetic field lines near the photosphere and non-zero horizontal perturbations of the sound speed down to the bottom of the model. The "shallow" model has a concave shape of the magnetic field lines near the photosphere and a horizontally uniform sound speed below 2 Mm. Waves reduce their amplitude when they reach the center of the sunspot and restore it after passing the center. For the "deep" model this effect is larger than for the "shallow" model. The wave amplitude depends on the distance of the source from the sunspot center. For the "shallow" model and a source distance of 9 Mm from the sunspot center, the wave amplitude at some moment (when the wavefront passes the sunspot center) becomes larger inside the sunspot than outside. For a source distance of 12 Mm, the wave amplitude remains smaller inside the sunspot than outside at all times. Using a filtering technique, we separated magnetoacoustic and magnetogravity waves. The simulations show that the sunspot changes the shape of the wavefront and the amplitude of the f-modes significantly more strongly than those of the p-modes. It is shown that inside the sunspot, magnetoacoustic and magnetogravity waves are not spatially separated, unlike the case of the horizontally uniform background model. We compared the simulation results with the wave signals (Green's functions) extracted from SOHO/MDI data for AR9787.
Preventing technology-induced errors in healthcare: the role of simulation.
Kushniruk, Andre W; Borycki, Elizabeth M; Anderson, James G; Anderson, Marilyn M
2009-01-01
We describe a novel approach to the study and prediction of technology-induced error in healthcare. The objective of our approach is to identify and reduce the potential for error so that the benefits of introducing information technology, such as Computerized Physician Order Entry (CPOE) or Electronic Health Records (EHRs), are maximized. The approach involves four phases. In Phase 1, we typically conduct small scale clinical simulations to assess whether or not the use of a new information technology can introduce error. (Human subjects are involved and user-system interactions are recorded.) In Phase 2, we analyze the results from Phase 1 to identify statistically significant relationships between usability issues and the occurrence of error (e.g., medication error). In Phase 3, we enter the results from Phase 2 into computer-based simulation models to explore the potential impact of the technology over time and across user populations. In Phase 4, we conduct naturalistic studies to examine whether or not the predictions made in Phases 2 and 3 apply to the real world. In closing, we discuss how the approach can be used to increase the safety of health information systems.
NASA Astrophysics Data System (ADS)
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
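A first-order flavor of the sensor sub-system and terrain geometry terms can be sketched as a simple vertical error budget over flat terrain: range noise projects by cos(theta), angular (IMU/scanner) noise scales with slant range, and GPS height error adds in quadrature. The component breakdown and default magnitudes below are illustrative assumptions, not the Optech ALTM 3100 model used in the study:

```python
import math

def lidar_vertical_error(altitude, scan_angle_deg, sigma_range=0.02,
                         sigma_angle=1e-4, sigma_gps=0.03):
    """First-order vertical error [m] of a LiDAR return over flat terrain:
    sqrt( (cos(theta)*sigma_range)^2 + (R*sin(theta)*sigma_angle)^2
          + sigma_gps^2 ),
    with R the slant range. All sigma defaults are hypothetical values
    chosen only to illustrate the trend with scan angle and altitude."""
    th = math.radians(scan_angle_deg)
    slant_range = altitude / math.cos(th)
    return math.sqrt((math.cos(th) * sigma_range) ** 2
                     + (slant_range * math.sin(th) * sigma_angle) ** 2
                     + sigma_gps ** 2)

# Error grows from nadir to the swath edge, and with flying height
e_nadir = lidar_vertical_error(1200.0, 0.0)
e_edge = lidar_vertical_error(1200.0, 15.0)
```

The sketch reproduces the qualitative behavior reported above (larger errors at higher scan angle, incidence angle, and altitude), though the numeric values depend entirely on the assumed sigmas.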
Chemical basis of Trotter-Suzuki errors in quantum chemistry simulation
NASA Astrophysics Data System (ADS)
Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan
2015-02-01
Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to discretization of the time evolution (known as "Trotterization") in terms of the norm of the error operator and analyzed scaling with respect to the number of spin orbitals. However, we find that these error bounds can be loose by up to 16 orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground-state error and number of spin orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.
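The Trotterization error discussed above can be illustrated on a toy two-term Hamiltonian: the first-order formula's operator-norm error shrinks roughly as 1/n in the number of Trotter steps. A minimal NumPy sketch with assumed toy matrices (Pauli operators, not a chemistry Hamiltonian):

```python
import numpy as np

def expm_h(H, t):
    """exp(-i H t) for a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

# Two non-commuting terms of a toy Hamiltonian
A = np.array([[1.0, 0.0], [0.0, -1.0]])     # sigma_z
B = np.array([[0.0, 1.0], [1.0, 0.0]])      # sigma_x

def trotter_error(t, n):
    """Spectral-norm distance between exp(-i(A+B)t) and the n-step
    first-order Trotter formula (exp(-iAt/n) exp(-iBt/n))**n."""
    exact = expm_h(A + B, t)
    step = expm_h(A, t / n) @ expm_h(B, t / n)
    return np.linalg.norm(exact - np.linalg.matrix_power(step, n), 2)

# Doubling the step count roughly halves the first-order error
e10 = trotter_error(1.0, 10)
e20 = trotter_error(1.0, 20)
```

The leading error term scales as (t^2 / 2n) ||[A, B]||, which is why the abstract's point matters: for molecules the relevant commutator norms, set by chemical properties, can be far smaller than worst-case bounds suggest.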
On the Chemical Basis of Trotter-Suzuki Errors in Quantum Chemistry Simulation
NASA Astrophysics Data System (ADS)
Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan
2015-03-01
Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to Trotterization in terms of the norm of the error operator and analyzed scaling with respect to the number of spin-orbitals. However, we find that these error bounds can be loose by up to sixteen orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground state error and number of spin-orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and to estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.
Absolute Time Error Calibration of GPS Receivers Using Advanced GPS Simulators
1997-12-01
29th Annual Precise Time and Time Interval (PTTI) Meeting. ABSOLUTE TIME ERROR CALIBRATION OF GPS RECEIVERS USING ADVANCED GPS SIMULATORS. E.D..., DC 20375 USA. Abstract: Precise time transfer experiments using GPS with time stabilities under ten nanoseconds are commonly being reported within the time transfer community. Relative calibrations are done by measuring the time error of one GPS receiver versus a "known master reference receiver."
Time-domain study on reproducibility of laser-based soft-error simulation
NASA Astrophysics Data System (ADS)
Itsuji, Hiroaki; Kobayashi, Daisuke; Lourenco, Nelson E.; Hirose, Kazuyuki
2017-04-01
We study the soft-error issue, a circuit malfunction caused by ion-radiation-induced noise currents. We have developed a laser-based soft-error simulation system to emulate the noise and evaluate its reproducibility in the time domain. We find that this system, which utilizes a two-photon absorption process, can reproduce the shape of ion-induced transient currents, such as those induced by neutrons at ground level. A technique for extracting the initial carrier structure inside the device is also presented.
Global particle simulation of lower hybrid wave propagation and mode conversion in tokamaks
Bao, J.; Lin, Z.; Kuley, A.
2015-12-10
Particle-in-cell simulation of lower hybrid (LH) waves in core plasmas is presented with a realistic electron-to-ion mass ratio in toroidal geometry. Because LH waves mainly interact with electrons to drive the current, the ion dynamics are described by cold fluid equations for simplicity, while the electron dynamics are described by drift kinetic equations. This model can be considered a new method to study LH waves in tokamak plasmas, with advantages for nonlinear simulations. The mode conversion between slow and fast waves is observed in the simulation when the accessibility condition is not satisfied, which is consistent with the theory. The poloidal spectrum upshift and broadening effects are observed during LH wave propagation in the toroidal geometry.
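The accessibility condition mentioned above is commonly quoted in the approximate form n_acc = wpe/wce + sqrt(1 + wpe^2/wce^2): slow waves launched with a smaller parallel refractive index mode-convert to the fast wave before reaching the core. A hedged sketch of that textbook estimate (not the dispersion relation solved in the simulation; parameter values are illustrative):

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]
ME = 9.109e-31     # electron mass [kg]
QE = 1.602e-19     # elementary charge [C]

def lh_accessibility_n_par(ne, B):
    """Commonly quoted critical parallel refractive index for lower hybrid
    wave accessibility: n_acc = wpe/wce + sqrt(1 + (wpe/wce)**2), evaluated
    at local density ne [m^-3] and magnetic field B [T]. A simplified
    textbook criterion, not the full cold-plasma dispersion."""
    wpe = math.sqrt(ne * QE ** 2 / (EPS0 * ME))   # plasma frequency
    wce = QE * B / ME                             # cyclotron frequency
    r = wpe / wce
    return r + math.sqrt(1.0 + r * r)

# Illustrative core parameters: 5e19 m^-3, 5 T
n_acc = lh_accessibility_n_par(5e19, 5.0)
```

Launched spectra with n_parallel below this value would, by this estimate, encounter the slow-fast mode-conversion layer that the simulation observes.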
NASA Astrophysics Data System (ADS)
Sonnad, Kiran G.; Hammond, Kenneth C.; Schwartz, Robert M.; Veitzer, Seth A.
2014-08-01
The use of transverse electric (TE) waves has proved to be a powerful, noninvasive method for estimating the densities of electron clouds formed in particle accelerators. Results from the plasma simulation program VSim have served as a useful guide for experimental studies related to this method, which have been performed at various accelerator facilities. This paper provides results of the simulation and modeling work done in conjunction with experimental efforts carried out at the Cornell electron storage ring “Test Accelerator” (CESRTA). This paper begins with a discussion of the phase shift induced by electron clouds in the transmission of RF waves, followed by the effect of reflections along the beam pipe, simulation of the resonant standing wave frequency shifts and finally the effects of external magnetic fields, namely dipoles and wigglers. A derivation of the dispersion relationship of wave propagation for arbitrary geometries in field free regions with a cold, uniform cloud density is also provided.
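The density estimate behind this technique rests on the cold-plasma dispersion relation: a tenuous electron cloud slightly reduces the phase advance of an RF wave, and in the small-density, free-space limit the shift is Δφ ≈ ωp²L/(2cω). A sketch of that standard relation and its inversion (parameters are hypothetical, not CesrTA values, and the waveguide-cutoff enhancement near the beam-pipe cutoff frequency is ignored):

```python
import numpy as np

# Physical constants (SI)
E_CHARGE, E_MASS = 1.602176634e-19, 9.1093837015e-31
EPS0, C_LIGHT = 8.8541878128e-12, 2.99792458e8

def phase_shift(n_e, length, f_rf):
    """Phase advance removed by a cold, uniform electron cloud of density n_e
    over a path 'length' at RF frequency f_rf, in the small-density limit:
    delta_phi ~ wp^2 * L / (2 c w), with wp the plasma frequency."""
    w = 2.0 * np.pi * f_rf
    wp2 = n_e * E_CHARGE**2 / (EPS0 * E_MASS)   # plasma frequency squared
    return wp2 * length / (2.0 * C_LIGHT * w)

def density_from_shift(dphi, length, f_rf):
    """Invert the small-density formula to estimate the cloud density."""
    w = 2.0 * np.pi * f_rf
    return 2.0 * C_LIGHT * w * dphi * EPS0 * E_MASS / (length * E_CHARGE**2)

# Hypothetical numbers: 1e12 m^-3 cloud, 10 m path, 2 GHz carrier
dphi = phase_shift(1.0e12, 10.0, 2.0e9)         # milliradian-scale shift
n_est = density_from_shift(dphi, 10.0, 2.0e9)   # round-trip density estimate
```

The milliradian order of magnitude is consistent with the phase shifts discussed in TE-wave measurements; in a real beam pipe the dispersion of the waveguide mode enhances the shift and must be included.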
Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation
NASA Astrophysics Data System (ADS)
Li, C.
2012-07-01
POS, integrated from GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and a method for setting initial weights for the adjustment solution of single-image vanishing points is presented. Vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangular geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. The Monte Carlo methods utilized for this random statistical estimation are presented. Finally, experimental results for vanishing point coordinates and their error distributions are shown and analyzed.
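The triangle relationship can be sketched numerically: for three orthogonal vanishing directions, the principal point is the orthocenter of the triangle (VX, VY, VZ), so VZ follows from VX and VY by solving two perpendicularity conditions. The sketch below (not the paper's adjustment code; coordinates, principal point, and the 2-pixel error ellipses are hypothetical) propagates Gaussian errors in VX and VY to VZ by Monte Carlo:

```python
import numpy as np

def third_vanishing_point(vx, vy, pp):
    """Solve for VZ via the orthocenter property: the principal point pp is
    the orthocenter of triangle (VX, VY, VZ), so the altitude VX-pp is
    perpendicular to side VY-VZ, and VY-pp to side VX-VZ."""
    d1, d2 = vx - pp, vy - pp
    # (vz - vy) . d1 = 0  and  (vz - vx) . d2 = 0  ->  2x2 linear system
    A = np.array([d1, d2])
    b = np.array([d1 @ vy, d2 @ vx])
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
pp = np.array([0.0, 0.0])                    # principal point (hypothetical)
vx_mean = np.array([1000.0, 0.0])            # VX, VY chosen so that
vy_mean = np.array([-500.0, 800.0])          # (VX-pp).(VY-pp) = -f^2 < 0
cov = np.diag([4.0, 4.0])                    # ~2 px error ellipses for VX, VY

samples = np.array([
    third_vanishing_point(rng.multivariate_normal(vx_mean, cov),
                          rng.multivariate_normal(vy_mean, cov), pp)
    for _ in range(5000)
])
vz_mean = samples.mean(axis=0)               # near the error-free VZ
vz_cov = np.cov(samples.T)                   # simulated error distribution of VZ
```

With these numbers the error-free VZ sits near (-500, -937.5); the sample covariance `vz_cov` plays the role of the evaluated error distribution of the third vanishing point.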
Canestrari, Niccolo; Chubar, Oleg; Reininger, Ruben
2014-09-01
X-ray beamlines in modern synchrotron radiation sources make extensive use of grazing-incidence reflective optics, in particular Kirkpatrick-Baez elliptical mirror systems. These systems can focus the incoming X-rays down to nanometer-scale spot sizes while maintaining relatively large acceptance apertures and high flux in the focused radiation spots. In low-emittance storage rings and in free-electron lasers such systems are used with partially or even nearly fully coherent X-ray beams and often target diffraction-limited resolution. Therefore, their accurate simulation and modeling has to be performed within the framework of wave optics. Here the implementation and benchmarking of a wave-optics method for the simulation of grazing-incidence mirrors based on the local stationary-phase approximation or, in other words, the local propagation of the radiation electric field along geometrical rays, is described. The proposed method is CPU-efficient and fully compatible with the numerical methods of Fourier optics. It has been implemented in the Synchrotron Radiation Workshop (SRW) computer code and extensively tested against the geometrical ray-tracing code SHADOW. The test simulations have been performed for cases without and with diffraction at mirror apertures, including cases where the grazing-incidence mirrors can be hardly approximated by ideal lenses. Good agreement between the SRW and SHADOW simulation results is observed in the cases without diffraction. The differences between the simulation results obtained by the two codes in diffraction-dominated cases for illumination with fully or partially coherent radiation are analyzed and interpreted. The application of the new method for the simulation of wavefront propagation through a high-resolution X-ray microspectroscopy beamline at the National Synchrotron Light Source II (Brookhaven National Laboratory, USA) is demonstrated.
Hoffelner, J; Landes, H; Kaltenbacher, M; Lerch, R
2001-05-01
A recently developed finite element method (FEM) for the numerical simulation of nonlinear sound wave propagation in thermoviscous fluids is presented. Based on the nonlinear wave equation as derived by Kuznetsov, typical effects associated with nonlinear acoustics, such as generation of higher harmonics and dissipation resulting from the propagation of a finite amplitude wave through a thermoviscous medium, are covered. An efficient time-stepping algorithm based on a modification of the standard Newmark method is used for solving the non-linear semidiscrete equation system. The method is verified by comparison with the well-known Fubini and Fay solutions for plane wave problems, where good agreement is found. As a practical application, a high intensity focused ultrasound (HIFU) source is considered. Impedance simulations of the piezoelectric transducer and the complete HIFU source loaded with air and water are performed and compared with measured data. Measurements of radiated low and high amplitude pressure pulses are compared with corresponding simulation results. The obtained good agreement demonstrates validity and applicability of the nonlinear FEM.
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieving parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in
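For context, the computational core that such accelerator frameworks parallelize is an explicit stencil sweep over a 2D grid. Below is a minimal serial NumPy stand-in (a plain 2D wave equation on a periodic grid, not the cardiac action potential model; grid size and pulse parameters are hypothetical). Each OpenACC pragma or OpenCL kernel essentially maps this per-cell update across threads:

```python
import numpy as np

def wave_step(u_prev, u, c, dt, dx):
    """One leapfrog step of u_tt = c^2 (u_xx + u_yy) on a periodic grid."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    return 2.0 * u - u_prev + (c * dt)**2 * lap

n, c, dx = 128, 1.0, 1.0
dt = 0.5 * dx / c                            # within the CFL stability limit
x = np.arange(n) - n // 2
u = np.exp(-(x[:, None]**2 + x[None, :]**2) / 25.0)  # Gaussian pulse at rest
u_prev = u.copy()
for _ in range(100):
    u_prev, u = u, wave_step(u_prev, u, c, dt, dx)   # pulse spreads as a ring
```

The double loop implied by the Laplacian is embarrassingly parallel across grid cells, which is why a few pragmas suffice to offload it.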
Source altitude for experiments to simulate space-to-earth laser propagation.
NASA Technical Reports Server (NTRS)
Minott, P. O.
1973-01-01
The bias in scintillation measurements caused by the proximity of a spherical-wave source to the turbulence region of the atmosphere is predicted, and the laser-source altitude required for meaningful experiments simulating space-to-earth laser propagation is estimated. It is concluded that the source should be located at two or more times the maximum altitude of the tropopause to ensure that all measurements are not biased by more than 25%. Thus the vehicle used for experiments of this type should be capable of reaching a minimum altitude of 32 km.
NASA Astrophysics Data System (ADS)
Ishii, Katsuhiro; Nishidate, Izumi; Iwai, Toshiaki
2014-05-01
Numerical analysis of optical propagation in highly scattering media is investigated when light is normally incident to the surface and re-emerges backward from the same point. This situation corresponds to practical light scattering setups, such as in optical coherence tomography. The simulation uses the path-length-assigned Monte Carlo method based on an ellipsoidal algorithm. The spatial distribution of the scattered light is determined and the dependence of its width and penetration depth on the path-length is found. The backscattered light is classified into three types, in which ballistic, snake, and diffuse photons are dominant.
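A stripped-down version of such a path-length-resolved Monte Carlo can be sketched with isotropic scattering (the paper's ellipsoidal path-length-assigned algorithm is more elaborate; the mean free path, photon count, and event cap below are hypothetical). Photons enter normally, random-walk until they re-cross the surface, and the path length and maximum penetration depth of each backscattered photon are recorded:

```python
import numpy as np

rng = np.random.default_rng(1)
mfp, n_photons = 0.1, 20000        # mean free path [mm] and photon count (hypothetical)

path_lengths, depths = [], []
for _ in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])   # normal incidence into z > 0
    travelled, max_depth = 0.0, 0.0
    for _ in range(200):                    # cap on scattering events
        step = rng.exponential(mfp)         # free path between scattering events
        pos = pos + step * direction
        travelled += step
        if pos[2] < 0.0:                    # re-emerged backward through the surface
            path_lengths.append(travelled)
            depths.append(max_depth)
            break
        max_depth = max(max_depth, pos[2])
        v = rng.normal(size=3)              # isotropic new scattering direction
        direction = v / np.linalg.norm(v)

path_lengths, depths = np.array(path_lengths), np.array(depths)
```

Binning `depths` by `path_lengths` reproduces the qualitative dependence the abstract describes: longer assigned path lengths correspond, on average, to deeper penetration.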
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; Touma, Danielle; Ruby Leung, L.
2016-09-19
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of the persistent failure of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of the seasonal precipitation distribution, and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and in atmospheric latent heating over the slopes of the Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land–atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver of the errors over South Asia. Ultimately, these results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs, and underscore the importance of land–atmosphere interactions in the development and maintenance of SAM.
Errors in the Simulated Heat Budget of CGCMs in the Eastern Part of the Tropical Oceans
NASA Astrophysics Data System (ADS)
Hazel, J.; Masarik, M. T.; Mechoso, C. R.; Small, R. J.; Curchitser, E. N.
2014-12-01
The simulation of the tropical climate by coupled atmosphere-ocean general circulation models (CGCMs) shows severe warm biases in the sea-surface temperature (SST) field of the southeastern parts of the Pacific and the Atlantic (SEP and SEA, respectively). The errors are strongest near the land mass, with a broad plume extending west. Also, the equatorial cold tongue is too strong and extends too far to the west. The simulated precipitation field generally shows a persistent double Inter-tropical Convergence Zone (ITCZ). Tremendous effort has been made to improve CGCM performance in general and to address these tropical errors in particular. The present paper starts by comparing Taylor diagrams of the SST errors in the SEP and SEA from CGCMs participating in the Coupled Model Intercomparison Project phases 3 and 5 (CMIP3 and CMIP5, respectively). Some improvement is noted in models that performed poorly in CMIP3, but the overall performance is broadly similar across the two intercomparison projects. We explore the hypothesis that an improved representation of atmosphere-ocean interaction involving stratocumulus cloud decks and oceanic upwelling is essential to reduce errors in the SEP and SEA. To estimate the error contribution of clouds and upwelling, we examine the upper ocean surface heat flux budget. The resolution of the oceanic component of the CGCMs in both CMIP3 and CMIP5 is too coarse for a realistic representation of upwelling. Therefore, we also examine simulations by the Nested Regional Climate Model (nRCM) system, a CGCM with a very high-resolution regional model embedded in coastal regions. The nRCM consists of the Community Atmosphere Model (CAM, run at 1°) coupled to the global Parallel Ocean Program model (POP, run at 1°), to which the Regional Ocean Modeling System (ROMS, run at 5-10 km) is nested in selected coastal regions.
Elias, John J.; Kelly, Michael J.; Smith, Kathryn E.; Gall, Kenneth A.; Farr, Jack
2016-01-01
Background: Medial patellofemoral ligament (MPFL) reconstruction is performed to prevent recurrent instability, but errors in femoral fixation can elevate graft tension. Hypothesis: Errors related to femoral fixation will overconstrain the patella and increase medial patellofemoral pressures. Study Design: Controlled laboratory study. Methods: Five knees with patellar instability were represented with computational models. Kinematics during knee extension were characterized from computational reconstruction of motion performed within a dynamic computed tomography (CT) scanner. Multibody dynamic simulation of knee extension, with discrete element analysis used to quantify contact pressures, was performed for the preoperative condition and after MPFL reconstruction. A standard femoral attachment and graft resting length were set for each knee. The resting length was decreased by 2 mm, and the femoral attachment was shifted 5 mm posteriorly. The simulated errors were also combined. Root-mean-square errors were quantified for the comparison of preoperative patellar lateral shift and tilt between computationally reconstructed motion and dynamic simulation. Simulation output was compared between the preoperative and MPFL reconstruction conditions with repeated-measures Friedman tests and Dunnett comparisons against a control, which was the standard MPFL condition, with statistical significance set at P < .05. Results: Root-mean-square errors for simulated patellar tilt and shift were 5.8° and 3.3 mm, respectively. Patellar lateral tracking for the preoperative condition was significantly larger near full extension compared with the standard MPFL reconstruction (mean differences of 8 mm and 13° for shift and tilt, respectively, at 0°), and lateral tracking was significantly smaller for a posterior femoral attachment (mean differences of 3 mm and 4° for shift and tilt, respectively, at 0°). The maximum medial pressure was also larger for the short graft with a
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
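ABFT for matrix multiplication is commonly built on the Huang-Abraham checksum scheme: carry a column-checksum row of A and a row-checksum column of B through the multiplication, then verify the checksums of the product. A sketch under that assumption (the matrix sizes and the injected upset are illustrative; this is not the BITFLIPS code):

```python
import numpy as np

def abft_matmul(A, B, inject_flip=False):
    """Algorithm-based fault tolerance for C = A @ B (Huang-Abraham scheme):
    checksums computed before the multiply must match checksums of the result."""
    Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum column
    C = Ac @ Br
    if inject_flip:
        C[1, 1] += 1.0          # simulate a radiation-induced upset in the result
    data = C[:-1, :-1]
    row_ok = np.allclose(C[:-1, -1], data.sum(axis=1))
    col_ok = np.allclose(C[-1, :-1], data.sum(axis=0))
    return data, (row_ok and col_ok)

rng = np.random.default_rng(2)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
C_clean, clean_ok = abft_matmul(A, B)                  # checksums consistent
_, flipped_ok = abft_matmul(A, B, inject_flip=True)    # upset is detected
```

The intersecting row and column that fail the check also localize the corrupted element, which is what makes the scheme attractive for partially hardened memory.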
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
Nagatani, Yoshiki; Mizuno, Katsunori; Saeki, Takashi; Matsukawa, Mami; Sakaguchi, Takefumi; Hosoi, Hiroshi
2008-11-01
In cancellous bone, longitudinal waves often separate into fast and slow waves depending on the alignment of bone trabeculae in the propagation path. This interesting phenomenon becomes an effective tool for the diagnosis of osteoporosis because wave propagation behavior depends on the bone structure. Since the fast wave mainly propagates in trabeculae, this wave is considered to reflect the structure of trabeculae. For a new diagnosis method using the information in this fast wave, therefore, it is necessary to understand its generation mechanism and propagation behavior precisely. In this study, the generation process of the fast wave was examined by numerical simulations using the elastic finite-difference time-domain (FDTD) method and by experimental measurements. As simulation models, three-dimensional X-ray computed tomography (CT) data of actual bone samples were used. Simulation and experimental results showed that the attenuation of the fast wave was always higher in the early stage of propagation and gradually decreased as the wave propagated in bone. This phenomenon is thought to arise from the complicated propagation paths of fast waves in cancellous bone.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and then an incomplete fourth-order polynomial response surface model (RSM) is developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
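The RSM-plus-MCS idea can be illustrated compactly: fit a cheap polynomial surrogate to a handful of runs of the expensive model, then do the random sampling on the surrogate. The sketch below uses a single-DOF mass-spring natural frequency as a stand-in for the finite-element model and a full quadratic in place of the paper's incomplete fourth-order polynomial; all numbers are hypothetical:

```python
import numpy as np

def expensive_model(k, m):
    """Stand-in for a costly simulation: natural frequency [Hz] of a
    1-DOF mass-spring system, f = sqrt(k/m) / (2 pi)."""
    return np.sqrt(k / m) / (2.0 * np.pi)

# Fit a quadratic response surface on a small 5x5 design around nominal (k, m)
k_pts = np.linspace(900.0, 1100.0, 5)
m_pts = np.linspace(0.9, 1.1, 5)
K, M = np.meshgrid(k_pts, m_pts)
X = np.column_stack([np.ones(K.size), K.ravel(), M.ravel(),
                     K.ravel()**2, M.ravel()**2, (K * M).ravel()])
coef, *_ = np.linalg.lstsq(X, expensive_model(K, M).ravel(), rcond=None)

def rsm(k, m):
    """Evaluate the fitted response surface (cheap to sample)."""
    return np.column_stack([np.ones_like(k), k, m, k**2, m**2, k * m]) @ coef

# Cheap Monte Carlo on the surrogate instead of the full model
rng = np.random.default_rng(3)
k_s = rng.normal(1000.0, 20.0, 50000)       # uncertain stiffness
m_s = rng.normal(1.0, 0.02, 50000)          # uncertain mass
f_s = rsm(k_s, m_s)
mc_mean, mc_std = f_s.mean(), f_s.std()     # propagated output statistics
```

The 25 expensive evaluations buy 50,000 surrogate samples; the inverse step in the paper then adjusts the assumed parameter mean and covariance until these propagated statistics match test data.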
NASA Astrophysics Data System (ADS)
Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan
2017-03-01
This paper presents a method for modeling and simulation of shear wave generation from a nonlinear Acoustic Radiation Force Impulse (ARFI), which is treated as a distributed force applied at the focal region of a HIFU transducer radiating in the nonlinear regime. The shear wave propagation is simulated by solving Navier's equation with the distributed nonlinear ARFI as the source of the shear wave. The Wigner-Ville Distribution (WVD), a time-frequency analysis method, is then used to detect the shear wave at different local points in the region of interest. The WVD yields an estimate of the shear wave's time of arrival, its mean frequency, and the local attenuation, which can be utilized to estimate the medium's shear modulus and shear viscosity using the Voigt model.
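The WVD step can be sketched on a synthetic signal: the discrete Wigner-Ville distribution correlates the signal with itself at symmetric lags and Fourier-transforms over the lag, concentrating a tone's energy at its frequency at each instant. This toy single-tone check is not the paper's shear-wave pipeline; note the factor-of-two frequency scaling that comes from the 2-sample effective lag spacing:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x:
    W[n, k] = FFT over lag m of x[n+m] * conj(x[n-m]). Because the
    effective lag spacing is 2 samples, bin k maps to frequency k/(2N)."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)              # lags available at this instant
        kernel = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real         # conjugate-symmetric -> real
    return W

N, f0 = 128, 0.1                     # samples and tone frequency (fs = 1)
t = np.arange(N)
x = np.exp(2j * np.pi * f0 * t)      # analytic single-tone test signal
W = wigner_ville(x)
f_est = np.argmax(W[N // 2]) / (2.0 * N)   # peak frequency at mid-time
```

For a propagating shear-wave pulse, the time index of the energy peak at each spatial point gives the time of arrival, and the first moment over frequency gives the mean frequency the abstract refers to.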
Titze, Ingo R.; Palaparthi, Anil; Smith, Simeon L.
2014-01-01
Time-domain computer simulation of sound production in airways is a widely used tool, both for research and synthetic speech production technology. Speed of computation is generally the rationale for one-dimensional approaches to sound propagation and radiation. Transmission line and wave-reflection (scattering) algorithms are used to produce formant frequencies and bandwidths for arbitrarily shaped airways. Some benchmark graphs and tables are provided for formant frequencies and bandwidth calculations based on specific mathematical terms in the one-dimensional Navier–Stokes equation. Some rules are provided here for temporal and spatial discretization in terms of desired accuracy and stability of the solution. Kinetic losses, which have been difficult to quantify in frequency-domain simulations, are quantified here on the basis of the measurements of Scherer, Torkaman, Kucinschi, and Afjeh [(2010). J. Acoust. Soc. Am. 128(2), 828–838]. PMID:25480071
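A minimal wave-reflection (scattering) simulation of the kind described can be written in a few lines for a uniform tube: with a 0.175 m tract closed at the glottis and open at the lips, the first formant should land near the quarter-wave resonance c/(4L) ≈ 500 Hz. Section count, end losses, and lengths below are hypothetical, and a real tract shape would add Kelly-Lochbaum scattering junctions between sections of differing area:

```python
import numpy as np

c, L, n_sec = 350.0, 0.175, 20           # speed of sound, tract length, sections
fs = c * n_sec / L                       # one section of travel per sample (40 kHz)
fwd, bwd = np.zeros(n_sec), np.zeros(n_sec)   # right- and left-going pressure rails
out = np.zeros(4096)
fwd[0] = 1.0                             # impulsive excitation at the glottis
for t in range(out.size):
    out[t] = fwd[-1]                     # pressure wave arriving at the lips
    lip_ref = -0.99 * fwd[-1]            # open end: sign-inverting, slightly lossy
    glo_ref = 0.99 * bwd[0]              # closed end: sign-preserving, slightly lossy
    fwd = np.concatenate(([glo_ref], fwd[:-1]))   # propagate one section per sample
    bwd = np.concatenate((bwd[1:], [lip_ref]))

spec = np.abs(np.fft.rfft(out))
freqs = np.fft.rfftfreq(out.size, 1.0 / fs)
band = (freqs > 200.0) & (freqs < 800.0)
f1 = freqs[band][np.argmax(spec[band])]  # first formant estimate, near 500 Hz
```

The alternating-sign reflections give only odd resonances (≈500, 1500, 2500 Hz for a uniform tube), matching the textbook quarter-wave pattern the benchmark tables in such papers are checked against.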
A combined ADER-DG and PML approach for simulating wave propagation in unbounded domains
NASA Astrophysics Data System (ADS)
Amler, Thomas G.; Hoteit, Ibrahim; Alkhalifah, Tariq A.
2012-09-01
In this work, we present a numerical approach for simulating wave propagation in unbounded domains which combines discontinuous Galerkin methods with arbitrary high-order time integration (ADER-DG) and a stabilized modification of perfectly matched layers (PML). Here, the ADER-DG method is applied to Bérenger's formulation of PML. The instabilities caused by the original PML formulation are treated by a fractional step method that allows monitoring of whether waves are damped in the PML region. In grid cells where waves are amplified by the PML, the contribution of the damping terms is neglected and the auxiliary variables are reset. Results of 2D simulations in acoustic media with constant and discontinuous material parameters are presented to illustrate the performance of the method.
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept under normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating that the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.
A phase screen model for simulating numerically the propagation of a laser beam in rain
Lukin, I P; Rychkov, D S; Falits, A V; Lai, Kin S; Liu, Min R
2009-09-30
A method based on the generalisation of the phase screen method for a continuous random medium is proposed for numerically simulating the propagation of laser radiation in a turbulent atmosphere with precipitation. In the phase screen model for the discrete component of a heterogeneous 'air-rain droplet' medium, the amplitude screen describing the scattering of an optical field by discrete particles of the medium is replaced by an equivalent phase screen with a spectrum of the correlation function of the effective dielectric constant fluctuations that is similar to the spectrum of the discrete scattering component - water droplets in air. The 'turbulent' phase screen is constructed on the basis of the Kolmogorov model, while the 'rain' screen model utilises the exponential distribution of the number of rain drops with respect to their radii as a function of the rain intensity. The results of the numerical simulation are compared with known theoretical estimates for a large-scale discrete scattering medium. (propagation of laser radiation in matter)
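The 'turbulent' screen in such models is typically synthesized by spectral filtering: shape white Gaussian noise with the square root of the Kolmogorov phase spectrum and inverse-FFT. A sketch of that standard recipe (the 0.023 r0^(-5/3) normalization follows the common Fried-parameter form and is approximate; subharmonic corrections for the poorly sampled low frequencies are omitted, and the grid parameters are hypothetical):

```python
import numpy as np

def kolmogorov_phase_screen(n, delta, r0, rng):
    """Random turbulent phase screen via FFT filtering of white noise with
    the Kolmogorov phase spectrum Phi(k) = 0.023 r0^(-5/3) k^(-11/3)."""
    f = np.fft.fftfreq(n, delta)                 # spatial frequencies [1/m]
    kx, ky = np.meshgrid(f, f)
    kappa = np.hypot(kx, ky)
    kappa[0, 0] = 1.0                            # placeholder; zeroed just below
    psd = 0.023 * r0 ** (-5.0 / 3.0) * kappa ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                              # drop the undefined piston term
    df = 1.0 / (n * delta)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return screen.real                           # phase in radians

rng = np.random.default_rng(4)
screen = kolmogorov_phase_screen(256, 0.01, 0.1, rng)   # 2.56 m grid, r0 = 10 cm
```

The paper's 'rain' screen replaces this power-law spectrum with one matched to the droplet-size statistics, so both components slot into the same split-step propagation loop.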
Atomistic Simulation of Environment-Assisted Crack Propagation Behavior of SiO2
NASA Astrophysics Data System (ADS)
Yasukawa, Akio
A modified extended Tersoff interatomic potential function is proposed to simulate environment-assisted crack propagation behavior. First, the physical properties of Si, O2, H2, SiO2, and H2O were calculated with this modified function, and the calculated values were confirmed to agree very well with measured values. Next, the potential surface of the process transporting H2O molecules to the crack tip of SiO2 material was calculated with the same function. The relationship between the crack propagation velocity "υ" and the stress intensity factor "K" was calculated based on this surface, and the results agreed well with experimental results. This simulation clarified that the crack velocity is controlled by the H2O transport process in both regions I and II of the "υ-K curve". In region I, H2O molecules have physically limited access to the crack tip due to the small opening of the crack, which acts as an energy barrier to transporting H2O molecules. Due to the relatively large crack opening in region II, H2O molecules have free access to the crack tip without any energy barrier. This difference produces the bend in the "υ-K curve" between regions I and II.
Parallel Simulation of Wave Propagation in Three-Dimensional Poroelastic Media
NASA Astrophysics Data System (ADS)
Sheen, D.; Baag, C.; Tuncay, K.; Ortoleva, P. J.
2003-12-01
A parallelized velocity-stress staggered-grid finite-difference method to simulate wave propagation in 3-D heterogeneous poroelastic media is presented. Biot's poroelasticity theory is used to study the behavior of the wavefield in fluid-saturated media. In poroelasticity theory, the fluid velocities and pressure are included as field variables in addition to those of pure elasticity in order to describe the interaction between pore fluid and solid. Discretization of the governing equations for the finite-difference approximation is performed for a total of 13 field-variable components in 3-D Cartesian coordinates: six components of velocity, six components of solid stress, and one component of fluid pressure. The scheme has fourth-order accuracy in space and second-order accuracy in time. Also, to simulate wave propagation in an unbounded medium, the perfectly matched layer (PML) method is used as an absorbing boundary condition. In contrast with the purely elastic problem, the larger number of components needed to describe poroelasticity inevitably requires a large amount of core memory. For modeling at a realistic scale, the computation can hardly be run on serial platforms, so a computationally efficient scheme that runs in a large parallel environment is required. The parallel implementation is achieved by using a spatial decomposition and the portable Message Passing Interface (MPI) for communication between neighboring processors. Direct comparisons are made between serial and parallel computations. The necessity and efficiency of parallelization for poroelastic wave modeling are also demonstrated using model examples.
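The spatial-decomposition strategy can be illustrated without MPI: each "rank" owns a block of the grid plus one ghost cell per interior boundary, and a halo exchange after every step stands in for the MPI send/receive. The sketch below uses a toy 1D explicit stencil in place of the 13-component staggered-grid update; the decomposed run should reproduce the serial run exactly:

```python
import numpy as np

def update(u):
    """One explicit stencil step (stand-in for the velocity-stress update;
    the halo-exchange pattern, not the physics, is what is illustrated)."""
    un = u.copy()
    un[1:-1] = u[1:-1] + 0.25 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

n, steps, half = 64, 50, 32
u0 = np.sin(2.0 * np.pi * np.arange(n) / n)

ref = u0.copy()                      # serial reference run
for _ in range(steps):
    ref = update(ref)

# Two "ranks": each owns half the grid plus one ghost cell at the interface
left = u0[:half + 1].copy()          # cells 0..31 plus ghost cell 32
right = u0[half - 1:].copy()         # ghost cell 31 plus cells 32..63
for _ in range(steps):
    left, right = update(left), update(right)
    left[-1], right[0] = right[1], left[-2]   # halo exchange (MPI stand-in)

combined = np.concatenate((left[:-1], right[1:]))   # reassembled global field
```

A fourth-order stencil like the paper's needs two ghost cells per side instead of one, and in 3D the exchange covers faces of the subdomain blocks, but the pattern is identical.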
NASA Astrophysics Data System (ADS)
Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George
2016-04-01
Precipitation is arguably one of the most important forcing variables that drive terrestrial water cycle processes. Precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid), and depends on several other factors (aerosols, orography, etc.), which make estimation and modeling of this process a particularly challenging task. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to investigate and assess the impact of precipitation uncertainty in hydrologic simulations, in order to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. Comparative analysis uses the SAFRAN simulations as reference and is carried out at different spatial (0.5° or regional average) and temporal (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of the Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
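The mapping from S-parameter noise statistics to BER can be sketched with the Gaussian tail (complementary error) function. The half-amplitude decision threshold in this sketch is an assumption for illustration, not necessarily the paper's exact decision model:

```python
import math

def gaussian_ber(mean, std):
    """Bit error rate for a binary decision corrupted by Gaussian noise.

    Assumes the decision threshold lies midway between the two signal
    levels (0 and `mean`), so a bit error occurs when the noise excursion
    exceeds mean/2.  Hypothetical illustration of using the mean and
    standard deviation of measured S-parameters.
    """
    q_arg = mean / (2.0 * std)                      # threshold in sigmas
    return 0.5 * math.erfc(q_arg / math.sqrt(2.0))  # Gaussian tail Q(x)
```

With a measured |S21| mean of 1.0 and noise standard deviation of 0.1, the predicted BER is far below 1e-3; noisier circuits move the BER toward 0.5.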
Davis, Matthew H.
2016-01-01
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior
Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes
NASA Astrophysics Data System (ADS)
Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.
2015-12-01
H. Annamalai (1), B. Taguchi (2), J.P. McCreary (1), J. Hafner (1), M. Nagura (2), and T. Miyama (2); (1) International Pacific Research Center, University of Hawaii, USA; (2) Application Laboratory, JAMSTEC, Japan. In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for the Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One implication is that uncertainties in future projections of time-mean changes to AAM rainfall may not have been reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, progress in monsoon modeling is rather slow, which leads us to wonder: has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is on better understanding the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We will argue that a systematic and coordinated approach to identifying the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
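The Sobol' analysis described above can be illustrated with a minimal Saltelli-type estimator of first-order indices, adapted here (as in the study) to forcing errors rather than model parameters. The two-input `toy_swe` function below is a hypothetical stand-in for the Utah Energy Balance model, not the actual model, and the bounds are illustrative:

```python
import numpy as np

def sobol_first_order(model, bounds, n=4096, seed=0):
    """First-order Sobol' indices via the Saltelli (2010) estimator.

    model  : callable taking an (n, k) array, returning (n,) outputs
    bounds : list of (low, high) tuples, one per input factor
    """
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds).T
    A = lo + (hi - lo) * rng.random((n, k))   # base sample matrix
    B = lo + (hi - lo) * rng.random((n, k))   # independent resample
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                   # swap column i only
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy stand-in for a snow model: simulated SWE responds strongly to a
# precipitation bias and weakly to a temperature random error.
def toy_swe(X):
    precip_bias, temp_err = X[:, 0], X[:, 1]
    return 5.0 * precip_bias + 0.5 * temp_err

S = sobol_first_order(toy_swe, bounds=[(-1.0, 1.0), (-1.0, 1.0)])
```

For this toy model the precipitation-bias index should dominate, mirroring the paper's finding that outputs are more sensitive to biases than to other error characteristics.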
NASA Astrophysics Data System (ADS)
Zhao, Y.; Ciais, P.; Peylin, P.; Viovy, N.; Longdoz, B.; Bonnefond, J. M.; Rambal, S.; Klumpp, K.; Olioso, A.; Cellier, P.; Maignan, F.; Eglin, T.; Calvet, J. C.
2011-03-01
We analyze how biases in meteorological drivers impact the calculation of ecosystem CO2, water and energy fluxes by models. To do so, we drive the same ecosystem model with meteorology from gridded products and with "true" meteorology from local observations at eddy-covariance flux sites. The study focuses on six flux tower sites in France spanning a 7-14 °C and 600-1040 mm yr-1 climate gradient, with forest, grassland and cropland ecosystems. We evaluate the results of the ORCHIDEE process-based model driven by four different meteorological models against the same model driven by site-observed meteorology. The evaluation is decomposed into characteristic time scales. The main result is that there are significant differences between the meteorological models and local tower meteorology. The seasonal cycle of air temperature, humidity and shortwave downward radiation is reproduced correctly by all meteorological models (average R2=0.90). The misfit between gridded meteorological drivers and tower meteorology is largest at sites located near the coast and influenced by sea breeze, or located at altitude. We show that day-to-day variations in weather are not fully reproduced by the meteorological models, with R2 between the modeled grid point and measured local meteorology ranging from 0.35 (REMO model) to 0.70 (SAFRAN model). The bias of the meteorological models impacts the flux simulation by ORCHIDEE, and thus would affect regional and global budgets. The forcing error, defined as the simulated flux difference resulting from prescribing modeled instead of observed local meteorological drivers to ORCHIDEE, is quantified for the six studied sites at different time scales. The magnitude of this forcing error is compared to that of the model error, defined as the modeled-minus-observed flux and thus containing uncertain parameterizations, parameter values, and initialization. The forcing error is the largest on a daily time scale, for which it is
NASA Astrophysics Data System (ADS)
Lisinetskaya, Polina G.; Röhr, Merle I. S.; Mitrić, Roland
2016-06-01
We present a theoretical approach for the simulation of the electric field and exciton propagation in ordered arrays constructed of molecular-sized noble metal clusters bound to organic polymer templates. In order to describe the electronic coupling between individual constituents of the nanostructure we use the ab initio parameterized transition charge method which is more accurate than the usual dipole-dipole coupling. The electronic population dynamics in the nanostructure under an external laser pulse excitation is simulated by numerical integration of the time-dependent Schrödinger equation employing the fully coupled Hamiltonian. The solution of the TDSE gives rise to time-dependent partial point charges for each subunit of the nanostructure, and the spatio-temporal electric field distribution is evaluated by means of classical electrodynamics methods. The time-dependent partial charges are determined based on the stationary partial and transition charges obtained in the framework of the TDDFT. In order to treat large plasmonic nanostructures constructed of many constituents, the approximate self-consistent iterative approach presented in (Lisinetskaya and Mitrić in Phys Rev B 89:035433, 2014) is modified to include the transition-charge-based interaction. The developed methods are used to study the optical response and exciton dynamics of Ag3+ and porphyrin-Ag4 dimers. Subsequently, the spatio-temporal electric field distribution in a ring constructed of ten porphyrin-Ag4 subunits under the action of circularly polarized laser pulse is simulated. The presented methodology provides a theoretical basis for the investigation of coupled light-exciton propagation in nanoarchitectures built from molecular size metal nanoclusters in which quantum confinement effects are important.
The Contribution of Statistical Errors in DNS Data Quantified with RANS-DNS Simulations
NASA Astrophysics Data System (ADS)
Poroseva, Svetlana V.; Jeyapaul, Elbert; Murman, Scott M.; Colmenares F., Juan D.
2016-11-01
In RANS-DNS simulations, the Reynolds-averaged Navier-Stokes (RANS) equations are solved, with all terms but molecular diffusion represented by data from direct numerical simulations (DNS). No turbulence modeling is involved in such simulations. Recently, we demonstrated the use of RANS-DNS simulations as a framework for uncertainty quantification in statistical data collected from DNS. In the current study, the contribution of statistical errors to the uncertainty in DNS data is investigated using RANS-DNS simulations. Simulations of Reynolds stress transport were conducted in a planar fully developed turbulent channel flow at Re = 392 (based on the friction velocity) using DNS data collected at seven averaging times. The open-source CFD software OpenFOAM was used for the RANS simulations. Budgets for the Reynolds stresses were obtained from DNS performed using a pseudo-spectral (Fourier/Chebyshev-tau) method. This material is based in part upon work supported by NASA under Award NNX12AJ61A.
Perez-Benito, Joaquin F; Mulero-Raichs, Mar
2016-10-06
Many kinetic studies of homologous reaction series report an activation enthalpy-entropy linear correlation (compensation plot), whose slope is the temperature at which all members of the series have the same rate constant (the isokinetic temperature). Unfortunately, statistical methods have demonstrated that the experimental errors associated with the activation enthalpy and entropy are mutually interdependent. Therefore, the possibility that some of those correlations might be caused by accidental errors has been explored by numerical simulations. As a result of this study, a computer program has been developed to evaluate the probability that experimental errors might lead to a linear compensation plot starting from an initially randomly scattered set of activation parameters (p-test). Application of this program to kinetic data for 100 homologous reaction series extracted from bibliographic sources has allowed us to conclude that most of the reported compensation plots can hardly be explained by the accumulation of experimental errors, and thus require a preexisting, physically meaningful correlation.
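The error-induced compensation effect at the heart of this study can be reproduced in a few lines: give every member of a hypothetical series identical activation parameters, perturb the rate constants with random errors, and refit the Eyring plots. The temperatures, parameter values, and 5% error level below are illustrative, not those of the study:

```python
import numpy as np

R = 8.314         # gas constant, J mol^-1 K^-1
KB_H = 2.0837e10  # Boltzmann/Planck ratio k_B/h, s^-1 K^-1

def fit_eyring(T, k):
    """Fit ln(k/T) vs 1/T; return activation enthalpy and entropy."""
    slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)
    return -slope * R, (intercept - np.log(KB_H)) * R

rng = np.random.default_rng(1)
T = np.array([283.0, 293.0, 303.0, 313.0])  # experimental temperatures, K

# Every member of this hypothetical series has IDENTICAL activation
# parameters, so any apparent compensation is purely error-induced.
dH_true, dS_true = 70e3, -30.0  # J/mol and J/(mol K), illustrative
k_exact = KB_H * T * np.exp(dS_true / R - dH_true / (R * T))

fits = []
for _ in range(100):  # 100 "reactions", 5% random rate-constant error
    k_noisy = k_exact * (1.0 + 0.05 * rng.standard_normal(T.size))
    fits.append(fit_eyring(T, k_noisy))
dH_fit, dS_fit = np.array(fits).T

# Errors alone generate a tight dH-dS line whose slope (the spurious
# "isokinetic temperature") lies near the mean experimental temperature.
r = np.corrcoef(dH_fit, dS_fit)[0, 1]
t_iso = np.polyfit(dS_fit, dH_fit, 1)[0]
```

The strong correlation obtained from pure noise is exactly why a p-test is needed before assigning physical meaning to a compensation plot.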
Dogan, Hakan; Popov, Viktor
2016-05-01
We investigate the acoustic wave propagation in bubbly liquid inside a pilot sonochemical reactor which aims to produce antibacterial medical textile fabrics by coating the textile with ZnO or CuO nanoparticles. Computational models on acoustic propagation are developed in order to aid the design procedures. The acoustic pressure wave propagation in the sonoreactor is simulated by solving the Helmholtz equation using a meshless numerical method. The paper implements both the state-of-the-art linear model and a nonlinear wave propagation model recently introduced by Louisnard (2012), and presents a novel iterative solution procedure for the nonlinear propagation model which can be implemented using any numerical method and/or programming tool. Comparative results regarding both the linear and the nonlinear wave propagation are shown. Effects of bubble size distribution and bubble volume fraction on the acoustic wave propagation are discussed in detail. The simulations demonstrate that the nonlinear model successfully captures the realistic spatial distribution of the cavitation zones and the associated acoustic pressure amplitudes.
Salomons, Erik M.; Lohman, Walter J. A.; Zhou, Han
2016-01-01
Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
NASA Astrophysics Data System (ADS)
Petrov, P.; Newman, G. A.
2010-12-01
Working in the Laplace-Fourier domain, we have developed a 3D code for full-wavefield simulation in elastic media that takes into account the nonlinearity introduced by free-surface effects. Our approach is based on the velocity-stress formulation. In contrast to the conventional formulation, we define material properties such as density and the Lamé constants not at nodal points but within cells. This second-order finite-difference method, formulated on the cell-based grid, generates numerical solutions compatible with analytical ones within the range of errors determined by dispersion analysis. Our simulator will be embedded in an inversion scheme for joint seismic-electromagnetic imaging. It also offers possibilities for preconditioning seismic wave propagation problems in the frequency domain. References: Shin, C. & Cha, Y. (2009), Waveform inversion in the Laplace-Fourier domain, Geophys. J. Int. 177(3), 1067-1079. Shin, C. & Cha, Y. H. (2008), Waveform inversion in the Laplace domain, Geophys. J. Int. 173(3), 922-931. Commer, M. & Newman, G. (2008), New advances in three-dimensional controlled-source electromagnetic inversion, Geophys. J. Int. 172(2), 513-535. Newman, G. A., Commer, M. & Carazzone, J. J. (2010), Imaging CSEM data in the presence of electrical anisotropy, Geophysics, in press.
NASA Astrophysics Data System (ADS)
Hoshiba, M.; Aoki, S.
2014-12-01
In many current Earthquake Early Warning (EEW) systems, the hypocenter and magnitude are determined quickly and the strengths of ground motions are then predicted. The 2011 Tohoku Earthquake (MW9.0), however, revealed some technical issues with the conventional methods: under-prediction due to the large extent of the fault rupture, and over-prediction due to confusion of the system by multiple aftershocks occurring simultaneously. To address these issues, a new concept is proposed for EEW: applying a data assimilation technique, the present wavefield is estimated precisely in real time (real-time shake mapping), and the future wavefield is then predicted time-evolutionally using the physical process of seismic wave propagation. Information on hypocenter location and magnitude is not required, which is fundamentally different from the conventional method. In the proposed method, the data assimilation technique estimates the current spatial distribution of the wavefield using not only actual observations but also the anticipated wavefield predicted from one time step before. Real-time application of the data assimilation technique enables us to estimate the wavefield in real time, which corresponds to real-time shake mapping. Once the present situation is estimated precisely, we proceed to predict the future situation using a simulation of wave propagation. The proposed method is applied to the 2011 Tohoku Earthquake (MW9.0) and the 2004 Mid-Niigata earthquake (MW6.7). The future wavefield is precisely predicted, and the prediction improves as the lead time shortens: for example, the error of a 10 s prediction is smaller than that of a 20 s prediction, and that of a 5 s prediction is much smaller. With this method, it becomes possible to predict ground motion precisely even for cases of large fault-rupture extent and multiple simultaneous earthquakes. The proposed method is based on a simulation of the physical process starting from the precisely estimated present condition. This
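The assimilate-then-predict idea can be caricatured in one dimension: a wavefield advects across a line of sparse stations, noisy observations nudge the model state at the stations, and the exact dynamics carry the corrections between stations. All names, parameters, and the advection dynamics are illustrative stand-ins for seismic wave propagation:

```python
import numpy as np

def shake_map_demo(nx=200, steps=300, obs_every=10, gain=1.0, seed=0):
    """Toy 1D sketch of assimilation-based real-time shake mapping.

    The "true" wavefield advects one cell per step (periodic domain).
    The model carries its own estimate of the wavefield; at each step,
    sparse stations nudge the estimate toward noisy observations, and
    the known dynamics propagate the corrections.  Returns the RMS
    error of the final estimate."""
    rng = np.random.default_rng(seed)
    x = np.arange(nx)
    truth = np.exp(-0.5 * ((x - 30) / 5.0) ** 2)  # true wave packet
    est = np.zeros(nx)                            # model starts ignorant
    stations = np.arange(0, nx, obs_every)        # sparse station grid
    for _ in range(steps):
        truth = np.roll(truth, 1)                 # true propagation
        est = np.roll(est, 1)                     # model forecast step
        obs = truth[stations] + 0.01 * rng.standard_normal(stations.size)
        est[stations] += gain * (obs - est[stations])  # analysis update
    return np.sqrt(np.mean((est - truth) ** 2))

rmse_assim = shake_map_demo(gain=1.0)  # with data assimilation
rmse_free = shake_map_demo(gain=0.0)   # no assimilation: estimate stays 0
```

With assimilation the estimate tracks the wavefield to within the observation noise; without it, the error remains at the scale of the wave amplitude, echoing the benefit reported above.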
A simulator study of the interaction of pilot workload with errors, vigilance, and decisions
NASA Technical Reports Server (NTRS)
Smith, H. P. R.
1979-01-01
A full-mission simulation of a civil air transport scenario with two levels of workload was used to observe the actions of the crews, record the basic aircraft parameters, and record heart rates. The results showed that the number of errors varied widely among crews, but the mean increased in the higher-workload case. The increase in errors was not related to the rise in heart rate but was associated with vigilance times as well as the number of days since the last flight. The recorded data also made it possible to investigate decision time and decision order. These also varied among crews and seemed related to the ability of captains to manage the resources available to them on the flight deck.
Spectral-infinite-element Simulations of Self-gravitating Seismic Wave Propagation
NASA Astrophysics Data System (ADS)
Gharti, H. N.; Tromp, J.
2015-12-01
Gravitational perturbations induced by particle motions are governed by the Poisson/Laplace equation, whose domain includes all of space. Due to its unbounded nature, obtaining an accurate numerical solution is very challenging. Consequently, gravitational perturbations are generally ignored in simulations of global seismic wave propagation, and only the unperturbed equilibrium gravitational field is taken into account. This so-called "Cowling approximation" is justified for relatively short-period waves (periods less than 250 s), but is invalid for free-oscillation seismology. Existing methods are usually based on spherical harmonic expansions. Most methods are either limited to spherically symmetric models or have to rely on costly iterative implementation procedures. We propose a spectral-infinite-element method to solve wave propagation in a self-gravitating Earth model. The spectral-infinite-element method combines the spectral-element method with the infinite-element method. Spectral elements are used to capture the internal field, and infinite elements are used to represent the external field. To solve the weak form of the Poisson/Laplace equation, we employ Gauss-Legendre-Lobatto quadrature in spectral elements. In infinite elements, Gauss-Radau quadrature is used in the radial direction whereas Gauss-Legendre-Lobatto quadrature is used in the lateral directions. Infinite elements naturally integrate with spectral elements, thereby avoiding an iterative implementation. We demonstrate the accuracy of the method by comparing our results with a spherical harmonics method. The new method empowers us to tackle several problems in long-period seismology accurately and efficiently.
Numerical simulation of an adaptive optics system with laser propagation in the atmosphere.
Yan, H X; Li, S S; Zhang, D L; Chen, S
2000-06-20
A comprehensive model of laser propagation in the atmosphere with a complete adaptive optics (AO) system for phase compensation is presented, and a corresponding computer program is compiled. A direct wave-front gradient control method is used to reconstruct the wave-front phase. With the long-exposure Strehl ratio as the evaluation parameter, a numerical simulation of an AO system in a stationary state with the atmospheric propagation of a laser beam was conducted. It was found that for certain conditions the phase screen that describes turbulence in the atmosphere might not be isotropic. Numerical experiments show that the computational results in imaging of lenses by means of the fast Fourier transform (FFT) method agree well with those computed by means of an integration method. However, the computer time required for the FFT method is 1 order of magnitude less than that of the integration method. Phase tailoring of the calculated phase is presented as a means to solve the problem that variance of the calculated residual phase does not correspond to the correction effectiveness of an AO system. It is found for the first time to our knowledge that for a constant delay time of an AO system, when the lateral wind speed exceeds a threshold, the compensation effectiveness of an AO system is better than that of complete phase conjugation. This finding indicates that the better compensation capability of an AO system does not mean better correction effectiveness.
NASA Astrophysics Data System (ADS)
Taghavifar, Hadi; Khalilarya, Shahram; Jafarmadar, Samad; Taghavifar, Hamid
2016-08-01
A multidimensional computational fluid dynamics code was developed and integrated with a probability density function combustion model to give a detailed account of the multiphase fluid flow. The vapor phase within the injector domain is treated with the Reynolds-averaged Navier-Stokes technique. A new parameter is proposed that serves as an index of plane-cut spray propagation, taking into account both spray penetration length and cone angle at the same time. It was found that the spray propagation index (SPI) tends to increase at lower r/d ratios, although the spray penetration tends to decrease. The results for SPI obtained with the empirical correlation of Hay and Jones were compared with the simulation computations as a function of the respective r/d ratio. Based on the results of this study, the spray distribution on the plane area correlates proportionally with heat release amount, NOx emission mass fraction, and soot concentration reduction. Higher cavitation is attributed to the sharp edge of the nozzle entrance, yielding better liquid jet disintegration and smaller spray droplets, which reduces the soot mass fraction of the late combustion process. To give better insight into the cavitation phenomenon, the turbulence magnitude in the nozzle and combustion chamber was acquired and depicted along with the spray velocity.
Open Boundary Particle-in-Cell Simulation of Dipolarization Front Propagation
NASA Technical Reports Server (NTRS)
Klimas, Alex; Hwang, Kyoung-Joo; Vinas, Adolfo F.; Goldstein, Melvyn L.
2014-01-01
First results are presented from an ongoing open boundary 2-1/2D particle-in-cell simulation study of dipolarization front (DF) propagation in Earth's magnetotail. At this stage, this study is focused on the compression, or pileup, region preceding the DF current sheet. We find that the earthward acceleration of the plasma in this region is in general agreement with a recent DF force balance model. A gyrophase bunched reflected ion population at the leading edge of the pileup region is reflected by a normal electric field in the pileup region itself, rather than through an interaction with the current sheet. We discuss plasma wave activity at the leading edge of the pileup region that may be driven by gradients, or by reflected ions, or both; the mode has not been identified. The waves oscillate near but above the ion cyclotron frequency with wavelength several ion inertial lengths. We show that the waves oscillate primarily in the perpendicular magnetic field components, do not propagate along the background magnetic field, are right handed elliptically (close to circularly) polarized, exist in a region of high electron and ion beta, and are stationary in the plasma frame moving earthward. We discuss the possibility that the waves are present in plasma sheet data, but have not, thus far, been discovered.
Stott, Shannon L; Irimia, Daniel; Karlsson, Jens O M
2004-04-01
A microscale theoretical model of intracellular ice formation (IIF) in a heterogeneous tissue volume comprising a tumor mass and surrounding normal tissue is presented. Intracellular ice was assumed to form either by intercellular ice propagation or by processes that are not affected by the presence of ice in neighboring cells (e.g., nucleation or mechanical rupture). The effects of cryosurgery on a 2D tissue consisting of 10^4 cells were simulated using a lattice Monte Carlo technique. A parametric analysis was performed to assess the specificity of IIF-related cell damage and to identify criteria for minimization of collateral damage to the healthy tissue peripheral to the tumor. Among the parameters investigated were the rates of interaction-independent IIF and intercellular ice propagation in the tumor and in the normal tissue, as well as the characteristic length scale of thermal gradients in the vicinity of the cryosurgical probe. Model predictions suggest gap junctional intercellular communication as a potential new target for adjuvant therapies complementing the cryosurgical procedure.
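A lattice Monte Carlo scheme of this kind can be sketched as follows: at each step, an unfrozen cell freezes either spontaneously (interaction-independent IIF) or via ice propagation from each frozen neighbor. The probabilities, lattice size, and periodic boundaries below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def simulate_iif(n=50, p_nuc=0.001, p_prop=0.05, steps=100, seed=0):
    """Lattice Monte Carlo sketch of intracellular ice formation (IIF).

    Each unfrozen cell on an n x n lattice freezes per step with
    probability p_nuc (interaction-independent) and, additionally,
    with probability p_prop per frozen 4-neighbor (intercellular
    propagation).  np.roll gives periodic boundaries for simplicity.
    Returns the final frozen fraction."""
    rng = np.random.default_rng(seed)
    ice = np.zeros((n, n), dtype=bool)
    for _ in range(steps):
        frozen = ice.astype(np.int8)          # cast before counting
        nbr = (np.roll(frozen, 1, 0) + np.roll(frozen, -1, 0)
               + np.roll(frozen, 1, 1) + np.roll(frozen, -1, 1))
        # combined per-step freezing probability for each cell
        p_freeze = 1.0 - (1.0 - p_nuc) * (1.0 - p_prop) ** nbr
        ice |= (~ice) & (rng.random((n, n)) < p_freeze)
    return ice.mean()
```

Sweeping p_nuc and p_prop separately for "tumor" and "normal" sublattices would reproduce the kind of parametric analysis described above; with propagation switched off, only isolated nucleation events remain.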
Numerical Simulations of Upstream Propagating Solitary Waves and Wave Breaking In A Stratified Fjord
NASA Astrophysics Data System (ADS)
Stastna, M.; Peltier, W. R.
In this talk we will discuss ongoing numerical modeling of the flow of a stratified fluid over large-scale topography, motivated by observations in Knight Inlet, a fjord in British Columbia, Canada. After briefly surveying past work on the topic, we will discuss our latest set of simulations, in which we have observed the generation and breaking of three different types of nonlinear internal waves in the lee of the sill topography. The first type of wave observed is a large lee wave in the weakly stratified main portion of the water column. The second is an upward propagating internal wave, forced by topography, that breaks in the strong near-surface pycnocline. The third is a train of upstream propagating solitary waves that, in certain circumstances, form as breaking waves consisting of a nearly solitary wave envelope and a highly unsteady core near the surface. Time permitting, we will comment on the implications of these results for our long-term goal of quantifying tidally driven mixing in Knight Inlet.
NASA Astrophysics Data System (ADS)
Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.
2012-05-01
In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environmental conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists of measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of target location by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. In this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of the travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities similar to those provided by a deterministic method, such as the ray method.
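The stochastic travel-time idea lends itself to a very small sketch: draw times of flight from a distribution whose moments would, in the actual model, come from the analytic expressions. All numbers below (mean, spread, sound speed) are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical moments; in the model they are known analytically from the
# statistics of the sodium velocity inhomogeneities.
mean_tof = 200e-6          # mean pulse-echo time of flight, s
std_tof = 50e-9            # standard deviation induced by turbulence, s

tof = rng.normal(mean_tof, std_tof, size=10_000)   # stochastic travel times

c = 2500.0                 # nominal sound speed in liquid sodium, m/s
# Pulse-echo ranging: half the time-of-flight scatter maps to range error
range_error = c * (tof - mean_tof) / 2.0
```

The spread of `range_error` is the kind of target-locating uncertainty the telemetry analysis quantifies.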
NASA Astrophysics Data System (ADS)
Celik, Cihangir
-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in system operation. Decreasing the 10B content (20% of natural boron) in the borophosphosilicate glass (BPSG) layers conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches at the system-architecture level. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B(n,alpha)7Li reaction products. Both reaction products are capable of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner of the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert intelligent systems that use such memories into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers into semiconductor memory architectures. This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with neutron flux and memory supply voltage.
Enhancement of the NMSU Channel Error Simulator to Provide User-Selectable Link Delays
NASA Technical Reports Server (NTRS)
Horan, Stephen; Wang, Ru-Hai
2000-01-01
This is the third in a continuing series of reports describing the development of the Space-to-Ground Link Simulator (SGLS) to be used for testing data transfers under simulated space channel conditions. The SGLS is based upon Virtual Instrument (VI) software techniques for managing the error generation, link data rate configuration, and, now, selection of the link delay value. In this report we detail the changes that needed to be made to the SGLS VI configuration to permit link delays to be added to the basic error generation and link data rate control capabilities. This was accomplished by modifying the rate-splitting VIs to include a buffer to hold the incoming data for the duration selected by the user to emulate the channel link delay. In sample tests of this configuration, the TCP/IP ftp service and the SCPS FP service were used to transmit 10-KB data files using both symmetric (both forward and return links set to 115200 bps) and asymmetric (forward link set at 2400 bps and return link set at 115200 bps) link configurations. Transmission times were recorded at bit error rates of 0 through 10^-5 to give an indication of the link performance. In these tests, we noted separate timings for the protocol setup time to initiate the file transfer and the variation in the actual file transfer time caused by channel errors. Both protocols showed similar performance to that seen earlier for the symmetric and asymmetric channels. This time, the delays in establishing the file protocol also showed that these delays could double the transmission time and need to be accounted for in mission planning. Both protocols also showed difficulty in transmitting large data files over large link delays. In these tests, there was no clear favorite between the TCP/IP ftp and the SCPS FP services. Based upon these tests, further testing is recommended to extend the results to different file transfer configurations.
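A toy version of the error-plus-delay channel is easy to sketch. The function below is illustrative only (independent bit flips at a given BER, plus a fixed serialization time and one-way delay); it is not the SGLS VI implementation.

```python
import random

def channel(data: bytes, ber: float, delay_s: float, rate_bps: int,
            seed: int = 0) -> tuple:
    """Flip each bit independently with probability `ber`, and report the
    arrival time: serialization time at `rate_bps` plus the link delay."""
    rng = random.Random(seed)
    out = bytearray(data)
    for i in range(len(out)):
        for b in range(8):
            if rng.random() < ber:
                out[i] ^= 1 << b
    arrival_s = len(data) * 8 / rate_bps + delay_s
    return bytes(out), arrival_s

# A 10-KB file on the 115200 bps return link, error-free, 250 ms delay:
frame, t = channel(bytes(10 * 1024), ber=0.0, delay_s=0.25, rate_bps=115200)
```

Sweeping `ber` from 0 to 10^-5 and varying `delay_s` reproduces, in miniature, the kind of parameter grid the SGLS tests explored.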
Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.
2007-01-01
When kinematic GPS processing software is used to estimate the trajectory of an aircraft, vertical height errors of decimeters can occur unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography, because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Guerdoux, Simon; Fourment, Lionel
2007-05-01
An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.
NASA Astrophysics Data System (ADS)
Murshid, Syed H.; Chakravarty, Abhijit
2011-06-01
Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric, donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for individual channels of such a system are also presented.
Signal propagation time from the magnetotail to the ionosphere: OpenGGCM simulation
NASA Astrophysics Data System (ADS)
Ferdousi, Banafsheh; Raeder, Joachim
2016-07-01
Distinguishing the processes that occur during the first 2 min of a substorm depends critically on the correct timing of different signals between the plasma sheet and the ionosphere. To investigate signal propagation paths and signal travel times, we use a magnetohydrodynamic global simulation model of the Earth's magnetosphere and ionosphere, the OpenGGCM-CTIM model. By creating single impulses or sinusoidal pulsations at various locations in the magnetotail, waves are launched, and we investigate the paths taken by the waves and the time that different waves take to reach the ionosphere. We find that it takes approximately 27, 36, 45, 60, and 72 s for waves to travel to the ionosphere from the tail plasma sheet at x = -10, -15, -20, -25, and -30 RE, respectively, contrary to previous reports. We also find that waves originating in the plasma sheet generally travel faster through the lobes than through the plasma sheet.
Bazelyan, E. M.; Sysoev, V. S.; Andreev, M. G.
2009-08-15
A numerical model of a spark discharge propagating along the ground surface from the point at which a ~100-kA current pulse is injected into the ground has been developed, based on experiments in which the velocity of a long leader was measured as a function of the leader current. The results of numerical simulations are in good agreement with the measured characteristics of creeping discharges excited in field experiments by using a high-power explosive magnetic generator. The reason why the length of a spark discharge depends weakly on the number of simultaneously developing channels is found. Analysis of the influence of the temporal characteristics of the current pulse on the parameters of the creeping spark discharge shows that actual lightning may exhibit similar behavior.
Numerical simulations of wave propagation in long bars with application to Kolsky bar testing
Corona, Edmundo
2014-11-01
Material testing using the Kolsky bar, or split Hopkinson bar, technique has proven instrumental in conducting measurements of material behavior at strain rates on the order of 10^{3} s^{-1}. Test design and data reduction, however, remain empirical endeavors based on the experimentalist's experience. Issues such as wave propagation across discontinuities, the effect of the deformation of the bar surfaces in contact with the specimen, the effect of geometric features in tensile specimens (dog-bone shape), wave dispersion in the bars, and other particulars are generally treated using simplified models. The work presented here was conducted in Q3 and Q4 of FY14. The objective was to demonstrate the feasibility of numerical simulations of Kolsky bar tests, which was done successfully.
A 2D spring model for the simulation of ultrasonic wave propagation in nonlinear hysteretic media.
Delsanto, P P; Gliozzi, A S; Hirsekorn, M; Nobili, M
2006-07-01
A two-dimensional (2D) approach to the simulation of ultrasonic wave propagation in nonclassical nonlinear (NCNL) media is presented. The approach represents the extension to 2D of a previously proposed one-dimensional (1D) Spring Model, with the inclusion of a PM-space treatment of the interstitial regions between grains. The extension to 2D is of great practical relevance for its potential applications in the field of quantitative nondestructive evaluation and material characterization, but it is also useful, from a theoretical point of view, to gain a better insight into the interaction mechanisms involved. The model is tested by means of virtual 2D experiments. The expected NCNL behaviors are qualitatively well reproduced.
Chen, Qiang; Chen, Bin
2012-10-01
In this paper, a hybrid electrodynamics and kinetics numerical model based on the finite-difference time-domain method and the lattice Boltzmann method is presented for electromagnetic wave propagation in weakly ionized hydrogen plasmas. In this framework, the multicomponent Bhatnagar-Gross-Krook collision model, considering both elastic and Coulomb collisions, and the multicomponent force model based on the Guo model are introduced, which supply a fine-grained description of the interaction between an electromagnetic wave and a weakly ionized plasma. Cubic spline interpolation and a mean filtering technique are separately introduced to solve the multiscale problem and to smooth physical quantities polluted by numerical noise. Several simulations have been implemented to validate our model. The numerical results are consistent with a simplified analytical model, which demonstrates that this model can successfully obtain satisfactory numerical solutions.
Lisitsa, Vadim; Tcheverda, Vladimir; Botter, Charlotte
2016-04-15
We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near-surface part and free-surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intensive in comparison with finite differences. Finite differences are computationally efficient, but in general they require rectangular grids, leading to a stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm in which the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.
López, Rodrigo A.; Muñoz, Víctor; Viñas, Adolfo F.; Valdivia, Juan A.
2015-09-15
We use a particle-in-cell simulation to study the propagation of localized structures in a magnetized electron-positron plasma with relativistic finite temperature. We use as initial condition for the simulation an envelope soliton solution of the nonlinear Schrödinger equation, derived from the relativistic two fluid equations in the strongly magnetized limit. This envelope soliton turns out not to be a stable solution for the simulation and splits in two localized structures propagating in opposite directions. However, these two localized structures exhibit a soliton-like behavior, as they keep their profile after they collide with each other due to the periodic boundary conditions. We also observe the formation of localized structures in the evolution of a spatially uniform circularly polarized Alfvén wave. In both cases, the localized structures propagate with an amplitude independent velocity.
Computational Simulation of Damage Propagation in Three-Dimensional Woven Composites
NASA Technical Reports Server (NTRS)
Huang, Dade; Minnetyan, Levon
2005-01-01
Three-dimensional (3D) woven composites have demonstrated multi-directional properties and improved transverse strength, impact resistance, and shear characteristics. The objective of this research is to develop a new model for predicting the elastic constants, hygrothermal effects, thermomechanical response, and stress limits of 3D woven composites, and to develop a computational tool to facilitate the evaluation of 3D woven composite structures with regard to damage tolerance and durability. Fiber orientations of weave and braid patterns are defined with reference to composite structural coordinates. Orthotropic ply properties and stress limits computed via micromechanics are transformed to composite structural coordinates and integrated to obtain the 3D properties. The various stages of degradation, from damage initiation to collapse of structures, in the 3D woven structures are simulated for the first time. Three-dimensional woven composite specimens with various woven patterns under different loading conditions, such as tension, compression, bending, and shear, are simulated in the validation process of this research. Damage initiation, growth, accumulation, and propagation to fracture are included in these simulations.
On a Wavelet-Based Method for the Numerical Simulation of Wave Propagation
NASA Astrophysics Data System (ADS)
Hong, Tae-Kyung; Kennett, B. L. N.
2002-12-01
A wavelet-based method for the numerical simulation of acoustic and elastic wave propagation is developed. Using a displacement-velocity formulation and treating spatial derivatives with linear operators, the wave equations are rewritten as a system of equations whose evolution in time is controlled by first-order derivatives. The linear operators for spatial derivatives are implemented in wavelet bases using an operator projection technique with nonstandard forms of the wavelet transform. Using a semigroup approach, the discretized solution in time can be represented in an explicit recursive form, based on a Taylor expansion of exponential functions of operator matrices. The boundary conditions are implemented by augmenting the system of equations with equivalent force terms at the boundaries. The wavelet-based method is applied to the acoustic wave equation with rigid boundary conditions at both ends in a 1-D domain and to the elastic wave equation with a traction-free boundary condition at a free surface in 2-D spatial media. The method can be applied directly to media with plane surfaces, and surface topography can be included with the aid of distortion of the grid describing the properties of the medium. The numerical results are compared with analytic solutions based on the Cagniard technique and show high accuracy. The wavelet-based approach is also demonstrated for complex media, including highly varying topography and stochastic heterogeneity with rapid variations in physical parameters. These examples indicate the value of the approach as an accurate and stable tool for the simulation of wave propagation in general complex media.
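The explicit recursive time stepping can be illustrated with a small, self-contained sketch: the one-step propagator exp(dt·A) is built from a truncated Taylor series, here applied to a toy 2x2 displacement-velocity system rather than the full wavelet-projected derivative operator of the paper.

```python
import numpy as np

def expm_taylor(A, n_terms=20):
    """Matrix exponential exp(A) via truncated Taylor expansion, the
    semigroup propagator used for explicit recursive time stepping."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ A / k      # A^k / k!
        E = E + term
    return E

# Toy displacement-velocity system: u'' = -omega^2 u, written first order
omega = 2.0
A = np.array([[0.0, 1.0],
              [-omega**2, 0.0]])
dt = 0.01
P = expm_taylor(dt * A)          # one-step propagator exp(dt*A)

u = np.array([1.0, 0.0])         # initial displacement and velocity
for _ in range(100):             # advance to t = 1
    u = P @ u                    # u(t+dt) = exp(dt*A) u(t)
# analytically, u[0] = cos(omega*t) and u[1] = -omega*sin(omega*t) at t = 1
```

Because the propagator is applied recursively, the scheme is explicit yet exact up to the Taylor truncation, which is the appeal of the semigroup formulation.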
Yılmaz, Bülent; Çiftçi, Emre
2013-06-01
Extracorporeal Shock Wave Lithotripsy (ESWL) is based on disintegration of the kidney stone by delivering high-energy shock waves that are created outside the body and transmitted through the skin and body tissues. Nowadays, high-energy shock waves are also used in orthopedic operations and are being investigated for use in the treatment of myocardial infarction and cancer. Because of these new application areas, novel lithotriptor designs are needed for different kinds of treatment strategies. In this study our aim was to develop a versatile computer simulation environment that would give device designers working on various medical applications that use the shock wave principle a substantial amount of flexibility while testing the effects of new parameters such as reflector size, material properties of the medium, water temperature, and different clinical scenarios. For this purpose, we created a finite-difference time-domain (FDTD)-based computational model in which most of the physical system parameters were defined as an input and/or as a variable in the simulations. We constructed a realistic computational model of a commercial electrohydraulic lithotriptor and optimized our simulation program using the results that were obtained by the manufacturer in an experimental setup. We then compared the simulation results with the results from an experimental setup in which the oxygen level in water was varied. Finally, we studied the effects of changing the input parameters, such as ellipsoid size and material, temperature change in the wave propagation media, and shock wave source point misalignment. The simulation results were consistent with the experimental results and the expected effects of variation in the physical parameters of the system. The results of this study encourage further investigation and provide adequate evidence that numerical modeling of a shock wave therapy system is feasible and can provide a practical means to test novel ideas in new device design procedures.
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Gass, Katherine; Strickland, Matthew J; Tolbert, Paige E
2012-09-01
In recent years, geostatistical modeling has been used to inform air pollution health studies. In this study, distributions of daily ambient concentrations were modeled over space and time for 12 air pollutants. Simulated pollutant fields were produced for a 6-year time period over the 20-county metropolitan Atlanta area using the Stanford Geostatistical Modeling Software (SGeMS). These simulations incorporate the temporal and spatial autocorrelation structure of ambient pollutants, as well as season and day-of-week temporal and spatial trends; these fields were considered to be the true ambient pollutant fields for the purposes of the simulations that followed. Simulated monitor data at the locations of actual monitors were then generated that contain error representative of instrument imprecision. From the simulated monitor data, four exposure metrics were calculated: central monitor and unweighted, population-weighted, and area-weighted averages. For each metric, the amount and type of error relative to the simulated pollutant fields are characterized and the impact of error on an epidemiologic time-series analysis is predicted. The amount of error, as indicated by a lack of spatial autocorrelation, is greater for primary pollutants than for secondary pollutants and is only moderately reduced by averaging across monitors; more error will result in less statistical power in the epidemiologic analysis. The type of error, as indicated by the correlations of the error with the monitor data and with the true ambient concentration, varies with exposure metric, with error in the central monitor metric more of the classical type (i.e., independent of the true ambient concentration) and error in the spatial average metrics more of the Berkson type (i.e., independent of the monitor data). Error type will affect the bias in the health risk estimate, with bias toward the null and away from the null predicted depending on the exposure metric; population-weighting yielded the
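The classical/Berkson distinction can be made concrete with a small synthetic sketch. The distributions below are purely illustrative, not the SGeMS pollutant fields; the diagnostic is the correlation of the error with the truth versus with the assigned exposure.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days = 5000

truth = rng.gamma(4.0, 5.0, size=n_days)        # "true" ambient series

# Classical error: measurement = truth + noise, noise independent of truth
monitor = truth + rng.normal(0.0, 4.0, n_days)
err_classical = monitor - truth

# Berkson error: truth = assigned exposure + noise independent of the
# assignment (e.g. a smooth spatial average assigned to everyone)
assigned = rng.gamma(4.0, 5.0, size=n_days)
berkson_truth = assigned + rng.normal(0.0, 4.0, n_days)
err_berkson = assigned - berkson_truth

# Diagnostics of error type, as in the characterization above:
r_classical = np.corrcoef(err_classical, truth)[0, 1]     # ~ 0
r_berkson = np.corrcoef(err_berkson, assigned)[0, 1]      # ~ 0
```

Classical error is uncorrelated with the truth (and so correlates with the measurement), whereas Berkson error is uncorrelated with the assigned exposure; this asymmetry is what drives bias toward or away from the null in the health risk estimate.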
Seidman, M.M.; Bredberg, A.; Seetharam, S.; Kraemer, K.H.
1987-07-01
Mutagenesis was studied at the DNA-sequence level in human fibroblast and lymphoid cells by use of a shuttle vector plasmid, pZ189, containing a suppressor tRNA marker gene. In a series of experiments, 62 plasmids were recovered that had two to six base substitutions in the 160-base-pair marker gene. Approximately 20-30% of the mutant plasmids that were recovered after passing ultraviolet-treated pZ189 through a repair-proficient human fibroblast line contained these multiple mutations. In contrast, passage of ultraviolet-treated pZ189 through an excision-repair-deficient (xeroderma pigmentosum) line yielded only 2% multiple base substitution mutants. Introducing a single-strand nick in otherwise unmodified pZ189 adjacent to the marker, followed by passage through the xeroderma pigmentosum cells, resulted in about 66% multiple base substitution mutants. The multiple mutations were found in a 160-base-pair region containing the marker gene but were rarely found in an adjacent 170-base-pair region. Passing ultraviolet-treated or nicked pZ189 through a repair-proficient human B-cell line also yielded multiple base substitution mutations in 20-33% of the mutant plasmids. An explanation for these multiple mutations is that they were generated by an error-prone polymerase while filling gaps. These mutations share many of the properties displayed by mutations in the immunoglobulin hypervariable regions.
1D and 2D simulations of seismic wave propagation in fractured media
NASA Astrophysics Data System (ADS)
Möller, Thomas; Friederich, Wolfgang
2016-04-01
Fractures and cracks have a significant influence on the propagation of seismic waves. Their presence causes reflections and scattering and makes the medium effectively anisotropic. We present a numerical approach to simulation of seismic waves in fractured media that does not require direct modelling of the fracture itself, but uses the concept of linear slip interfaces developed by Schoenberg (1980). This condition states that at an interface between two imperfectly bonded elastic media, stress is continuous across the interface while displacement is discontinuous. It is assumed that the jump of displacement is proportional to stress which implies a jump in particle velocity at the interface. We use this condition as a boundary condition to the elastic wave equation and solve this equation in the framework of a Nodal Discontinuous Galerkin scheme using a velocity-stress formulation. We use meshes with tetrahedral elements to discretise the medium. Each individual element face may be declared as a slip interface. Numerical fluxes have been derived by solving the 1D Riemann problem for slip interfaces with elastic and viscoelastic rheology. Viscoelasticity is realised either by a Kelvin-Voigt body or a Standard Linear Solid. These fluxes are not limited to 1D and can - with little modification - be used for simulations in higher dimensions as well. The Nodal Discontinuous Galerkin code "neXd" developed by Lambrecht (2013) is used as a basis for the numerical implementation of this concept. We present examples of simulations in 1D and 2D that illustrate the influence of fractures on the seismic wavefield. We demonstrate the accuracy of the simulation through comparison to an analytical solution in 1D.
NASA Astrophysics Data System (ADS)
Colli, Matteo; Lanza, Luca Giovanni; Rasmussen, Roy; Mireille Thériault, Julie
2014-05-01
Among the different environmental sources of error for ground-based solid precipitation measurements, wind is mainly responsible for a large reduction of the catching performance. This is due to the aerodynamic response of the gauge, which affects the originally undisturbed airflow, causing deformation of the snowflake trajectories. The application of composite gauge/wind shield measuring configurations improves the collection efficiency (CE) at low wind speeds (Uw), but the performance achievable under severe airflow velocities and the role of turbulence still have to be explained. This work aims to assess the wind-induced errors of a Geonor T200B vibrating-wire gauge equipped with a single Alter shield. This is a common measuring system for solid precipitation, which constitutes the R3 reference system in the ongoing WMO Solid Precipitation InterComparison Experiment (SPICE). The analysis is carried out by adopting advanced Computational Fluid Dynamics (CFD) tools for the numerical simulation of the turbulent airflow realized in the proximity of the catching section of the gauge. The airflow patterns were computed by running both time-dependent (Large Eddy Simulation) and time-independent (Reynolds-Averaged Navier-Stokes) simulations on the Yellowstone high-performance computing system of the National Center for Atmospheric Research. The evaluation of CE under different Uw conditions was obtained by running a Lagrangian model for the calculation of the snowflake trajectories, building on the simulated airflow patterns. Particular attention has been paid to the sensitivity of the trajectories to different snow particle sizes and water contents (corresponding to dry and wet snow). The results will be illustrated in comparative form between the different methodologies adopted and the existing in-field CE evaluations based on double-shield reference gauges.
Shahmirzadi, Danial; Li, Ronny X; Konofagou, Elisa E
2012-11-01
Pulse wave imaging (PWI) is an ultrasound-based method for noninvasive characterization of arterial stiffness based on pulse wave propagation. Reliable numerical models of pulse wave propagation in normal and pathological aortas could serve as powerful tools for local pulse wave analysis and as a guideline for PWI measurements in vivo. The objectives of this paper are to (1) apply a fluid-structure interaction (FSI) simulation of a straight-geometry aorta to confirm the Moens-Korteweg relationship between the pulse wave velocity (PWV) and the wall modulus, and (2) validate the simulation findings against phantom and in vitro results. PWI depicted and tracked the pulse wave propagation along the abdominal wall of a canine aorta in vitro in sequential radio-frequency (RF) ultrasound frames and estimated the PWV in the imaged wall. The same system was also used to image multiple polyacrylamide phantoms, mimicking the canine measurements as well as modeling softer and stiffer walls. Finally, the model parameters from the canine and phantom studies were used to perform 3D two-way coupled FSI simulations of pulse wave propagation and estimate the PWV. The simulation results were found to correlate well with the corresponding Moens-Korteweg equation. A high linear correlation was also established between PWV² and E measurements using the combined simulation and experimental findings (R² = 0.98), confirming the relationship established by the aforementioned equation.
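The Moens-Korteweg relationship that the simulations confirm is simple to evaluate directly. The wall and fluid parameters below are illustrative placeholders, not the paper's canine or phantom values.

```python
import math

def moens_korteweg_pwv(E, h, rho, R):
    """Moens-Korteweg pulse wave velocity: PWV = sqrt(E*h / (2*rho*R)),
    for wall modulus E, wall thickness h, fluid density rho, lumen radius R."""
    return math.sqrt(E * h / (2.0 * rho * R))

# Illustrative aortic values (SI units)
E = 500e3        # wall elastic modulus, Pa
h = 1.5e-3       # wall thickness, m
rho = 1050.0     # blood density, kg/m^3
R = 8e-3         # lumen radius, m

pwv = moens_korteweg_pwv(E, h, rho, R)   # a few m/s, physiologically plausible
```

Note that PWV² is linear in E, which is exactly the PWV²-versus-E correlation (R² = 0.98) reported above.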
Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J; Koole, Sander L
2016-03-23
The error-related negativity (ERN or Ne) is a negative event-related brain potential that peaks about 20-100 ms after people perform an incorrect response in choice reaction time tasks. Prior research has shown that the ERN may be enhanced by situational and dispositional factors that promote intrinsic motivation. Building on and extending this work, the authors hypothesized that simulated interpersonal touch may increase task engagement and thereby increase ERN amplitude. To test this notion, 20 participants performed a Go/No-Go task while holding a teddy bear or a same-sized cardboard box. As expected, the ERN was significantly larger when participants held a teddy bear rather than a cardboard box. This effect was most pronounced for people high (rather than low) in trait intrinsic motivation, who may depend more on intrinsically motivating task cues to maintain task engagement. These findings highlight the potential benefits of simulated interpersonal touch in stimulating attention to errors, especially among people who are intrinsically motivated.
Kushniruk, Andre W; Borycki, Elizabeth M; Anderson, James; Anderson, Marilyn; Nicoll, James; Kannry, Joseph
2013-01-01
This paper describes how simulations can be used to reason about the impact of user interface design features in exploring the effect of different contexts of use on the occurrence of technology-induced errors. The paper describes our approach in several phases, using an example from the analysis of technology-induced errors in medication administration. In the initial phase a clinical simulation is conducted to gather baseline data on the occurrence of technology-induced error using the technology under study. In this phase of the study, data arising from the clinical simulation are collected and then analyzed using qualitative and quantitative approaches to assess the relationship between aspects of interface design (i.e. usability problems) and rates of technology-induced error. In the next phase, the base rates for error associated with specific types of usability problems (from the initial phase) form the input into computer-based mathematical simulations. This approach links clinical simulations with computer-based simulations and demonstrates the potential impact of aspects of interface design and contextual factors upon medical error along with the implications for correcting interface design issues.
Difference in Simulated Low-Frequency Sound Propagation in the Various Species of Baleen Whale
NASA Astrophysics Data System (ADS)
Tsuchiya, Toshio; Naoi, Jun; Futa, Koji; Kikuchi, Toshiaki
2004-05-01
Whales found in the North Pacific are known to migrate over several thousand kilometers, from the Alaskan coast, where they feed heavily during the summer, to low-latitude waters, where they breed during the winter. It is therefore assumed that whales use the “deep sound channel” for their long-distance communication. The main objective of this study is to clarify the behavior of baleen whales from the standpoint of acoustical oceanography. Hence, the authors investigated the possibility of long-distance communication in various species of baleen whales by simulating the long-distance propagation of their calls, applying mode theory to actual sound-speed profiles at the whales' transmission frequencies. The results indicate that long-distance communication among blue whales using the deep sound channel is possible. They also indicate that communication among fin whales and blue whales can be made possible by coming close to shore slopes such as those of the Island of Hawaii.
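The deep sound (SOFAR) channel arises from a minimum in the ocean sound-speed profile. As a minimal sketch, the standard Munk canonical profile (a textbook idealization, not a profile from this study; the parameter values are typical defaults) can be evaluated to locate the channel axis numerically:

```python
import math

def munk_profile(z, z1=1300.0, B=1300.0, c1=1500.0, eps=0.00737):
    """Canonical Munk sound-speed profile c(z) in m/s; its minimum
    marks the axis of the deep sound (SOFAR) channel."""
    eta = 2.0 * (z - z1) / B
    return c1 * (1.0 + eps * (eta + math.exp(-eta) - 1.0))

depths = range(0, 5001, 50)          # depth grid in metres
axis_depth = min(depths, key=munk_profile)
print(axis_depth)                    # channel axis near 1300 m
```

Low-frequency sound launched near the axis is refracted back toward it from above and below, which is what makes basin-scale propagation with low loss plausible for whale calls.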
First-principles simulation for strong and ultra-short laser pulse propagation in dielectrics
NASA Astrophysics Data System (ADS)
Yabana, K.
2016-05-01
We develop a computational approach for the interaction between strong laser pulses and dielectrics based on time-dependent density functional theory (TDDFT). In this approach, a key ingredient is a solver that simulates electron dynamics in a unit cell of the solid under a time-varying electric field, a time-dependent extension of the static band calculation. This calculation can be regarded as a constitutive relation, providing the macroscopic electric current for a given electric field applied to the medium. Combining the solver with the Maxwell equations for the electromagnetic fields of the laser pulse, we describe the propagation of laser pulses in dielectrics without any empirical parameters. An important output of the coupled Maxwell+TDDFT simulation is the energy transfer from the laser pulse to electrons in the medium. We have found an abrupt increase of the energy transfer at a certain laser intensity close to the damage threshold. We also estimate the damage threshold by comparing the transferred energy with the melting and cohesive energies. It shows reasonable agreement with measurements.
Boundary element model for simulating sound propagation and source localization within the lungs.
Ozer, M B; Acikgoz, S; Royston, T J; Mansy, H A; Sandler, R H
2007-07-01
An acoustic boundary element (BE) model is used to simulate sound propagation in the lung parenchyma. It is computationally validated and then compared with experimental studies on lung phantom models. Parametric studies quantify the effect of different model parameters on the resulting acoustic field within the lung phantoms. The BE model is then coupled with a source localization algorithm to predict the position of an acoustic source within the phantom. Experimental studies validate the BE-based source localization algorithm and show that the same algorithm does not perform as well if the BE simulation is replaced with a free field assumption that neglects reflections and standing wave patterns created within the finite-size lung phantom. The BE model and source localization procedure are then applied to actual lung geometry taken from the National Library of Medicine's Visible Human Project. These numerical studies are in agreement with the studies on simpler geometry in that use of a BE model in place of the free field assumption alters the predicted acoustic field and source localization results. This work is relevant to the development of advanced auscultatory techniques that utilize multiple noninvasive sensors to construct acoustic images of sound generation and transmission to identify pathologies.
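To illustrate the free-field assumption that the BE model is compared against, here is a hypothetical sketch: a monopole whose amplitude decays as 1/r, localized by a brute-force grid search over candidate positions. The sensor layout, source position, and noise-free amplitudes are invented for illustration and are not the paper's phantom geometry.

```python
import math

def free_field_amplitude(src, sensor, a0=1.0):
    """Free-field monopole: amplitude decays as 1/r from the source."""
    r = math.dist(src, sensor)
    return a0 / max(r, 1e-9)

def localize(sensors, measured, grid):
    """Grid search: pick the candidate source position minimizing the
    squared misfit between predicted and measured sensor amplitudes."""
    def misfit(cand):
        return sum((free_field_amplitude(cand, s) - m) ** 2
                   for s, m in zip(sensors, measured))
    return min(grid, key=misfit)

# synthetic example: four sensors on a plane, true source inside the volume
sensors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (10, 10, 0)]
true_src = (4.0, 6.0, 5.0)
measured = [free_field_amplitude(true_src, s) for s in sensors]
grid = [(x, y, z) for x in range(11) for y in range(11) for z in range(1, 11)]
print(localize(sensors, measured, grid))  # → (4, 6, 5)
```

The paper's point is that replacing `free_field_amplitude` with a BE-computed field matters: reflections and standing waves in a finite phantom make the true amplitude pattern deviate from 1/r, degrading this kind of localization.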
NASA Astrophysics Data System (ADS)
Warren, Craig; Giannopoulos, Antonios; Giannakis, Iraklis
2016-12-01
gprMax is open source software that simulates electromagnetic wave propagation, using the Finite-Difference Time-Domain (FDTD) method, for the numerical modelling of Ground Penetrating Radar (GPR). gprMax was originally developed in 1996 when numerical modelling using the FDTD method and, in general, the numerical modelling of GPR were in their infancy. Current computing resources offer the opportunity to build detailed and complex FDTD models of GPR to an extent that was not previously possible. To enable these types of simulations to be more easily realised, and also to facilitate the addition of more advanced features, gprMax has been redeveloped and significantly modernised. The original C-based code has been completely rewritten using a combination of Python and Cython programming languages. Standard and robust file formats have been chosen for geometry and field output files. New advanced modelling features have been added including: an unsplit implementation of higher order Perfectly Matched Layers (PMLs) using a recursive integration approach; diagonally anisotropic materials; dispersive media using multi-pole Debye, Drude or Lorentz expressions; soil modelling using a semi-empirical formulation for dielectric properties and fractals for geometric characteristics; rough surface generation; and the ability to embed complex transducers and targets.
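For orientation, a minimal sketch of the FDTD method that underlies gprMax: a generic 1-D free-space Yee scheme with leapfrog E/H updates and a soft Gaussian source. This is not gprMax's implementation (no PML, no materials, no GPR geometry); all grid and source parameters are illustrative.

```python
import math

def fdtd_1d(nz=400, nt=600, dz=0.01, courant=0.99):
    """Minimal 1-D FDTD (Yee) sketch in free space: leapfrog updates
    of Ex and Hy with a soft Gaussian source at the grid centre."""
    c0 = 299792458.0
    mu0 = 1.25663706212e-6
    eps0 = 8.8541878128e-12
    dt = courant * dz / c0           # time step from the Courant condition
    ex = [0.0] * nz
    hy = [0.0] * nz
    for n in range(nt):
        for k in range(nz - 1):      # update H from the curl of E
            hy[k] += (ex[k + 1] - ex[k]) * dt / (mu0 * dz)
        for k in range(1, nz):       # update E from the curl of H
            ex[k] += (hy[k] - hy[k - 1]) * dt / (eps0 * dz)
        ex[nz // 2] += math.exp(-((n - 60) / 20.0) ** 2)  # soft source
    return ex

field = fdtd_1d()
print(max(abs(v) for v in field))
```

Real GPR models add absorbing boundaries (the PMLs mentioned above), dispersive soil models, and antenna geometry on top of exactly this update loop, which is why 3-D models become large and motivated the modernisation described in the abstract.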
Forward and adjoint simulations of seismic wave propagation on fully unstructured hexahedral meshes
NASA Astrophysics Data System (ADS)
Peter, Daniel; Komatitsch, Dimitri; Luo, Yang; Martin, Roland; Le Goff, Nicolas; Casarotti, Emanuele; Le Loher, Pieyre; Magnoni, Federica; Liu, Qinya; Blitz, Céline; Nissen-Meyer, Tarje; Basini, Piero; Tromp, Jeroen
2011-08-01
We present forward and adjoint spectral-element simulations of coupled acoustic and (an)elastic seismic wave propagation on fully unstructured hexahedral meshes. Simulations benefit from recent advances in hexahedral meshing, load balancing and software optimization. Meshing may be accomplished using a mesh generation tool kit such as CUBIT, and load balancing is facilitated by graph partitioning based on the SCOTCH library. Coupling between fluid and solid regions is incorporated in a straightforward fashion using domain decomposition. Topography, bathymetry and Moho undulations may be readily included in the mesh, and physical dispersion and attenuation associated with anelasticity are accounted for using a series of standard linear solids. Finite-frequency Fréchet derivatives are calculated using adjoint methods in both fluid and solid domains. The software is benchmarked for a layer-cake model. We present various examples of fully unstructured meshes, snapshots of wavefields and finite-frequency kernels generated by Version 2.0 'Sesame' of our widely used open source spectral-element package SPECFEM3D.
NASA Astrophysics Data System (ADS)
Suvorov, Alexey; Cai, Yong Q.; Sutter, John P.; Chubar, Oleg
2014-09-01
Until now, simulation of perfect crystal optics was not available in the "Synchrotron Radiation Workshop" (SRW) wave-optics computer code, hindering the accurate modelling of synchrotron radiation beamlines containing optical components with multiple-crystal arrangements, such as double-crystal monochromators and high-energy-resolution monochromators. A new module has been developed for SRW for calculating dynamical diffraction from a perfect crystal in the Bragg case. We demonstrate its successful application to the modelling of partially coherent undulator radiation propagating through the Inelastic X-ray Scattering (IXS) beamline of the National Synchrotron Light Source II (NSLS-II) at Brookhaven National Laboratory. The IXS beamline contains a double-crystal and a multiple-crystal high-energy-resolution monochromator, as well as complex optics such as compound refractive lenses and Kirkpatrick-Baez mirrors for X-ray beam transport and shaping, which makes it an excellent case for benchmarking the new functionalities of the updated SRW code. As a photon-hungry experimental technique, this case study for the IXS beamline is particularly valuable, as it provides an accurate evaluation of the photon flux at the sample position using the most advanced simulation methods and taking into account the parameters of the electron beam, the details of the undulator source, and the crystal optics.
Experimental simulations of beam propagation over large distances in a compact linear Paul trap
NASA Astrophysics Data System (ADS)
Gilson, Erik P.; Chung, Moses; Davidson, Ronald C.; Dorf, Mikhail; Efthimion, Philip C.; Majeski, Richard
2006-05-01
The Paul Trap Simulator Experiment (PTSX) is a compact laboratory experiment that places the physicist in the frame of reference of a long, charged-particle bunch coasting through a kilometers-long magnetic alternating-gradient (AG) transport system. The transverse dynamics of particles in both systems are described by similar equations, including nonlinear space-charge effects. The time-dependent voltages applied to the PTSX quadrupole electrodes are equivalent to the axially oscillating magnetic fields applied in the AG system. Experiments concerning the quiescent propagation of intense beams over large distances can then be performed in a compact and flexible facility. An understanding and characterization of the conditions required for quiescent beam transport, minimum halo particle generation, and precise beam compression and manipulation techniques, are essential, as accelerators and transport systems demand that ever-increasing amounts of space charge be transported. Application areas include ion-beam-driven high energy density physics, high energy and nuclear physics accelerator systems, etc. One-component cesium plasmas have been trapped in PTSX that correspond to normalized beam intensities, ŝ = ω_p^2(0)/(2ω_q^2), up to 80% of the space-charge limit where self-electric forces balance the applied focusing force. Here, ω_p(0) = [n_b(0)e_b^2/(m_b ε_0)]^(1/2) is the on-axis plasma frequency, and ω_q is the smooth-focusing frequency associated with the applied focusing field. Plasmas in PTSX with values of ŝ that are 20% of the limit have been trapped for times corresponding to equivalent beam propagation over 10 km. Results are presented for experiments in which the amplitude of the quadrupole focusing lattice is modified as a function of time. It is found that instantaneous changes in lattice amplitude can be detrimental to transverse confinement of the charge bunch.
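The normalized intensity parameter can be sketched numerically from its definition ŝ = ω_p^2(0)/(2ω_q^2). The density and focusing-frequency values below are illustrative placeholders, not PTSX operating parameters:

```python
import math

# Physical constants (SI units)
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_CS = 2.207e-25             # cesium ion mass, kg

def normalized_intensity(n0, f_q):
    """s_hat = w_p^2(0) / (2 w_q^2); s_hat -> 1 at the space-charge
    limit where self-fields balance the applied focusing force."""
    w_p_sq = n0 * E_CHARGE**2 / (M_CS * EPS0)   # on-axis plasma frequency^2
    w_q = 2.0 * math.pi * f_q                   # smooth-focusing frequency, rad/s
    return w_p_sq / (2.0 * w_q**2)

# illustrative numbers: density in m^-3, focusing frequency in Hz
print(normalized_intensity(n0=4.0e12, f_q=6.0e4))  # ≈ 0.18, i.e. ~20% of limit
```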
NASA Astrophysics Data System (ADS)
Gelman, David; Schwartz, Steven D.
2010-05-01
The recently developed quantum-classical method has been applied to the study of dissipative dynamics in multidimensional systems. The method is designed to treat many-body systems consisting of a low dimensional quantum part coupled to a classical bath. Assuming the approximate zeroth order evolution rule, the corrections to the quantum propagator are defined in terms of the total Hamiltonian and the zeroth order propagator. Then the corrections are taken to the classical limit by introducing the frozen Gaussian approximation for the bath degrees of freedom. The evolution of the primary part is governed by the corrected propagator yielding the exact quantum dynamics. The method has been tested on two model systems coupled to a harmonic bath: (i) an anharmonic (Morse) oscillator and (ii) a double-well potential. The simulations have been performed at zero temperature. The results have been compared to the exact quantum simulations using the surrogate Hamiltonian approach.
Layout-aware simulation of soft errors in sub-100 nm integrated circuits
NASA Astrophysics Data System (ADS)
Balbekov, A.; Gorbunov, M.; Bobkov, S.
2016-12-01
A Single Event Transient (SET) caused by a charged particle traveling through the sensitive volume of an integrated circuit (IC) may lead to various errors in digital circuits. In technologies below 180 nm, a single particle can affect multiple devices, causing multiple SETs. This adds complexity to the design of fault-tolerant devices, because schematic-level design techniques become useless without consideration of the layout. The most common layout mitigation technique is spatial separation of the sensitive nodes of hardened circuits. Spatial separation decreases circuit performance and increases power consumption. Spacing should thus be kept reasonable, and its scaling follows the scaling trend of device dimensions. This paper presents the development of an SET simulation approach comprising SPICE simulation with a "double exponent" current source as the SET model. The technique uses the layout in GDSII format to locate nearby devices that can be affected by a single particle and that can share the generated charge. The developed software tool automates multiple simulations and gathers the produced data, presenting it as a sensitivity map. Examples of simulations of fault-tolerant cells and their sensitivity maps are presented in this paper.
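The "double exponent" current source mentioned above is commonly written as the difference of two exponentials, normalized so that the integrated current equals the collected charge. The collected charge and time constants below are illustrative, not values from the paper:

```python
import math

def set_current(t, q_coll=150e-15, tau_f=200e-12, tau_r=50e-12):
    """Double-exponential SET current pulse, i(t) in amperes, normalized
    so the total deposited charge is q_coll (here 150 fC, illustrative)."""
    i0 = q_coll / (tau_f - tau_r)
    return i0 * (math.exp(-t / tau_f) - math.exp(-t / tau_r))

# integrate numerically to confirm the deposited charge equals q_coll
dt = 1e-12
charge = sum(set_current(k * dt) * dt for k in range(5000))
print(charge)  # ≈ 150 fC
```

In the SPICE flow described above, one such source is attached to each struck node; charge sharing between nearby devices found in the GDSII layout can be modeled by splitting q_coll among several sources.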
Efficient simulation of cardiac electrical propagation using high order finite elements.
Arthurs, Christopher J; Bishop, Martin J; Kay, David
2012-05-20
We present an application of high order hierarchical finite elements for the efficient approximation of solutions to the cardiac monodomain problem. We detail the hurdles which must be overcome in order to achieve theoretically-optimal errors in the approximations generated, including the choice of method for approximating the solution to the cardiac cell model component. We place our work on a solid theoretical foundation and show that it can greatly improve the accuracy in the approximation which can be achieved in a given amount of processor time. Our results demonstrate superior accuracy over linear finite elements at a cheaper computational cost and thus indicate the potential indispensability of our approach for large-scale cardiac simulation.
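For orientation only: the monodomain problem couples a diffusion equation for transmembrane voltage to a cell model. The sketch below uses a plain explicit finite-difference scheme with FitzHugh-Nagumo kinetics, which is deliberately much simpler than the paper's high-order hierarchical finite elements and detailed cell models; all parameter values are illustrative.

```python
def monodomain_fhn(nx=200, nt=2000, dx=0.1, dt=0.01, D=0.1,
                   a=0.13, eps=0.01, beta=0.5):
    """Explicit finite-difference sketch of the 1-D monodomain equation
    dv/dt = D d2v/dx2 + v(v-a)(1-v) - w,   dw/dt = eps*(beta*v - w),
    with FitzHugh-Nagumo kinetics as a stand-in for a cardiac cell model."""
    v = [0.0] * nx        # transmembrane voltage (dimensionless)
    w = [0.0] * nx        # recovery variable
    for i in range(10):   # stimulate the left end of the cable
        v[i] = 1.0
    for _ in range(nt):
        lap = [0.0] * nx
        for i in range(1, nx - 1):
            lap[i] = (v[i - 1] - 2 * v[i] + v[i + 1]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]          # no-flux boundaries
        for i in range(nx):
            dv = D * lap[i] + v[i] * (v[i] - a) * (1 - v[i]) - w[i]
            dw = eps * (beta * v[i] - w[i])
            v[i] += dt * dv
            w[i] += dt * dw
    return v

v = monodomain_fhn()
# after 20 time units the depolarization front has moved into the cable
print(max(v))
```

The accuracy bottleneck the paper addresses is visible even here: the sharp depolarization front forces a fine mesh with low-order discretizations, which is what high-order elements mitigate.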
Bizzarri, A.; Dunham, Eric M.; Spudich, P.
2010-01-01
We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequencies than that of a subshear one. (2) When a Mach pulse is present, its high-frequency content overwhelms that arising from stress heterogeneity. The present numerical experiments indicate that a Mach pulse causes approximately an ω^(−1.7) high-frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation
NASA Astrophysics Data System (ADS)
Sen, Seema; Lake, Markus; Kroppen, Norman; Farber, Peter; Wilden, Johannes; Schaaf, Peter
2017-02-01
This study describes the self-propagating exothermic reaction in Ti/Al reactive multilayer foils using experiments and computational fluid dynamics simulation. Ti/Al foils with molar ratios of 1Ti/1Al, 1Ti/2Al and 1Ti/3Al were fabricated by magnetron sputtering. Microstructural characteristics of the unreacted and reacted foils were analyzed using electron and atomic force microscopy. After electrical ignition, the influence of the ignition potential on reaction propagation was experimentally investigated. The reaction front propagates with a velocity between 0.68 ± 0.4 m/s and 2.57 ± 0.6 m/s, depending on the input ignition potential and the chemical composition. The 1Ti/3Al reactive foil exhibits both steady-state and unsteady wavelike reaction propagation. Moreover, the numerical computational fluid dynamics (CFD) simulation shows the time-dependent temperature flow and atomic mixing in a nanoscale reaction zone. The CFD simulation also indicates the potential for simulating exothermic reactions in nanoscale Ti/Al foils.
Guo, Min; Abbott, Derek; Lu, Minhua; Liu, Huafeng
2016-03-01
Shear wave propagation speed has been regarded as an attractive indicator for quantitatively measuring the intrinsic mechanical properties of soft tissues. While most existing techniques use acoustic radiation force (ARF) excitation with a focal-spot region based on linear array transducers, we employ a special ARF with a focal-line region and apply it to viscoelastic materials to create shear waves. First, a two-dimensional capacitive micromachined ultrasonic transducer with 64 × 128 fully controllable elements is realised and simulated to generate this special ARF. Then three-dimensional finite element models are developed to simulate the resulting shear wave propagation through tissue phantom materials. Three different phantoms are explored in our simulation study: (a) an isotropic viscoelastic medium, (b) an isotropic viscoelastic medium with a cylindrical inclusion, and (c) a transverse isotropic viscoelastic medium. For each phantom, the ARF creates a quasi-plane shear wave with a preferential propagation direction perpendicular to the focal-line excitation. The propagation of the quasi-plane shear wave is investigated and then used to reconstruct shear moduli sequentially after estimation of the shear wave speed. In the phantom with a transverse isotropic viscoelastic medium, the anisotropy results in maximum speed parallel to the fiber direction and minimum speed perpendicular to the fiber direction. The simulation results show that the line excitation extends the displacement field to obtain a large imaging field in comparison with spot excitation, and demonstrate its potential usage in measuring the mechanical properties of anisotropic tissues.
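Once the shear wave speed has been estimated, the shear modulus follows from mu = rho * c_s^2 under a linear elastic, locally isotropic approximation. A tiny sketch with illustrative speeds (the density and the parallel/perpendicular speeds below are placeholders, not values from the study):

```python
def shear_modulus(c_s, rho=1000.0):
    """Shear modulus in Pa from tracked shear wave speed: mu = rho * c_s^2.
    rho ~ 1000 kg/m^3 is a typical soft-tissue phantom density."""
    return rho * c_s**2

# anisotropic phantom: faster along fibers than across them (illustrative speeds in m/s)
mu_parallel = shear_modulus(3.0)       # 9 kPa
mu_perpendicular = shear_modulus(1.5)  # 2.25 kPa
print(mu_parallel, mu_perpendicular)
```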
Kowalewski, Markus; Mukamel, Shaul
2015-07-28
Femtosecond Stimulated Raman Spectroscopy (FSRS) signals that monitor the excited state conical intersections dynamics of acrolein are simulated. An effective time dependent Hamiltonian for two C-H vibrational marker bands is constructed on the fly using a local mode expansion combined with a semi-classical surface hopping simulation protocol. The signals are obtained by a direct forward and backward propagation of the vibrational wave function on a numerical grid. Earlier work is extended to fully incorporate the anharmonicities and intermode couplings.
Causes and cures for errors in the simulation of ion extraction from plasmas
Becker, R.
2006-03-15
For many years, computer programs have been available to simulate the extraction of positive ions from plasmas. The results of such simulations may not always agree with measurements. There are different reasons for this: the mathematical formulation must match the simulated physics, the number of meshes must be high enough to correctly take into account the nonlinear space charge in the sheath, and ray tracing must be done in sufficiently small steps, using numerically correct field components and partial derivatives. In addition to these hidden problems, the user may introduce errors by a wrong choice of parameters that do not match the assumptions of the mathematical formulation. Examples are the use of a positive-ion extraction program for the extraction of negative ions, the choice of a wrong angle between the plasma electrode and the beam boundary in the vicinity of the meniscus, and the use of too few trajectories. The design of extraction electrodes generally aims to optimize the optical properties and the current of the ion beam. However, it is also important to take into account the surface fields in order to avoid dark currents and sparking.
Alastruey, Jordi; Khir, Ashraf W.; Matthys, Koen S.; Segers, Patrick; Sherwin, Spencer J.; Verdonck, Pascal R.; Parker, Kim H.; Peiró, Joaquim
2011-01-01
The accuracy of the nonlinear one-dimensional (1-D) equations of pressure and flow wave propagation in Voigt-type visco-elastic arteries was tested against measurements in a well-defined experimental 1:1 replica of the 37 largest conduit arteries in the human systemic circulation. The parameters required by the numerical algorithm were directly measured in the in vitro setup and no data fitting was involved. The inclusion of wall visco-elasticity in the numerical model reduced the underdamped high-frequency oscillations obtained using a purely elastic tube law, especially in peripheral vessels, as was previously reported [Matthys et al., 2007. Pulse wave propagation in a model human arterial network: Assessment of 1-D numerical simulations against in vitro measurements. J. Biomech. 40, 3476–3486]. In comparison to the purely elastic model, visco-elasticity significantly reduced the average relative root-mean-square errors between numerical and experimental waveforms over the 70 locations measured in the in vitro model: from 3.0% to 2.5% (p < 0.012) for pressure and from 15.7% to 10.8% (p < 0.002) for the flow rate. In the frequency domain, average relative errors between numerical and experimental amplitudes from the 5th to the 20th harmonic decreased from 0.7% to 0.5% (p < 0.107) for pressure and from 7.0% to 3.3% (p < 10^-6) for the flow rate. These results provide additional support for the use of 1-D reduced modelling to accurately simulate clinically relevant problems at a reasonable computational cost. PMID:21724188
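The relative root-mean-square error metric used above can be sketched as follows. Normalizing by the RMS of the measured waveform is one common convention; the paper's exact definition may differ in detail, and the toy waveforms are invented for illustration.

```python
import math

def relative_rms_error(numerical, experimental):
    """Relative RMS error between a simulated and a measured waveform,
    normalized by the RMS of the measurement (one common convention)."""
    num = sum((n - e) ** 2 for n, e in zip(numerical, experimental))
    den = sum(e ** 2 for e in experimental)
    return math.sqrt(num / den)

# toy waveforms: a sine and a slightly attenuated copy of it
t = [k / 100.0 for k in range(200)]
exp_wave = [math.sin(2 * math.pi * x) for x in t]
sim_wave = [0.97 * v for v in exp_wave]
print(relative_rms_error(sim_wave, exp_wave))  # ≈ 0.03, i.e. a 3% error
```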
NASA Astrophysics Data System (ADS)
Wang, C.; Winterfeld, P. H.; Wu, Y. S.; Wang, Y.; Chen, D.; Yin, C.; Pan, Z.
2014-12-01
Hydraulic fracturing combined with horizontal drilling has made it possible to economically produce natural gas from unconventional shale gas reservoirs. An efficient methodology for evaluating hydraulic fracturing operation parameters, such as fluid and proppant properties, injection rates, and wellhead pressure, is essential for the evaluation and efficient design of these processes. Traditional numerical evaluation and optimization approaches are usually based on simulated fracture properties such as the fracture area. In our opinion, a methodology based on simulated production data is better, because production is the goal of hydraulic fracturing and we can calibrate this approach with production data that is already known. This numerical methodology requires a fully-coupled hydraulic fracture propagation and multi-phase flow model. In this paper, we present a general fully-coupled numerical framework to simulate hydraulic fracturing and post-fracture gas well performance. This three-dimensional, multi-phase simulator focuses on: (1) fracture width increase and fracture propagation that occur as slurry is injected into the fracture, (2) erosion caused by fracture fluids and leakoff, (3) proppant subsidence and flowback, and (4) multi-phase fluid flow through variously scaled anisotropic natural and man-made fractures. Mathematical and numerical details on how to fully couple the fracture propagation and fluid flow parts are discussed. Hydraulic fracturing and production operation parameters, and properties of the reservoir, fluids, and proppants, are taken into account. The well may be horizontal, vertical, or deviated, as well as open-hole or cemented. The simulator is verified based on benchmarks from the literature and we show its application by simulating fracture network (hydraulic and natural fractures) propagation and production data history matching of a field in China. We also conduct a series of real-data modeling studies with different combinations of
Simulation systems for tsunami wave propagation forecasting within the French tsunami warning center
NASA Astrophysics Data System (ADS)
Gailler, A.; Hébert, H.; Loevenbruck, A.; Hernandez, B.
2012-04-01
Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed, but they present a challenge to run in real time, partly due to computational limitations and also to a lack of detailed knowledge of the earthquake rupture parameters. A first-generation model-based tsunami prediction system is being developed as part of the French Tsunami Warning Center that will be operational by mid 2012. It involves a database of pre-computed unit source functions (i.e., a number of tsunami model runs that are calculated ahead of time and stored) corresponding to tsunami scenarios generated by a source of seismic moment 1.75E+19 N.m with a rectangular fault 25 km by 20 km in size and 1 m in slip. The faults of the unit functions are placed adjacent to each other, following the discretization of the main seismogenic faults bounding the western Mediterranean and North-East Atlantic basins. An automated composite-scenario calculation tool is implemented to allow the simulation of any tsunami propagation scenario (i.e., of any seismic moment). The strategy is based on linear combinations and scaling of a finite number of pre-computed unit source functions. The number of unit functions involved varies with the magnitude of the desired composite solution, and the combined wave heights are multiplied by a given scaling factor to produce the new arbitrary scenario. Uncertainty in the magnitude of the detected event and inaccuracy in the epicenter location are taken into account in the composite-scenario calculation. For one tsunamigenic event, the tool finally produces three warning maps (i.e., most likely, minimum and maximum scenarios) together with the rough decision matrix representation. A no-dimension code representation is chosen to show zones in the main axis of energy at the basin
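The linear-combination-and-scaling strategy can be sketched in a few lines. The unit waveforms, weights, and scaling factor below are invented for illustration; the approach is only valid while deep-water propagation remains linear in the source amplitude.

```python
def composite_scenario(unit_waveforms, weights, scale):
    """Linear combination of pre-computed unit-source tsunami waveforms,
    scaled to the target seismic moment. Weights select and weight the
    adjacent unit faults covering the detected epicenter."""
    n = len(unit_waveforms[0])
    combined = [0.0] * n
    for wave, wgt in zip(unit_waveforms, weights):
        for i, h in enumerate(wave):
            combined[i] += wgt * h
    return [scale * h for h in combined]

# two adjacent unit sources (illustrative wave-height time series, metres)
unit_a = [0.00, 0.02, 0.05, 0.03, 0.01]
unit_b = [0.00, 0.01, 0.04, 0.05, 0.02]
# target event with ~4x the unit seismic moment, spread over both faults
print(composite_scenario([unit_a, unit_b], weights=[1.0, 1.0], scale=4.0))
```

Because each unit run is pre-computed, producing the most-likely, minimum, and maximum warning maps for a detected event reduces to three cheap combinations rather than three new propagation simulations.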
Numerical simulation of turbulent stratified flame propagation in a closed vessel
NASA Astrophysics Data System (ADS)
Gruselle, Catherine; Lartigue, Ghislain; Pepiot, Perrine; Moureau, Vincent; D'Angelo, Yves
2012-11-01
Reducing pollutant emissions while keeping a high combustion efficiency and a low fuel consumption is an important challenge for both gas turbines (GT) and internal combustion engines (ICE). To fulfill these new constraints, stratified combustion may constitute an efficient strategy. A tabulated chemistry approach based on FPI, combined with a low-Mach-number method, is applied in the analysis of a turbulent propane-air flame with equivalence ratio (ER) stratification, which has been studied experimentally by Balusamy [S. Balusamy, Ph.D Thesis, INSA-Rouen (2010)]. Flame topology, along with flame velocity statistics, is well reproduced in the simulation, even though time-history effects are not accounted for in the tabulated approach. However, these effects may become significant when exhaust gas recirculation (EGR) is introduced. To better quantify them, both ER- and EGR-stratified two-dimensional flames are simulated using finite-rate chemistry and a semi-detailed mechanism for propane oxidation. The numerical implementation is first investigated in terms of efficiency and accuracy, with a focus on splitting errors. The resulting flames are then analyzed to investigate potential extensions of the FPI technique to EGR stratification.
Booher, Stephen R.; Bacon, Larry Donald
2006-02-01
is only evaluated along a 2-D path in the vertical orientation. This precludes modeling propagation in the urban canyons of metropolitan areas, where horizontal paths are dominant. It also precludes modeling exterior to interior propagation. In view of the apparent inadequacy of urban propagation within mission level models, as evidenced by EADSIM, the study also attempts to address possible solutions to the problem. Correction of the sparsing techniques in both TIREM and SEKE models is recommended. Both SEKE and TIREM are optimized for DTED level 1 data, sparsed at 3 arc seconds resolution. This led to significant errors when map data was sparsed at higher or lower resolution. TIREM's errors would be significantly reduced if the 999 point array limit was eliminated. This would permit using interval sizes equal to the map resolution for larger areas. This same problem could be fixed in SEKE by changing the interval spacing from a fixed 3 arc second resolution (≈93 meters) to an interval which is set at the map resolution. Additionally, the cell elevation interpolation method which TIREM uses is inappropriate for the man-made structures encountered in urban environments. Turning this method of determining height off, or providing a selectable switch is desired. In the near term, it appears that further research into ray-tracing models is appropriate. Codes such as RF-ProTEC, which can be dynamically linked to mission level models such as EADSIM, can provide the higher fidelity propagation calculations required, and still permit the dynamic interactions required of the mission level model. Additional research should also be conducted on the best methods of representing man-made structures to determine whether codes other than ray-trace can be used.
Error-related negativity in the skilled brain of pianists reveals motor simulation.
Proverbio, Alice Mado; Cozzi, Matteo; Orlandi, Andrea; Carminati, Manuel
2017-03-27
Evidence has been provided of a crucial role of multimodal audio-visuomotor processing in subserving musical ability. In this paper we investigated whether musical audiovisual stimulation might trigger the activation of motor information in the brain of professional pianists, due to the presence of permanent gesture/sound associations. To this aim, EEG was recorded in 24 pianists and naïve participants engaged in the detection of rare targets while watching hundreds of video clips showing a pair of hands in the act of playing, along with a compatible or incompatible piano soundtrack. Hand size and apparent distance allowed self-ownership and agency illusions, and therefore motor simulation. Event-related potentials (ERPs) and the relative source reconstruction showed the presence of an error-related negativity (ERN) to incongruent trials at anterior frontal scalp sites only in pianists, with no difference in naïve participants. The ERN was mostly explained by an anterior cingulate cortex (ACC) source. Other sources included "hands" IT regions, the superior temporal gyrus (STG) involved in conjoined auditory and visuomotor processing, SMA and cerebellum (representing and controlling motor subroutines), and regions involved in body-part representation (somatosensory cortex, uncus, cuneus and precuneus). The findings demonstrate that instrument-specific audiovisual stimulation is able to trigger error detection and correction neural responses via motor resonance and mirroring, being a possible aid in learning and rehabilitation.
Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD
Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.; Cyr, Eric C.; Wildey, Timothy Michael
2014-01-01
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.
Low-cost simulation of guided wave propagation in notched plate-like structures
NASA Astrophysics Data System (ADS)
Glushkov, E.; Glushkova, N.; Eremin, A.; Giurgiutiu, V.
2015-09-01
The paper deals with the development of low-cost tools for fast computer simulation of guided wave propagation and diffraction in plate-like structures of variable thickness. It focuses on notched surface irregularities, which are the basic model for corrosion damage. Their detection and identification by means of active ultrasonic structural health monitoring technologies assume the use of guided waves generated and sensed by piezoelectric wafer active sensors, as well as the use of laser Doppler vibrometry for surface wave scanning and visualization. To create a theoretical basis for these technologies, analytically based computer models of various complexity have been developed. The simplest models, based on the Euler-Bernoulli beam and Kirchhoff plate equations, have exhibited a sufficiently wide frequency range of reasonable agreement with the results obtained from more complex integral-equation-based models. Being computationally inexpensive, they allow one to carry out a fast parametric analysis revealing characteristic features of wave patterns, which can then be refined using the more complex models. In particular, the effect of resonance wave energy transmission through deep notches was revealed within the plate model and then validated by the integral-equation-based calculations and experimental measurements.
Simulation study of axial ultrasonic wave propagation in heterogeneous bovine cortical bone.
Hata, Toshiho; Nagatani, Yoshiki; Takano, Koki; Matsukawa, Mami
2016-11-01
The effect of the heterogeneity of long cortical bone is an important factor when applying the axial transmission technique. In this study, the axial longitudinal wave velocity distributions in specimens from the mid-shaft of a bovine femur were measured in the MHz range. Bilinear interpolation and the piecewise cubic Hermite interpolating polynomial method were used to construct three-dimensional (3D) axial velocity models with a resolution of 40 μm. By assuming uniaxial anisotropy of the bone and using the results of previous experimental studies [Yamato, Matsukawa, Yanagitani, Yamazaki, Mizukawa, and Nagano (2008b). Calcified Tissue Int. 82, 162-169; Nakatsuji, Yamamoto, Suga, Yanagitani, Matsukawa, Yamazaki, and Matsuyama (2011). Jpn. J. Appl. Phys. 50, 07HF18], the distributions of all elastic moduli were estimated to obtain a 3D heterogeneous bone model and a uniform model. In the heterogeneous model, moduli at the surface were smaller than those inside the model. The elastic finite-difference time-domain method was used to simulate axial ultrasonic wave propagation in these models. In the heterogeneous model, the wavefront of the first arriving signal (FAS) depended on the heterogeneity, and the FAS velocity depended on the measurement position. These phenomena were not observed in the uniform model.
NASA Astrophysics Data System (ADS)
Okamoto, H.; Endo, M.; Fukushima, K.; Higaki, H.; Ito, K.; Moriya, K.; Yamaguchi, S.; Lund, S. M.
2014-01-01
An overview is given of the novel beam-dynamics experiments based on compact non-neutral plasma traps at Hiroshima University. We have designed and constructed two different classes of trap systems, one of which uses a radio-frequency electric field (Paul trap) and the other uses an axial magnetic field (Penning trap) for transverse plasma confinement. These systems are called "S-POD" (Simulator for Particle Orbit Dynamics). The S-POD systems can approximately reproduce the collective motion of a charged-particle beam propagating through long alternating-gradient (AG) quadrupole focusing channels using the Paul trap and long continuous focusing channels using the Penning trap. This allows us to study various beam-dynamics issues in compact and inexpensive experiments without relying on large-scale accelerators. So far, the linear Paul traps have been applied for the study of resonance-related issues including coherent-resonance-induced stop bands and their dependence on AG lattice structures, resonance crossing in fixed-field AG accelerators, ultralow-emittance beam stability, etc. The Penning trap with multi-ring electrodes has been employed primarily for the study of beam halo formation driven by initial distribution perturbations. In this paper, we briefly overview the S-POD systems, and then summarize recent experimental results on resonance effects and halo formation.
Estimation of crosstalk in LED fNIRS by photon propagation Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Iwano, Takayuki; Umeyama, Shinji
2015-12-01
fNIRS (functional near-infrared spectroscopy) can measure brain activity non-invasively and has advantages such as low cost and portability. While conventional fNIRS has used laser light, fNIRS based on LED light has recently become common. Using LEDs, fNIRS equipment can be made more inexpensive and more portable. LED light, however, has a wider illumination spectrum than laser light, which may change the crosstalk between the calculated concentration changes of oxygenated and deoxygenated hemoglobin. The crosstalk is caused by differences in light path length in the head tissues depending on the wavelengths used. We conducted Monte Carlo simulations of photon propagation in the tissue layers of the head (scalp, skull, CSF, gray matter, and white matter) to estimate the light path length in each layer. Based on the estimated path lengths, the crosstalk in fNIRS using LED light was calculated. Our results showed that LED light increases the crosstalk more than laser light does when certain combinations of wavelengths are adopted. Even in such cases, the crosstalk increase from using LED light can be effectively suppressed by replacing the extinction coefficients used in the hemoglobin calculation with their weighted averages over the illumination spectrum.
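The crosstalk mechanism described above can be illustrated with a minimal sketch of the modified Beer-Lambert inversion. All numbers below (extinction coefficients, path lengths) are hypothetical placeholders, not values from the study; the point is only that a wavelength-dependent path-length error mixes the two chromophores:

```python
import numpy as np

# Hypothetical molar extinction coefficients [1/(mM*cm)];
# rows = two wavelengths, columns = (HbO, HbR). Illustrative only.
eps = np.array([[1.10, 0.70],
                [1.20, 0.90]])

def apparent_dc(dc_true, L_true, L_assumed):
    """Recover concentration changes via the modified Beer-Lambert law.

    Forward model:  dOD_i = (eps[i] @ dc_true) * L_true[i]
    Reconstruction: eps @ dc_app = dOD / L_assumed
    If the assumed path lengths differ from the true wavelength-
    dependent ones, HbO leaks into HbR and vice versa (crosstalk).
    """
    dOD = (eps @ dc_true) * L_true
    return np.linalg.solve(eps, dOD / L_assumed)

L_true = np.array([6.0, 5.5])   # effective path lengths [cm], hypothetical
exact = apparent_dc(np.array([1.0, 0.0]), L_true, L_true)
biased = apparent_dc(np.array([1.0, 0.0]), L_true, np.array([5.7, 5.7]))
# exact recovers [1, 0]; biased shows a spurious HbR component.
```

A pure HbO change thus appears partly as an HbR change whenever the assumed path lengths are wrong, which is exactly the effect a broad LED spectrum aggravates.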
Simulation of crack propagation in fiber-reinforced concrete by fracture mechanics
Zhang, Jun; Li, Victor C.
2004-02-01
Mode I crack propagation in fiber-reinforced concrete (FRC) is simulated by a fracture mechanics approach. A superposition method is applied to calculate the crack-tip stress intensity factor. The model relies on the fracture toughness of hardened cement paste (K_IC) and the crack bridging law, the so-called stress-crack width (σ-δ) relationship of the material, as the fundamental material parameters for model input. As two examples, experimental data from steel FRC beams under three-point bending load are analyzed with the present fracture mechanics model. Good agreement has been found between model predictions and experimental results in terms of flexural stress-crack mouth opening displacement (CMOD) diagrams. These analyses and comparisons confirm that the structural performance of concrete and FRC elements, such as beams in bending, can be predicted by the simple fracture mechanics model as long as the related material properties, K_IC and the (σ-δ) relationship, are known.
Sergeeva, E A; Kirillin, M Yu; Priezzhev, A V
2006-11-30
The time profile of a femtosecond pulse propagating in media with a high scattering anisotropy (g ≥ 0.9) is studied in detail. An iteration method based on the expansion of the light field in a series in photon scattering orders, with account for the multiply scattered component, is proposed for analytic study of the structure of a scattered radiation pulse. The small-angle approximation of radiation transfer theory used for calculations of low-order scatterings is modified to take into account the spread in photon delay times. The shape of a scattered ultrashort pulse calculated theoretically agrees well with the shape obtained by Monte Carlo simulation. It is shown that the pulse profile in a scattering medium depends on the shape of the scattering phase function even when the anisotropy factor is conserved. A comparative analysis of the contributions from different scattering orders to the pulse structure is performed as a function of the optical properties of the scattering medium. (Special issue devoted to multiple radiation scattering in random media.)
Intensity images and statistics from numerical simulation of wave propagation in 3-D random media.
Martin, J M; Flatté, S M
1988-06-01
An extended random medium is modeled by a set of 2-D thin Gaussian phase-changing screens with phase power spectral densities appropriate to the natural medium being modeled. Details of the algorithm and limitations on its application to experimental conditions are discussed, concentrating on power-law spectra describing refractive-index fluctuations of the neutral atmosphere. Inner and outer scale effects on intensity scintillation spectra and intensity variance are also included. Images of single realizations of the intensity field at the observing plane are presented, showing that under weak scattering the small-scale Fresnel length structure of the medium dominates the intensity scattering pattern. As the strength of scattering increases, caustics and interference fringes around focal regions begin to form. Finally, in still stronger scatter, the clustering of bright regions begins to reflect the large-scale structure of the medium. For plane waves incident on the medium, physically reasonable inner scales do not produce the large values of intensity variance observed in the focusing region during laser propagation experiments over kilometer paths in the atmosphere. Values as large as experimental observations have been produced in the simulations, but they require inner scales of the order of 10 cm. Inclusion of an outer scale depresses the low-frequency end of the intensity spectrum and reduces the maximum of the intensity variance. Increasing the steepness of the power law also slightly increases the maximum value of intensity variance.
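The multiple-phase-screen approach described above can be sketched in a few lines. The fragment below is a deliberately simplified 1-D version (the paper uses 2-D screens with power-law spectra appropriate to the atmosphere); the Gaussian screen spectrum, grid parameters, and screen strengths are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N, dx = 1024, 1e-3                 # grid points and spacing [m] (illustrative)
wavelength = 0.5e-6
k0 = 2 * np.pi / wavelength
kx = 2 * np.pi * np.fft.fftfreq(N, dx)

def phase_screen(sigma_phi, l0):
    """Random phase screen with a toy Gaussian correlation of scale l0."""
    noise = rng.standard_normal(N)
    filt = np.exp(-0.5 * (kx * l0) ** 2)
    phi = np.fft.ifft(np.fft.fft(noise) * filt).real
    return sigma_phi * phi / phi.std()

def fresnel_step(u, dz):
    """Angular-spectrum free-space propagation over distance dz."""
    return np.fft.ifft(np.fft.fft(u) * np.exp(-1j * kx**2 * dz / (2 * k0)))

u = np.ones(N, complex)            # incident plane wave
for _ in range(10):                # 10 thin screens along the path
    u = u * np.exp(1j * phase_screen(0.3, 5e-3))
    u = fresnel_step(u, 50.0)

I = np.abs(u) ** 2
scint_index = I.var() / I.mean() ** 2   # normalized intensity variance
```

Because the screens are unit-modulus and the angular-spectrum step is unitary, total power is conserved; only the intensity statistics (e.g. the scintillation index) change with the screen strength and spectrum.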
Propagation of Electrical Excitation in a Ring of Cardiac Cells: A Computer Simulation Study
NASA Technical Reports Server (NTRS)
Kogan, B. Y.; Karplus, W. J.; Karpoukhin, M. G.; Roizen, I. M.; Chudin, E.; Qu, Z.
1996-01-01
The propagation of electrical excitation in a ring of cells described by the Noble, Beeler-Reuter (BR), Luo-Rudy I (LR I), and third-order simplified (TOS) mathematical models is studied using computer simulation. For each of the models it is shown that after transition from steady-state circulation to quasi-periodicity achieved by shortening the ring length (RL), the action potential duration (APD) restitution curve becomes a double-valued function and is located below the original (that of an isolated cell) APD restitution curve. The distributions of APD and diastolic interval (DI) along a ring for the entire range of RL corresponding to quasi-periodic oscillations remain periodic with the period slightly different from two RLs. The 'S' shape of the original APD restitution curve determines the appearance of the second steady-state circulation region for short RLs. For all the models and the wide variety of their original APD restitution curves, no transition from quasi-periodicity to chaos was observed.
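The role of the APD restitution curve in ring circulation can be illustrated with a toy iteration of APD against diastolic interval. The exponential restitution function and its parameters below are generic textbook placeholders, not the Noble/BR/LR I/TOS models used in the study:

```python
import math

def restitution(di, a=300.0, b=150.0, tau=100.0):
    """Toy exponential APD restitution curve (illustrative parameters, ms)."""
    return a - b * math.exp(-di / tau)

def iterate_ring(cycle_length, n=500, apd0=200.0):
    """Iterate APD_{n+1} = f(CL - APD_n); CL plays the role of RL / velocity.

    For long cycle lengths the map converges to steady circulation; as CL
    shortens, the slope of the restitution curve at the fixed point grows
    and the iteration loses stability (alternans / oscillations).
    """
    apd, history = apd0, []
    for _ in range(n):
        di = cycle_length - apd          # diastolic interval on the ring
        apd = restitution(di)
        history.append(apd)
    return history

stable = iterate_ring(600.0)             # long ring: steady-state circulation
```

With these placeholder parameters, stability is governed by the restitution slope (b/tau)*exp(-DI/tau) at the fixed point, the standard slope-one criterion.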
Benchmark of numerical tools simulating beam propagation and secondary particles in ITER NBI
Sartori, E.; Veltri, P.; Serianni, G.; Dlougach, E.; Hemsworth, R.; Singh, M.
2015-04-08
Injection of high energy beams of neutral particles is a method for plasma heating in fusion devices. The ITER injector, and its prototype MITICA (Megavolt ITER Injector and Concept Advancement), are large extrapolations from existing devices: therefore numerical modeling is needed to set thermo-mechanical requirements for all beam-facing components. As the power and charge deposition originates from several sources (primary beam, co-accelerated electrons, and secondary production by beam-gas, beam-surface, and electron-surface interaction), the beam propagation along the beam line is simulated by comprehensive 3D models. This paper presents a comparative study between two codes: BTR has been used for several years in the design of the ITER HNB/DNB components; SAMANTHA code was independently developed and includes additional phenomena, such as secondary particles generated by collision of beam particles with the background gas. The code comparison is valuable in the perspective of the upcoming experimental operations, in order to prepare a reliable numerical support to the interpretation of experimental measurements in the beam test facilities. The power density map calculated on the Electrostatic Residual Ion Dump (ERID) is the chosen benchmark, as it depends on the electric and magnetic fields as well as on the evolution of the beam species via interaction with the gas. Finally the paper shows additional results provided by SAMANTHA, like the secondary electrons produced by volume processes accelerated by the ERID fringe-field towards the Cryopumps.
A PIC-MCC code for simulation of streamer propagation in air
Chanrion, O.; Neubert, T.
2008-07-20
approximately 3 times the breakdown field. At higher altitudes, the background electric field must be relatively larger to create a similar field in a streamer tip because of the increased influence of photoionization. It is shown that the role of photoionization increases with altitude and that its effect is to decrease the space-charge fields and increase the streamer propagation velocity. Finally, the effects of electrons in the runaway regime on negative streamer dynamics are presented. It is shown that energetic electrons create enhanced ionization in front of negative streamers. The simulations suggest that the thermal runaway mechanism may operate at lower altitudes and be associated with lightning and thundercloud electrification, while the mechanism is unlikely to be important in sprite generation at higher altitudes in the mesosphere.
NASA Astrophysics Data System (ADS)
Chubar, Oleg; Berman, Lonny; Chu, Yong S.; Fluerasu, Andrei; Hulbert, Steve; Idir, Mourad; Kaznatcheev, Konstantine; Shapiro, David; Shen, Qun; Baltser, Jana
2011-09-01
Partially-coherent wavefront propagation calculations have proven to be feasible and very beneficial in the design of beamlines for 3rd and 4th generation Synchrotron Radiation (SR) sources. These types of calculations use the framework of classical electrodynamics for the description, on the same accuracy level, of the emission by relativistic electrons moving in magnetic fields of accelerators, and the propagation of the emitted radiation wavefronts through beamline optical elements. This enables accurate prediction of performance characteristics for beamlines exploiting high SR brightness and/or high spectral flux. Detailed analysis of radiation degree of coherence, offered by the partially-coherent wavefront propagation method, is of paramount importance for modern storage-ring based SR sources, which, thanks to extremely small sub-nanometer-level electron beam emittances, produce substantial portions of coherent flux in X-ray spectral range. We describe the general approach to partially-coherent SR wavefront propagation simulations and present examples of such simulations performed using "Synchrotron Radiation Workshop" (SRW) code for the parameters of hard X-ray undulator based beamlines at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. These examples illustrate general characteristics of partially-coherent undulator radiation beams in low-emittance SR sources, and demonstrate advantages of applying high-accuracy physical-optics simulations to the optimization and performance prediction of X-ray optical beamlines in these new sources.
Numerical errors in the computation of subfilter scalar variance in large eddy simulations
NASA Astrophysics Data System (ADS)
Kaul, C. M.; Raman, V.; Balarac, G.; Pitsch, H.
2009-05-01
Subfilter scalar variance is a key quantity for scalar mixing at the small scales of a turbulent flow and thus plays a crucial role in large eddy simulation of combustion. While prior studies have mainly focused on the physical aspects of modeling subfilter variance, the current work discusses variance models in conjunction with the numerical errors due to their implementation using finite-difference methods. A priori tests on data from direct numerical simulation of homogeneous turbulence are performed to evaluate the numerical implications of specific model forms. Like other subfilter quantities, such as kinetic energy, subfilter variance can be modeled according to one of two general methodologies. In the first of these, an algebraic equation relating the variance to gradients of the filtered scalar field is coupled with a dynamic procedure for coefficient estimation. Although finite-difference methods substantially underpredict the gradient of the filtered scalar field, the dynamic method is shown to mitigate this error through overestimation of the model coefficient. The second group of models utilizes a transport equation for the subfilter variance itself or for the second moment of the scalar. Here, it is shown that the model formulation based on the variance transport equation is consistently biased toward underprediction of the subfilter variance. The numerical issues in the variance transport equation stem from discrete approximations to chain-rule manipulations used to derive convection, diffusion, and production terms associated with the square of the filtered scalar. These approximations can be avoided by solving the equation for the second moment of the scalar, suggesting the numerical superiority of the second-moment formulation.
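The gradient underprediction noted above is the classic modified-wavenumber effect of finite differences, and is easy to demonstrate on a single Fourier mode (the grid size and wavenumber below are arbitrary choices for illustration):

```python
import numpy as np

N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
h = x[1] - x[0]

def central_diff(f):
    """Second-order central difference on a periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

k = 8                              # a moderately high resolved wavenumber
f = np.sin(k * x)
exact = k * np.cos(k * x)
fd = central_diff(f)

# Modified-wavenumber theory: the finite-difference derivative of
# sin(kx) has amplitude sin(k h)/h < k, so the gradient (and hence a
# gradient-based variance model) is systematically underpredicted.
ratio = fd.max() / exact.max()     # equals sin(k h)/(k h), below 1
```

The underprediction worsens as k h grows toward the grid cutoff, which is why the dynamic coefficient must compensate.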
Nakahata, K; Sugahara, H; Barth, M; Köhler, B; Schubert, F
2016-04-01
When modeling ultrasonic wave propagation in metals, it is important to introduce mesoscopic crystalline structures because the anisotropy of the crystal structure and the heterogeneity of grains disturb ultrasonic waves. In this paper, a three-dimensional (3D) polycrystalline structure generated by multiphase-field modeling was introduced to ultrasonic simulation for nondestructive testing. 3D finite-element simulations of ultrasonic waves were validated and compared with visualization results obtained from laser Doppler vibrometer measurements. The simulation results and measurements showed good agreement with respect to the velocity and front shape of the pressure wave, as well as multiple scattering due to grains. This paper discussed the applicability of a transversely isotropic approach to ultrasonic wave propagation in a polycrystalline metal with columnar structures.
NASA Astrophysics Data System (ADS)
Chang, Won-Seok; Kim, Jong-Ki; Cho, Jin-Ho; Lim, Jae-Hong
2016-09-01
With the advent of coherent X-ray sources, X-ray refraction has begun to be utilized for X-ray imaging of unprecedented sensitivity. The aim of this study was to develop a wave propagation simulator that provides a map of X-ray refraction after passing through an object. We applied the Fresnel diffraction integral for calculating the propagated wave and then obtained the refraction map by differentiating the phase in the refraction-analyzing direction. The simulation was validated by comparing the computed tomography (CT) reconstruction of a virtual phantom with its map of refractive index: the deviations were below 0.7% for soft tissues under our test condition. The simulator can be used for testing and developing highly-sensitive X-ray imaging techniques based on X-ray refraction analysis prior to experimentation.
NASA Astrophysics Data System (ADS)
Nagatani, Yoshiki; Imaizumi, Hirotaka; Fukuda, Takashi; Matsukawa, Mami; Watanabe, Yoshiaki; Otani, Takahiko
2006-09-01
In cancellous bone, longitudinal waves often separate into fast and slow waves depending on the alignment of bone trabeculae. This interesting phenomenon becomes an effective tool for the diagnosis of osteoporosis because wave propagation behavior depends on the bone structure. We have, therefore, simulated wave propagation in such a complex medium by the finite-difference time-domain (FDTD) method, using a three-dimensional X-ray computer tomography (CT) model of an actual cancellous bone. In this simulation, experimentally observed acoustic constants of the cortical bone were adopted. As a result, the generation of fast and slow waves was confirmed. The speed of fast waves and the amplitude of slow waves showed good correlations with the bone volume fraction. The simulated results were also compared with the experimental results obtained from the identical cancellous bone.
Simulated increases in body fat and errors in bone mineral density measurements by DXA and QCT.
Yu, Elaine W; Thomas, Bijoy J; Brown, J Keenan; Finkelstein, Joel S
2012-01-01
Major alterations in body composition, such as with obesity and weight loss, have complex effects on the measurement of bone mineral density (BMD) by dual-energy X-ray absorptiometry (DXA). The effects of altered body fat on quantitative computed tomography (QCT) measurements are unknown. We scanned a spine phantom by DXA and QCT before and after surrounding with sequential fat layers (up to 12 kg). In addition, we measured lumbar spine and proximal femur BMD by DXA and trabecular spine BMD by QCT in 13 adult volunteers before and after a simulated 7.5 kg increase in body fat. With the spine phantom, DXA BMD increased linearly with sequential fat layering at the normal (p < 0.01) and osteopenic (p < 0.01) levels, but QCT BMD did not change significantly. In humans, fat layering significantly reduced DXA spine BMD values (mean ± SD: -2.2 ± 3.7%, p = 0.05) and increased the variability of measurements. In contrast, fat layering increased QCT spine BMD in humans (mean ± SD: 1.5 ± 2.5%, p = 0.05). Fat layering did not change mean DXA BMD of the femoral neck or total hip in humans significantly, but measurements became less precise. Associations between baseline and fat-simulation scans were stronger for QCT of the spine (r² = 0.97) than for DXA of the spine (r² = 0.87), total hip (r² = 0.80), or femoral neck (r² = 0.75). Bland-Altman plots revealed that fat-associated errors were greater for DXA spine and hip BMD than for QCT trabecular spine BMD. Fat layering introduces error and decreases the reproducibility of DXA spine and hip BMD measurements in human volunteers. Although overlying fat also affects QCT BMD measurements, the error is smaller and more uniform than with DXA BMD. Caution must be used when interpreting BMD changes in humans whose body composition is changing.
Analysis of operator splitting errors for near-limit flame simulations
NASA Astrophysics Data System (ADS)
Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.
2017-04-01
High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, can fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on near-limit combustion phenomena. Analysis shows that the errors induced by decoupling the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis of the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and is second-order accurate over a wide range of time-step sizes. For the extinction and ignition processes, both the balanced splitting and the midpoint method yield accurate predictions, whereas the Strang splitting can lead to significant shifts in the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and a deviation in the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method, which solves reaction and diffusion together, matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory
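A PSR-type model problem of the kind described above can be mimicked with a toy scalar equation du/dt = R(u) + M(u) in which each substep admits an exact solution. The quadratic reaction term, the parameters, and the convergence check below are a minimal sketch of Strang splitting, not the chemistry or the schemes of the study:

```python
import math

def mix_exact(u, dt, u_in=1.0, tau=0.5):
    """Exact solution of the linear mixing step du/dt = (u_in - u)/tau."""
    return u_in + (u - u_in) * math.exp(-dt / tau)

def react_exact(u, dt, k=1.0):
    """Exact solution of the model reaction step du/dt = -k*u**2."""
    return u / (1.0 + k * u * dt)

def strang(u0, t_end, dt):
    """Strang splitting: half mixing, full reaction, half mixing."""
    u, n = u0, round(t_end / dt)
    for _ in range(n):
        u = mix_exact(u, dt / 2)
        u = react_exact(u, dt)
        u = mix_exact(u, dt / 2)
    return u

ref = strang(2.0, 1.0, 1e-4)                   # fine-step reference solution
err_full = abs(strang(2.0, 1.0, 0.1) - ref)
err_half = abs(strang(2.0, 1.0, 0.05) - ref)   # roughly 4x smaller (2nd order)
```

Away from limit phenomena the splitting error shrinks quadratically with the step; the abstract's point is that this clean behavior breaks down near ignition/extinction, where the decoupled steps can push the state across a stability boundary.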
Simulations of the propagation of multiple-FM smoothing by spectral dispersion on OMEGA EP
NASA Astrophysics Data System (ADS)
Kelly, J. H.; Shvydky, A.; Marozas, J. A.; Guardalben, M. J.; Kruschwitz, B. E.; Waxer, L. J.; Dorrer, C.; Hill, E.; Okishev, A. V.; Di Nicola, J.-M.
2013-02-01
A one-dimensional (1-D) smoothing by spectral dispersion (SSD) system for smoothing focal-spot nonuniformities using multiple modulation frequencies has been commissioned on one long-pulse beamline of OMEGA EP, the first use of such a system in a high-energy laser. Frequency modulation (FM) to amplitude modulation (AM) conversion in the infrared (IR) output, frequency conversion, and final optics affected the accumulation of B-integral in that beamline. Modeling of this FM-to-AM conversion using the code Miró [Morice, O., "Miró: Complete modeling and software for pulse amplification and propagation in high-power laser systems," Opt. Eng. 42(6), 1530-1541 (2003).] was used as input to set the beamline performance limits for picket (short) pulses with multi-FM SSD applied. This article first describes that modeling. The 1-D SSD analytical model of Chuang [Chuang, Y.-H., "Amplification of broad-bandwidth phase-modulated laser counterpropagating light waves in homogeneous plasma," Ph.D. thesis, University of Rochester (September 1991).] is first extended to the case of multiple modulators and then used to benchmark Miró simulations. Comparison is also made to an alternative analytic model developed by Hocquet et al. [Hocquet, S., Penninckx, D., Bordenave, E., Gouédard, C. and Jaouën, Y., "FM-to-AM conversion in high-power lasers," Appl. Opt. 47(18), 3338-3349 (2008).] With the confidence engendered by this benchmarking, Miró results for multi-FM SSD applied on OMEGA EP are then presented. The relevant output section(s) of the OMEGA EP Laser System are described. The additional B-integral in OMEGA EP IR components upstream of the frequency converters due to AM is modeled. The importance of locating the image of the SSD dispersion grating at the frequency converters is demonstrated. Finally, since frequency conversion is not performed in OMEGA EP's target chamber, the additional AM due to propagation to the target chamber's vacuum window is modeled.
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as the Kolmogorov microscale, the relative strength of temperature and salinity fluctuations, the rate of dissipation of the mean-squared temperature, and the rate of dissipation of turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index and, hence, on the BER are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
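The final step, averaging a conditional bit-error rate over the log-normal intensity PDF, can be sketched as follows. The OOK-style conditional BER, the SNR value, and the Monte Carlo averaging are illustrative assumptions; the paper's own expressions for the PCFT-beam scintillation index are not reproduced here:

```python
import numpy as np
from math import erfc, log, sqrt

rng = np.random.default_rng(1)
z = rng.standard_normal(200_000)       # common random numbers for fairness

def mean_ber(scint_index, snr=3.0):
    """Average an OOK-style bit-error rate over log-normal intensity fading.

    The log-normal samples have unit mean intensity; scint_index is the
    normalized intensity variance sigma_I^2, as in the abstract.
    """
    s2 = log(1.0 + scint_index)                    # log-intensity variance
    intensity = np.exp(-0.5 * s2 + sqrt(s2) * z)   # unit-mean samples
    arg = snr * intensity / (2.0 * sqrt(2.0))
    return float(np.mean([0.5 * erfc(a) for a in arg]))

b_low = mean_ber(0.1)    # mild scintillation
b_high = mean_ber(0.5)   # stronger scintillation -> higher average BER
```

Because the conditional BER is convex and decreasing in intensity, widening the log-normal spread at fixed mean necessarily raises the average BER, which is the qualitative trend the paper quantifies.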
Molecular-level Simulations of Shock Generation and Propagation in Polyurea
2011-01-26
propagating shock is shown though) at the computational cell faces normal to the x-direction. These shocks then propagate towards the computational-cell...in-elastic strain producing. However, as stated earlier, the amorphous nature of polyurea precludes a more detailed/quantitative description of the
NASA Astrophysics Data System (ADS)
Bourdine, Anton V.
2008-12-01
Simulation results of few-mode signal propagation over silica graded-index multimode optical fibers with a periodic, slowly varying core diameter, excited by a laser source, are presented. Weakly guiding irregular multimode fibers with an axially symmetric graded index profile with a central defect, local fluctuations, and a single outer cladding are considered. The core diameter is supposed to vary slowly along the fiber length according to a sine function whose period is more than 10^6 times greater than the fiber core diameter. The solution is based on a proposed time-domain model of a piecewise-regular multimode fiber link with few-mode signal propagation under laser-source excitation. The model takes into account differential mode delay, higher-order mode chromatic dispersion, mode mixing, and power diffusion. Some results of pulse dynamics calculations during propagation over the described fiber with a periodic, slowly varying core diameter are presented.
NASA Technical Reports Server (NTRS)
Rudraraju, Siva Shankar; Garikipati, Krishna; Waas, Anthony M.; Bednarcyk, Brett A.
2013-01-01
The phenomenon of crack propagation is among the predominant modes of failure in many natural and engineering structures, often leading to severe loss of structural integrity and catastrophic failure. Thus, the ability to understand and a priori simulate the evolution of this failure mode has been one of the cornerstones of applied mechanics and structural engineering and is broadly referred to as "fracture mechanics." The work reported herein focuses on extending this understanding, in the context of through-thickness crack propagation in cohesive materials, through the development of a continuum-level multiscale numerical framework, which represents cracks as displacement discontinuities across a surface of zero measure. This report presents the relevant theory, mathematical framework, numerical modeling, and experimental investigations of through-thickness crack propagation in fiber-reinforced composites using the Variational Multiscale Cohesive Method (VMCM) developed by the authors.
Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.
Harvey, Ashley R; Carden, Randy L
2009-08-01
Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that use of an mp3 player would result in an increase not only in driving errors while operating a driving simulator but also in driver anxiety scores. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. 32 participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. The number of driving errors per course (such as leaving the road, impacts with stationary objects, and loss of vehicular control) and anxiety scores were significantly higher when an iPod was in use. Anxiety scores were unrelated to the number of driving errors.
Impact of operational model nesting approaches and inherent errors for coastal simulations
NASA Astrophysics Data System (ADS)
Brown, Jennifer M.; Norman, Danielle L.; Amoudry, Laurent O.; Souza, Alejandro J.
2016-11-01
A region of freshwater influence (ROFI) under hypertidal conditions is used to demonstrate inherent problems for nested operational modelling systems. Such problems can impact the accurate simulation of freshwater export within shelf seas, so must be considered in coastal ocean modelling studies. In Liverpool Bay (our UK study site), freshwater inflow from 3 large estuaries forms a coastal front that moves in response to tides and winds. The cyclic occurrence of stratification and remixing is important for the biogeochemical cycles, as nutrient- and pollutant-loaded freshwater is introduced into the coastal system. Validation methods, using coastal observations from fixed moorings and cruise transects, are used to assess the simulated ROFI, focusing on the spatial structure and temporal variability of the front, as guidance for best-practice model setup. A structured modelling system using a 180 m grid nested within a 1.8 km grid demonstrates how compensation for error at the coarser resolution can have an adverse impact on the nested, high-resolution application. Using 2008, a year of typical calm and stormy periods with variable river influence, the sensitivities of the ROFI dynamics to initial and boundary conditions are investigated. It is shown that accurate representation of the initial water column structure is important at the regional scale and that the boundary conditions are most important at the coastal scale. Although increased grid resolution captures the frontal structure, the accuracy of the frontal position is determined by the offshore boundary conditions and therefore by the accuracy of the coarser regional model.
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) a Unit-Memory Convolutional Encoder module (UMCEncd); (2) a hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMC's, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC transformation (UMCTrans). The study of UMC's was driven, in part, by the desire to investigate high-rate convolutional codes, which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMC's were found which are good candidates for inner codes. Besides the further development of the simulator, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.
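The hard-decision Viterbi module (VitUMC) rests on the same trellis search as any convolutional decoder. As an illustration only (this is not the CLEAN code, and it uses an ordinary rate-1/2, constraint-length-3 code with octal generators (7, 5) rather than a unit-memory code), a minimal hard-decision Viterbi decoder might look like:

```python
# Hard-decision Viterbi decoding for the classic rate-1/2, K=3
# convolutional code with generator polynomials (7, 5) in octal.
# Illustrative sketch only; not the CLEAN simulator's UMC modules.

def encode(bits, g=(0b111, 0b101), K=3):
    """Encode a bit list; the shift register starts in the zero state."""
    state = 0
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state
        for gen in g:
            out.append(bin(reg & gen).count("1") & 1)  # parity of tapped bits
        state = reg >> 1
    return out

def viterbi_decode(received, g=(0b111, 0b101), K=3):
    """Minimum-Hamming-distance path search over the code trellis."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), len(g)):
        sym = received[i:i + len(g)]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                expect = [bin(reg & gen).count("1") & 1 for gen in g]
                dist = sum(x != y for x, y in zip(sym, expect))
                ns = reg >> 1
                m = metric[s] + dist
                if m < new_metric[ns]:       # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

With free distance 5, this code corrects any single channel bit error in a terminated block, which is the property the distance-function utilities (UMCdc, UMCdfree, UMCdr) are built to measure for UMC's.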
Fitzpatrick, Gianna M.; Wells, R. Glenn
2006-08-15
Heart disease is a leading killer in Canada, and positron emission tomography (PET) provides clinicians with in vivo metabolic information for diagnosing heart disease. Transmission data are usually acquired with ⁶⁸Ge, although the advent of PET/CT scanners has made computed tomography (CT) an alternative option. The fast data acquisition of CT compared to PET may cause potential misregistration problems, leading to inaccurate attenuation correction (AC). Using Monte Carlo simulations and an anthropomorphic dynamic computer phantom, this study determines the magnitude and location of respiratory-induced errors in radioactivity uptake measured in cardiac PET/CT. A homogeneous tracer distribution in the heart was considered. The AC was based on (1) a time-averaged attenuation map, (2) CT maps from a single phase of the respiratory cycle, and (3) CT maps phase-matched to the emission data. Circumferential profiles of the heart uptake were compared, and differences of up to 24% were found between the single-phase CT-AC method and the true phantom values. Simulation results were supported by a PET/CT canine study, which showed differences of up to 10% in heart uptake in the lung-heart boundary region when comparing ⁶⁸Ge- to CT-based AC with the CT map acquired at end inhalation.
Parallax error in long-axial field-of-view PET scanners—a simulation study
NASA Astrophysics Data System (ADS)
Schmall, Jeffrey P.; Karp, Joel S.; Werner, Matt; Surti, Suleman
2016-07-01
There is a growing interest in the design and construction of a PET scanner with a very long axial extent. One critical design challenge is the impact of the long axial extent on the scanner spatial resolution properties. In this work, we characterize the effect of parallax error in PET system designs having an axial field-of-view (FOV) of 198 cm (total-body PET scanner) using fully-3D Monte Carlo simulations. Two different scintillation materials were studied: LSO and LaBr3. The crystal size in both cases was 4 × 4 × 20 mm3. Several different depth-of-interaction (DOI) encoding techniques were investigated to characterize the improvement in spatial resolution when using a DOI capable detector. To measure spatial resolution we simulated point sources in a warm background in the center of the imaging FOV, where the effects of axial parallax are largest, and at several positions radially offset from the center. Using a line-of-response based ordered-subset expectation maximization reconstruction algorithm we found that the axial resolution in an LSO scanner degrades from 4.8 mm to 5.7 mm (full width at half max) at the center of the imaging FOV when extending the axial acceptance angle (α) from ±12° (corresponding to an axial FOV of 18 cm) to the maximum of ±67°—a similar result was obtained with LaBr3, in which the axial resolution degraded from 5.3 mm to 6.1 mm. For comparison we also measured the degradation due to radial parallax error in the transverse imaging FOV; the transverse resolution, averaging radial and tangential directions, of an LSO scanner was degraded from 4.9 mm to 7.7 mm, for a measurement at the center of the scanner compared to a measurement with a radial offset of 23 cm. Simulations of a DOI detector design improved the spatial resolution in all dimensions. The axial resolution in the LSO-based scanner, with α = ± 67°, was improved from 5.7 mm to 5.0 mm by
NASA Astrophysics Data System (ADS)
Sjöqvist, Lars; Henriksson, Markus; Fedina, Ekaterina; Fureby, Christer
2010-10-01
The exhaust from jet engines introduces extreme turbulence levels in the local environment around aircraft. This may degrade the performance of the electro-optical missile warning and laser-based DIRCM systems used to protect aircraft against heat-seeking missiles. Full-scale trials using real engines are expensive and difficult to perform, motivating numerical simulation of the turbulence properties within the jet engine exhaust. Large Eddy Simulation (LES) is a computational fluid dynamics method that can be used to calculate the spatial and temporal refractive index dynamics of the turbulent flow in the engine exhaust. From LES simulations the instantaneous refractive index in each grid point can be derived and interpolated to phase screens for numerical laser beam propagation, or used to estimate aberration effects from optical path differences. The high computational load of LES limits the available data in terms of computational volume and number of time steps. In addition, the phase screen method used in laser beam propagation may also be too slow. For this reason, extraction of statistical parameters from the turbulence field and statistical beam propagation methods are studied. The temporal variation of the refractive index is used to define a spatially varying structure constant. Ray tracing through the mean refractive index field provides integrated static aberrations and the path-integrated structure constant. These parameters can be used in classical statistical parameterised models describing propagation through turbulence. One disadvantage of the structure constant description is that the temporal information is lost. Methods for studying the variation of optical aberrations based on models of Zernike coefficients are discussed. The results of the propagation calculations using the different methods are compared to each other and to available experimental data. Advantages and disadvantages of the different methods are briefly discussed.
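The structure constant extracted from the LES field follows from the Kolmogorov structure function D_n(r) = C_n² r^(2/3). A minimal sketch (an assumed textbook form, not the authors' LES post-processing) for estimating C_n² from a sampled one-dimensional refractive-index series:

```python
import numpy as np

def structure_constant(n_samples, dx, r_index):
    """Estimate Cn^2 from a 1-D refractive-index series via the
    Kolmogorov structure function D_n(r) = Cn^2 * r^(2/3).

    n_samples : refractive-index values on a uniform grid
    dx        : grid spacing (m)
    r_index   : separation in grid points (must lie in the inertial range)
    """
    diffs = n_samples[r_index:] - n_samples[:-r_index]
    D = np.mean(diffs ** 2)          # structure function at separation r
    r = r_index * dx
    return D / r ** (2.0 / 3.0)
```

In practice the separation r must be chosen inside the inertial subrange of the simulated turbulence for the 2/3 power law, and hence this estimate, to be meaningful.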
Simulations of the magnet misalignments, field errors and orbit correction for the SLC north arc
Kheifets, S.; Chao, A.; Jaeger, J.; Shoaee, H.
1983-11-01
Given the intensity of linac bunches and their repetition rate, the desired SLC luminosity of 1.0 × 10^30 cm^-2 sec^-1 requires focusing the interacting bunches to a spot size in the micrometer (μm) range. The lattice that achieves this goal is obtained by careful design of both the arcs and the final focus systems. For the micrometer range of beam spot size, both the second-order geometric and chromatic aberrations may be completely destructive. The concept of the second-order achromat proved to be extremely important in this respect, and the arcs are built essentially as a sequence of such achromats. Between the end of the linac and the interaction point (IP) there are three special sections in addition to the regular structure: a matching section (MS) designed to match the phase space from the linac to the arcs, a reverse bend section (RB) which provides the matching when the sign of the curvature is reversed in the arc, and the final focus system (FFS). The second-order calculations are done by the program TURTLE. Using the TURTLE histogram in the x-y plane and assuming an identical histogram for the south arc, the corresponding 'luminosity' L is found. The simulation of misalignment and error effects has to be done simultaneously with the design and simulation of the orbit correction scheme. Even after the orbit is corrected and the beam can be transmitted through the vacuum chamber, focusing the beam to the desired size at the IP remains a serious potential problem. It is found, as will be elaborated later, that even for the best achieved orbit correction, additional corrections of the dispersion function and possibly the transfer matrix are needed. This report describes a few of the presently conceived correction schemes and summarizes some results of computer simulations done for the SLC north arc. 8 references, 12 figures, 6 tables.
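The 'luminosity' obtained from matched x-y histograms is, in essence, a beam-overlap integral. A schematic version (illustrative only; the function name, arguments, and normalization are assumptions, not the TURTLE calculation):

```python
import numpy as np

def overlap_luminosity(h_north, h_south, dx, dy, n1, n2, f_rep):
    """Geometric luminosity from two x-y density histograms of the
    colliding bunches, L = f * N1 * N2 * sum(p1 * p2) / (dx * dy),
    where p1, p2 are the histograms normalized to unit probability.

    h_north, h_south : 2-D occupancy histograms at the IP
    dx, dy           : bin widths (cm)
    n1, n2           : particles per bunch
    f_rep            : collision repetition rate (Hz)
    """
    p1 = h_north / h_north.sum()
    p2 = h_south / h_south.sum()
    return f_rep * n1 * n2 * np.sum(p1 * p2) / (dx * dy)
```

The sum over the bin-wise product is a discrete overlap integral: the tighter and better aligned the two distributions, the larger L, which is why misalignment-driven spot-size growth translates directly into luminosity loss.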
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2013-12-01
As it enables the understanding and quantification of the transfer of water in ecosystems and from ecosystems to the atmosphere, evapotranspiration is a key component in assessing climate impacts on hydrology and agriculture. In crop models, estimating the evapotranspiration rate requires first calculating potential or reference evapotranspiration from climate data. Different formulas, requiring more or less climate data, are used to compute reference evapotranspiration. The choice of formulation for this key process is very likely to affect calculated crop yield. The FAO recommends using the Penman-Monteith (PM) equation if all the climate data required for this equation are available, and the Hargreaves (H) equation when climate data, especially net radiation, are missing. The Priestley-Taylor (PT) equation is also widely used in crop models. Which of these equations is the most accurate when all the required climate data are available but contain errors? Does the choice of evapotranspiration equation affect crop yield projections in a context of climate change? Does the use of some equations induce more pessimistic crop yield projections? We studied the impact of the reference evapotranspiration equations on simulated crop yield using climate data with errors. Four equations (PM, H, and two versions of the PT equation) were tested by simulating pearl millet over 12 stations in Senegal. In this case, we found that using a PT equation may introduce a percent mean bias error of more than -35% on simulated crop yield, while it is limited to 2% when using the H equation. The influence of the evapotranspiration equation on the quantification of climate change impact on crop yield is examined by applying the AgMIP C3MP protocol over the 12 stations in Senegal and then analyzing ISI-AgMIP GGCM Intercomparison fast-track project outputs over the world. Our preliminary results show that crop yields computed using a PT equation are
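For reference, the Hargreaves equation needs only temperature data and extraterrestrial radiation, which is why the FAO recommends it when radiation measurements are missing. A sketch of its standard form (the crop-model implementations used in the study may differ in detail):

```python
import math

def hargreaves_et0(t_min, t_max, ra_mm_day):
    """Reference evapotranspiration (mm/day) via Hargreaves (1985):

        ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)

    t_min, t_max : daily min/max air temperature (deg C), t_max >= t_min
    ra_mm_day    : extraterrestrial radiation, expressed in mm/day of
                   evaporation equivalent (computed from latitude and
                   day of year, not measured)
    """
    t_mean = 0.5 * (t_min + t_max)
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)
```

Because the diurnal temperature range enters only under a square root, random errors in temperature propagate into ET0, and hence into simulated yield, far more weakly than radiation errors propagate through the PM or PT equations, consistent with the small bias found for H.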
A finite element beam propagation method for simulation of liquid crystal devices.
Vanbrabant, Pieter J M; Beeckman, Jeroen; Neyts, Kristiaan; James, Richard; Fernandez, F Anibal
2009-06-22
An efficient full-vectorial finite element beam propagation method is presented that uses higher order vector elements to calculate the wide angle propagation of an optical field through inhomogeneous, anisotropic optical materials such as liquid crystals. The full dielectric permittivity tensor is considered in solving Maxwell's equations. The wide applicability of the method is illustrated with different examples: the propagation of a laser beam in a uniaxial medium, the tunability of a directional coupler based on liquid crystals and the near-field diffraction of a plane wave in a structure containing micrometer scale variations in the transverse refractive index, similar to the pixels of a spatial light modulator.
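The full-vectorial finite element method is considerably more involved, but the underlying idea of beam propagation can be shown with a scalar split-step Fourier sketch (a toy paraxial stand-in under stated assumptions: isotropic index, no anisotropy or wide-angle terms, so it is not the authors' FEM scheme):

```python
import numpy as np

def split_step_bpm(field, dx, dz, n_steps, wavelength, n_profile, n0):
    """Paraxial scalar beam propagation by the split-step Fourier method:
    alternate a free-space diffraction half (applied in k-space) and a
    phase step from the transverse index profile n(x).

    field     : complex field samples on a uniform x grid
    dx, dz    : transverse grid spacing and propagation step (same units
                as wavelength)
    n_profile : refractive index at each x sample
    n0        : reference (background) index
    """
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(field.size, dx)
    diffract = np.exp(-1j * kx ** 2 * dz / (2 * k0 * n0))  # free-space step
    lens = np.exp(-1j * k0 * (n_profile - n0) * dz)        # index phase step
    for _ in range(n_steps):
        field = np.fft.ifft(np.fft.fft(field) * diffract)
        field = field * lens
    return field
```

Both factors have unit modulus, so the scheme conserves beam power exactly; capturing polarization coupling in anisotropic liquid crystal layers is precisely what forces the step up to the full-vectorial FEM formulation described in the abstract.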
NASA Astrophysics Data System (ADS)
Burt, Jonathan M.; Josyula, Eswar
2016-11-01
A modification to DSMC collision routines is proposed to eliminate or reduce collision separation error in numerical transport coefficients. This modification follows from earlier DSMC error analysis based on Green-Kubo theory, and is currently limited to the case of a hard sphere monatomic simple gas simulation with approximately isotropic collision separation statistics. Further adjustments to the DSMC collision algorithm are proposed to reduce collision separation error associated with a finite time step interval. It is shown analytically that, for random collision partner selection at the small time step limit with a cell size equal to the mean free path, collision separation error in viscosity is reduced by approximately 37% while thermal conductivity error is completely removed. In a demonstration case involving hypersonic flow over a cylinder, the proposed modification is found to allow for large error reductions in both the total force and heat transfer rate. Although this modification is not intended as a general solution to the problem of DSMC collision separation error, it is hoped that the concept demonstrated here of utilizing Green-Kubo analysis for DSMC error reduction will in the future find more widespread applicability.
NUMERICAL SIMULATION OF PROPAGATION AND SCATTERING OF THE MHD WAVES IN SUNSPOTS
NASA Astrophysics Data System (ADS)
Parchevsky, K.; Kosovichev, A. G.; Khomenko, E.; Collados, M.
2009-12-01
We present a comparison of numerical simulation results of MHD wave propagation in two different magnetostatic models of sunspots, referred to as the "deep" and "shallow" models. The "deep" model has a convex shape of the magnetic field lines near the photosphere and non-zero horizontal perturbations of the sound speed down to the bottom of the model (7.5 Mm). The "shallow" model has a concave shape of the magnetic field lines near the photosphere and a horizontally uniform sound speed below 2 Mm. A common feature of MHD wave behaviour in these two models is that, for weak magnetic field (less than 1 kG at the photosphere), waves reduce their amplitude when they reach the center of the sunspot and restore it after passing the center. This effect is stronger for the "deep" model than for the "shallow" model. The wave amplitude inside sunspots depends on the strength of the magnetic field. For the "shallow" model with a photospheric magnetic field of 2.2 kG, the wave amplitude inside the sunspot becomes larger than outside (the opposite of the weak-field case). The wave amplitude also depends on the distance of the source from the sunspot center. For the "shallow" model and a source distance of 9 Mm from the sunspot center, the wave amplitude at some moment (when the wavefront passes the sunspot center) becomes larger inside the sunspot than outside. For a source distance of 12 Mm, the wave amplitude remains smaller inside the sunspot than outside at all times. Using a filtering technique we separated magnetoacoustic and magnetogravity waves. Simulations show that the sunspot changes the shape of the wave front and the amplitude of the f-modes significantly more strongly than the p-modes. It is shown that inside the sunspot magnetoacoustic and magnetogravity waves are not spatially separated, unlike the case of the horizontally uniform background model. A strong Alfven wave is generated at the wave source location in the "deep" model. This wave exists in the "shallow" model as well, but with
NASA Astrophysics Data System (ADS)
Terasaki, Hidenori; Miyahara, Yu; Ohata, Mitsuru; Moriguchi, Koji; Tomio, Yusaku; Hayashi, Kotaro
2015-12-01
Cleavage-crack propagation behavior was investigated in the simulated coarse-grained heat-affected zone (CGHAZ) of bainitic steel using electron backscattering diffraction (EBSD) pattern analysis for a simulated low-heat-input weld. From the viewpoint of crystallographic analysis, this corresponds to the condition in which the Bain zone is smaller than the close-packed plane (CP) group. It was clarified that the Bain zone and CP group boundaries provided crack-propagation resistance. The results revealed that when the Bain zone was smaller than the CP group, the crack length was about one quarter of that measured when the CP group was smaller than the Bain zone, because of the increased number of Bain-zone boundaries. Furthermore, it was clarified that the plastic work associated with crack opening and the resistance at the Bain and CP boundaries could be visualized with kernel average misorientation maps.
NASA Technical Reports Server (NTRS)
Matsuda, Y.; Crawford, F. W.
1975-01-01
A hybrid plasma simulation model is described and applied to the study of electrostatic wave propagation in a one-dimensional Maxwellian plasma with periodic boundary conditions. The model employs a cloud-in-cell scheme which can drastically reduce the fluctuations in particle simulation models and greatly ease the computational difficulties of the Vlasov equation approach. A grid in velocity space is introduced and the particles are represented by points in the x-v phase space. The model is tested first in the absence of an applied signal and then in the presence of a small-amplitude perturbation. The method is also used to study propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories.
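The cloud-in-cell scheme reduces particle noise by sharing each particle's charge linearly between neighbouring grid points rather than assigning it to a single cell. A minimal one-dimensional deposition step (a generic textbook sketch; the hybrid model in the paper additionally grids velocity space):

```python
import numpy as np

def cic_deposit(positions, grid_size, dx, charge=1.0):
    """1-D cloud-in-cell charge deposition with periodic boundaries.

    Each particle at position x contributes charge to its two nearest
    grid points with linear weights, which is what smooths the density
    fluctuations relative to nearest-grid-point assignment.
    Returns the charge density rho on the grid.
    """
    rho = np.zeros(grid_size)
    for x in positions:
        s = x / dx
        i = int(np.floor(s))
        w = s - i                         # fractional distance to point i
        rho[i % grid_size] += charge * (1.0 - w)
        rho[(i + 1) % grid_size] += charge * w
    return rho / dx
```

The same linear weights are reused when interpolating the grid electric field back to the particle positions, which keeps the scheme momentum-conserving.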
NASA Astrophysics Data System (ADS)
Wu, Kang-Hung; Su, Ching-Lun; Chu, Yen-Hsyang
2015-03-01
In this article, we use the International Reference Ionosphere (IRI) model to simulate temporal and spatial distributions of global E region electron densities retrieved by the FORMOSAT-3/COSMIC satellites by means of the GPS radio occultation (RO) technique. Despite regional discrepancies in the magnitudes of the E region electron density, the IRI model simulations can, on the whole, describe the COSMIC measurements in quality and quantity. On the basis of the global ionosonde network and the IRI model, the retrieval errors of the global COSMIC-measured E region peak electron density (NmE) from July 2006 to July 2011 are examined and simulated. The COSMIC measurements and the IRI model simulations both reveal that the magnitudes of the percentage error (PE) and root-mean-square error (RMSE) of the relative RO retrieval errors of the NmE values depend on local time (LT) and geomagnetic latitude, with minima in the early morning and at high latitudes and maxima in the afternoon and at middle latitudes. In addition, the seasonal variation of the PE and RMSE values seems to be latitude dependent. After removing the IRI model-simulated GPS RO retrieval errors from the original COSMIC measurements, the average values of the annual and monthly mean percentage errors of the RO retrieval errors of the COSMIC-measured E region electron density are substantially reduced, by factors of about 2.95 and 3.35, respectively, and the corresponding root-mean-square errors show averaged decreases of 15.6% and 15.4%, respectively. It is found that, with this process, the largest reduction in the PE and RMSE of the COSMIC-measured NmE occurs at the equatorial anomaly latitudes 10°N-30°N in the afternoon from 14 to 18 LT, with factors of 25 and 2, respectively. Statistics show that the residual errors remaining in the corrected COSMIC-measured NmE vary in a range of -20% to 38%, which is comparable to or larger than the percentage errors of the IRI-predicted NmE fluctuating in a
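The PE and RMSE statistics quoted above are standard error measures; a sketch of how they might be computed from paired RO and ionosonde NmE values (illustrative only, not the authors' processing chain):

```python
import numpy as np

def retrieval_errors(retrieved, reference):
    """Mean percentage error and RMSE of retrieved vs. reference values.

    retrieved : e.g. COSMIC RO NmE values
    reference : matched ionosonde (or model) NmE values, nonzero
    Returns (PE in percent, RMSE in the data's units).
    """
    residual = retrieved - reference
    pe = 100.0 * np.mean(residual / reference)   # signed, so biases cancel
    rmse = np.sqrt(np.mean(residual ** 2))       # unsigned spread
    return pe, rmse
```

PE is signed and so captures systematic bias, while RMSE also reflects random scatter; that is why the paper's correction can shrink the two statistics by different factors.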
Roon, David A.; Waits, L.P.; Kendall, K.C.
2005-01-01
Non-invasive genetic sampling (NGS) is becoming a popular tool for population estimation. However, multiple NGS studies have demonstrated that polymerase chain reaction (PCR) genotyping errors can bias demographic estimates. These errors can be detected by comprehensive data filters such as the multiple-tubes approach, but this approach is expensive and time consuming, as it requires three to eight PCR replicates per locus. Thus, researchers have attempted to correct PCR errors in NGS datasets using non-comprehensive error checking methods, but these approaches have not been evaluated for reliability. We simulated NGS studies with and without PCR error, 'filtered' the datasets using non-comprehensive approaches derived from published studies, and calculated mark-recapture estimates using CAPTURE. In the absence of data filtering, simulated error resulted in serious inflation of CAPTURE estimates; some estimates exceeded N by roughly 200%. When data filters were used, the reliability of CAPTURE estimates varied with the per-locus error rate. At a per-locus error rate of 0.01, CAPTURE estimates from filtered data displayed < 5% deviance from error-free estimates. When the per-locus error rate was 0.05 or 0.09, some CAPTURE estimates from filtered data displayed biases in excess of 10%. Biases were positive at high sampling intensities; negative biases were observed at low sampling intensities. We caution researchers against using non-comprehensive data filters in NGS studies, unless they can achieve baseline per-locus error rates below 0.05 and, ideally, near 0.01. However, we suggest that data filters can be combined with careful technique and thoughtful NGS study design to yield accurate demographic information. © 2005 The Zoological Society of London.
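The inflation mechanism is easy to reproduce: a PCR error that yields a novel "ghost" genotype hides a true recapture, and mark-recapture estimators respond by overestimating N. A toy two-session Lincoln-Petersen sketch (the population size, sampling scheme, and error model here are illustrative assumptions, not the study's CAPTURE-based design):

```python
import random

def lincoln_petersen(n1, n2, m2):
    """Two-session abundance estimate: N-hat = n1 * n2 / m2."""
    return n1 * n2 / m2

def simulate_ngs(n_true=100, s1=60, s2=60, err=0.0, seed=1):
    """Toy two-session mark-recapture with PCR 'ghost' genotypes.

    With probability err, a sampled genotype is misread as a brand-new
    individual, so a true recapture goes unrecognized and N-hat inflates.
    """
    rng = random.Random(seed)
    pop = list(range(n_true))
    ghost = n_true                     # next fake genotype id
    def sample_session(k):
        nonlocal ghost
        ids = []
        for ind in rng.sample(pop, k):     # capture k distinct individuals
            if rng.random() < err:
                ids.append(ghost)          # genotyping error -> ghost id
                ghost += 1
            else:
                ids.append(ind)
        return set(ids)
    c1, c2 = sample_session(s1), sample_session(s2)
    m2 = len(c1 & c2)                  # apparent recaptures
    return lincoln_petersen(len(c1), len(c2), m2)
```

Because ghosts can never match between sessions, every error in either session removes a potential recapture from m2, which sits in the denominator; this is the positive bias the abstract reports at high sampling intensities.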
Liu, Hao; Zhang, Yin; Kang, Wei; Zhang, Ping; Duan, Huiling; He, X T
2017-02-01
We present a molecular dynamics simulation of shock waves propagating in dense deuterium with the electron force field method [J. T. Su and W. A. Goddard, Phys. Rev. Lett. 99, 185003 (2007), 10.1103/PhysRevLett.99.185003], which explicitly takes the excitation of electrons into consideration. Nonequilibrium features associated with the excitation of electrons are systematically investigated. We show that chemical bonds in D2 molecules lead to a more complicated shock wave structure near the shock front, compared with the results of classical molecular dynamics simulation. Charge separation can bring about accumulation of net charges on large scales, instead of the formation of a localized dipole layer, which might cause extra energy for the shock wave to propagate. In addition, the simulations also display that molecular dissociation at the shock front is the major factor that accounts for the "bump" structure in the principal Hugoniot. These results could help to build a more realistic picture of shock wave propagation in fuel materials commonly used in the inertial confinement fusion.
NASA Astrophysics Data System (ADS)
Fadde, J.; Venditti, J. G.; Sklar, L. S.; Wydzga, A.; Nelson, P. A.; Dietrich, W. E.
2005-12-01
Gravel augmentation is an increasingly common river restoration strategy for armored channels downstream of dams; however, few analytical tools are available to assist river managers in selecting the appropriate sediment volumes, grain sizes, and frequency of additions to achieve desired geomorphic and ecological outcomes. Coarse sediment additions are often intended to improve habitat for spawning salmonids by altering stream bed grain size distributions and increasing the frequency of bed mobilization and the diversity of channel morphology. Here we report preliminary results of an ongoing laboratory investigation in which we simulate the gravel augmentation process and document the spatial and temporal evolution of the bed in response to pulses of elevated fine gravel supply. The experiments are conducted in a 30-m long, 0.86-m wide flume, with a calibrated sediment feed and a tipping-bucket type sediment trap that provides a continuous record of sediment flux at the downstream end of the flume. We created an initial armored bed by first achieving an active transport equilibrium slope and then shutting off the sediment feed and allowing the bed to coarsen and degrade until the transport rate became negligible. We then introduced gravel pulses of various volumes and grain sizes, and mapped the propagation of the wave of added sediment as it moved through the flume. The sediments comprising each pulse are painted distinct colors to aid in mapping and to quantify the extent of exchange with the armored bed. Mapping techniques include planform maps of zones of active transport and temporal contours of width-averaged concentrations of added gravel. We also documented changes in bed grain size distribution using manual pebble counts before and after each run and analysis of high-resolution photographs of the bed taken during the run. In addition, we collected frequent bedload samples at regular locations along the flume length to document the movement of the gravel pulse
Ward, Michael J.; Self, Wesley H.; Froehle, Craig M.
2015-01-01
Objectives To estimate how data errors in electronic health records (EHR) can affect the accuracy of common emergency department (ED) operational performance metrics. Methods Using a 3-month, 7,348-visit dataset of electronic timestamps from a suburban academic ED as a baseline, Monte Carlo simulation was used to introduce four types of data errors (substitution, missing, random, and systematic bias) at three frequency levels (2%, 4%, and 7%). Three commonly used ED operational metrics (arrival to clinician evaluation, disposition decision to exit for admitted patients, and ED length of stay for admitted patients) were calculated and the proportion of ED visits that achieved each performance goal was determined. Results Even small data errors have measurable effects on a clinical organization's ability to accurately determine whether it is meeting its operational performance goals. Systematic substitution errors, increased frequency of errors, and the use of shorter-duration metrics resulted in a lower proportion of ED visits reported as meeting the associated performance objectives. However, the presence of other error types mitigated somewhat the effect of the systematic substitution error. Longer time-duration metrics were found to be less sensitive to data errors than shorter time-duration metrics. Conclusions Infrequent and small-magnitude data errors in EHR timestamps can compromise a clinical organization's ability to determine accurately if it is meeting performance goals. By understanding the types and frequencies of data errors in an organization's EHR, organizational leaders can use data-management best practices to better measure true performance and enhance operational decision-making. PMID:26291051
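The core of such a simulation can be sketched in a few lines: inject a systematic timestamp bias into a fraction of visit durations, then recompute the share of visits meeting a performance goal (the durations, goal, and error magnitudes below are illustrative, not the study's values):

```python
import random

def met_goal_fraction(durations_min, goal_min):
    """Fraction of visits whose duration meets the performance goal."""
    return sum(d <= goal_min for d in durations_min) / len(durations_min)

def with_errors(durations_min, p_error, bias_min, rng):
    """Inject a systematic-bias timestamp error into a random fraction
    p_error of visit durations (e.g. a clerk consistently recording a
    stage end time a few minutes late) and return the corrupted series."""
    return [d + bias_min if rng.random() < p_error else d
            for d in durations_min]
```

Re-running `met_goal_fraction` over many corrupted replicates at different error frequencies reproduces the paper's observation that shorter-duration metrics, where a few minutes of bias is a large relative shift, are the most sensitive to timestamp errors.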
Garden, A L; Mills, S A; Wilson, R; Watts, P; Griffin, J M; Gannon, S; Kapoor, I
2010-11-01
In response to a successful, although difficult resuscitation in one of our paediatric wards, we developed and implemented an educational program to improve the resuscitation skills, teamwork and safety climate in our multidisciplinary acute-care paediatric service. The program is ongoing and consists of didactic presentations, high-fidelity in situ simulation and facilitated debriefing to encourage reflective learning. The underlying goal, to provide this training to all staff over a two-year period, should be achieved by late 2011. In this preliminary report we describe teamwork difficulties that are commonly found during such training. These included inconsistent leadership behaviours, inadequate delegation of areas of responsibility, failure to communicate problems during the execution of technical tasks (such as difficulty opening the resuscitation trolley) and failure to challenge inadequate or inappropriate therapy (such as poor chest expansion during bag-mask ventilation). In addition, we unexpectedly discovered seven latent errors in our clinical environment during the first nine months of course delivery. The most disturbing of these was that participants repeatedly struggled to identify and overcome the locking-mechanism and tamper-proof device on a newly introduced resuscitation trolley.
Sala, F; Sala, S
1994-08-01
The use of voltage clamp with a single electrode has been useful in estimating kinetic parameters for a number of ionic whole-cell currents. There are two main types of such a technique: discontinuous voltage clamp (dSEVC) (Brennecke and Lindemann, 1974) and continuous voltage clamp (cSEVC) (Hamill et al., 1981). We have studied, by means of computer simulations, the performance of both types of clamp in estimating activation kinetics parameters of a typical neuronal Ca2+ current. Deviations from the theoretical values are shown to be sensitive to both set-up and cell properties. Both types of clamp are shown to lose voltage control when either the access resistance or the absolute membrane conductance is increased. In contrast, changes in membrane capacitance affect the estimates obtained by the two types of clamp differently. Cell size is also shown to affect cSEVC performance but not that of dSEVC. The nature and magnitude of the errors obtained when using both types of clamp in different situations are discussed.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear predictive coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
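The simulated-annealing search used above can be sketched as a generic two-parameter minimizer. The objective below is a toy with several local optima, standing in for the paper's regression-based objective; all names and values are illustrative assumptions.

```python
import math, random

def simulated_annealing(objective, x0, step=0.05, t0=1.0, cooling=0.95,
                        iters=2000, seed=0):
    """Minimize `objective` over a parameter vector constrained to [0, 1],
    e.g. two recursion parameters.  Uphill moves are accepted with
    Boltzmann probability, which lets the search escape local optima."""
    rng = random.Random(seed)
    x = list(x0)
    fx = objective(x)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [min(1.0, max(0.0, xi + rng.uniform(-step, step))) for xi in x]
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc           # accept downhill always, uphill sometimes
        if fx < fbest:
            best, fbest = list(x), fx
        t *= cooling                   # geometric cooling schedule
    return best, fbest

# Toy objective over [0, 1]^2 with a sinusoidal ripple creating local minima.
obj = lambda p: (p[0] - 0.7) ** 2 + (p[1] - 0.3) ** 2 + 0.05 * math.sin(25 * p[0])
params, value = simulated_annealing(obj, [0.5, 0.5])
```

A purely greedy search on the same objective can stall in one of the ripple's local minima; the early high-temperature phase is what lets annealing cross them.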
Geant4 Simulations of the SuperCDMS iZIP Detector Charge Carrier Propagation and FET Readout
NASA Astrophysics Data System (ADS)
Agnese, R.; Brandt, D.; Asai, M.; Cabrera, B.; Leman, S.; McCarthy, K.; Redl, P.; Saab, T.; Wright, D.
2014-09-01
The SuperCDMS experiment aims to directly detect dark matter particles called WIMPs (weakly interacting massive particles). The detectors measure phonon and ionization energy due to nuclear and electron recoils from incident particles. The SuperCDMS Detector Monte Carlo group uses Geant4 to simulate electron-hole pairs and low-temperature phonons. We use these simulations to study energy deposition in the detectors. Phonons and electron-hole pairs are tracked in a crystal detector. Because of the band structure of the crystals, the electrons undergo oblique propagation. The charge electrodes on each side of the detector are biased at different voltages while the phonon sensors are grounded. This creates a nearly uniform electric field through the bulk of the detector, with a complex shape near the surfaces. The electric field is calculated by interpolating on a tetrahedral mesh. The resulting TES phonon readout, as well as the FET charge readout, is simulated. To calculate the FET readout, the Shockley-Ramo theorem is applied to simulate the current in the FET. The goal of this paper is to describe the theory and implementation of calculating the electric field, performing the charge carrier propagation, and simulating the FET readout of the SuperCDMS detectors.
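The Shockley-Ramo step can be illustrated with a minimal sketch for a parallel-plate geometry, where the weighting field of the sensing electrode is simply 1/d and the induced current is i = q v E_w. The thickness, drift speed, and time step below are illustrative assumptions, not SuperCDMS detector parameters.

```python
Q = 1.602e-19       # carrier charge [C]
D = 0.025           # detector thickness [m] (assumed, illustrative)
V_DRIFT = 2.0e4     # constant drift speed [m/s] (assumed, illustrative)
DT = 1e-9           # time step [s]

def fet_current_trace(q=Q, d=D, v=V_DRIFT, dt=DT):
    """Shockley-Ramo current induced on a planar sensing electrode by a
    single carrier drifting across a parallel-plate detector.  The
    electrode's weighting field is E_w = 1/d, so i(t) = q * v * E_w."""
    e_w = 1.0 / d
    n_steps = int(round(d / (v * dt)))   # steps needed to cross the detector
    return [(k * dt, q * v * e_w) for k in range(n_steps)]

trace = fet_current_trace()
# Integrating the induced current over the full drift recovers the carrier
# charge q, a standard consistency check on a Shockley-Ramo implementation.
induced_charge = sum(i * DT for _, i in trace)
```

In the real detectors the weighting field is non-uniform near the surfaces, which is why the paper interpolates fields on a tetrahedral mesh rather than using the constant 1/d of this sketch.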
NASA Astrophysics Data System (ADS)
Zhang, Yanqiu; Jiang, Shuyong; Zhu, Xiaoming; Zhao, Yanan
2017-03-01
Tensile deformation of a nanoscale bicrystal nickel film with a twist grain boundary is investigated for various twist angles via molecular dynamics simulation in order to determine the influence of the twist angle on crack propagation. The twist angle has a significant influence on crack propagation. At a tensile strain of 0.667, the bicrystal nickel films with twist angles of 0°, 3.54° and 7.05° fracture completely, whereas no complete fracture occurs for twist angles of 16.1° and 33.96°. When the twist angles are 16.1° and 33.96°, the dislocations emitted from the crack tip are almost unable to cross the grain boundary and enter the other grain along the {111} slip planes. There should therefore exist a critical twist angle above which crack propagation is suppressed at the grain boundary. The higher energy of a grain boundary with a larger twist angle facilitates the movement of glissile dislocations along the grain boundary rather than across it, which leads to propagation of the crack along the grain boundary.
NASA Astrophysics Data System (ADS)
Bergmann-Wolf, I.; Dobslaw, H.; Mayer-Gürr, T.
2015-12-01
A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency (Dobslaw et al., 2015) is now available for the years 1995–2006. The data-set contains realizations of (i) errors at large spatial scales assessed individually for periods between 10–30, 3–10, and 1–3 days, the S1 atmospheric tide, and sub-diurnal periods; (ii) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (iii) errors due to physical processes not represented in currently available de-aliasing products. The error magnitudes for each of the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. In order to demonstrate the plausibility of the error magnitudes chosen, we perform a variance component estimation based on daily GRACE normal equations from the ITSG-Grace2014 global gravity field series recently published by the University of Graz. All 12 years of the error model are used to calculate empirical error variance-covariance matrices describing the systematic dependencies of the errors both in time and in space individually for five continental and four oceanic regions, and daily GRACE normal equations are subsequently employed to obtain pre-factors for each of those matrices. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, errors prepared for the updated ESM are found to be largely consistent with noise of a similar stochastic character contained in present-day GRACE solutions. Differences and similarities identified for all of the nine regions considered will be discussed in detail during the presentation.
Dobslaw, H., I. Bergmann-Wolf, R. Dill, E. Forootan, V. Klemann, J. Kusche, and I. Sasgen (2015), The updated ESA Earth System Model for future gravity mission simulation studies, J. Geod., doi:10.1007/s00190-014-0787-8.
Glaser, E M; Wilson, P D
1998-11-01
The optical fractionator is a design-based two-stage systematic sampling method that is used to estimate the number of cells in a specified region of an organ when the population is too large to count exhaustively. The fractionator counts the cells found in optical disectors that have been systematically sampled in serial sections. Heretofore, evaluations of optical fractionator performance have been made by performing tests on actual tissue sections, but it is difficult to evaluate the coefficient of error (CE), i.e. the precision of a population size estimate, by using biological tissue samples because they do not permit a comparison of an estimated CE with the true CE. However, computer simulation does permit making such comparisons while avoiding the observational biases inherent in working with biological tissue. This study is the first instance in which computer simulation has been applied to population size estimation by the optical fractionator. We used computer simulation to evaluate the performance of three CE estimators. The estimated CEs were evaluated in tests of three types of non-random cell population distribution and one random cell population distribution. The non-random population distributions varied by differences in 'intensity', i.e. the expected cell counts per disector, according to both section and disector location within the section. Two distributions were sinusoidal and one was linearly increasing; in all three there was a six-fold difference between the high and low intensities. The sinusoidal distributions produced either a peak or a depression of cell intensity at the centre of the simulated region. The linear cell intensity gradually increased from the beginning to the end of the region that contained the cells. The random population distribution had a constant intensity over the region. A 'test condition' was defined by its population distribution, the period between consecutive sampled sections and the spacing between consecutive
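The fractionator's two-stage estimate and its true CE can be sketched with a small Monte Carlo of the kind the study performs: draw counts from a known intensity profile, sample disectors systematically from a random start, scale by the sampling period, and measure the spread of the estimates over many repetitions. This is a 1-D toy with a sinusoidal intensity and a six-fold high/low ratio; all values are illustrative assumptions.

```python
import math, random

def poisson(rng, lam):
    # Knuth's method; adequate for the small per-disector intensities here.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def fractionator_ce(intensity, period, reps=2000, seed=1):
    """Monte Carlo estimate of the true CE of a fractionator count.
    `intensity[i]` is the expected count in disector i.  Each repetition
    draws Poisson counts, samples every `period`-th disector from a random
    start, and scales the sampled sum by `period` to estimate the total."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        counts = [poisson(rng, lam) for lam in intensity]
        start = rng.randrange(period)
        estimates.append(period * sum(counts[start::period]))
    mean = sum(estimates) / reps
    var = sum((e - mean) ** 2 for e in estimates) / (reps - 1)
    return mean, math.sqrt(var) / mean   # (mean estimate, true CE)

# Sinusoidal intensity with a six-fold high/low ratio (1 to 6 cells expected).
n = 120
intensity = [3.5 + 2.5 * math.sin(2 * math.pi * i / n) for i in range(n)]
mean_est, ce = fractionator_ce(intensity, period=10)   # true total is 420
```

Because the true population size (420) is known by construction, the simulated CE can be compared against analytic CE estimators, which is exactly the comparison biological tissue does not permit.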
NASA Astrophysics Data System (ADS)
Dodd, Evan S.; Schmitt, Mark J.
2001-10-01
The manipulation of ultra-short pulses (USPs) in the laboratory is affected by three main factors; (a) the layout of optical elements in the optical train, (b) the non-linear interaction of the pulse with the transmissive optical elements (including the intervening atmosphere) and (c) ionization effects near beam focal regions. These effects have been included in our simulation code in order to examine 3-D aspects of USP propagation through "real" optical systems. Our models for optical elements include the ability to examine the effects of element misalignments and asymmetric finite apertures. In the atmosphere, we have included the effect of the USP electric field intensity on the local index of refraction. A model to include the effects of ionization in the atmosphere has also been added. The collective behavior from these sources results in complex interactions within the laser pulse as it propagates. This is important since it reduces the distance the pulse may travel and the spatial and temporal energy distribution of the pulse after propagation. Simulation examples are presented.
NASA Technical Reports Server (NTRS)
Turon, Albert; Costa, Josep; Camanho, Pedro P.; Davila, Carlos G.
2006-01-01
A damage model for the simulation of delamination propagation under high-cycle fatigue loading is proposed. The basis for the formulation is a cohesive law that links fracture and damage mechanics to establish the evolution of the damage variable in terms of the crack growth rate dA/dN. The damage state is obtained as a function of the loading conditions as well as the experimentally-determined coefficients of the Paris Law crack propagation rates for the material. It is shown that by using the constitutive fatigue damage model in a structural analysis, experimental results can be reproduced without the need of additional model-specific curve-fitting parameters.
Böcklin, Christoph; Baumann, Dirk; Fröhlich, Jürg
2014-02-14
A novel way to attain three dimensional fluence rate maps from Monte-Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity and directly derived from the radiance by integrating over all directions. Contrary to the usual way which calculates the fluence rate from absorbed photon power, the fluence rate in this work is directly calculated from the photon packet trajectory. The voxel based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
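The trajectory-based fluence estimate is, in essence, a track-length tally: each segment of a packet's path deposits its length into the voxels it crosses, and fluence is the tallied length per unit volume per packet. A minimal 1-D sketch (a purely absorbing pencil-beam toy, not the authors' voxel code; all parameters are assumed) shows that the tally stays well defined even as the absorption coefficient goes to zero, which is precisely where an absorbed-power estimator fails.

```python
import math, random

def track_length_fluence(mu_a, nz=20, dz=0.1, n_packets=100_000, seed=7):
    """Track-length fluence estimator for a pencil beam entering a purely
    absorbing 1-D slab of nz voxels (thickness dz, unit cross-section).
    Each packet's absorption depth is sampled; the path length it leaves
    in every voxel is tallied, and fluence = tallied length /
    (voxel volume * number of packets)."""
    rng = random.Random(seed)
    tally = [0.0] * nz
    depth = nz * dz
    for _ in range(n_packets):
        s = rng.expovariate(mu_a) if mu_a > 0 else float("inf")
        s = min(s, depth)                 # leaves the slab if not absorbed
        full = int(s / dz)                # voxels fully traversed
        for k in range(min(full, nz)):
            tally[k] += dz
        if full < nz:
            tally[full] += s - full * dz  # partial path in the final voxel
    volume = dz * 1.0
    return [t / (volume * n_packets) for t in tally]

phi = track_length_fluence(mu_a=1.0)      # should follow Beer-Lambert decay
```

With mu_a = 1 the profile reproduces the Beer-Lambert voxel averages; with mu_a = 0 every voxel reads exactly 1, whereas dividing absorbed power by mu_a would be undefined.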
A New Method for Very Fast Simulation of Blast Wave Propagation in Complex Built Environments
2010-01-01
as AUTODYN (ANSYS, 2008)). Unfortunately, three dimensional CFD models of blast wave propagation, even when limited to a single barrier and... The work reported in this paper was completed with the support of USAF, contract FA4819-07-D-0001. References: ANSYS (2008). “AUTODYN 2D and 3D
Yi, Grace Y; He, Wenqing
2012-05-01
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts, including linear and nonlinear regression. For survival data with measurement error in covariates, there has been extensive discussion in the literature, with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit the data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates carries a risk of model misspecification and hence of invalid inferences. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality of the resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
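The simulation-extrapolation (SIMEX) idea can be sketched in a few lines for simple linear regression: add extra noise of variance lam*sigma_u^2 for several lam, refit the naive estimator, and extrapolate the fitted trend back to lam = -1, the error-free case. The paper's setting is survival models, but the mechanics are the same; the data below are synthetic and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def naive_slope(w, y):
    return np.polyfit(w, y, 1)[0]

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), b=100):
    """SIMEX for simple linear regression with classical measurement error
    w = x + u, u ~ N(0, sigma_u^2).  Extra noise of variance lam*sigma_u^2
    is added b times per lam, the naive slope is averaged, a quadratic in
    lam is fitted, and the fit is extrapolated back to lam = -1."""
    lams, betas = [0.0], [naive_slope(w, y)]
    for lam in lambdas:
        sims = [naive_slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, w.size), y)
                for _ in range(b)]
        lams.append(lam)
        betas.append(float(np.mean(sims)))
    c2, c1, c0 = np.polyfit(lams, betas, 2)   # beta(lam) ~ c0 + c1*lam + c2*lam^2
    return c0 - c1 + c2                        # evaluate the fit at lam = -1

# Synthetic data: true slope 2.0; the error-prone covariate attenuates the
# naive estimate, and SIMEX recovers most of the induced bias.
n, sigma_u = 2000, 0.8
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.3, n)
w = x + rng.normal(0.0, sigma_u, n)
naive, corrected = naive_slope(w, y), simex_slope(w, y, sigma_u)
```

The quadratic extrapolant typically under-corrects somewhat (the exact attenuation curve is rational in lam), which is one reason rational-linear extrapolants are also used in practice.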
Li, Han; Lin, Kexin; Shahmirzadi, Danial
2016-01-01
This study aims to quantify the effects of the geometry and stiffness of aneurysms on the pulse wave velocity (PWV) and propagation in fluid–solid interaction (FSI) simulations of arterial pulsatile flow. Spatiotemporal maps of both the wall displacement and the fluid velocity were generated in order to obtain the pulse wave propagation through the fluid and solid media, and to examine the interactions between the two waves. The results indicate that the presence of an abdominal aortic aneurysm (AAA) sac and variations in the sac modulus affect the propagation of the pulse waves both qualitatively (e.g., patterns of change of forward and reflected waves) and quantitatively (e.g., a decrease in PWV within the sac and its increase beyond the sac as the sac stiffness increases). The sac region is particularly identified on the spatiotemporal maps as a region of disruption in the wave propagation with multiple short-traveling forward/reflected waves, which is caused by the change in boundary conditions within the saccular region. The change in sac stiffness, however, is more pronounced on the wall displacement spatiotemporal maps than on those of fluid velocity. We conclude that the existence of the sac can be identified from the solid and fluid pulse waves, and that the sac properties can also be estimated. This study presents initial findings from numerical simulations of FSI dynamics during arterial pulsations that can be used as a reference for experimental and in vivo studies. Future studies are needed to demonstrate the feasibility of the method in identifying very mild sacs, which cannot be detected by medical imaging, where material property degradation exists during early disease initiation. PMID:27478394
NASA Astrophysics Data System (ADS)
Yu, R.; Lipatnikov, A. N.; Bai, X. S.
2014-08-01
In order to gain further insight into (i) the use of conditioned quantities for characterizing turbulence within a premixed flame brush and (ii) the influence of front propagation on turbulent scalar transport, a 3D Direct Numerical Simulation (DNS) study of an infinitely thin front that self-propagates in statistically stationary, homogeneous, isotropic, forced turbulence was performed by numerically integrating Navier-Stokes and level set equations. While this study was motivated by issues relevant to premixed combustion, the density was assumed to be constant in order (i) to avoid the influence of the front on the flow and, therefore, to know the true turbulence characteristics as reference quantities for assessment of conditioned moments and (ii) to separate the influence of front propagation on turbulent transport from the influence of pressure gradient induced by heat release. Numerical simulations were performed for two turbulence Reynolds numbers (50 and 100) and four ratios (1, 2, 5, and 10) of the rms turbulent velocity to the front speed. Obtained results show that, first, the mean front thickness is decreased when a ratio of the rms turbulent velocity to the front speed is decreased. Second, although the gradient diffusion closure yields the right direction of turbulent scalar flux obtained in the DNS, the diffusion coefficient Dt determined using the DNS data depends on the mean progress variable. Moreover, Dt is decreased when the front speed is increased, thus, indicating that the front propagation affects turbulent scalar transport even in a constant-density case. Third, conditioned moments of the velocity field differ from counterpart mean moments, thus, disputing the use of conditioned velocity moments for characterizing turbulence when modeling premixed turbulent combustion. Fourth, computed conditioned enstrophies are close to the mean enstrophy in all studied cases, thus, suggesting the use of conditioned enstrophy for characterizing turbulence
NASA Astrophysics Data System (ADS)
Watkins, Wendell R.; Zegel, Ferdinand H.; Triplett, Milton J.
1990-09-01
Various papers on the characterization, propagation, and simulation of IR scenes are presented. Individual topics addressed include: total radiant exitance measurements; absolute measurement of diffuse and specular reflectance using an FTIR spectrometer with an integrating sphere; fundamental limits in temperature estimation; incorporating the BRDF into an IR scene-generation system; characterizing IR dynamic response for foliage backgrounds; modeling sea surface effects in FLIR performance codes; an automated imaging IR seeker performance evaluation system; generation of signature data bases with fast codes; and background measurements using the NPS-IRST system. Also discussed are: naval ocean IR background analysis; camouflage simulation and effectiveness assessment for the individual soldier; IR scene generators; a multiwavelength Scophony IR scene projector; an LBIR target generator and calibrator for preflight seeker tests; a dual-mode hardware-in-the-loop simulation facility; and the development of an IR blackbody source based on a gravity-type heat pipe and a study of its characteristics.
Atayee, Rabia S; Awdishu, Linda; Namba, Jennifer
2016-01-01
Objective. To evaluate first-year pharmacy students’ ability to identify medication errors involving the top 100 prescription medications. Design. In the first quarter of a 3-quarter pharmacy self-care course, a didactic lecture on the most common prescribing and dispensing prescription errors was presented to first-year pharmacy students (P1) in preparation for a prescription review simulation done individually and as a group. In the following quarter, they were given a formal prescription review workshop before a second simulation involving individual and group review of a different set of prescriptions. Students were evaluated based on the number of correctly checked prescriptions and a self-assessment of their confidence in reviewing prescriptions. Assessment. All 63 P1 students completed the prescription review simulations. The individual scores did not significantly change, but group scores improved from 79 (16.2%) in the fall quarter to 98.6 (4.7%) in the winter quarter. Students perceived improvement of their prescription checking skills, specifically in their ability to fill a prescription on their own, identify prescribing and dispensing errors, and perform pharmaceutical calculations. Conclusion. A prescription review module consisting of a didactic lecture, workshop and simulation-based methods to teach prescription analysis was successful at improving first year pharmacy students’ knowledge, confidence, and application of these skills. PMID:27402989
NASA Astrophysics Data System (ADS)
Akther, Asma; Kafy, Abdullahil; Zhai, Lindong; Kim, Hyun Chan; Shishir, MD Imrul Reza; Kim, Jaehwan
2016-11-01
This study deals with ultrasonic wave propagation on a piezoelectric polymer substrate for a tactile actuator. On the piezoelectric polymer substrate, a pair of interdigital transducer (IDT) electrodes is patterned by a lift-off process, and a resonator is made by exciting the IDTs. A standing wave is generated between the pair of IDT electrodes, whose wavelength matches the distance between the two IDTs. The standing ultrasonic waves can present different textures to the user. Wave propagation in this periodic structure on the polymer substrate is studied by harmonic and transient analysis. The vertical displacement and the voltage induced at the output IDT electrode are calculated, and the ultrasonic wave generation is experimentally verified. The proposed concept of a tactile actuator based on ultrasonic waves is explained.
Sirazetdinov, Vladimir S
2008-03-01
A detailed experimental study of the spatial characteristics of laser beams propagating through a turbulent aerojet has been performed. The results obtained for radiation wavelengths of 0.53, 1.06, and 10.6 µm were used for the development of a numerical mathematical model of beam propagation through an extremely turbulent medium. The combination of parameters and algorithms for the numerical model was determined, which made it possible to obtain computational laser beam spatial characteristics that agreed quite well with the experimental data. The good agreement between the results points to the possibility, in principle, of regarding the central jet area as a medium locally homogeneous in the statistical sense and anisotropic on the turbulent outer scales.
2005-12-01
Applications: Underwater sound propagation has been used either for military applications like sonar, mine fields, and voice communication, or for civilian uses such... as hydrographic surveys, oceanographic studies, and marine life research. Wireless communications to this date are a common part of our daily life... and the term wireless is usually associated with over-the-air communications and not related to underwater communications. Underwater networks may
NASA Astrophysics Data System (ADS)
Ou, X.; Sietsma, J.; Santofimia, M. J.
2016-06-01
Molecular dynamics simulations have been used to study the effects of different orientation relationships between fcc and bcc phases on the bcc/fcc interfacial propagation in pure iron systems at 300 K. Three semi-coherent bcc/fcc interfaces have been investigated. In all the cases, results show that growth of the bcc phase starts in the areas of low potential energy and progresses into the areas of high potential energy at the original bcc/fcc interfaces. The phase transformation in areas of low potential energy is of a martensitic nature while that in the high potential energy areas involves occasional diffusional jumps of atoms.
Computer Simulation Of Shock-Wave Propagation In Anisotropic Tectonic Structures
NASA Astrophysics Data System (ADS)
Gouliaev, V. I.; Lugovoy, P. Z.
1997-07-01
The problem of short shock waves propagating in anisotropic elastic layered media is investigated on the basis of the ray method. To determine the geometric parameters of the shock wave front, an analog of the eikonal equation is deduced, whereby first-order partial differential equations are obtained. At each step of the numerical process we calculate, for each numerical zone, the orientation of the wave front, the types of quasi-longitudinal and quasi-shear waves, and the directions and values of their propagation velocities. Thereafter, the transition to the next step in the evolution of the shock wave front is carried out. Calculation of the stress intensity on the wave front and of the value of the impulse carried by the wave is performed on the basis of conditions of energy and momentum conservation in a specified region. The outlined approach is used to study the reflection and penetration of shock and seismic waves through anisotropic media interfaces, and to investigate their propagation in natural tectonic wave guides.
Meldi, M.; Sagaut, P.; Salvetti, M. V.
2012-03-15
A stochastic approach based on generalized polynomial chaos (gPC) is used to quantify the error in large-eddy simulation (LES) of a spatially evolving mixing layer flow and its sensitivity to different simulation parameters, viz., the grid stretching in the streamwise and lateral directions and the subgrid-scale (SGS) Smagorinsky model constant (C_S). The error is evaluated with respect to the results of a highly resolved LES and for different quantities of interest, namely, the mean streamwise velocity, the momentum thickness, and the shear stress. A typical feature of the considered spatially evolving flow is the progressive transition from a laminar regime, highly dependent on the inlet conditions, to a fully developed turbulent one. Therefore, the computational domain is divided in two different zones (inlet dependent and fully turbulent) and the gPC error analysis is carried out for these two zones separately. An optimization of the parameters is also carried out for both these zones. For all the considered quantities, the results point out that the error is mainly governed by the value of the C_S constant. At the end of the inlet-dependent zone, a strong coupling between the normal stretching ratio and the C_S value is observed. The error sensitivity to the parameter values is significantly larger in the inlet-dependent upstream region; however, low-error values can be obtained in this region for all the considered physical quantities by an ad hoc tuning of the parameters. Conversely, in the turbulent regime the error is globally lower and less sensitive to the parameter variations, but it is more difficult to find a set of parameter values leading to optimal results for all the analyzed physical quantities. A similar analysis is also carried out for the dynamic Smagorinsky model, by varying the grid stretching ratios. Comparing the databases generated with the different subgrid-scale models, it is possible to observe that the error cost
NASA Astrophysics Data System (ADS)
Abdessalem, K. B.; Sahtout, W.; Flaud, P.; Gazah, H.; Fakhfakh, Z.
2007-11-01
The literature shows a lack of work on non-invasive methods for computing the propagation coefficient γ, a complex number related to dynamic vascular properties. Its imaginary part is inversely related to the wave speed C through the relationship C=ω/Im(γ), while its real part a, called the attenuation, represents the loss of pulse energy per unit length. In this work an expression is derived giving the propagation coefficient for a pulsatile flow through a viscoelastic vessel. The effects of the physical and geometrical parameters of the tube are then studied. In particular, the effects of increasing the reflection coefficient on the determination of the propagation coefficient are investigated in a first step. In a second step, we simulate a variation of tube length under physiological conditions. The method developed here is based on the knowledge of instantaneous velocity and radius values at only two sites. It takes into account the presence of a reflection site of unknown reflection coefficient, localised at the distal end of the vessel. The values of wave speed and attenuation obtained with this method are in good agreement with the theory. This method has the advantage of being usable for small portions of the arterial tree.
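The two-site principle can be sketched as follows: with waveforms recorded a known distance apart, the transfer function at a given frequency yields γ, from which C = ω/Im(γ) and a = Re(γ) follow. This toy assumes a purely forward-travelling wave, whereas the paper additionally accounts for an unknown distal reflection; all parameter values are illustrative assumptions.

```python
import numpy as np

def propagation_coefficient(p1, p2, dz, fs, f0):
    """Two-site estimate of gamma at frequency f0: take the transfer
    function between waveforms recorded a distance dz apart and use
    p(z) ~ exp(-gamma z), so gamma = -ln(P2/P1) / dz."""
    n = len(p1)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))     # FFT bin closest to f0
    h = np.fft.rfft(p2)[k] / np.fft.rfft(p1)[k]
    return -np.log(h) / dz                      # gamma = a + i*omega/C

# Synthetic waveforms with known wave speed and attenuation (assumed values).
fs, f0, dz = 1200.0, 1.5, 0.05      # sample rate [Hz], frequency [Hz], site spacing [m]
c_true, a_true = 8.0, 0.5           # wave speed [m/s], attenuation [Np/m]
omega = 2 * np.pi * f0
t = np.arange(3200) / fs            # 4 full periods, so f0 sits on an FFT bin
p1 = np.cos(omega * t)
p2 = np.exp(-a_true * dz) * np.cos(omega * (t - dz / c_true))

gamma = propagation_coefficient(p1, p2, dz, fs, f0)
wave_speed = omega / gamma.imag     # C = omega / Im(gamma)
attenuation = gamma.real            # a = Re(gamma), Np per unit length
```

The record length is chosen as an integer number of periods so that f0 falls exactly on an FFT bin; with arbitrary record lengths, windowing would be needed to control spectral leakage.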
ERIC Educational Resources Information Center
Budd, Mary-Jane; Hanley, J. Richard; Griffiths, Yvonne
2011-01-01
This study investigated whether Foygel and Dell's (2000) interactive two-step model of speech production could simulate the number and type of errors made in picture-naming by 68 children of elementary-school age. Results showed that the model provided a satisfactory simulation of the mean error profile of children aged five, six, seven, eight and…
Simulation of EMIC growth and propagation within the plasmaspheric plume density irregularities
NASA Astrophysics Data System (ADS)
de Soria-Santacruz Pich, M.; Spasojevic, M.
2012-12-01
In situ data from the Magnetospheric Plasma Analyzer (MPA) instruments onboard the LANL spacecraft are used to study the growth and propagation of electromagnetic ion cyclotron (EMIC) waves in the presence of cold plasma irregularities in the plasmaspheric plume. The data correspond to the 9 June 2001 event, a period of moderate geomagnetic activity with a highly irregular density structure within the plume as measured by the MPA instrument at geosynchronous orbit. Theory and observations suggest that EMIC waves are responsible for energetic proton precipitation, which is stronger during geomagnetically disturbed intervals. These waves propagate below the proton gyrofrequency, and they appear in three frequency bands due to the presence of heavy ions, which strongly modify the wave propagation characteristics. These waves are generated by the ion cyclotron instability of ring current ions, whose temperature anisotropy provides the free energy required for wave growth. Growth maximizes for field-aligned propagation near the equatorial plane, where the magnetic field gradient is small. Although the wave's group velocity typically stays aligned with the geomagnetic field direction, wave-normal vectors tend to become oblique due to the curvature and gradient of the field. On the other hand, radial density gradients have the capability of guiding the waves and competing against the magnetic field effect, thus favoring wave growth conditions. In addition, enhanced cold plasma density reduces the proton resonant energy to where higher fluxes are available for resonance, hence explaining why wave growth is favored at higher L-shell regions where the ratio of plasma to cyclotron frequency is larger. The Stanford VLF 3D Raytracer is used together with path-integrated linear growth calculations to study the amplification and propagation characteristics of EMIC waves within the plasmaspheric plume formed during the 9 June 2001 event. Cold multi-ion plasma is assumed for raytracing
Luquet, David; Marchiano, Régis; Coulouvrat, François
2015-10-28
Many situations involve the propagation of acoustical shock waves through flows. Natural sources such as lightning, volcano explosions, or meteoroid atmospheric entries, emit loud, low frequency, and impulsive sound that is influenced by atmospheric wind and turbulence. The sonic boom produced by a supersonic aircraft and explosion noises are examples of intense anthropogenic sources in the atmosphere. The Buzz-Saw-Noise produced by turbo-engine fan blades rotating at supersonic speed also propagates in a fast flow within the engine nacelle. Simulating these situations is challenging, given the 3D nature of the problem, the long range propagation distances relative to the central wavelength, the strongly nonlinear behavior of shocks associated with a wide-band spectrum, and finally the key role of the flow motion. With this in view, the so-called FLHOWARD (acronym for FLow and Heterogeneous One-Way Approximation for Resolution of Diffraction) method is presented with three-dimensional applications. A scalar nonlinear wave equation is established in the framework of atmospheric applications, assuming weak heterogeneities and a slow wind. It takes into account diffraction, absorption and relaxation properties of the atmosphere, quadratic nonlinearities including weak shock waves, heterogeneities of the medium in sound speed and density, and presence of a flow (assuming a mean stratified wind and 3D turbulent flow fluctuations of smaller amplitude). This equation is solved in the framework of the one-way method. A split-step technique allows the splitting of the nonlinear wave equation into simpler equations, each corresponding to a physical effect. Each sub-equation is solved using an analytical method if possible, and finite-differences otherwise. Nonlinear effects are solved in the time domain, and others in the frequency domain. Homogeneous diffraction is handled by means of the angular spectrum method. Ground is assumed perfectly flat and rigid. Due to the 3D
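The homogeneous-diffraction substep named above (the angular spectrum method) amounts to an FFT, multiplication by a propagation phase, and an inverse FFT. A generic one-step sketch, not the FLHOWARD code itself (function name and grid values are illustrative):

```python
import numpy as np

def angular_spectrum_step(field, dx, dz, k0):
    """Propagate a 2D complex field by a distance dz: transform to the
    spatial-frequency domain, apply exp(i * kz * dz), transform back.
    Evanescent components (kx^2 + ky^2 > k0^2) decay exponentially."""
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k0**2 - KX**2 - KY**2
    # real kz for propagating waves, imaginary kz for evanescent ones
    kz = np.sqrt(np.abs(kz2)) * np.where(kz2 >= 0, 1.0, 1j)
    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * np.exp(1j * kz * dz))

# A uniform field is a normally incident plane wave: it only picks up
# the phase exp(i * k0 * dz)
plane = np.ones((32, 32))
out = angular_spectrum_step(plane, dx=0.01, dz=0.5, k0=100.0)
```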
García-Grajales, Julián A.; Rucabado, Gabriel; García-Dopico, Antonio; Peña, José-María; Jérusalem, Antoine
2015-01-01
With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally only characterized by purely mechanistic criteria, functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has been rarely explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of the simulated cells grows. The solvers implemented in Neurite (explicit and implicit) were therefore parallelized using graphics processing units in order to reduce the burden of the simulation costs of large-scale scenarios. Cable Theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite mechanical behavior within its surrounding medium was adopted as a link between electrophysiology and mechanics. This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented
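The passive (cable-theory) part of such a solver reduces to an explicit finite-difference update of the cable equation. A minimal single-threaded sketch, not the parallelized Neurite implementation (all parameter values are illustrative):

```python
import numpy as np

def passive_cable_step(v, dt, dx, lam, tau, v_rest=0.0):
    """One explicit finite-difference step of the passive cable equation
    tau * dV/dt = lam**2 * d2V/dx2 - (V - v_rest), with sealed ends."""
    d2v = np.empty_like(v)
    d2v[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    d2v[0] = 2.0 * (v[1] - v[0]) / dx**2      # no-flux boundary
    d2v[-1] = 2.0 * (v[-2] - v[-1]) / dx**2
    return v + (dt / tau) * (lam**2 * d2v - (v - v_rest))

# A point depolarization spreads and decays along a 101-node neurite
v = np.zeros(101)
v[50] = 10.0
for _ in range(100):
    v = passive_cable_step(v, dt=0.01, dx=1.0, lam=2.0, tau=1.0)
```

The active (Hodgkin-Huxley) regions would add voltage-dependent ionic currents to the right-hand side; the explicit step above is the part whose per-node independence makes GPU parallelization attractive.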
NASA Astrophysics Data System (ADS)
Benedetti, A.; Stephens, G. L.
Data available from the Atmospheric Radiation Measurement-Unmanned Aerospace Vehicle (ARM-UAV) Spring 1999 experiment are used in this study to estimate errors in cirrus simulations from a 3D Cloud Resolving Model (CRM). The performance of the model, which inherits from the CSU Regional Atmospheric Modeling System (RAMS), is assessed by direct comparison of modeled and observed fields. Results show that the CRM succeeds in placing the cloud at approximately the correct altitude, but consistently overestimates the Ice Water Content (IWC). A statistical approach is introduced and applied to quantify average model bias under the assumption of bias-free observations. An error covariance matrix associated with simulated fields is also computed, and used to identify model strengths and deficiencies. Model fields are then used in the context of an optimum estimation retrieval of IWC from a combination of radar and radiometric observations. The retrieval relies on an a priori profile and its error covariance to ensure algorithm convergence and stability. The RAMS average Ice Water Content, corrected for the bias, and the related error covariance matrix derived in this study are used to provide this a priori information to the retrieval.
Adjoint-field errors in high fidelity compressible turbulence simulations for sound control
NASA Astrophysics Data System (ADS)
Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan
2013-11-01
A consistent discrete adjoint for high-fidelity discretization of the three-dimensional Navier-Stokes equations is used to quantify the error in the sensitivity gradient predicted by the continuous adjoint method, and examine the aeroacoustic flow-control problem for free-shear-flow turbulence. A particular quadrature scheme for approximating the cost functional makes our discrete adjoint formulation for a fourth-order Runge-Kutta scheme with high-order finite differences practical and efficient. The continuous adjoint-based sensitivity gradient is shown to be inconsistent due to discretization truncation errors, grid stretching and filtering near boundaries. These errors cannot be eliminated by increasing the spatial or temporal resolution since chaotic interactions lead them to become O(1) at the time of control actuation. Although this is a known behavior for chaotic systems, its effect on noise control is much harder to anticipate, especially given the different resolution needs of different parts of the turbulence and acoustic spectra. A comparison of energy spectra of the adjoint pressure fields shows significant error in the continuous adjoint at all wavenumbers, even though they are well-resolved. The effect of this error on the noise control mechanism is analyzed.
Richter, Martin; Fingerhut, Benjamin P
2016-07-12
We present an algorithm for the simulation of nonlinear 2D spectra of molecular systems in the UV-vis spectral region from atomistic molecular dynamics trajectories subject to nonadiabatic relaxation. We combine the nonlinear exciton propagation (NEP) protocol, which relies on a quasiparticle approach, with the surface hopping methodology to account for quantum-classical feedback during the dynamics. Phenomena such as dynamic Stokes shift due to nuclear relaxation, spectral diffusion, and population transfer among electronic states are thus naturally included and benchmarked on a model of two electronic states coupled to a harmonic coordinate and a classical heatbath. The capabilities of the algorithm are further demonstrated for the bichromophore diphenylmethane, which is described in a fully microscopic fashion including all 69 classical nuclear degrees of freedom. We demonstrate that simulated 2D signals are especially sensitive to the applied theoretical approximations (i.e., choice of active space in the CASSCF method) even where the population dynamics appear comparable.
Simulation of Gas Detonation Propagation in a Medium Having Variable Chemical Composition
NASA Astrophysics Data System (ADS)
Prokhorov, E. S.
2017-01-01
Within the framework of a quasi-one-dimensional approximation, a mathematical model of the propagation of a detonation wave in a tube filled with an explosive gas mixture of spatially variable chemical composition has been formulated, and the respective problem has been solved numerically. The shift in the chemical equilibrium of detonation products as well as the friction and heat removal losses were taken into account. The proposed mathematical model allows one to describe steady-state (Chapman-Jouguet) and over-compressed detonation regimes.
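For orientation, the Chapman-Jouguet speed mentioned above can be estimated from the textbook strong-detonation limit D_CJ ≈ sqrt(2(γ² − 1)q), which neglects the initial sound speed. The γ and q values below are illustrative, not taken from the paper:

```python
import math

def cj_velocity(gamma, q):
    """Chapman-Jouguet detonation speed (m/s) in the strong-detonation
    limit: D_CJ ~ sqrt(2 * (gamma**2 - 1) * q), with q the specific
    heat release (J/kg) and gamma the products' ratio of specific heats."""
    return math.sqrt(2.0 * (gamma**2 - 1.0) * q)

# gamma ~ 1.2 for hot detonation products, q ~ 5 MJ/kg gives D ~ 2.1 km/s,
# the right order for fuel-oxygen gas detonations
d = cj_velocity(1.2, 5.0e6)
```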
Analysis of the orbit errors in the CERN accelerators using model simulation
Lee, M.; Kleban, S.; Clearwater, S.; Scandale, W.; Pettersson, T.; Kugler, H.; Riche, A.; Chanel, M.; Martensson, E.; Lin, In-Ho
1987-09-01
This paper will describe the use of the PLUS program to find various types of machine and beam errors, such as quadrupole strength, dipole strength, beam position monitor (BPM), energy profile, and beam launch errors. We refer to this procedure as the GOLD (Generic Orbit and Lattice Debugger) Method, which is a general technique that can be applied to the analysis of errors in storage rings and transport lines. One useful feature of the Method is that it analyzes segments of a machine at a time, so that its application and efficiency are independent of the size of the overall machine. Because the techniques are the same for all the types of problems it solves, the user need learn only how to find one type of error in order to use the program.
Error analysis of ellipsoidal mirrors for soft X-ray focusing by wave-optical simulation
NASA Astrophysics Data System (ADS)
Motoyama, Hiroto; Saito, Takahiro; Mimura, Hidekazu
2014-02-01
The ellipsoidal mirror is an ideal soft X-ray focusing optic that enables achromatic and highly efficient focusing to a nanometer spot size; however, a high-quality surface is necessary for ideal focusing. Knowledge of the required figure accuracy is important for fabrication. In this paper, we analyze the effects of figure errors on the focusing performance through wave-optical calculations based on the Fresnel-Kirchhoff diffraction theory, assuming coherent soft X-rays. Figure errors are classified into three types from the viewpoint of manufacturing. The effect of the alignment error is also investigated. The analytical results quantitatively indicate criteria regarding figure accuracy, which are expected to be essential for the development of high-performance ellipsoidal soft X-ray focusing mirrors.
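The effect of a figure error on coherent focusing can be sketched with a 1D scalar Fresnel-Kirchhoff sum. This is a generic illustration with made-up mirror parameters, not the paper's model; the factor of 2 on the error reflects the doubled path on reflection (an approximation that ignores grazing incidence):

```python
import numpy as np

def focal_profile(aperture_x, figure_err, wavelength, f, screen_x):
    """1D scalar Fresnel-Kirchhoff sum: an ideal focuser makes all
    aperture contributions arrive in phase at the focus (reference path
    sqrt(f**2 + x**2)); a figure error adds ~2*err of extra path."""
    k = 2.0 * np.pi / wavelength
    field = np.zeros_like(screen_x, dtype=complex)
    for x, err in zip(aperture_x, figure_err):
        r = np.sqrt(f**2 + (screen_x - x) ** 2)
        field += np.exp(1j * k * (r - np.sqrt(f**2 + x**2) + 2.0 * err))
    return np.abs(field) ** 2

ap = np.linspace(-1e-3, 1e-3, 200)        # 2 mm aperture
sx = np.linspace(-2e-7, 2e-7, 101)        # focal-plane coordinates
# 1 nm wavelength (soft X-ray), 0.1 m focal length: perfect vs rough surface
ideal = focal_profile(ap, np.zeros_like(ap), 1e-9, 0.1, sx)
rng = np.random.default_rng(0)
rough = focal_profile(ap, rng.normal(0.0, 1e-9 / 8, ap.size), 1e-9, 0.1, sx)
```

With a lambda/8 rms figure error the induced phase error is ~pi/2 rms, so the Strehl-like peak intensity collapses, which is the kind of criterion the wave-optical analysis quantifies.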
Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media
NASA Astrophysics Data System (ADS)
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-01
This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
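The FFT-based interpolation step described above can be sketched for a periodic 1D field: zero-padding the spectrum interpolates onto the fine grid while leaving frequencies above the coarse-grid Nyquist limit empty, which is exactly the low-pass filtering the authors use for stability. A generic sketch, not the authors' code:

```python
import numpy as np

def fft_refine(coarse, factor):
    """Interpolate a periodic 1D field onto a grid `factor` times finer
    by zero-padding its spectrum; frequencies beyond the coarse Nyquist
    limit stay zero (the built-in low-pass filter)."""
    n = coarse.size
    spec = np.fft.fft(coarse)
    fine_spec = np.zeros(n * factor, dtype=complex)
    half = n // 2
    fine_spec[:half] = spec[:half]     # non-negative frequencies
    fine_spec[-half:] = spec[-half:]   # negative frequencies
    # rescale so amplitudes are preserved on the finer grid
    return np.fft.ifft(fine_spec).real * factor

# A band-limited signal is reproduced exactly on the fine grid
x = np.arange(16) / 16.0
fine = fft_refine(np.cos(2.0 * np.pi * x), 4)
```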
NASA Astrophysics Data System (ADS)
Bhutia, Sangay; Ann Jenkins, Mary; Sun, Ruiyu
2010-01-01
Firebrand spotting is one of the most vexing problems associated with wildland fires, challenging the lives and efforts of fire-fighting planners. This work is an effort to model numerically the event of firebrand spotting for the purposes of reviewing past modelling approaches and of demonstrating a more current coupled fire/atmosphere approach. A simple, two-dimensional treatment of the process of firebrand lofting is examined under the restrictive conditions typical of a classical plume modelling approach. Using this approach, the differences in trajectories of combusting and non-combusting particles are investigated. Next, firebrand spotting is examined using a coupled fire/atmosphere LES (large-eddy simulation) model in which the processes of firebrand lofting, propagation, and deposition are connected. The behaviour of combusting and non-combusting firebrands released from a moving grassfire into three-dimensional time-varying coupled atmosphere-wildfire induced circulations is examined. When these results are compared to the results of a classical plume model for firebrand spotting, it is found that firebrand propagation in the coupled LES simulated flow is significantly different from that obtained by the two-dimensional empirically-derived plume model approach. The coupled atmosphere-wildfire LES results are explorative and need to be subjected to direct testing.
Russ, Alissa L; Zillich, Alan J; Melton, Brittany L; Russell, Scott A; Chen, Siying; Spina, Jeffrey R; Weiner, Michael; Johnson, Elizabette G; Daggy, Joanne K; McManus, M Sue; Hawsey, Jason M; Puleo, Anthony G; Doebbeling, Bradley N; Saleem, Jason J
2014-01-01
Objective To apply human factors engineering principles to improve alert interface design. We hypothesized that incorporating human factors principles into alerts would improve usability, reduce workload for prescribers, and reduce prescribing errors. Materials and methods We performed a scenario-based simulation study using a counterbalanced, crossover design with 20 Veterans Affairs prescribers to compare original versus redesigned alerts. We redesigned drug–allergy, drug–drug interaction, and drug–disease alerts based upon human factors principles. We assessed usability (learnability of redesign, efficiency, satisfaction, and usability errors), perceived workload, and prescribing errors. Results Although prescribers received no training on the design changes, prescribers were able to resolve redesigned alerts more efficiently (median (IQR): 56 (47) s) compared to the original alerts (85 (71) s; p=0.015). In addition, prescribers rated redesigned alerts significantly higher than original alerts across several dimensions of satisfaction. Redesigned alerts led to a modest but significant reduction in workload (p=0.042) and significantly reduced the number of prescribing errors per prescriber (median (range): 2 (1–5) compared to original alerts: 4 (1–7); p=0.024). Discussion Aspects of the redesigned alerts that likely contributed to better prescribing include design modifications that reduced usability-related errors, providing clinical data closer to the point of decision, and displaying alert text in a tabular format. Displaying alert text in a tabular format may help prescribers extract information quickly and thereby increase responsiveness to alerts. Conclusions This simulation study provides evidence that applying human factors design principles to medication alerts can improve usability and prescribing outcomes. PMID:24668841
NASA Astrophysics Data System (ADS)
Gravois, U.; Rogers, W. E.; Sheremet, A.; Jensen, T. G.
2012-12-01
This study focuses on the prediction of waves and surf on the nearshore reefs of South East Florida. The edge of this reef tract, outside of Biscayne Bay, Miami, has a steep transition (1:30) from deep to shallow water and also marks the western wall of the Gulf Stream. Geographically, the area is bordered by Florida, Cuba and the Bahamas, which block the propagation of swell energy and limit the fetch length in all directions except from the North. Related work by the authors on model hindcast validation for this area using HF radar and in situ data exposed the tendency for the wave model SWAN to over-predict wave heights on these nearshore reefs for some NE swell events. Based on the findings of the hindcast validation, a series of theoretical SWAN simulations is set up to investigate the sensitivity of nearshore modeled wave heights to the deep water wave direction and also the effect of coupling with the Gulf Stream surface currents. SWAN is run on an outer wave grid centered about the nearshore reefs of interest and forced with a JONSWAP spectrum that is uniform across all of the boundaries for a suite of wave directions and frequencies. The output of the outer grid is used to force a higher resolution inner grid, run with and without Gulf Stream surface current coupling. Bulk wave parameters are output at a nearshore point location on the reef tract for analysis. There are several interesting findings as a result of this study. First, there is only a narrow swell window that allows waves to propagate into the nearshore study location. This implies that a relatively small error in deep water swell angle could result in significant differences in the nearshore wave heights and is likely the source of error for the hindcast validation. Secondly, the swell window significantly shifts with the inclusion of the Gulf Stream current field. Gulf Stream refraction has more effect on shorter period wave forcing, so much so that the optimal swell window is from the
NASA Astrophysics Data System (ADS)
Dhanya, M.; Chandrasekar, A.
2016-02-01
The background error covariance structure influences a variational data assimilation system immensely. The simulation of a weather phenomenon like monsoon depression can hence be influenced by the background correlation information used in the analysis formulation. The Weather Research and Forecasting Model Data assimilation (WRFDA) system includes an option for formulating multivariate background correlations for its three-dimensional variational (3DVar) system (cv6 option). The impact of using such a formulation in the simulation of three monsoon depressions over India is investigated in this study. Analysis and forecast fields generated using this option are compared with those obtained using the default formulation for regional background error correlations (cv5) in WRFDA and with a base run without any assimilation. The model rainfall forecasts are compared with rainfall observations from the Tropical Rainfall Measurement Mission (TRMM) and the other model forecast fields are compared with a high-resolution analysis as well as with European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis. The results of the study indicate that inclusion of additional correlation information in background error statistics has a moderate impact on the vertical profiles of relative humidity, moisture convergence, horizontal divergence and the temperature structure at the depression centre at the analysis time of the cv5/cv6 sensitivity experiments. Moderate improvements are seen in two of the three depressions investigated in this study. An improved thermodynamic and moisture structure at the initial time is expected to provide for improved rainfall simulation. The results of the study indicate that the skill scores of accumulated rainfall are somewhat better for the cv6 option as compared to the cv5 option for at least two of the three depression cases studied, especially at the higher threshold levels. Considering the importance of utilising improved
Mars gravity field error analysis from simulated radio tracking of Mars Observer
Smith, D.E.; Lerch, F.J.; Chan, J.C.; Chinn, D.S.; Iz, H.B.; Mallama, A.; Patel, G.B.
1990-08-30
The Mars Observer (MO) Mission, in a near-polar orbit at 360-410 km altitude for nearly a 2-year observing period, will greatly improve our understanding of the geophysics of Mars, including its gravity field. To assess the expected improvement of the gravity field, the authors have conducted an error analysis based upon the mission plan for the Mars Observer radio tracking data from the Deep Space Network. Their results indicate that it should be possible to obtain a high-resolution model (spherical harmonics complete to degree and order 50, corresponding to a 200-km horizontal resolution) for the gravitational field of the planet. This model, in combination with topography from MO altimetry, should provide for an improved determination of the broad scale density structure and stress state of the Martian crust and upper mantle. The mathematical model for the error analysis is based on the representation of Doppler tracking data as a function of the Martian gravity field in spherical harmonics, solar radiation pressure, atmospheric drag, angular momentum desaturation residual acceleration (AMDRA) effects, tracking station biases, and the MO orbit parameters. Two approaches are employed. In the first case, the error covariance matrix of the gravity model is estimated including the effects from all the nongravitational parameters (noise-only case). In the second case, the gravity recovery error is computed as above but includes unmodelled systematic effects from atmospheric drag, AMDRA, and solar radiation pressure (biased case). The error spectrum of gravity shows an order of magnitude improvement over current knowledge, based on Doppler data precision from a single station of 0.3 mm s⁻¹ noise for 1-min integration intervals during three 60-day periods.
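The quoted ~200 km resolution follows from the usual half-wavelength rule for a spherical harmonic expansion: the shortest resolved feature of a degree-l_max field is roughly pi * R / l_max. A back-of-envelope check, assuming Mars' mean radius of 3389.5 km:

```python
import math

def block_size_km(l_max, radius_km):
    """Shortest resolved half-wavelength of a degree-l_max spherical
    harmonic field on a sphere of the given radius: pi * R / l_max."""
    return math.pi * radius_km / l_max

# Degree and order 50 on Mars: ~213 km, consistent with the quoted ~200 km
res = block_size_km(50, 3389.5)
```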
NASA Astrophysics Data System (ADS)
Lausch, A.; Jensen, N. K. G.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.
2014-03-01
Purpose: To investigate the effects of registration error (RE) on parametric response map (PRM) analysis of pre and post-radiotherapy (RT) functional images. Methods: Arterial blood flow (ABF) maps were generated from the CT-perfusion scans of 5 patients with hepatocellular carcinoma. ABF values within each patient map were modified to produce seven new ABF maps simulating 7 distinct post-RT functional change scenarios. Ground truth PRMs were generated for each patient by comparing the simulated and original ABF maps. Each simulated ABF map was then deformed by different magnitudes of realistic respiratory motion in order to simulate RE. PRMs were generated for each of the deformed maps and then compared to the ground truth PRMs to produce estimates of RE-induced misclassification. Main findings: The percentage of voxels misclassified as decreasing, no change, and increasing increased with RE. For all patients, increasing RE was observed to increase the number of high post-RT ABF voxels associated with low pre-RT ABF voxels and vice versa. 3 mm of average tumour RE resulted in 18-45% tumour voxel misclassification rates. Conclusions: RE-induced misclassification posed challenges for PRM analysis in the liver, where registration accuracy tends to be lower. Quantitative understanding of the sensitivity of the PRM method to registration error is required if PRMs are to be used to guide radiation therapy dose painting techniques.
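The PRM voxel classification and its sensitivity to a registration shift can be sketched in a few lines. This is a toy 1D illustration (threshold, ABF values, and the 1-voxel shift are all made up), not the study's pipeline:

```python
import numpy as np

def prm_classify(pre, post, threshold):
    """Parametric response map: label each voxel +1 (increase),
    -1 (decrease), or 0 (no change) by the pre-to-post difference."""
    diff = post - pre
    return np.sign(diff) * (np.abs(diff) > threshold)

def misclassification_rate(truth, observed):
    """Fraction of voxels whose PRM label changed."""
    return float(np.mean(truth != observed))

# Toy 1D "tumour": ABF rises by 30 units in the first half post-RT
pre = np.zeros(100)
post = np.where(np.arange(100) < 50, 30.0, 0.0)
truth = prm_classify(pre, post, threshold=10.0)
# Simulate a 1-voxel registration error by shifting the post map
shifted = prm_classify(pre, np.roll(post, 1), threshold=10.0)
```

Even this tiny shift misclassifies the voxels along the boundary of the responding region, which is the mechanism behind the boundary-dominated misclassification the study reports.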
Mathematical simulation of sound propagation in a flow channel with impedance walls
NASA Astrophysics Data System (ADS)
Osipov, A. A.; Reent, K. S.
2012-07-01
The paper considers the specifics of calculating tonal sound propagating in a flow channel with an installed sound-absorbing device. The calculation is performed by numerically integrating the linearized nonstationary Euler equations using a code developed by the authors based on the so-called discontinuous Galerkin method. Using the linear theory of small perturbations, the effect of the sound-absorbing lining of the channel walls is described with the modified value of acoustic impedance proposed by the authors, for which, under flow channel conditions, the traditional classification of the active and reactive types of lining in terms of the real and imaginary impedance values, respectively, remains valid. To stabilize the computation process, a generalized impedance boundary condition is proposed in which, in addition to the impedance value itself, some additional parameters are introduced characterizing certain fictitious properties of inertia and elasticity of the impedance surface.
NASA Astrophysics Data System (ADS)
Voronov, Aleksandr V.; Tret'yakov, Evgeniy V.; Shuvalov, Vladimir V.
2004-06-01
Based on the path-integration technique and the Metropolis method, an original calculation scheme is developed for solving the problem of light propagation through highly scattering objects. The elimination of calculations of 'unnecessary' realisations and the phenomenological description of processes of multiple small-angle scattering provided a drastic increase (by nine or more orders of magnitude) in the calculation rate, retaining the specific features of the problem (consideration of spatial inhomogeneities, boundary conditions, etc.). The scheme allows one to verify other fast calculation algorithms and to obtain information required to reconstruct the internal structure of highly scattering objects (of size ~1000 scattering lengths and more) by the method of diffusion optical tomography.
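For contrast with the accelerated path-integral scheme, the brute-force baseline it improves on is a plain Monte Carlo photon random walk. A deliberately simple slab-transmittance sketch (isotropic scattering, illustrative coefficients), far simpler than the paper's method:

```python
import numpy as np

def slab_transmittance(n_photons, mu_s, mu_a, thickness, rng):
    """Fraction of photons crossing a slab: exponential free paths of
    mean 1/(mu_s + mu_a), isotropic scattering (uniform direction
    cosine), absorption with probability mu_a/(mu_s + mu_a) at each
    interaction."""
    mu_t = mu_s + mu_a
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                     # depth, direction cosine
        while True:
            z += uz * rng.exponential(1.0 / mu_t)
            if z >= thickness:
                transmitted += 1
                break
            if z < 0.0 or rng.random() < mu_a / mu_t:
                break                        # escaped backwards or absorbed
            uz = rng.uniform(-1.0, 1.0)
    return transmitted / n_photons

rng = np.random.default_rng(1)
# With no scattering this must reproduce Beer-Lambert: T = exp(-mu_a * L)
t_abs = slab_transmittance(20000, 0.0, 1.0, 1.0, rng)
```

Every photon history is simulated here, including the 'unnecessary' ones; the paper's Metropolis-based importance sampling is precisely about avoiding that cost.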
NASA Technical Reports Server (NTRS)
Gupta, Vipul; Hochhalter, Jacob; Yamakov, Vesselin; Scott, Willard; Spear, Ashley; Smith, Stephen; Glaessgen, Edward
2013-01-01
A systematic study of crack tip interaction with grain boundaries is critical for improvement of multiscale modeling of microstructurally-sensitive fatigue crack propagation and for the computationally-assisted design of more durable materials. In this study, single, bi- and large-grain multi-crystal specimens of an aluminum-copper alloy are fabricated, characterized using electron backscattered diffraction (EBSD), and deformed under tensile loading and nano-indentation. 2D image correlation (IC) in an environmental scanning electron microscope (ESEM) is used to measure displacements near crack tips, grain boundaries and within grain interiors. The role of grain boundaries on slip transfer is examined using nano-indentation in combination with high-resolution EBSD. The use of detailed IC and EBSD-based experiments are discussed as they relate to crystal-plasticity finite element (CPFE) model calibration and validation.
FEM-simulation of laminar flame propagation. I: Two-dimensional flames
NASA Astrophysics Data System (ADS)
Michaelis, B.; Rogg, B.
2004-05-01
In this paper, we present a numerical model for two-dimensional low-Mach-number flows of reactive ideal-gas mixtures based on the fundamental conservation equations in primitive variables. Chemical reaction is described by a detailed mechanism of elementary reactions, and detailed models for molecular transport and thermodynamics are taken into account. The equations are discretized by a finite-element method on unstructured grids using the well known Taylor-Hood element. A streamline-diffusion upwinding technique is used to avoid instabilities in convection-dominated regions of the flowfield. A fully operative local adaptive mesh-refinement procedure is used. As numerical examples we consider steadily propagating laminar flames in flat channels, which appear in a variety of shapes depending on the boundary conditions.
Buckingham, Steven D; Spencer, Andrew N
2008-06-01
We applied compartmental computer modeling to test a model of spike shape change in the jellyfish, Polyorchis penicillatus, to determine whether adaptive spike shortening can be attributed to the inactivation properties of a potassium channel. We modeled the jellyfish outer nerve-ring as a continuous linear segment, using ion channel and membrane properties derived in earlier studies. The model supported action potentials that shortened as they propagated away from the site of initiation and this was found to be largely independent of potassium channel inactivation. Spike broadening near the site of initiation was found to be due to a depolarization plateau that collapsed as two spikes spread from the point of initiation. The lifetime of this plateau was found to depend critically on the inward current flux and the space constant of the membrane. These data suggest that the spike shape changes may be due not only to potassium channel inactivation, but also to the passive properties of the membrane.
NASA Astrophysics Data System (ADS)
Shiota, D.; Kataoka, R.
2016-02-01
Coronal mass ejections (CMEs) are the most important drivers of various types of space weather disturbance. Here we report a newly developed magnetohydrodynamic (MHD) simulation of the solar wind, including a series of multiple CMEs with internal spheromak-type magnetic fields. First, the polarity of the spheromak magnetic field is set automatically according to the Hale-Nicholson law and the chirality law of Bothmer and Schwenn. The MHD simulation is therefore capable of predicting the time profile of the southward interplanetary magnetic field at the Earth, in relation to the passage of a magnetic cloud within a CME. This profile is the most important parameter for space weather forecasts of magnetic storms. In order to evaluate the current ability of our simulation, we demonstrate a test case: the propagation and interaction process of multiple CMEs associated with the highly complex active region NOAA 10486 in October to November 2003, and present the result of a simulation of the solar wind parameters at the Earth during the 2003 Halloween storms. We succeeded in reproducing the arrival at the Earth's position of a large amount of southward magnetic flux, which is capable of causing an intense magnetic storm. We find that the observed complex time profile of the solar wind parameters at the Earth could be reasonably well understood by the interaction of a few specific CMEs.
Heavner, Karyn; Burstyn, Igor
2015-08-24
Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
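The cutoff scan described above can be reproduced in miniature. The exposure-outcome curve, measurement-error variance, sample size, and cutoffs below are illustrative choices, not the paper's simulation design:

```python
import math
import random

def odds_ratio(exposure, outcome, cutoff):
    """2x2 odds ratio after dichotomizing a continuous exposure at `cutoff`."""
    a = b = c = d = 0
    for x, y in zip(exposure, outcome):
        high = x >= cutoff
        if high and y: a += 1
        elif high: b += 1
        elif y: c += 1
        else: d += 1
    if min(a, b, c, d) == 0:        # a zero cell leaves the OR undefined
        return None
    return (a * d) / (b * c)

random.seed(1)
true_x = [random.gauss(0, 1) for _ in range(5000)]
# Mismeasured exposure: true value plus classical measurement error.
obs_x = [x + random.gauss(0, 0.5) for x in true_x]
# Outcome risk rises with the true exposure (logistic curve, slope 1).
y = [random.random() < 1 / (1 + math.exp(-(x - 0.5))) for x in true_x]

# OR as a function of the cutoff used to dichotomize the observed exposure.
ors = {c: odds_ratio(obs_x, y, c) for c in (-1.0, 0.0, 1.0)}
```

Scanning a denser grid of cutoffs (the paper uses 61) traces out the OR-versus-cutoff curve whose shape the measurement error progressively obscures.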
Sankaran, Sethuraman; Marsden, Alison L
2011-03-01
Simulations of blood flow in both healthy and diseased vascular models can be used to compute a range of hemodynamic parameters including velocities, time varying wall shear stress, pressure drops, and energy losses. The confidence in the data output from cardiovascular simulations depends directly on our level of certainty in simulation input parameters. In this work, we develop a general set of tools to evaluate the sensitivity of output parameters to input uncertainties in cardiovascular simulations. Uncertainties can arise from boundary conditions, geometrical parameters, or clinical data. These uncertainties result in a range of possible outputs which are quantified using probability density functions (PDFs). The objective is to systematically model the input uncertainties and quantify the confidence in the output of hemodynamic simulations. Input uncertainties are quantified and mapped to the stochastic space using the stochastic collocation technique. We develop an adaptive collocation algorithm for Gauss-Lobatto-Chebyshev grid points that significantly reduces computational cost. This analysis is performed on two idealized problems--an abdominal aortic aneurysm and a carotid artery bifurcation, and one patient-specific problem--a Fontan procedure for congenital heart defects. In each case, relevant hemodynamic features are extracted and their uncertainty is quantified. Uncertainty quantification of the hemodynamic simulations is done using (a) stochastic space representations, (b) PDFs, and (c) the confidence intervals for a specified level of confidence in each problem.
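A stripped-down sketch of the stochastic collocation idea: run the expensive model only at Chebyshev-Gauss-Lobatto nodes, build a cheap polynomial surrogate, then sample the surrogate to estimate output statistics. The exponential "model" is a stand-in for a hemodynamic solver, and the single uniform input is an illustrative simplification:

```python
import math
import random

def cgl_nodes(n):
    """Chebyshev-Gauss-Lobatto nodes on [-1, 1]."""
    return [math.cos(math.pi * j / n) for j in range(n + 1)]

def barycentric(nodes, values, x):
    """Barycentric Lagrange interpolation at CGL nodes
    (weights (-1)^j, halved at the two endpoints)."""
    num = den = 0.0
    for j, (xj, fj) in enumerate(zip(nodes, values)):
        if x == xj:
            return fj
        w = (-1.0) ** j
        if j in (0, len(nodes) - 1):
            w *= 0.5
        t = w / (x - xj)
        num += t * fj
        den += t
    return num / den

# Hypothetical "expensive" model: output vs. one uncertain input on [-1, 1].
model = lambda z: math.exp(z)

nodes = cgl_nodes(8)
vals = [model(z) for z in nodes]     # only 9 expensive model runs
random.seed(0)
# Cheap surrogate sampling to estimate the output mean for z ~ U(-1, 1).
samples = [barycentric(nodes, vals, random.uniform(-1, 1)) for _ in range(20000)]
mean_est = sum(samples) / len(samples)
# Exact mean of exp(z) over U(-1, 1) is sinh(1) ~= 1.1752, so the surrogate
# reproduces the output statistic at a tiny fraction of the solver cost.
```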
NASA Technical Reports Server (NTRS)
Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.
1980-01-01
The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.
Mars gravity field error analysis from simulated radio tracking of Mars Observer
NASA Technical Reports Server (NTRS)
Smith, D. E.; Lerch, F. J.; Chan, J. C.; Chinn, D. S.; Iz, H. B.
1990-01-01
Results are presented on the analysis of the recovery of the Martian gravity field from tracking data in the presence of unmodeled error effects associated with different orbit orientations. The analysis was based on the mission plan for the Mars Observer (MO) radio tracking data from the Deep Space Network. From the analysis, a conservative estimate of the gravitational accuracy for the entire mission could be obtained. The results suggest that, because the atmospheric drag is the dominant error source, the spacecraft orbit could possibly be raised in altitude without a significant loss of gravitational signal. A change in altitude will also alleviate the large effects seen in the spectrum at the satellite resonant orders.
Blackwell, David D.; Walker, David N.; Amatucci, William E.
2010-01-15
In previous papers, early whistler propagation measurements were presented [W. E. Amatucci et al., IEEE Trans. Plasma Sci. 33, 637 (2005)] as well as antenna impedance measurements [D. D. Blackwell et al., Phys. Plasmas 14, 092106 (2007)] performed in the Naval Research Laboratory Space Physics Simulation Chamber (SPSC). Since that time there have been major upgrades in the experimental capabilities of the laboratory in the form of improvement of both the plasma source and antennas. This has allowed access to plasma parameter space that was previously unattainable, and has resulted in measurements that provide a significantly clearer picture of whistler propagation in the laboratory environment. This paper presents some of the first whistler experimental results from the upgraded SPSC. Whereas previously measurements were limited to measuring the cyclotron resonance cutoff and elliptical polarization indicative of the whistler mode, now it is possible to experimentally plot the dispersion relation itself. The waves are driven and detected using balanced dipole and loop antennas connected to a network analyzer, which measures the amplitude and phase of the wave in two dimensions (r and z). In addition, the frequency of the signals is swept over a range of several hundreds of megahertz, providing a comprehensive picture of the near and far field antenna radiation patterns over a variety of plasma conditions. The magnetic field is varied from a few gauss to 200 G, with the density variable over at least 3 decades from 10^7 to 10^10 cm^-3. The waves are shown to lie on the dispersion surface for whistler waves, with observation of resonance cones in agreement with theoretical predictions. The waves are also observed to propagate without loss of amplitude at higher power, a result in agreement with previous experiments and the notion of ducted whistlers.
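The dispersion surface such measurements map out can be evaluated directly from the cold-plasma parallel whistler dispersion relation. The sketch below uses the standard cgs engineering formulas for the plasma and cyclotron frequencies and chamber-like parameters chosen for illustration (not the experiment's exact conditions):

```python
import math

def whistler_k(f, n_e_cm3, B_gauss):
    """Parallel-propagation whistler wavenumber from the cold-plasma
    dispersion relation (kc/w)^2 = 1 + wpe^2 / (w * (wce - w)),
    valid on the whistler branch 0 < w < wce."""
    c = 2.998e10                        # speed of light, cm/s
    wpe = 5.64e4 * math.sqrt(n_e_cm3)   # electron plasma frequency, rad/s
    wce = 1.76e7 * B_gauss              # electron cyclotron frequency, rad/s
    w = 2 * math.pi * f
    if not (0 < w < wce):
        raise ValueError("whistler branch requires 0 < w < wce")
    n2 = 1 + wpe**2 / (w * (wce - w))   # refractive index squared
    return w * math.sqrt(n2) / c        # rad/cm

# Conditions comparable to the chamber: n_e = 1e9 cm^-3, B = 100 G, f = 100 MHz.
k = whistler_k(1e8, 1e9, 100.0)
wavelength_cm = 2 * math.pi / k
```

Sweeping f below the cyclotron frequency and plotting w against k traces the dispersion surface against which the two-dimensional amplitude and phase measurements are compared.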
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
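The core of such a Monte Carlo reliability estimate (without the variance reduction) can be sketched as follows; the series-system structure and Weibull parameters are illustrative choices, not MC-HARP's models:

```python
import math
import random

def weibull_sample(shape, scale, u):
    """Inverse-CDF sample of a Weibull(shape, scale) failure time."""
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def series_system_reliability(t, shape, scale, n_components, trials=20000, seed=7):
    """Crude Monte Carlo estimate of R(t) for a series system whose components
    fail independently with Weibull (non-constant hazard) lifetimes."""
    rng = random.Random(seed)
    survive = 0
    for _ in range(trials):
        times = [weibull_sample(shape, scale, rng.random()) for _ in range(n_components)]
        if min(times) > t:          # series system: the first failure is fatal
            survive += 1
    return survive / trials

r = series_system_reliability(t=0.5, shape=2.0, scale=2.0, n_components=3)
# Analytic check for this toy case: R(t) = exp(-3 * (t/scale)^shape).
```

Variance reduction matters precisely because, for highly reliable systems, failures are rare and a crude estimator like this one needs enormous sample sizes to resolve small unreliabilities.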
ELIASSI,MEHDI; GLASS JR.,ROBERT J.
2000-03-08
The authors consider the ability of the numerical solution of Richards equation to model gravity-driven fingers. Although gravity-driven fingers can be easily simulated using a partial downwind averaging method, they find the fingers are purely artificial, generated by the combined effects of truncation error induced oscillations and capillary hysteresis. Since Richards equation can only yield a monotonic solution for standard constitutive relations and constant flux boundary conditions, it is not the valid governing equation to model gravity-driven fingers, and therefore is also suspect for unsaturated flow in initially dry, highly nonlinear, and hysteretic media where these fingers occur. However, analysis of truncation error at the wetting front for the partial downwind method suggests the required mathematical behavior of a more comprehensive and physically based modeling approach for this region of parameter space.
Simulation of laser-driven plasma beat-wave propagation in collisional weakly relativistic plasmas
NASA Astrophysics Data System (ADS)
Kaur, Maninder; Nandan Gupta, Devki
2016-11-01
The process of interaction of lasers beating in a plasma has been explored by means of particle-in-cell (PIC) simulations in the presence of electron-ion collisions. A plasma beat wave is resonantly excited by the ponderomotive force of two relatively long laser pulses of different frequencies. The amplitude of the plasma wave becomes maximum when the difference in the frequencies is equal to the plasma frequency. We demonstrate the energy transfer between the laser beat wave and the plasma wave in the presence of electron-ion collisions in the weakly relativistic regime with 2D-PIC simulations. The relativistic effect and electron-ion collisions both affect the energy transfer between the interacting waves. The simulation results show a considerable decay of the plasma wave and the field energy over time in the presence of electron-ion collisions.
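The resonance condition (beat frequency equal to the plasma frequency) fixes the plasma density for a given laser pair. A small sketch; the CO2 laser lines used are a commonly quoted beat-wave pair chosen for illustration, not necessarily the wavelengths of these simulations:

```python
import math

def resonant_density(lambda1_m, lambda2_m):
    """Plasma density (m^-3) at which the laser beat frequency matches the
    plasma frequency, i.e. the resonance condition for beat-wave excitation."""
    c = 2.998e8
    eps0, e, me = 8.854e-12, 1.602e-19, 9.109e-31
    w1 = 2 * math.pi * c / lambda1_m
    w2 = 2 * math.pi * c / lambda2_m
    dw = abs(w1 - w2)                 # beat frequency = wpe at resonance
    return dw**2 * eps0 * me / e**2   # invert wpe^2 = n e^2 / (eps0 me)

# Example: CO2 laser lines at 10.6 um and 9.6 um give a resonant density
# of order 1e17 cm^-3 (1e23 m^-3).
n_res = resonant_density(10.6e-6, 9.6e-6)
```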
Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David
2013-09-09
The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods.
Chen, Jacqueline H.; Hawkes, Evatt R.; Sankaran, Ramanan; Mason, Scott D.; Im, Hong G.
2006-04-15
The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry with a view to providing better understanding and modeling of combustion processes in homogeneous charge compression-ignition engines. Numerical diagnostics are developed to analyze the mode of combustion and the dependence of overall ignition progress on initial mixture conditions. The roles of dissipation of heat and mass are divided conceptually into transport within ignition fronts and passive scalar dissipation, which modifies the statistics of the preignition temperature field. Transport within ignition fronts is analyzed by monitoring the propagation speed of ignition fronts using the displacement speed of a scalar that tracks the location of maximum heat release rate. The prevalence of deflagrative versus spontaneous ignition front propagation is found to depend on the local temperature gradient, and may be identified by the ratio of the instantaneous front speed to the laminar deflagration speed. The significance of passive scalar mixing is examined using a mixing timescale based on enthalpy fluctuations. Finally, the predictions of the multizone modeling strategy are compared with the DNS, and the results are explained using the diagnostics developed.
Razzaq, Misbah; Ahmad, Jamil
2015-01-01
Internet worms are analogous to biological viruses since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. This is why in this study we suggest a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous-Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through Model Checking, we gained insight into the functionality of compartmental models. The Model Checking results agree well with the simulation results, which fully supports the proposed framework.
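A minimal Gillespie (CTMC) realization of the underlying SEIR dynamics, before the delayed and quarantined compartments are added; the rates and population size are illustrative, not the paper's:

```python
import random

def seir_gillespie(beta, sigma, gamma, s, e, i, r, t_end, seed=3):
    """Gillespie (continuous-time Markov chain) simulation of an SEIR
    worm-spread model. Event rates: infection beta*S*I/N, end of
    incubation sigma*E, recovery gamma*I."""
    rng = random.Random(seed)
    n = s + e + i + r
    t = 0.0
    while t < t_end and (e + i) > 0:
        rates = [beta * s * i / n, sigma * e, gamma * i]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)          # exponential waiting time
        u = rng.random() * total             # pick which event fires
        if u < rates[0]:
            s, e = s - 1, e + 1              # S -> E
        elif u < rates[0] + rates[1]:
            e, i = e - 1, i + 1              # E -> I
        else:
            i, r = i - 1, r + 1              # I -> R
    return s, e, i, r

final = seir_gillespie(beta=0.9, sigma=0.5, gamma=0.3,
                       s=990, e=0, i=10, r=0, t_end=200.0)
```

A quarantine strategy would be expressed as extra compartments and transition rates in exactly the same event-driven scheme, which is what the Stochastic Petri Net formulation encodes.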
Di Sarli, Valeria; Di Benedetto, Almerinda; Russo, Gennaro
2010-08-15
In this work, an assessment of different sub-grid scale (sgs) combustion models proposed for large eddy simulation (LES) of steady turbulent premixed combustion (Colin et al., Phys. Fluids 12 (2000) 1843-1863; Flohr and Pitsch, Proc. CTR Summer Program, 2000, pp. 61-82; Kim and Menon, Combust. Sci. Technol. 160 (2000) 119-150; Charlette et al., Combust. Flame 131 (2002) 159-180; Pitsch and Duchamp de Lageneste, Proc. Combust. Inst. 29 (2002) 2001-2008) was performed to identify the model that best predicts unsteady flame propagation in gas explosions. Numerical results were compared to the experimental data by Patel et al. (Proc. Combust. Inst. 29 (2002) 1849-1854) for premixed deflagrating flame in a vented chamber in the presence of three sequential obstacles. It is found that all sgs combustion models are able to reproduce qualitatively the experiment in terms of step of flame acceleration and deceleration around each obstacle, and shape of the propagating flame. Without adjusting any constants and parameters, the sgs model by Charlette et al. also provides satisfactory quantitative predictions for flame speed and pressure peak. Conversely, the sgs combustion models other than Charlette et al. give correct predictions only after an ad hoc tuning of constants and parameters.
NASA Astrophysics Data System (ADS)
Ávila-Carrera, R.; Sánchez-Sesma, F. J.; Spurlin, James H.; Valle-Molina, C.; Rodríguez-Castellanos, A.
2014-09-01
An analytic formulation to understand the scattering, diffraction and attenuation of elastic waves in the neighborhood of fluid-filled wells is presented. An important, and not widely exploited, technique to carefully investigate the wave propagation in exploration wells is the logging of sonic waveforms. Fundamental decisions and production planning in petroleum reservoirs are made by interpretation of such recordings. Nowadays, geophysicists and engineers face problems related to the acquisition and interpretation under complex conditions associated with conducting open-hole measurements. A crucial problem that directly affects the response of sonic logs is the eccentricity of the measuring tool with respect to the center of the borehole. Even with the employment of centralizers, this simple variation dramatically changes the physical conditions of the wave propagation around the well. Recent works in the numerical field reported advanced studies in modeling and simulation of acoustic wave propagation around wells, including complex heterogeneities and anisotropy. However, no analytical efforts have been made to formally understand the wireline sonic logging measurements acquired with borehole-eccentered tools. In this paper, Graf's addition theorem was used to describe monopole sources in terms of solutions of the wave equation. The formulation was developed from the three-dimensional discrete wave-number method in the frequency domain. The cylindrical Bessel functions of the third kind and order zero were re-derived to obtain a simplified set of equations projected into a bi-dimensional plane-space for displacements and stresses. This new and condensed analytic formulation allows the straightforward calculation of all converted modes and their visualization in the time domain via Fourier synthesis. The main aim was to obtain spectral surfaces of transfer functions and synthetic seismograms that might be useful to understand the wave motion produced by the
We developed and applied a spatially-explicit, eco-hydrologic model to examine how a landscape disturbance affects hydrologic processes, ecosystem cycling of C and N, and ecosystem structure. We simulated how the pattern and magnitude of tree removal in a catchment influences fo...
Falvo, Cyril; Palmieri, Benoit; Mukamel, Shaul
2009-01-01
The two-dimensional vibrational response of the disordered strongly fluctuating OH exciton band in liquid water is investigated using a new simulation protocol. The direct nonlinear exciton propagation generalizes the nonlinear exciton equations to include nonadiabatic time dependent Hamiltonian and transition dipole fluctuations. The excitonic picture is retained and the large cancellation between Liouville pathways is built-in from the outset. The sensitivity of the photon echo and double-quantum-coherence techniques to frequency fluctuations, molecular reorientation, intermolecular coupling, and the two-exciton coherence is investigated. The photon echo is particularly sensitive to the frequency fluctuations and molecular reorientation, whereas the double-quantum coherence provides a unique probe for intermolecular couplings and two-exciton coherence. PMID:19449930
ITER test blanket module error field simulation experiments at DIII-D
NASA Astrophysics Data System (ADS)
Schaffer, M. J.; Snipes, J. A.; Gohil, P.; de Vries, P.; Evans, T. E.; Fenstermacher, M. E.; Gao, X.; Garofalo, A. M.; Gates, D. A.; Greenfield, C. M.; Heidbrink, W. W.; Kramer, G. J.; La Haye, R. J.; Liu, S.; Loarte, A.; Nave, M. F. F.; Osborne, T. H.; Oyama, N.; Park, J.-K.; Ramasubramanian, N.; Reimerdes, H.; Saibene, G.; Salmi, A.; Shinohara, K.; Spong, D. A.; Solomon, W. M.; Tala, T.; Zhu, Y. B.; Boedo, J. A.; Chuyanov, V.; Doyle, E. J.; Jakubowski, M.; Jhang, H.; Nazikian, R. M.; Pustovitov, V. D.; Schmitz, O.; Srinivasan, R.; Taylor, T. S.; Wade, M. R.; You, K.-I.; Zeng, L.; DIII-D Team
2011-10-01
Experiments at DIII-D investigated the effects of magnetic error fields similar to those expected from proposed ITER test blanket modules (TBMs) containing ferromagnetic material. Studied were effects on: plasma rotation and locking, confinement, L-H transition, the H-mode pedestal, edge localized modes (ELMs) and ELM suppression by resonant magnetic perturbations, energetic particle losses, and more. The experiments used a purpose-built three-coil mock-up of two magnetized ITER TBMs in one ITER equatorial port. The largest effect was a reduction in plasma toroidal rotation velocity v across the entire radial profile by as much as Δv/v ~ 60% via non-resonant braking. Changes to global Δn/n, Δβ/β and ΔH98/H98 were ~3 times smaller. These effects are stronger at higher β. Other effects were smaller. The TBM field increased sensitivity to locking by an applied known n = 1 test field in both L- and H-mode plasmas. Locked mode tolerance was completely restored in L-mode by re-adjusting the DIII-D n = 1 error field compensation system. Numerical modelling by IPEC reproduces the rotation braking and locking semi-quantitatively, and identifies plasma amplification of a few n = 1 Fourier harmonics as the main cause of braking. IPEC predicts that TBM braking in H-mode may be reduced by n = 1 control. Although extrapolation from DIII-D to ITER is still an open issue, these experiments suggest that a TBM-like error field will produce only a few potentially troublesome problems, and that they might be made acceptably small.
NASA Astrophysics Data System (ADS)
Jin, M.; Manchester, W. B.; van der Holst, B.; Sokolov, I.; Tóth, G.; Vourlidas, A.; de Koning, C. A.; Gombosi, T. I.
2017-01-01
We perform and analyze the results of a global magnetohydrodynamic simulation of the fast coronal mass ejection (CME) that occurred on 2011 March 7. The simulation is made using the newly developed Alfvén Wave Solar Model (AWSoM), which describes the background solar wind starting from the upper chromosphere and extends to 24 R⊙. Coupling AWSoM to an inner heliosphere model with the Space Weather Modeling Framework extends the total domain beyond the orbit of Earth. Physical processes included in the model are multi-species thermodynamics, electron heat conduction (both collisional and collisionless formulations), optically thin radiative cooling, and Alfvén-wave turbulence that accelerates and heats the solar wind. The Alfvén-wave description is physically self-consistent, including non-Wentzel–Kramers–Brillouin reflection and physics-based apportioning of turbulent dissipative heating to both electrons and protons. Within this model, we initiate the CME by using the Gibson-Low analytical flux rope model and follow its evolution for days, in which time it propagates beyond STEREO A. A detailed comparison study is performed using remote as well as in situ observations. Although the flux rope structure is not compared directly due to lack of relevant ejecta observation at 1 au in this event, our results show that the new model can reproduce many of the observed features near the Sun (e.g., CME-driven extreme ultraviolet [EUV] waves, deflection of the flux rope from the coronal hole, “double-front” in the white light images) and in the heliosphere (e.g., shock propagation direction, shock properties at STEREO A).
Turbulence Scales, Rise Times, Caustics, and the Simulation of Sonic Boom Propagation
NASA Technical Reports Server (NTRS)
Pierce, Allan D.
1996-01-01
The general topic of atmospheric turbulence effects on sonic boom propagation is addressed with special emphasis on taking proper and efficient account of the contributions of the portion of the turbulence that is associated with extremely high wavenumber components. The recent work reported by Bart Lipkens in his doctoral thesis is reexamined to determine whether the good agreement between his measured rise times and the 1971 theory of the author is fortuitous. It is argued that Lipkens's estimate of the distance to the first caustic was a gross overestimate because of the use of a sound speed correlation function shaped like a Gaussian curve. In particular, it is argued that the expected distance to the first caustic varies with the kinematic viscosity nu, the energy epsilon dissipated per unit mass per unit time, and the sound speed c as: d(first caustic) = nu^(7/12) c^(2/3) / epsilon^(5/12) x (nu epsilon/c^4)^a, where the exponent a is greater than -7/12 and can be argued to be either 0 or 1/24. In any event, the surprising aspect of the relationship is that it actually goes to zero as the viscosity goes to zero with epsilon held constant. It is argued that the apparent overabundance of caustics can be grossly reduced by a general computational and analytical perspective that partitions the turbulence into two parts, divided by a wavenumber k_c. Wavenumbers higher than k_c correspond to small-scale turbulence, and the associated turbulence can be taken into account by a renormalization of the ambient sound speed so that the result has a small frequency dependence that results from a spatial averaging over the smaller-scale turbulent fluctuations. Selection of k_c can be made so large that only a very small number of caustics are encountered if one adopts the premise that the frequency dispersion of pulses is caused by that part of the turbulence spectrum which lies in the inertial range originally predicted by Kolmogoroff. The
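The scaling for the expected distance to the first caustic is easy to evaluate numerically; the atmospheric values below are illustrative order-of-magnitude choices, not measurements from the paper:

```python
def d_first_caustic(nu, c, eps, a):
    """Pierce's scaling for the expected distance to the first caustic:
    d = nu^(7/12) * c^(2/3) / eps^(5/12) * (nu*eps/c^4)^a,
    with a either 0 or 1/24 (both candidates exceed -7/12)."""
    return nu ** (7.0 / 12.0) * c ** (2.0 / 3.0) / eps ** (5.0 / 12.0) \
        * (nu * eps / c ** 4) ** a

# Illustrative (not measured) values: nu in m^2/s, c in m/s, eps in W/kg.
nu, c, eps = 1.5e-5, 340.0, 1e-3
d0 = d_first_caustic(nu, c, eps, a=0.0)
d1 = d_first_caustic(nu, c, eps, a=1.0 / 24.0)
# Both candidate exponents give d -> 0 as nu -> 0, the counterintuitive
# feature highlighted in the abstract.
smaller = d_first_caustic(nu / 10, c, eps, a=0.0)
```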
NASA Astrophysics Data System (ADS)
Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan
2015-10-01
Modelling glacial lake outburst floods (GLOFs) or 'jökulhlaups' necessarily involves the propagation of large and often stochastic uncertainties throughout the source to impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs with different spatial resolution. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, results derived from the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those from a finer-resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) derived DEM. Overall, we contend that the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
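The MC-LCP idea (perturb the DEM with vertical error, recompute a least-cost path, and accumulate per-cell crossing frequencies) can be sketched on a toy grid. The uphill-penalty cost function, grid, and error magnitude below are hypothetical illustrations, not the paper's distributed model:

```python
import heapq
import random

def least_cost_path(dem, start, goal):
    """Dijkstra least-cost path on a grid where each step costs 1 plus
    any elevation gained (a crude uphill penalty)."""
    rows, cols = len(dem), len(dem[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        r, c = u
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (r + dr, c + dc)
            if 0 <= v[0] < rows and 0 <= v[1] < cols:
                step = 1.0 + max(0.0, dem[v[0]][v[1]] - dem[r][c])
                nd = d + step
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [], goal                 # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    return path + [start]

# Monte Carlo loop: perturb the DEM and count how often each cell is crossed.
random.seed(2)
base = [[float(r + c) for c in range(8)] for r in range(8)]
hits = {}
for _ in range(50):
    noisy = [[z + random.gauss(0, 0.5) for z in row] for row in base]
    for cell in least_cost_path(noisy, (0, 0), (7, 7)):
        hits[cell] = hits.get(cell, 0) + 1
```

Cells crossed by most realizations form the high-confidence flow corridor; cells crossed rarely populate the uncertainty continuum around it.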
Molecular-Level Simulations of Shock Generation and Propagation in Soda-Lime Glass
NASA Astrophysics Data System (ADS)
Grujicic, M.; Bell, W. C.; Pandurangan, B.; Cheeseman, B. A.; Fountzoulas, C.; Patel, P.
2012-08-01
A non-equilibrium molecular dynamics method is employed to study the mechanical response of soda-lime glass (a material commonly used in transparent armor applications) when subjected to the loading conditions associated with the generation and propagation of planar shock waves. Specific attention is given to the identification and characterization of various (inelastic-deformation and energy-dissipation) molecular-level phenomena and processes taking place at, or in the vicinity of, the shock front. The results obtained revealed that the shock loading causes a 2-4% (shock strength-dependent) density increase. In addition, an increase in the average coordination number of the silicon atoms is observed along with the creation of smaller Si-O rings. These processes are associated with substantial energy absorption and dissipation and are believed to greatly influence the blast/ballistic impact mitigation potential of soda-lime glass. The present work was also aimed at the determination of the shock Hugoniot (i.e., a set of axial stress vs. density/specific-volume vs. internal energy vs. particle velocity vs. temperature) material states obtained in soda-lime glass after the passage of a shock wave of a given strength (as quantified by the shock speed). The availability of a shock Hugoniot is critical for construction of a high deformation-rate, large-strain, high pressure material model which can be used within a continuum-level computational analysis to capture the response of a soda-lime glass based laminated transparent armor structure (e.g., a military vehicle windshield, door window, etc.) to blast/ballistic impact loading.
Simulation of nonlinear propagation of biomedical ultrasound using PZFlex and the KZK Texas code
NASA Astrophysics Data System (ADS)
Qiao, Shan; Jackson, Edward; Coussios, Constantin-C.; Cleveland, Robin
2015-10-01
In biomedical ultrasound, nonlinear acoustics can be important in both diagnostic and therapeutic applications, and robust simulation tools are needed both in the design process and for day-to-day use such as treatment planning. For most biomedical applications the ultrasound sources generate focused sound beams of finite amplitude. The KZK equation is a common model as it accounts for nonlinearity, absorption and paraxial diffraction, and there are a number of solvers available, primarily developed by research groups. We compare the predictions of the KZK Texas code (a finite-difference time-domain algorithm) to an FEM-based commercial software, PZFlex. PZFlex solves the continuity and momentum conservation equations with a correction for nonlinearity in the equation of state, incorporated using an incrementally linear, 2nd-order accurate, explicit time-domain algorithm. Nonlinear ultrasound beams from two transducers driven at 1 MHz and 3.3 MHz, respectively, were simulated by both the KZK Texas code and PZFlex, and the pressure field was also measured by a fibre-optic hydrophone to validate the models. Further simulations were carried out over a wide range of frequencies. The comparisons showed good agreement at the fundamental frequency for PZFlex, the KZK Texas code and the experiments. For the harmonic components, the KZK Texas code was in good agreement with measurements, but PZFlex underestimated the amplitude: by 32% for the 2nd harmonic and 66% for the 3rd harmonic. The underestimation of harmonics by PZFlex became more significant as the fundamental frequency increased. Furthermore, non-physical oscillations appeared in the axial profiles of the harmonics in the PZFlex results when the amplitudes were relatively low. These results suggest that careful benchmarking of nonlinear simulations is important.
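One standard benchmark for this kind of nonlinear harmonic comparison (a classical reference solution, not stated in the record to be what the authors used) is the Fubini solution for a lossless plane wave before shock formation: the implicit waveform u = sin(theta + sigma*u) for sigma < 1 has harmonic amplitudes B_n = 2 J_n(n*sigma)/(n*sigma). A sketch that recovers them numerically:

```python
import math

def distorted_wave(sigma, n_samples=1024):
    """Fubini (pre-shock, sigma < 1) plane-wave waveform: solve the implicit
    relation u = sin(theta + sigma*u) by fixed-point iteration at each phase
    sample; the map is a contraction for sigma < 1, so this converges."""
    wave = []
    for k in range(n_samples):
        theta = 2.0 * math.pi * k / n_samples
        u = 0.0
        for _ in range(200):
            u = math.sin(theta + sigma * u)
        wave.append(u)
    return wave

def harmonic_amplitudes(wave, n_harmonics=3):
    """Amplitude of each harmonic from a one-period discrete Fourier sum."""
    n = len(wave)
    amps = []
    for h in range(1, n_harmonics + 1):
        re = sum(wave[k] * math.cos(2 * math.pi * h * k / n) for k in range(n)) * 2.0 / n
        im = sum(wave[k] * math.sin(2 * math.pi * h * k / n) for k in range(n)) * 2.0 / n
        amps.append(math.hypot(re, im))
    return amps
```

At sigma = 0.5 the fundamental should come out near 2 J_1(0.5)/0.5 ≈ 0.969 and the 2nd harmonic near 2 J_2(1)/1 ≈ 0.230; a solver whose harmonics drift from such analytic references, as PZFlex's do here, is flagged by exactly this kind of check.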
Measurement and Simulation of Signal Fluctuations Caused by Propagation through Trees
NASA Technical Reports Server (NTRS)
Durden, Stephen L.; Klein, Jeffrey D.; Zebker, Howard A.
1993-01-01
We present measured magnitude and phase fluctuations of UHF, L band, and C band signals that were transmitted from the ground through a forest canopy to an airborne radar. We find that the measured fluctuations are similar to those calculated by a simple Monte Carlo simulation. Both observed and calculated RMS fluctuations are typically several decibels in magnitude and tens of degrees in phase at all three frequencies.
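The record does not detail the Monte Carlo scheme; a minimal stand-in (our assumptions: a unit-amplitude direct ray plus many weak scattered contributions with uniformly random phase) produces fluctuations of this general character, i.e. magnitude spreads of order a decibel and phase spreads of some degrees:

```python
import cmath
import math
import random

def fluctuation_stats(n_trials=2000, n_scatterers=50, scatter_amp=0.02, seed=0):
    """Monte Carlo of a random phasor sum: direct path of unit amplitude plus
    weak scattered phasors with uniform random phase. Returns the RMS
    magnitude fluctuation in dB and the RMS phase fluctuation in degrees."""
    rng = random.Random(seed)
    mags_db, phases_deg = [], []
    for _ in range(n_trials):
        field = 1.0 + 0j
        for _ in range(n_scatterers):
            phase = rng.uniform(0.0, 2.0 * math.pi)
            field += scatter_amp * cmath.exp(1j * phase)
        mags_db.append(20.0 * math.log10(abs(field)))
        phases_deg.append(math.degrees(cmath.phase(field)))
    def rms(xs):
        mean = sum(xs) / len(xs)
        return math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
    return rms(mags_db), rms(phases_deg)
```

With these illustrative numbers the scattered field adds a quadrature component of std ~0.1, giving roughly 0.9 dB and 6 degrees RMS; stronger scattering (denser canopy) scales both upward.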
2001-10-25
…ventricle (Luo and Rudy model [6]) allows this kind of study, in which every variable can be controlled, in contrast to experimental studies, where the measurement and control of some variables are complicated. Furthermore, computer simulations offer an important advantage, i.e. results are not sensitive to… (Departamento de Ingenieria Electronica, Universidad Politecnica de Valencia)
Verification and Validation of Rural Propagation in the Sage 2.0 Simulation
2016-08-01
Lethality Analysis Directorate, ARL; Jayashree Harikumar, Patrick Honan, Jesse Jackman, and Brad Morgan, Physical Science Laboratory (PSL), New Mexico… Subject terms: system of systems; PSL; Physical Science Laboratory; MR1; Major Release 1; S4; System of Systems Survivability Simulation; JTRS-RR… …survivability, lethality, and vulnerability (SLV) issues in a mission context containing multiple systems. SLAD and the Physical Science Laboratory (PSL) of New…
Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a…
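The record is truncated, but the baseline effect such studies build on is classical: non-differential measurement error in an exposure attenuates a fitted regression slope by the reliability ratio sx²/(sx² + su²). A quick simulation of that baseline (all parameter values illustrative):

```python
import random

def attenuation_demo(n=20000, slope=2.0, sx=1.0, su=1.0, seed=0):
    """Regress an outcome on an error-contaminated exposure x* = x + u and
    compare the fitted slope against the classical attenuation prediction
    slope * sx^2 / (sx^2 + su^2)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0.0, sx)              # true exposure
        u = rng.gauss(0.0, su)              # classical measurement error
        y = slope * x + rng.gauss(0.0, 0.5) # outcome with its own noise
        xs.append(x + u)                    # only the mismeasured exposure is seen
        ys.append(y)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    var = sum((a - mx) ** 2 for a in xs) / n
    return cov / var  # OLS slope on the mismeasured exposure
```

With sx = su = 1 the true slope of 2.0 is attenuated toward 1.0; in copollutant models, correlated errors across exposures can additionally transfer effect estimates between pollutants, which is what the simulation study above quantifies.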
NASA Astrophysics Data System (ADS)
Reeve, Samuel Temple; Strachan, Alejandro
2017-04-01
We use functional (Fréchet) derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to their parameters, as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard-Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high-pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
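A minimal analogue of this first-order idea (ours, not the authors' Fréchet-derivative machinery) can be shown on a discrete state space: predict a canonical average under a perturbed potential from averages taken under the unperturbed one, via <A>_pert ≈ <A>_0 - beta*(<A dE>_0 - <A>_0 <dE>_0), with no resampling of the perturbed system:

```python
import math

def boltzmann_average(energies, quantity, beta=1.0):
    """Exact canonical average of a per-state quantity over a discrete state space."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return sum(w * q for w, q in zip(weights, quantity)) / z

def first_order_prediction(e0, de, quantity, beta=1.0):
    """First-order perturbation estimate of <A> under the potential e0 + de,
    using only averages taken under e0:
        <A>_pert ~ <A>_0 - beta * (<A*dE>_0 - <A>_0 * <dE>_0)."""
    a0 = boltzmann_average(e0, quantity, beta)
    ade = boltzmann_average(e0, [a * d for a, d in zip(quantity, de)], beta)
    dmean = boltzmann_average(e0, de, beta)
    return a0 - beta * (ade - a0 * dmean)
```

For a harmonic "potential" e0 = x² perturbed by a small quartic term, the first-order prediction lands much closer to the exact perturbed average than the unperturbed value does, mirroring the paper's condition that the change be describable to first order.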
Pugh, Thomas J; Amos, Richard A; John Baptiste, Sandra; Choi, Seungtaek; Nhu Nguyen, Quyhn; Ronald Zhu, X; Palmer, Matthew B; Lee, Andrew K
2013-01-01
To evaluate the dosimetric consequences of rotational and translational alignment errors in patients receiving intensity-modulated proton therapy with multifield optimization (MFO-IMPT) for prostate cancer. Ten control patients with localized prostate cancer underwent treatment planning for MFO-IMPT. Rotational and translational errors were simulated along each of 3 axes: anterior-posterior (A-P), superior-inferior (S-I), and left-right. Clinical target-volume (CTV) coverage remained high with all alignment errors simulated. Perturbations in rectum and bladder doses were minimal for rotational errors and larger for translational errors: rectum V45 and V70 increased most with A-P misalignment, whereas bladder V45 and V70 changed most with S-I misalignment. The bladder and rectum V45 and V70 remained acceptable even with extreme alignment errors. Even with S-I and A-P translational errors of up to 5 mm, the dosimetric profile of MFO-IMPT remained favorable. MFO-IMPT for localized prostate cancer results in robust coverage of the CTV without clinically meaningful dose perturbations to normal tissue despite extreme rotational and translational alignment errors.
NASA Technical Reports Server (NTRS)
Sylvester, W. B.
1984-01-01
A series of SEASAT repeat orbits over a sequence of best low-center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the +/- 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error suggests that by utilizing the +/- 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect which is manifested as a result of the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács model) and one based on parabolic (the more recently presented Plósz model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer
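A toy version of the convection-dispersion idea (not the Plósz model itself; constant settling velocity, constant dispersion, a closed column, and explicit time stepping are our simplifications) shows the flux-form discretisation that keeps mass conserved and the solution bounded:

```python
def settle_step(x, vs, disp, dz, dt):
    """One explicit step of a 1-D convection-dispersion settler column:
    first-order upwind for the downward settling flux vs*X, central
    differences for the dispersive flux, zero-flux boundaries (closed
    column). Written in flux form so total mass is conserved exactly."""
    n = len(x)
    flux = [0.0] * (n + 1)                        # faces 0..n; boundaries stay 0
    for i in range(1, n):                         # interior faces only
        flux[i] = vs * x[i - 1]                   # upwind: carried by the cell above
        flux[i] -= disp * (x[i] - x[i - 1]) / dz  # dispersive flux down the gradient
    return [x[i] - dt / dz * (flux[i + 1] - flux[i]) for i in range(n)]
```

Starting from a uniform profile, solids drain out of the top cells and pile up against the closed bottom in a dispersion-limited boundary layer, a crude stand-in for a sludge blanket; the explicit step is stable here because dt satisfies both the convective and dispersive CFL limits.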
Treatment of numerical overflow in simulating error performance of free-space optical communication
NASA Astrophysics Data System (ADS)
Li, Fei; Hou, Zaihong; Wu, Yi
2012-11-01
The gamma-gamma distribution model is widely used in numerical simulations of free-space optical communication systems. The simulations are often interrupted by numerical overflow exceptions due to excessively large parameters. Based on former research, two modified models are presented using mathematical calculation software and computer programs. By means of substitution and recurrence, factors of the original model are transformed into corresponding logarithmic formats, and potential overflow in the calculation is eliminated. Numerical verification demonstrates the practicability and accuracy of the modified models, and their advantages and disadvantages are listed; the proper model should be selected according to practical conditions. The two models are also applicable to other numerical simulations based on the gamma-gamma distribution, such as the outage probability and mean fade time of free-space optical communication.
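The paper's two models are not reproduced in the record, but the underlying trick is standard: evaluate the gamma-function factors of the gamma-gamma pdf through log-gamma so that large alpha and beta never overflow (the Bessel-function factor of the full pdf is deliberately omitted in this sketch):

```python
import math

def log_gg_coeff(alpha, beta):
    """log of the gamma-gamma pdf prefactor
        2 * (alpha*beta)^((alpha+beta)/2) / (Gamma(alpha) * Gamma(beta)),
    computed entirely in the log domain with lgamma, so it stays finite
    even where the direct product overflows a double."""
    return (math.log(2.0)
            + 0.5 * (alpha + beta) * math.log(alpha * beta)
            - math.lgamma(alpha) - math.lgamma(beta))
```

For alpha = beta = 200 (strong turbulence regimes can push parameters this high), direct evaluation overflows, while the log-domain value is an ordinary float that can be combined with the other pdf factors before a single final exponentiation.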
NASA Technical Reports Server (NTRS)
Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.
2004-01-01
One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function, and in the MvOI salinity, zonal and meridional velocities, as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme, two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the
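For a single observation the OI analysis step reduces to scalar algebra, which makes the univariate-vs-multivariate distinction easy to see: with cross-covariances present, a temperature observation also corrects salinity and velocity; zeroing the off-diagonal terms recovers the UOI behaviour. A sketch with illustrative numbers (not the paper's covariances):

```python
def oi_update(state, cov, h_index, obs, obs_var):
    """Single-observation optimal-interpolation update.
    state:   background vector, e.g. [T, S, u, v]
    cov:     background error covariance (rows/cols in the same order)
    h_index: which component is observed
    obs, obs_var: the observation and its error variance
    Returns the analysis x_a = x_b + K*(y - x_b[h]), where the gain for
    component i is cov[i][h] / (cov[h][h] + obs_var)."""
    innov = obs - state[h_index]
    denom = cov[h_index][h_index] + obs_var
    gain = [cov[i][h_index] / denom for i in range(len(state))]
    return [x + k * innov for x, k in zip(state, gain)]
```

With a T-S cross-covariance of 0.5, a 1-degree warm innovation drags salinity along with it; with the univariate (diagonal) covariance, salinity is untouched — which is exactly the behaviour earlier studies found detrimental.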
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
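A minimal interval class shows the idea (addition, subtraction and multiplication only; note the well-known conservatism when the same variable appears more than once in an expression, since each occurrence is treated as independent):

```python
class Interval:
    """Minimal interval arithmetic: the result of each operation is
    guaranteed to enclose every value obtainable by picking points from
    the operand intervals."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def width(self):
        return self.hi - self.lo

# measurements 3.0 +/- 0.1 and 2.0 +/- 0.05: guaranteed bounds on x*y - y
x = Interval(2.9, 3.1)
y = Interval(1.95, 2.05)
z = x * y - y
```

Unlike first-order propagation formulas, the enclosure [z.lo, z.hi] is rigorous for arbitrarily complicated formulas; the price is that repeated occurrences of y widen the bound beyond the true range, a conservatism that dedicated packages such as INTLAB mitigate.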
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.; Hewson, John C.; Kumar, Pritvi Raj; Ling, Julia; Najm, Habib N.; Ruiz, Anthony; Safta, Cosmin; Sargsyan, Khachik; Stewart, Alessia; Wagner, Gregory
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
A Priori Error-Controlled Simulation of Electromagnetic Phenomena for HPC
2013-09-12
• MEEP from MIT, an open-source FDTD code
• cgmx, part of the "Overture" suite of simulation codes from LLNL - high-order finite differences, second-order…PDEs
• CLAWPACK, a finite-difference suite of solvers from U. Washington
Students from SMU focus on an FDTD implementation of CRBC in the Yee scheme for a…
Monte Carlo analysis: error of extrapolated thermal conductivity from molecular dynamics simulations
Liu, Xiang-Yang; Andersson, Anders David
2016-11-07
In this short report, we give an analysis of the extrapolated thermal conductivity of UO2 from earlier molecular dynamics (MD) simulations [1]. Because almost all material properties, e.g. fission gas release, are functions of temperature, the fuel thermal conductivity is the most important parameter from a model-sensitivity perspective [2]. Thus, it is useful to perform such an analysis.
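A generic sketch of such a Monte Carlo error analysis (ours; an illustrative linear model and noise level, not the UO2 data of [1]): refit synthetic noisy data many times and examine the spread of the extrapolated value, which grows well beyond the spread at interpolated temperatures:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def extrapolation_spread(temps, true_a, true_b, noise, t_eval,
                         n_trials=2000, seed=0):
    """Monte Carlo spread of a linear fit evaluated at t_eval when each
    simulated data point carries independent Gaussian noise of std `noise`.
    Returns (mean prediction, std of predictions)."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trials):
        ys = [true_a + true_b * t + rng.gauss(0.0, noise) for t in temps]
        a, b = fit_line(temps, ys)
        preds.append(a + b * t_eval)
    mean = sum(preds) / n_trials
    std = (sum((p - mean) ** 2 for p in preds) / n_trials) ** 0.5
    return mean, std
```

With MD-like data at 300-800 K extrapolated to 1500 K, the leverage factor (t - t_mean)²/S_tt inflates the uncertainty several-fold relative to an in-range temperature, which is the essential point of quantifying extrapolated-conductivity error.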
Solovchuk, Maxim; Sheu, Tony W H; Thiriet, Marc
2013-11-01
This study investigates the influence of blood flow on temperature distribution during high-intensity focused ultrasound (HIFU) ablation of liver tumors. A three-dimensional acoustic-thermal-hydrodynamic coupling model is developed to compute the temperature field in the hepatic cancerous region. The model is based on the nonlinear Westervelt equation and bioheat equations for the perfused tissue and blood flow domains. The nonlinear Navier-Stokes equations are employed to describe the flow in large blood vessels. The effect of acoustic streaming is also taken into account in the present HIFU simulation study. A simulation of the Westervelt equation requires a prohibitively large amount of computer resources. Therefore a sixth-order accurate acoustic scheme on a three-point stencil was developed for effectively solving the nonlinear wave equation. Results show that a focused ultrasound beam with a peak intensity of 2470 W/cm(2) can induce acoustic streaming velocities up to 75 cm/s in a vessel with a diameter of 3 mm. The predicted temperature difference between the cases with and without the acoustic streaming effect is 13.5 °C, or 81%, on the blood vessel wall for the vein. Tumor necrosis was studied in a region close to major vessels. The theoretical feasibility to safely necrotize the tumors close to major hepatic arteries and veins was shown.
Direct simulations of outdoor blast wave propagation from source to receiver
NASA Astrophysics Data System (ADS)
Nguyen-Dinh, M.; Lardjane, N.; Duchenne, C.; Gainville, O.
2017-02-01
Outdoor blast waves generated by impulsive sources are deeply affected by numerous physical conditions such as source shape or height of burst in the near field, as well as topography, ground nature, or atmospheric conditions at larger distances. Application of classical linear acoustic methods may result in poor estimates of peak overpressures at intermediate ranges in the presence of these conditions. Here, we show, for the first time, that converged direct fully nonlinear simulations can be produced at a reasonable CPU cost in two-dimensional axisymmetric geometry from source location to more than 500 m/kg^{1/3} . The numerical procedure is based on a high-order finite-volume method with adaptive mesh refinement for solving the nonlinear Euler equations with a detonation model. It is applied to a real outdoor pyrotechnic site. A digital terrain model is built, micro-meteorological conditions are included through an effective sound speed, and a ground roughness model is proposed in order to account for the effects of vegetation and unresolved scales. Two-dimensional axisymmetric simulations are performed for several azimuths, and a comparison is made with experimental pressure signals recorded at scaled distances from 36 to 504 m/kg^{1/3} . The relative importance of the main physical effects is discussed.
Simulation of Cosmic Ray Acceleration, Propagation and Interaction in SNR Environment
NASA Astrophysics Data System (ADS)
Lee, S. H.; Kamae, T.; Ellison, D. C.
2007-07-01
Recent studies of young supernova remnants (SNRs) with Chandra, XMM, Suzaku and HESS have revealed complex morphologies and spectral features of the emission sites. The critical question of the relative importance of the two competing gamma-ray emission mechanisms in SNRs, inverse-Compton scattering by high-energy electrons and pion production by energetic protons, may be resolved by GLAST-LAT. To keep pace with the improved observations, we are developing a 3D model of particle acceleration, diffusion, and interaction in a SNR where broad-band emission from radio to multi-TeV energies, produced by shock-accelerated electrons and ions, can be simulated for a given topology of shock fronts, magnetic field, and ISM densities. The 3D model takes as input the particle spectra predicted by a hydrodynamic simulation of SNR evolution where nonlinear diffusive shock acceleration is coupled to the remnant dynamics (e.g., Ellison, Decourchelle & Ballet; Ellison & Cassam-Chenai; Ellison, Berezhko & Baring). We will present preliminary models of the Galactic Ridge SNR RX J1713-3946 for selected choices of SNR parameters, magnetic field topology, and ISM density distributions. When constrained by broad-band observations, our models should predict the extent of coupling between spectral shape and morphology and provide direct information on the acceleration efficiency of cosmic-ray electrons and ions in SNRs.
Electromagnetic Simulations of Ground-Penetrating Radar Propagation near Lunar Pits and Lava Tubes
NASA Technical Reports Server (NTRS)
Zimmerman, M. I.; Carter, L. M.; Farrell, W. M.; Bleacher, J. E.; Petro, N. E.
2013-01-01
Placing an Orion capsule at the Earth-Moon L2 point (EML2) would potentially enable telerobotic operation of a rover on the lunar surface. The Human Exploration Virtual Institute (HEVI) is proposing that rover operations be carried out near one of the recently discovered lunar pits, which may provide radiation shielding for long-duration human stays as well as a cross-disciplinary, science-rich target for nearer-term telerobotic exploration. Ground penetrating radar (GPR) instrumentation included onboard a rover has the potential to reveal many details of underground geologic structures near a pit, as well as characteristics of the pit itself. In the present work we employ the full-wave electromagnetic code MEEP to simulate such GPR reflections from a lunar pit and other subsurface features including lava tubes. These simulations will feed forward to mission concepts requiring knowledge of where to hide from harmful radiation and other environmental hazards such as plasma charging and extreme diurnal temperatures.
Embedded wavelet video coding with error concealment
NASA Astrophysics Data System (ADS)
Chang, Pao-Chi; Chen, Hsiao-Ching; Lu, Ta-Te
2000-04-01
We present an error-concealed embedded wavelet (ECEW) video coding system for transmission over the Internet or wireless networks. This system consists of two types of frames: intra (I) frames and inter, or predicted (P), frames. Inter frames are constructed from the residual frames formed by variable block-size multiresolution motion estimation (MRME). Motion vectors are compressed by arithmetic coding. The image data of intra frames and residual frames are coded by error-resilient embedded zerotree wavelet (ER-EZW) coding. The ER-EZW coding partitions the wavelet coefficients into several groups, and each group is coded independently; therefore, the error propagation resulting from a single error is confined to one group, whereas in plain EZW coding any single error may render the bitstream totally undecodable. To further reduce the error damage, we use error concealment at the decoding end. In intra frames, erroneous wavelet coefficients are replaced by neighbors. In inter frames, erroneous blocks of wavelet coefficients are replaced by data from the previous frame. Simulations show that the performance of ECEW is superior to that of ECEW without error concealment by approximately 7 to 8 dB at an error rate of 10^-3 in intra frames, and the improvement is still approximately 2 to 3 dB at a higher error rate of 10^-2 in inter frames.
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
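A reduced version of the simulation idea (our simplifications, not the paper's model: a single polygon, Gaussian coordinate error made spatially dependent along the boundary through a simple AR(1) scheme) illustrates why spatial dependence matters — correlated displacements shift whole boundary stretches together and generate far less area error than independent vertex noise:

```python
import math
import random

def shoelace_area(poly):
    """Polygon area via the shoelace formula (vertices in order)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def overlay_area_error(poly, sigma, rho=0.9, n_trials=2000, seed=0):
    """Monte Carlo of polygon area error when boundary coordinates carry
    Gaussian error of std sigma, correlated (rho) between successive
    vertices via an AR(1) chain along the boundary.
    Returns (mean area error, RMS area error)."""
    rng = random.Random(seed)
    base = shoelace_area(poly)
    errs = []
    for _ in range(n_trials):
        dx = dy = 0.0
        perturbed = []
        for (x, y) in poly:
            # AR(1): neighbouring vertex displacements are correlated
            dx = rho * dx + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, sigma)
            dy = rho * dy + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, sigma)
            perturbed.append((x + dx, y + dy))
        errs.append(shoelace_area(perturbed) - base)
    mean = sum(errs) / n_trials
    rmse = (sum(e * e for e in errs) / n_trials) ** 0.5
    return mean, rmse
```

Raising rho toward 1 drives the RMS area error toward zero (a rigid shift changes no area), which is the mechanism behind the paper's finding that spatially dependent errors, and real between-layer differences, generate less overlay area error than independent noise would suggest.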
NASA Astrophysics Data System (ADS)
Petrov, P.; Newman, G. A.
2011-12-01
averaging elastic coefficients and three averaging densities are necessary to describe the heterogeneous medium with VTI anisotropy. The resulting system is solved with iterative Krylov methods. The developed method will be incorporated in an inversion scheme for joint seismic-electromagnetic imaging. References: Brown, B.M., M. Jais, I.W. Knowles, 2005, A variational approach to an elastic inverse problem: Inverse Problems, 21, 1953-1973. Commer, M., G. Newman, 2008, New advances in three-dimensional controlled-source electromagnetic inversion: Geophysical Journal International, 172, 513-535. Newman, G.A., M. Commer and J.J. Carazzone, 2010, Imaging CSEM data in the presence of electrical anisotropy: Geophysics, 75, 51-61. Petrov, P.V., G.A. Newman, 2010, Using 3D Simulation of Elastic Wave Propagation in Laplace Domain for Electromagnetic-Seismic Inverse Modeling, Abstract T21A-2140 presented at 2010 Fall Meeting, AGU, San Francisco, Calif., 13-17 Dec. Shin, C., W. Ha, 2008, A comparison between the behavior of objective functions for waveform inversion in the frequency and Laplace domains: Geophysics, 73, 119-133. Shin, C., Y.H. Cha, 2008, Waveform inversion in the Laplace domain: Geophysical Journal International, 173, 922-931.
NASA Astrophysics Data System (ADS)
Jain, S.; Mo, G.; Qiao, L.
2017-02-01
Reactive molecular dynamics simulations were conducted to study the flame speed enhancement phenomenon of a solid mono-propellant, Pentaerythritol Tetranitrate (PETN), when coupled to highly conductive multi-walled carbon nanotubes (MWCNTs). The simulations were based on the first-principles derived reactive force field, ReaxFF, which includes both the physical changes such as thermal transport and the chemical changes such as bond breaking and forming. An annular deposition of a PETN layer around the MWCNTs was considered. The thickness of the PETN layer and the diameter of the MWCNT were varied to understand the effect of the MWCNT loading ratio on the flame propagation. Flame speed enhancements up to 3 times the bulk value were observed. An optimal MWCNT loading ratio was determined. The enhancement was attributed to the layering of the PETN molecules around the MWCNT, which increased the heat transport among the PETN molecules near the MWCNT surface, thus causing the flame to travel faster. Furthermore, a stronger ignition source was required for the MWCNT-PETN complex because of the higher thermal transport among the PETN molecules along the MWCNT, which makes the ignition energy dissipate more quickly. Lastly, the MWCNT remained unburned during the PETN combustion process.
NASA Astrophysics Data System (ADS)
Dziurzyński, Wacław; Krach, Andrzej; Pałka, Teresa
2014-03-01
This paper presents the results of investigations aimed at further identifying the phenomena that occur in abandoned workings in connection with the flow of an air-gas mixture (methane, carbon dioxide, nitrogen, oxygen and carbon oxidation products), taking into consideration the impact of supplied mineral substances on the self-heating of the coal left in goaves. A known and successfully applied method of fire prevention in abandoned workings is the technology of filling the goaf with an ash-air mixture, which also raises the issue of using that mixture effectively. The digital simulation methods being developed for the process discussed here are a good complement to that technology. The developed mathematical model, describing the additional sealing of the gob with wet slurry supplied through three pipelines, is based on the balance between the volume of the supplied mixture and the volume contained in the body created in the goaves. The shape of that body was assessed on the basis of observations reported in the literature and the results of model investigations. The calculation examples carried out for a longwall area and its goaf ventilated with the "U" system allow us to state that the introduced modification of the mathematical model, describing the flow of the mixture of air, gases and wet slurry with consideration of the coal-burning process in the fire source area, was verified positively. The digital prognostic simulations have confirmed the vital impact of the wet slurry supplied into the goaf on the coal-burning processes, as well as the change of the rate and volume flow rate of the air mixture in the goaf. It should also be noted that such elements as the location of the slurry supply relative to the longwall inclination or the fire source area are of great importance for the effectiveness of the fire prevention used. The development of computer/ digital
Statistical numerical simulation of polarized terahertz radiation propagation in a cloud layer
NASA Astrophysics Data System (ADS)
Kablukova, E. G.; Kargin, B. A.; Lisenko, A. A.
2015-11-01
The results of numerical simulation of polarization characteristics of terahertz signals from a ground-based remote sensing system in stratus clouds for various models of liquid-droplet clouds are compared. Models of the scattering medium take into account the vertical stratification of the water vapor concentration in the atmosphere. The model of droplet size distribution includes droplets larger than 20 μm in radius. They are referred to as large droplets, while droplets with radius 1
FRANC2D: A two-dimensional crack propagation simulator. Version 2.7: User's guide
NASA Technical Reports Server (NTRS)
Wawrzynek, Paul; Ingraffea, Anthony
1994-01-01
FRANC2D (FRacture ANalysis Code, 2 Dimensions) is a menu-driven, interactive finite element computer code that performs fracture mechanics analyses of 2-D structures. The code has an automatic mesh generator for triangular and quadrilateral elements. FRANC2D calculates the stress intensity factor using linear elastic fracture mechanics and evaluates crack extension using several methods that may be selected by the user. The code features a mesh refinement and adaptive mesh generation capability that is applied automatically according to the predicted crack extension direction and length. The code also has unique features that permit the analysis of layered structures with load transfer through simulated mechanical fasteners or bonded joints. The code was written for UNIX workstations with X-Windows graphics and may be executed on the following computers: DEC DECstation 3000 and 5000 series, IBM RS/6000 series, Hewlett-Packard 9000/700 series, Sun SPARCstations, and most Silicon Graphics models.
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.
2016-03-16
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.
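The microstructure idealization described above can be sketched numerically. The snippet below builds a Voronoi tessellation on a voxel grid by assigning each voxel to its nearest randomly placed seed (one seed per grain); note this is only an illustration under stated assumptions, since the actual study uses a randomly close-packed seeding and tens of thousands of grains, while the grain count, grid size, and plain-random seeding here are placeholders.

```python
import numpy as np

# Assign each voxel of the unit cube to its nearest seed point;
# the resulting label field partitions the grid into polyhedral
# Voronoi cells, each representing one grain.
rng = np.random.default_rng(7)
n_grains, n = 50, 32                       # number of grains, voxels per edge
seeds = rng.random((n_grains, 3))          # grain centers in the unit cube
x = (np.arange(n) + 0.5) / n               # voxel-center coordinates
vox = np.stack(np.meshgrid(x, x, x, indexing="ij"), axis=-1).reshape(-1, 3)
dist2 = ((vox[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=-1)
grain_id = dist2.argmin(axis=1).reshape(n, n, n)
print(grain_id.shape, np.unique(grain_id).size)
```

A crystal-plasticity DNS would then assign each grain a random lattice orientation and use `grain_id` to map constitutive state onto the mesh.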
Simulations in evolution. II. Relative fitness and the propagation of mutants.
Testa, Bernard; Bojarski, Andrzej J
2009-03-01
In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms which propel biological evolution. Our previous article presented a histogram model [1] consisting of populations of individuals whose numbers changed under the influence of variation and/or fitness, the total population remaining constant. Individuals are classified into bins, and the content of each bin is calculated generation after generation by an Excel spreadsheet. Here, we apply the histogram model to a stable population with fitness F(1)=1.00 in which one or two fitter mutants emerge. In a first scenario, a single mutant whose fitness was greater than 1.00 emerged in the population. The simulations ended when the original population was reduced to a single individual. The histogram model was validated by the excellent agreement between its predictions and those of a classical continuous function (Eqn. 1), which predicts the number of generations needed for a favorable mutation to spread throughout a population. But in contrast to Eqn. 1, our histogram model is adaptable to more complex scenarios, as demonstrated here. In the second and third scenarios, the original population was present at time zero together with two mutants which differed from the original population by two higher and distinct fitness values. In the fourth scenario, the large original population was present at time zero together with one fitter mutant. After a number of generations, when the mutant offspring had multiplied, a second mutant was introduced whose fitness was even greater. The histogram model also allows Shannon entropy (SE) to be monitored continuously as the information content of the total population decreases or increases. The results of these simulations illustrate, in a graphically didactic manner, the influence of natural selection, operating through relative fitness, on the emergence and dominance of a fitter mutant.
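The constant-population selection step described above can be sketched in a few lines. This is a minimal reconstruction under assumed rules (each bin grows in proportion to count times fitness, then is renormalized to a fixed total), not the authors' spreadsheet; the fitness values and population size are illustrative.

```python
import numpy as np

def step(counts, fitness):
    """One generation of selection at constant total population size."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    weighted = counts * np.asarray(fitness, dtype=float)
    return weighted / weighted.sum() * total

def shannon_entropy(counts):
    """Shannon entropy (bits) of the bin frequencies."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

# Scenario 1 analogue: one fitter mutant (F = 1.10) in a population of 10,000;
# iterate until the original type is reduced to a single individual.
counts, fitness = np.array([9999.0, 1.0]), np.array([1.00, 1.10])
generations = 0
while counts[0] > 1.0:
    counts = step(counts, fitness)
    generations += 1
print(generations, round(shannon_entropy(counts), 3))
```

Tracking `shannon_entropy` each generation reproduces the kind of information-content monitoring the abstract describes: entropy rises while the two types coexist and falls back as the fitter mutant dominates.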
Wang, S.; Chen, Z. Y.; Wang, X. H.; Li, D.; Yang, A. J.; Liu, D. X.; Rong, M. Z.; Chen, H. L.; Kong, M. G.
2015-11-28
Cold atmospheric-pressure plasmas have the potential to be used for endoscope sterilization. In this study, a long quartz tube was used as the simulated endoscope channel, and an array of electrodes was wrapped one by one along the tube. Plasmas were generated in the inner channel of the tube, and their propagation characteristics in He+O2 feedstock gases were studied as a function of the oxygen concentration. It is found that each of the plasmas originates at the edge of an instantaneous cathode and then propagates bidirectionally. Interestingly, a plasma head with bright spots is formed in the hollow instantaneous cathode and moves towards its center part, and a plasma tail expands through the electrode gap and then forms a swallow tail in the instantaneous anode. The plasmas are in good axisymmetry when [O2] ≤ 0.3%, but not for [O2] ≥ 1%, and even behave in a stochastic manner when [O2] = 3%. The antibacterial agents are charged species and reactive oxygen species, so their wall fluxes represent the "plasma dosage" for the sterilization. Such fluxes mainly act on the inner wall in the hollow electrode rather than that in the electrode gap, and they reach maximum efficiency when the oxygen concentration is around 0.3%. It is suggested that one can reduce the electrode gap and enlarge the electrode width to achieve a more homogeneous and efficient antibacterial effect, which has benefits for sterilization applications.
NASA Astrophysics Data System (ADS)
Kim, Jihoon; Moridis, George J.
2013-10-01
We developed a hydraulic fracturing simulator, namely the T+M simulator, by coupling a flow simulator to a geomechanics code. Modeling of the vertical fracture development involves continuous updating of the boundary conditions and of the data connectivity, based on the finite element method for geomechanics. The T+M simulator can model the initial fracture development during hydraulic fracturing operations, after which the domain description changes from a single continuum to double or multiple continua in order to rigorously model both flow and geomechanics for fracture-rock matrix systems. The T+H simulator provides two-way coupling between fluid-heat flow and geomechanics, accounting for thermo-poro-mechanics, treats nonlinear permeability and geomechanical moduli explicitly, and dynamically tracks changes in the fracture(s) and in the pore volume. We also fully account for leak-off in all directions during hydraulic fracturing. We first test the T+M simulator, matching numerical solutions with the analytical solutions for poromechanical effects, static fractures, and fracture propagation. Then, numerical simulations of various cases of planar fracture propagation show that shear failure can limit the vertical propagation of tensile fractures, because of leak-off into the reservoirs. Slow injection causes more leak-off than fast injection when the same amount of fluid is injected. Changes in initial total stress and the contributions of shear effective stress to tensile failure can also affect the formation of the fractured areas, and the geomechanical responses remain well-posed.
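The two-way flow-geomechanics coupling described above can be illustrated with a deliberately tiny staggered iteration. This is a generic single-cell sketch, not the T+M algorithm: it alternates a linear-elastic mechanics update with a single-cell mass-balance update until the pressure converges, and all material parameters are illustrative placeholders.

```python
# Single-cell linear poroelasticity, solved by staggered iteration:
#   flow:      p   = p0 + M * (zeta - alpha * eps)
#   mechanics: eps = (alpha * p - sigma) / K
# The iteration contracts when M * alpha**2 / K < 1 (true for these numbers).
K, alpha, M = 10e9, 0.8, 5e9           # drained bulk modulus, Biot coefficient, Biot modulus (Pa)
p0, zeta, sigma = 10e6, 1e-3, 20e6     # initial pressure (Pa), injected fluid content, total stress (Pa)

p, eps = p0, 0.0
for it in range(200):
    eps_new = (alpha * p - sigma) / K          # mechanics step at current pressure
    p_new = p0 + M * (zeta - alpha * eps_new)  # flow step at updated strain
    if abs(p_new - p) < 1.0:                   # converged to within 1 Pa
        break
    p, eps = p_new, eps_new
print(it, round(p / 1e6, 3))                   # iterations used, pressure in MPa
```

Production simulators replace both one-line updates with full PDE solves and exchange fields rather than scalars, but the alternating structure and the convergence check are the same idea.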
Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.
1999-05-06
Understanding of single- and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas), where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.
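The core idea, imposing measurement error on an aperture field and watching a flow-relevant quantity respond, can be sketched in a few lines. This is an assumed, simplified setup, not the authors' experiment: a synthetic Gaussian aperture field, additive Gaussian measurement noise, and the local cubic law proxy b^3 as the flow-relevant summary.

```python
import numpy as np

# Synthetic "true" aperture field (mm), clipped to stay physically positive.
rng = np.random.default_rng(1)
b_true = 0.5 + 0.1 * rng.standard_normal((64, 64))
b_true = np.clip(b_true, 0.05, None)

# Add zero-mean measurement error of increasing standard deviation and
# compare the mean cubic-law transmissivity <b^3>: even unbiased aperture
# error biases b^3 upward (E[(b+e)^3] = b^3 + 3*b*sigma^2 for Gaussian e).
for sigma in (0.01, 0.05, 0.10):
    noise = sigma * rng.standard_normal(b_true.shape)
    b_meas = np.clip(b_true + noise, 0.05, None)
    rel = abs((b_meas**3).mean() - (b_true**3).mean()) / (b_true**3).mean()
    print(sigma, round(rel, 4))
```

The growing relative error with sigma mirrors the paper's point that downstream quantities (flow, and even more so transport) are increasingly sensitive to aperture measurement error.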
NASA Astrophysics Data System (ADS)
Themessl, M. J.; Gobiet, A.; Heinrich, G.; Regional and Local Climate Modeling and Analysis Research Group
2010-12-01
State-of-the-art regional climate models (RCMs) have shown their capability to reproduce mesoscale and even finer climate variability satisfactorily. However, considerable differences between model results and observational data remain, due to scale discrepancies and model errors. This limits the direct utilization of RCM results in climate change impact studies. Besides continuous climate model improvement, empirical-statistical post-processing approaches (model output statistics) offer an immediate pathway to mitigate these model problems and to provide better input data for climate change impact assessments. Among various statistical approaches, quantile mapping (QM) represents one powerful non-parametric technique for post-processing RCM outputs. In this study, results from a transient regional climate simulation (period: 1951 to 2050; general circulation model: HadCM3; emission scenario: A1B; RCM: CLM) with a horizontal grid spacing of 25 km are error-corrected for the whole of Europe based on the E-OBS European daily gridded observational dataset (http://ensembles-eu.org). Firstly, the performance of QM in correcting daily temperature and precipitation for long-term simulations is evaluated in a decadal cross-validation framework between 1961 and 2000, and the error characteristics are discussed. In the case of precipitation amount, a frequency adaptation tool is presented which deals with rare situations where the probability of non-precipitation days is lower in the observations than in the model. Secondly, the issue of generating new extremes in future scenarios is raised. For this purpose, the ERA-40 reanalysis-driven hindcast is used to ensure the best possible temporal correlation between observations and model output. The hindcast is split such that the independent validation period contains observed extremes outside the range of the calibration period. Two extrapolation schemes at the tails of the calibrated correction functions are tested and compared to the simple
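Non-parametric quantile mapping of the kind evaluated above can be sketched compactly: map each model value through the model's empirical CDF into the observed CDF via matched quantiles estimated over a calibration period. This is a generic textbook sketch, not the study's implementation; the gamma-distributed "observations" and "model output" below are synthetic stand-ins, and values outside the calibration range are simply clipped here, whereas the study tests dedicated tail-extrapolation schemes.

```python
import numpy as np

def quantile_map(model_cal, obs_cal, model_new, n_quantiles=100):
    """Correct model_new using empirical quantiles from a calibration period."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    mq = np.quantile(model_cal, q)   # model quantiles (calibration)
    oq = np.quantile(obs_cal, q)     # observed quantiles (calibration)
    # np.interp clips outside [mq[0], mq[-1]]; new extremes need extrapolation.
    return np.interp(model_new, mq, oq)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, 5000)      # synthetic "observed" daily precipitation
model = rng.gamma(2.0, 3.0, 5000)    # synthetic biased "model" output (too wet)
corrected = quantile_map(model, obs, model)
print(round(model.mean(), 2), round(corrected.mean(), 2), round(obs.mean(), 2))
```

After correction, the model's distribution (not just its mean) is pulled onto the observed one, which is exactly why QM handles wet-day frequency and variance biases that simple scaling misses.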
Drakaki, E; Makropoulou, M; Serafetinides, A A
2008-07-01
In dermatology, in vivo spectral fluorescence measurements of human skin can serve as a valuable supplement to standard non-invasive techniques for diagnosing various skin diseases. However, quantitative analysis of the fluorescence spectra is complicated by the fact that skin is a complex multi-layered and inhomogeneous organ, with varied optical properties and biophysical characteristics. In this work, we recorded, in vitro, the laser-induced fluorescence emission signals of healthy porcine skin, which is considered one of the most common animal models for investigations related to the medical diagnostics of human cutaneous tissues. Differences were observed in the form and intensity of the fluorescence signal of the porcine skin, which can be attributed to the different concentrations of the native fluorophores and the variable physical and biological conditions of the skin tissue. As the light transport in the tissue target directly influences the absorption and fluorescence emission signals, we performed Monte Carlo simulation of the light distribution in a five-layer model of human skin tissue with a pulsed ultraviolet laser beam.
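The Monte Carlo idea behind such light-distribution simulations can be shown in miniature. The sketch below ignores scattering entirely (pure Beer-Lambert absorption, sampled layer by layer via the memoryless exponential free path), so it only illustrates the sampling scheme, not a full photon-transport code like the one used in the study; the three layers and their absorption coefficients are placeholders, not the actual five-layer skin parameters.

```python
import math
import random

# (thickness_cm, mu_a in 1/cm) for each layer, from the surface inward.
LAYERS = [(0.001, 30.0), (0.02, 10.0), (0.18, 5.0)]

def absorb_layer(rng):
    """Sample a free path s = -ln(U)/mu_a in each layer in turn; return the
    index of the layer where the photon is absorbed, or None if transmitted."""
    for i, (d, mu_a) in enumerate(LAYERS):
        s = -math.log(rng.random()) / mu_a
        if s < d:
            return i
    return None

rng = random.Random(42)
tally = [0, 0, 0, 0]                 # per-layer absorption; last slot = transmitted
for _ in range(100_000):
    i = absorb_layer(rng)
    tally[i if i is not None else 3] += 1
print(tally)
```

A real tissue code adds scattering (sampled direction changes via a phase function), refractive-index mismatches at layer boundaries, and fluorescence re-emission, but the per-step exponential sampling is the same building block.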
Modeling and Numerical Simulation of Microwave Pulse Propagation in Air Breakdown Environment
NASA Technical Reports Server (NTRS)
Kuo, S. P.; Kim, J.
1991-01-01
Numerical simulation is used to investigate the extent of the electron density at a distant altitude location which can be generated by a high-power ground-transmitted microwave pulse. This is done by varying the power, width, shape, and carrier frequency of the pulse. The results show that once the breakdown threshold field is exceeded in the region below the desired altitude location, electron density starts to build up in that region through cascading breakdown. The generated plasma attenuates the pulse energy (tail erosion) and thus deteriorates the energy transmission to the destined altitude. The electron density saturates at a level limited by the pulse width and the tail erosion process. As the pulse continues to travel upward, though the breakdown threshold field of the background air decreases, the pulse energy (width) is reduced more severely by the tail erosion process. Thus, the electron density grows more quickly at the higher altitude, but saturates at a lower level. Consequently, the maximum electron density produced by a single pulse at 50 km altitude, for instance, is limited to a value below 10^6 cm^-3. Three different approaches are examined to determine if the ionization at the destined location can be improved: a repetitive pulse approach, a focused pulse approach, and two intersecting beams. Only the intersecting beam approach is found to be practical for generating the desired density level.
NASA Astrophysics Data System (ADS)
Fomin, Vladimir; Gusev, Anatoly; Diansky, Nikolay
2014-05-01
The numerical modelling of the Black Sea (BS) is performed using INMOM (Institute of Numerical Mathematics Ocean Model). The model is based on the primitive equations in a spherical s-coordinate system with a free-surface boundary condition. The numerical algorithm is based on the method of multicomponent splitting and has a flexible modular structure. Splitting with respect to physical processes and spatial coordinates is used. A computational method is proposed for the transport of polluting substances (PS) in the BS region adjacent to Greater Sochi. It is based on applying INMOM to the BS in two versions, M1 and M2. M1 has a uniform spatial resolution of ~4 km, while M2 has a non-uniform one with refinement to 50 m in the BS region near the Greater Sochi coast. M2 is used only during the periods of PS transport computation, for which the initial hydrothermodynamic conditions are taken from M1. Both versions reveal the complex nature of the BS circulation; however, M2 reproduces the eddy circulation more adequately, due to its higher horizontal resolution in the eastern part of the sea. Hence, it is suggested that simulating the BS eddy structure requires a model resolution of ~1.5 km, and that the major factor in the formation of the quasistationary Batumi anticyclonic gyre is the topographic features of this part of the sea. A computation of the PS distribution from the rivers Sochi, Host and Mzymta and from 18 deep-water sewage discharge pipes was performed for the high-water period from 01.04.2007 to 30.04.2007. It is shown that a significant contribution to the PS distribution from these point sources is made by mesoscale eddy formations, which generate a complicated 3-dimensional PS distribution.
NASA Astrophysics Data System (ADS)
Costantino, Lorenzo; Heinrich, Philippe; Mzé, Nahoudha; Hauchecorne, Alain
2016-04-01
In this work we perform numerical simulations of convective gravity waves (GWs), using the WRF (Weather Research and Forecasting) model. We first run an idealized, simplified and highly resolved simulation with the model top at 80 km. Below 60 km of altitude, a vertical grid spacing smaller than 1 km is supposed to reliably resolve the effects of GW breaking. An eastward linear wind shear interacts with the GW field generated by a single convective thunderstorm. After 70 min of integration time, averaging within a radius of 300 km from the storm centre, results show that wave breaking in the upper stratosphere is largely dominated by saturation effects, driving an average drag force up to -41 m s^-1 day^-1. In the lower stratosphere, mean wave drag is positive and equal to 4.4 m s^-1 day^-1. In a second step, realistic WRF simulations are compared with lidar measurements from the NDACC network (Network for the Detection of Atmospheric Composition Changes) of gravity wave potential energy (E_p) over OHP (Haute-Provence Observatory, southern France). Using a vertical grid spacing smaller than 1 km below 50 km of altitude, WRF seems to reliably reproduce the effect of GW dynamics and capture qualitative aspects of wave momentum and energy propagation a