Inversion of Robin coefficient by a spectral stochastic finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Bangti; Zou Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for steady-state heat conduction. The problem is formulated as an optimization problem, and mathematical properties relevant to its numerical computation are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. A nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
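As a minimal illustration of the polynomial chaos machinery (not the paper's spectral SFEM implementation), the sketch below computes Hermite chaos coefficients of a hypothetical lognormal Robin coefficient of a single standard normal variable by Gauss-Hermite quadrature; the coefficient function, truncation order, and quadrature size are all assumptions.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# probabilists' Gauss-Hermite nodes/weights; normalize to the N(0,1) density
x, w = hermegauss(40)
w = w / np.sqrt(2 * np.pi)

def gamma(xi):
    return np.exp(0.3 * xi)   # hypothetical random Robin coefficient

# c_n = E[gamma(xi) He_n(xi)] / n!, using orthogonality E[He_m He_n] = n! delta_mn
order = 6
coeffs = []
for n in range(order + 1):
    basis = np.zeros(n + 1); basis[n] = 1.0
    Hn = hermeval(x, basis)                      # evaluate He_n at the quadrature nodes
    coeffs.append(np.sum(w * gamma(x) * Hn) / math.factorial(n))

# reconstruct gamma at a test point from the truncated expansion
xi = 0.7
approx = sum(c * hermeval(xi, np.eye(order + 1)[n]) for n, c in enumerate(coeffs))
print(approx, gamma(xi))                         # the two values should nearly agree
```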
Stochastic inversion of cross-borehole radar data from metalliferous vein detection
NASA Astrophysics Data System (ADS)
Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui
2017-12-01
In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods such as least-squares inversion (LSQR) recover only indirect parameters (permittivity, resistivity, or velocity) with which to estimate the target structure; they cannot accurately reflect the geological properties of the vein media. To obtain the intrinsic geological parameters and the internal distribution, in this paper we build a metalliferous vein model based on stochastic effective-medium theory and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution estimates of the permittivity and velocity of the target body, allowing more accurate characterization of the anomaly distribution and the target's internal parameters. This provides a new line of research for evaluating the properties of complex target media.
Stochastic Gabor reflectivity and acoustic impedance inversion
NASA Astrophysics Data System (ADS)
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
To delineate subsurface lithology and estimate the petrophysical properties of a reservoir, acoustic impedance (AI), the product of seismic inversion, can be used. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal to obtain a reflectivity series and then to transform those reflections into AI. To carry out seismic inversion correctly it is important not to assume that the seismic signal is stationary, yet all stationary deconvolution methods are designed under that assumption. Amplitude compensation and phase correction are unavoidable if temporal resolution and interpretability are to be increased, and these are the pitfalls of stationary reflectivity inversion: although stationary methods attempt to estimate the reflectivity series, their incorrect assumptions mean the estimates are inaccurate, even if sometimes useful. Converting the reflectivity series to AI, merged with a low-frequency initial model, addresses this. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflectivity series to absolute AI using a bias from well logs. To this end, stochastic Gabor inversion in the time domain was used. The Gabor transform provides the signal's time-frequency analysis and estimates wavelet properties in different windows. Working with different time windows makes it possible to create a time-variant kernel matrix, which is used to remove wavelet effects from the seismic data; the result is a reflectivity series that does not rely on the stationarity assumption. The subsequent step converts those reflections to AI using well information. Synthetic and real data sets were used to demonstrate the method. The results highlight that the time cost of the inversion is negligible compared with general Gabor inversion in the frequency domain, and that the well-log bias helps the method estimate reliable AI. To assess the effect of random noise on deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio of 2 was used. The results highlight the inability of deterministic inversion to deal with a noisy data set even when using a large number of regularization parameters, whereas stochastic Gabor inversion, despite the low signal level, not only estimates the wavelet properties correctly but, because of the bias from well logs, also yields an inversion result very close to the real AI. Comparing the deterministic and the introduced inversion on a real data set shows that the low resolution of deterministic inversion, especially in the deeper parts of the seismic section, creates significant reliability problems for seismic prospects; this pitfall is avoided completely by stochastic Gabor inversion. The AI estimated by Gabor inversion in the time domain is much better and faster than that of general Gabor inversion in the frequency domain, owing to the extra windows the latter requires to analyze the time-frequency information and the temporal increment between windows. Stochastic Gabor inversion thus estimates trustworthy physical properties close to the real characteristics. Applied to a real data set, it made it possible to detect the direction of a volcanic intrusion and to delineate the lithology distribution along the fan. Comparing the inversion results highlights the efficiency of stochastic Gabor inversion in delineating lateral lithology changes, owing to the improved frequency content and zero phasing of the final inversion volume.
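As a rough illustration of the time-frequency machinery (a sketch, not the authors' code), the Gabor transform can be computed as a short-time Fourier transform with a Gaussian window; the synthetic non-stationary trace, sampling rate, and window parameters below are assumptions.

```python
import numpy as np
from scipy.signal import stft
from scipy.signal.windows import gaussian

fs = 500.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
# synthetic non-stationary trace: frequency and amplitude decay with time
trace = np.sin(2 * np.pi * (60 - 20 * t) * t) * np.exp(-0.5 * t)

# Gabor transform = STFT with a Gaussian analysis window
win = gaussian(64, std=8)
f, tau, G = stft(trace, fs=fs, window=win, nperseg=64, noverlap=48)

# |G[:, k]| is the local amplitude spectrum in window k; a time-variant
# wavelet estimated per window would populate the time-variant kernel matrix
print(G.shape)                               # (frequency bins, time windows)
```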
NASA Astrophysics Data System (ADS)
Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.
2017-07-01
This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, within the depth interval between the Top Sihapas and Top Pematang formations. The method used is stochastic inversion integrated with seismic multi-attribute analysis through a probabilistic neural network (PNN). Stochastic methods are used to predict the probability mapping of sandstone, with the impedance varied over 50 realizations to produce a robust probability estimate. Stochastic seismic inversion is more interpretive because it directly yields property values. Our experiment shows that the acoustic impedance (AI) from stochastic inversion captures a more diverse range of uncertainty, so the probability values will be close to the actual values. The resulting AI is then used as input to a multi-attribute analysis that predicts the gamma-ray, density, and porosity logs. A stepwise regression algorithm is applied to select the attributes used in the PNN process. The PNN method is chosen because it shows the best correlation among the neural network methods tested. Finally, we interpret the products of the multi-attribute analysis, in the form of pseudo-gamma-ray, density, and pseudo-porosity volumes, to delineate the reservoir distribution. Our interpretation shows that the structural trap is identified in the southeastern part of the study area, along the anticline.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
NASA Technical Reports Server (NTRS)
Bloxham, Jeremy
1987-01-01
The method of stochastic inversion is extended to the simultaneous inversion of both main field and secular variation. In the present method, the time dependency is represented by an expansion in Legendre polynomials, resulting in a simple diagonal form for the a priori covariance matrix. The efficient preconditioned Broyden-Fletcher-Goldfarb-Shanno algorithm is used to solve the large system of equations resulting from expansion of the field spatially to spherical harmonic degree 14 and temporally to degree 8. Application of the method to observatory data spanning the 1900-1980 period results in a data fit of better than 30 nT, while providing temporally and spatially smoothly varying models of the magnetic field at the core-mantle boundary.
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
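The key mechanism here is that many single-source data sets are collapsed into one randomly encoded "super shot" per gradient step. Below is a toy linear sketch of that mechanism only (matrices stand in for wave-equation solves; operator sizes, step size, and the Rademacher encoding are assumptions, not the WISE implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy linear "forward model": d_k = M_k @ c, one operator M_k per source
n, K = 50, 8                                       # model size, number of sources
Ms = [rng.normal(size=(30, n)) for _ in range(K)]
c_true = rng.normal(size=n)
data = [M @ c_true for M in Ms]                    # noiseless synthetic data

def wise_step(c, step=5e-4):
    w = rng.choice([-1.0, 1.0], size=K)            # random encoding vector
    M_enc = sum(wk * M for wk, M in zip(w, Ms))    # encoded "super shot" operator
    d_enc = sum(wk * d for wk, d in zip(w, data))  # identically encoded data
    r = M_enc @ c - d_enc                          # one simulation per iteration
    return c - step * (M_enc.T @ r)                # stochastic gradient update

c = np.zeros(n)
for _ in range(1000):
    c = wise_step(c)
print(np.linalg.norm(c - c_true))                  # error should shrink toward zero
```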
Ligon, D A; Gillespie, J B; Pellegrino, P
2000-08-20
The feasibility of using a generalized stochastic inversion methodology to estimate aerosol size distributions accurately by use of spectral extinction, backscatter data, or both is examined. The stochastic method used, inverse Monte Carlo (IMC), is verified with both simulated and experimental data from aerosols composed of spherical dielectrics with a known refractive index. Various levels of noise are superimposed on the data such that the effect of noise on the stability and results of inversion can be determined. Computational results show that the application of the IMC technique to inversion of spectral extinction or backscatter data or both can produce good estimates of aerosol size distributions. Specifically, for inversions for which both spectral extinction and backscatter data are used, the IMC technique was extremely accurate in determining particle size distributions well outside the wavelength range. Also, the IMC inversion results proved to be stable and accurate even when the data had significant noise, with a signal-to-noise ratio of 3.
NASA Astrophysics Data System (ADS)
Torres-Verdin, C.
2007-05-01
This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density) and lithotype (sand-shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of the inverted products is higher than that of deterministic inversion methods.
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
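As an illustrative sketch of the coupling only (toy data, a stand-in Gaussian log-posterior in place of the adjoint-computed gradient, and scikit-learn's KernelPCA assumed for the feature map):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1024))            # prior realizations of the random field

kpca = KernelPCA(n_components=10, kernel="rbf", fit_inverse_transform=True)
Z = kpca.fit_transform(X)                   # low-dimensional feature coordinates

def grad_log_post(z):
    return -z                               # stand-in for the adjoint-based gradient

def langevin_step(z, eps=0.1):
    # unadjusted Langevin update; a Metropolis correction would make the chain exact
    return z + 0.5 * eps**2 * grad_log_post(z) + eps * rng.normal(size=z.shape)

z = Z[0].copy()
for _ in range(1000):
    z = langevin_step(z)
field = kpca.inverse_transform(z[None, :])  # pre-image: back to the parameter field
```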
NASA Astrophysics Data System (ADS)
Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.
2014-04-01
In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty; thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, by those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under an uncertainty environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content of a stochastic inversion; in the deterministic approach the same object is referred to as the model resolution matrix (MRM; Menke, 1989). Analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix gives the so-called degrees of freedom for signal (DFS; stochastic) or degrees of freedom in retrieval (DFR; deterministic). The literature offers no physical or mathematical explanation of why the trace of this matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem: SST retrieval from GOES-13. The stochastic information content calculation is based on a linearity assumption; the validity of such mathematics in satellite inversion is questionable because the underlying radiative transfer is nonlinear and the inverse problem ill-conditioned. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C.D., 2000: Inverse Methods for Atmospheric Sounding: Theory and Practice. Singapore: World Scientific.
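In the linear Gaussian setting these quantities are concrete; a small numpy sketch with made-up matrices, following the standard Rodgers (2000) formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(8, 5))              # Jacobian: 8 channels, 5 state elements (made up)
Se_inv = np.linalg.inv(0.1 * np.eye(8))  # inverse measurement-error covariance
Sa_inv = np.linalg.inv(np.eye(5))        # inverse a priori covariance

# gain matrix G and averaging kernel A = G K
G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
A = G @ K
print(np.trace(A))                       # DFS: degrees of freedom for signal
```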
Teaching Tip: When a Matrix and Its Inverse Are Stochastic
ERIC Educational Resources Information Center
Ding, J.; Rhee, N. H.
2013-01-01
A stochastic matrix is a square matrix with nonnegative entries and row sums 1. The simplest example is a permutation matrix, whose rows permute the rows of an identity matrix. A permutation matrix and its inverse are both stochastic. We prove the converse, that is, if a matrix and its inverse are both stochastic, then it is a permutation matrix.
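A quick numerical illustration of the result, with toy matrices:

```python
import numpy as np

def is_stochastic(M, tol=1e-12):
    # nonnegative entries and row sums equal to 1
    return bool(np.all(M >= -tol) and np.allclose(M.sum(axis=1), 1.0))

P = np.eye(4)[[2, 0, 3, 1]]                                  # rows of I permuted
print(is_stochastic(P), is_stochastic(np.linalg.inv(P)))     # True True

S = np.array([[0.5, 0.5], [0.25, 0.75]])                     # stochastic, not a permutation
print(is_stochastic(S), is_stochastic(np.linalg.inv(S)))     # True False (inverse has negatives)
```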
Application of a stochastic inverse to the geophysical inverse problem
NASA Technical Reports Server (NTRS)
Jordan, T. H.; Minster, J. B.
1972-01-01
The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
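As a discrete sketch of the estimator usually associated with Franklin's stochastic inverse (toy operator and covariances assumed; this is not the paper's continuous formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 50))          # underdetermined: 20 data, 50 model unknowns
Cm = np.eye(50)                        # prior model covariance (assumed)
Cn = 0.01 * np.eye(20)                 # noise covariance (assumed)
d = G @ rng.normal(size=50) + 0.1 * rng.normal(size=20)

# stochastic inverse operator L = Cm G^T (G Cm G^T + Cn)^(-1)
L = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cn)
m_hat = L @ d                          # particular solution of the linear system
R = L @ G                              # resolution operator, cf. the tradeoff curve
```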
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
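A minimal sketch of a parametric likelihood approximation of this kind (a Gaussian fit to simulated summary statistics; `simulate` is a placeholder for one stochastic model run, not FORMIND itself):

```python
import numpy as np

def approx_log_likelihood(theta, observed_stats, simulate, n_sim=100):
    # run the stochastic model repeatedly at theta and fit independent normals
    sims = np.array([simulate(theta) for _ in range(n_sim)])
    mu = sims.mean(axis=0)
    var = sims.var(axis=0) + 1e-12            # guard against zero variance
    # log-density of the observed summary statistics under the fitted normals;
    # this value replaces an exact likelihood inside a conventional MCMC sampler
    return -0.5 * np.sum((observed_stats - mu) ** 2 / var + np.log(2 * np.pi * var))
```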
3D aquifer characterization using stochastic streamline calibration
NASA Astrophysics Data System (ADS)
Jang, Minchul
2007-03-01
In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as basic elements not only for describing fluid flow but also for identifying the permeability distribution. Based on the streamline-based inversion of Agarwal et al. [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North Sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks through which a streamline passes are adjusted by multiplication with a factor chosen to match the flow and transport properties of that streamline, which enables the inverse process to converge quickly. In addition, equipped with a stochastic module, the proposed technique calibrates the identified field in a stochastic manner while incorporating spatial information into the field; this prevents the inverse process from becoming stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which had not been achieved in the original work of Agarwal et al. owing to the large modifications along streamlines made to match production data only. The model constructed by stochastic streamline calibration forecast plume transport similar to that of a reference model, so the proposed approach can be expected to apply to the construction of aquifer models and the forecasting of the aquifer performances of interest.
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology, and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and the local conditional cumulative distributions; the corresponding probability field is generated by the fast Fourier transform moving average (FFT-MA) method. Optimization is then performed to match the seismic data via an improved local gradual deformation method, with two strategies proposed to suit seismic inversion. The first is to select and update local areas where the fit between synthetic and real seismic data is poor; the second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. Applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimates.
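A minimal 1-D sketch of the FFT moving average generator (grid size and Gaussian covariance model are assumptions): white noise is convolved, via the FFT, with a kernel whose self-convolution reproduces the target covariance, so redrawing only part of the noise deforms the field locally.

```python
import numpy as np

n = 256
lag = np.minimum(np.arange(n), n - np.arange(n))      # periodic lag distances
cov = np.exp(-(lag / 20.0) ** 2)                      # Gaussian covariance, range ~20 cells

g = np.fft.ifft(np.sqrt(np.fft.fft(cov))).real        # kernel g with g * g = cov
w = np.random.default_rng(0).normal(size=n)           # white noise
z = np.fft.ifft(np.fft.fft(g) * np.fft.fft(w)).real   # correlated Gaussian field

# local gradual deformation: redraw only a segment of w and regenerate z
```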
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
NASA Astrophysics Data System (ADS)
Marie, S.; Irving, J. D.; Looms, M. C.; Nielsen, L.; Holliger, K.
2011-12-01
Geophysical methods such as ground-penetrating radar (GPR) can provide valuable information on the hydrological properties of the vadose zone. In particular, there is evidence to suggest that the stochastic inversion of such data may allow for significant reductions in uncertainty regarding subsurface van-Genuchten-Mualem (VGM) parameters, which characterize unsaturated hydrodynamic behaviour as defined by the combination of the water retention and hydraulic conductivity functions. A significant challenge associated with the use of geophysical methods in a hydrological context is that they generally exhibit an indirect and/or weak sensitivity to the hydraulic parameters of interest. A novel and increasingly popular means of addressing this issue involves the acquisition of geophysical data in a time-lapse fashion while changes occur in the hydrological condition of the probed subsurface region. Another significant challenge when attempting to use geophysical data for the estimation of subsurface hydrological properties is the inherent non-linearity and non-uniqueness of the corresponding inverse problems. Stochastic inversion approaches have the advantage of providing a comprehensive exploration of the model space, which makes them ideally suited for addressing such issues. In this work, we present the stochastic inversion of time-lapse zero-offset-profile (ZOP) crosshole GPR traveltime data, collected during a forced infiltration experiment at the Arreneas field site in Denmark, in order to estimate subsurface VGM parameters and their corresponding uncertainties. We do this using a Bayesian Markov-chain-Monte-Carlo (MCMC) inversion approach. We find that the Bayesian-MCMC methodology indeed allows for a substantial refinement in the inferred posterior parameter distributions of the VGM parameters as compared to the corresponding priors. To further understand the potential impact on capturing the underlying hydrological behaviour, we also explore how the posterior VGM parameter distributions affect the hydrodynamic characteristics. In doing so, we find clear evidence that the approach pursued in this study allows for effective characterization of the hydrological behaviour of the probed subsurface region.
Seismic stochastic inversion identify river channel sand body
NASA Astrophysics Data System (ADS)
He, Z.
2015-12-01
Seismic inversion is regarded as one of the most important technologies in geophysics. Combining seismic inversion with the theory of stochastic simulation leads to the concept of seismic stochastic inversion, which can play a significant role in identifying river-channel sand bodies. Accurate sand-body description is a crucial input for assessing oilfield development and stimulation during the middle and later production periods, and a rational well-spacing density is an essential condition for efficient production. Based on the geological knowledge of a certain oilfield, seismic stochastic inversion is used to identify the river-channel sand bodies in the work area. In this paper we first subdivide the single channel bodies within the composite channel body; second, we map the distribution of the channel bodies to ascertain the direction of the rivers; we then analyze the superposition relationships among the sand bodies, especially the inter-well sand bodies. Finally, by analyzing inversion results obtained by first removing wells and then progressively infilling them, we determine the well-spacing density that yields the optimal inversion result. This provides effective guidance for oilfield stimulation.
NASA Astrophysics Data System (ADS)
Contreras, Arturo Javier
This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that the presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.
1987-09-01
The inverse transform method is used to obtain unit-mean exponential random variables, where V_j is the jth random number in a stream of uniform random numbers; the method is discussed in the simulation textbooks listed in the reference section of this thesis. The inverse transform method is used again to obtain the conditions for an interim event to occur and to induce the change in
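The inverse transform step referenced in this fragment is standard; a minimal sketch for unit-mean exponentials (variable names assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random(10000)          # stream of uniform random numbers on (0, 1)

# inverse transform method: solve F(X) = V for the CDF F(x) = 1 - exp(-x)
X = -np.log(1.0 - V)           # unit-mean exponential random variables
print(X.mean())                # approximately 1.0
```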
Point-source stochastic-method simulations of ground motions for the PEER NGA-East Project
Boore, David
2015-01-01
Ground motions for the PEER NGA-East project were simulated using a point-source stochastic method. The simulated motions are provided for distances between 0 and 1200 km, M from 4 to 8, and 25 ground-motion intensity measures: peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped pseudo-absolute response spectral acceleration (PSA) for 23 periods ranging from 0.01 s to 10.0 s. Tables of motions are provided for each of six attenuation models. The attenuation-model-dependent stress parameters used in the stochastic-method simulations were derived from inversion of PSA data from eight earthquakes in eastern North America.
NASA Astrophysics Data System (ADS)
Son, J.; Medina-Cetina, Z.
2017-12-01
We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from Mississippi Canyon in the northern Gulf of Mexico. Since subsea engineering and offshore construction projects require reliable ground models from site investigations, the primary goal of this study is to reconstruct accurate subsurface profiles of the soil and rock materials under the seafloor; the shallow sediment layers are naturally formed heterogeneous formations that may cause marine landslides or foundation failures of underwater infrastructure. We chose the quasi-Newton method and simulated annealing as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling based on a finite difference method with absorbing boundary conditions implements the iterative simulations in the inverse modeling. We briefly report on numerical experiments using a synthetic offshore ground model containing shallow artificial target profiles of geomaterials under the seafloor. We apply seismic migration processing and generate a Voronoi tessellation on the two-dimensional space domain to improve the computational efficiency of reconstructing the stratigraphic velocity model. We then report the details of a field-data implementation, which shows the complex geologic structures of the northern Gulf of Mexico, and compare the new inverted image of subsurface profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise for improving the subsurface imaging of ground models and the computational efficiency required for full waveform inversion. We anticipate that improved inversion of shallow layers from geophysical data will better support offshore site investigation.
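A compact sketch of the stochastic branch of the comparison (the `misfit` callable stands in for one finite-difference forward simulation plus residual norm; cooling law, step size, and iteration count are assumptions):

```python
import numpy as np

def simulated_annealing(misfit, m0, n_iter=5000, t0=1.0, step=0.05):
    rng = np.random.default_rng(0)
    m, f = m0.copy(), misfit(m0)
    for k in range(n_iter):
        t = t0 / np.log(2 + k)                      # logarithmic cooling schedule
        cand = m + step * rng.normal(size=m.shape)  # random model perturbation
        fc = misfit(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < f or rng.random() < np.exp(-(fc - f) / t):
            m, f = cand, fc
    return m, f

# toy usage: minimize a multimodal misfit over a 2-D model
m_best, f_best = simulated_annealing(lambda m: np.sum(m**2) + np.sin(5 * m).sum(),
                                     np.array([2.0, -2.0]))
```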
NASA Astrophysics Data System (ADS)
Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em
2017-09-01
We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when there is a high condition number of the covariance matrix or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework and in addition we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to the DO near eigenvalue crossing using the equivalence theorem and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to the newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4]. We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we analyze the performance of the method in the presence of eigenvalue crossing and zero eigenvalues; (ii) stochastic Kovasznay flow: we examine the method in the presence of a singular covariance matrix; and (iii) we examine the adaptivity of the method for an incompressible flow over a cylinder where for large stochastic forcing thirteen DO/BO modes are active.
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method
Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...
2017-11-20
The inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. The geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as in reliably guiding adjustment of the borehole position on the fly to reach one or more geological targets. This mathematical treatment is not easy: it requires finding an optimum solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation; hence, the associated inverse problems become much more difficult, since the earth model to be inverted has more detailed structure. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from the local-minimum trap. Alternatively, stochastic optimizations are in general better at finding global optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
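A minimal Hybrid (Hamiltonian) Monte Carlo sketch (the geosteering log-posterior and its gradient are placeholders; step size and trajectory length are assumptions):

```python
import numpy as np

def hmc_sample(log_post, grad_log_post, m0, n_samples=500, eps=0.01, n_leap=20):
    rng = np.random.default_rng(0)
    m, samples = m0.copy(), []
    for _ in range(n_samples):
        p = rng.normal(size=m.shape)                 # auxiliary momentum
        m_new, p_new = m.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_post(m_new)    # leapfrog integration
        for _ in range(n_leap - 1):
            m_new += eps * p_new
            p_new += eps * grad_log_post(m_new)
        m_new += eps * p_new
        p_new += 0.5 * eps * grad_log_post(m_new)
        # Metropolis accept/reject on the change in total Hamiltonian
        dH = (log_post(m_new) - 0.5 * p_new @ p_new) - (log_post(m) - 0.5 * p @ p)
        if np.log(rng.random()) < dH:
            m = m_new
        samples.append(m.copy())
    return np.array(samples)

# toy usage: sample a standard normal posterior in three dimensions
samples = hmc_sample(lambda m: -0.5 * m @ m, lambda m: -m, np.zeros(3))
```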
Simulation studies of phase inversion in agitated vessels using a Monte Carlo technique.
Yeo, Leslie Y; Matar, Omar K; Perez de Ortiz, E Susana; Hewitt, Geoffrey F
2002-04-15
A speculative study on the conditions under which phase inversion occurs in agitated liquid-liquid dispersions is conducted using a Monte Carlo technique. The simulation is based on a stochastic model, which accounts for fundamental physical processes such as drop deformation, breakup, and coalescence, and utilizes the minimization of interfacial energy as a criterion for phase inversion. Profiles of the interfacial energy indicate that a steady-state equilibrium is reached after a sufficiently large number of random moves and that predictions are insensitive to initial drop conditions. The calculated phase inversion holdup is observed to increase with increasing density and viscosity ratio, and to decrease with increasing agitation speed for a fixed viscosity ratio. It is also observed that, for a fixed viscosity ratio, the phase inversion holdup remains constant for large enough agitation speeds. The proposed model is therefore capable of achieving reasonable qualitative agreement with general experimental trends and of reproducing key features observed experimentally. The results of this investigation indicate that this simple stochastic method could be the basis upon which more advanced models for predicting phase inversion behavior can be developed.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected via F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the computational load and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.
Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y
1999-04-20
A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
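A compact real-coded genetic algorithm sketch in this spirit (selection, crossover, and mutation operators, their rates, and the `fitness` stand-in for the light-scattering misfit are all assumptions; the cited GA's actual encoding may differ):

```python
import numpy as np

def ga_minimize(fitness, lo, hi, pop_size=50, n_gen=200, pm=0.05):
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(n_gen):
        f = np.array([fitness(ind) for ind in pop])
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(f[a] < f[b], a, b)]            # binary tournament selection
        mask = rng.random(pop.shape) < 0.5                    # uniform crossover
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        mut = rng.random(pop.shape) < pm                      # mutation: random reset
        fresh = rng.uniform(np.broadcast_to(lo, pop.shape), np.broadcast_to(hi, pop.shape))
        children[mut] = fresh[mut]
        pop = children
    f = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(f)], f.min()

# toy usage: recover a modal size parameter within the interval 46..150
best, misfit = ga_minimize(lambda p: (p[0] - 80.0) ** 2, lo=[46.0], hi=[150.0])
```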
NASA Astrophysics Data System (ADS)
Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.
2016-12-01
Applications of the self-potential method in the fields of hydrogeology and environmental sciences have seen significant developments during the last two decades, with strong use in identifying groundwater flows. Although few authors deal with the forward problem's solution, especially in the geophysics literature, different inversion procedures are currently being developed; in most cases, however, they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem with the finite element method, using St. Venant's principle to transform a point dipole, the field generated by a single vector, into a distribution of electrical monopoles. Two simple aquifer models were then generated with specific boundary conditions, and head potentials, velocity fields, and electric potentials in the medium were computed. From the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach implements Tikhonov regularization with a stabilizing operator adapted to the finite element mesh, while the second constructs a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov random fields (MRF). For all implemented methods, the direct and inverse models were contrasted in two ways: 1) the shape and distribution of the vector field, and 2) histograms of its magnitude. It was concluded that inversion procedures improve when the velocity field's behavior is considered; thus the deterministic method is more suitable for unconfined aquifers than confined ones, McMC has restricted applications and requires a great deal of information (particularly for potential fields), while MRF gives a remarkable response, especially when dealing with confined aquifers.
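As a sketch of the deterministic branch (a toy linearized kernel and a first-difference stabilizer are assumed; the paper's operator is adapted to its finite element mesh):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200))                  # surface-potential kernel (stand-in)
s_true = np.zeros(200); s_true[90:110] = 1.0    # compact source distribution
d = A @ s_true + 0.01 * rng.normal(size=60)     # noisy surface potentials

L = np.diff(np.eye(200), axis=0)                # first-difference stabilizing operator
lam = 1.0                                       # regularization parameter (assumed)
# Tikhonov solution: minimize ||A s - d||^2 + lam ||L s||^2
s_hat = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ d)
```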
A stochastic vortex structure method for interacting particles in turbulent shear flows
NASA Astrophysics Data System (ADS)
Dizaji, Farzad F.; Marshall, Jeffrey S.; Grant, John R.
2018-01-01
In a recent study, we have proposed a new synthetic turbulence method based on stochastic vortex structures (SVSs), and we have demonstrated that this method can accurately predict particle transport, collision, and agglomeration in homogeneous, isotropic turbulence in comparison to direct numerical simulation results. The current paper extends the SVS method to non-homogeneous, anisotropic turbulence. The key element of this extension is a new inversion procedure, by which the vortex initial orientation can be set so as to generate a prescribed Reynolds stress field. After validating this inversion procedure for simple problems, we apply the SVS method to the problem of interacting particle transport by a turbulent planar jet. Measures of the turbulent flow and of particle dispersion, clustering, and collision obtained by the new SVS simulations are shown to compare well with direct numerical simulation results. The influence of different numerical parameters, such as number of vortices and vortex lifetime, on the accuracy of the SVS predictions is also examined.
Discovering network behind infectious disease outbreak
NASA Astrophysics Data System (ADS)
Maeno, Yoshiharu
2010-11-01
Stochasticity and spatial heterogeneity are of great interest recently in studying the spread of an infectious disease. The presented method solves an inverse problem to discover the effectively decisive topology of a heterogeneous network and reveal the transmission parameters which govern the stochastic spreads over the network from a dataset on an infectious disease outbreak in the early growth phase. Populations in a combination of epidemiological compartment models and a meta-population network model are described by stochastic differential equations. Probability density functions are derived from the equations and used for the maximal likelihood estimation of the topology and parameters. The method is tested with computationally synthesized datasets and the WHO dataset on the SARS outbreak.
NASA Astrophysics Data System (ADS)
Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC-based stochastic method with an iterative Gauss-Newton-based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton-based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC-based inversion method provides extensive global information on the unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC-based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using those means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
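To make the MCMC route concrete, the following minimal Python sketch runs a plain Metropolis sampler over the four Cole-Cole parameters (rho0, m, tau, c) against a synthetic complex-resistivity spectrum; the noise level, uniform prior box, and proposal step sizes are illustrative assumptions, not the authors' settings.

import numpy as np

def cole_cole(w, rho0, m, tau, c):
    # complex resistivity of a single Cole-Cole dispersion
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

rng = np.random.default_rng(0)
w = np.logspace(-2, 4, 30)                    # angular frequencies
true = np.array([100.0, 0.5, 0.01, 0.6])      # rho0, m, tau, c
sigma = 0.5
data = cole_cole(w, *true) + sigma * (rng.standard_normal(30) + 1j * rng.standard_normal(30))

def log_like(p):
    r = cole_cole(w, *p) - data
    return -0.5 * np.sum(np.abs(r) ** 2) / sigma ** 2

lo = np.array([1.0, 0.0, 1e-4, 0.1]); hi = np.array([1e3, 1.0, 1.0, 1.0])
p = np.array([50.0, 0.3, 0.05, 0.5])
lp = log_like(p)
step = np.array([2.0, 0.02, 0.002, 0.02])
chain = []
for it in range(20000):
    q = p + step * rng.standard_normal(4)
    if np.all(q > lo) and np.all(q < hi):     # uniform prior box
        lq = log_like(q)
        if np.log(rng.random()) < lq - lp:    # Metropolis accept/reject
            p, lp = q, lq
    chain.append(p.copy())
chain = np.array(chain[5000:])                # discard burn-in
print("posterior mean:", chain.mean(axis=0))
print("posterior std :", chain.std(axis=0))

Marginal histograms of the retained chain give exactly the kind of distribution functions and uncertainty bounds the abstract contrasts with a single Gauss-Newton point estimate.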
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, ACO has seldom been used to invert gravity and magnetic data. Building on a continuous, multi-dimensional objective function for potential field inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes using transition probabilities. We update the pheromone trails using a Gaussian mapping between the objective function value and the quantity of pheromone. The algorithm can analyze the search results in real time and improves the rate of convergence and precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
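The following Python sketch illustrates the node-partition idea on a toy two-variable objective: each continuous variable is discretized into nodes, ants pick nodes from pheromone-weighted transition probabilities, and pheromone deposits follow an exponential (Gaussian-type) mapping of the misfit. All parameter values and the objective are illustrative assumptions, not the paper's NP-ACO implementation.

import numpy as np

def objective(x):                      # toy misfit to minimize
    return np.sum((x - np.array([1.5, -2.0])) ** 2)

rng = np.random.default_rng(1)
n_var, n_node, n_ant = 2, 50, 20
grid = np.linspace(-5, 5, n_node)      # nodes partitioning each variable
pher = np.ones((n_var, n_node))        # pheromone trails
best_x, best_f = None, np.inf
for it in range(200):
    deposit = np.zeros_like(pher)
    prob = pher / pher.sum(axis=1, keepdims=True)   # transition probabilities
    for _ in range(n_ant):
        idx = [rng.choice(n_node, p=prob[v]) for v in range(n_var)]
        x = grid[idx]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
        for v, i in enumerate(idx):    # lower misfit -> more pheromone
            deposit[v, i] += np.exp(-f)
    pher = 0.9 * pher + deposit        # evaporation plus new deposits
print("best node combination:", best_x, "misfit:", best_f)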
Simultaneous stochastic inversion for geomagnetic main field and secular variation. II - 1820-1980
NASA Technical Reports Server (NTRS)
Bloxham, Jeremy; Jackson, Andrew
1989-01-01
With the aim of producing readable time-dependent maps of the geomagnetic field at the core-mantle boundary, the method of simultaneous stochastic inversion for the geomagnetic main field and secular variation, described by Bloxham (1987), was applied to survey data from the period 1820-1980 to yield two time-dependent geomagnetic-field models, one for the period 1900-1980 and the other for 1820-1900. Particular consideration was given to the effect of crustal fields on observations. It was found that the existing methods of accounting for these fields as sources of random noise are inadequate in two circumstances: (1) when sequences of measurements are made at one particular site, and (2) for measurements made at satellite altitude. The present model shows many of the features in the earth's magnetic field at the core-mantle boundary described by Bloxham and Gubbins (1985) and supports many of their earlier conclusions.
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation, and bisection-method optimization. Borehole investigations are performed, and the statistical result shows that the length of the karst cave fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the lengths of the karst cave and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst cave on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces
NASA Astrophysics Data System (ADS)
Vacaru, S. I.
2012-03-01
We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculi are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general form, with generic off-diagonal metrics depending on certain classes of generating and integration functions. Choosing random generating functions, we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form, and we study their main geometric and stochastic properties. Finally, we analyze the conditions under which non-random classical gravitational processes transform into stochastic ones, and vice versa.
NASA Astrophysics Data System (ADS)
Pedesseau, Laurent; Jouanna, Paul
2004-12-01
The SASP (semianalytical stochastic perturbations) method is an original mixed macro-nano-approach dedicated to the mass equilibrium of multispecies phases, periphases, and interphases. This general method, applied here to the reflexive relation Ck⇔μk between the concentrations Ck and the chemical potentials μk of k species within a fluid in equilibrium, leads to the distribution of the particles at the atomic scale. The macro aspects of the method, based on analytical Taylor expansions of the chemical potentials, are intimately mixed with the nano aspects of molecular mechanics computations on stochastically perturbed states. This numerical approach, directly linked to definitions, is universal in comparison with current approaches (DLVO Derjaguin-Landau-Verwey-Overbeek, grand canonical Monte Carlo, etc.), without any restriction on the number of species, concentrations, or boundary conditions. The determination of the relation Ck⇔μk in fact implies two problems: a direct problem Ck⇒μk and an inverse problem μk⇒Ck. Validation of the method is demonstrated in case studies A and B, which treat, respectively, a direct problem and an inverse problem within a free saturated gypsum solution. The flexibility of the method is illustrated in case study C, dealing with an inverse problem within a solution interphase confined between two (120) gypsum faces and remaining in connection with a reference solution. This last inverse problem leads to the mass equilibrium of ions and water molecules within a 3 Å thick gypsum interface. The major unexpected observation is the repulsion of SO4^2- ions towards the reference solution and the attraction of Ca^2+ ions from the reference solution, the concentration being 50 times higher within the interphase than in the free solution. The SASP method is currently the only approach able to tackle the simulation of the number and distribution of ions plus water molecules in such extremely confined conditions. This result is of prime importance for all coupled chemical-mechanical problems dealing with interfaces, and more generally for a wide variety of applications such as phase changes, osmotic equilibrium, and surface energy in complex chemical-physics situations.
Energy diffusion controlled reaction rate of reacting particle driven by broad-band noise
NASA Astrophysics Data System (ADS)
Deng, M. L.; Zhu, W. Q.
2007-10-01
The energy-diffusion-controlled reaction rate of a reacting particle with linear weak damping and broad-band noise excitation is studied using the stochastic averaging method. First, the stochastic averaging method for strongly nonlinear oscillators under broad-band noise excitation using generalized harmonic functions is briefly introduced. Then, the reaction rate of the classical Kramers reaction model with linear weak damping and broad-band noise excitation is investigated using the stochastic averaging method. The averaged Itô stochastic differential equation describing the energy diffusion and the Pontryagin equation governing the mean first-passage time (MFPT) are established. The energy-diffusion-controlled reaction rate is obtained as the inverse of the MFPT by solving the Pontryagin equation. The results for two special cases of broad-band noise, i.e. harmonic noise and exponentially correlated noise, are discussed in detail. It is demonstrated that the general expression for the reaction rate derived by the authors reduces to the classical ones via the linear approximation and the high-potential-barrier approximation. Good agreement with the results of Monte Carlo simulation verifies that the reaction rate can be well predicted by the stochastic averaging method.
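The rate-as-inverse-MFPT relation used here is easy to check by brute force. The Python sketch below replaces the paper's analytical route (averaged Itô equation plus Pontryagin equation) with direct Monte Carlo first-passage sampling for an overdamped particle in a double-well potential; the potential, noise intensity, and overdamped (rather than weakly damped, energy-diffusion) dynamics are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(2)
D, dt = 0.15, 1e-3                      # noise intensity, time step
def dU(x): return x * (x**2 - 1.0)      # U(x) = x^4/4 - x^2/2, barrier at x = 0

n = 400
x = np.full(n, -1.0)                    # ensemble starts in the left well
t = np.zeros(n)
alive = np.ones(n, dtype=bool)
while alive.any():                      # evolve until every walker escapes
    xa = x[alive]
    x[alive] = xa - dU(xa) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(xa.size)
    t[alive] += dt
    alive &= (x < 0.0)                  # first passage to the barrier top
mfpt = t.mean()
print("MFPT =", mfpt, " reaction rate ~ 1/MFPT =", 1.0 / mfpt)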
Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging.
Ménigot, Sébastien; Girault, Jean-Marc
2016-09-01
The development of ultrasound imaging techniques such as pulse inversion has improved tissue harmonic imaging. Nevertheless, no recommendation has been made to date for the design of the waveform transmitted through the medium being explored. Our aim was therefore to find automatically the optimal "imaging" wave that maximized the contrast resolution without a priori information. To avoid assumptions about the waveform, a genetic algorithm probed the medium by transmitting stochastic "explorer" waves. Moreover, these stochastic signals could be constrained by the type of generator available (bipolar or arbitrary). To implement the method, we modified the standard pulse inversion imaging system to include feedback, so that the contrast resolution was optimized by adaptively selecting the samples of the excitation. In simulation, we benchmarked the contrast effectiveness of the best transmitted stochastic commands found against the usual fixed-frequency command. The optimization method converged quickly, after around 300 iterations, to the same optimal region. These results were confirmed experimentally. In the experimental case, the contrast resolution measured on a radiofrequency line could be improved by 6% with a bipolar generator, and by 15% with an arbitrary waveform generator. Copyright © 2016 Elsevier B.V. All rights reserved.
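A minimal Python sketch of the feedback loop: a genetic algorithm evolves the transmitted samples to maximize a contrast proxy computed from the pulse-inversion sum. The quadratic "medium", the harmonic band, and all GA settings are toy assumptions standing in for the simulated and experimental systems of the paper.

import numpy as np

rng = np.random.default_rng(3)
fs, n = 50e6, 256
def medium(w):                          # toy nonlinear propagation
    return w + 0.15 * w**2
def fitness(w):                         # pulse-inversion sum keeps even harmonics
    s = medium(w) + medium(-w)
    S = np.abs(np.fft.rfft(s))
    f = np.fft.rfftfreq(n, 1 / fs)
    band = (f > 3e6) & (f < 5e6)        # "harmonic" band of interest
    return S[band].sum() / (S.sum() + 1e-12)

pop = rng.uniform(-1, 1, (40, n))       # random "explorer" waveforms
for gen in range(150):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]        # selection
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n)                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        mask = rng.random(n) < 0.01                  # mutation
        child[mask] = rng.uniform(-1, 1, mask.sum())
        children.append(child)
    pop = np.vstack([parents, children])
print("best contrast proxy:", max(fitness(w) for w in pop))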
Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin
2008-05-01
For the total light scattering particle sizing technique, an inversion and classification method was proposed based on the dependent model algorithm. The measured particle system was inverted simultaneously using different particle size distribution functions whose mathematical forms were known in advance, and then classified according to the inversion errors. Simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible range with the genetic algorithm, and the inversion results were steady and reliable, which minimized the number of wavelengths required and increased the flexibility in choosing the light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, which are the main constituents of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then inverted by the ABC algorithm under the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
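A condensed Python sketch of the ABC loop (employed and onlooker phases merged) recovering two log-normal parameters from a toy spectral-extinction model; the forward model here is a stand-in for the Mie/Lambert-Beer computation, and the bounds, colony size, and abandonment limit are assumptions.

import numpy as np

rng = np.random.default_rng(4)
lam = np.linspace(0.4, 1.0, 8)                    # wavelengths (um), toy grid
def extinction(p):                                # toy stand-in for Mie + Beer-Lambert
    mu, sig = p
    return np.exp(-((np.log(lam) - mu) / sig) ** 2)
true = np.array([-0.3, 0.35])
meas = extinction(true) * (1 + 0.02 * rng.standard_normal(lam.size))
def cost(p): return np.sum((extinction(p) - meas) ** 2)

lo, hi, n_src = np.array([-1.0, 0.05]), np.array([0.5, 1.0]), 20
food = rng.uniform(lo, hi, (n_src, 2))            # food sources = candidate solutions
fval = np.array([cost(p) for p in food])
trial = np.zeros(n_src, dtype=int)
for it in range(300):
    for i in range(n_src):
        k = rng.integers(n_src)                   # random partner source
        j = rng.integers(2)                       # random coordinate
        cand = food[i].copy()
        cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
        cand = np.clip(cand, lo, hi)
        fc = cost(cand)
        if fc < fval[i]:
            food[i], fval[i], trial[i] = cand, fc, 0
        else:
            trial[i] += 1
        if trial[i] > 30:                         # scout bee abandons a stale source
            food[i] = rng.uniform(lo, hi)
            fval[i], trial[i] = cost(food[i]), 0
print("recovered (mu, sigma):", food[np.argmin(fval)], "true:", true)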
NASA Astrophysics Data System (ADS)
Jardani, A.; Revil, A.; Dupont, J. P.
2013-02-01
The assessment of hydraulic conductivity of heterogeneous aquifers is a difficult task using traditional hydrogeological methods (e.g., steady state or transient pumping tests) due to their low spatial resolution. Geophysical measurements performed at the ground surface and in boreholes provide additional information for increasing the resolution and accuracy of the inverted hydraulic conductivity field. We used a stochastic joint inversion of Direct Current (DC) resistivity and self-potential (SP) data, plus in situ measurements of the salinity in a downstream well during a synthetic salt tracer experiment, to reconstruct the hydraulic conductivity field between two wells. The pilot point parameterization was used to avoid over-parameterization of the inverse problem. Bounds on the model parameters were used to promote a consistent Markov chain Monte Carlo sampling of the model parameters. To evaluate the effectiveness of the joint inversion process, we compared eight cases in which the geophysical data are coupled or not to the in situ sampling of the salinity to map the hydraulic conductivity. We first tested the effectiveness of the inversion of each type of data alone (concentration sampling, self-potential, and DC resistivity), and then we combined the data two by two. We finally combined all the data together to show the value of each type of geophysical data in the joint inversion process, given their different sensitivity maps. We also investigated a case in which the data were contaminated with noise and the variogram was unknown and inverted stochastically. The results of the inversion revealed that incorporating the self-potential data improves the estimate of the hydraulic conductivity field, especially when the self-potential data were combined with the salt concentration measurements in the second well or with the time-lapse cross-well electrical resistivity data. Various tests were also performed to quantify the uncertainty in the inverted hydraulic conductivity field.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
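The core of the approach, a parametric (Gaussian) likelihood approximation built from repeated stochastic simulations and placed inside a conventional Metropolis MCMC, can be sketched on a toy simulator. The Ricker-type dynamics, summary statistics, replicate count, and proposal scale below are all illustrative assumptions; the paper applies the same construction to the FORMIND forest model.

import numpy as np

rng = np.random.default_rng(5)
def simulate(theta, T=50):               # toy stochastic simulator
    x, out = 1.0, []
    for _ in range(T):
        x = theta * x * np.exp(-x) * rng.lognormal(0.0, 0.3)
        out.append(x)
    out = np.array(out)
    return np.array([out.mean(), out.std(), (out[1:] * out[:-1]).mean()])

obs = simulate(8.0)                      # "virtual field data" summaries

def synth_loglik(theta, k=30):           # Gaussian likelihood approximation
    S = np.array([simulate(theta) for _ in range(k)])
    mu, cov = S.mean(0), np.cov(S.T) + 1e-6 * np.eye(3)
    d = obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

theta, ll, chain = 5.0, synth_loglik(5.0), []
for it in range(1500):                   # Metropolis over the approximate likelihood
    prop = theta + 0.3 * rng.standard_normal()
    if 0.1 < prop < 50.0:
        llp = synth_loglik(prop)
        if np.log(rng.random()) < llp - ll:
            theta, ll = prop, llp
    chain.append(theta)
print("posterior mean of theta:", np.mean(chain[500:]))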
NASA Astrophysics Data System (ADS)
Jia, Ningning; Lam, Edmund Y.
2010-04-01
Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
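A minimal Python sketch of the idea: each descent step samples a random focus value, forms the blurred aerial image of a 1D mask, and takes a gradient step on the printed-pattern mismatch, so the mask is trained against the focus distribution rather than the nominal condition. The Gaussian blur model, sigmoid resist, and all constants are toy assumptions, not a lithography simulator.

import numpy as np

rng = np.random.default_rng(6)
n = 64
target = np.zeros(n); target[24:40] = 1.0        # desired printed pattern
def kernel(focus):                               # blur widens with defocus
    s = 2.0 + 0.5 * focus ** 2
    x = np.arange(-15, 16)
    k = np.exp(-x ** 2 / (2 * s ** 2))
    return k / k.sum()
def resist(a): return 1.0 / (1.0 + np.exp(-8.0 * (a - 0.5)))

m, lr = target.copy(), 0.5                       # initialize mask at the target
for it in range(3000):
    f = rng.normal(0.0, 1.0)                     # sample a random focus
    k = kernel(f)
    r = resist(np.convolve(m, k, 'same'))
    err = r - target
    # adjoint of 'same' convolution with a symmetric kernel is (up to edges) itself
    grad = np.convolve(2.0 * err * 8.0 * r * (1.0 - r), k, 'same')
    m = np.clip(m - lr * grad, 0.0, 1.0)

exp_loss = np.mean([np.sum((resist(np.convolve(m, kernel(rng.normal()), 'same')) - target) ** 2)
                    for _ in range(200)])
print("expected loss over focus samples:", exp_loss)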
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
Selected inversion as key to a stable Langevin evolution across the QCD phase boundary
NASA Astrophysics Data System (ADS)
Bloch, Jacques; Schenk, Olaf
2018-03-01
We present new results of full QCD at nonzero chemical potential. In PRD 92, 094516 (2015) the complex Langevin method was shown to break down when the inverse coupling decreases and enters the transition region from the deconfined to the confined phase. We found that the stochastic technique used to estimate the drift term can be very unstable for indefinite matrices. This may be avoided by using the full inverse of the Dirac operator, which is, however, too costly for four-dimensional lattices. The major breakthrough in this work was achieved by realizing that the inverse elements necessary for the drift term can be computed efficiently using the selected inversion technique provided by the parallel sparse direct solver package PARDISO. In our new study we show that no breakdown of the complex Langevin method is encountered and that simulations can be performed across the phase boundary.
Stochastic DT-MRI connectivity mapping on the GPU.
McGraw, Tim; Nadar, Mariappan
2007-01-01
We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.
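The parallel structure the abstract exploits is visible in a serial Python toy: independent stochastic fibers step along a noisy principal direction, and their visit counts form a crude connectivity map. The circular direction field, step noise, and seed are illustrative assumptions (the paper works with real DT-MRI tensors on GPU fragment processors).

import numpy as np

rng = np.random.default_rng(7)
n = 64
yy, xx = np.mgrid[0:n, 0:n].astype(float)
vx, vy = -(yy - n / 2), (xx - n / 2)        # toy principal directions (circulating)
norm = np.hypot(vx, vy) + 1e-9
vx, vy = vx / norm, vy / norm

visits = np.zeros((n, n))
for _ in range(1000):                       # independent stochastic fibers
    x, y = 32.0, 10.0                       # common seed point
    for step in range(200):
        i, j = int(y), int(x)
        ang = np.arctan2(vy[i, j], vx[i, j]) + 0.2 * rng.standard_normal()
        x += np.cos(ang); y += np.sin(ang)
        if not (0 <= x < n and 0 <= y < n):
            break
        visits[int(y), int(x)] += 1
conn = visits / visits.max()                # crude connectivity likelihood map
print("fraction of voxels ever visited:", (visits > 0).mean())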
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foxall, W; Cunningham, C; Mellors, R
Many clandestine development and production activities can be conducted underground to evade surveillance. The purpose of the study reported here was to develop a technique to detect underground facilities by broad-area search and then to characterize the facilities by inversion of the collected data. This would enable constraints to be placed on the types of activities that would be feasible at each underground site, providing a basis for the design of targeted surveillance and analysis for more complete characterization. Excavation of underground cavities causes deformation in the host material and overburden that produces displacements at the ground surface. Such displacements are often measurable by a variety of surveying or geodetic techniques. One measurement technique, Interferometric Synthetic Aperture Radar (InSAR), uses data from satellite-borne (or airborne) synthetic aperture radars (SARs) and so is ideal for detecting and measuring surface displacements in denied-access regions. Depending on the radar frequency, the acquisition mode, and the surface conditions, displacement maps derived from SAR interferograms can provide millimeter- to centimeter-level measurement accuracy on regional and local scales at spatial resolutions of ~1-10 m. Relatively low-resolution (~20 m, say) maps covering large regions can be used for broad-area detection, while finer resolutions (~1 m) can be used to image details of displacement fields over targeted small areas. Surface displacements are generally expected to be largest during or shortly after active excavation, but, depending on the material properties, measurable displacement may continue at a decreasing rate for a considerable time after completion. For a given excavated volume in a given geological setting, the amplitude of the surface displacements decreases as the depth of excavation increases, while the area of the discernible displacement pattern increases. Therefore, the ability to detect evidence of an underground facility using InSAR depends on the displacement sensitivity and spatial resolution of the interferogram, as well as on the size and depth of the facility and the time since its completion. The methodology development described in this report focuses on the exploitation of synthetic aperture radar data that are available commercially from a number of satellite missions. Development of the method involves three components: (1) evaluation of the capability of InSAR to detect and characterize underground facilities; (2) inversion of InSAR data to infer the location, depth, shape and volume of a subsurface facility; and (3) evaluation and selection of suitable geomechanical forward models to use in the inversion. We adapted LLNL's general-purpose Bayesian Markov Chain-Monte Carlo procedure, the 'Stochastic Engine' (SE), to carry out inversions to characterize subsurface void geometries. The SE performs forward simulations for a large number of trial source models to identify the set of models that are consistent with the observations and prior constraints. The inverse solution produced by this kind of stochastic method is a posterior probability density function (pdf) over alternative models, which forms an appropriate input to risk-based decision analyses to evaluate subsequent response strategies. One major advantage of a stochastic inversion approach is its ability to deal with complex, non-linear forward models employing empirical, analytical or numerical methods.
However, while a geomechanical model must incorporate adequate physics to enable sufficiently accurate prediction of surface displacements, it must also be computationally fast enough to render feasible the large number of forward realizations needed in stochastic inversion. This latter requirement prompted us first to investigate computationally efficient empirical relations and closed-form analytical solutions. However, our evaluation revealed severe limitations in the ability of existing empirical and analytical forms to predict deformations from underground cavities with an accuracy consistent with the potential resolution and precision of InSAR data. We followed two approaches to overcoming these limitations. The first was to develop a new analytical solution for a 3D cavity excavated in an elastic half-space. The second was to adapt a fast parallelized finite element method to the SE and evaluate the feasibility of using it in the stochastic inversion. To date we have demonstrated the ability of InSAR to detect underground facilities and measure the associated surface displacements by mapping surface deformations that track the excavation of the Los Angeles Metro system. The Stochastic Engine implementation has been completed and has undergone functional testing.
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
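A minimal Python sketch of a randomized Kaczmarz iteration on a discretized first-kind Fredholm equation: rows are sampled at random, projections are under-relaxed, and early stopping plays the role of regularization. This is a simplification of the paper's regularized block variant; the smoothing kernel, noise level, and iteration budget are assumptions.

import numpy as np

rng = np.random.default_rng(8)
n = 100
s = np.linspace(0, 1, n)
K = np.exp(-(s[:, None] - s[None, :]) ** 2 / 0.01) / n   # smoothing kernel matrix
f_true = np.sin(3 * np.pi * s)
b = K @ f_true + 1e-4 * rng.standard_normal(n)           # noisy data

f = np.zeros(n)
row_norm2 = np.sum(K ** 2, axis=1)
for it in range(20000):                  # randomized Kaczmarz sweeps
    i = rng.integers(n)                  # pick a random row (equation)
    f += 0.5 * (b[i] - K[i] @ f) / row_norm2[i] * K[i]   # under-relaxed projection
# stopping early (rather than iterating to convergence) regularizes the
# ill-posed system against the measurement noise
print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))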
NASA Astrophysics Data System (ADS)
Revil, A.; Jardani, A.; Dupont, J.
2012-12-01
The assessment of hydraulic conductivity of heterogeneous aquifers is a difficult task using traditional hydrogeological methods (e.g., steady state or transient pumping tests) due to their low spatial resolution associated with a low density of available piezometers. Geophysical measurements performed at the ground surface and in boreholes provide additional information for increasing the resolution and accuracy of the inverted hydraulic conductivity. We use a stochastic joint inversion of Direct Current (DC) resistivity and Self-Potential (SP) data, plus in situ measurements of the salinity in a downstream well during a synthetic salt tracer experiment, to reconstruct the hydraulic conductivity field of a heterogeneous aquifer. The pilot point parameterization is used to avoid over-parameterization of the inverse problem. Bounds on the model parameters are used to promote a consistent Markov chain Monte Carlo sampling of the hydrogeological parameters of the model. To evaluate the effectiveness of the inversion process, we compare several scenarios in which the geophysical data are coupled or not to the hydrogeological data to map the hydraulic conductivity. We first test the effectiveness of the inversion of each type of data alone, and then we combine the methods two by two. We finally combine all the information together to show the value of each type of geophysical data in the joint inversion process, given their different sensitivity maps. The results of the inversion reveal that the self-potential data improve the estimate of hydraulic conductivity, especially when the self-potential data are combined with the salt concentration measurements in the second well or with the time-lapse electrical resistivity data. Various tests are also performed to quantify the uncertainty in the inversion when, for instance, the semi-variogram is not known and its parameters must be inverted as well.
NASA Astrophysics Data System (ADS)
Bérubé, Charles L.; Chouteau, Michel; Shamsipour, Pejman; Enkin, Randolph J.; Olivo, Gema R.
2017-08-01
Spectral induced polarization (SIP) measurements are now widely used to infer mineralogical or hydrogeological properties from the low-frequency electrical properties of the subsurface in both mineral exploration and environmental sciences. We present an open-source program that performs fast multi-model inversion of laboratory complex resistivity measurements using Markov-chain Monte Carlo simulation. Using this stochastic method, SIP parameters and their uncertainties may be obtained from the Cole-Cole and Dias models, or from the Debye and Warburg decomposition approaches. The program is tested on synthetic and laboratory data to show that the posterior distribution of a multiple Cole-Cole model is multimodal in particular cases. The Warburg and Debye decomposition approaches yield unique solutions in all cases. It is shown that an adaptive Metropolis algorithm performs faster and is less dependent on the initial parameter values than the Metropolis-Hastings step method when inverting SIP data through the decomposition schemes. There are no advantages in using an adaptive step method for well-defined Cole-Cole inversion. Finally, the influence of measurement noise on the recovered relaxation time distribution is explored. We provide the geophysics community with an open-source platform that can serve as a base for further developments in stochastic SIP data inversion and that may be used to perform parameter analysis with various SIP models.
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE from incoming-outgoing pairs of light intensities. We formulate it as a PDE-constrained optimization problem, in which the mismatch between computed and measured outgoing data is minimized for the same incoming data under the RTE constraint. The memory and computational cost this requires, however, are typically prohibitive, especially in high-dimensional spaces, so smart iterative solvers that use only partial information in each step are called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances quickly, and therefore serves the purpose well. In this paper we formulate the problem, in both the nonlinear and the linearized settings, apply the SGD algorithm, and analyze its convergence performance.
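The online flavor of SGD the abstract describes, touching only a random subset of measurements per step, is captured by this Python sketch on a linear toy problem (the RTE forward operator is replaced by a random matrix, and the batch size and step schedule are assumptions).

import numpy as np

rng = np.random.default_rng(9)
m_, n_ = 2000, 50
A = rng.standard_normal((m_, n_)) / np.sqrt(n_)   # stand-in forward operator
x_true = rng.standard_normal(n_)
b = A @ x_true + 0.01 * rng.standard_normal(m_)   # incoming-outgoing data

x = np.zeros(n_)
batch = 32
for it in range(5000):
    idx = rng.integers(m_, size=batch)            # random subset of the data
    g = A[idx].T @ (A[idx] @ x - b[idx]) / batch  # stochastic gradient of the mismatch
    x -= 0.5 / (1.0 + it / 500.0) * g             # decaying step size
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))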
Reconstruction of stochastic temporal networks through diffusive arrival times
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xiang
2017-06-01
Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.
Generalised filtering and stochastic DCM for fMRI.
Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E; Penny, Will; Hu, Dewen; Friston, Karl
2011-09-15
This paper is about the fitting or inversion of dynamic causal models (DCMs) of fMRI time series. It tries to establish the validity of stochastic DCMs that accommodate random fluctuations in hidden neuronal and physiological states. We compare and contrast deterministic and stochastic DCMs, which do and do not ignore random fluctuations or noise on hidden states. We then compare stochastic DCMs, which do and do not ignore conditional dependence between hidden states and model parameters (generalised filtering and dynamic expectation maximisation, respectively). We first characterise state-noise by comparing the log evidence of models with different a priori assumptions about its amplitude, form and smoothness. Face validity of the inversion scheme is then established using data simulated with and without state-noise to ensure that DCM can identify the parameters and model that generated the data. Finally, we address construct validity using real data from an fMRI study of internet addiction. Our analyses suggest the following. (i) The inversion of stochastic causal models is feasible, given typical fMRI data. (ii) State-noise has nontrivial amplitude and smoothness. (iii) Stochastic DCM has face validity, in the sense that Bayesian model comparison can distinguish between data that have been generated with high and low levels of physiological noise and model inversion provides veridical estimates of effective connectivity. (iv) Relaxing conditional independence assumptions can have greater construct validity, in terms of revealing group differences not disclosed by variational schemes. Finally, we note that the ability to model endogenous or random fluctuations on hidden neuronal (and physiological) states provides a new and possibly more plausible perspective on how regionally specific signals in fMRI are generated. Copyright © 2011. Published by Elsevier Inc.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because of the incompleteness and imperfectness of every data set practically available, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors, and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
Stochastic inversion of ocean color data using the cross-entropy method.
Salama, Mhd Suhyb; Shen, Fang
2010-01-18
Improving the inversion of ocean color data is an ever-continuing effort to increase the accuracy of derived inherent optical properties. In this paper we present a stochastic inversion algorithm to derive inherent optical properties from ocean color, ship, and spaceborne data. The inversion algorithm is based on the cross-entropy method, in which sets of inherent optical properties are generated and converged to the optimal set using an iterative process. The algorithm is validated against four data sets: simulated, noisy simulated, in situ measured, and satellite match-up data sets. Statistical analysis of the validation results is based on model-II regression using five goodness-of-fit indicators; only R2 and the root mean square error (RMSE) are mentioned hereafter. Accurate values of the total absorption coefficient are derived, with R2 > 0.91 and RMSE, of log-transformed data, less than 0.55. Reliable values of the total backscattering coefficient are also obtained, with R2 > 0.7 (after removing outliers) and RMSE < 0.37. The developed algorithm has the ability to derive reliable results from noisy data, with R2 above 0.96 for the total absorption and above 0.84 for the backscattering coefficients. The algorithm is self-contained and easy to implement and modify to derive the variability of chlorophyll-a absorption that may correspond to different phytoplankton species. It gives consistently accurate results and is therefore worth considering for ocean color global products.
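A minimal Python sketch of the cross-entropy iteration on a toy two-parameter ocean-color model: candidate IOP sets are drawn from a Gaussian, the best ("elite") candidates refit the sampling distribution, and the loop converges to the optimum. The quadratic reflectance model, spectral shapes, and all constants are illustrative assumptions, not the paper's forward model.

import numpy as np

rng = np.random.default_rng(10)
lam = np.array([412.0, 443.0, 490.0, 555.0])        # wavelengths (nm)
aw = np.array([0.005, 0.007, 0.015, 0.060])         # nominal water absorption
def forward(p):                                     # toy reflectance model
    a0, bb0 = p
    a = aw + a0 * np.exp(-0.014 * (lam - 443.0))
    bb = bb0 * (443.0 / lam)
    u = bb / (a + bb)
    return 0.0949 * u + 0.0794 * u ** 2

true = np.array([0.05, 0.005])
obs = forward(true) * (1 + 0.01 * rng.standard_normal(4))
def cost(p): return np.sum((forward(p) - obs) ** 2)

mu, sig = np.array([0.2, 0.02]), np.array([0.1, 0.01])
for it in range(40):                                 # cross-entropy iterations
    P = np.abs(rng.normal(mu, sig, size=(200, 2)))   # keep IOPs positive
    costs = np.array([cost(p) for p in P])
    elite = P[np.argsort(costs)[:20]]                # best 10% of the samples
    mu, sig = elite.mean(0), elite.std(0) + 1e-6     # refit sampling distribution
print("recovered (a0, bb0):", mu, "true:", true)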
NASA Astrophysics Data System (ADS)
O'Malley, D.; Le, E. B.; Vesselinov, V. V.
2015-12-01
We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, while still providing unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
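One randomized-matrix-algebra ingredient is easy to show in isolation: a Gaussian sketch compresses a tall least-squares problem before solving it. The Python sketch below is a generic illustration of that idea, not the QLGA/MADS implementation; the sizes and sketching dimension are assumptions.

import numpy as np

rng = np.random.default_rng(11)
m_, n_ = 10000, 200
A = rng.standard_normal((m_, n_))
x_true = rng.standard_normal(n_)
b = A @ x_true + 0.01 * rng.standard_normal(m_)

k = 1000                                        # sketch size, n_ << k << m_
S = rng.standard_normal((k, m_)) / np.sqrt(k)   # Gaussian sketching matrix
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)   # solve the small problem
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)         # reference full solve
print("sketched vs full solution gap:",
      np.linalg.norm(x_sk - x_full) / np.linalg.norm(x_full))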
Stochastic series expansion simulation of the t -V model
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Avdyushev, Victor A.
2017-12-01
Orbit determination from a small sample of observations over a very short observed orbital arc is a strongly nonlinear inverse problem. In such problems an evaluation of orbital uncertainty due to random observation errors is greatly complicated, since the linear estimations conventionally used are no longer acceptable for describing the uncertainty even as a rough approximation. Nevertheless, if an inverse problem is weakly intrinsically nonlinear, one can resort to the so-called method of disturbed observations (aka observational Monte Carlo). Previously, we showed that the weaker the intrinsic nonlinearity, the more efficient the method, i.e. the more accurately it enables one to simulate stochastically the orbital uncertainty, while it is strictly exact only when the problem is intrinsically linear. However, as we ascertained experimentally, its efficiency was found to be higher than that of other stochastic methods widely applied in practice. In the present paper we investigate the intrinsic nonlinearity in complicated inverse problems of Celestial Mechanics when orbits are determined from weakly informative samples of observations, which typically occurs for recently discovered asteroids. To inquire into the question, we introduce an index of intrinsic nonlinearity. In asteroid problems it shows that the intrinsic nonlinearity can be strong enough to appreciably affect probabilistic estimates, especially on the very short observed orbital arcs that the asteroids travel in about a hundredth of their orbital periods or less. As is known from regression analysis, the source of intrinsic nonlinearity is the nonflatness of the estimation subspace specified by a dynamical model in the observation space. Our numerical results indicate that when determining asteroid orbits it is actually very slight. However, in the parametric space the effect of intrinsic nonlinearity is exaggerated mainly by the ill-conditioning of the inverse problem. Even so, as for the method of disturbed observations, we conclude that it should still be entirely acceptable in practice for describing the orbital uncertainty since, from a geometrical point of view, the efficiency of the method depends directly only on the nonflatness of the estimation subspace, and it increases as the nonflatness decreases.
Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2006-01-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-squares method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
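The identification step, recursive estimation with an exponential forgetting factor so that old data fade out as the discharge evolves, looks as follows in a minimal Python sketch for a scalar drifting system. This is a standard recursive least-squares loop, not the paper's extended variant; covariance resetting is omitted, and the system, noise, and forgetting factor are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(12)
lam_f = 0.98                            # exponential forgetting factor
theta = np.zeros(2)                     # estimates of the drifting (a, b)
P = 1e3 * np.eye(2)                     # covariance matrix
a_t, b_t, y_prev = 0.9, 0.5, 0.0
for t in range(2000):
    a_t += 1e-4                         # slow drift of the true system
    u = rng.standard_normal()
    y = a_t * y_prev + b_t * u + 0.05 * rng.standard_normal()
    phi = np.array([y_prev, u])         # regressor
    k = P @ phi / (lam_f + phi @ P @ phi)
    theta = theta + k * (y - phi @ theta)      # RLS update
    P = (P - np.outer(k, phi @ P)) / lam_f     # forgetting keeps P from collapsing
    y_prev = y
print("tracked (a, b):", theta, " true:", (a_t, b_t))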
Adaptive Optimal Stochastic State Feedback Control of Resistive Wall Modes in Tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2007-06-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-squares method with exponential forgetting factor and covariance resetting is used to identify the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
Functional Wigner representation of quantum dynamics of Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Opanchuk, B.; Drummond, P. D.
2013-04-01
We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.
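A single-mode caricature of the scheme: sampling the Wigner function of a coherent state adds half a quantum of noise per quadrature, and each sample is then propagated through the deterministic (truncated) drift of a lossless Kerr oscillator. The Kerr toy model and all parameters are illustrative assumptions; the paper treats multi-mode, multi-component fields with losses.

import numpy as np

rng = np.random.default_rng(13)
chi, n_traj, dt, steps = 0.01, 5000, 0.01, 500
alpha0 = 2.0
# Wigner samples of a coherent state: vacuum noise of variance 1/4 per quadrature
alpha = alpha0 + (rng.standard_normal(n_traj) + 1j * rng.standard_normal(n_traj)) / 2.0
for _ in range(steps):
    # truncated Wigner drift of the Kerr oscillator (the -1 comes from
    # symmetric ordering); no stochastic term is present without losses
    alpha = alpha - 1j * chi * (np.abs(alpha) ** 2 - 1.0) * alpha * dt
print("<n> =", np.mean(np.abs(alpha) ** 2) - 0.5)   # symmetric-ordering correction
print("|<a>| =", abs(np.mean(alpha)))               # Kerr-induced phase collapse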
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transforms one genome arrangement into another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths or to sample from the uniform distribution over optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled as more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion to the sampler implemented in BADGER and to a previously described importance sampling (IS) technique, and find that on high-divergence data sets MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique while avoiding the bias inherent in the IS technique.
FRACTIONAL PEARSON DIFFUSIONS.
Leonenko, Nikolai N; Meerschaert, Mark M; Sikorskii, Alla
2013-07-15
Pearson diffusions are governed by diffusion equations with polynomial coefficients. Fractional Pearson diffusions are governed by the corresponding time-fractional diffusion equation. They are useful for modeling sub-diffusive phenomena, caused by particle sticking and trapping. This paper provides explicit strong solutions for fractional Pearson diffusions, using spectral methods. It also presents stochastic solutions, using a non-Markovian inverse stable time change.
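The stochastic solution, an outer Pearson diffusion run on the clock of an inverse stable subordinator, can be simulated directly. The Python sketch below uses Kanter's representation for the one-sided stable increments and an Ornstein-Uhlenbeck process as the Pearson diffusion; the step sizes and index alpha are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(14)
alpha, du, nu = 0.8, 1e-3, 200000
# one-sided stable increments via Kanter's representation
U = rng.uniform(0.0, np.pi, nu)
E = rng.exponential(1.0, nu)
S = (np.sin(alpha * U) / np.sin(U) ** (1 / alpha)) \
    * (np.sin((1 - alpha) * U) / E) ** ((1 - alpha) / alpha)
D = np.cumsum(du ** (1 / alpha) * S)      # stable subordinator D(u)

t_grid = np.linspace(0.0, 0.5 * D[-1], 500)
E_inv = np.searchsorted(D, t_grid) * du   # inverse subordinator E(t)

# OU diffusion (a Pearson diffusion) evaluated at the operational time E(t)
x, u_done, path = 1.0, 0.0, []
for Et in E_inv:
    while u_done < Et:                    # advance the inner clock to E(t)
        x += -x * du + np.sqrt(2 * du) * rng.standard_normal()
        u_done += du
    path.append(x)                        # constant between jumps of E(t)
print("time-changed OU sample at final t:", path[-1])

The flat stretches of E(t), inherited from the jumps of the subordinator, are what produce the sticking and trapping behavior mentioned in the abstract.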
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, School of Mathematical Science, MOELSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Shu, Ruiwen, E-mail: rshu2@math.wisc.edu
In this paper we consider a kinetic-fluid model for disperse two-phase flows with uncertainty. We propose a stochastic asymptotic-preserving (s-AP) scheme in the generalized polynomial chaos stochastic Galerkin (gPC-sG) framework, which allows the efficient computation of the problem in both kinetic and hydrodynamic regimes. The s-AP property is proved by deriving the equilibrium of the gPC version of the Fokker–Planck operator. The coefficient matrices that arise in a Helmholtz equation and a Poisson equation, essential ingredients of the algorithms, are proved to be positive definite under reasonable and mild assumptions. The computation of the gPC version of a translation operator that arises in the inversion of the Fokker–Planck operator is accelerated by a spectrally accurate splitting method. Numerical examples illustrate the s-AP property and the efficiency of the gPC-sG method in various asymptotic regimes.
Wang, Wenzheng; Wang, Yanming; Song, Wujun; Li, Xueqin
2017-03-20
A multiband infrared diagnostic (MBID) method for methane emission monitoring in confined underground environments is presented, taking into account the strong optical background of gas/solid attenuation. Based on the spatial distribution of aerosols and the complex refractive index of dust particles, forward calculations were carried out with and without methane to obtain the spectral transmittance through the participating atmosphere in a mine roadway. Considering the concurrent attenuation and absorption behavior of dust and gases, four infrared wavebands were selected to retrieve the methane concentration in combination with a stochastic particle swarm optimization (SPSO) algorithm. Inversion results show that the presented MBID method is robust and effective in identifying methane at concentrations of 0.1% or even lower, with relative inversion error within 10%. Further analyses illustrate that the four selected wavebands are indispensable and that the MBID method remains valid under transmission signal disturbance in a conventional dust-polluted atmosphere under mechanized mining conditions. However, the effective detection distance should be limited to within 50 m to keep the relative inversion error below 5% at 1% methane concentration.
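A minimal Python sketch of the retrieval step: a particle swarm searches for the methane concentration that reproduces four-band transmittances. The exponential band model, absorption strengths, and swarm settings are toy assumptions standing in for the forward radiative calculation; the details of the stochastic PSO variant are likewise assumed.

import numpy as np

rng = np.random.default_rng(15)
k_band = np.array([2.0, 0.5, 1.2, 0.1])     # assumed band absorption strengths
def transmittance(c):                        # toy four-band forward model
    return np.exp(-k_band * c * 100.0)
meas = transmittance(0.01) * (1 + 0.002 * rng.standard_normal(4))
def cost(c): return np.sum((transmittance(c) - meas) ** 2)

n_p = 30
x = rng.uniform(0.0, 0.05, n_p)              # candidate concentrations
v = np.zeros(n_p)
pbest = x.copy()
pcost = np.array([cost(xi) for xi in x])
g = pbest[np.argmin(pcost)]                  # global best
for it in range(200):
    r1, r2 = rng.random(n_p), rng.random(n_p)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0.0, 0.1)
    c_now = np.array([cost(xi) for xi in x])
    better = c_now < pcost
    pbest[better], pcost[better] = x[better], c_now[better]
    g = pbest[np.argmin(pcost)]
print("retrieved concentration:", g, "(true 0.01)")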
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, given the enormous computational demand of seismic and EM forward modeling, carrying a large number of unknown parameters in the modeling domain is usually impractical. For shallow subsurface applications, the characterization can be very complicated given the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose a stochastic framework that integrates the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties; we then consider the memory function as a new prior and generate samples from it for further updating when more geophysical data become available. We applied this approach to deep oil reservoir characterization and to shallow subsurface flow monitoring. The model reduction approach reliably reduces the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.
Functional Wigner representation of quantum dynamics of Bose-Einstein condensate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Opanchuk, B.; Drummond, P. D.
2013-04-15
We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.
Darling, Aaron E.
2009-01-01
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186
Application of stochastic inversion in auroral tomography
NASA Astrophysics Data System (ADS)
Nygrén, T.; Markkanen, M.; Lehtinen, M.; Kaila, K.
1996-11-01
A software package originally developed for satellite radio tomography is briefly introduced and its use in two-dimensional auroral tomography is described. The method is based on stochastic inversion, i.e. finding the most probable values of the unknown volume emission rates once the optical measurements are made using either a scanning photometer or an auroral camera. A set of simulation results is shown for different numbers and separations of optical instruments at ground level. It is observed that arcs with a thickness of a few kilometers and separated by a few tens of kilometers are easily reconstructed. The maximum values of the inversion results, however, are often weaker than in the model. The most obvious reason for this is the grid size, which cannot be much smaller than the arc thickness. The grid necessarily generates a spatial averaging effect broadening the arc cross-sections and reducing the peak values. Finally, results from TV-camera observations at Tromsø and Esrange are shown. Although these sites are separated by more than 200 km, arcs close to Tromsø have been successfully reconstructed.
NASA Astrophysics Data System (ADS)
Schmitt, R. J. P.; Bizzi, S.; Castelletti, A. F.; Kondolf, G. M.
2018-01-01
Sediment supply to rivers, subsequent fluvial transport, and the resulting sediment connectivity on network scales are often sparsely monitored and subject to major uncertainty. We propose to approach that uncertainty by adopting a stochastic method for modeling network sediment connectivity, which we present for the Se Kong, Se San, and Sre Pok (3S) tributaries of the Mekong. We quantify how unknown properties of sand sources translate into uncertainty regarding network connectivity by running the CASCADE (CAtchment Sediment Connectivity And DElivery) modeling framework in a Monte Carlo approach for 7,500 random realizations. Only a small ensemble of realizations reproduces downstream observations of sand transport. This ensemble presents an inverse stochastic approximation of the magnitude and variability of transport capacity, sediment flux, and grain size distribution of the sediment transported in the network (i.e., upscaling point observations to the entire network). The approximated magnitude of sand delivered from each tributary to the Mekong is controlled by reaches of low transport capacity ("bottlenecks"). These bottlenecks limit the ability to predict transport in the upper parts of the catchment through inverse stochastic approximation, a limitation that could be addressed by targeted monitoring upstream of identified bottlenecks. Nonetheless, bottlenecks also allow a clear partitioning of natural sand deliveries from the 3S to the Mekong, with the Se Kong delivering less (1.9 Mt/yr) and coarser (median grain size: 0.4 mm) sand than the Se San (5.3 Mt/yr, 0.22 mm) and Sre Pok (11 Mt/yr, 0.19 mm).
Vecherin, Sergey N; Ostashev, Vladimir E; Ziemann, A; Wilson, D Keith; Arnold, K; Barth, M
2007-09-01
Acoustic travel-time tomography allows one to reconstruct temperature and wind velocity fields in the atmosphere. In a recently published paper [S. Vecherin et al., J. Acoust. Soc. Am. 119, 2579 (2006)], a time-dependent stochastic inversion (TDSI) was developed for the reconstruction of these fields from travel times of sound propagation between sources and receivers in a tomography array. TDSI accounts for the correlation of temperature and wind velocity fluctuations both in space and time and therefore yields more accurate reconstruction of these fields in comparison with algebraic techniques and regular stochastic inversion. To use TDSI, one needs to estimate spatial-temporal covariance functions of temperature and wind velocity fluctuations. In this paper, these spatial-temporal covariance functions are derived for locally frozen turbulence, which is a more general concept than the widely used hypothesis of frozen turbulence. The developed theory is applied to the reconstruction of temperature and wind velocity fields in the acoustic tomography experiment carried out by the University of Leipzig, Germany. The reconstructed temperature and velocity fields are presented and errors in the reconstruction of these fields are studied.
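The stochastic-inversion step itself reduces to the Gauss-Markov (optimal estimation) formula once a space-time covariance model is in hand. The sketch below assumes a hypothetical advected Gaussian covariance that only mimics the frozen-turbulence idea; the sensor layout, data vector, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def cov(x1, t1, x2, t2, sig=1.0, lx=50.0, lt=60.0, U=2.0):
    """Hypothetical space-time covariance of temperature fluctuations;
    advection by a mean wind U crudely mimics (locally) frozen
    turbulence before a Gaussian kernel is applied."""
    dx = (x1 - U * t1) - (x2 - U * t2)
    dt = t1 - t2
    return sig**2 * np.exp(-(dx / lx) ** 2 - (dt / lt) ** 2)

# Observations d at sensor positions/times; the field is wanted on a grid.
xs = np.array([0.0, 120.0, 240.0])
ts = np.array([0.0, 30.0, 60.0])
Xo, To = np.meshgrid(xs, ts)
xo, to = Xo.ravel(), To.ravel()
d = rng.standard_normal(xo.size)            # placeholder data vector

xg = np.linspace(0.0, 240.0, 25)            # estimation grid at t = 60 s
tg = np.full_like(xg, 60.0)

Rdd = cov(xo[:, None], to[:, None], xo[None, :], to[None, :])
Rmd = cov(xg[:, None], tg[:, None], xo[None, :], to[None, :])
noise = 0.05 * np.eye(xo.size)              # measurement-noise covariance

# Stochastic-inversion (Gauss-Markov) estimate of the field:
m_hat = Rmd @ np.linalg.solve(Rdd + noise, d)
print(m_hat.shape)
```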
Non-Gaussianity in a quasiclassical electronic circuit
NASA Astrophysics Data System (ADS)
Suzuki, Takafumi J.; Hayakawa, Hisao
2017-05-01
We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuations of the dissipative LC circuit with a stochastic method and evaluate the quantum correction of the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable for correctly estimating the microscopic transfer events in the quantum point contact (QPC) with the quasiclassical inverse formula.
NASA Astrophysics Data System (ADS)
Dizaji, Farzad; Marshall, Jeffrey; Grant, John; Jin, Xing
2017-11-01
Accounting for the effect of subgrid-scale turbulence on interacting particles remains a challenge when using Reynolds-Averaged Navier Stokes (RANS) or Large Eddy Simulation (LES) approaches for simulation of turbulent particulate flows. The standard stochastic Lagrangian method for introducing turbulence into particulate flow computations is not effective when the particles interact via collisions, contact electrification, etc., since this method is not intended to accurately model relative motion between particles. We have recently developed the stochastic vortex structure (SVS) method and demonstrated its use for accurate simulation of particle collision in homogeneous turbulence; the current work presents an extension of the SVS method to turbulent shear flows. The SVS method simulates subgrid-scale turbulence using a set of randomly-positioned, finite-length vortices to generate a synthetic fluctuating velocity field. It has been shown to accurately reproduce the turbulence inertial-range spectrum and the probability density functions for the velocity and acceleration fields. In order to extend SVS to turbulent shear flows, a new inversion method has been developed to orient the vortices in order to generate a specified Reynolds stress field. The extended SVS method is validated in the present study with comparison to direct numerical simulations for a planar turbulent jet flow. This research was supported by the U.S. National Science Foundation under Grant CBET-1332472.
Computational methods for yeast prion curing curves.
Ridout, Martin S
2008-10-01
If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
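Numerical Laplace transform inversion of the kind used here can be illustrated with the Gaver-Stehfest algorithm, one standard choice (the abstract does not commit to a particular inversion scheme, so this is only a representative sketch).

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k (N must be even; N around 10-14 is
    typical, since larger N suffers floating-point cancellation)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2 = math.log(2.0)
    V = stehfest_coefficients(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# Sanity check with a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
print(invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0))  # ~ 0.3679
```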
USDA-ARS?s Scientific Manuscript database
This study evaluated the impact of gas concentration and wind sensor locations on the accuracy of the backward Lagrangian stochastic inverse-dispersion technique (bLS) for measuring gas emission rates from a typical lagoon environment. Path-integrated concentrations (PICs) and 3-dimensional (3D) wi...
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitanidis, Peter
As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.
NASA Astrophysics Data System (ADS)
Zhang, D.; Liao, Q.
2016-12-01
The Bayesian inference provides a convenient framework to solve statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since the MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the differing importance of the parameters, which is essential when the stochastic space has high random dimension. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of computational efficiency.
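A minimal sketch of the surrogate-accelerated scheme: an expensive forward model is replaced by a polynomial fit at collocation-style nodes, and Metropolis-Hastings then evaluates only the surrogate. The one-dimensional forward model and all parameter values below are hypothetical stand-ins for the paper's reservoir models.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(theta):                     # stand-in for an expensive simulator
    return theta + 0.3 * np.sin(theta)  # smooth and monotone on [-3, 3]

# Offline stage: evaluate the simulator at collocation-style nodes and
# fit a cheap polynomial surrogate to the responses.
nodes = np.linspace(-3.0, 3.0, 15)
coeffs = np.polyfit(nodes, forward(nodes), deg=8)
surrogate = lambda th: np.polyval(coeffs, th)

# Online stage: Metropolis-Hastings using only the surrogate likelihood.
theta_true = 1.3
d_obs = forward(theta_true) + 0.05 * rng.standard_normal()
sig = 0.05

def log_post(th):                       # flat prior on [-3, 3]
    if not -3.0 <= th <= 3.0:
        return -np.inf
    return -0.5 * ((d_obs - surrogate(th)) / sig) ** 2

chain, th, lp = [], 0.0, log_post(0.0)
for _ in range(20_000):
    prop = th + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        th, lp = prop, lp_prop
    chain.append(th)
print(np.mean(chain[5_000:]))           # posterior mean, near theta_true
```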
Bayesian inference in geomagnetism
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.
Algebraic, geometric, and stochastic aspects of genetic operators
NASA Technical Reports Server (NTRS)
Foo, N. Y.; Bosworth, J. L.
1972-01-01
Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.
Miklós, István
2003-10-01
As more and more genomes have been sequenced, genomic data are rapidly accumulating. Genome-wide mutations are believed to be more neutral than local mutations such as substitutions, insertions and deletions; therefore, phylogenetic investigations based on inversions, transpositions and inverted transpositions are less biased by the hypothesis of neutral evolution. Although efficient algorithms exist for obtaining the inversion distance of two signed permutations, there is no reliable algorithm when both inversions and transpositions are considered. Moreover, different types of mutations happen at different rates, and it is not clear how to weight them in a distance-based approach. We introduce a Markov chain Monte Carlo method for genome rearrangement based on a stochastic model of evolution, which can estimate the number of different evolutionary events needed to sort a signed permutation. The performance of the method was tested on simulated data, and the estimated numbers of different types of mutations were reliable. Human and Drosophila mitochondrial data were also analysed with the new method. The mixing time of the Markov chain is short both in terms of CPU time and number of proposals. The source code in C is available on request from the author.
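For flavor, here is a toy Metropolis sampler over signed permutations in which states are perturbed by random inversions. This is a generic MCMC skeleton with a breakpoint-count energy, not the stochastic evolution model or proposal scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def breakpoints(p):
    """Breakpoints of a signed permutation relative to the identity,
    counted on the extended permutation (0, p, n+1)."""
    ext = np.concatenate(([0], p, [len(p) + 1]))
    return int(np.sum(ext[1:] - ext[:-1] != 1))

def propose_inversion(p, rng):
    i, j = sorted(rng.choice(len(p), 2, replace=False))
    q = p.copy()
    q[i:j + 1] = -q[i:j + 1][::-1]        # reverse the segment, flip signs
    return q

# Metropolis sampling over rearrangement states: permutations closer to
# the identity (fewer breakpoints) are exponentially favored.
p = np.array([3, -1, 5, -4, 2])
beta, e = 1.5, breakpoints(p)
for _ in range(10_000):
    q = propose_inversion(p, rng)
    e_new = breakpoints(q)
    if np.log(rng.uniform()) < -beta * (e_new - e):
        p, e = q, e_new
print(p, e)   # typically reaches the identity (0 breakpoints)
```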
Benard, Emmanuel; Michel, Christian J
2009-08-01
We present here the SEGM web server (Stochastic Evolution of Genetic Motifs) for studying the evolution of genetic motifs both in the direct evolutionary sense (past-present) and in the inverse evolutionary sense (present-past). The genetic motifs studied can be nucleotides, dinucleotides and trinucleotides. As an example of an application of SEGM and to illustrate its functionalities, we give an analysis of inverse mutations of splice sites of human genome introns. SEGM is freely accessible at http://lsiit-bioinfo.u-strasbg.fr:8080/webMathematica/SEGM/SEGM.html directly or via the web site http://dpt-info.u-strasbg.fr/~michel/. To our knowledge, this SEGM web server is to date the only computational biology software taking this evolutionary approach.
[Research on the measurement of flue-dust concentration in Vis, IR spectral region].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-10-01
In the measurement of flue-dust concentration based on the transmission method, the dependent model algorithm was used to invert the flue-dust concentration in the visible, infrared and visible-infrared spectral regions respectively. By analysis and comparison of the accuracy, linearity and sensitivity of the inverted flue-dust concentration, the optimal spectral region was determined. Meanwhile, the influence of water droplets with different size distributions and volume concentrations was simulated, and a method was proposed which has the advantages of simplicity, rapidity, and suitability for on-line measurement. Simulation experiments illustrate that the flue-dust concentration can be inverted very well in the visible-infrared spectral region, and that it is feasible to use the ratio of the constrained light extinction method to overcome the influence of water droplets. The inverse results all remain satisfactory when 2% stochastic noise is added to the value of the light extinction.
Modeling stochastic noise in gene regulatory systems
Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung
2014-01-01
The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
NASA Astrophysics Data System (ADS)
Penna, Pedro A. A.; Mascarenhas, Nelson D. A.
2018-02-01
The development of new methods to denoise images still attracts researchers, who seek to combat noise with minimal loss of resolution and details such as edges and fine structures. Many algorithms aim to remove additive white Gaussian noise (AWGN). However, it is not the only type of noise that interferes with the analysis and interpretation of images. Therefore, it is extremely important to extend the capacity of filters to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper develops two approaches using non-local means (NLM), originally designed for AWGN, extending its capacity to speckle in intensity SAR images. The first approach is grounded on the use of stochastic distances based on the G0 distribution without transforming the data to the logarithm domain, as in the homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances in NLM. The second method uses a priori NLM denoising with a homomorphic transformation and applies the inverse Gamma distribution to estimate the parameters that are used in NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters for the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods with some recent filters from the literature.
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct to current approaches, the proposed probabilistic (DHP) AC method takes uncertainties of forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
INFO-RNA--a fast approach to inverse RNA folding.
Busch, Anke; Backofen, Rolf
2006-08-01
The structure of RNA molecules is often crucial for their function. Therefore, secondary structure prediction has gained much interest. Here, we consider the inverse RNA folding problem, which means designing RNA sequences that fold into a given structure. We introduce a new algorithm for the inverse folding problem (INFO-RNA) that consists of two parts: a dynamic programming method for good initial sequences, followed by an improved stochastic local search that uses an effective neighbor selection method. During the initialization, we design a sequence that, among all sequences, adopts the given structure with the lowest possible energy. For the selection of neighbors during the search, we use a kind of look-ahead of one selection step, applying an additional energy-based criterion. Afterwards, the pre-ordered neighbors are tested using the actual optimization criterion of minimizing the structure distance between the target structure and the mfe structure of the considered neighbor. We compared our algorithm to RNAinverse and RNA-SSD for artificial and biological test sets. Using INFO-RNA, we performed better than RNAinverse and, in most cases, we gained better results than RNA-SSD, probably the best inverse RNA folding tool on the market. www.bioinf.uni-freiburg.de?Subpages/software.html.
Ramirez, Abelardo; Foxall, William
2014-05-28
Stochastic inversions of InSAR data were carried out to assess the probability that pressure perturbations resulting from CO2 injection into well KB-502 at In Salah penetrated into the lower caprock seal above the reservoir. Inversions of synthetic data were employed to evaluate the factors that affect the vertical resolution of overpressure distributions, and to assess the impact of various sources of uncertainty in prior constraints on inverse solutions. These include alternative pressure-driven deformation modes within reservoir and caprock, the geometry of a sub-vertical fracture zone in the caprock identified in previous studies, and imperfect estimates of the rock mechanical properties. Inversions of field data indicate that there is a high probability that a pressure perturbation during the first phase of injection extended upwards along the fracture zone ~150 m above the reservoir, and less than 50% probability that it reached the Hot Shale unit at 1500 m depth. Within the uncertainty bounds considered, it was concluded that it is very unlikely that the pressure perturbation approached within 150 m of the top of the lower caprock at the Hercynian Unconformity. The results are consistent with previous deterministic inversion and forward modeling studies.
NASA Astrophysics Data System (ADS)
Kaulakys, B.; Alaburda, M.; Ruseckas, J.
2016-05-01
A well-known fact in the financial markets is the so-called ‘inverse cubic law’ of the cumulative distributions of the long-range memory fluctuations of market indicators such as the number of trades, the trading volume and the logarithmic price change. We propose a nonlinear stochastic differential equation (SDE) giving both the power-law behavior of the power spectral density and the long-range dependent inverse cubic law of the cumulative distribution. This is achieved using the suggestion that when the market evolves from calm to violent behavior there is a decrease of the delay time of the multiplicative feedback of the system in comparison to the driving noise correlation time. This results in a transition from the Itô to the Stratonovich sense of the SDE and yields a long-range memory process.
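A sketch of the simplest SDE in this family (without the delayed feedback that drives the Itô-Stratonovich transition), integrated by Euler-Maruyama. For eta = 1 and lam = 4 the stationary density between the bounds is a power law x^(-lam), so the cumulative tail has exponent -(lam-1) = -3, i.e., the inverse cubic law; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified multiplicative SDE of this family (no delayed feedback):
#   dx = (eta - lam/2) * x**(2*eta - 1) * dt + x**eta * dW
# between bounds x_min and x_max; for eta = 1 the stationary density
# behaves as p(x) ~ x**(-lam).
eta, lam = 1.0, 4.0                        # lam = 4 -> inverse cubic tail
x_min, x_max, dt, n = 1.0, 1e3, 1e-4, 500_000

noise = rng.standard_normal(n - 1)
x = np.empty(n)
x[0] = 10.0
for k in range(n - 1):
    xk = x[k]
    step = ((eta - lam / 2.0) * xk ** (2.0 * eta - 1.0) * dt
            + xk ** eta * np.sqrt(dt) * noise[k])
    x[k + 1] = np.clip(xk + step, x_min, x_max)   # crude reflecting bounds

# Log-log slope of the empirical cumulative tail; expect roughly -3.
xs = np.sort(x)
tail = 1.0 - np.arange(n) / n
sel = slice(n // 2, n - 10)
print(np.polyfit(np.log(xs[sel]), np.log(tail[sel]), 1)[0])
```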
Clinical Applications of Stochastic Dynamic Models of the Brain, Part I: A Primer.
Roberts, James A; Friston, Karl J; Breakspear, Michael
2017-04-01
Biological phenomena arise through interactions between an organism's intrinsic dynamics and stochastic forces-random fluctuations due to external inputs, thermal energy, or other exogenous influences. Dynamic processes in the brain derive from neurophysiology and anatomical connectivity; stochastic effects arise through sensory fluctuations, brainstem discharges, and random microscopic states such as thermal noise. The dynamic evolution of systems composed of both dynamic and random effects can be studied with stochastic dynamic models (SDMs). This article, Part I of a two-part series, offers a primer of SDMs and their application to large-scale neural systems in health and disease. The companion article, Part II, reviews the application of SDMs to brain disorders. SDMs generate a distribution of dynamic states, which (we argue) represent ideal candidates for modeling how the brain represents states of the world. When augmented with variational methods for model inversion, SDMs represent a powerful means of inferring neuronal dynamics from functional neuroimaging data in health and disease. Together with deeper theoretical considerations, this work suggests that SDMs will play a unique and influential role in computational psychiatry, unifying empirical observations with models of perception and behavior. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
Dynamic data integration and stochastic inversion of a confined aquifer
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.
2013-12-01
Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, a decreasing number of wells (12, 6, 3) were sampled for facies types, based on which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns in a 100×100 (geostatistical) grid, which were conditioned to the facies measurements at wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid, before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy of ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige et al., 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian Noise Perturbation was used to limit the growth of the CN before the matrix solve. To scale the inverse problem up (i.e., without smoothing and coarsening and therefore reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by 14X using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable but the accuracy of the Ks and boundary estimation degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
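The LSQR-plus-scaling idea can be sketched with SciPy: a diagonal column scaling acts as the preprocessor that improves the condition number seen by the solver. The badly scaled test matrix below is synthetic and merely stands in for the inversion system of equations.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse import diags
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(5)

# A badly scaled sparse system standing in for the inversion equations.
A = sprandom(2500, 2500, density=0.002, random_state=5, format="csr")
A = A + diags(np.geomspace(1.0, 1e6, 2500))    # wildly varying columns
x_true = rng.standard_normal(2500)
b = A @ x_true

# Diagonal column scaling as a preprocessor: solve (A S) y = b, x = S y,
# which typically improves the condition number seen by LSQR.
s = 1.0 / np.sqrt(np.asarray(A.power(2).sum(axis=0)).ravel())
S = diags(s)
y = lsqr(A @ S, b, atol=1e-12, btol=1e-12, iter_lim=5000)[0]
x = S @ y
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```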
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Received time-series at a short distance from the source allow the identification of distinct paths; four of these are the direct path, surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.
NASA Astrophysics Data System (ADS)
King, Thomas Steven
A hybrid gravity modeling method is developed to investigate the structure of sedimentary mass bodies. The method incorporates as constraints surficial basement/sediment contacts and topography of a mass target with a quadratically varying density distribution. The inverse modeling utilizes a genetic algorithm (GA) to scan a wide range of the solution space to determine initial models and the Marquardt-Levenberg (ML) nonlinear inversion to determine final models that meet pre-assigned misfit criteria, thus providing an estimate of model variability and uncertainty. The surface modeling technique modifies Delaunay triangulation by allowing individual facets to be manually constructed and non-convex boundaries to be incorporated into the triangulation scheme. The sedimentary body is represented by a set of uneven prisms and edge elements, comprised of tetrahedrons, capped by polyhedrons. Each underlying prism and edge element's top surface is located by determining its point of tangency with the overlying terrain. The remaining overlying mass is gravitationally evaluated and subtracted from the observation points. Inversion then proceeds in the usual sense, but on an irregular tiered surface with each element's density defined relative to its top surface. Efficiency is particularly important due to the large number of facets evaluated for surface representations and the many repeated element evaluations of the stochastic GA. The gravitation of prisms, triangular faceted polygons, and tetrahedrons can be formulated in different ways, either mathematically or by physical approximations, each having distinct characteristics, such as evaluation time, accuracy over various spatial ranges, and computational singularities. A decision tree or switching routine is constructed for each element by combining these characteristics into a single cohesive package that optimizes the computation for accuracy and speed while avoiding singularities. The GA incorporates a subspace technique and parameter dependency to maintain model smoothness during development, thus minimizing the creation of nonphysical models. The stochastic GA explores the solution space, producing a broad range of unbiased initial models, while the ML inversion is deterministic and thus quickly converges to the final model. The combination allows many solution models to be determined from the same observed data.
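A compressed sketch of the hybrid strategy: a stochastic global stage scans the solution space for initial models, and a Marquardt-Levenberg refinement produces the final model. Here SciPy's differential evolution stands in for the GA, and the forward model is a toy point-mass gravity anomaly rather than the quadratically varying density body of the thesis.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

rng = np.random.default_rng(6)

# Toy gravity-like forward model: anomaly of a buried point mass with
# parameters (depth z, excess mass m), observed along a profile xs.
xs = np.linspace(-50.0, 50.0, 41)

def forward(p):
    z, m = p
    return m * z / (xs**2 + z**2) ** 1.5

p_true = np.array([12.0, 500.0])
d_obs = forward(p_true) + 0.02 * rng.standard_normal(xs.size)
resid = lambda p: forward(p) - d_obs

# Stage 1: stochastic global scan of the solution space (differential
# evolution standing in for the genetic algorithm).
coarse = differential_evolution(lambda p: np.sum(resid(p) ** 2),
                                bounds=[(1.0, 50.0), (10.0, 5000.0)],
                                seed=6, maxiter=50)

# Stage 2: deterministic Marquardt-Levenberg refinement to a final model.
fine = least_squares(resid, coarse.x, method="lm")
print(coarse.x, fine.x)
```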
NASA Astrophysics Data System (ADS)
Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em
2016-07-01
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
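The iterative Boltzmann inversion step referred to above updates the pair potential by the log-ratio of the current and target radial distribution functions. A one-step sketch follows; the RDFs are placeholders and the damping factor alpha is a hypothetical choice.

```python
import numpy as np

def ibi_update(U, g_current, g_target, kT=1.0, alpha=0.2):
    """One iterative Boltzmann inversion step: nudge the pair potential
    by the (damped) log-ratio of current to target RDF."""
    eps = 1e-12
    return U + alpha * kT * np.log((g_current + eps) / (g_target + eps))

# Initial guess: potential of mean force from the target RDF (kT units).
r = np.linspace(0.5, 3.0, 100)
g_target = 1.0 + 0.5 * np.exp(-(r - 1.0) ** 2 / 0.05)   # placeholder RDF
U = -np.log(g_target)

# In a real workflow each iteration re-runs a CG simulation with the
# current U and measures g_current; here a stand-in perturbation:
g_current = g_target * (1.0 + 0.1 * np.sin(3 * r))
U = ibi_update(U, g_current, g_target)
print(U[:3])
```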
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], making it suitable for kernel function implementation. By varying bias... cost function/constraint variables are generated based on an inverse transform of the CDF. In Fig. 5, F⁻¹(u) for a uniformly distributed random number u in [0, 1]... extracts random samples of x varying with the CDF F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate the inverse
Sun, Xiao-gang; Tang, Hong; Dai, Jing-min
2008-12-01
The problem of determining the particle size range in the visible-infrared region was studied using the independent model algorithm in the total scattering technique. By analysis and comparison of the accuracy of the inversion results for different R-R distributions, the measurement range of particle size was determined. Meanwhile, the corrected extinction coefficient was used instead of the original extinction coefficient, which could determine the measurement range of particle size with higher accuracy. Simulation experiments illustrate that the particle size distribution can be retrieved very well in the range from 0.05 to 18 μm at relative refractive index m=1.235 in the visible-infrared spectral region, and that the measurement range of particle size will vary with the wavelength range and relative refractive index. It is feasible to use the constrained least squares inversion method in the independent model to overcome the influence of measurement error, and the inverse results all remain satisfactory when 1% stochastic noise is added to the value of the light extinction.
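Constrained least-squares retrieval of a discretized size distribution can be sketched as non-negative least squares; the extinction kernel below is a smooth placeholder, not a Mie computation, and all grids are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

# Toy total-scattering inversion: extinction at several wavelengths is a
# linear functional of the discretized size distribution, E = K f.
wavelengths = np.linspace(0.5, 2.5, 12)          # micrometres
diameters = np.linspace(0.05, 18.0, 40)          # micrometres
# Placeholder kernel standing in for Mie extinction efficiencies:
K = np.exp(-((wavelengths[:, None] - 0.8 * diameters[None, :]) ** 2) / 4.0)

f_true = np.exp(-((diameters - 5.0) / 2.0) ** 2)  # R-R-like distribution
E = K @ f_true
E_noisy = E * (1.0 + 0.01 * rng.standard_normal(E.size))  # 1% noise

# Non-negativity is the constraint that stabilizes the inversion.
f_est, _ = nnls(K, E_noisy)
print(np.round(f_est[:8], 3))
```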
A deterministic (non-stochastic) low frequency method for geoacoustic inversion.
Tolstoy, A
2010-06-01
It is well known that multiple frequency sources are necessary for accurate geoacoustic inversion. This paper presents an inversion method which uses the low frequency (LF) spectrum only to estimate bottom properties even in the presence of expected errors in source location, phone depths, and ocean sound-speed profiles. Matched field processing (MFP) along a vertical array is used. The LF method first conducts an exhaustive search of the (five) parameter search space (sediment thickness, sound-speed at the top of the sediment layer, the sediment layer sound-speed gradient, the half-space sound-speed, and water depth) at 25 Hz and continues by retaining only the high MFP value parameter combinations. Next, frequency is slowly increased while again retaining only the high value combinations. At each stage of the process, only those parameter combinations which give high MFP values at all previous LF predictions are considered (an ever shrinking set). It is important to note that a complete search of each relevant parameter space seems to be necessary not only at multiple (sequential) frequencies but also at multiple ranges in order to eliminate sidelobes, i.e., false solutions. Even so, there are no mathematical guarantees that one final, unique "solution" will be found.
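The sequential-frequency pruning loop is easy to express in code: score every parameter combination at the lowest frequency, keep only the high-MFP survivors, then rescore the ever shrinking set as frequency increases. The MFP stand-in, grid ranges, and quantile threshold below are all hypothetical.

```python
import itertools
import numpy as np

def mfp_value(params, freq):
    """Placeholder for the matched-field processing power of a parameter
    combination at one frequency (in (0, 1]; higher is better)."""
    target = np.array([20.0, 1550.0, 1.5, 1700.0, 100.0])
    misfit = np.linalg.norm((np.array(params) - target) / target)
    return np.exp(-misfit * (1.0 + 0.1 * freq))

# Exhaustive 5-parameter grid: sediment thickness, sound speed at the
# sediment top, speed gradient, half-space speed, water depth.
grid = list(itertools.product(
    np.linspace(5, 50, 6), np.linspace(1500, 1650, 6),
    np.linspace(0.5, 3.0, 4), np.linspace(1600, 1900, 6),
    np.linspace(80, 120, 5)))

# Start at 25 Hz and slowly increase frequency; keep only combinations
# that score highly at every frequency considered so far.
survivors = grid
for freq in [25.0, 50.0, 75.0, 100.0]:
    vals = np.array([mfp_value(p, freq) for p in survivors])
    keep = vals >= np.quantile(vals, 0.90)
    survivors = [p for p, k in zip(survivors, keep) if k]
print(len(grid), "->", len(survivors))
```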
Cox process representation and inference for stochastic reaction-diffusion processes
NASA Astrophysics Data System (ADS)
Schnoerr, David; Grima, Ramon; Sanguinetti, Guido
2016-05-01
Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
USDA-ARS?s Scientific Manuscript database
The backward Lagrangian stochastic (bLS) inverse-dispersion technique has been used to measure fugitive gas emissions from livestock operations. The accuracy of the bLS technique, as indicated by the percentages of gas recovery in various tracer-release experiments, has generally been within ± 10% o...
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, both kinds of scheme have proved prohibitively expensive, in both computing power and time, due to the normally very large model space which needs to be searched using forward model simulators that take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use the emulator to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling the uncertainties in the data measurements, the relationships between the various physical parameters involved, and the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
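A toy version of the emulation-and-screening workflow: fit a cheap quadratic emulator to a handful of simulator runs, calibrate its error against those runs, and discard prior models whose emulated output is implausibly far from the observation. All names and numbers below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulator(m):                      # stand-in for a slow forward model
    return np.tanh(m[..., 0]) + 0.5 * m[..., 1] ** 2

# Build the emulator from a handful of design runs: a cheap quadratic
# fit plus a calibrated error term.
design = rng.uniform(-2.0, 2.0, (40, 2))
y = simulator(design)
Phi = lambda m: np.column_stack([np.ones(len(m)), m[:, 0], m[:, 1],
                                 m[:, 0]**2, m[:, 1]**2, m[:, 0]*m[:, 1]])
beta, *_ = np.linalg.lstsq(Phi(design), y, rcond=None)
emu = lambda m: Phi(m) @ beta
emu_sd = np.std(y - emu(design))       # emulator error calibration

# Screen a large prior model space: discard models whose emulated
# output is implausibly far from the observed datum.
obs, obs_sd = simulator(np.array([0.7, -0.3])), 0.05
models = rng.uniform(-2.0, 2.0, (10_000, 2))
implaus = np.abs(emu(models) - obs) / np.sqrt(obs_sd**2 + emu_sd**2)
plausible = models[implaus < 3.0]
print(f"{len(plausible)} of {len(models)} models remain plausible")
```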
NASA Astrophysics Data System (ADS)
Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S.
2018-03-01
A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band.
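Inversion profiles of sequences like these can be computed by composing hard-pulse and free-precession rotations of the magnetization vector. The sketch below does this for a basic jump-and-return pair, not the JRS sequences of the paper (whose pulse phases and delays come from the Genetic Algorithm search); the delay value is arbitrary.

```python
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a), np.cos(a), 0],
                     [0, 0, 1]])

def jr_profile(offsets_hz, tau):
    """Longitudinal magnetization after a basic jump-and-return pair
    90x - tau - 90(-x), starting from equilibrium Mz = 1."""
    mz = []
    for f in offsets_hz:
        M = np.array([0.0, 0.0, 1.0])
        M = Rx(np.pi / 2) @ M                 # 90-degree pulse about +x
        M = Rz(2 * np.pi * f * tau) @ M       # free precession for tau
        M = Rx(-np.pi / 2) @ M                # 90-degree pulse about -x
        mz.append(M[2])
    return np.array(mz)

# Magnetization is untouched (Mz = +1) at offsets k/tau and fully
# inverted (Mz = -1) midway between them.
tau = 1e-3                                    # 1 ms delay -> 1 kHz spacing
offs = np.linspace(-1500.0, 1500.0, 7)
print(np.round(jr_profile(offs, tau), 3))
```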
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp-boundary parametrization within a Bayesian framework. Within the algorithm, we consider the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of the 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, supercomputer, multi-platform, workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
On some stochastic formulations and related statistical moments of pharmacokinetic models.
Matis, J H; Wehrly, T E; Metzler, C M
1983-02-01
This paper presents the deterministic and stochastic model for a linear compartment system with constant coefficients, and it develops expressions for the mean residence times (MRT) and the variances of the residence times (VRT) for the stochastic model. The expressions are relatively simple computationally, involving primarily matrix inversion, and they are elegant mathematically, in avoiding eigenvalue analysis and the complex domain. The MRT and VRT provide a set of new meaningful response measures for pharmacokinetic analysis and they give added insight into the system kinetics. The new analysis is illustrated with an example involving the cholesterol turnover in rats.
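The MRT computation referred to above amounts to a single matrix inversion: for a linear compartment system dx/dt = Ax with constant coefficients, the entries of -inv(A) are mean residence times. A sketch with hypothetical rate constants:

```python
import numpy as np

# Rate matrix A of a linear two-compartment model dx/dt = A x with
# elimination from compartment 1 (hypothetical rate constants, 1/h):
k12, k21, k10 = 0.3, 0.2, 0.1
A = np.array([[-(k12 + k10), k21],
              [k12, -k21]])

# Mean residence times: entry (i, j) of -inv(A) is the expected time
# spent in compartment i by a unit dose introduced in compartment j.
# (Variances of residence times follow from further matrix products
# of T, as developed in the paper.)
T = -np.linalg.inv(A)
mrt = T.sum(axis=0)     # total mean residence time per input site
print(T)
print(mrt)
```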
A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.
2016-12-01
It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
Correlation-based regularization and gradient operators for (joint) inversion on unstructured meshes
NASA Astrophysics Data System (ADS)
Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan
2017-04-01
When working with unstructured meshes for geophysical inversions, special attention should be paid to the design of the operators that are used for regularizing the inverse problem and for coupling different property models in joint inversions. Regularization constraints for inversions on unstructured meshes are often defined in a rather ad-hoc manner and usually only involve the cell to which the operator is applied and its direct neighbours. Similarly, most structural coupling operators for joint inversion, such as the popular cross-gradients operator, are only defined in the direct neighbourhood of a cell. As a result, the regularization and coupling length scales and the strength of these operators depend on the discretization as well as on cell sizes and shapes. Especially for unstructured meshes, where the cell sizes vary throughout the model domain, the dependency of the operator on the discretization may lead to artefacts. Designing operators that are based on a spatial correlation model makes it possible to define the correlation length scales over which an operator acts (its footprint), reducing the dependency on the discretization and the effects of variable cell sizes. Moreover, correlation-based operators can accommodate expected anisotropy by using different length scales in the horizontal and vertical directions. Correlation-based regularization operators, also known as stochastic regularization operators, have already been successfully applied to inversions on regular grids. Here, we formulate stochastic operators for unstructured meshes and apply them in 2D surface and 3D cross-well electrical resistivity tomography data inversion examples of layered media. Especially for the synthetic cross-well example, improved inversion results are achieved when stochastic regularization is used instead of a classical smoothness constraint. For the case of cross-gradients operators for joint inversion, the correlation model is used to define the footprint of the operator and to weigh the contributions of the property values that are used to calculate the cross-gradients. In a first series of synthetic-data tests, we examined the mesh dependency of the cross-gradients operators. Compared to operators that are only defined in the direct neighbourhood of a cell, the dependency of the cross-gradients calculation on the cell size is markedly reduced when using operators with larger footprints. A second test with synthetic models focussed on the effect of small-scale variabilities of the parameter value on the cross-gradients calculation. Small-scale variabilities that are superimposed on a global trend of the property value can potentially degrade the cross-gradients calculation and destabilize joint inversion. We observe that the cross-gradients from operators with footprints larger than the length scale of the variabilities are less affected compared to operators with a small footprint. In joint inversions on unstructured meshes, we thus expect the correlation-based coupling operators to ensure robust coupling on a physically meaningful scale.
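A compact sketch of what a correlation-based operator can look like follows. The exponential correlation model, the random points standing in for an unstructured mesh, the truncation threshold, and the length scales are all assumptions for illustration, not the authors' implementation.

```python
# Correlation-based regularization operator with an anisotropic footprint.
import numpy as np

rng = np.random.default_rng(2)
centres = rng.uniform(0, 100, size=(300, 2))   # (x, z) cell centres, metres
lx, lz = 20.0, 5.0                              # horizontal/vertical correlation lengths

dx = (centres[:, None, 0] - centres[None, :, 0]) / lx
dz = (centres[:, None, 1] - centres[None, :, 1]) / lz
C = np.exp(-np.sqrt(dx**2 + dz**2))            # anisotropic exponential correlation

C[C < 0.05] = 0.0                               # truncate the footprint, keep it sparse
W = C / C.sum(axis=1, keepdims=True)            # rows average over the footprint
R = np.eye(len(centres)) - W                    # roughness operator: model minus local mean
print("mean footprint size (cells):", np.count_nonzero(C, axis=1).mean())
```

The footprint here is fixed by the physical lengths lx and lz, so refining or coarsening the mesh changes how many cells fall inside it but not the scale over which the operator acts, which is the point the abstract makes.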
Alternatives to the stochastic "noise vector" approach
NASA Astrophysics Data System (ADS)
de Forcrand, Philippe; Jäger, Benjamin
2018-03-01
Several important observables, like the quark condensate and the Taylor coefficients of the expansion of the QCD pressure with respect to the chemical potential, are based on the trace of the inverse Dirac operator and of its powers. Such traces are traditionally estimated with "noise vectors" sandwiching the operator. We explore alternative approaches based on polynomial approximations of the inverse Dirac operator.
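For reference, the traditional noise-vector baseline that such alternatives are measured against can be sketched as follows. A small symmetric positive-definite matrix is assumed in place of the Dirac operator; the Z2 noise and one-solve-per-vector pattern are the standard Hutchinson-style estimator.

```python
# "Noise vector" trace estimation: tr(A^{-1}) ~ (1/N) sum eta^T A^{-1} eta.
import numpy as np

rng = np.random.default_rng(3)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # well-conditioned stand-in operator

est, nvec = 0.0, 100
for _ in range(nvec):
    eta = rng.choice([-1.0, 1.0], size=n)   # Z2 noise vector
    est += eta @ np.linalg.solve(A, eta)    # one linear solve per vector
est /= nvec

exact = np.trace(np.linalg.inv(A))
print(f"noise-vector estimate {est:.2f} vs exact {exact:.2f}")
```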
NASA Technical Reports Server (NTRS)
Backus, George
1987-01-01
Let $R$ be the real numbers, $R^n$ the linear space of all real $n$-tuples, and $R^\infty$ the linear space of all infinite real sequences $x = (x_1, x_2, \ldots)$. Let $P_n : R^\infty \to R^n$ be the projection operator with $P_n(x) = (x_1, \ldots, x_n)$. Let $p^\infty$ be a probability measure on the smallest $\sigma$-ring of subsets of $R^\infty$ which includes all of the cylinder sets $P_n^{-1}(B_n)$, where $B_n$ is an arbitrary Borel subset of $R^n$. Let $p_n$ be the marginal distribution of $p^\infty$ on $R^n$, so $p_n(B_n) = p^\infty(P_n^{-1}(B_n))$ for each $B_n$. A measure on $R^n$ is isotropic if it is invariant under all orthogonal transformations of $R^n$. All members of the set of all isotropic probability distributions on $R^n$ are described. The result calls into question both stochastic inversion and Bayesian inference, as currently used in many geophysical inverse problems.
NASA Astrophysics Data System (ADS)
Calo, M.; Bodin, T.; Yuan, H.; Romanowicz, B. A.; Larmat, C. S.; Maceira, M.
2013-12-01
Seismic tomography is currently evolving towards 3D earth models that satisfy full seismic waveforms at increasingly high frequencies. This evolution is possible thanks to the advent of powerful numerical methods such as the Spectral Element Method (SEM) that allow accurate computation of the seismic wavefield in complex media, and the drastic increase of computational resources. However, the production of such models requires handling complex misfit functions with more than one local minimum. Standard linearized inversion methods (such as gradient methods) have two main drawbacks: 1) they produce solution models highly dependent on the starting model; 2) they do not provide a means of estimating true model uncertainties. However, these issues can be addressed with stochastic methods that can sample the space of possible solutions efficiently. Such methods are prohibitively challenging computationally in 3D, but increasingly accessible in 1D. In previous work (Yuan and Romanowicz, 2010; Yuan et al., 2011) we developed a continental-scale anisotropic upper mantle model of North America based on a combination of long-period seismic waveforms and SKS splitting measurements, showing the pervasive presence of layering of anisotropy in the cratonic lithosphere with significant variations in the depth of the mid-lithospheric boundary. The radial anisotropy part of the model has recently been updated using the spectral element method for forward wavefield computations and waveform data from the latest deployments of USArray (Yuan and Romanowicz, 2013). However, the long-period waveforms (periods > 40s) themselves only provide a relatively smooth view of the mantle if the starting model is smooth, and the mantle discontinuities necessary for geodynamical interpretation are not imaged. Increasing the frequency of the computations to constrain smaller-scale features is possible, but challenging computationally, and at the risk of falling into local minima of the misfit function. In this work we propose instead to directly tackle the non-linearity of the inverse problem by using stochastic methods to construct a 3D starting model with a good estimate of the depths of the main layering interfaces. We present preliminary results of the construction of such a starting 3D model based on: (1) regionalizing the study area to define provinces within which lateral variations are smooth; (2) applying trans-dimensional stochastic inversion (Bodin et al., 2012) to obtain accurate 1D models in each province as well as the corresponding error distribution, constrained by receiver function and surface wave dispersion data as well as the previously constructed 3D model (name); and (3) connecting these models laterally using data-driven smoothing operators to obtain a starting 3D model with errors. References: Bodin, T., et al. 2012, Transdimensional inversion of receiver functions and surface wave dispersion, J. Geophys. Res., 117, B02301, doi:10.1029/2011JB008560. Yuan and Romanowicz, 2013, in revision. Yuan, H., et al. 2011, 3-D shear wave radially and azimuthally anisotropic velocity model of the North American upper mantle. Geophysical Journal International, 184: 1237-1260. doi: 10.1111/j.1365-246X.2010.04901.x Yuan, H. & Romanowicz, B., 2010. Lithospheric layering in the North American Craton, Nature, 466, 1063-1068.
Stochastic simulation of spatially correlated geo-processes
Christakos, G.
1987-01-01
In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable on a reduced dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This strengthens considerably the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simulation may be technically reduced to unidimensional simulations, various techniques of generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. © 1987 International Association for Mathematical Geology.
NASA Astrophysics Data System (ADS)
Hu, Zhenhua; Gao, Shen; Xiang, Bowen
2016-01-01
An analytical expression for transient four-wave mixing (TFWM) in an inverted semiconductor with carrier-injection pumping was derived from both the density matrix equation and the complex stochastic stationary statistical method of incoherent light. Numerical analysis showed that the TFWM decay tends towards the limits of extreme homogeneous and inhomogeneous broadening in atoms: for low and high carrier-density injection the decay time is inversely proportional to the half power of the net carrier density, while for moderate carrier-density injection the signal obeys an unusual exponential decay with a decay time inversely proportional to the third power of the net carrier density. The results can be applied to studying ultrafast carrier dephasing in inverted semiconductors such as semiconductor laser amplifiers and semiconductor optical amplifiers.
Rydzy, M; Deslauriers, R; Smith, I C; Saunders, J K
1990-08-01
A systematic study was performed to optimize the accuracy of kinetic parameters derived from magnetization transfer measurements. Three techniques were investigated: time-dependent saturation transfer (TDST), saturation recovery (SRS), and inversion recovery (IRS). In the last two methods, one of the resonances undergoing exchange is saturated throughout the experiment. The three techniques were compared with respect to the accuracy of the kinetic parameters derived from experiments performed in a given, fixed amount of time. Stochastic simulation of magnetization transfer experiments was performed to optimize experimental design. General formulas for the relative accuracies of the unidirectional rate constant (k) were derived for each of the three experimental methods. It was calculated that for k values between 0.1 and 1.0 s^-1, T1 values between 1 and 10 s, and relaxation delays appropriate for the creatine kinase reaction, the SRS method yields more accurate values of k than does the IRS method. The TDST method is more accurate than the SRS method for reactions where T1 is long and k is large, within the range of k and T1 values examined. Experimental verification of the method was carried out on a solution in which the forward (PCr → ATP) rate constant (kf) of the creatine kinase reaction was measured.
A general rough-surface inversion algorithm: Theory and application to SAR data
NASA Technical Reports Server (NTRS)
Moghaddam, M.
1993-01-01
Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces and can be applied to any parameterized scattering process.
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
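The deterministic starting point of these hybrids, a stationary Richardson iteration built on a splitting of the coefficient matrix, can be sketched in a few lines. The Jacobi-style preconditioner and the diagonally dominant test matrix are assumptions for illustration; the Monte Carlo acceleration layered on top in the paper is not reproduced here.

```python
# Preconditioned Richardson iteration: x_{k+1} = x_k + M^{-1} (b - A x_k).
import numpy as np

rng = np.random.default_rng(4)
n = 100
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)   # near-identity system
b = rng.standard_normal(n)

M_inv = np.diag(1.0 / np.diag(A))      # Jacobi-style preconditioner from the splitting
x = np.zeros(n)
for k in range(200):
    r = b - A @ x
    if np.linalg.norm(r) < 1e-10:
        break
    x = x + M_inv @ r                   # stationary Richardson step
print("iterations:", k, "final residual:", np.linalg.norm(b - A @ x))
```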
[Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-09-01
In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation of the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths. Furthermore, the minimum and maximum wavelengths in the visible region were also selected as measurement wavelengths. The genetic algorithm was used as the inversion method under the dependent model. Computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and to use this optimal-wavelength selection method in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimization algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.
Refractory pulse counting processes in stochastic neural computers.
McNeill, Dean K; Card, Howard C
2005-03-01
This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
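The rate reduction described above can be reproduced with a toy simulation; the pulse probability and dead time below are arbitrary assumptions. A Bernoulli pulse stream passes through a detector that ignores arrivals for a fixed number of slots after each recorded pulse.

```python
# Dead-time effect on a Bernoulli pulse stream.
import numpy as np

rng = np.random.default_rng(5)
p, dead, n = 0.2, 3, 100_000
pulses = rng.random(n) < p          # ideal Bernoulli pulse train

recorded, blocked = 0, 0
for x in pulses:
    if blocked > 0:
        blocked -= 1                # refractory: arriving pulses not recorded
    elif x:
        recorded += 1
        blocked = dead
print(f"ideal rate {p:.3f}, observed rate {recorded / n:.3f}")
# Renewal argument: equilibrium rate p / (1 + p*dead) = 0.125 for these values.
```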
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and can thus be regarded as an accurate engineering approximation suitable for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant to the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
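The two-step recipe can be condensed as follows. The target distribution (exponential) and the Lorentzian-like amplitude spectrum are arbitrary assumptions chosen for the sketch, not the paper's examples: white Gaussian noise is spectrally shaped with an FFT filter, then pushed through the target inverse CDF via its Gaussian ranks.

```python
# Colored non-Gaussian sample generation: spectral shaping + inverse transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 2**16
white = rng.standard_normal(n)

f = np.fft.rfftfreq(n)
H = 1.0 / np.sqrt(1.0 + (f / 0.01)**2)          # desired amplitude spectrum (assumed)
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                         # colored, still Gaussian

u = stats.norm.cdf(colored)                      # Gaussian ranks in (0, 1)
samples = stats.expon.ppf(u)                     # inverse transform to target pdf
print("mean:", samples.mean(), "lag-1 corr:",
      np.corrcoef(samples[:-1], samples[1:])[0, 1])
```

As the abstract notes, the nonlinear point transform distorts the spectrum somewhat, so the output correlation is close to, but not exactly, the Gaussian target.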
NASA Astrophysics Data System (ADS)
Linde, N.; Vrugt, J. A.
2009-04-01
Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.
An Inverse Problem Formulation Methodology for Stochastic Models
2010-05-02
from the surveillance data. Infection control measures were implemented in the form of health care worker hand-hygiene before and after patient contact... manuscript derives from our interest in understanding the spread of infectious diseases, in particular nosocomial infections, in order to prevent major... given by the inverse of the parameter of the exponential distribution. A hand-hygiene policy applied to health care workers on isolated VRE-colonized
Sustainability of transport structures - some aspects of the nonlinear reliability assessment
NASA Astrophysics Data System (ADS)
Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír
2017-09-01
Efficient techniques for nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined in order to offer an advanced tool for the realistic assessment of the behaviour, failure and safety of transport structures. The approach is based on randomization of the nonlinear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. The results can serve as a rational basis for performance and sustainability assessment, based on advanced nonlinear computer analysis, of structures of the transport infrastructure such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis approach using artificial neural networks and virtual stochastic simulations is applied to determine the fracture-mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated for different types of transport structures made from various materials using the above-mentioned methodology and tools.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of area source pollutant strength is a relevant issue for the atmospheric environment. This constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: the multi-layer perceptron. The connection weights of the neural network are computed by a delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
Discrimination of particulate matter emission sources using stochastic methods
NASA Astrophysics Data System (ADS)
Szczurek, Andrzej; Maciejewska, Monika; Wyłomańska, Agnieszka; Sikora, Grzegorz; Balcerek, Michał; Teuerle, Marek
2016-12-01
Particulate matter (PM) is one of the criteria pollutants determined to be harmful to public health and the environment. For this reason the ability to recognize its emission sources is very important. There are a number of measurement methods which allow PM to be characterized in terms of concentration, particle size distribution, and chemical composition. All this information is useful for establishing a link between the dust found in the air, its emission sources, and its influence on humans as well as the environment. However, these methods are typically quite sophisticated and not applicable outside laboratories. In this work, we considered a PM emission source discrimination method based on continuous measurements of PM concentration with a relatively cheap instrument and stochastic analysis of the obtained data. The stochastic analysis is focused on the temporal variation of PM concentration and involves two steps: (1) recognition of the category of distribution for the data, i.e. stable or in the domain of attraction of a stable distribution, and (2) finding the best matching distribution among the Gaussian, stable and normal-inverse Gaussian (NIG) distributions. We examined six PM emission sources. They were associated with material processing in an industrial environment, namely machining and welding of aluminium, forged carbon steel and plastic with various tools. As shown by the obtained results, PM emission sources may be distinguished based on the statistical distribution of PM concentration variations. The major factor responsible for the differences detectable with our method was the type of material processing and the tool applied. When different materials were processed with the same tool, distinguishing the emission sources was difficult. For successful discrimination it was crucial to consider size-segregated mass fraction concentrations. In our opinion the presented approach is very promising. It deserves further study and development.
Inverse and forward modeling under uncertainty using MRE-based Bayesian approach
NASA Astrophysics Data System (ADS)
Hou, Z.; Rubin, Y.
2004-12-01
A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on, or statistical moments of, the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow parameter identification and soil moisture estimation in the vadose zone, and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or poor signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) the CMT inversion is fully automated and no user interaction is required, although the details of the process can later be visually inspected on the many figures that are automatically plotted; (ii) the automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion; (iii) a data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies; (iv) a Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function; (v) a space-time grid search, effectively combined with least-squares inversion of the moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, and by comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years, using data available at the Swiss data centre (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of the data. The software package, programmed in Python, has been designed to be as versatile as possible in order to be applicable to various networks ranging from local to regional. The method can be applied either to the everyday network data flow or to processing large pre-existing earthquake catalogues and data sets.
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
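To make the gradient/proximal decomposition concrete, here is a hedged toy sketch. It uses an l1 penalty, whose proximal map is soft-thresholding, and a random least-squares problem standing in for the wave-equation data fidelity term; it is the simpler proximal-gradient cousin of the paper's dual averaging method, shown only to illustrate how the regularizer is never differentiated.

```python
# Proximal-gradient iteration: gradient step on data fidelity, prox on the penalty.
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[::10] = 1.0      # sparse toy "image"
b = A @ x_true

lam, step = 0.1, 1.0 / np.linalg.norm(A, 2)**2  # step from the spectral norm
x = np.zeros(100)
for _ in range(500):
    g = A.T @ (A @ x - b)                        # gradient step: data fidelity only
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||x||_1
print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
```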
Discreteness-induced concentration inversion in mesoscopic chemical systems.
Ramaswamy, Rajesh; González-Segredo, Nélido; Sbalzarini, Ivo F; Grima, Ramon
2012-04-10
Molecular discreteness is apparent in small-volume chemical systems, such as biological cells, leading to stochastic kinetics. Here we present a theoretical framework to understand the effects of discreteness on the steady state of a monostable chemical reaction network. We consider independent realizations of the same chemical system in compartments of different volumes. Rate equations ignore molecular discreteness and predict the same average steady-state concentrations in all compartments. However, our theory predicts that the average steady state of the system varies with volume: if a species is more abundant than another for large volumes, then the reverse occurs for volumes below a critical value, leading to a concentration inversion effect. The addition of extrinsic noise increases the size of the critical volume. We theoretically predict the critical volumes and verify, by exact stochastic simulations, that rate equations are qualitatively incorrect in sub-critical volumes.
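The kind of exact stochastic simulation used for verification can be sketched with a minimal Gillespie algorithm. The network below (production plus pairwise annihilation) and all rate values are toy assumptions chosen so that the rate equations predict a volume-independent steady-state concentration, letting the volume dependence of the stochastic mean show up directly.

```python
# Gillespie simulation: 0 -> X at rate k1*V, X + X -> 0 at rate k2/V.
import numpy as np

def mean_conc(V, k1=1.0, k2=0.5, t_end=2000.0, seed=0):
    rng = np.random.default_rng(seed)
    n, t, acc = 0, 0.0, 0.0
    while t < t_end:
        a1 = k1 * V                       # propensity of 0 -> X
        a2 = k2 * n * (n - 1) / V         # propensity of X + X -> 0
        a0 = a1 + a2
        tau = rng.exponential(1.0 / a0)   # waiting time to next reaction
        acc += n * min(tau, t_end - t)    # time-weighted copy number
        t += tau
        n = n + 1 if rng.random() < a1 / a0 else n - 2
    return acc / (t_end * V)              # time-averaged concentration

for V in (0.5, 2.0, 10.0):
    print(f"V = {V}: <x> = {mean_conc(V):.3f}   (rate equations: 1.000)")
```

The rate equations give dx/dt = k1 - 2*k2*x^2, hence x* = 1 for every volume; the simulated means deviate from this increasingly as V shrinks, which is the discreteness effect the abstract describes.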
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah
2017-04-01
Inversion of surface wave dispersion curves is highly nonlinear, and traditional linearized inverse methods face difficulties: a strong dependence on the initial model, the possibility of becoming trapped in local minima, and the need to evaluate partial derivatives. Modern global optimization methods such as the genetic algorithm (GA) and particle swarm optimization (PSO) can overcome these difficulties in surface wave analysis. GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks or fish schools. Practical use of these methods requires a plausible convergence rate, acceptable relative error and reasonable computation cost, all of which are important for modelling studies. Even though the PSO and GA processes look similar, the crossover operation of GA is not used in PSO, and in GA the mutation operation is a stochastic process changing the genes within chromosomes. Unlike in GA, the particles in the PSO algorithm change their positions with velocities updated according to each particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate S-wave velocities and thicknesses of a layered earth model from the Rayleigh wave dispersion curve, compared these results with GA, and we emphasize the advantages of using the PSO algorithm for geophysical modelling studies, considering its rapid convergence, low misfit error and low computation cost.
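For illustration, here is a bare-bones PSO loop of the kind compared in such studies. The two-parameter misfit standing in for the dispersion-curve forward problem and the textbook inertia/acceleration constants are assumptions, not the authors' tuned settings.

```python
# Minimal particle swarm optimization over two model parameters.
import numpy as np

def misfit(m):                       # toy dispersion misfit (assumption)
    return (m[0] - 2.5)**2 + 10.0 * (m[1] - 0.3)**2

rng = np.random.default_rng(8)
npart, ndim = 30, 2
x = rng.uniform([1.0, 0.1], [4.0, 1.0], (npart, ndim))   # e.g. Vs (km/s), thickness (km)
v = np.zeros_like(x)
pbest, pval = x.copy(), np.array([misfit(p) for p in x])
gbest = pbest[pval.argmin()].copy()

w, c1, c2 = 0.72, 1.49, 1.49         # inertia, cognitive, social weights (textbook values)
for _ in range(200):
    r1, r2 = rng.random((2, npart, ndim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v                         # velocities blend particle and swarm experience
    val = np.array([misfit(p) for p in x])
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    gbest = pbest[pval.argmin()].copy()
print("best model:", gbest, "misfit:", pval.min())
```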
Relativistic analysis of stochastic kinematics
NASA Astrophysics Data System (ADS)
Giona, Massimiliano
2017-10-01
The relativistic analysis of stochastic kinematics is developed in order to determine the transformation of the effective diffusivity tensor in inertial frames. Poisson-Kac stochastic processes are initially considered. For one-dimensional spatial models, the effective diffusion coefficient measured in a frame Σ moving with velocity w with respect to the rest frame of the stochastic process is inversely proportional to the third power of the Lorentz factor $\gamma(w) = (1 - w^2/c^2)^{-1/2}$. Subsequently, higher-dimensional processes are analyzed and it is shown that the diffusivity tensor in a moving frame becomes nonisotropic: the diffusivities parallel and orthogonal to the velocity of the moving frame scale differently with respect to $\gamma(w)$. The analysis of discrete space-time diffusion processes permits one to obtain a general transformation theory of the tensor diffusivity, confirmed by several different simulation experiments. Several implications of the theory are also addressed and discussed.
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update consume excessive memory and computational resources. In this study, we propose a new efficient scheme for its storage and update. The Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can choose how much data to store based on correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to experiment. The new scheme is first tested on 1D examples. The estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is utilized to validate the new scheme.
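A hedged sketch of the storage pattern being proposed, under stated assumptions (1D cell coordinates, an exponential covariance model, scipy's CSC container, and a dense intermediate that a real implementation would avoid):

```python
# Truncated covariance stored in CSC format, with a shrinking correlation scale.
import numpy as np
from scipy.sparse import csc_matrix

x = np.linspace(0.0, 1000.0, 500)          # 1D cell coordinates (metres)
scale, var = 100.0, 1.0

def build_cov(scale, var, cutoff=3.0):
    d = np.abs(x[:, None] - x[None, :])
    C = var * np.exp(-d / scale)           # pre-assigned exponential model
    C[d > cutoff * scale] = 0.0            # keep only ~3 correlation scales
    return csc_matrix(C)

C = build_cov(scale, var)
for it in range(5):                        # mock inversion iterations
    var *= 0.9                             # stand-in for the data-driven diagonal update
    scale *= 0.95                          # shorten the correlation scale
    C = build_cov(scale, var)
    print(f"iter {it}: stored entries = {C.nnz}")
```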
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yu; Hou, Zhangshuan; Huang, Maoyi
2013-12-10
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-squares fitting and stochastic Markov chain Monte Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-squares fitting provides little improvement in the model simulations, whereas the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified as significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. The temporal resolution of observations has a larger impact on the results of inverse modeling using heat flux data than using runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
NASA Astrophysics Data System (ADS)
Rocadenbosch, Francesc; Comeron, Adolfo; Vazquez, Gregori; Rodriguez-Gomez, Alejandro; Soriano, Cecilia; Baldasano, Jose M.
1998-12-01
Up to now, retrieval of the atmospheric extinction and backscatter has mainly relied on standard straightforward non-memory procedures such as the slope method, exponential-curve fitting and Klett's method. Yet their performance is ultimately limited by an inherent lack of adaptability: they work only with the present return, and neither past estimates, the statistics of the signals, nor a priori uncertainties are taken into account. In this work, a first inversion of the backscatter and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is tackled by means of an extended Kalman filter (EKF), which overcomes these limitations. As successive return signals come in, the filter updates itself, weighted by the imbalance between the a priori estimates of the optical parameters and the new ones, based on a minimum-variance criterion. Calibration errors and initialization uncertainties can be assimilated as well. The study begins with the formulation of the inversion problem and an appropriate stochastic model. Based on extensive simulation under realistic conditions, it is shown that the EKF approach makes it possible to retrieve the sought-after optical parameters as time-range-dependent functions and hence to track the atmospheric evolution, its performance being limited only by the quality and availability of the a priori information and the accuracy of the atmospheric model assumed. The study ends with an encouraging practical inversion of a live scene measured with the Nd:YAG elastic-backscatter lidar station at our premises in Barcelona.
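The predict/update cycle at the heart of such a filter can be illustrated with a hypothetical scalar example. The single-range power model, the random-walk state model, and all constants below are assumptions for illustration; the paper's filter operates on full time-range profiles.

```python
# Toy scalar EKF tracking a drifting extinction coefficient alpha from
# noisy single-range lidar power readings P = K * exp(-2*alpha*R) / R**2.
import numpy as np

rng = np.random.default_rng(9)
K, R = 1.0e6, 2000.0                     # system constant, range (assumed)
h = lambda a: K * np.exp(-2.0 * a * R) / R**2   # measurement model

alpha_true, q, r = 1.0e-4, (2e-6)**2, (2.0e-3)**2
a_est, P_est = 2.0e-4, (1e-4)**2         # deliberately poor initial guess
for _ in range(200):
    alpha_true += np.sqrt(q) * rng.standard_normal()   # slow atmospheric drift
    z = h(alpha_true) + np.sqrt(r) * rng.standard_normal()
    P_est += q                            # predict (random-walk state model)
    H = -2.0 * R * h(a_est)               # measurement Jacobian dh/dalpha
    S = H * P_est * H + r                 # innovation variance
    Kg = P_est * H / S                    # Kalman gain
    a_est += Kg * (z - h(a_est))          # minimum-variance update
    P_est *= (1.0 - Kg * H)
print(f"true alpha {alpha_true:.2e}, EKF estimate {a_est:.2e}")
```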
Stochastic description of geometric phase for polarized waves in random media
NASA Astrophysics Data System (ADS)
Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent
2013-01-01
We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.
NASA Astrophysics Data System (ADS)
Wei, Lin-Yang; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming
2016-11-01
Inverse estimation of the refractive index distribution in one-dimensional participating media with a graded refractive index (GRI) is investigated. The forward radiative transfer problem is solved by the Chebyshev collocation spectral method. The stochastic particle swarm optimization (SPSO) algorithm is employed to retrieve three kinds of GRI distribution, i.e. linear, sinusoidal and quadratic. The retrieval accuracy of the GRI distribution for different wall emissivities, optical thicknesses, absorption coefficients and scattering coefficients is discussed thoroughly. To improve the retrieval accuracy of the quadratic GRI distribution, a double-layer model is proposed to supply more measurement information. The influence of measurement errors on the precision of the estimated results is also investigated. Since the GRI distribution is unknown beforehand in practice, a quadratic function is employed to retrieve the linear GRI with the SPSO algorithm. All the results show that the SPSO algorithm can retrieve different GRI distributions in participating media accurately, even with noisy data.
Particle Swarm Optimization algorithms for geophysical inversion, practical hints
NASA Astrophysics Data System (ADS)
Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.
2008-12-01
PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations and discretization step), both in the convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given for selecting the correct algorithm and tuning the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
Rocadenbosch, F; Soriano, C; Comerón, A; Baldasano, J M
1999-05-20
A first inversion of the backscatter profile and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is treated by means of an extended Kalman filter (EKF). The EKF approach enables one to overcome the intrinsic limitations of standard straightforward nonmemory procedures such as the slope method, exponential curve fitting, and the backward inversion algorithm. Whereas those procedures are inherently not adaptable because independent inversions are performed for each return signal and neither the statistics of the signals nor a priori uncertainties (e.g., boundary calibrations) are taken into account, in the case of the Kalman filter the filter updates itself because it is weighted by the imbalance between the a priori estimates of the optical parameters (i.e., past inversions) and the new estimates based on a minimum-variance criterion, as long as there are different lidar returns. Calibration errors and initialization uncertainties can be assimilated also. The study begins with the formulation of the inversion problem and an appropriate atmospheric stochastic model. Based on extensive simulation and realistic conditions, it is shown that the EKF approach enables one to retrieve the optical parameters as time-range-dependent functions and hence to track the atmospheric evolution; the performance of this approach is limited only by the quality and availability of the a priori information and the accuracy of the atmospheric model used. The study ends with an encouraging practical inversion of a live scene measured at the Nd:YAG elastic-backscatter lidar station at our premises at the Polytechnic University of Catalonia, Barcelona.
Three-dimensional modelling and inversion in gravity and electrical prospecting (Modélisations et inversions tri-dimensionnelles en prospections gravimétrique et électrique)
NASA Astrophysics Data System (ADS)
Boulanger, Olivier
The aim of this thesis is the application of gravity and resistivity methods to mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of the subsurface resistivity from electrical data. The first technique concerns the elaboration of a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as minimum distance, flatness, smoothness and compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve long and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks. The problem is solved by calculating the model parameters, i.e. the densities of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The code is tested on synthetic data. The advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used. The best combination of constraints tested for multiple bodies appears to be flatness and minimum volume. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential. Modelling generates a huge matrix of Green's functions, which is stored using a pyramidal compression method. The third method interprets electrical potential measurements with a non-linear geostatistical approach that includes new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Bershadskii, A.
1994-10-01
The quantitative (scaling) results of a recent lattice-gas simulation of granular flows [1] are interpreted in terms of the Kolmogorov-Obukhov approach revised for strongly space-intermittent systems. The renormalised power spectrum with exponent −4/3 appears to be a universal spectrum of scalar fluctuations convected by stochastic velocity fields in dissipative systems with inverse energy transfer (some other laboratory and geophysical turbulent flows with this power spectrum, as well as an analogy between this phenomenon and turbulent percolation on an elastic backbone, are pointed out).
Stochastic ontogenetic growth model
NASA Astrophysics Data System (ADS)
West, B. J.; West, D.
2012-02-01
An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second laws of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass (TBM) over the steady-state probability density for the TBM. This is the first derivation of the interspecies metabolic allometric relation from a dynamical model, and the asymptotic steady-state distribution of the TBM is fit to data and shown to be an inverse power law.
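To make the dynamical picture concrete, here is a minimal Euler-Maruyama sketch of a stochastic growth law of the general OGM form (a sublinear anabolic drift term a*m^(3/4) minus a catabolic term b*m, plus multiplicative noise). The coefficients and the noise form are illustrative assumptions, not the paper's fitted model.

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c, dt, n = 1.0, 0.5, 0.1, 1e-3, 200000
    m = np.empty(n)
    m[0] = 0.1                                   # initial total body mass
    for i in range(1, n):
        drift = a * m[i-1]**0.75 - b * m[i-1]    # deterministic OGM-type drift
        m[i] = m[i-1] + drift * dt + c * m[i-1] * np.sqrt(dt) * rng.standard_normal()
    # deterministic fixed point is (a/b)^4 = 16; the second half of the
    # trajectory samples the steady-state distribution of the TBM
    print(m[n // 2:].mean())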
NASA Astrophysics Data System (ADS)
Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.
2018-06-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov chain Monte Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution, or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighbouring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
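A bare-bones sketch of the projection step described above might look as follows in Python. All array names are hypothetical: dict_params holds the parameter vectors already visited, dict_errors the corresponding model-error realizations (detailed minus approximate forward responses), both assumed to have been accumulated during the MCMC run.

    import numpy as np

    def remove_model_error(residual, theta, dict_params, dict_errors, K=10):
        # distances from the current parameter vector to all dictionary entries
        d = np.linalg.norm(dict_params - theta, axis=1)
        idx = np.argsort(d)[:K]                   # K nearest neighbours
        E = dict_errors[idx].T                    # columns: nearby model-error realizations
        Q, _ = np.linalg.qr(E)                    # orthogonal basis of the local error subspace
        return residual - Q @ (Q.T @ residual)    # project the model-error component out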
Stochastic theory of size exclusion chromatography by the characteristic function approach.
Dondi, Francesco; Cavazzini, Alberto; Remelli, Maurizio; Felinger, Attila; Martin, Michel
2002-01-18
A general stochastic theory of size exclusion chromatography (SEC), able to account for size dependence of both pore ingress and egress processes, moving zone dispersion and pore size distribution, was developed. The relationship between stochastic-chromatographic and batch equilibrium conditions is discussed and the fundamental role of the 'ergodic' hypothesis in establishing a link between them is emphasized. SEC models are solved by means of the characteristic function method, and chromatographic parameters like plate height, peak skewness and excess are derived. The peak shapes are obtained by numerical inversion of the characteristic function under the most general conditions of the exploited models. Separate size effects on pore ingress and pore egress processes are investigated, and their effects on both retention selectivity and efficiency are clearly shown. The peak splitting phenomenon and the peak tailing due to incomplete sample sorption near the exclusion limit are discussed. An SEC model for columns with two types of pores is discussed, and several effects on retention selectivity and efficiency coming from pore size differences and their relative abundance are singled out. The relevance of moving zone dispersion to separation is investigated. The present approach proves to be general and able to account for more complex SEC conditions such as continuous pore size distributions and mixed retention mechanisms.
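As an illustration of recovering a peak shape by numerically inverting a characteristic function, the toy example below uses the classic Giddings-type CF for n exponential sorption steps of mean duration tau; the parameter values are made up and the CF is far simpler than the paper's general SEC models.

    import numpy as np

    n, tau = 50.0, 0.1
    # CF of total sorbed time: n Poisson-distributed exponential sojourns
    phi = lambda w: np.exp(n * (1.0 / (1.0 - 1j * w * tau) - 1.0))

    w = np.linspace(0.0, 400.0, 20001)          # frequency grid (CF has decayed by w_max)
    t = np.linspace(0.1, 12.0, 600)             # retention-time grid
    vals = phi(w)
    # for a real-valued density: f(t) = (1/pi) * Re integral_0^inf phi(w) e^{-iwt} dw
    pdf = np.array([np.trapz((vals * np.exp(-1j * w * ti)).real, w) / np.pi for ti in t])
    print(np.trapz(pdf, t))                     # should be close to 1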
Deriving Link Travel-Time Distributions via Stochastic Speed Processes
2004-02-01
general, an exact expression for the inverse transform is available when Equation (9) is a vector of rational functions in both of the complex variables... Otherwise, recovery of the original function is accomplished through the inverse transform $f(t) = \frac{1}{2\pi j}\int_{c-j\infty}^{c+j\infty} e^{st} f^*(s)\,ds$ (13), which is usually... given by the two-dimensional transform $f^*(s_1,s_2) = \int_0^\infty\!\int_0^\infty e^{-(s_1 x + s_2 t)} f(x,t)\,dx\,dt$ (14), with inverse transform $f(x,t) = -\frac{1}{4\pi^2}\int_{c_1-j\infty}^{c_1+j\infty}\int_{c_2-j\infty}^{c_2+j\infty} e^{s_1 x + s_2 t} f^*(s_1,s_2)\,ds_1\,ds_2$.
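Where the transform is not rational and the Bromwich integral in equation (13) must be evaluated numerically, library routines such as mpmath's invertlaplace (Talbot and related contour methods) can be used. A minimal sanity check on a hypothetical transform with a known inverse:

    import mpmath as mp

    F = lambda s: 1 / (s + 1)          # stand-in transform; exact inverse is exp(-t)
    for t in (0.5, 1.0, 2.0):
        approx = mp.invertlaplace(F, t, method='talbot')
        print(t, approx, mp.exp(-t))   # numerical inversion vs. exact value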
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1998-01-01
This paper contains a study of two methods for use in a generic nonlinear simulation tool that could be used to determine achievable control dynamics and control power requirements while performing perfect tracking maneuvers over the entire flight envelope. The two methods are NDI (nonlinear dynamic inversion) and the SOFFT (Stochastic Optimal Feedforward and Feedback Technology) feedforward control structure. Equivalent discrete and continuous SOFFT feedforward controllers have been developed. These equivalent forms clearly show that the closed-loop plant-model loop is a plant inversion and is the same as the NDI formulation. The main difference is that the NDI formulation has a closed-loop controller structure whereas SOFFT uses an open-loop command model. Continuous, discrete, and hybrid controller structures have been developed and integrated into the formulation. Linear simulation results show that seven different configurations all give essentially the same response, with the NDI hybrid being slightly different. The SOFFT controller gave better tracking performance than the NDI controller when a nonlinear saturation element was added. Future plans include evaluation using a nonlinear simulation.
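The core of NDI is algebraic cancellation of the plant nonlinearity so that the closed loop follows specified linear dynamics. A minimal sketch for a pendulum plant (a generic textbook example, not the paper's aircraft model; the gains kp and kd are arbitrary) is:

    import numpy as np

    # Plant (simple pendulum): theta_ddot = -(g/l) sin(theta) + u / (m l^2)
    g, l, m = 9.81, 1.0, 1.0

    def ndi_control(theta, theta_dot, theta_ref, kp=25.0, kd=10.0):
        v = kp * (theta_ref - theta) - kd * theta_dot   # commanded linear dynamics
        # invert the plant so the closed loop realizes theta_ddot = v exactly
        return m * l**2 * (v + (g / l) * np.sin(theta))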
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
Yu, Han; Hageman Blair, Rachael
2016-01-01
Understanding community structure in networks has received considerable attention in recent years. Detecting and leveraging community structure holds promise for understanding and potentially intervening with the spread of influence. Network features of this type have important implications in a number of research areas, including marketing, social networks, and biology. However, an overwhelming majority of traditional approaches to community detection cannot readily incorporate information on node attributes. Integrating structural and attribute information is a major challenge. We propose a flexible iterative method, inverse regularized Markov Clustering (irMCL), for network clustering via manipulation of the transition probability matrix (aka stochastic flow) corresponding to a graph. Similar to traditional Markov Clustering, irMCL iterates between "expand" and "inflate" operations, which aim to strengthen the intra-cluster flow while weakening the inter-cluster flow. Attribute information is directly incorporated into the iterative method through a sigmoid (logistic function) that naturally dampens attribute influence that is contradictory to the stochastic flow through the network. We demonstrate the advantages and flexibility of our approach using simulations and real data. We highlight an application that integrates a breast cancer gene expression data set with a functional network defined via KEGG pathways, revealing significant modules for survival.
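A sketch of the classic expand/inflate iteration, with a hypothetical hook for attribute dampening, is shown below. The logistic weighting of attr_sim is a stand-in for irMCL's sigmoid coupling, not the authors' exact formulation.

    import numpy as np

    def irmcl_like(A, attr_sim=None, expand=2, inflate=2.0, iters=100, tol=1e-8):
        # column-stochastic flow matrix from the adjacency (self-loops added)
        M = A + np.eye(A.shape[0])
        if attr_sim is not None:
            # hypothetical attribute coupling: a logistic weight dampens flow
            # that contradicts node-attribute similarity
            M = M * (1.0 / (1.0 + np.exp(-attr_sim)))
        M = M / M.sum(axis=0)
        for _ in range(iters):
            prev = M
            M = np.linalg.matrix_power(M, expand)   # "expand": flow spreads along paths
            M = M ** inflate                        # "inflate": strengthen strong flows
            M = M / M.sum(axis=0)
            if np.abs(M - prev).max() < tol:
                break
        return M                                    # clusters: rows retaining nonzero mass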
Inverse Stochastic Resonance in Cerebellar Purkinje Cells
Häusser, Michael; Gutkin, Boris S.; Roth, Arnd
2016-01-01
Purkinje neurons play an important role in cerebellar computation since their axons are the only projection from the cerebellar cortex to deeper cerebellar structures. They have complex internal dynamics, which allow them to fire spontaneously, display bistability, and also to be involved in network phenomena such as high frequency oscillations and travelling waves. Purkinje cells exhibit type II excitability, which can be revealed by a discontinuity in their f-I curves. We show that this excitability mechanism allows Purkinje cells to be efficiently inhibited by noise of a particular variance, a phenomenon known as inverse stochastic resonance (ISR). While ISR has been described in theoretical models of single neurons, here we provide the first experimental evidence for this effect. We find that an adaptive exponential integrate-and-fire model fitted to the basic Purkinje cell characteristics using a modified dynamic IV method displays ISR and bistability between the resting state and a repetitive activity limit cycle. ISR allows the Purkinje cell to operate in different functional regimes: the all-or-none toggle or the linear filter mode, depending on the variance of the synaptic input. We propose that synaptic noise allows Purkinje cells to quickly switch between these functional regimes. Using mutual information analysis, we demonstrate that ISR can lead to a locally optimal information transfer between the input and output spike train of the Purkinje cell. These results provide the first experimental evidence for ISR and suggest a functional role for ISR in cerebellar information processing. PMID:27541958
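For orientation, the scaffolding of such an experiment in silico is a noise-amplitude scan of an adaptive exponential integrate-and-fire neuron. The parameters below are generic AdEx values from the literature, not values fitted to Purkinje cells, and a firing-rate minimum at intermediate noise (the ISR signature) only appears in bistable parameter regimes; this sketch merely shows the simulation machinery.

    import numpy as np

    def adex_firing_rate(sigma, T=5.0, dt=1e-4, seed=0):
        # generic AdEx parameters (SI units), illustrative only
        C, gL, EL, VT, DT = 281e-12, 30e-9, -70.6e-3, -50.4e-3, 2e-3
        tauw, a, b, Vr = 144e-3, 4e-9, 80.5e-12, -70.6e-3
        I0 = 0.8e-9                      # mean drive near threshold
        rng = np.random.default_rng(seed)
        V, w, spikes = EL, 0.0, 0
        for _ in range(int(T / dt)):
            I = I0 + sigma * rng.standard_normal() / np.sqrt(dt)   # white-noise current
            V += dt * (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
            w += dt * (a * (V - EL) - w) / tauw
            if V > 0.0:                  # spike: reset voltage, increment adaptation
                V, w, spikes = Vr, w + b, spikes + 1
        return spikes / T

    for s in (0.0, 0.05e-9, 0.2e-9):     # scan the noise amplitude
        print(s, adex_firing_rate(s), "Hz")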
Stochastic kinetic mean field model
NASA Astrophysics Data System (ADS)
Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.
2016-07-01
This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We show that the result of one SKMF run may correspond to the average of several KMC runs, with the number of KMC runs inversely proportional to the squared noise amplitude in SKMF. This also makes SKMF an ideal tool for statistical purposes.
Compensating for estimation smoothing in kriging
Olea, R.A.; Pawlowsky, Vera
1996-01-01
Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator: compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft² slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well, better than ordinary kriging, but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. © 1996 International Association for Mathematical Geology.
Finite-fault source inversion using adjoint methods in 3-D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-07-01
Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak-slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although the surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) model can be obtained using a non-negative least-squares algorithm for the singly truncated case, or using the bounded-variable least-squares algorithm for the doubly truncated case. I show that the case of independent uniform priors can be approximated using TMVN. Numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdfs, mean, variance and covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP model is extremely fast.
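A minimal sketch of the singly truncated MAP computation, under the assumption of a Gaussian likelihood and a positivity-truncated Gaussian prior (all array names hypothetical; Cd_isqrt and Cm_isqrt denote inverse square roots of the data and prior covariances):

    import numpy as np
    from scipy.optimize import nnls

    # MAP: minimize |Cd^(-1/2) (G m - d)|^2 + |Cm^(-1/2) (m - m0)|^2  s.t. m >= 0.
    # For double truncation (0 <= m <= m_max), scipy.optimize.lsq_linear with
    # bounds plays the role of the bounded-variable least-squares algorithm.
    def map_nonnegative(G, d, Cd_isqrt, Cm_isqrt, m0):
        A = np.vstack([Cd_isqrt @ G, Cm_isqrt])
        b = np.concatenate([Cd_isqrt @ d, Cm_isqrt @ m0])
        m_map, _ = nnls(A, b)
        return m_map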
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least twice as fast in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples focusing on surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate Bayesian inference. Christoffel least-squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
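As a toy illustration of a polynomial-chaos surrogate (plain least squares standing in for the Christoffel-weighted variant; the forward model and degree are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    forward = lambda x: np.exp(np.sin(3 * x))      # stand-in for an expensive forward model
    x = rng.uniform(-1.0, 1.0, 200)                # samples from a uniform prior on [-1, 1]
    y = forward(x)
    V = np.polynomial.legendre.legvander(x, 8)     # Legendre (gPC) design matrix
    c, *_ = np.linalg.lstsq(V, y, rcond=None)      # least-squares gPC coefficients
    surrogate = lambda xs: np.polynomial.legendre.legval(xs, c)
    xt = np.linspace(-1.0, 1.0, 5)
    print(np.max(np.abs(surrogate(xt) - forward(xt))))   # surrogate accuracy check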
Multiple well-shutdown tests and site-scale flow simulation in fractured rocks
Tiedeman, Claire; Lacombe, Pierre J.; Goode, Daniel J.
2010-01-01
A new method was developed for conducting aquifer tests in fractured-rock flow systems that have a pump-and-treat (P&T) operation for containing and removing groundwater contaminants. The method involves temporary shutdown of individual pumps in wells of the P&T system. Conducting aquifer tests in this manner has several advantages, including (1) no additional contaminated water is withdrawn, and (2) hydraulic containment of contaminants remains largely intact because pumping continues at most wells. The well-shutdown test method was applied at the former Naval Air Warfare Center (NAWC), West Trenton, New Jersey, where a P&T operation is designed to contain and remove trichloroethene and its daughter products in the dipping fractured sedimentary rocks underlying the site. The detailed site-scale subsurface geologic stratigraphy, a three-dimensional MODFLOW model, and inverse methods in UCODE_2005 were used to analyze the shutdown tests. In the model, a deterministic method was used for representing the highly heterogeneous hydraulic conductivity distribution and simulations were conducted using an equivalent porous media method. This approach was very successful for simulating the shutdown tests, contrary to a common perception that flow in fractured rocks must be simulated using a stochastic or discrete fracture representation of heterogeneity. Use of inverse methods to simultaneously calibrate the model to the multiple shutdown tests was integral to the effectiveness of the approach.
Mathematics and the Quest for Fundamental Principles of Biology
2017-05-05
stochasticity as part of the process, rather than as extrinsic noise. In some sense, like all organisms, we must continually solve inverse problems...predictions that could not be made before, ideally while simultaneously elucidating new mechanisms and proposing new experiments. The meeting concluded with
Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach
NASA Astrophysics Data System (ADS)
Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.
2016-12-01
Effective modeling of complex physical systems arising in the geosciences depends on knowing parameters which are often difficult or impossible to measure in situ. In this talk we focus on two such problems: estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we describe, proposed by collaborators D. Estep, T. Butler and others, is a novel stochastic inversion technique grounded in measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space which give rise to these observables. When viewed as mappings between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and on estimating bottom friction in a complicated near-shore coastal application.
Inverse stochastic-dynamic models for high-resolution Greenland ice core records
NASA Astrophysics Data System (ADS)
Boers, Niklas; Chekroun, Mickael D.; Liu, Honghu; Kondrashov, Dmitri; Rousseau, Denis-Didier; Svensson, Anders; Bigler, Matthias; Ghil, Michael
2017-12-01
Proxy records from Greenland ice cores have been studied for several decades, yet many open questions remain regarding the climate variability encoded therein. Here, we use a Bayesian framework for inferring inverse, stochastic-dynamic models from δ18O and dust records of unprecedented, subdecadal temporal resolution. The records stem from the North Greenland Ice Core Project (NGRIP), and we focus on the time interval 59-22 ka b2k. Our model reproduces the dynamical characteristics of both the δ18O and dust proxy records, including the millennial-scale Dansgaard-Oeschger variability, as well as statistical properties such as probability density functions, waiting times and power spectra, with no need for any external forcing. The crucial ingredients for capturing these properties are (i) high-resolution training data, (ii) cubic drift terms, (iii) nonlinear coupling terms between the δ18O and dust time series, and (iv) non-Markovian contributions that represent short-term memory effects.
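For intuition about ingredients (ii) and (iii), the sketch below integrates a coupled two-variable SDE with cubic drift and nonlinear cross-coupling via Euler-Maruyama. All coefficients are illustrative placeholders, not the fitted NGRIP model.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, n = 0.01, 100000
    a1, a3, b1, b3, cxy, cyx, sx, sy = 1.0, 1.0, 1.0, 1.0, 0.3, -0.2, 0.5, 0.5
    x = np.zeros(n)
    y = np.zeros(n)
    for i in range(1, n):
        dW1, dW2 = np.sqrt(dt) * rng.standard_normal(2)
        x[i] = x[i-1] + (a1 * x[i-1] - a3 * x[i-1]**3 + cxy * y[i-1]) * dt + sx * dW1
        y[i] = y[i-1] + (b1 * y[i-1] - b3 * y[i-1]**3 + cyx * x[i-1]) * dt + sy * dW2
    # the double-well (cubic) drift produces spontaneous jumps between regimes,
    # qualitatively reminiscent of Dansgaard-Oeschger-type variability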
A single promoter inversion switches Photorhabdus between pathogenic and mutualistic states.
Somvanshi, Vishal S; Sloup, Rudolph E; Crawford, Jason M; Martin, Alexander R; Heidt, Anthony J; Kim, Kwi-suk; Clardy, Jon; Ciche, Todd A
2012-07-06
Microbial populations stochastically generate variants with strikingly different properties, such as virulence or avirulence and antibiotic tolerance or sensitivity. Photorhabdus luminescens bacteria have a variable life history in which they alternate between being pathogens of a wide variety of insects and mutualists of their specific host nematodes. Here, we show that the P. luminescens pathogenic variant (P form) switches to a smaller-cell variant (M form) to initiate mutualism in host nematode intestines. A stochastic promoter inversion causes the switch between the two distinct forms. M-form cells are much smaller (one-seventh the volume), slower growing, and less bioluminescent than P-form cells; they are also avirulent and produce fewer secondary metabolites. Observations of form switching by individual cells in nematodes revealed that M-form cells persisted in maternal nematode intestines, were the first cells to colonize infective juvenile (IJ) offspring, and then switched to the P form in the IJ intestine, arming these nematodes for the next cycle of insect infection.
Inverse problems and computational cell metabolic models: a statistical approach
NASA Astrophysics Data System (ADS)
Calvetti, D.; Somersalo, E.
2008-07-01
In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on a detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach, which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension in which the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
The Calderón problem with corrupted data
NASA Astrophysics Data System (ADS)
Caro, Pedro; Garcia, Andoni
2017-08-01
We consider the inverse Calderón problem of determining the conductivity inside a medium from electrical measurements on its surface. Ideally, these measurements determine the Dirichlet-to-Neumann map and, therefore, one usually assumes the data to be given by such a map. This situation corresponds to having access to infinite-precision measurements, which is totally unrealistic. In this paper, we study the Calderón problem assuming the data to contain measurement errors, and provide formulas to reconstruct the conductivity and its normal derivative on the surface. Additionally, we state the convergence rate of the method. Our approach is theoretical and has a stochastic flavour.
NASA Astrophysics Data System (ADS)
Murakami, H.; Chen, X.; Hahn, M. S.; Over, M. W.; Rockhold, M. L.; Vermeul, V.; Hammond, G. E.; Zachara, J. M.; Rubin, Y.
2010-12-01
Subsurface characterization for predicting groundwater flow and contaminant transport requires us to integrate large and diverse datasets in a consistent manner and to quantify the associated uncertainty. In this study, we sequentially assimilated multiple types of datasets for characterizing a three-dimensional heterogeneous hydraulic conductivity field at the Hanford 300 Area. The datasets included constant-rate injection tests, electromagnetic borehole flowmeter tests, lithology profiles and tracer tests. We used the method of anchored distributions (MAD), which is a modular-structured Bayesian geostatistical inversion method. MAD has two major advantages over other inversion methods. First, it can directly infer a joint distribution of parameters, which can be used as an input to stochastic simulations for prediction. In MAD, in addition to typical geostatistical structural parameters, the parameter vector includes multiple point values of the heterogeneous field, called anchors, which capture local trends and reduce uncertainty in the prediction. Second, MAD allows us to integrate the datasets sequentially in a Bayesian framework such that the posterior distribution is updated as each new dataset is included. The sequential assimilation can decrease the computational burden significantly. We applied MAD to assimilate different combinations of the datasets, and then compared the inversion results. For the injection and tracer test assimilation, we calculated temporal moments of pressure build-up and breakthrough curves, respectively, to reduce the data dimension. The massively parallel flow and transport code PFLOTRAN is used for simulating the tracer test. For comparison, we used different metrics based on breakthrough curves not used in the inversion, such as mean arrival time, peak concentration and early arrival time. This comparison is intended to yield the combined data worth, i.e. which combination of the datasets is the most effective for a certain metric, which will be useful for guiding further characterization efforts at the site and future characterization projects at other sites.
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates, obtained by a recursive stochastic algorithm, of the inverse of the filter input-data covariance matrix. The SIR performance as a function of sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input-data stochastic process.
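One standard recursive scheme for updating an inverse covariance estimate from streaming data snapshots is the Sherman-Morrison (RLS-type) rank-one update, sketched below; this is a generic illustration of the idea, not necessarily the paper's exact algorithm, and the forgetting factor lam is an assumed choice.

    import numpy as np

    def rls_inverse_update(P, x, lam=0.99):
        # Sherman-Morrison rank-one update of the inverse input covariance,
        # with exponential forgetting; one step per new data vector x
        Px = P @ x
        return (P - np.outer(Px, Px) / (lam + x @ Px)) / lam

    # usage: start from a scaled identity and feed data vectors one at a time
    d = 8
    P = np.eye(d) * 100.0
    rng = np.random.default_rng(0)
    for _ in range(500):
        P = rls_inverse_update(P, rng.standard_normal(d))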
Exact solution of a model DNA-inversion genetic switch with orientational control.
Visco, Paolo; Allen, Rosalind J; Evans, Martin R
2008-09-12
DNA inversion is an important mechanism by which bacteria and bacteriophage switch reversibly between phenotypic states. In such switches, the orientation of a short DNA element is flipped by a site-specific recombinase enzyme. We propose a simple model for a DNA-inversion switch in which recombinase production is dependent on the switch state (orientational control). Our model is inspired by the fim switch in E. coli. We present an exact analytical solution of the chemical master equation for the model switch, as well as stochastic simulations. Orientational control causes the switch to deviate from Poissonian behavior: the distribution of times in the on state shows a peak and successive flip times are correlated.
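A toy Gillespie (stochastic simulation algorithm) version of a switch with orientational control is sketched below: recombinase R is produced only in the ON orientation and catalyzes flips in both directions. The rates are arbitrary illustrative values, not the fim parameters, and the model is a caricature of the one solved in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    k_prod, k_deg, k_flip = 5.0, 1.0, 0.02
    state, R, t, t_end = 1, 0, 0.0, 500.0          # state = 1: ON orientation
    times, states = [0.0], [1]
    while t < t_end:
        a = np.array([k_prod if state == 1 else 0.0,   # R production (ON only)
                      k_deg * R,                       # R degradation
                      k_flip * R])                     # recombinase-driven flip
        a0 = a.sum()
        if a0 == 0.0:
            break   # absorbing state: no recombinase left to flip the switch back,
                    # one way orientational control breaks Poissonian symmetry
        t += rng.exponential(1.0 / a0)
        r = rng.choice(3, p=a / a0)
        if r == 0:
            R += 1
        elif r == 1:
            R -= 1
        else:
            state = 1 - state
        times.append(t)
        states.append(state)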
Gopinathan, D; Venugopal, M; Roy, D; Rajendran, K; Guillas, S; Dias, F
2017-09-01
Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra-Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems.
NASA Astrophysics Data System (ADS)
Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby
2013-12-01
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedmann, S J
Carbon capture and sequestration (CCS) has emerged as a key technology for dramatic short-term reduction in greenhouse gas emissions, in particular from large stationary sources. A key challenge in this arena is the monitoring and verification (M&V) of CO2 plumes in the deep subsurface. Towards that end, we have developed a tool that can simultaneously invert multiple subsurface data sets to constrain the location, geometry, and saturation of subsurface CO2 plumes. We have focused on a suite of unconventional geophysical approaches that measure changes in electrical properties (electrical resistance tomography, electromagnetic induction tomography) and bulk crustal deformation (tiltmeters). We also used constraints from the geology as rendered in a shared earth model (ShEM) and from the injection itself (e.g., total injected CO2). We describe a stochastic inversion method for mapping subsurface regions where CO2 saturation is changing. The technique combines prior information with measurements of injected CO2 volume, reservoir deformation and electrical resistivity. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. The method can (a) jointly reconstruct disparate data types such as surface or subsurface tilt, electrical resistivity, and injected CO2 volume measurements, (b) provide quantitative measures of the result uncertainty, (c) identify competing models when the available data are insufficient to definitively identify a single optimal model, and (d) rank the alternative models based on how well they fit the available data. We present results from general simulations of a hypothetical case derived from a real site. We also apply the technique to a field in Wyoming, where measurements collected during CO2 injection for enhanced oil recovery serve to illustrate the method's performance. The stochastic inversions provide estimates of the most probable location, shape and volume of the plume, and the most likely CO2 saturation. The results suggest that the method can reconstruct data with poor signal-to-noise ratio and use hard constraints available from many sites and applications. External interest in the approach and method is high, and commercial and DOE entities have already requested technical work using the newly developed methodology for CO2 monitoring.
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit is employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values, with correlation coefficients of 60-90% and 67-90% for the IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table for the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method performs better and is more appropriate for estimating the actual errors of ocean-color derived products than previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the derivation used.
Dushaw, Brian D; Sagen, Hanne
2017-12-01
Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that matched the observations. The approach is not ad hoc, but universal, and it has allowed inverse estimates for ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.
The effects of noise on binocular rivalry waves: a stochastic neural field model
NASA Astrophysics Data System (ADS)
Webber, Matthew A.; Bressloff, Paul C.
2013-03-01
We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave.
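The inverse Gaussian first-passage-time law can be checked numerically for the simplest wandering process, drifted Brownian motion hitting a fixed barrier; the sketch below (arbitrary drift, noise and barrier values, a far simpler setting than the neural field model) compares simulated first-passage times against the corresponding scipy inverse Gaussian distribution.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    mu, sigma, barrier, dt = 0.5, 1.0, 2.0, 1e-3   # drift, noise, threshold distance

    def first_passage_time():
        x, t = 0.0, 0.0
        while x < barrier:
            x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t

    samples = np.array([first_passage_time() for _ in range(500)])
    m = barrier / mu                 # inverse Gaussian mean
    lam = barrier**2 / sigma**2      # inverse Gaussian shape parameter
    # scipy parametrization: IG(mean m, shape lam) == invgauss(m/lam, scale=lam)
    print(samples.mean(), m)
    print(stats.kstest(samples, stats.invgauss(m / lam, scale=lam).cdf))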
Two-point T1 measurement: wide-coverage optimizations by stochastic simulations.
Lin, M S; Fletcher, J W; Donati, R M
1986-08-01
Stochastic reliability of T1 measurement from image signal ratios is examined in the ideal case by stochastic simulations in the context of wide-coverage optimizations. Precise measurements prove to be accurate, and accurate ones precise. Sign-preserved inversion-recovery (IR)/non-IR techniques are the best ratio method, reciprocal non-IR/IR ones being equivalent but inconvenient. Wide-coverage optima are relatively unsharp. Suggested guidelines for covering the 150- to 1500-ms T1 band are: minimal relevant TE; TI about 400 ms; effective repetition times approximately in the ratio TR2(IR)/TR1(non-IR) = 2.5-3.0, with a sum as long as possible, up to about TR1 + TR2 = 3.5-4.0 s; and signal averaging only after TR1 + TR2 has been lengthened to the said region. Different guidelines are also suggested for covering the 120- to 1200-ms and 200- to 1800-ms T1 bands. Typically, precisions and accuracies improve linearly or faster with increasing S/N and (S/N)2, respectively. Unnecessarily high pixel resolutions or thin slicings exact great penalties in accuracy. Progressively shortening TR1 eventually transforms a wide coverage into a sharp targeting, with small potential gains in a narrow T1 locality and large compromises almost everywhere else. The simulations yield insight into the applicability of standard error-propagation analyses in two-point T1 measurement.
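A toy analogue of the two-point ratio method, using standard idealized magnitude-signal models (TE effects neglected; timing values roughly follow the guidelines above, but this is not the authors' simulation code):

    import numpy as np
    from scipy.optimize import brentq

    TI, TR_ir, TR_sr = 400.0, 2400.0, 1000.0              # ms
    def ratio_model(T1):
        s_ir = 1 - 2*np.exp(-TI/T1) + np.exp(-TR_ir/T1)   # sign-preserved IR signal
        s_sr = 1 - np.exp(-TR_sr/T1)                      # non-IR (saturation-recovery) signal
        return s_ir / s_sr

    true_T1 = 800.0
    r_meas = ratio_model(true_T1)                         # noiseless "measurement"
    T1_est = brentq(lambda T1: ratio_model(T1) - r_meas, 50.0, 5000.0)
    print(T1_est)   # recovers 800 ms in the noise-free case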
Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knio, Omar M.
QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrosian, Vahe; Chen Qingrong
2010-04-01
The model of stochastic acceleration of particles by turbulence has been successful in explaining many observed features of solar flares. Here, we demonstrate a new method to obtain the accelerated electron spectrum and important acceleration model parameters from the high-resolution hard X-ray (HXR) observations provided by RHESSI. In our model, electrons accelerated at or very near the loop top (LT) produce thin target bremsstrahlung emission there and then escape downward producing thick target emission at the loop footpoints (FPs). Based on the electron flux spectral images obtained by the regularized spectral inversion of the RHESSI count visibilities, we derive several important parameters for the acceleration model. We apply this procedure to the 2003 November 3 solar flare, which shows an LT source up to 100-150 keV in HXR with a relatively flat spectrum in addition to two FP sources. The results imply the presence of strong scattering and a high density of turbulence energy with a steep spectrum in the acceleration region.
NASA Astrophysics Data System (ADS)
Provenzano, Giuseppe; Vardy, Mark E.; Henstock, Timothy J.
2018-06-01
Characterisation of the top 10-50 m of the subseabed is key for landslide hazard assessment, offshore structure engineering design and underground gas-storage monitoring. In this paper, we present a methodology for the stochastic inversion of ultra-high-frequency (UHF, 0.2-4.0 kHz) pre-stack seismic reflection waveforms, designed to obtain a decimetric-resolution remote elastic characterisation of the shallow sediments with minimal pre-processing and little a priori information. We use a genetic algorithm in which the space of possible solutions is sampled by explicitly decoupling the short and long wavelengths of the P-wave velocity model. This approach, combined with an objective function robust to cycle skipping, outperforms a conventional model parametrisation when the ground truth is offset from the centre of the search domain. The robust P-wave velocity model is used to precondition the width of the search range of the multi-parameter elastic inversion, thereby improving efficiency in high-dimensional parametrisations. Multiple independent runs provide a set of independent results from which the reproducibility of the solution can be estimated. Using a real data set acquired in Finneidfjord, Norway, we also demonstrate the sensitivity of UHF seismic inversion to shallow subseabed anomalies that play a role in submarine slope stability. The methodology thus has the potential to become an important practical tool for marine ground-model building in spatially heterogeneous areas, reducing the reliance on expensive and time-consuming coring campaigns for geohazard mitigation.
NASA Astrophysics Data System (ADS)
Pankratov, Oleg; Kuvshinov, Alexey
2016-01-01
Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, one question remains poorly addressed: uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous number of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, when a 3-D setup is invoked. This urges the development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest in fractions of a second on high-performance multi-core platforms. But even with these codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to sample. In this paper we address both of these issues. First, we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition, we show how Hessian handling (inverse, square root) can be made practicable via a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results, and demonstrate how this analysis can be performed within a deterministic approach. In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes, with special emphasis on recent advances in the integral equation approach.
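A gradient-informed M-H proposal of broadly this flavour is the Metropolis-adjusted Langevin step; the generic sketch below (not the authors' scheme, which additionally exploits Hessian information) shows the drifted proposal and the asymmetric-proposal correction.

    import numpy as np

    def mala(logp, grad_logp, x0, tau=0.1, n=5000, rng=None):
        rng = rng or np.random.default_rng(0)
        x, samples = np.array(x0, float), []
        for _ in range(n):
            mean_x = x + tau * grad_logp(x)          # drift toward high probability
            prop = mean_x + np.sqrt(2 * tau) * rng.standard_normal(x.shape)
            mean_p = prop + tau * grad_logp(prop)
            # Metropolis-Hastings correction for the asymmetric proposal
            log_q_fwd = -np.sum((prop - mean_x)**2) / (4 * tau)
            log_q_rev = -np.sum((x - mean_p)**2) / (4 * tau)
            if np.log(rng.uniform()) < logp(prop) - logp(x) + log_q_rev - log_q_fwd:
                x = prop
            samples.append(x.copy())
        return np.array(samples)

    # example: sample a standard 2-D Gaussian
    logp = lambda x: -0.5 * np.sum(x**2)
    grad = lambda x: -x
    chain = mala(logp, grad, np.zeros(2), tau=0.2, n=2000)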
Theoretical consideration of the energy resolution in planar HPGe detectors for low energy X-rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samedov, Victor V.
In this work, theoretical consideration of the processes in planar High Purity Ge (HPGe) detectors for low energy X-rays using the random stochastic processes formalism was carried out. Using this formalism, the generating function of the process of X-ray registration in a planar HPGe detector was derived. The power series expansions of the detector amplitude and the variance in terms of the inverse bias voltage were derived. The coefficients of these expansions allow the determination of the Fano factor, the electron mobility-lifetime product, the nonuniformity of the trap density, and other characteristics of the semiconductor material. (authors)
The isolation limits of stochastic vibration
NASA Technical Reports Server (NTRS)
Knospe, C. R.; Allaire, P. E.
1993-01-01
The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.
Mathiopoulos, K D; Lanzaro, G C
1995-06-01
The epidemiology of malaria in Africa is complicated by the fact that its principal vector, the mosquito Anopheles gambiae, constitutes a complex of six sibling species. Each species is characterized by a unique array of paracentric inversions, as deduced by karyotypic analysis. In addition, most of the species carry a number of polymorphic inversions. In order to develop an understanding of the evolutionary histories of different parts of the genome, we compared the genetic variation of areas inside and outside inversions in two distinct inversion karyotypes of A. gambiae. Thirty-five cDNA clones were mapped on the five arms of the A. gambiae chromosomes with divisional probes. Sixteen of these clones, localized both inside and outside inversions of chromosome 2, were used as probes in order to determine the nucleotide diversity of different parts of the genome in the two inversion karyotypes. We observed that the sequence diversity inside the inversion is more than three-fold lower than in areas outside the inversion and that the degree of divergence increases gradually at loci at increasing distance from the inversion. To interpret the data we present a selectionist and a stochastic model, both of which point to a relatively recent origin of the studied inversion and may suggest differences between the evolutionary history of inversions in Anopheles and Drosophila species.
NASA Astrophysics Data System (ADS)
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov chain Monte Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
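As a rough illustration of the dictionary idea described above, the sketch below builds a local model-error basis from the K nearest stored parameter sets and strips the projection of the residual onto that basis before a likelihood evaluation; the dictionary contents, dimensions and random data are hypothetical placeholders, not the authors' GPR setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_error_basis(theta, dict_params, dict_errors, k=8, rank=3):
    """Local model-error basis from the K nearest dictionary entries."""
    d2 = np.sum((dict_params - theta)**2, axis=1)
    idx = np.argsort(d2)[:k]
    E = dict_errors[idx].T                 # columns = stored error vectors
    U, s, _ = np.linalg.svd(E, full_matrices=False)
    return U[:, :rank]                     # leading local model-error directions

def corrected_residual(d_obs, d_approx, B):
    """Remove the projection of the residual onto the model-error basis."""
    r = d_obs - d_approx
    return r - B @ (B.T @ r)

# Hypothetical dictionary: 200 stored (parameters, detailed-minus-approximate) pairs
n_data, n_par = 50, 4
dict_params = rng.uniform(0, 1, (200, n_par))
dict_errors = 0.1 * rng.standard_normal((200, n_data))

theta = rng.uniform(0, 1, n_par)
B = knn_error_basis(theta, dict_params, dict_errors)
r_clean = corrected_residual(rng.standard_normal(n_data),
                             rng.standard_normal(n_data), B)
```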
Banerjee, Biswanath; Roy, Debasish; Vasu, Ram Mohan
2009-08-01
A computationally efficient pseudodynamical filtering setup is established for elasticity imaging (i.e., reconstruction of shear modulus distribution) in soft-tissue organs given statically recorded and partially measured displacement data. Unlike a regularized quasi-Newton method (QNM) that needs inversion of ill-conditioned matrices, the authors explore pseudodynamic extended and ensemble Kalman filters (PD-EKF and PD-EnKF) that use a parsimonious representation of states and bypass explicit regularization by recursion over pseudotime. Numerical experiments with QNM and the two filters suggest that the PD-EnKF is the most robust performer as it exhibits no sensitivity to process noise covariance and yields good reconstruction even with small ensemble sizes.
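For readers unfamiliar with ensemble Kalman updates, here is a generic stochastic EnKF analysis step in Python. It is a textbook-style sketch (perturbed observations, sample covariances), not the authors' pseudodynamic recursion; the observation operator and noise level in the demo are assumptions.

```python
import numpy as np

def enkf_update(X, d_obs, H, R_std, rng):
    """One stochastic EnKF analysis step.

    X     : (n_state, n_ens) ensemble of parameter fields (e.g. shear moduli)
    d_obs : (n_obs,) measured displacements
    H     : callable mapping a state vector to predicted observations
    """
    n_ens = X.shape[1]
    Y = np.column_stack([H(X[:, j]) for j in range(n_ens)])
    Xm, Ym = X.mean(1, keepdims=True), Y.mean(1, keepdims=True)
    Xp, Yp = X - Xm, Y - Ym
    C_xy = Xp @ Yp.T / (n_ens - 1)                      # cross-covariance
    C_yy = Yp @ Yp.T / (n_ens - 1) + R_std**2 * np.eye(len(d_obs))
    K = C_xy @ np.linalg.solve(C_yy, np.eye(len(d_obs)))  # Kalman gain
    D = d_obs[:, None] + R_std * rng.standard_normal(Y.shape)  # perturbed obs
    return X + K @ (D - Y)

rng = np.random.default_rng(0)
H = lambda x: x[:5]                       # hypothetical: observe first 5 entries
X_post = enkf_update(rng.standard_normal((20, 30)), np.zeros(5), H, 0.1, rng)
```

The recursion over ensemble statistics is what lets such filters sidestep the explicit inversion of ill-conditioned matrices that the regularized quasi-Newton method requires.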
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heydari, M. H.; Hooshmandasl, M. R.
Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms the problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of stochastic population growth models and the stochastic pendulum problem.
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
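The core simulated-annealing loop the abstract refers to can be illustrated with a small Python sketch that anneals a set of 2D waypoints around a circular obstacle; the energy function, cooling schedule and obstacle are invented for illustration and are unrelated to the actual robot kinematics.

```python
import numpy as np

rng = np.random.default_rng(2)
obstacle, radius = np.array([0.5, 0.5]), 0.25
start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def energy(path):
    """Path length plus a large penalty for waypoints inside the obstacle."""
    pts = np.vstack([start, path, goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    inside = np.linalg.norm(path - obstacle, axis=1) < radius
    return length + 10.0 * inside.sum()

path = rng.uniform(0, 1, (8, 2))            # 8 free waypoints
T = 1.0
for step in range(20000):
    trial = path + 0.05 * rng.standard_normal(path.shape)
    dE = energy(trial) - energy(path)
    if dE < 0 or rng.uniform() < np.exp(-dE / T):  # Metropolis acceptance
        path = trial
    T *= 0.9997                             # geometric cooling schedule
print(energy(path))
```

Occasionally accepting uphill moves while the "temperature" T is high is precisely what keeps the optimizer from being trapped in local minima of the energy-like error measure.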
Siddiqui, Hasib; Bouman, Charles A
2007-03-01
Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques, resolution synthesis-based denoising (RSD), and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate to the descreened image. The modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts, but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griebel, M.; Rüttgers, A.
The multiscale FENE model is applied to a 3D square-square contraction flow problem. For this purpose, the stochastic Brownian configuration field method (BCF) has been coupled with our fully parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. The robustness of the BCF method enables the numerical simulation of high Deborah number flows for which most macroscopic methods suffer from stability issues. The results of our simulations are compared with those of experimental measurements from the literature and show very good agreement. In particular, flow phenomena such as strong vortex enhancement, streamline divergence and a flow inversion for highly elastic flows are reproduced. Due to their computational complexity, our simulations require massively parallel computations. Using a domain decomposition approach with MPI, the implementation achieves excellent scale-up results for up to 128 processors.
Stochastic Industrial Source Detection Using Lower Cost Methods
NASA Astrophysics Data System (ADS)
Thoma, E.; George, I. J.; Brantley, H.; Deshmukh, P.; Cansler, J.; Tang, W.
2017-12-01
Hazardous air pollutants (HAPs) can be emitted from a variety of sources in industrial facilities, energy production, and commercial operations. Stochastic industrial sources (SISs) represent a subcategory of emissions from fugitive leaks, variable area sources, malfunctioning processes, and improperly controlled operations. From the shared perspective of industries and communities, cost-effective detection of mitigable SIS emissions can yield benefits such as safer working environments, cost savings through reduced product loss, lower air shed pollutant impacts, and improved transparency and community relations. Methods for SIS detection can be categorized by their spatial regime of operation, ranging from component-level inspection to high-sensitivity kilometer-scale surveys. Methods can be temporally intensive (providing snap-shot measures) or sustained, in both time-integrated and continuous forms. Each method category has demonstrated utility; however, broad adoption (or routine use) has thus far been limited by cost and implementation viability. Described here is a subset of SIS methods explored by the U.S. EPA's next generation emission measurement (NGEM) program that focuses on lower cost methods and models. An emerging systems approach that combines multiple forms to help compensate for the reduced performance of lower cost systems is discussed. A case study of a multi-day HAP emission event observed by a combination of low cost sensors, open-path spectroscopy, and passive samplers is detailed. Early field results of a novel field gas chromatograph coupled with a fast HAP concentration sensor are described. Progress toward near real-time inverse source triangulation assisted by pre-modeled facility profiles using the Los Alamos Quick Urban & Industrial Complex (QUIC) model is discussed.
FAST: a framework for simulation and analysis of large-scale protein-silicon biosensor circuits.
Gu, Ming; Chakrabartty, Shantanu
2013-08-01
This paper presents a computer aided design (CAD) framework for verification and reliability analysis of protein-silicon hybrid circuits used in biosensors. It is envisioned that similar to integrated circuit (IC) CAD design tools, the proposed framework will be useful for system level optimization of biosensors and for discovery of new sensing modalities without resorting to laborious fabrication and experimental procedures. The framework referred to as FAST analyzes protein-based circuits by solving inverse problems involving stochastic functional elements that admit non-linear relationships between different circuit variables. In this regard, FAST uses a factor-graph netlist as a user interface and solving the inverse problem entails passing messages/signals between the internal nodes of the netlist. Stochastic analysis techniques like density evolution are used to understand the dynamics of the circuit and estimate the reliability of the solution. As an example, we present a complete design flow using FAST for synthesis, analysis and verification of our previously reported conductometric immunoassay that uses antibody-based circuits to implement forward error-correction (FEC).
Bowl Inversion and Electronic Switching of Buckybowls on Gold.
Fujii, Shintaro; Ziatdinov, Maxim; Higashibayashi, Shuhei; Sakurai, Hidehiro; Kiguchi, Manabu
2016-09-21
Bowl-shaped π-conjugated compounds, or buckybowls, are a novel class of sp²-hybridized nanocarbon materials. In contrast to tubular carbon nanotubes and ball-shaped fullerenes, the buckybowls feature structural flexibility. Bowl-to-bowl structural inversion is one of the unique properties of the buckybowls in solutions. Bowl inversion on a surface modifies the metal-molecule interactions through bistable switching between bowl-up and bowl-down states on the surface, which makes surface-adsorbed buckybowls a relevant model system for elucidation of the mechano-electronic properties of nanocarbon materials. Here, we report a combination of scanning tunneling microscopy (STM) measurements and ab initio atomistic simulations to identify the adlayer structure of the sumanene buckybowl on Au(111) and reveal its unique bowl inversion behavior. We demonstrate that the bowl inversion can be induced by approaching the STM tip toward the molecule. By tuning the local metal-molecule interaction using the STM tip, the sumanene buckybowl exhibits structural bistability with a switching rate that is two orders of magnitude faster than that of the stochastic inversion process.
A Stochastic Seismic Model for the European Arctic
NASA Astrophysics Data System (ADS)
Hauser, J.; Dyer, K.; Pasyanos, M. E.; Bungum, H.; Faleide, J. I.; Clark, S. A.
2009-12-01
The development of three-dimensional seismic models for the crust and upper mantle has traditionally focused on finding one model that provides the best fit to the data, while observing some regularization constraints. Such deterministic models however ignore a fundamental property of many inverse problems in geophysics, non-uniqueness; that is, if a model can be found to satisfy given datasets, an infinite number of alternative models will exist that satisfy the datasets equally well. The solution to the inverse problem presented here is therefore a stochastic model, an ensemble of models that satisfy all available data to the same degree, the posterior distribution. It is based on two sources of information: (1) the data, in this work surface-wave group velocities, regional body-wave travel times, gravity data, compiled 1D velocity models, and thickness relationships between sedimentary rocks and underlying crystalline rocks, and (2) prior information, which is independent of the data. A Markov chain Monte Carlo (MCMC) algorithm allows us to sample models from the prior distribution and test them against the data to generate the posterior distribution. While being computationally much more expensive, such a stochastic inversion provides a more complete picture of solution space and makes it possible to seamlessly combine various datasets. The resulting stochastic model gives an overview of the different structures that can explain the observed datasets while taking the uncertainties in the data into account. Stochastic models are important for improving seismic monitoring capabilities as they allow prediction not only of new observables but also of their uncertainties. The model introduced here for the crust and upper mantle structure of the European Arctic is parametrized by a series of 8 layers in an equidistant mesh. Within each layer the seismic parameters (Vp, Vs and density) can vary linearly with depth. This makes it possible to model changes of seismic parameters within the sediments and the crystalline crust without introducing artificial discontinuities that would result from parametrizing the structure using layers with constant seismic parameters. The complex geology of the region, encompassing oceanic crust, continental shelf regions, rift basins and old cratonic crust, and the non-uniform coverage of the region by data with varying levels of uncertainty make the European Arctic a challenging setting for any imaging technique and therefore an ideal environment for demonstrating the practical advantages of a stochastic model. Maps of sediment thickness and thickness of the crystalline crust derived from the posterior distribution are in good agreement with knowledge of the regional tectonic setting. The predicted uncertainties, which are more important than the absolute values, correlate well with the variation in data coverage and data quality in the region. This indicates that the technique behaves as expected and that we are properly tuning the methodology by allowing the Markov chain adequate time to fully sample the model space.
NASA Astrophysics Data System (ADS)
Wu, Cheng-Feng; Huang, Huey-Chu
2015-10-01
The Taiwan Chelungpu Fault Drilling Project (TCDP) drilled a 2-km-deep hole 2.4 km east of the surface rupture of the 1999 Chi-Chi earthquake (Mw 7.6), near the town of Dakeng. Geophysical well logs at the TCDP site were run over depths ranging from 500 to 1,900 m to obtain the physical properties of the fault zones and adjacent damage zones. These data provide good reference material for examining the validity of velocity structures derived from microtremor array measurements; therefore, we conduct measurements for a total of four arrays at two sites near the TCDP drilling sites. The phase velocities at frequencies of 0.2-5 Hz are calculated using the frequency-wavenumber (f-k) spectrum method. Then the S-wave velocity structures are estimated by employing surface wave inversion techniques. The S-wave velocity from the differential inversion technique gradually increases from 1.52 to 2.22 km/s at depths between 585 and 1,710 m. This result is similar to those from the velocity logs, which range from 1.4 km/s at a depth of 597 m to 2.98 km/s at a depth of 1,705 m. The stochastic inversion results, in turn, are similar to those from the seismic reflection methods and the lithostratigraphy of the TCDP-A borehole. These results show that microtremor array measurement provides a good tool for estimating deep S-wave velocity structure.
GARCH modelling of covariance in dynamical estimation of inverse solutions
NASA Astrophysics Data System (ADS)
Galka, Andreas; Yamashita, Okito; Ozaki, Tohru
2004-12-01
The problem of estimating unobserved states of spatially extended dynamical systems poses an inverse problem, which can be solved approximately by a recently developed variant of Kalman filtering; in order to provide the model of the dynamics with more flexibility with respect to space and time, we suggest combining the concept of GARCH modelling of covariance, well known in econometrics, with Kalman filtering. We formulate this algorithm for spatiotemporal systems governed by stochastic diffusion equations and demonstrate its feasibility by presenting a numerical simulation designed to imitate the situation of the generation of electroencephalographic recordings by the human cortex.
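A minimal sketch of the GARCH-Kalman combination on a scalar AR(1) state is given below; the state model and GARCH(1,1) coefficients are illustrative assumptions, and the actual method operates on spatiotemporal diffusion models rather than this toy system.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic scalar AR(1) state observed in noise
a, obs_std, n = 0.95, 0.5, 500
x_true = np.zeros(n)
for t in range(1, n):
    x_true[t] = a * x_true[t-1] + 0.3 * rng.standard_normal()
y = x_true + obs_std * rng.standard_normal(n)

# Kalman filter whose process-noise variance q follows a GARCH(1,1) recursion
omega, alpha, beta = 0.01, 0.2, 0.7
x_hat, P, q = 0.0, 1.0, 0.1
estimates = []
for t in range(n):
    x_pred, P_pred = a * x_hat, a * a * P + q        # predict
    S = P_pred + obs_std**2                          # innovation variance
    innov = y[t] - x_pred
    K = P_pred / S                                   # Kalman gain
    x_hat, P = x_pred + K * innov, (1 - K) * P_pred  # update
    q = omega + alpha * innov**2 + beta * q          # GARCH variance update
    estimates.append(x_hat)
```

Letting the innovations drive the process-noise variance is the "flexibility with respect to space and time" in miniature: the filter inflates its own uncertainty exactly when the data stop matching its predictions.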
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike Bayesian MCMC-based approaches, marginal pdfs, means, variances or covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
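The NNLS route to the MAP estimate mentioned above can be sketched in a few lines: stack the whitened data equations with the Gaussian prior written as pseudo-observations, then solve the non-negative least-squares problem. The toy Green's function matrix, noise levels and prior widths below are synthetic placeholders, not the Central Peru case.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Toy linear slip problem: d = G m + noise, with true slip non-negative
n_obs, n_slip = 40, 10
G = rng.standard_normal((n_obs, n_slip))
m_true = np.maximum(rng.standard_normal(n_slip), 0.0)
d = G @ m_true + 0.05 * rng.standard_normal(n_obs)

# Gaussian prior N(m0, sigma_m^2 I) truncated at zero, stacked as pseudo-data
sigma_d, sigma_m, m0 = 0.05, 1.0, np.zeros(n_slip)
A = np.vstack([G / sigma_d, np.eye(n_slip) / sigma_m])
b = np.concatenate([d / sigma_d, m0 / sigma_m])

m_map, _ = nnls(A, b)   # MAP of the single-truncated Gaussian posterior
print(m_map)
```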
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise
2003-01-01
This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
Pareto joint inversion of 2D magnetotelluric and gravity data
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach, with the model described by a set of polygons, was used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO). This algorithm was properly adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a method of stochastic global optimization, it requires a lot of proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages of the proposed solution of joint inversion problems. First of all, the Pareto scheme eliminates the cumbersome rescaling of target functions, which can strongly affect the final solution. Secondly, a whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of the work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where the interesting density distributions are relatively shallow and resistivity changes are related to deeper parts. These conditions are well suited for joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests on real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13.
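A central ingredient of any Pareto joint inversion is extracting the non-dominated proposals from the evaluated swarm. The following sketch shows a straightforward O(n²) Pareto-front filter over pairs of misfits, with random values standing in for real MT and gravity objective evaluations.

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows; F[i] = (misfit_MT, misfit_gravity)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # row j dominates i if it is <= in every objective and < in at least one
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

rng = np.random.default_rng(5)
F = rng.uniform(0, 1, (500, 2))   # misfits of 500 swarm proposals
front = pareto_front(F)
print(len(front), "non-dominated proposals")
```

Because no point on the front is better than another in both objectives at once, the whole front is returned from a single run and the user picks a final model afterwards, exactly the property the abstract highlights.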
Stochastic simulation of karst conduit networks
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José
2012-01-01
Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of conduits, the fractal dimension of the network, the rose diagram of directions, the Z-histogram and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points). The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information) and (3) can easily be modified when implemented in a hydraulic inverse modeling procedure. Several synthetic examples are given to illustrate the methodology and real conduit network data are used to generate simulated networks that mimic real geometries and topology.
Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Peng; Schwab, Christoph
2016-07-01
We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
NASA Astrophysics Data System (ADS)
Hu, R.; Brauchler, R.; Herold, M.; Bayer, P.; Sauter, M.
2009-04-01
Rarely is it possible to draw significant conclusions about the geometry and properties of subsurface geological structures using the information typically obtained from boreholes, since soil exploration is only representative of the position from which the soil sample is taken. Conventional aquifer investigation methods like pumping tests can provide hydraulic properties of a larger area; however, they yield only integral information. This information is insufficient to develop groundwater models, especially contaminant transport models, which require information about the spatial distribution of the hydraulic properties of the subsurface. Hydraulic tomography is an innovative method which has the potential to spatially resolve three-dimensional structures of natural aquifer bodies. The method employs short-term hydraulic tests performed between two or more wells, whereby the pumped intervals (sources) and the observation points (receivers) are separated by double packer systems. In order to optimize the computationally intensive tomographic inversion of transient hydraulic data we have decided to couple two inversion approaches: (a) hydraulic travel time inversion and (b) steady shape inversion. (a) Hydraulic travel time inversion is based on the solution of the travel time integral, which describes the relationship between the travel time of the maximum signal variation of a transient hydraulic signal and the diffusivity between source and receiver. The travel time inversion is computationally extremely effective and robust; however, it is limited to the determination of diffusivity. In order to overcome this shortcoming we use the estimated diffusivity distribution as a starting model for the steady shape inversion, with the goal of separating the estimated diffusivity distribution into its components, hydraulic conductivity and specific storage. (b) The steady shape inversion utilizes the fact that at steady shape conditions, drawdown varies with time but the hydraulic gradient does not. By this trick, transient data can be analyzed with the computational efficiency of a steady state model, which proceeds hundreds of times faster than transient models. Finally, a specific storage distribution can be calculated from the diffusivity and hydraulic conductivity reconstructions derived from travel time and steady shape inversion. The groundwork of this study is the aquifer-analogue study of Bayer (1999), in which six parallel profiles of a natural sedimentary body with a size of 16m x 10m x 7m were mapped in high resolution with respect to structural and hydraulic parameters. Based on these results and using geostatistical interpolation methods, Maji (2005) designed a three-dimensional hydraulic model with a resolution of 5cm x 5cm x 5cm. This hydraulic model was used to simulate a large number of short-term pumping tests in a tomographical array. The high-resolution parameter reconstructions gained from the inversion of the simulated pumping test data demonstrate that the proposed inversion scheme allows the reconstruction of the individual architectural elements and their hydraulic properties with a higher resolution than conventional hydraulic and geological investigation methods. Bayer P (1999) Aquifer-Analog-Studium in grobklastischen braided river Ablagerungen: Sedimentäre/hydrogeologische Wandkartierung und Kalibrierung von Georadarmessungen, Diplomkartierung am Lehrstuhl für Angewandte Geologie, Universität Tübingen, 25 pp. Maji, R.
(2005) Conditional Stochastic Modelling of DNAPL Migration and Dissolution in a High-resolution Aquifer Analog, Ph.D. thesis at the University of Waterloo, 187 pp.
Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran
2018-06-22
Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches, as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
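The parametric-bootstrap principle behind FIESTA can be illustrated with a deliberately simple stand-in estimator (a sample variance) rather than an LMM heritability estimator: simulate from the fitted model, re-estimate each time, and read off percentile confidence intervals with no asymptotic standard errors involved.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy estimator: variance of a normal sample (stand-in for a heritability estimate)
data = rng.normal(0.0, 2.0, size=200)
theta_hat = data.var(ddof=1)

# Parametric bootstrap: simulate from the fitted model, re-estimate each replicate
boot = np.array([rng.normal(0.0, np.sqrt(theta_hat), size=data.size).var(ddof=1)
                 for _ in range(2000)])
ci = np.percentile(boot, [2.5, 97.5])   # 95% CI without asymptotic SEs
print(theta_hat, ci)
```

Because the bootstrap distribution is built by resampling, it respects bounded parameter spaces automatically, which is exactly the failure mode of SE-based intervals the abstract describes.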
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of the time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and on inversion constraints such as temporal roughness. Five hundred realizations, requiring 3.5 h on a single 12-core node, were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov chain Monte Carlo (MCMC) stochastic inverse approach may take days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
NASA Astrophysics Data System (ADS)
Nevison, C. D.; Andrews, A. E.; Thoning, K. W.; Saikawa, E.; Dlugokencky, E. J.; Sweeney, C.; Benmergui, J. S.
2016-12-01
The Carbon Tracker Lagrange (CTL) regional inversion framework is used to estimate North American nitrous oxide (N2O) emissions of 1.6 ± 0.4 Tg N/yr over 2008-2013. More than half of the North American emissions are estimated to come from the central agricultural belt, extending from southern Canada to Texas, and are strongest in spring and early summer, consistent with a nitrogen fertilizer-driven source. The estimated N2O flux from the Midwestern corn/soybean belt and the more northerly wheat belt corresponds to 5% of synthetic + organic N fertilizer applied to those regions. While earlier regional atmospheric inversion studies have suggested that global inventories such as EDGAR may be underestimating U.S. anthropogenic N2O emissions by a factor of 3 or more, our results, integrated over a full calendar year, are generally consistent with those inventories and with global inverse model results and budget constraints. The CTL framework is a Bayesian method based on footprints from the Stochastic Time-Inverted Lagrangian Transport (STILT) model applied to atmospheric N2O data from the National Oceanic and Atmospheric Administration (NOAA) Global Greenhouse Gas Reference Network, including surface, aircraft and tall tower platforms. The CTL inversion results are sensitive to the prescribed boundary condition or background value of N2O, which is estimated based on a new Empirical BackGround (EBG) product derived from STILT back trajectories applied to NOAA data. Analysis of the N2O EBG products suggests a significant, seasonally-varying influence on surface N2O data due to the stratospheric influx of N2O-depleted air. Figure 1. Posterior annual mean N2O emissions for 2010 estimated with the CTL regional inversion framework. The locations of NOAA surface and aircraft data used in the inversion are superimposed as black circles and grey triangles, respectively. Mobile surface sites are indicated with asterisks.
Hybrid ODE/SSA methods and the cell cycle model
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, M.; Cao, Y.
2017-07-01
Stochastic effects in cellular systems have been an important topic in systems biology. Stochastic modeling and simulation methods are important tools to study such effects. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows unique advantages in the modeling and simulation of biochemical systems. The efficiency of the hybrid method is usually limited by the reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers: RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented, and a detailed discussion of the performance of the three ODE solvers is given.
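The stochastic subsystem referred to above is typically advanced with Gillespie's stochastic simulation algorithm (SSA). A minimal SSA for a birth-death process is sketched below, with illustrative rate constants, to show the propensity/waiting-time logic that repeatedly interrupts the ODE integration in hybrid schemes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal Gillespie SSA for a birth-death process:
#   0 -> X  with rate k1;   X -> 0  with rate k2 * X
k1, k2 = 10.0, 0.1
x, t, t_end = 0, 0.0, 100.0
traj = [(t, x)]
while t < t_end:
    a = np.array([k1, k2 * x])          # reaction propensities
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)      # exponential waiting time to next event
    if rng.uniform() < a[0] / a0:       # pick which reaction fires
        x += 1
    else:
        x -= 1
    traj.append((t, x))
# stationary mean of x should approach k1 / k2 = 100
```

Every firing requires drawing a waiting time and an event, which is why a hybrid solver's throughput is dominated by the stochastic subsystem whenever its propensities are large.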
Towards a new technique to construct a 3D shear-wave velocity model based on converted waves
NASA Astrophysics Data System (ADS)
Hetényi, G.; Colavitti, L.
2017-12-01
A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The inversion of RFs represents a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of interfaces and the characteristics of the velocity structure. The solution we envisage to manage the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. Our first focus of application is the Central Alps, where a 20-year-long dataset of high-quality teleseismic events recorded at 81 stations is available, and where a high-resolution P-wave velocity model is available (Diehl et al., 2009). We plan to extend the 3D shear-wave velocity inversion method to the entire Alpine domain in the frame of the AlpArray project, and to apply it to other areas with a dense network of broadband seismometers.
NASA Astrophysics Data System (ADS)
Oware, E. K.
2017-12-01
Geophysical quantification of hydrogeological parameters typically involves limited noisy measurements coupled with inadequate understanding of the target phenomenon. Hence, a deterministic solution is unrealistic in light of the largely uncertain inputs. Stochastic imaging (SI), in contrast, provides multiple equiprobable realizations that enable probabilistic assessment of aquifer properties in a realistic manner. Generation of geologically realistic prior models is central to SI frameworks. Higher-order statistics for representing prior geological features in SI are, however, usually borrowed from training images (TIs), which may produce undesirable outcomes if the TIs are unrepresentative of the target structures. The Markov random field (MRF)-based SI strategy provides a data-driven alternative to TI-based SI algorithms. In the MRF-based method, the simulation of spatial features is guided by Gibbs energy (GE) minimization. Local configurations with smaller GEs have a higher likelihood of occurrence and vice versa. The parameters of the Gibbs distribution for computing the GE are estimated from the hydrogeophysical data, thereby enabling the generation of site-specific structures in the absence of reliable TIs. In Metropolis-like SI methods, the variance of the transition probability controls the jump size. The procedure is a standard Markov chain Monte Carlo (McMC) method when a constant variance is assumed, and becomes simulated annealing (SA) when the variance (cooling temperature) is allowed to decrease gradually with time. We observe that in certain problems, the large variance typically employed at the beginning to hasten burn-in may not be ideal for sampling at the equilibrium state. The power of SA stems from its flexibility to adaptively scale the variance at different stages of the sampling. Degeneration of results was reported in a previous implementation of the MRF-based SI strategy based on a constant variance. Here, we present an updated version of the algorithm based on SA that appears to resolve the degeneration problem, with seemingly improved results. We illustrate the performance of the SA version with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.
Fractional calculus in hydrologic modeling: A numerical perspective
Benson, David A.; Meerschaert, Mark M.; Revielle, Jordan
2013-01-01
Fractional derivatives can be viewed either as handy extensions of classical calculus or, more deeply, as mathematical operators defined by natural phenomena. This follows the view that the diffusion equation is defined as the governing equation of a Brownian motion. In this paper, we emphasize that fractional derivatives come from the governing equations of stable Lévy motion, and that fractional integration is the corresponding inverse operator. Fractional integration, and its multi-dimensional extensions derived in this way, are intimately tied to fractional Brownian (and Lévy) motions and noises. By following these general principles, we discuss the Eulerian and Lagrangian numerical solutions to fractional partial differential equations, and Eulerian methods for stochastic integrals. These numerical approximations illuminate the essential nature of the fractional calculus. PMID:23524449
NASA Astrophysics Data System (ADS)
Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.
2017-07-01
We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
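The exponential/Poisson relationship demonstrated in the paper is easy to reproduce numerically: draw exponential inter-event times, bin the cumulative arrival times into unit windows, and check that the counts have Poisson statistics (mean ≈ variance ≈ rate). The rate and duration below are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
rate, t_total = 5.0, 10000.0

# Exponential inter-event times for a constant-rate Markov process
gaps = rng.exponential(1.0 / rate, size=int(rate * t_total * 1.2))
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals < t_total]

# Count events falling in each unit-length window
counts = np.bincount(arrivals.astype(int), minlength=int(t_total))

# Poisson counting statistics: mean and variance both close to the rate
print(counts.mean(), counts.var())   # both should be near 5.0
```

Overdispersion (variance exceeding the mean), as modeled by the negative binomial in the paper, would show up here as counts.var() drifting above counts.mean() once extra noise is injected into the counting chain.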
NASA Astrophysics Data System (ADS)
Robinson, B. H.; Dalton, L. R.
1981-01-01
The modulation perturbation treatment of Galloway and Dalton is applied to the solution of the stochastic Liouville equation for the spin density matrix which incorporates an anisotropic rotational diffusion operator. Pseudosecular and saturation terms of the spin Hamiltonian are explicitly considered, as is the interaction of the electron spins with the applied Zeeman modulation field. The modulation perturbation treatment results in a factor of four improvement in computational speed relative to inversion of the full supermatrix with little or no loss of computational accuracy. The theoretical simulations of EPR and ST-EPR spectra are in nearly quantitative agreement with experimental spectra taken under high resolution conditions.
A stochastic method for computing hadronic matrix elements
Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...
2014-01-24
In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.
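While the lattice-QCD specifics are beyond a short example, the generic idea of stochastic estimation with noise sources can be shown with a Hutchinson-style estimate of tr(M⁻¹) using Z2 noise vectors; the matrix here is a synthetic symmetric positive-definite stand-in, not a Dirac operator, and the construction is an analogy rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic well-conditioned SPD matrix standing in for a lattice operator
n = 200
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
M = M @ M.T + n * np.eye(n)

def stochastic_trace_inv(M, n_noise):
    """Hutchinson estimator: average eta^T M^{-1} eta over Z2 noise sources."""
    est = 0.0
    for _ in range(n_noise):
        eta = rng.choice([-1.0, 1.0], size=n)    # Z2 noise vector
        est += eta @ np.linalg.solve(M, eta)     # one solve per source
    return est / n_noise

exact = np.trace(np.linalg.inv(M))
print(exact, stochastic_trace_inv(M, 50))
```

The statistical error of such estimators falls like one over the square root of the number of noise sources, which is why the abstract's favorable signal-to-noise scaling with volume matters for large lattices.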
Three Dimensional Time Dependent Stochastic Method for Cosmic-ray Modulation
NASA Astrophysics Data System (ADS)
Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J. M.
2009-12-01
A proper understanding of the different behavior of galactic cosmic-ray intensities in different solar cycle phases requires solving the modulation equation with time dependence. We present a detailed description of our newly developed stochastic approach for cosmic ray modulation, which we believe is the first attempt to solve the time dependent Parker equation in 3D, evolving from our 3D steady state stochastic approach, which has been benchmarked extensively against the finite difference method. Our 3D stochastic method differs from other stochastic approaches in the literature (Ball et al. 2005, Miyake et al. 2005, and Florinski 2008) in several ways. For example, we employ spherical coordinates, which makes the code much more efficient by reducing coordinate transformations. Moreover, our stochastic differential equations differ from others because our map from Parker's original equation to the Fokker-Planck equation extends the method used by Jokipii and Levy (1977), although all 3D stochastic methods are essentially based on the Itô formula. The advantage of the stochastic approach is that it also gives probability information on the travel times and path lengths of cosmic rays besides the intensities. We show that excellent agreement exists between solutions obtained by our steady state stochastic method and by the traditional finite difference method. We also show time dependent solutions for an idealized heliosphere which has a Parker magnetic field, a planar current sheet, and a simple initial condition.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.
2017-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty through representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve the bias in mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Niño-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT has improved ENSO variability.
Rupture Propagation for Stochastic Fault Models
NASA Astrophysics Data System (ADS)
Favreau, P.; Lavallee, D.; Archuleta, R.
2003-12-01
The inversion of strong-motion data from large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, which can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate, constrained by the presence of spatial heterogeneity? For this purpose we investigate how the underlying random variables that control the pre-stress spatial variability condition the propagation of the rupture. Two stochastic models of pre-stress distributions are considered, based on Cauchy and Gaussian random variables, respectively. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D continuum elastic body. The friction law is slip-dependent. The simulations show that the propagation of the rupture front is more complex, incoherent or snake-like for a pre-stress distribution based on Cauchy random variables. This may be related to the presence of a higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario, compared to the smoother rupture of the Gauss scenario.
Cui, Longzhu; Neoh, Hui-min; Iwamoto, Akira; Hiramatsu, Keiichi
2012-06-19
Genome inversions are ubiquitous in organisms ranging from prokaryotes to eukaryotes. Typical examples can be identified by comparing the genomes of two or more closely related organisms, where genome inversion footprints are clearly visible. Although the evolutionary implications of this phenomenon are huge, little is known about the function and biological meaning of this process. Here, we report our findings on a bacterium that generates a reversible, large-scale inversion of its chromosome (about half of its total genome) at high frequencies of up to once every four generations. This inversion switches on or off bacterial phenotypes, including colony morphology, antibiotic susceptibility, hemolytic activity, and expression of dozens of genes. Quantitative measurements and mathematical analyses indicate that this reversible switching is stochastic but self-organized so as to maintain two forms of stable cell populations (i.e., small colony variant, normal colony variant) as a bet-hedging strategy. Thus, this heritable and reversible genome fluctuation seems to govern the bacterial life cycle; it has a profound impact on the course and outcomes of bacterial infections.
Numerical methods for stochastic differential equations
NASA Astrophysics Data System (ADS)
Kloeden, Peter; Platen, Eckhard
1991-06-01
The numerical analysis of stochastic differential equations differs significantly from that of ordinary differential equations due to the peculiarities of stochastic calculus. This book provides an introduction to stochastic calculus and stochastic differential equations, covering both theory and applications. The main emphasis is placed on the numerical methods needed to solve such equations. It assumes an undergraduate background in the mathematical methods typical of engineers and physicists, though many chapters begin with a descriptive summary that may be accessible to others who only require numerical recipes. To help the reader develop an intuitive understanding of the underlying mathematics and hands-on numerical skills, exercises and over 100 PC (personal computer) exercises are included. The stochastic Taylor expansion provides the key tool for the systematic derivation and investigation of discrete-time numerical methods for stochastic differential equations. The book presents many new results on higher-order methods for strong sample-path approximations and for weak functional approximations, including implicit, predictor-corrector, extrapolation and variance-reduction methods. Besides serving as a basic text on such methods, the book offers the reader ready access to a large number of potential research problems in a field that is just beginning to expand rapidly and is widely applicable.
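A minimal sketch of the two workhorse schemes treated in such texts (our example, not the book's code; parameter values are arbitrary): Euler-Maruyama and Milstein applied to geometric Brownian motion, compared against the exact solution on the same Brownian path.

```python
import numpy as np

# Strong approximation of dX = a*X dt + b*X dW; the Milstein scheme adds the
# correction term 0.5*b^2*X*(dW^2 - dt) and achieves strong order 1.0.
rng = np.random.default_rng(2)
a, b, x0, T, n = 1.5, 0.5, 1.0, 1.0, 2**10
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.cumsum(dW)

x_em, x_mil = x0, x0
for k in range(n):
    x_em += a * x_em * dt + b * x_em * dW[k]
    x_mil += (a * x_mil * dt + b * x_mil * dW[k]
              + 0.5 * b**2 * x_mil * (dW[k]**2 - dt))  # Milstein correction

exact = x0 * np.exp((a - 0.5 * b**2) * T + b * W[-1])
print("Euler-Maruyama error:", abs(x_em - exact))
print("Milstein error:     ", abs(x_mil - exact))
```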
A random optimization approach for inherent optic properties of nearshore waters
NASA Astrophysics Data System (ADS)
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
Traditional water quality sampling is time-consuming and costly, and cannot meet the needs of social development. Hyperspectral remote sensing offers good temporal resolution, wide spatial coverage, and rich spectral information, giving it strong potential for water quality monitoring. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify water quality, and an optical model of the water column is established to analyze its features. Using the stochastic optimization algorithm Threshold Accepting, a global optimum of the unknown model parameters can be determined to obtain the distributions of chlorophyll, dissolved organic matter and suspended particles in the water. An improved search step in the optimization algorithm markedly reduces processing time and creates room for increasing the number of parameters. A refined definition of the optimization steps and acceptance criterion makes the whole inversion process more targeted, thus improving the accuracy of the inversion. Based on application results for simulated data provided by the IOCCG and field data provided by NASA, the model was continuously improved and enhanced. The outcome is a low-cost, effective model for retrieving water quality from hyperspectral remote sensing.
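A hedged sketch of the Threshold Accepting heuristic in its generic textbook form (not the authors' implementation): like simulated annealing, but a candidate is accepted whenever it is at most `threshold` worse than the current solution. The quadratic objective and three-parameter vector below are stand-ins for the water-column optical misfit.

```python
import numpy as np

rng = np.random.default_rng(3)

def misfit(p):                      # hypothetical data misfit in parameter space
    return np.sum((p - np.array([0.3, 1.2, 0.7]))**2)

p = rng.uniform(0, 2, 3)            # chlorophyll, CDOM, suspended-particle proxies
best = p.copy()
for threshold in np.linspace(0.5, 0.0, 200):   # shrinking acceptance threshold
    for _ in range(50):
        cand = p + rng.normal(0, 0.05, 3)
        if misfit(cand) - misfit(p) < threshold:
            p = cand
            if misfit(p) < misfit(best):
                best = p.copy()
print("recovered parameters:", best)
```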
Statistics of the Kolkata Paise Restaurant problem
NASA Astrophysics Data System (ADS)
Ghosh, Asim; Chatterjee, Arnab; Mitra, Manipushpak; Chakrabarti, Bikas K.
2010-07-01
We study the dynamics of a few stochastic learning strategies for the 'Kolkata Paise Restaurant' problem, in which N agents choose among N equally priced but differently ranked restaurants every evening, each agent trying to get dinner in the best restaurant (each restaurant serves only one customer, and the remaining customers arriving there go without dinner that evening). We consider the learning strategies to be the same for all agents, and assume that each follows the same probabilistic or stochastic strategy based on information about past successes in the game. We show that some 'naive' strategies lead to much better utilization of services than some relatively 'smarter' strategies. We also show that a service utilization fraction as high as 0.80 can result from a stochastic strategy in which each agent sticks to their past choice with probability decreasing inversely with the past crowd size (independent of past success). The numerical results for the utilization fraction of services in some limiting cases are examined analytically.
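An illustrative simulation of the strategy described above (the exact re-choice rule for non-returning agents is our assumption): each agent returns to yesterday's restaurant with probability 1/n_prev, where n_prev is yesterday's crowd there, and otherwise picks a restaurant uniformly at random.

```python
import numpy as np

rng = np.random.default_rng(4)
N, days = 1000, 500
choice = rng.integers(0, N, N)                  # day-0 choices
for _ in range(days):
    crowd = np.bincount(choice, minlength=N)
    stay = rng.random(N) < 1.0 / crowd[choice]  # revisit prob ~ 1/(crowd size)
    choice = np.where(stay, choice, rng.integers(0, N, N))

utilization = np.count_nonzero(np.bincount(choice, minlength=N)) / N
print("utilization fraction:", utilization)     # close to the ~0.80 reported
```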
NASA Astrophysics Data System (ADS)
Song, X.; Jordan, T. H.
2017-12-01
The seismic anisotropy of the continental crust is dominated by two mechanisms: the local (intrinsic) anisotropy of crustal rocks caused by the lattice-preferred orientation of their constituent minerals, and the geometric (extrinsic) anisotropy caused by the alignment and layering of elastic heterogeneities through sedimentation and deformation. To assess the relative importance of these mechanisms, we have applied Jordan's (GJI, 2015) self-consistent, second-order theory to compute the effective elastic parameters of stochastic media with hexagonal local anisotropy and small-scale 3D heterogeneities that have transversely isotropic (TI) statistics. The theory pertains to stochastic TI media in which the eighth-order covariance tensor of the elastic moduli can be separated into a one-point variance tensor that describes the local anisotropy in terms of an anisotropy orientation ratio (ξ from 0 to ∞), and a two-point correlation function that describes the geometric anisotropy in terms of a heterogeneity aspect ratio (η from 0 to ∞). If there is no local anisotropy, then, in the limiting case of a horizontal stochastic laminate (η→∞), the effective-medium equations reduce to the second-order equations derived by Backus (1962) for a stochastically layered medium. This generalization of the Backus equations to 3D stochastic media, together with the introduction of local, stochastically rotated anisotropy, provides a powerful theory for interpreting the anisotropic signatures of sedimentation and deformation in continental environments; in particular, the parameterizations we propose are suitable for tomographic inversions. We have verified this theory through a series of high-resolution numerical experiments using both isotropic and anisotropic wave-propagation codes.
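For concreteness, here is a hedged sketch of the classical Backus (1962) average cited above as the limiting case: the standard formulas for the effective VTI stiffnesses of a finely layered stack of isotropic media. The layer moduli and thickness fractions are arbitrary toy values.

```python
import numpy as np

lam = np.array([10.0, 20.0, 15.0])   # Lame lambda per layer (GPa, toy values)
mu  = np.array([ 8.0,  5.0, 12.0])   # shear modulus per layer (GPa)
w   = np.array([ 0.3,  0.5,  0.2])   # layer thickness fractions (sum to 1)

avg = lambda q: np.sum(w * q)        # thickness-weighted mean <.>

C33 = 1.0 / avg(1.0 / (lam + 2 * mu))
C13 = C33 * avg(lam / (lam + 2 * mu))
C11 = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + C33 * avg(lam / (lam + 2 * mu))**2
C44 = 1.0 / avg(1.0 / mu)
C66 = avg(mu)
print(f"C11={C11:.2f} C33={C33:.2f} C13={C13:.2f} C44={C44:.2f} C66={C66:.2f}")
```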
Confidence set inference with a prior quadratic bound [in geophysics]
NASA Technical Reports Server (NTRS)
Backus, George E.
1989-01-01
Neyman's (1937) theory of confidence sets is developed as a replacement for Bayesian inference (BI) and stochastic inversion (SI) when the prior information is a hard quadratic bound. It is recommended that BI and SI be replaced by confidence set inference (CSI) only in certain circumstances. The geomagnetic problem is used to illustrate the general theory of CSI.
Development and Tuning of a 3D Stochastic Inversion Methodology to the European Arctic
2010-09-01
[Fragmentary abstract: "…from previous studies covering the region, in particular from Breivik et al. (2002). Our MCMC algorithm shown in Figure 3 has two major components…"]
Quantum algorithms for Gibbs sampling and hitting-time estimation
Chowdhury, Anirban Narayan; Somma, Rolando D.
2017-02-01
In this paper, we present quantum algorithms for solving two problems regarding stochastic processes. The first algorithm prepares the thermal Gibbs state of a quantum system and runs in time almost linear in √N β/Z and polynomial in log(1/ϵ), where N is the Hilbert space dimension, β is the inverse temperature, Z is the partition function, and ϵ is the desired precision of the output state. Our quantum algorithm exponentially improves the dependence on 1/ϵ and quadratically improves the dependence on β of known quantum algorithms for this problem. The second algorithm estimates the hitting time of a Markov chain. For a sparse stochastic matrix P, it runs in time almost linear in 1/(ϵΔ^{3/2}), where ϵ is the absolute precision in the estimation and Δ is a parameter determined by P whose inverse is an upper bound on the hitting time. Our quantum algorithm quadratically improves the dependence on 1/ϵ and 1/Δ of the analogous classical algorithm for hitting-time estimation. Finally, both algorithms use tools recently developed in the context of Hamiltonian simulation, spectral gap amplification, and solving linear systems of equations.
A Stochastic Climate Generator for Agriculture in Southeast Asian Domains
NASA Astrophysics Data System (ADS)
Greene, A. M.; Allis, E. C.
2014-12-01
We extend a previously-described method for generating future climate scenarios, suitable for driving agricultural models, to selected domains in Lao PDR, Bangladesh and Indonesia. There are notable differences in climatology among the study regions, most importantly the inverse seasonal relationship of southeast Asian and Australian monsoons. These differences necessitate a partially-differentiated modeling approach, utilizing common features for better estimation while allowing independent modeling of divergent attributes. The method attempts to constrain uncertainty due to both anthropogenic and natural influences, providing a measure of how these effects may combine during specified future decades. Seasonal climate fields are downscaled to the daily time step by resampling the AgMERRA dataset, providing a full suite of agriculturally relevant variables and enabling the propagation of climate uncertainty to agricultural outputs. The role of this research in a broader project, conducted under the auspices of the International Fund for Agricultural Development (IFAD), is discussed.
The quasi-optimality criterion in the linear functional strategy
NASA Astrophysics Data System (ADS)
Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey
2018-07-01
The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that take into account the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and of the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both deterministic and stochastic setups and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
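A hedged toy illustration of the quasi-optimality rule in its basic form (our construction; the test matrix, noise level, and geometric grid are arbitrary assumptions): for Tikhonov regularization, pick the parameter on a geometric grid that minimizes the change between consecutive regularized solutions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
U, s, Vt = np.linalg.svd(rng.normal(size=(n, n)))
s = 0.9 ** np.arange(n)                          # mildly ill-posed spectrum
A = U @ np.diag(s) @ Vt
x_true = Vt.T @ (1.0 / (1.0 + np.arange(n)))     # smooth-ish true solution
y = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy data

def tikhonov(alpha):
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

alphas = [0.5 ** k for k in range(1, 30)]        # geometric grid
sols = [tikhonov(a) for a in alphas]
jumps = [np.linalg.norm(sols[k + 1] - sols[k]) for k in range(len(sols) - 1)]
k_star = int(np.argmin(jumps))                   # quasi-optimality choice
print("quasi-optimal alpha:", alphas[k_star],
      "error:", np.linalg.norm(sols[k_star] - x_true))
```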
Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.
Salis, Howard; Kaznessis, Yiannis
2005-02-01
The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
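For reference, here is a minimal sketch of the exact stochastic simulation algorithm (Gillespie's direct method), the baseline that the hybrid method builds on. The toy reaction pair and rate constants are our assumptions, not from the paper.

```python
import numpy as np

# Exact SSA for the birth-death system 0 -> S (rate k1), S -> 0 (rate k2*S).
rng = np.random.default_rng(6)
k1, k2, s, t, t_end = 10.0, 0.1, 0, 0.0, 100.0
while t < t_end:
    a = np.array([k1, k2 * s])        # reaction propensities
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)    # exponential waiting time to next event
    if rng.random() * a0 < a[0]:      # choose which reaction fires
        s += 1
    else:
        s -= 1
print("final copy number:", s, "(stationary mean k1/k2 =", k1 / k2, ")")
```

The hybrid method's point is that when one reaction becomes "fast", the cost of stepping event by event like this explodes, motivating the Langevin approximation for the fast subset.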
Biochemical simulations: stochastic, approximate stochastic and hybrid approaches.
Pahle, Jürgen
2009-01-01
Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem.
Agonist-induced modulation of inverse agonist efficacy at the beta 2-adrenergic receptor.
Chidiac, P; Nouet, S; Bouvier, M
1996-09-01
Sustained stimulation of several G protein-coupled receptors is known to lead to a reduction in the signaling efficacy. This phenomenon, named agonist-induced desensitization, has been best studied for the beta 2-adrenergic receptor (AR) and is characterized by a decreased efficacy of beta-adrenergic agonists to stimulate the adenylyl cyclase activity. Recently, several beta-adrenergic ligands were found to inhibit the spontaneous agonist-independent activity of the beta 2AR. These compounds, termed inverse agonists, have different inhibitory efficacies, ranging from almost neutral antagonists to full inverse agonists. The current study was undertaken to determine whether, as is the case for agonists, desensitization can affect the efficacies of inverse agonists. Agonist-promoted desensitization of the human beta 2AR expressed in Sf9 cells potentiated the inhibitory actions of the inverse agonists, with the extent of the potentiation being inversely proportional to their intrinsic activity. For example, desensitization increased the inhibitory action of the weak inverse agonist labetalol by 29%, whereas inhibition of the spontaneous activity by the strong inverse agonist timolol was not enhanced by the desensitizing stimuli. Interestingly, dichloroisoproterenol acted stochastically as either a weak partial agonist or a weak inverse agonist in control conditions but always behaved as an inverse agonist after desensitization. These data demonstrate that like for agonists, the efficacies of inverse agonists can be modulated by a desensitizing treatment. Also, the data show that the initial state of the receptor can determine whether a ligand behaves as a partial agonist or an inverse agonist.
Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications
2016-10-17
[Fragmentary report text: "…finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems…"; "…approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of…". Keywords: …laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks.]
The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T. III
2013-02-15
Purpose: Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Methods: Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. Results: For scan angles of 20° and 5 mm plane separation, seven MITS planes must be averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency 'edge' information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles/mm. Conclusions: The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors' institution.
Stochastic differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobczyk, K.
1990-01-01
This book provides a unified treatment of both regular (or random) and Itô stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Itô's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor
2018-02-01
Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to sensitivity analysis, insensitive parameters are screened out of Bayesian inversion of the MODFLOW model, further saving computing efforts. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
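A hedged sketch of the general idea (not the paper's BMARS/MODFLOW code): Metropolis-Hastings sampling in which the expensive forward model has been replaced by a cheap surrogate. The quadratic `surrogate`, observed datum, and prior below are stand-ins; in the paper the surrogate is a BMARS emulator of MODFLOW heads.

```python
import numpy as np

rng = np.random.default_rng(7)
obs, sigma = 1.3, 0.1                      # observed head-like datum, noise std

def surrogate(theta):                      # cheap approximation of the model
    return 0.5 * theta + 0.2 * theta**2

def log_post(theta):                       # Gaussian likelihood x N(0,1) prior
    return -0.5 * ((surrogate(theta) - obs) / sigma) ** 2 - 0.5 * theta**2

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
post = np.array(chain[5000:])              # discard burn-in
print("posterior mean/std:", post.mean(), post.std())
```

Because every MCMC step calls only the surrogate, the cost of the 20,000-sample chain is trivial; the expensive model is needed only to generate the surrogate's training runs.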
Analysis of stability for stochastic delay integro-differential equations.
Zhang, Yu; Li, Longsuo
2018-01-01
In this paper, we consider the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that the split-step backward Euler method preserves mean-square stability without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. Numerical experiments further verify the theoretical results.
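A hedged illustration of the split-step backward Euler scheme on a scalar linear stochastic delay equation dX = (a X(t) + b X(t−τ)) dt + c X(t) dW (the integral term of the full integro-differential equation is omitted here for brevity; parameters are arbitrary mean-square-stable choices):

```python
import numpy as np

rng = np.random.default_rng(8)
a, b, c, tau = -4.0, 1.0, 0.5, 1.0
h = 0.1
m = int(tau / h)                 # delay measured in steps
n_steps, n_paths = 400, 2000

X = np.ones((n_paths, m + 1))    # constant history on [-tau, 0]
for k in range(n_steps):
    x_now, x_del = X[:, -1], X[:, -1 - m]
    x_star = (x_now + h * b * x_del) / (1.0 - h * a)   # implicit drift step
    dW = np.sqrt(h) * rng.standard_normal(n_paths)
    x_new = x_star + c * x_star * dW                    # explicit diffusion step
    X = np.column_stack([X[:, 1:], x_new])              # slide the history window

print("mean-square E|X|^2 at T =", n_steps * h, ":", np.mean(X[:, -1] ** 2))
```

Repeating this with a larger h leaves the scheme stable, whereas an explicit Euler-Maruyama drift step would eventually blow up, which is the step-size constraint the abstract refers to.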
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chuchu, E-mail: chenchuchu@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Zhang, Liying, E-mail: lyzhang@lsec.cc.ac.cn
Stochastic Maxwell equations with additive noise are intrinsically a system of stochastic Hamiltonian partial differential equations possessing the stochastic multi-symplectic conservation law. It is shown that the averaged energy increases linearly with time and that the flow of stochastic Maxwell equations with additive noise preserves the divergence in the sense of expectation. Moreover, we propose three novel stochastic multi-symplectic methods to discretize stochastic Maxwell equations in order to investigate the preservation of these properties numerically. We make theoretical discussions of and comparisons between all three methods and observe that each of them preserves the corresponding discrete version of the averaged divergence. Meanwhile, we obtain the corresponding dissipative property of the discrete averaged energy satisfied by each method. In particular, the evolution rates of the averaged energies for all three methods are derived, in accordance with the continuous case. Numerical experiments are performed to verify our theoretical results.
Analytical Assessment for Transient Stability Under Stochastic Continuous Disturbances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, Ping; Li, Hongyu; Gan, Chun
Here, with the growing integration of renewable power generation, plug-in electric vehicles, and other sources of uncertainty, increasing stochastic continuous disturbances are brought to power systems. The impact of stochastic continuous disturbances on power system transient stability attracts significant attention. To address this problem, this paper proposes an analytical assessment method for the transient stability of multi-machine power systems under stochastic continuous disturbances. In the proposed method, a probability measure of transient stability is presented and analytically solved by stochastic averaging. Compared with the conventional method (Monte Carlo simulation), the proposed method is many orders of magnitude faster, which makes it very attractive in practice when many plans for transient stability must be compared or when transient stability must be analyzed quickly. Also, it is found that the evolution of system energy over time is almost a simple diffusion process under the proposed method, which explains in theory the impact mechanism of stochastic continuous disturbances on transient stability.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
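As a much-simplified illustration of stochastic collocation (our toy, not the paper's Smolyak/Kronrod-Patterson machinery): 1D Gauss-Hermite nodes propagate a lognormal permeability K = exp(ξ), ξ ~ N(0,1), through a trivial Darcy-like relation, recovering output statistics by quadrature instead of Monte Carlo.

```python
import numpy as np

# Probabilists' Hermite quadrature; normalizing the weights turns the sum
# into an expectation under the standard normal density.
nodes, weights = np.polynomial.hermite_e.hermegauss(9)
weights = weights / weights.sum()

dp = 2.0                               # fixed pressure gradient (toy value)
u = np.exp(nodes) * dp                 # model response at each collocation node

mean = np.sum(weights * u)
var = np.sum(weights * (u - mean) ** 2)
print("collocation mean/var:", mean, var)
print("exact lognormal mean:", np.exp(0.5) * dp)   # E[exp(xi)] = e^{1/2}
```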
Wang, Feng; Kang, Mengzhen; Lu, Qi; Letort, Véronique; Han, Hui; Guo, Yan; de Reffye, Philippe; Li, Baoguo
2011-01-01
Background and Aims Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal species used for windbreak and sand stabilization in arid and semi-arid areas in northern China. A model-assisted analysis of its canopy architectural development and functions is valuable for better understanding its behaviour and roles in fragile ecosystems. However, due to the intrinsic complexity and variability of trees, the parametric identification of such models is currently a major obstacle to their evaluation and their validation with respect to real data. The aim of this paper was to present the mathematical framework of a stochastic functional–structural model (GL2) and its parameterization for Mongolian Scots pines, taking into account inter-plant variability in terms of topological development and biomass partitioning. Methods In GL2, plant organogenesis is determined by the realization of random variables representing the behaviour of axillary or apical buds. The associated probabilities are calibrated for Mongolian Scots pines using experimental data including means and variances of the numbers of organs per plant in each order-based class. The functional part of the model relies on the principles of source–sink regulation and is parameterized by direct observations of living trees and the inversion method using measured data for organ mass and dimensions. Key Results The final calibration accuracy satisfies both organogenetic and morphogenetic processes. Our hypothesis for the number of organs following a binomial distribution is found to be consistent with the real data. Based on the calibrated parameters, stochastic simulations of the growth of Mongolian Scots pines in plantations are generated by the Monte Carlo method, allowing analysis of the inter-individual variability of the number of organs and biomass partitioning. Three-dimensional (3D) architectures of young Mongolian Scots pines were simulated for 4-, 6- and 8-year-old trees. Conclusions This work provides a new method for characterizing tree structures and biomass allocation that can be used to build a 3D virtual Mongolian Scots pine forest. The work paves the way for bridging the gap between a single-plant model and a stand model. PMID:21062760
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Magnusson, P; Olsson, L E
2000-08-01
Magnetic resonance image plane nonuniformity and stochastic noise are properties that greatly influence the outcome of quantitative magnetic resonance imaging (MRI) evaluations such as gel dosimetry measurements using MRI. To study these properties, robust and accurate image analysis methods are required. New nonuniformity level assessment methods were designed, since previous methods were found to be insufficiently robust and accurate. The new and previously reported nonuniformity level assessment methods were analyzed with respect to, for example, insensitivity to stochastic noise; and previously reported stochastic noise level assessment methods with respect to insensitivity to nonuniformity. Using the same image data, different methods were found to assess significantly different levels of nonuniformity. Nonuniformity levels obtained using methods that count pixels in an intensity interval, and obtained using methods that use only intensity values, were found not to be comparable. The latter were found preferable, since they assess the quantity intrinsically sought. A new method which calculates a deviation image, with every pixel representing the deviation from a reference intensity, was least sensitive to stochastic noise. Furthermore, unlike any other analyzed method, it includes all intensity variations across the phantom area and allows for studies of nonuniformity shapes. This new method was designed for accurate studies of nonuniformities in gel dosimetry measurements, but could also be used with benefit in quality assurance and acceptance testing of MRI, scintillation camera, and computer tomography systems. The stochastic noise level was found to be greatly method dependent. Two methods were found to be insensitive to nonuniformity and also simple to use in practice. One method assesses the stochastic noise level as the average of the levels at five different positions within the phantom area, and the other assesses the stochastic noise in a region outside the phantom area.
2008-10-01
[Fragmentary report text: "…modeling operator and dobs is the observed data (details in Pasion 2007)." Figure residue: "Figure 42. Geometry of EM61HH-MK2 sensor. The transmitter and receiver…". Reference fragments: "…1979. Stochastic models, estimation, and control (Vol. 141)"; "Pasion, L. R., 2007. Inversion of Time Domain Electromagnetic Data for the Detection of Unexploded Ordnance. Ph.D. Thesis, The University of British Columbia"; "Pasion, L. R., Oldenburg, D. W., 2001. A Discrimination Algorithm for UXO…".]
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2016-07-01
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
M-matrices with prescribed elementary divisors
NASA Astrophysics Data System (ADS)
Soto, Ricardo L.; Díaz, Roberto C.; Salas, Mario; Rojo, Oscar
2017-09-01
A real matrix A is said to be an M-matrix if it is of the form A = αI − B, where B is a nonnegative matrix with Perron eigenvalue ρ(B) and α ≥ ρ(B). This paper provides sufficient conditions for the existence and construction of an M-matrix A with prescribed elementary divisors, which are the characteristic polynomials of the Jordan blocks of the Jordan canonical form of A. This inverse problem on M-matrices has not been treated until now. We solve the inverse elementary divisors problem for diagonalizable M-matrices and the symmetric generalized doubly stochastic inverse M-matrix problem for lists of real numbers and for lists of complex numbers of the form Λ = {λ₁, a ± bi, …, a ± bi}. The constructive nature of our results allows for the computation of a solution matrix. The paper also discusses an application of M-matrices to a capacity problem in wireless communications.
Problems of Mathematical Finance by Stochastic Control Methods
NASA Astrophysics Data System (ADS)
Stettner, Łukasz
The purpose of this paper is to present main ideas of mathematics of finance using the stochastic control methods. There is an interplay between stochastic control and mathematics of finance. On the one hand stochastic control is a powerful tool to study financial problems. On the other hand financial applications have stimulated development in several research subareas of stochastic control in the last two decades. We start with pricing of financial derivatives and modeling of asset prices, studying the conditions for the absence of arbitrage. Then we consider pricing of defaultable contingent claims. Investments in bonds lead us to the term structure modeling problems. Special attention is devoted to historical static portfolio analysis called Markowitz theory. We also briefly sketch dynamic portfolio problems using viscosity solutions to Hamilton-Jacobi-Bellman equation, martingale-convex analysis method or stochastic maximum principle together with backward stochastic differential equation. Finally, long time portfolio analysis for both risk neutral and risk sensitive functionals is introduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir
2014-08-01
In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. A new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. Using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower-triangular systems of algebraic equations which can be solved directly by forward substitution. The rate of convergence of the proposed method is analyzed and shown to be O(1/n²). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on several examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.
Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley
2013-07-08
The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is manual manipulation of isodose lines slice by slice, and the quality of the plan depends heavily on the planner's expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines stochastic and deterministic algorithms. The stochastic algorithm, simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, dose-volume histogram-based optimization (DVHO), quickly optimizes the three-dimensional dose distribution by moving straight downhill once it is in the advantageous region of the search space found by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO than with GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with target coverage comparable to that of GRO with a reduction in dose to the critical structures.
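A hedged generic sketch of simulated annealing, the stochastic half of HIPO (the toy objective below is our stand-in, not a real dwell-time or catheter-placement optimizer): worse solutions are accepted with probability exp(−Δ/T) on a cooling schedule, which lets the search escape local minima.

```python
import numpy as np

rng = np.random.default_rng(9)

def cost(x):                     # stand-in for a dose-objective penalty
    return np.sum(x**2) + 3 * np.sin(5 * x).sum()

x = rng.uniform(-2, 2, 4)
T = 1.0
for step in range(5000):
    cand = x + rng.normal(0, 0.1, 4)
    delta = cost(cand) - cost(x)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        x = cand
    T *= 0.999                   # geometric cooling
print("annealed cost:", cost(x))
```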
Resonance Properties of Class I and Class II Neurons Differentially Modulated by Channel Noise
NASA Astrophysics Data System (ADS)
Wang, Lei
2018-01-01
Resonance properties of two different neuron types (Class I and Class II) induced by channel noise are investigated in this study. It is found that for the Class I neuron, spiking activity is enhanced when a certain noise intensity is present, especially under weak current stimuli -- a typical signature of stochastic resonance (SR); for the Class II neuron, in addition to exhibiting SR, a certain noise intensity inhibits neuronal activity under some current stimuli -- a typical signature of inverse stochastic resonance (ISR). Moreover, we show that varying only the sodium channel noise or only the potassium channel noise can produce similar phenomena. Consequently, the model results suggest that channel noise may play differential roles in modulating the resonance properties of Class I and Class II neurons.
Fast model updating coupling Bayesian inference and PGD model reduction
NASA Astrophysics Data System (ADS)
Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic
2018-04-01
The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainty (measurement and model errors, stochastic parameters). In order to do so at a reasonable CPU cost, the idea is to replace the direct model called during Monte Carlo sampling with a PGD reduced model, and in some cases to compute the probability density functions directly from the resulting analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued-assembly example.
Doubly stochastic radial basis function methods
NASA Astrophysics Data System (ADS)
Yang, Fenglian; Yan, Liang; Ling, Leevan
2018-06-01
We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of using a constant, we treat the RBF shape parameters as stochastic variables whose distributions are determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our method. The overhead cost for setting up the proposed DSRBF method is O(n²) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant-shape-parameter formulation (in terms of accuracy at comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
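A hedged toy sketch of the "stochastic shape parameter" idea (the uniform sampling range, node count, and averaging of recoveries are our assumptions, not the paper's LOOCV-calibrated distribution): build an ensemble of Gaussian-RBF interpolants with randomly drawn shape parameters and average them, rather than tuning one constant ε.

```python
import numpy as np

rng = np.random.default_rng(10)
x = np.linspace(0, 1, 20)                  # interpolation nodes
f = lambda t: np.sin(2 * np.pi * t)
xt = np.linspace(0, 1, 200)                # evaluation points

recoveries = []
for _ in range(30):
    eps = rng.uniform(2.0, 8.0)            # stochastic shape parameter
    A = np.exp(-(eps * (x[:, None] - x[None, :]))**2)   # Gaussian kernel matrix
    c = np.linalg.solve(A, f(x))                        # interpolation weights
    B = np.exp(-(eps * (xt[:, None] - x[None, :]))**2)
    recoveries.append(B @ c)

err = np.max(np.abs(np.mean(recoveries, axis=0) - f(xt)))
print("max error of averaged recovery:", err)
```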
NASA Astrophysics Data System (ADS)
Tran, A. P.; Dafflon, B.; Hubbard, S.
2017-12-01
Soil organic carbon (SOC) is crucial for predicting carbon climate feedbacks in the vulnerable organic-rich Arctic region. However, it is challenging to estimate this property due to the general limitations of conventional core sampling and analysis methods. In this study, we develop an inversion scheme that uses single or multiple datasets, including soil liquid water content, temperature and ERT data, to estimate the vertical profile of SOC content. Our approach relies on the fact that SOC content strongly influences soil hydrological-thermal parameters and therefore indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature, and their correlated electrical resistivity. The scheme has several advantages. First, this is the first time SOC content is estimated using a coupled hydrogeophysical inversion. Second, by using the Community Land Model, we can account for the land surface dynamics (evapotranspiration, snow accumulation and melting) and the ice/liquid phase transition. Third, we combine a deterministic and an adaptive Markov chain Monte Carlo optimization algorithm to better estimate the posterior distributions of the desired model parameters. Finally, the simulated subsurface variables are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using synthetic experiments. The results show that, compared to inversion of a single dataset, joint inversion of these datasets significantly reduces parameter uncertainty. The joint inversion approach is able to estimate SOC content within the shallow active layer with high reliability. Next, we apply the scheme to estimate SOC content along an intensive ERT transect in Barrow, Alaska using multiple datasets acquired in the 2013-2015 period. The preliminary results show good agreement between modeled and measured soil temperature, thaw layer thickness and electrical resistivity. The accuracy of the estimated SOC content will be evaluated by comparison with measurements from soil samples along the transect. Our study presents a new surface-subsurface, deterministic-stochastic hydrogeophysical inversion approach, as well as the benefit of including multiple types of data to estimate SOC and associated hydrological-thermal dynamics.
NASA Astrophysics Data System (ADS)
Riechers, Paul M.; Crutchfield, James P.
2018-06-01
Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the familiar linear operator techniques that one would then hope to use often fail since the operators cannot be diagonalized. The curse of nondiagonalizability also plays an important role even in finite-dimensional linear operators, leading to analytical impediments that occur across many scientific domains. We show how to circumvent it via two tracks. First, using the well-known holomorphic functional calculus, we develop new practical results about spectral projection operators and the relationship between left and right generalized eigenvectors. Second, we generalize the holomorphic calculus to a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. This simultaneously simplifies and generalizes functional calculus so that it is readily applicable to analyzing complex physical systems. Together, these results extend the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics arise, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator, highlighting the special role of the zero eigenvalue. Furthermore, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus and we give a new general method to construct it. We provide new formulae for constructing spectral projection operators and delineate the relations among projection operators, eigenvectors, and left and right generalized eigenvectors. By way of illustrating its application, we explore several, rather distinct examples. First, we analyze stochastic transition operators in discrete and continuous time. Second, we show that nondiagonalizability can be a robust feature of a stochastic process, induced even by simple counting. As a result, we directly derive distributions of the time-dependent Poisson process and point out that nondiagonalizability is intrinsic to it and the broad class of hidden semi-Markov processes. Third, we show that the Drazin inverse arises naturally in stochastic thermodynamics and that applying the meromorphic functional calculus provides closed-form solutions for the dynamics of key thermodynamic observables. Finally, we draw connections to the Ruelle-Frobenius-Perron and Koopman operators for chaotic dynamical systems and propose how to extract eigenvalues from a time-series.
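A hedged numerical sketch of the Drazin inverse discussed above, computed not through the meromorphic calculus itself but via the classical identity A^D = A^k (A^(2k+1))^+ A^k, where k is the index of A (the size of the largest Jordan block for eigenvalue 0) and ^+ is the Moore-Penrose pseudoinverse. The test matrix is our toy example.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # nondiagonalizable, index 2 at eigenvalue 0

k = 2                             # index of A (assumed known here)
Ak = np.linalg.matrix_power(A, k)
AD = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Defining properties of the Drazin inverse:
# A^D A A^D = A^D,  A A^D = A^D A,  A^(k+1) A^D = A^k.
print(np.allclose(AD @ A @ AD, AD),
      np.allclose(A @ AD, AD @ A),
      np.allclose(np.linalg.matrix_power(A, k + 1) @ AD, Ak))
```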
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In this case an interval model updating procedure shows its superiority in terms of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of the interval inverse problem is greatly simplified, with a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method are verified against a numerical mass-spring system and against a set of experimentally tested steel plates.
Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua
2017-07-01
Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under open-loop conditions and can cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, whose parameters are formulated from the original model identified with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initialization method with adaptive inertia weight based on a chaotic map is proposed. To compare the swarm's convergence under chaotic-map initialization with that under stochastic initialization, and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched by self-tuning, and the simulation results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
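A hedged generic sketch (not the authors' code) of particle swarm optimization with a logistic-map-driven inertia weight, one common "chaotic map" choice; the Rastrigin-like objective, swarm size, and coefficient values are our assumptions standing in for the hysteresis-model misfit.

```python
import numpy as np

rng = np.random.default_rng(11)

def objective(x):                              # Rastrigin-like test function
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

n, dim = 30, 4
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]
z = 0.7                                        # logistic-map state

for _ in range(300):
    z = 4.0 * z * (1.0 - z)                    # chaotic logistic map
    w = 0.4 + 0.5 * z                          # chaotic inertia weight in [0.4, 0.9]
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())
```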
On decoupling of volatility smile and term structure in inverse option pricing
NASA Astrophysics Data System (ADS)
Egger, Herbert; Hein, Torsten; Hofmann, Bernd
2006-08-01
Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.
Inverse problems in complex material design: Applications to non-crystalline solids
NASA Astrophysics Data System (ADS)
Biswas, Parthapratim; Drabold, David; Elliott, Stephen
The design of complex amorphous materials is one of the fundamental problems in disordered condensed-matter science. While the impressive development of ab-initio simulation methods over the past several decades has brought tremendous success in understanding materials properties from micro- to mesoscopic length scales, a major drawback is that they fail to incorporate existing knowledge of the materials in the simulation methodology. Since an essential feature of materials design is the synergy between experiment and theory, a properly developed approach to designing materials should be able to exploit all available knowledge of the materials from measured experimental data. In this talk, we will address the design of complex disordered materials as an inverse problem involving experimental data and available empirical information. We show that the problem can be posed as a multi-objective non-convex optimization program, which can be addressed using a number of recently developed bio-inspired global optimization techniques. In particular, we will discuss how a population-based stochastic search procedure can be used to determine the structure of non-crystalline solids (e.g. a-Si:H, a-SiO2, amorphous graphene, and Fe and Ni clusters). The work is partially supported by NSF under Grant Nos. DMR 1507166 and 1507670.
Data-driven Climate Modeling and Prediction
NASA Astrophysics Data System (ADS)
Kondrashov, D. A.; Chekroun, M.
2016-12-01
Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models using data-adaptive decomposition methods that convey memory effects through time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016]. In particular, new results on DAH-MSM modeling and prediction of Arctic sea ice, as well as decadal predictions of near-surface Earth temperatures, will be presented.
SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT
Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is investigated as a neurosurgical intervention for oncological applications throughout the body in active post-market studies. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter-space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue where closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) from seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were 1470 m^-1 mean, 1360 m^-1 median, 369 m^-1 standard deviation, 933 m^-1 minimum and 2260 m^-1 maximum. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates an increased precision in the characterization of the optical parameters needed to plan MRgLITT procedures. This investigation demonstrates the potential for the optimization and validation of more sophisticated bioheat models that incorporate the uncertainty of the data into the predictions, e.g. stochastic finite element methods.
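To make the gradient-based search step concrete, here is a minimal Python sketch of fitting an effective attenuation coefficient to temperature data; the point-source model, parameter values, and noise level are hypothetical stand-ins for illustration, not the authors' bioheat solution or data.

import numpy as np
from scipy.optimize import minimize

def model_temperature(r, mu_eff, power=10.0, k=0.5):
    # Toy steady-state point-source solution: T ~ P * exp(-mu_eff * r) / (4 pi k r)
    return power * np.exp(-mu_eff * r) / (4.0 * np.pi * k * r)

def misfit(log_mu, r_obs, t_obs):
    mu_eff = np.exp(log_mu[0])          # log parameterization enforces positivity
    resid = model_temperature(r_obs, mu_eff) - t_obs
    return 0.5 * np.sum(resid ** 2)

rng = np.random.default_rng(0)
r_obs = np.linspace(2e-3, 1e-2, 20)                      # radii (m) of observed voxels
t_obs = model_temperature(r_obs, 1470.0) + 0.1 * rng.standard_normal(r_obs.size)

res = minimize(misfit, x0=[np.log(1000.0)], args=(r_obs, t_obs), method="BFGS")
print("estimated mu_eff [1/m]:", np.exp(res.x[0]))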
Stochastic system identification in structural dynamics
Safak, Erdal
1988-01-01
Recently, new identification methods have been developed by using the concept of optimal-recursive filtering and stochastic approximation. These methods, known as stochastic identification, are based on the statistical properties of the signal and noise, and do not require the assumptions of current methods. The criterion for stochastic system identification is that the difference between the recorded output and the output from the identified system (i.e., the residual of the identification) should be equal to white noise. In this paper, first a brief review of the theory is given. Then, an application of the method is presented by using ambient vibration data from a nine-story building.
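The whiteness criterion above is easy to check in practice. The following is a small sketch, with assumed synthetic residuals, that tests whether the sample autocorrelations of the identification residual stay inside the 95% confidence band expected for white noise.

import numpy as np

def is_white(residual, n_lags=20):
    r = residual - residual.mean()
    acf = np.array([np.dot(r[:-k], r[k:]) for k in range(1, n_lags + 1)])
    acf = acf / np.dot(r, r)                   # normalized sample autocorrelation
    bound = 1.96 / np.sqrt(r.size)             # 95% band for a white-noise sequence
    return np.all(np.abs(acf) < bound)

resid = np.random.default_rng(0).standard_normal(2048)  # stand-in for identification residuals
print("residual passes whiteness test:", is_white(resid))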
Stochastic approach for radionuclides quantification
NASA Astrophysics Data System (ADS)
Clement, A.; Saurel, N.; Perrin, G.
2018-01-01
Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard, in order to quantify the activity of nuclear materials by determining the calibration coefficient, are useless on non-reproducible, complex and singular nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and involve a high variability of objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition. These data are density, material, screen, geometric shape, matrix composition, and matrix and source distribution. Some of them depend strongly on package data knowledge and operator background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment and without knowledge of the internal package configuration. This method combines a global stochastic approach, which uses, among others, surrogate models to simulate the gamma attenuation behaviour, a Bayesian approach, which considers conditional probability densities of the problem inputs, and Markov chain Monte Carlo (MCMC) algorithms, which solve the inverse problem, with the gamma-ray emission spectrum of the radionuclides and the outside dimensions of the objects of interest. The methodology is being tested to quantify actinide activity for standard sources in different kinds of matrices, compositions, and source configurations, in terms of actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
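As a hedged illustration of the MCMC ingredient only, the sketch below runs a random-walk Metropolis sampler for a single scalar activity given a toy surrogate attenuation response; the names, Gaussian likelihood, and proposal scale are assumptions for the example, not the CEA implementation.

import numpy as np

def log_posterior(A, counts, surrogate, sigma=0.1):
    if A <= 0.0:
        return -np.inf                       # positivity prior on the activity
    return -0.5 * np.sum((counts - A * surrogate) ** 2) / sigma ** 2

rng = np.random.default_rng(0)
surrogate = np.exp(-np.linspace(0.0, 2.0, 16))       # stand-in attenuation response
counts = 3.0 * surrogate + rng.normal(0.0, 0.1, 16)  # synthetic detector counts

chain, A = [], 1.0
for _ in range(20000):
    prop = A + rng.normal(0.0, 0.2)                  # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop, counts, surrogate) - \
            log_posterior(A, counts, surrogate):
        A = prop                                     # accept the move
    chain.append(A)
print("posterior mean activity:", np.mean(chain[5000:]))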
NASA Astrophysics Data System (ADS)
Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J.
2010-12-01
We present a detailed description of our newly developed stochastic approach for solving Parker's transport equation, which we believe is the first attempt to solve it with time dependence in 3-D, evolving from our 3-D steady state stochastic approach. Our formulation of this method is general and is valid for any type of heliospheric magnetic field, although we choose the standard Parker field as an example to illustrate the steps to calculate the transport of galactic cosmic rays. Our 3-D stochastic method is different from other stochastic approaches in the literature in several ways. For example, we employ spherical coordinates to integrate directly, which makes the code much more efficient by reducing coordinate transformations. What is more, the equivalence between our stochastic differential equations and Parker's transport equation is guaranteed by Ito's theorem in contrast to some other approaches. We generalize the technique for calculating particle flux based on the pseudoparticle trajectories for steady state solutions and for time-dependent solutions in 3-D. To validate our code, first we show that good agreement exists between solutions obtained by our steady state stochastic method and a traditional finite difference method. Then we show that good agreement also exists for our time-dependent method for an idealized and simplified heliosphere which has a Parker magnetic field and a simple initial condition for two different inner boundary conditions.
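For orientation, a drastically simplified sketch of the pseudoparticle idea follows: Euler-Maruyama integration of a 1-D diffusion-advection analogue of the transport equation. The coefficients and boundary handling are toy assumptions; the actual method works in 3-D spherical coordinates with the full Parker drift terms and Ito-consistent coefficients.

import numpy as np

rng = np.random.default_rng(1)
kappa, v_sw = 1.0, 0.4                 # assumed diffusion coefficient and solar wind speed
dt, n_steps, n_particles = 1e-3, 2000, 5000

r = np.full(n_particles, 5.0)          # launch pseudoparticles at the observer radius
for _ in range(n_steps):
    drift = -v_sw                      # toy drift; the real scheme includes gradients of kappa
    r += drift * dt + np.sqrt(2.0 * kappa * dt) * rng.standard_normal(n_particles)
    r = np.clip(r, 0.1, 50.0)          # crude inner/outer boundary handling

print("mean pseudoparticle radius after integration:", r.mean())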
Tsunamis: stochastic models of occurrence and generation mechanisms
Geist, Eric L.; Oglesby, David D.
2014-01-01
The devastating consequences of the 2004 Indian Ocean and 2011 Japan tsunamis have led to increased research into many different aspects of the tsunami phenomenon. In this entry, we review research related to the observed complexity and uncertainty associated with tsunami generation, propagation, and occurrence described and analyzed using a variety of stochastic methods. In each case, seismogenic tsunamis are primarily considered. Stochastic models are developed from the physical theories that govern tsunami evolution combined with empirical models fitted to seismic and tsunami observations, as well as tsunami catalogs. These stochastic methods are key to providing probabilistic forecasts and hazard assessments for tsunamis. The stochastic methods described here are similar to those described for earthquakes (Vere-Jones 2013) and volcanoes (Bebbington 2013) in this encyclopedia.
2–stage stochastic Runge–Kutta for stochastic delay differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosli, Norhayati; Jusoh Awang, Rahimah; Bahar, Arifah
2015-05-15
This paper proposes a newly developed one-step derivative-free method, the 2-stage stochastic Runge-Kutta (SRK2) scheme, to approximate the solution of stochastic delay differential equations (SDDEs) with a constant time lag, r > 0. A general formulation of stochastic Runge-Kutta methods for SDDEs is introduced, and the Stratonovich Taylor series expansion for the numerical solution of SRK2 is presented. The local truncation error of SRK2 is measured by comparing the Stratonovich Taylor expansion of the exact solution with the computed solution. A numerical experiment is performed to verify the validity of the method in simulating the strong solution of SDDEs.
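As a baseline for comparison (not the SRK2 scheme itself), the sketch below integrates a scalar SDDE dX = a X(t - r) dt + b X(t) dW with constant lag r by Euler-Maruyama, handling the delay through a constant history function; all parameter values are illustrative.

import numpy as np

a, b, lag, dt, T = -1.0, 0.2, 1.0, 0.01, 10.0
n, n_lag = int(T / dt), int(lag / dt)
rng = np.random.default_rng(2)

x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    x_del = 1.0 if k < n_lag else x[k - n_lag]   # constant history X(t) = 1 for t < r
    dW = np.sqrt(dt) * rng.standard_normal()
    x[k + 1] = x[k] + a * x_del * dt + b * x[k] * dW
print("X(T) sample:", x[-1])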
Assessing the causal effect of policies: an example using stochastic interventions.
Díaz, Iván; van der Laan, Mark J
2013-11-19
Assessing the causal effect of an exposure often involves the definition of counterfactual outcomes in a hypothetical world in which the stochastic nature of the exposure is modified. Although stochastic interventions are a powerful tool to measure the causal effect of a realistic intervention that intends to alter the population distribution of an exposure, their importance for answering questions about plausible policy interventions has been obscured by the generalized use of deterministic interventions. In this article, we follow the approach described in Díaz and van der Laan (2012) to define and estimate the effect of an intervention that is expected to cause a truncation in the population distribution of the exposure. The observed data parameter that identifies the causal parameter of interest is established, as well as its efficient influence function under the non-parametric model. Inverse probability of treatment weighted (IPTW), augmented IPTW, and targeted minimum loss-based estimators (TMLE) are proposed, and their consistency and efficiency properties are determined. An extension to longitudinal data structures is presented and its use is demonstrated with a real data example.
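To illustrate the IPTW idea for a stochastic intervention, the sketch below estimates the post-intervention mean outcome under a shift of a continuous exposure; the normal exposure model and the shift intervention are illustrative assumptions and do not reproduce the truncation intervention or the TMLE machinery of the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 5000
w = rng.normal(0.0, 1.0, n)                   # covariate
a = rng.normal(0.5 * w, 1.0)                  # exposure drawn given the covariate
y = rng.normal(a + w, 1.0)                    # outcome

delta = 0.5                                   # stochastic intervention: shift exposure by delta
g = norm.pdf(a, loc=0.5 * w, scale=1.0)       # density of the observed exposure
g_shift = norm.pdf(a - delta, loc=0.5 * w, scale=1.0)  # post-intervention exposure density

iptw = np.mean(y * g_shift / g)               # IPTW estimate of the post-intervention mean E[Y]
print("IPTW estimate:", iptw, "approximate truth:", np.mean(y) + delta)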
Model-Free Stochastic Localization of CBRN Releases
2013-01-01
Ioannis Ch. Paschalidis, Senior Member, IEEE. Abstract: We present a novel two-stage methodology for locating a Chemical, Biological, Radiological, or Nuclear (CBRN) source in an urban area using a network of sensors. In contrast to earlier work, our approach does not solve an inverse dispersion problem but relies on data obtained from a simulation of the CBRN dispersion to obtain probabilistic descriptors of sensor measurements under a variety of CBRN...
Yago, Tomoaki; Wakasa, Masanobu
2015-04-21
A practical method to calculate the time evolution of magnetic field effects (MFEs) on photochemical reactions involving radical pairs is developed on the basis of the theory of chemically induced dynamic spin polarization proposed by Pedersen and Freed. In this theory, the stochastic Liouville equation (SLE), including the spin Hamiltonian, diffusive motion of the radical pair, chemical reactions, and spin relaxations, is solved by using the Laplace and inverse Laplace transformation technique. In our practical approach, time evolutions of the MFEs are successfully calculated by applying the Miller-Guy method, instead of the final value theorem, to the inverse Laplace transformation process. In particular, the SLE calculations are completed in a short time when the radical pair dynamics can be described by chemical kinetics consisting of diffusion, reactions, and spin relaxations. The short calculation time of the SLE analysis enables one to examine various parameter sets for fitting the experimental data. Our study demonstrates that simultaneous fitting of the time evolution of the MFE and of the magnetic field dependence of the MFE provides valuable information on the diffusive motion of radical pairs in nano-structured materials such as micelles, where the lifetimes of radical pairs are longer than hundreds of nanoseconds and the magnetic field dependence of the spin relaxations plays a major role in the generation of the MFE.
Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
NASA Astrophysics Data System (ADS)
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (by maximum likelihood; a computational problem), but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible in the other. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
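A similar fit can be reproduced in Python rather than R; the sketch below, using synthetic data in place of the exchange-rate series, fits scipy's norminvgauss distribution by maximum likelihood and checks the fit with a Kolmogorov-Smirnov test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic stand-in for a daily closing-price (or return) series
x = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=0.0, scale=1.0, size=2000, random_state=rng)

a_hat, b_hat, loc_hat, scale_hat = stats.norminvgauss.fit(x)   # maximum-likelihood fit
ks = stats.kstest(x, "norminvgauss", args=(a_hat, b_hat, loc_hat, scale_hat))
print("fitted (a, b, loc, scale):", a_hat, b_hat, loc_hat, scale_hat)
print("KS p-value:", ks.pvalue)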
Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes
NASA Astrophysics Data System (ADS)
Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew
2018-03-01
We model regular and irregular variations of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the method developed to a SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6°N, 37.4°E). We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and scaled Inv-χ² hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov chain Monte Carlo methods: Gibbs sampling and Metropolis-within-Gibbs for parameter and hyperparameter estimation, respectively. This allows us to quantify model parameter estimation uncertainties as well. We demonstrate the applicability of the proposed method using a synthetic test case. Finally, we apply the method to a real GPS data set, which we decompose into regular and irregular variation components. The result shows that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies the total electron content variability with corresponding error uncertainties.
Palacios, Julia A; Minin, Vladimir N
2013-03-01
Changes in population size influence the genetic diversity of a population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating the conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human influenza A viruses. In both cases, we recover more plausible aspects of the viral demographic histories than the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method.
NASA Astrophysics Data System (ADS)
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry, and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most of the global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do that, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each), and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like dispersion model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28 days. The meteorological dataset of the FLUXNET FR-Gri site (Grignon, France) in 2008 was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved based on several sampling and field-trial strategies: considering 1 or 2 heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (3 repetitions). The inverse modelling approach proved suitable for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor height. The uncertainties and systematic biases are evaluated and discussed.
Further studies using matched filter theory and stochastic simulation for gust loads prediction
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd III
1993-01-01
This paper describes two analysis methods -- one deterministic, the other stochastic -- for computing maximized and time-correlated gust loads for aircraft with nonlinear control systems. The first method is based on matched filter theory; the second is based on stochastic simulation. The paper summarizes the methods, discusses the selection of gust intensity for each method and presents numerical results. A strong similarity between the results from the two methods is seen to exist for both linear and nonlinear configurations.
Wang, Feng; Kang, Mengzhen; Lu, Qi; Letort, Véronique; Han, Hui; Guo, Yan; de Reffye, Philippe; Li, Baoguo
2011-04-01
Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal species used for windbreak and sand stabilization in arid and semi-arid areas in northern China. A model-assisted analysis of its canopy architectural development and functions is valuable for better understanding its behaviour and roles in fragile ecosystems. However, due to the intrinsic complexity and variability of trees, the parametric identification of such models is currently a major obstacle to their evaluation and their validation with respect to real data. The aim of this paper was to present the mathematical framework of a stochastic functional-structural model (GL2) and its parameterization for Mongolian Scots pines, taking into account inter-plant variability in terms of topological development and biomass partitioning. In GL2, plant organogenesis is determined by the realization of random variables representing the behaviour of axillary or apical buds. The associated probabilities are calibrated for Mongolian Scots pines using experimental data including means and variances of the numbers of organs per plant in each order-based class. The functional part of the model relies on the principles of source-sink regulation and is parameterized by direct observations of living trees and the inversion method using measured data for organ mass and dimensions. The final calibration accuracy satisfies both organogenetic and morphogenetic processes. Our hypothesis for the number of organs following a binomial distribution is found to be consistent with the real data. Based on the calibrated parameters, stochastic simulations of the growth of Mongolian Scots pines in plantations are generated by the Monte Carlo method, allowing analysis of the inter-individual variability of the number of organs and biomass partitioning. Three-dimensional (3D) architectures of young Mongolian Scots pines were simulated for 4-, 6- and 8-year-old trees. This work provides a new method for characterizing tree structures and biomass allocation that can be used to build a 3D virtual Mongolian Scots pine forest. The work paves the way for bridging the gap between a single-plant model and a stand model.
A two-level stochastic collocation method for semilinear elliptic equations with random coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Luoping; Zheng, Bin; Lin, Guang
In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximate solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.
Errea, Ion; Calandra, Matteo; Mauri, Francesco
2013-10-25
Palladium hydrides display the largest isotope effect anomaly known in the literature. Replacement of hydrogen with the heavier isotopes leads to higher superconducting temperatures, a behavior inconsistent with harmonic theory. Solving the self-consistent harmonic approximation by a stochastic approach, we obtain the anharmonic free energy, the thermal expansion, and the superconducting properties fully ab initio. We find that the phonon spectra are strongly renormalized by anharmonicity far beyond the perturbative regime. Superconductivity is phonon mediated, but the harmonic approximation largely overestimates the superconducting critical temperatures. We explain the inverse isotope effect, obtaining a -0.38 value for the isotope coefficient in good agreement with experiments, hydrogen anharmonicity being mainly responsible for the isotope anomaly.
Silicon-carbon bond inversions driven by 60-keV electrons in graphene.
Susi, Toma; Kotakoski, Jani; Kepaptsoglou, Demie; Mangler, Clemens; Lovejoy, Tracy C; Krivanek, Ondrej L; Zan, Recep; Bangert, Ursel; Ayala, Paola; Meyer, Jannik C; Ramasse, Quentin
2014-09-12
We demonstrate that 60-keV electron irradiation drives the diffusion of threefold-coordinated Si dopants in graphene by one lattice site at a time. First-principles simulations reveal that each step is caused by an electron impact on a C atom next to the dopant. Although the atomic motion happens below our experimental time resolution, stochastic analysis of 38 such lattice jumps reveals a probability for their occurrence in good agreement with the simulations. Conversions from three- to fourfold-coordinated dopant structures and the subsequent reverse process are significantly less likely than the direct bond inversion. Our results thus provide a model of nondestructive and atomically precise structural modification and detection for two-dimensional materials.
Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G
2007-05-24
The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species population, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for the convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate limitations of some literature methods.
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing (AGF) method is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of the soil properties, which leads to randomness in the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Considering the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regimes of the frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of the frozen soil around the single freezing pipe obtained with the three analogy methods are the same, while the standard deviations are different. The distributions of the standard deviation differ greatly at different radial coordinate locations, and the larger standard deviations occur mainly in the phase change area. The data computed with the random variable and stochastic process methods differ greatly from the measured data, while the data computed with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa
2010-02-21
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
Stochastic simulation in systems biology
Székely, Tamás; Burrage, Kevin
2014-01-01
Natural systems are, almost by definition, heterogeneous: this can be either a boon or an obstacle to be overcome, depending on the situation. Traditionally, when constructing mathematical models of these systems, heterogeneity has typically been ignored, despite its critical role. However, in recent years, stochastic computational methods have become commonplace in science. They are able to appropriately account for heterogeneity; indeed, they are based around the premise that systems inherently contain at least one source of heterogeneity (namely, intrinsic heterogeneity). In this mini-review, we give a brief introduction to theoretical modelling and simulation in systems biology and discuss the three different sources of heterogeneity in natural systems. Our main topic is an overview of stochastic simulation methods in systems biology. There are many different types of stochastic methods. We focus on one group that has become especially popular in systems biology, biochemistry, chemistry and physics. These discrete-state stochastic methods do not follow individuals over time; rather they track only total populations. They also assume that the volume of interest is spatially homogeneous. We give an overview of these methods, with a discussion of the advantages and disadvantages of each, and suggest when each is more appropriate to use. We also include references to software implementations of them, so that beginners can quickly start using stochastic methods for practical problems of interest.
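For readers new to these discrete-state methods, the following compact sketch implements Gillespie's direct-method stochastic simulation algorithm (SSA) for a birth-death process (production at rate k1, degradation at rate k2*X); the rates and time horizon are illustrative.

import numpy as np

rng = np.random.default_rng(5)
k1, k2 = 10.0, 0.1
x, t, t_end = 0, 0.0, 100.0
while t < t_end:
    a1, a2 = k1, k2 * x                  # reaction propensities
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)       # exponentially distributed time to next reaction
    if rng.uniform() * a0 < a1:          # choose which reaction fires
        x += 1
    else:
        x -= 1
print("population at t_end (steady-state mean k1/k2 = 100):", x)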
3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation
NASA Astrophysics Data System (ADS)
Chen, Z.; Meng, X.; Guo, L.; Liu, G.
2011-12-01
In recent years, large scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonics studies in China. Aiming at the large scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added different kinds of a priori information and constraints to deal with non-uniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not been widely applied to large-scale data yet. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need for a fast and efficient inversion method for large scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent excess masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with the real data, and continue to perform 3D correlation imaging on the residual gravity data. After several iterations, we obtain a satisfactory result. The newly developed general purpose computing technology of the Nvidia GPU (Graphics Processing Unit) has been put into practice and received widespread attention in many areas. Based on the GPU programming model and two parallel levels, five CPU loops for the main computation of 3D correlation imaging are converted into three loops in GPU kernel functions, thus achieving GPU/CPU collaborative computing. The two inner loops are defined as the dimensions of blocks and the three outer loops are defined as the dimensions of threads, thus realizing the double-loop block calculation. Tests on theoretical and real gravity data show that the results are reliable and the computing time is greatly reduced. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.
Venturi, D; Karniadakis, G E
2014-06-08
Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.
Asymmetric and Stochastic Behavior in Magnetic Vortices Studied by Soft X-ray Microscopy
NASA Astrophysics Data System (ADS)
Im, Mi-Young
Asymmetry and stochasticity in spin processes are not only long-standing fundamental issues but also highly relevant to technological applications of nanomagnetic structures in memory and storage nanodevices. These nontrivial phenomena have been studied by direct imaging of spin structures in magnetic vortices utilizing magnetic transmission soft x-ray microscopy (BL6.1.2 at the ALS). Magnetic vortices have attracted enormous scientific interest due to their fascinating spin structures, consisting of a circularity rotating clockwise (c = +1) or counter-clockwise (c = -1) and a polarity pointing either up (p = +1) or down (p = -1). We observed a symmetry breaking in the formation process of vortex structures in circular permalloy (Ni80Fe20) disks. The generation rates of the two different vortex groups with the signatures cp = +1 and cp = -1 are completely asymmetric. The asymmetric nature was interpreted to be triggered by the "intrinsic" Dzyaloshinskii-Moriya interaction (DMI) arising from the spin-orbit coupling due to the lack of inversion symmetry near the disk surface and by "extrinsic" factors such as roughness and defects. We also investigated the stochastic behavior of vortex creation in arrays of asymmetric disks. The stochasticity was found to be very sensitive to the geometry of the disk arrays, particularly the interdisk distance. The experimentally observed phenomenon could not be explained by the thermal fluctuation effect, which has been considered a main reason for stochastic behavior in spin processes. We demonstrated for the first time that the ultrafast dynamics at the early stage of vortex creation, which has a character of classical chaos, significantly affects the stochastic nature observed at the steady state in asymmetric disks. This work provides a new perspective on dynamics as a critical factor contributing to the stochasticity in spin processes and also the possibility of controlling the intrinsic stochastic nature by optimizing the design of asymmetric disk arrays. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the Leading Foreign Research Institute Recruitment Program through the NRF.
Extreme wave formation in unidirectional sea due to stochastic wave phase dynamics
NASA Astrophysics Data System (ADS)
Wang, Rui; Balachandran, Balakumar
2018-07-01
The authors consider a stochastic model based on the interaction and phase coupling amongst wave components that are modified envelope soliton solutions to the nonlinear Schrödinger equation. A probabilistic study is carried out and the resulting findings are compared with ocean wave field observations and laboratory experimental results. The wave height probability distribution obtained from the model is found to match well with prior data in the large wave height region. From the eigenvalue spectrum obtained through the Inverse Scattering Transform, it is revealed that the deep-water wave groups move at a speed different from the linear group speed, which justifies the inclusion of phase correction to the envelope solitary wave components. It is determined that phase synchronization amongst elementary solitary wave components can be critical for the formation of extreme waves in unidirectional sea states.
The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis.
Godfrey, Devon J; McAdams, H Page; Dobbins, James T
2013-02-01
Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. For scan angles of 20° and 5 mm plane separation, seven MITS planes must be averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency "edge" information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles/mm. The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors' institution.
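The slab-averaging operation itself is simple; a minimal sketch follows, assuming a stand-in stack of reconstructed MITS planes, where a seven-plane slab is the mean of seven adjacent planes along the depth axis.

import numpy as np

mits = np.random.default_rng(0).random((61, 128, 128))   # stand-in stack of MITS planes (z, y, x)

def mits_slab(stack, n_planes=7):
    # Average n_planes adjacent planes, centered on each interior depth index
    half = n_planes // 2
    return np.stack([stack[z - half:z + half + 1].mean(axis=0)
                     for z in range(half, stack.shape[0] - half)])

mitsa7 = mits_slab(mits, 7)
print("slab stack shape:", mitsa7.shape)                 # (55, 128, 128)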
High performance GPU processing for inversion using uniform grid searches
NASA Astrophysics Data System (ADS)
Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios
2017-04-01
Many geophysical problems are described by redundant, highly non-linear systems of ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, based on Monte Carlo sampling or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming on common CPU-based computers. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community and provides much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems were solved on both platforms for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables, and the execution time as a function of grid dimension was recorded for each problem. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10^12 gridpoints require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high performance platforms, such as a GPU, in cases where nearly real-time decisions are necessary, for example finite fault modeling to identify possible tsunami sources.
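The inequality-based grid scan is easy to prototype on a CPU before porting to CUDA. Below is a vectorized toy version under an assumed forward model and noise levels: keep the gridpoints whose predictions satisfy |prediction - observation| <= k*sigma for every observation, and shrink k until a tight cluster remains.

import numpy as np

def forward(m1, m2):                          # toy 2-parameter forward model
    return np.stack([m1 + m2, m1 - 2.0 * m2, 3.0 * m1 * m2])

obs = forward(1.0, 0.5) + np.array([0.01, -0.02, 0.03])   # synthetic observations
sigma = np.array([0.05, 0.05, 0.05])                      # assumed standard errors

g1, g2 = np.meshgrid(np.linspace(0, 2, 400), np.linspace(0, 1, 400))
pred = forward(g1.ravel(), g2.ravel())        # shape (n_obs, n_gridpoints)
for k in (4.0, 3.0, 2.0):
    ok = np.all(np.abs(pred - obs[:, None]) <= k * sigma[:, None], axis=0)
    cluster = np.column_stack([g1.ravel()[ok], g2.ravel()[ok]])
    print(f"k={k}: {ok.sum()} gridpoints, cluster mean {cluster.mean(axis=0)}")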
Discrete stochastic simulation methods for chemically reacting systems.
Cao, Yang; Samuels, David C
2009-01-01
Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present with integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemistry dynamics in a single cell, there are increasing studies using this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods based on that theory. We focus on nonstiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's stochastic simulation algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application.
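As a companion to the SSA sketch given earlier in this section, the following illustrates explicit tau-leaping on the same birth-death system, firing Poisson-distributed reaction counts over each leap; the fixed step tau is a simplifying assumption (practical implementations select tau adaptively).

import numpy as np

rng = np.random.default_rng(6)
k1, k2, tau = 10.0, 0.1, 0.05
x, t, t_end = 0, 0.0, 100.0
while t < t_end:
    n_birth = rng.poisson(k1 * tau)          # Poisson number of production events in the leap
    n_death = rng.poisson(k2 * x * tau)      # Poisson number of degradation events in the leap
    x = max(x + n_birth - n_death, 0)        # guard against negative populations
    t += tau
print("tau-leaping population at t_end:", x)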
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn
We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-12-10
Stochastic approximation methods have found extensive and diversified applications. The recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey of some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
Inverse Statistics and Asset Allocation Efficiency
NASA Astrophysics Data System (ADS)
Bolgorian, Meysam
In this paper, using inverse statistics analysis, the effect of the investment horizon on the efficiency of portfolio selection is examined. Inverse statistics analysis is a general tool, also known as the probability distribution of exit time, that is used for detecting the distribution of the time in which a stochastic process exits from a zone. This analysis was used in Refs. 1 and 2 for studying financial return time series. This distribution provides an optimal investment horizon which determines the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account in the TSE leads to more efficiency for large portfolios, while for stocks selected from the S&P 500, regardless of portfolio size, this strategy not only fails to produce more efficient portfolios, but longer investment horizons actually provide more efficiency.
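A minimal sketch of the exit-time computation follows, using synthetic daily log-returns in place of TSE or S&P 500 data: for each starting day, record the first time the cumulative return crosses a target level rho, then histogram those waiting times to locate the most likely (optimal) horizon.

import numpy as np

rng = np.random.default_rng(7)
log_ret = rng.normal(5e-4, 0.01, 20000)       # stand-in daily log-returns
rho = 0.05                                    # target return level

exits = []
for start in range(len(log_ret) - 500):
    cum = np.cumsum(log_ret[start:start + 500])
    hit = np.nonzero(cum >= rho)[0]           # first crossing of the +rho barrier
    if hit.size:
        exits.append(hit[0] + 1)              # waiting (exit) time in days

counts, edges = np.histogram(exits, bins=50)
print("most likely investment horizon (days):", edges[np.argmax(counts)])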
Reconstruction From Multiple Particles for 3D Isotropic Resolution in Fluorescence Microscopy.
Fortun, Denis; Guichard, Paul; Hamel, Virginie; Sorzano, Carlos Oscar S; Banterle, Niccolo; Gonczy, Pierre; Unser, Michael
2018-05-01
The imaging of proteins within macromolecular complexes has been limited by the low axial resolution of optical microscopes. To overcome this problem, we propose a novel computational reconstruction method that yields isotropic resolution in fluorescence imaging. The guiding principle is to reconstruct a single volume from the observations of multiple rotated particles. Our new operational framework detects particles, estimates their orientations, and reconstructs the final volume. The main challenge comes from the absence of an initial template and of a priori knowledge about the orientations. We formulate the estimation as a blind inverse problem, and propose a block-coordinate stochastic approach to solve the associated non-convex optimization problem. The reconstruction is performed jointly in multiple channels. We demonstrate that our method is able to reconstruct volumes with 3D isotropic resolution on simulated data. We also perform isotropic reconstructions from real experimental data of doubly labeled purified human centrioles. Our approach revealed the precise localization of the centriolar protein Cep63 around the centriole microtubule barrel. Overall, our method offers new perspectives for applications in biology that require the isotropic mapping of proteins within macromolecular assemblies.
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Tan, He-Ping
2016-11-01
A rapid computational method called the generalized source multi-flux method (GSMFM) was developed to simulate outgoing radiative intensities in arbitrary directions at the boundary surfaces of absorbing, emitting, and scattering media, which served as input for the inverse analysis. A hybrid least-squares QR decomposition-stochastic particle swarm optimization (LSQR-SPSO) algorithm based on the forward GSMFM solution was developed to simultaneously reconstruct the multi-dimensional temperature distribution and the absorption and scattering coefficients of cylindrical participating media. The retrieval results for axisymmetric and non-axisymmetric temperature distributions indicated that the temperature distribution and the scattering and absorption coefficients can be retrieved accurately using the LSQR-SPSO algorithm even with noisy data. Moreover, the influences of the extinction coefficient and scattering albedo on the accuracy of the estimation were investigated, and the results suggest that the reconstruction accuracy decreases as the extinction coefficient and the scattering albedo increase. Finally, a non-contact measurement platform for flame temperature fields based on light field imaging was set up to validate the reconstruction model experimentally.
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
2016-05-11
AFRL-AFOSR-JP-TR-2016-0046: Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization. U Kang, Korea...; grant number FA2386...
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management have been conducted under uncertain conditions where fuzzy, stochastic, and interval information coexist, solving conventional linear programming problems that integrate the fuzzy method with the other two approaches has been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, requiring fewer constraints and significantly less computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between changes in system cost and the uncertainties, which could support further analysis of tradeoffs between waste management cost and system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
A manifold independent approach to understanding transport in stochastic dynamical systems
NASA Astrophysics Data System (ADS)
Bollt, Erik M.; Billings, Lora; Schwartz, Ira B.
2002-12-01
We develop a new collection of tools aimed at studying stochastically perturbed dynamical systems. Specifically, in the setting of bi-stability, that is, a two-attractor system, it has previously been observed numerically that a small noise volume is sufficient to destroy would-be barriers of the zero-noise case in the phase space (pseudo-barriers), thus creating a pre-heteroclinic-tangency chaos-like behavior. The stochastic dynamical system has a corresponding Frobenius-Perron operator with a stochastic kernel, which describes how densities of initial conditions move under the noisy map. Thus, in studying the action of the Frobenius-Perron operator, we learn about the transport of the map; we have employed a Galerkin-Ulam-like method to project the Frobenius-Perron operator onto a discrete basis set of characteristic functions to highlight this action localized in specified regions of the phase space. Graph-theoretic methods allow us to re-order the resulting finite-dimensional Markov operator approximation so as to highlight the regions of the original phase space which are particularly active pseudo-barriers of the stochastic dynamics. Our toolbox allows us to find: (1) regions of high activity of transport, (2) flux across pseudo-barriers, and also (3) expected time of escape from pseudo-basins. Some of these quantities are also accessible via the manifold-dependent stochastic Melnikov method, but Melnikov only applies to a very special class of models for which the unperturbed homoclinic orbit is available. Our methods are unique in that they can essentially be considered as a “black-box” of tools which can be applied to a wide range of stochastic dynamical systems in the absence of a priori knowledge of manifold structures. We use a model of childhood diseases to showcase our methods. Our tools allow us to make specific observations of: (1) loss of reducibility between basins with increasing noise, (2) identification in the phase space of active regions of stochastic transport, (3) stochastic flux which essentially completes the heteroclinic tangle.
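The Galerkin-Ulam projection can be prototyped in a few lines: partition the phase space into boxes, push samples from each box through the noisy map, and count where they land. The sketch below is a one-dimensional illustration under assumed dynamics; a logistic map with Gaussian noise stands in for the childhood-disease model, and all settings are illustrative.

```python
# Ulam-Galerkin sketch: finite Markov approximation of the stochastic
# Frobenius-Perron operator of a noisy 1D map (assumed toy dynamics).
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_samples = 50, 2000
edges = np.linspace(0.0, 1.0, n_bins + 1)

def noisy_map(x, sigma=0.02):
    # logistic map plus Gaussian noise, clipped back into [0, 1]
    y = 3.9 * x * (1.0 - x) + sigma * rng.standard_normal(x.shape)
    return np.clip(y, 0.0, 1.0)

P = np.zeros((n_bins, n_bins))
for i in range(n_bins):
    x0 = rng.uniform(edges[i], edges[i + 1], n_samples)    # seed box i
    idx = np.clip(np.digitize(noisy_map(x0), edges) - 1, 0, n_bins - 1)
    P[i] = np.bincount(idx, minlength=n_bins) / n_samples  # row-stochastic

# Leading left eigenvector approximates the invariant density; re-ordering
# P with graph methods would expose weakly coupled regions (pseudo-barriers).
evals, evecs = np.linalg.eig(P.T)
pi = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
pi /= pi.sum()
```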
Compressible cavitation with stochastic field method
NASA Astrophysics Data System (ADS)
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method, solving pdf transport based on Euler fields, has been proposed, which eliminates the necessity to mix Euler and Lagrange techniques or to prescribe pdf assumptions. In the present work, carried out as part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation", a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow such that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf, or binning methods can easily be extended to the stochastic field formulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Jakeman, John; Gittelson, Claude
2015-01-08
In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. Furthermore, the local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower-dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
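For intuition, a global (non-localized) polynomial chaos expansion in one random dimension can be built by projecting a model output onto probabilists' Hermite polynomials. The following is a minimal sketch with a hypothetical scalar "solution" u(ξ) standing in for a PDE solve; it illustrates the expansion itself, not the localized method of the paper.

```python
# Minimal Hermite polynomial chaos sketch in one random dimension.
# u(xi) is a hypothetical stand-in for a PDE solution with random input.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def u(xi):
    return np.exp(0.3 * xi)          # toy response to a N(0,1) input

order = 6
nodes, weights = hermegauss(20)               # quadrature for weight e^(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)      # normalize to the N(0,1) measure

coeffs = []
for k in range(order + 1):
    ck = np.zeros(k + 1); ck[k] = 1.0
    Hk = hermeval(nodes, ck)                  # He_k evaluated at the nodes
    coeffs.append(np.sum(weights * u(nodes) * Hk) / factorial(k))

mean_pce = coeffs[0]                          # E[u] from the expansion
var_pce = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(mean_pce, var_pce)   # compare: exact mean is exp(0.045)
```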
Stochastic reconstructions of spectral functions: Application to lattice QCD
NASA Astrophysics Data System (ADS)
Ding, H.-T.; Kaczmarek, O.; Mukherjee, Swagato; Ohno, H.; Shu, H.-T.
2018-05-01
We present a detailed study of the application of two stochastic approaches, the stochastic optimization method (SOM) and stochastic analytical inference (SAI), to extract spectral functions from Euclidean correlation functions. SOM has the advantage that it does not require prior information. On the other hand, SAI is a more generalized method based on Bayesian inference. Under a mean-field approximation SAI reduces to the often-used maximum entropy method (MEM), and for a specific choice of the prior SAI becomes equivalent to SOM. To test the applicability of these two stochastic methods to lattice QCD, we first apply them to various reasonably chosen model correlation functions and present detailed comparisons of the reconstructed spectral functions obtained from SOM, SAI, and MEM. Next, we present similar studies for charmonia correlation functions obtained from lattice QCD computations using clover-improved Wilson fermions on large, fine, isotropic lattices at 0.75 and 1.5 Tc, Tc being the deconfinement transition temperature of a pure gluon plasma. We find that SAI and SOM give results consistent with MEM at these two temperatures.
Schilde, M; Doerner, K F; Hartl, R F
2014-10-01
In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches.
Comparison of stochastic optimization methods for all-atom folding of the Trp-Cage protein.
Schug, Alexander; Herges, Thomas; Verma, Abhinav; Lee, Kyu Hwan; Wenzel, Wolfgang
2005-12-09
The performances of three different stochastic optimization methods for all-atom protein structure prediction are investigated and compared. We use the recently developed all-atom free-energy force field (PFF01), which was demonstrated to correctly predict the native conformation of several proteins as the global optimum of the free energy surface. The trp-cage protein (PDB-code 1L2Y) is folded with the stochastic tunneling method, a modified parallel tempering method, and the basin-hopping technique. All the methods correctly identify the native conformation, and their relative efficiency is discussed.
Boore, David M.; Di Alessandro, Carola; Abrahamson, Norman A.
2014-01-01
The stochastic method of simulating ground motions requires the specification of the shape and scaling with magnitude of the source spectrum. The spectral models commonly used are either single-corner-frequency or double-corner-frequency models, but the latter have no flexibility to vary the high-frequency spectral levels for a specified seismic moment. Two generalized double-corner-frequency ω² source spectral models are introduced, one in which two spectra are multiplied together, and another where they are added. Both models have a low-frequency dependence controlled by the seismic moment, and a high-frequency spectral level controlled by the seismic moment and a stress parameter. A wide range of spectral shapes can be obtained from these generalized spectral models, which makes them suitable for inversions of data to obtain spectral models that can be used in ground-motion simulations in situations where adequate data are not available for purely empirical determinations of ground motions, as in stable continental regions. As an example of the use of the generalized source spectral models, data from up to 40 stations from seven events, plus response spectra at two distances and two magnitudes from recent ground-motion prediction equations, were inverted to obtain the parameters controlling the spectral shapes, as well as a finite-fault factor that is used in point-source, stochastic-method simulations of ground motion. The fits to the data are comparable to or even better than those from finite-fault simulations, even for sites close to large earthquakes.
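The two generalized forms can be written down compactly. The sketch below encodes one plausible reading, a geometric-mean (multiplicative) and a weighted-sum (additive) combination of two single-corner ω² terms, as a hedged illustration; the exact parameterization and scaling constants used in the paper may differ.

```python
# Hedged sketch of generalized double-corner-frequency source spectra:
# both variants keep the low-frequency level set by the seismic moment M0
# and an overall f^-2 (omega-squared) high-frequency decay.
import numpy as np

def corner(f, fc):
    return 1.0 / (1.0 + (f / fc) ** 2)       # single-corner omega-squared term

def multiplicative(f, M0, fa, fb):
    # geometric mean of two corner terms -> combined f^-2 falloff
    return M0 * np.sqrt(corner(f, fa) * corner(f, fb))

def additive(f, M0, fa, fb, eps=0.5):
    # weighted sum of two corner terms; eps reshapes the spectrum
    return M0 * (eps * corner(f, fa) + (1.0 - eps) * corner(f, fb))

f = np.logspace(-2, 2, 200)
spec = multiplicative(f, M0=1.0, fa=0.1, fb=2.0)
```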
Black-Scholes model under subordination
NASA Astrophysics Data System (ADS)
Stanislavsky, A. A.
2003-02-01
In this paper, we consider a new mathematical extension of the Black-Scholes (BS) model in which the stochastic time and stock share price evolution is described by two independent random processes. The parent process is Brownian, and the directing process is inverse to the totally skewed, strictly α-stable process. The subordinated process represents the Brownian motion indexed by an independent, continuous and increasing process. This allows us to introduce the long-term memory effects in the classical BS model.
NASA Astrophysics Data System (ADS)
Tasolamprou, A. C.; Mitov, M.; Zografopoulos, D. C.; Kriezis, E. E.
2009-03-01
Single-layer cholesteric liquid crystals exhibit a reflection coefficient of at most 50% for unpolarized incident light. We give theoretical and experimental evidence of single-layer polymer-stabilized cholesteric liquid-crystalline structures that demonstrate hyper-reflective properties. These original features derive from the concurrent and randomly interlaced presence of both helicities. The fundamental properties of such structures are revealed by detailed numerical simulations based on a stochastic approach.
Inverse Opal Scaffolds and Their Biomedical Applications.
Zhang, Yu Shrike; Zhu, Chunlei; Xia, Younan
2017-09-01
Three-dimensional porous scaffolds play a pivotal role in tissue engineering and regenerative medicine by functioning as biomimetic substrates to manipulate cellular behaviors. While many techniques have been developed to fabricate porous scaffolds, most of them rely on stochastic processes that typically result in scaffolds with pores uncontrolled in terms of size, structure, and interconnectivity, greatly limiting their use in tissue regeneration. Inverse opal scaffolds, in contrast, possess uniform pores inheriting from the template comprised of a closely packed lattice of monodispersed microspheres. The key parameters of such scaffolds, including architecture, pore structure, porosity, and interconnectivity, can all be made uniform across the same sample and among different samples. In conjunction with a tight control over pore sizes, inverse opal scaffolds have found widespread use in biomedical applications. In this review, we provide a detailed discussion on this new class of advanced materials. After a brief introduction to their history and fabrication, we highlight the unique advantages of inverse opal scaffolds over their non-uniform counterparts. We then showcase their broad applications in tissue engineering and regenerative medicine, followed by a summary and perspective on future directions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stochastic Estimation via Polynomial Chaos
2015-10-01
AFRL-RW-EG-TR-2015-108, Douglas V. Nance, Air Force Research Laboratory. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second-order stochastic processes.
The Kolmogorov-Obukhov Statistical Theory of Turbulence
NASA Astrophysics Data System (ADS)
Birnir, Björn
2013-08-01
In 1941 Kolmogorov and Obukhov postulated the existence of a statistical theory of turbulence, which allows the computation of statistical quantities that can be simulated and measured in a turbulent system. These are quantities such as the moments, the structure functions and the probability density functions (PDFs) of the turbulent velocity field. In this paper we will outline how to construct this statistical theory from the stochastic Navier-Stokes equation. The additive noise in the stochastic Navier-Stokes equation is generic noise given by the central limit theorem and the large deviation principle. The multiplicative noise consists of jumps multiplying the velocity, modeling jumps in the velocity gradient. We first estimate the structure functions of turbulence and establish the Kolmogorov-Obukhov 1962 scaling hypothesis with the She-Leveque intermittency corrections. Then we compute the invariant measure of turbulence, writing the stochastic Navier-Stokes equation as an infinite-dimensional Ito process, and solving the linear Kolmogorov-Hopf functional differential equation for the invariant measure. Finally we project the invariant measure onto the PDF. The PDFs turn out to be the normalized inverse Gaussian (NIG) distributions of Barndorff-Nielsen, and compare well with PDFs from simulations and experiments.
Molecular finite-size effects in stochastic models of equilibrium chemical systems.
Cianci, Claudia; Smith, Stephen; Grima, Ramon
2016-02-28
The reaction-diffusion master equation (RDME) is a standard modelling approach for understanding stochastic and spatial chemical kinetics. An inherent assumption is that molecules are point-like. Here, we introduce the excluded volume reaction-diffusion master equation (vRDME) which takes into account volume exclusion effects on stochastic kinetics due to a finite molecular radius. We obtain an exact closed-form solution of the RDME and of the vRDME for a general chemical system in equilibrium conditions. The difference between the two solutions increases with the ratio of molecular diameter to the compartment length scale. We show that an increase in the fraction of excluded space can (i) lead to deviations from the classical inverse square root law for the noise strength, (ii) flip the skewness of the probability distribution from right-skewed to left-skewed, (iii) shift the equilibrium of bimolecular reactions so that more product molecules are formed, and (iv) strongly modulate the Fano factors and coefficients of variation. These volume exclusion effects are found to be particularly pronounced for chemical species not involved in chemical conservation laws. Finally, we show that statistics obtained using the vRDME are in good agreement with those obtained from Brownian dynamics with excluded volume interactions.
Correlated noise-based switches and stochastic resonance in a bistable genetic regulation system
NASA Astrophysics Data System (ADS)
Wang, Can-Jun; Yang, Ke-Li
2016-07-01
The correlated noise-based switches and stochastic resonance are investigated in a bistable single-gene switching system driven by an additive noise (environmental fluctuations) and a multiplicative noise (fluctuations of the degradation rate). The correlation between the two noise sources originates from the lysis-lysogeny pathway system of the λ phage. The steady-state probability distribution is obtained by solving the time-independent Fokker-Planck equation, and the effects of the noises are analyzed. The effect of the noises on the switching time between the two stable states (mean first passage time) is investigated by numerical simulation. The stochastic resonance phenomenon is analyzed via the power amplification factor. The results show that the multiplicative noise can induce the switching from "on" → "off" of the protein production, while the additive noise and the correlation between the noise sources can induce the inverse switching "off" → "on". A nonmonotonic behaviour of the average switching time versus the multiplicative noise intensity, for different cross-correlation and additive noise intensities, is observed in the genetic system. There exist optimal values of the additive noise, multiplicative noise, and cross-correlation intensities for which a weak signal can be optimally amplified.
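The qualitative behaviour described here can be reproduced with a standard Euler-Maruyama integration of a bistable Langevin equation driven by correlated multiplicative and additive noise. The sketch below uses a generic bistable drift as a stand-in for the gene-circuit model, so all rates and intensities are illustrative.

```python
# Euler-Maruyama sketch: bistable dynamics with correlated additive and
# multiplicative noise (generic drift; not the paper's exact model).
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 1e-3, 200_000
D_m, D_a, lam = 0.05, 0.05, 0.5          # noise intensities, correlation

def drift(x):
    return x * (1.0 - x) * (x - 0.2)     # stable states near 0 ("off") and 1 ("on")

x, traj = 0.8, np.empty(n_steps)
for i in range(n_steps):
    w1, w2 = rng.standard_normal(2)
    dW_m = np.sqrt(dt) * w1                                     # multiplicative
    dW_a = np.sqrt(dt) * (lam * w1 + np.sqrt(1 - lam**2) * w2)  # additive, corr = lam
    x += drift(x) * dt - np.sqrt(2 * D_m) * x * dW_m + np.sqrt(2 * D_a) * dW_a
    traj[i] = x

# A histogram of `traj` shows the two states; counting transitions between
# them gives an estimate of the mean first passage (switching) time.
```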
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qichun; Zhou, Jinglin; Wang, Hong
In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems and a novel output feedback m-block backstepping controller with linear estimator is designed, where gradient descent optimization is used to tune the design parameters of the controller. It has been shown that the trajectories of the closed-loop stochastic systems are bounded in probability sense and the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed and the effectiveness of the proposed method has been demonstrated using a simulated example.
Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h
2010-11-01
In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. Candidate SARIMA models were chosen by examining the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard errors of the residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years with the help of the selected model.
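Fitting such a model is a one-liner with common time-series libraries. The sketch below shows the selected SARIMA(1,0,0)×(0,1,1)₁₂ specification fitted with statsmodels on a synthetic stand-in for the log-transformed monthly series; the data-generating line is purely illustrative, not the paper's data.

```python
# SARIMA(1,0,0)x(0,1,1)_12 fit with statsmodels; `temps` is a synthetic
# stand-in for the log-transformed monthly temperature series.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("1981-01", periods=420, freq="MS")   # 1981-2015, monthly
temps = pd.Series(3.4 + 0.1 * np.sin(2 * np.pi * idx.month / 12)
                  + 0.02 * rng.standard_normal(420), index=idx)

model = SARIMAX(temps, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
res = model.fit(disp=False)                  # maximum-likelihood estimation
print(res.summary())
forecast = res.get_forecast(steps=36).predicted_mean     # next 3 years
```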
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
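The simplest member of this family of discretization-invariant samplers is the preconditioned Crank-Nicolson (pCN) proposal, which DILI extends with likelihood-informed, operator-weighted directions. The following is a hedged baseline sketch only, with a hypothetical misfit in place of a PDE solve.

```python
# pCN sketch: a function-space MCMC proposal whose acceptance rate does not
# degrade under mesh refinement. DILI adds likelihood-informed weighting;
# this shows only the dimension-robust baseline, with a toy likelihood.
import numpy as np

rng = np.random.default_rng(3)

def log_likelihood(u):
    # hypothetical misfit; a forward model evaluation would go here
    return -0.5 * np.sum((u[:5] - 1.0) ** 2) / 0.1

n, beta, n_iter = 100, 0.2, 5000
u = np.zeros(n)                       # prior: N(0, I) for simplicity
ll = log_likelihood(u)
accepted = 0
for _ in range(n_iter):
    xi = rng.standard_normal(n)                   # draw from the prior
    v = np.sqrt(1.0 - beta**2) * u + beta * xi    # pCN proposal
    ll_v = log_likelihood(v)
    if np.log(rng.random()) < ll_v - ll:          # prior terms cancel exactly
        u, ll = v, ll_v
        accepted += 1
print("acceptance rate:", accepted / n_iter)
```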
Field dynamics inference via spectral density estimation
NASA Astrophysics Data System (ADS)
Frank, Philipp; Steininger, Theo; Enßlin, Torsten A.
2017-11-01
Stochastic differential equations are of utmost importance in various scientific and industrial areas. They are the natural description of dynamical processes whose precise equations of motion are either not known or too expensive to solve, e.g., when modeling Brownian motion. In some cases, the equations governing the dynamics of a physical system on macroscopic scales turn out to be unknown, since they typically cannot be deduced from general principles. In this work, we describe how the underlying laws of a stochastic process can be approximated by the spectral density of the corresponding process. Furthermore, we show how the density can be inferred from possibly very noisy and incomplete measurements of the dynamical field. Generally, inverse problems like these can be tackled with the help of Information Field Theory. For now, we restrict ourselves to linear and autonomous processes. To demonstrate its applicability, we employ our reconstruction algorithm on time series and spatiotemporal processes.
NASA Astrophysics Data System (ADS)
Brunet, P.; Gloaguen, E.
2014-12-01
Designing and monitoring geothermal systems is a complex task which requires a multidisciplinary approach. Deep geothermal reservoir models are prone to greater uncertainty, with a lack of direct data and lower resolution of surface geophysical methods. However, recent technical advances have enabled the potential use of permanent downhole vertical resistivity arrays for monitoring fluid injection. As electrical resistivity is sensitive to temperature changes, such data could provide valuable information for deep geothermal reservoir characterization. The objective of this study is to assess the potential of time-lapse cross-borehole ERT to constrain 3D realizations of geothermal reservoir properties. A synthetic case of a permeable geothermal reservoir in a sedimentary basin was set up, as a confined, deep, saline sandstone aquifer with intermediate reservoir temperature (150 °C), depth (1 km), and 30 m thickness. The reservoir permeability distribution is heterogeneous, as the result of a fluvial depositional environment. The ERT monitoring system design is a triangular arrangement of 3 wells at 150 m spacing, including 1 injection and 1 extraction well. The optimal number and spacing of electrodes of the ERT array design is site-specific and has been assessed through a sensitivity study. Dipole-dipole and pole-pole electrode configurations were used. The study workflow was the following: 1) generation of a reference reservoir model and 100 stochastic realizations of permeability; 2) simulation of saturated single-phase flow and heat transport of reinjected cooled formation fluid (50 °C) with the TOUGH2 software; 3) time-lapse forward ERT modeling on the reference model and all realizations (observed and simulated apparent resistivity change); 4) heuristic optimization comparing observed and simulated ERT data. Preliminary results show significant reduction of parameter uncertainty, and hence of the realization space, with assimilation of cross-borehole ERT data. The loss in sensitivity of ERT between boreholes is compensated here by the stochastic modeling approach, rather than by using a deterministic inversion scheme. Our results suggest that stochastic reservoir simulations, together with assimilation of cross-borehole ERT data, could be useful tools for the design and monitoring of deep geothermal systems.
NASA Astrophysics Data System (ADS)
García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.
2018-07-01
In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals with economical and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways. This is because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events in relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
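The core of the binomial tau-leap idea is easy to state: over a leap of length τ, the number of firings of a channel is drawn from a binomial capped by the reactant copy numbers, so the state can never go negative. Below is a minimal single-channel sketch, without the spatial/subvolume machinery of the paper; all rates are illustrative.

```python
# One binomial tau-leap step for A + B -> C (sketch; the paper's method
# additionally handles diffusion between subvolumes).
import numpy as np

rng = np.random.default_rng(4)

def binomial_tau_leap_step(x, c, tau):
    a = c * x[0] * x[1]              # propensity of A + B -> C
    k_max = min(x[0], x[1])          # firings cannot exceed available copies
    if k_max == 0:
        return x
    p = min(a * tau / k_max, 1.0)    # per-firing probability, capped at 1
    k = rng.binomial(k_max, p)       # binomial draw never overshoots
    return x + k * np.array([-1, -1, 1])

state = np.array([100, 80, 0])       # copy numbers [A, B, C]
for _ in range(10):
    state = binomial_tau_leap_step(state, c=1e-3, tau=0.5)
print(state)
```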
NASA Astrophysics Data System (ADS)
Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali
2017-01-01
In this article we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to the one used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead outside of the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self-consistent field theory with perturbative correction, we find the singlet-triplet gap to be in good agreement with the experimental values.
Some Applications Of Semigroups And Computer Algebra In Discrete Structures
NASA Astrophysics Data System (ADS)
Bijev, G.
2009-11-01
An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations, and the matrices corresponding to them, are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involve the maps defined and use computer algebra methods, have been carried out. As a result, the Hamming distance between the vectors Ax = p(b) and b is equal or close to the least possible. We also share our experience in using computer algebra systems for teaching and research in discrete mathematics and linear algebra. Some examples of computations with binary relations using Maple are given.
Estimation of road profile variability from measured vehicle responses
NASA Astrophysics Data System (ADS)
Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.
2016-05-01
When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process to adapt their products to durability requirements. In the present paper, a data processing algorithm is proposed in order to estimate the road profiles covered by a given vehicle, from the dynamic responses measured on this vehicle. The algorithm based on Kalman filtering theory aims at solving a so-called inverse problem, in a stochastic framework. It is validated using experimental data obtained from simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.
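One common way to set up such an inverse problem is to append the unknown road input to the state vector as a random walk and let a standard Kalman filter estimate it alongside the dynamics. The sketch below does this for a toy second-order system; the matrices and noise levels are stand-ins, not the vehicle model used in the paper.

```python
# Kalman-filter input estimation sketch: the hidden "road input" is a
# random-walk state recovered from a noisy acceleration-like measurement.
# Toy system matrices; assumed setup, not the paper's vehicle model.
import numpy as np

rng = np.random.default_rng(5)
dt, n = 0.01, 2000
k_s, c_s = 50.0, 2.0                             # toy stiffness and damping
F = np.array([[1.0,       dt,             0.0],  # state: [x, v, u]
              [-k_s * dt, 1.0 - c_s * dt, dt ],
              [0.0,       0.0,            1.0]])
H = np.array([[-k_s, -c_s, 1.0]])                # measures x'' = -k x - c v + u
Q = np.diag([1e-9, 1e-9, 1e-3])                  # large process noise on u
R = np.array([[1e-2]])

x_true, x_est, P = np.zeros(3), np.zeros(3), np.eye(3)
est_input = np.empty(n)
for i in range(n):
    x_true[2] = np.sin(0.01 * i)                 # hidden road input
    x_true[:2] = (F @ x_true)[:2]                # propagate true dynamics
    z = H @ x_true + rng.normal(0.0, 0.1, 1)     # noisy measurement
    x_est, P = F @ x_est, F @ P @ F.T + Q        # predict
    S = H @ P @ H.T + R
    K = P @ H.T / S                              # gain (scalar measurement)
    x_est = x_est + (K * (z - H @ x_est)).ravel()
    P = (np.eye(3) - K @ H) @ P                  # update
    est_input[i] = x_est[2]                      # recovered road input
```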
The Origin and Evolution of the Galaxy Star Formation Rate-Stellar Mass Correlation
NASA Astrophysics Data System (ADS)
Gawiser, Eric; Iyer, Kartheik
2018-01-01
The existence of a tight correlation between galaxies’ star formation rates and stellar masses is far more surprising than usually noted. However, a simple analytical calculation illustrates that the evolution of the normalization of this correlation is driven primarily by the inverse age of the universe, and that the underlying correlation is one between galaxies’ instantaneous star formation rates and their average star formation rates since the Big Bang. Our new Dense Basis method of SED fitting (Iyer & Gawiser 2017, ApJ 838, 127) allows star formation histories (SFHs) to be reconstructed, along with uncertainties, for >10,000 galaxies in the CANDELS and 3D-HST catalogs at 0.5
Cataldo, E; Soize, C
2018-06-06
Jitter, in voice production applications, is a random phenomenon characterized by the deviation of the glottal cycle length with respect to a mean value. Its study can help in identifying pathologies related to the vocal folds, according to the values obtained through the different ways of measuring it. This paper aims to propose a stochastic model, with three control parameters, to generate jitter based on a deterministic one-mass model of the dynamics of the vocal folds, and to identify the parameters of the stochastic model from real voice signals obtained experimentally. To solve the corresponding stochastic inverse problem, the cost function used is based on the distance between the probability density functions of the random variables associated with the fundamental frequencies obtained from the experimental and simulated voices, and also on the distance between features extracted from the simulated and experimental voice signals to calculate jitter. The results obtained show that the proposed model is valid, and some voice samples are synthesized using the identified parameters for normal and pathological cases. The strategy adopted is also a novelty, chiefly because a solution was obtained. A further novelty, beyond the use of three parameters to construct the jitter model, is the discussion of a parameter related to the bandwidth of the power spectral density function of the stochastic process, used to measure the quality of the generated signal. A study of the influence of all the main parameters is also performed. The identification of the model parameters for pathological cases is perhaps the most interesting of the novelties introduced by the paper. Copyright © 2018 Elsevier Ltd. All rights reserved.
Measures of thermodynamic irreversibility in deterministic and stochastic dynamics
NASA Astrophysics Data System (ADS)
Ford, Ian J.
2015-07-01
It is generally observed that if a dynamical system is sufficiently complex, then as time progresses it will share out energy and other properties amongst its component parts to eliminate any initial imbalances, retaining only fluctuations. This is known as energy dissipation and it is closely associated with the concept of thermodynamic irreversibility, measured by the increase in entropy according to the second law. It is of interest to quantify such behaviour from a dynamical rather than a thermodynamic perspective and to this end stochastic entropy production and the time-integrated dissipation function have been introduced as analogous measures of irreversibility, principally for stochastic and deterministic dynamics, respectively. We seek to compare these measures. First we modify the dissipation function to allow it to measure irreversibility in situations where the initial probability density function (pdf) of the system is asymmetric as well as symmetric in velocity. We propose that it tests for failure of what we call the obversibility of the system, to be contrasted with reversibility, the failure of which is assessed by stochastic entropy production. We note that the essential difference between stochastic entropy production and the time-integrated modified dissipation function lies in the sequence of procedures undertaken in the associated tests of irreversibility. We argue that an assumed symmetry of the initial pdf with respect to velocity inversion (within a framework of deterministic dynamics) can be incompatible with the Past Hypothesis, according to which there should be a statistical distinction between the behaviour of certain properties of an isolated system as it evolves into the far future and the remote past. Imposing symmetry on a velocity distribution is acceptable for many applications of statistical physics, but can introduce difficulties when discussing irreversible behaviour.
Fast and Efficient Stochastic Optimization for Analytic Continuation
Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...
2016-09-28
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra against those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
Calibration of a Land Subsidence Model Using InSAR Data via the Ensemble Kalman Filter.
Li, Liangping; Zhang, Meijing; Katzenstein, Kurt
2017-11-01
The application of interferometric synthetic aperture radar (InSAR) has been increasingly used to improve capabilities to model land subsidence in hydrogeologic studies. A number of investigations over the last decade show how spatially detailed time-lapse images of ground displacements could be utilized to advance our understanding for better predictions. In this work, we use simulated land subsidences as observed measurements, mimicking InSAR data to inversely infer inelastic specific storage in a stochastic framework. The inelastic specific storage is assumed as a random variable and modeled using a geostatistical method such that the detailed variations in space could be represented and also that the uncertainties of both characterization of specific storage and prediction of land subsidence can be assessed. The ensemble Kalman filter (EnKF), a real-time data assimilation algorithm, is used to inversely calibrate a land subsidence model by matching simulated subsidences with InSAR data. The performance of the EnKF is demonstrated in a synthetic example in which simulated surface deformations using a reference field are assumed as InSAR data for inverse modeling. The results indicate: (1) the EnKF can be used successfully to calibrate a land subsidence model with InSAR data; the estimation of inelastic specific storage is improved, and uncertainty of prediction is reduced, when all the data are accounted for; and (2) if the same ensemble is used to estimate Kalman gain, the analysis errors could cause filter divergence; thus, it is essential to include localization in the EnKF for InSAR data assimilation. © 2017, National Ground Water Association.
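The heart of the EnKF is the analysis step, where the forecast covariance is estimated from ensemble anomalies and each member is nudged toward perturbed observations. A minimal sketch follows; the covariance localization that the authors show is essential for InSAR assimilation is noted in the comments but omitted for brevity.

```python
# EnKF analysis step (stochastic/perturbed-observations form). In practice
# a localization taper (e.g. Gaspari-Cohn) would be applied to Pxy and Pyy
# to avoid the filter divergence discussed above.
import numpy as np

rng = np.random.default_rng(6)

def enkf_update(X, H, y, R):
    """X: (n_state, n_ens) forecast ensemble; H: observation operator."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)         # observed anomalies
    Pyy = HA @ HA.T / (n_ens - 1) + R
    Pxy = A @ HA.T / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T           # perturbed observations
    return X + K @ (Y - HX)

# toy usage: 50 members, 10 states, observing the first two components
X = rng.standard_normal((10, 50))
H = np.zeros((2, 10)); H[0, 0] = H[1, 1] = 1.0
Xa = enkf_update(X, H, y=np.array([1.0, -0.5]), R=0.01 * np.eye(2))
```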
Reddy, L Ram Gopal; Kuntamalla, Srinivas
2011-01-01
Heart rate variability analysis is fast gaining acceptance as a potential non-invasive means of autonomic nervous system assessment in research as well as clinical domains. In this study, a new nonlinear analysis method is used to detect the degree of nonlinearity and the stochastic nature of heart rate variability signals during two forms of meditation (Chi and Kundalini). The data obtained from an online and widely used public database (the MIT/BIH PhysioNet database) are used in this study. The method used is the delay vector variance (DVV) method, which is a unified method for detecting the presence of determinism and nonlinearity in a time series and is based upon the examination of the local predictability of a signal. From the results it is clear that there is a significant change in the nonlinearity and stochastic nature of the signal before and during meditation (p value > 0.01). During Chi meditation there is an increase in the stochastic nature and a decrease in the nonlinear nature of the signal. There is a significant decrease in the degree of nonlinearity and the stochastic nature during Kundalini meditation.
A stochastic evolution model for residue Insertion-Deletion Independent from Substitution.
Lèbre, Sophie; Michel, Christian J
2010-12-01
We develop here a new class of stochastic models of gene evolution based on residue Insertion-Deletion Independent from Substitution (IDIS). Indeed, in contrast to all existing evolution models, insertions and deletions are modeled here by a concept in population dynamics. Therefore, they are not only independent from each other, but also independent from the substitution process. After a separate stochastic analysis of the substitution and the insertion-deletion processes, we obtain a matrix differential equation combining these two processes defining the IDIS model. By deriving a general solution, we give an analytical expression of the residue occurrence probability at evolution time t as a function of a substitution rate matrix, an insertion rate vector, a deletion rate and an initial residue probability vector. Various mathematical properties of the IDIS model in relation with time t are derived: time scale, time step, time inversion and sequence length. Particular expressions of the nucleotide occurrence probability at time t are given for classical substitution rate matrices in various biological contexts: equal insertion rate, insertion-deletion only and substitution only. All these expressions can be directly used for biological evolutionary applications. The IDIS model shows a strongly different stochastic behavior from the classical substitution only model when compared on a gene dataset. Indeed, by considering three processes of residue insertion, deletion and substitution independently from each other, it allows a more realistic representation of gene evolution and opens new directions and applications in this research field. Copyright © 2010 Elsevier Ltd. All rights reserved.
Stochastic multifractal forecasts: from theory to applications in radar meteorology
NASA Astrophysics Data System (ADS)
da Silva Rocha Paz, Igor; Tchiguirinskaia, Ioulia; Schertzer, Daniel
2017-04-01
Radar meteorology has been very inspiring for the development of multifractals. It has enabled work on a 3D+1 field with many challenging applications, including predictability and stochastic forecasts, especially nowcasts, which are particularly demanding in computation speed. Multifractals are indeed parsimonious stochastic models that require only a few physically meaningful parameters, e.g. Universal Multifractal (UM) parameters, because they are based on non-trivial symmetries of nonlinear equations. We first recall the physical principles of multifractal predictability and predictions, which are so closely related that the latter correspond to the most optimal predictions in the multifractal framework. Indeed, these predictions are based on the fundamental duality of a relatively slow decay of large-scale structures and an injection of new-born small-scale structures. Overall, this triggers a multifractal inverse cascade of unpredictability. With the help of high-resolution rainfall radar data (≈ 100 m), we detail and illustrate the corresponding stochastic algorithm in the framework of (causal) UM Fractionally Integrated Flux models (UM-FIF), where the rainfall field is obtained with the help of a fractional integration of a conservative multifractal flux, whose average is strictly scale-invariant (like the energy flux in a dynamic cascade). Whereas the introduction of small structures is rather straightforward, the deconvolution of the past of the field is more subtle, but nevertheless achievable, to obtain the past of the flux. Then, one need only fractionally integrate a multiplicative combination of past and future fluxes to obtain a nowcast realisation.
Approximation and inference methods for stochastic biochemical kinetics—a tutorial review
NASA Astrophysics Data System (ADS)
Schnoerr, David; Sanguinetti, Guido; Grima, Ramon
2017-03-01
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state-of-the-art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.
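The exact reference point for all of these approximations is Gillespie's stochastic simulation algorithm (SSA). A minimal direct-method sketch for a birth-death gene expression model (rates illustrative, not taken from the review) shows why exact simulation becomes expensive: one pair of random draws per reaction event.

```python
# Gillespie direct-method SSA for a birth-death model: 0 -> X at rate k_on,
# X -> 0 at rate k_off * X. Illustrative rates, not from the review.
import numpy as np

rng = np.random.default_rng(7)

def ssa_birth_death(k_on=10.0, k_off=0.1, x0=0, t_end=100.0):
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k_on, k_off * x])      # propensities: birth, death
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)       # exponential waiting time
        x += 1 if rng.random() < a[0] / a0 else -1   # pick a reaction
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

times, states = ssa_birth_death()            # stationary mean ~ k_on / k_off
```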
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-09-01
This paper presents a general stochastic model for procrastination with respect to a deadline. The model establishes a universal procrastination pattern that follows an inverse power law: if the time remaining to the deadline is r, then the response is 1/r^ε, where ε is a positive exponent. The model further establishes that the exponent value ε = 1, which yields the harmonic response 1/r, stands out as special and distinguishable. The theoretical results of the model are shown to be in perfect accord with recent empirical findings.
Roh, Min K; Gillespie, Dan T; Petzold, Linda R
2010-11-07
The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
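The mechanics of the wSSA are compact: the waiting time uses the true total propensity, the reaction channel is selected from biased propensities, and the trajectory weight is multiplied by the corresponding likelihood ratio. Below is a hedged single-step sketch with a constant bias vector; the paper's contribution is precisely to make that bias state-dependent.

```python
# One wSSA step: biased channel selection with likelihood-ratio weighting.
# Here `bias` is a fixed vector; the paper's improvement would compute it
# from the current state x.
import numpy as np

rng = np.random.default_rng(8)

def wssa_step(x, propensities, stoich, bias):
    a = propensities(x)               # true propensities
    b = a * bias                      # biased propensities
    a0, b0 = a.sum(), b.sum()
    tau = rng.exponential(1.0 / a0)   # waiting time from the TRUE total rate
    j = rng.choice(len(a), p=b / b0)  # biased selection of the channel
    w = (a[j] / a0) / (b[j] / b0)     # per-step likelihood-ratio weight
    return x + stoich[j], tau, w

# toy usage: A <-> B with the A -> B channel boosted five-fold
stoich = np.array([[-1, 1], [1, -1]])
x, t, w = np.array([50, 0]), 0.0, 1.0
for _ in range(100):
    x, tau, wk = wssa_step(x, lambda s: np.array([0.1 * s[0], 0.05 * s[1]]),
                           stoich, bias=np.array([5.0, 1.0]))
    t += tau
    w *= wk                           # accumulate the trajectory weight
```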
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
Uncertainty Reduction for Stochastic Processes on Complex Networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo; Castellano, Claudio
2018-05-01
Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parchevsky, K. V.; Zhao, J.; Hartlep, T.
We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.
Finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems
NASA Astrophysics Data System (ADS)
Xie, Xue-Jun; Zhang, Xing-Hui; Zhang, Kemei
2016-07-01
This paper studies the finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems. Based on the stochastic Lyapunov theorem on finite-time stability, by using the homogeneous domination method and the adding-one-power-integrator and sign-function methods, constructing a suitable Lyapunov function and verifying the existence and uniqueness of the solution, a continuous state feedback controller is designed to guarantee that the closed-loop system is finite-time stable in probability.
On stochastic control and optimal measurement strategies. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kramer, L. C.
1971-01-01
The control of stochastic dynamic systems is studied with particular emphasis on those which influence the quality or nature of the measurements which are made to effect control. Four main areas are discussed: (1) the meaning of stochastic optimality and the means by which dynamic programming may be applied to solve a combined control/measurement problem; (2) a technique by which deterministic methods, specifically the minimum principle, may be applied to the study of stochastic problems; (3) the application of these methods to linear systems with Gaussian disturbances, to study the structure of the resulting control system; and (4) several applications.
Study on the threshold of a stochastic SIR epidemic model and its extensions
NASA Astrophysics Data System (ADS)
Zhao, Dianli
2016-09-01
This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. First, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any size of the white noise. Moreover, when R0SIR > 1, the system is proved to be convergent in time mean. The thresholds of the stochastic SIVS models with and without a saturated incidence rate are then established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.
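The paper's threshold formula is not reproduced here. As a hedged illustration of the qualitative behavior, the sketch below integrates a plausible stochastic SIR model with saturated incidence by Euler-Maruyama for several noise intensities; all parameters are illustrative, and the noise enters the transmission term as is common in this model class.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_em(sigma, beta=0.6, alpha=0.5, mu=0.1, gamma=0.2,
           Lam=0.1, T=500.0, dt=0.01):
    """Euler-Maruyama for a stochastic SIR with saturated incidence:
    dS = (Lam - mu*S - beta*S*I/(1+alpha*I)) dt - sigma*S*I/(1+alpha*I) dW
    dI = (beta*S*I/(1+alpha*I) - (mu+gamma)*I) dt + sigma*S*I/(1+alpha*I) dW
    """
    S, I = 0.8, 0.2
    for _ in range(int(T / dt)):
        inc = S * I / (1.0 + alpha * I)
        dW = rng.normal(0.0, np.sqrt(dt))
        S += (Lam - mu * S - beta * inc) * dt - sigma * inc * dW
        I += (beta * inc - (mu + gamma) * I) * dt + sigma * inc * dW
        S, I = max(S, 0.0), max(I, 0.0)   # numerical guards
    return I

for sigma in [0.0, 0.2, 0.8]:
    I_end = np.mean([sir_em(sigma) for _ in range(20)])
    print(f"sigma={sigma:.1f}  mean I(T) over 20 runs = {I_end:.4f}")
```

With these illustrative parameters the deterministic model (sigma = 0) has a basic reproduction number above 1 and the disease persists; increasing sigma shows the noise-driven suppression effect that the threshold analysis quantifies.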
The response analysis of fractional-order stochastic system via generalized cell mapping method.
Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei
2018-01-01
This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundaries, saddles, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of the probability density functions of the response. The fractional-order ϕ^6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to illustrate the implementation of our strategies. The studies show that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.
Synthetic Sediments and Stochastic Groundwater Hydrology
NASA Astrophysics Data System (ADS)
Wilson, J. L.
2002-12-01
For over twenty years the groundwater community has pursued the somewhat elusive goal of describing the effects of aquifer heterogeneity on subsurface flow and chemical transport. While small perturbation stochastic moment methods have significantly advanced theoretical understanding, why is it that stochastic applications use instead simulations of flow and transport through multiple realizations of synthetic geology? Allan Gutjahr was a principal proponent of the Fast Fourier Transform method for the synthetic generation of aquifer properties and recently explored new, more geologically sound, synthetic methods based on multi-scale Markov random fields. Focusing on sedimentary aquifers, how has the state-of-the-art of synthetic generation changed and what new developments can be expected, for example, to deal with issues like conceptual model uncertainty, the differences between measurement and modeling scales, and subgrid scale variability? What will it take to get stochastic methods, whether based on moments, multiple realizations, or some other approach, into widespread application?
Large scale Brownian dynamics of confined suspensions of rigid particles
NASA Astrophysics Data System (ADS)
Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar
2017-12-01
We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
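The random-finite-difference (RFD) idea used above for the stochastic drift can be demonstrated without any hydrodynamics. In the sketch below, a scalar configuration-dependent toy mobility M(q) stands in for the RPY mobility matrix (it is an assumption for illustration only), and the divergence term dM/dq is recovered from two mobility evaluations per random sample:

```python
import numpy as np

rng = np.random.default_rng(2)

def M(q):                 # toy configuration-dependent mobility (NOT the RPY tensor)
    return 1.0 + 0.3 * q**2

def dM_exact(q):
    return 0.6 * q

def rfd_drift(q, delta=1e-4, n_samples=200000):
    """Random finite difference estimate of dM/dq, the divergence term that
    produces the stochastic drift in configuration-dependent Brownian dynamics:
    E[(M(q + delta*W/2) - M(q - delta*W/2)) * W] / delta -> dM/dq for W ~ N(0,1)."""
    W = rng.standard_normal(n_samples)
    return np.mean((M(q + 0.5 * delta * W) - M(q - 0.5 * delta * W)) * W) / delta

q = 1.5
print("RFD estimate:", rfd_drift(q), " exact:", dM_exact(q))
```

The appeal in the many-particle setting is that each sample requires only mobility-vector products, which the paper computes with fast kernel methods, rather than resistance solves or inverse square roots.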
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
A stochastic method for stand-alone photovoltaic system sizing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabral, Claudia Valeria Tavora; Filho, Delly Oliveira; Martins, Jose Helvecio
Photovoltaic systems utilize solar energy to generate electrical energy to meet load demands. Optimal sizing of these systems includes the characterization of solar radiation. Solar radiation at the Earth's surface has random characteristics and has been the focus of various academic studies. The objective of this study was to stochastically analyze the parameters involved in the sizing of photovoltaic generators and to develop a methodology for sizing stand-alone photovoltaic systems. Energy storage for isolated systems and solar radiation were analyzed stochastically due to their random behavior. For the development of the proposed methodology, stochastic analyses including the Markov chain and the beta probability density function were studied. The obtained results were compared with those from the deterministic Sandia sizing method; the stochastic model yielded more reliable values. Both models have advantages and disadvantages; however, the stochastic one, while more complex, provides more reliable and realistic results. (author)
Forecasting financial asset processes: stochastic dynamics via learning neural networks.
Giebel, S; Rainer, M
2010-01-01
Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The backpropagation used in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
Schramm-Loewner (SLE) analysis of quasi two-dimensional turbulent flows
NASA Astrophysics Data System (ADS)
Thalabard, Simon
2012-02-01
Quasi two-dimensional turbulence can be observed in several cases: for example, in the laboratory using liquid soap films, or as the result of a strong imposed rotation as obtained in three-dimensional large direct numerical simulations. We study and contrast SLE properties of such flows, in the former case in the inverse cascade of energy to large scale, and in the latter in the direct cascade of energy to small scales in the presence of a fully-helical forcing. We thus examine the geometric properties of these quasi 2D regimes in the context of stochastic geometry, as was done for the 2D inverse cascade by Bernard et al. (2006). We show that in both cases the data is compatible with self-similarity and with SLE behaviors, whose different diffusivities can be heuristically determined.
NASA Technical Reports Server (NTRS)
Perez-Peraza, J.; Alvarez, M.; Laville, A.; Gallegos, A.
1985-01-01
Energy spectra of photons emitted from bremsstrahlung (BR) of energetic electrons in matter are obtained from the deconvolution of the electron energy spectra. It can be inferred that the scenario for the production of X-rays and gamma rays in solar flares may vary from event to event. However, it is possible in many cases to associate low-energy events with impulsive acceleration, and the high-energy phase of some events with stochastic acceleration. In both cases, flare particles seem to be strongly modulated by local energy losses. Electric field acceleration, associated with neutral current sheets, is a suitable candidate for impulsive acceleration. Finally, the predominant production process of this radiation is the inverse Compton effect due to the local flare photon field.
How does the trans-cis photoisomerization of azobenzene take place in organic solvents?
Tiberio, Giustiniano; Muccioli, Luca; Berardi, Roberto; Zannoni, Claudio
2010-04-06
The trans-cis photoisomerization of azobenzene-containing materials is key to a number of photomechanical applications, but the actual conversion mechanism in condensed phases is still largely unknown. Herein, we study the n, pi* isomerization in a vacuum and in various solvents via a modified molecular dynamics simulation adopting an ab initio torsion-inversion force field in the ground and excited states, while allowing for electronic transitions and a stochastic decay to the fundamental state. We determine the trans-cis photoisomerization quantum yield and decay times in various solvents (n-hexane, anisole, toluene, ethanol, and ethylene glycol), and obtain results comparable with experimental ones where available. A profound difference between the isomerization mechanism in vacuum and in solution is found, with the often neglected mixed torsional-inversion pathway being the most important in solvents.
Analysis of a novel stochastic SIRS epidemic model with two different saturated incidence rates
NASA Astrophysics Data System (ADS)
Chang, Zhengbo; Meng, Xinzhu; Lu, Xiao
2017-04-01
This paper presents a stochastic SIRS epidemic model with two different nonlinear incidence rates and a double-epidemic asymmetrical hypothesis, and develops a mathematical method to obtain the threshold of the stochastic epidemic model. We first investigate the boundedness and extinction of the stochastic system. Furthermore, we use Ito's formula, the comparison theorem and some new inequality techniques for stochastic differential systems to discuss the persistence in mean of the two diseases in three cases. The results indicate that stochastic fluctuations can suppress the disease outbreak. Finally, numerical simulations with different noise disturbance coefficients are carried out to illustrate the obtained theoretical results.
Stochastic models for inferring genetic regulation from microarray gene expression data.
Tian, Tianhai
2010-03-01
Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models to represent noise in microarray expression profiles, which has a profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of stochastic models and the parameters of an error model for describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity, and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information. 2009 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming
2018-04-01
Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, but it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as those for ultrasonic NDT, the empirical information needed for POD methods can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of a stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom-hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, the UTSim2 model.
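A minimal sketch of the OLS flavor of NIPC, assuming a single standard-normal input and a made-up scalar model in place of the expensive ultrasonic simulation: the surrogate is a probabilists' Hermite expansion fitted by least squares, and its mean and variance follow directly from Hermite orthogonality.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(3)

def model(xi):
    """Stand-in for an expensive physics-based simulation (hypothetical)."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Build the OLS polynomial chaos surrogate in a standard-normal input
order = 6
xi_train = rng.standard_normal(50)            # a few "simulation" runs
Psi = hermevander(xi_train, order)            # probabilists' Hermite basis He_k
coef, *_ = np.linalg.lstsq(Psi, model(xi_train), rcond=None)

# PCE statistics follow from orthogonality: E[He_j He_k] = k! * delta_jk
pce_mean = coef[0]
pce_var = sum(coef[k]**2 * factorial(k) for k in range(1, order + 1))

# Compare with direct Monte Carlo on the "expensive" model
xi_mc = rng.standard_normal(200000)
y_mc = model(xi_mc)
print(f"PCE : mean={pce_mean:.5f}  std={np.sqrt(pce_var):.5f}")
print(f"MCS : mean={y_mc.mean():.5f}  std={y_mc.std():.5f}")
```

The cost contrast mirrors the abstract: 50 model runs for the surrogate versus 200,000 for direct MCS, with the statistics read off from the expansion coefficients.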
Stochastic Nature in Cellular Processes
NASA Astrophysics Data System (ADS)
Liu, Bo; Liu, Sheng-Jun; Wang, Qi; Yan, Shi-Wei; Geng, Yi-Zhao; Sakata, Fumihiko; Gao, Xing-Fa
2011-11-01
The importance of stochasticity in cellular processes is increasingly recognized in both theoretical and experimental studies. General features of stochasticity in gene regulation and expression are briefly reviewed in this article, including the main experimental phenomena, and the classification, quantification and regulation of noise. The correlation and transmission of noise in cascade networks are analyzed further, and the stochastic simulation methods that can capture effects of intrinsic and extrinsic noise are described.
Dynamical Epidemic Suppression Using Stochastic Prediction and Control
2004-10-28
An initial probability density function (PDF), p: D ⊂ R² → R, is evolved under the stochastic Frobenius-Perron operator. To analyze the qualitative change associated with noise-induced chaos, the technique of the stochastic Frobenius-Perron operator [L. Billings et al., Phys. Rev. Lett.] is applied, yielding a transition matrix that describes the probability of transport from one region of phase space to another and approximates the stochastic Frobenius-Perron operator.
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
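The RRA itself is not reproduced here. The sketch below only contrasts the two leaping ingredients the abstract names: a Poisson draw for the number of reaction firings, which can overshoot the available population and drive it negative, versus a binomial draw capped at the population, which cannot. The rates and leap interval are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(4)

def leap_counts(x, a, tau, cap):
    """Poisson leap vs binomial leap for one reaction consuming species X.

    x   : current copy number of the consumed species
    a   : propensity of the reaction
    tau : leap interval
    cap : maximum number of firings that keeps the population non-negative
    """
    k_poisson = rng.poisson(a * tau)               # may exceed x -> negative X
    p = min(1.0, a * tau / cap) if cap > 0 else 0.0
    k_binomial = rng.binomial(cap, p)              # can never exceed cap
    return k_poisson, k_binomial

x = 5                    # small population, e.g. degradation X -> 0 at rate c*x
c, tau = 2.0, 0.9        # deliberately aggressive leap to expose the problem
a = c * x
kp, kb = leap_counts(x, a, tau, cap=x)
print(f"Poisson draw: {kp} (negative X if > {x}),  binomial draw: {kb} <= {x}")
```

With a*tau well above the cap, the binomial draw simply saturates at the available molecules, which is exactly the non-negativity guarantee the binomial-type methods trade against some distributional accuracy.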
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods have initial-condition dependence and the risk of falling into a local solution. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that the performance of the method is sufficient for practical use.
Barber, Jared; Tanase, Roxana; Yotov, Ivan
2016-06-01
Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data, with and without noise, as well as data from a laboratory experiment. While it is observed that all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, it is illustrated that the algorithms that employ SC and the KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and the KL expansion. Copyright © 2016 Elsevier Inc. All rights reserved.
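The dimension reduction from the KL parameterization is easy to see in isolation. A sketch, assuming a squared-exponential covariance on a 1D grid (the abstract does not state the kernel): the KL expansion is just the eigendecomposition of the covariance matrix, and truncating it sets the stochastic dimension.

```python
import numpy as np

rng = np.random.default_rng(5)

# Correlated Gaussian noise on a 1D spatial grid, squared-exponential covariance
n, ell, var = 100, 0.2, 1.0
x = np.linspace(0.0, 1.0, n)
C = var * np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))

# Karhunen-Loeve expansion = eigendecomposition of the covariance matrix
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]            # sort eigenvalues descending
lam = np.clip(lam, 0.0, None)                 # guard against tiny negative values

m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1
print(f"KL terms capturing 99% of variance: {m} (vs {n} grid points)")

# One standard-normal variable per retained KL mode
xi = rng.standard_normal(m)
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)  # a sample of the correlated noise
```

For a smooth kernel like this, a handful of modes captures almost all the variance, which is why the KL-parameterized filters in the abstract need far fewer ensemble members than the grid-dimensional alternative.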
Koh, Wonryull; Blackwell, Kim T
2011-04-21
Stochastic simulation of reaction-diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction-diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies.
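The published algorithm is not reproduced here; the fragment below sketches only its first idea in the simplest possible setting of two neighboring subvolumes: instead of sampling diffusive jumps in both directions, sample a single net transfer down the concentration gradient. All rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

d, tau = 0.5, 0.1          # per-molecule diffusion rate constant, leap time
xa, xb = 120, 40           # copy numbers in two neighboring subvolumes

# Conventional: sample BOTH directions, then take the difference
n_ab = rng.poisson(d * xa * tau)
n_ba = rng.poisson(d * xb * tau)
net_conventional = n_ab - n_ba

# Gradient-based: sample only the NET transfer down the concentration gradient
net_rate = d * abs(xa - xb) * tau
net_observed = rng.poisson(net_rate) * (1 if xa > xb else -1)

print("net (two-direction sampling):", net_conventional)
print("net (gradient-based sampling):", net_observed)
```

Both estimators have the same mean transfer d*(xa - xb)*tau, but the gradient-based draw samples far fewer events per leap, which is the source of the speedup the abstract describes.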
Adaptive hybrid simulations for multiscale stochastic reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin
2018-04-20
We introduce an alternative to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to the experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
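The modulation structure is straightforward to sample. A sketch with illustrative shape parameters (not fitted atmospheric values): irradiance is the product of a unit-mean inverse Gaussian large-scale factor and a unit-mean gamma small-scale factor, using numpy's Wald (inverse Gaussian) and gamma generators.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative shape parameters (not fitted to real turbulence data)
alpha = 4.0    # small-scale gamma shape
lam = 3.0      # large-scale inverse Gaussian shape

n = 1_000_000
x = rng.wald(mean=1.0, scale=lam, size=n)              # large-scale fluctuations
y = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)  # small-scale, unit mean
I = x * y                                              # modulated irradiance

print(f"mean={I.mean():.3f}  scintillation index={I.var() / I.mean()**2:.3f}")
```

Because both factors have unit mean, the product has unit mean and its normalized variance (the scintillation index) is controlled entirely by the two shape parameters, mirroring how the IGG parameters tie to atmospheric conditions.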
Confidence set inference with a prior quadratic bound
NASA Technical Reports Server (NTRS)
Backus, George E.
1989-01-01
In the uniqueness part of a geophysical inverse problem, the observer wants to predict all likely values of P unknown numerical properties z = (z_1, ..., z_P) of the earth from measurement of D other numerical properties y^0 = (y_1^0, ..., y_D^0), using full or partial knowledge of the statistical distribution of the random errors in y^0. The data space Y containing y^0 is D-dimensional, so when the model space X is infinite-dimensional the linear uniqueness problem usually is insoluble without prior information about the correct earth model x. If that information is a quadratic bound on x, Bayesian inference (BI) and stochastic inversion (SI) inject spurious structure into x, implied by neither the data nor the quadratic bound. Confidence set inference (CSI) provides an alternative inversion technique free of this objection. Confidence set inference is illustrated in the problem of estimating the geomagnetic field B at the core-mantle boundary (CMB) from components of B measured on or above the earth's surface.
Resonant activation in piecewise linear asymmetric potentials.
Fiasconaro, Alessandro; Spagnolo, Bernardo
2011-04-01
This work analyzes numerically the role played by the asymmetry of a piecewise linear potential, in the presence of both a Gaussian white noise and a dichotomous noise, on the resonant activation phenomenon. The features of the asymmetry of the potential barrier are revealed by investigating the stochastic transitions far beyond the potential maximum, from the initial well to the bottom of the adjacent potential well. Because of the asymmetry of the potential profile together with the random external force uniform in space, we find, for the different asymmetries: (1) an inversion of the curves of the mean first passage time in the resonant region of the correlation time τ of the dichotomous noise, for low thermal noise intensities; (2) a maximum of the mean velocity of the Brownian particle as a function of τ; and (3) an inversion of the curves of the mean velocity and a very weak current reversal in the miniratchet system obtained with the asymmetrical potential profiles investigated. An inversion of the mean first passage time curves is also observed by varying the amplitude of the dichotomous noise, behavior confirmed by recent experiments. ©2011 American Physical Society
Hybrid approaches for multiple-species stochastic reaction–diffusion models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spill, Fabian, E-mail: fspill@bu.edu; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; Guerrero, Pilar
2015-10-15
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean, behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.
NASA Astrophysics Data System (ADS)
Manning, Robert Michael
This work concerns itself with the analysis of two optical remote sensing methods to be used to obtain parameters of the turbulent atmosphere pertinent to stochastic electromagnetic wave propagation studies, and the well-posed solution to a class of integral equations that are central to the development of these remote sensing methods. A remote sensing technique is theoretically developed whereby the temporal frequency spectrum of the scintillations of a stellar source or a point source within the atmosphere, observed through a variable radius aperture, is related to the space-time spectrum of atmospheric scintillation. The key to this spectral remote sensing method is the spatial filtering performed by a finite aperture. The entire method is developed without resorting to a priori information such as results from stochastic wave propagation theory. Once the space-time spectrum of the scintillations is obtained, an application of known results of atmospheric wave propagation theory and simple geometric considerations are shown to yield such important information as the spectrum of atmospheric turbulence, the cross-wind velocity, and the path profile of the atmospheric refractive index structure parameter. A method is also developed to independently verify the Taylor frozen flow hypothesis. The success of the spectral remote sensing method relies on the solution to a Fredholm integral equation of the first kind. An entire class of such equations, that are peculiar to inverse diffraction problems, is studied and a well-posed solution (in the sense of Hadamard) is obtained and probed. Conditions of applicability are derived and shown not to limit the useful operating range of the spectral remote sensing method. The general integral equation solution obtained is then applied to another remote sensing problem having to do with the characterization of the particle size distribution of atmospheric aerosols and hydrometeors. By measuring the diffraction pattern in the focal plane of a lens created by the passage of a laser beam through a distribution of particles, it is shown that the particle-size distribution can be obtained. An intermediate result of the analysis also gives the total volume concentration of the particles.
The development of the deterministic nonlinear PDEs in particle physics to stochastic case
NASA Astrophysics Data System (ADS)
Abdelrahman, Mahmoud A. E.; Sohaly, M. A.
2018-06-01
In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The influence of the random input on the stability of the stochastic process solution is also studied.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
Distributed Adaptive Neural Control for Stochastic Nonlinear Multiagent Systems.
Wang, Fang; Chen, Bing; Lin, Chong; Li, Xuehua
2016-11-14
In this paper, a consensus tracking problem for nonlinear multiagent systems is investigated under a directed communication topology. All the followers are modeled by stochastic nonlinear systems in nonstrict feedback form, where the nonlinearities and stochastic disturbance terms are totally unknown. Based on the structural characteristic of neural networks (in Lemma 4), a novel distributed adaptive neural control scheme is put forward. The proposed control method not only effectively handles unknown nonlinearities in nonstrict feedback systems, but also copes with the interactions among agents and coupling terms. Based on the stochastic Lyapunov functional method, it is shown that all the signals of the closed-loop system are bounded in probability and all followers' outputs converge to a neighborhood of the output of the leader. Finally, the efficiency of the control method is demonstrated by a numerical example.
Li, Chunguang; Chen, Luonan; Aihara, Kazuyuki
2008-06-01
Real systems are often subject to both noise perturbations and impulsive effects. In this paper, we study the stability and stabilization of systems with both noise perturbations and impulsive effects. In other words, we generalize the impulsive control theory from the deterministic case to the stochastic case. The method is based on extending the comparison method to the stochastic case. The method presented in this paper is general and easy to apply. Theoretical results on both stability in the pth mean and stability with disturbance attenuation are derived. To show the effectiveness of the basic theory, we apply it to the impulsive control and synchronization of chaotic systems with noise perturbations, and to the stability of impulsive stochastic neural networks. Several numerical examples are also presented to verify the theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fei; Huang, Yongxi
2018-02-04
Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program in an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on the South Carolina settings. The value of multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected value based deterministic model and a two-stage stochastic model.
Felipe, T.; Braun, D. C.; Birch, A. C.
2018-01-01
Improving methods for determining the subsurface structure of sunspots from their seismic signature requires a better understanding of the interaction of waves with magnetic field concentrations. We aim to quantify the impact of changes in the internal structure of sunspots on local helioseismic signals. We have numerically simulated the propagation of a stochastic wave field through sunspot models with different properties, accounting for changes in the Wilson depression between 250 and 550 km and in the photospheric umbral magnetic field between 1500 and 3500 G. The results show that travel-time shifts at frequencies above approximately 3.50 mHz (depending on the phase-speed filter) are insensitive to the magnetic field strength. The travel time of these waves is determined exclusively by the Wilson depression and sound-speed perturbation. The travel time of waves with lower frequencies is affected by the direct effect of the magnetic field, although photospheric field strengths below 1500 G do not leave a significant trace on the travel-time measurements. These results could potentially be used to develop simplified travel-time inversion methods.
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be theoretically derived as a stationary probability distribution of a stochastic differential equation with mutually independent multiplicative and additive noises. By using the proposed stochastic differential equation, a method to evaluate a default probability under a given risk buffer is developed.
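A one-dimensional sketch of the mechanism, with illustrative coefficients: a linear SDE with independent multiplicative and additive noises, integrated by Euler-Maruyama, relaxes to a heavy-tailed (q-Gaussian, Student-t-like) stationary law, which the sample excess kurtosis makes visible.

```python
import numpy as np

rng = np.random.default_rng(8)

# dX = -gamma*X dt + sig_m*X dW1 + sig_a dW2 (illustrative coefficients)
gamma_, sig_m, sig_a = 1.0, 0.5, 0.5
dt, n_steps, n_paths = 1e-3, 20000, 5000

x = np.zeros(n_paths)
sq = np.sqrt(dt)
for _ in range(n_steps):
    dW1 = rng.normal(0.0, sq, n_paths)   # multiplicative noise
    dW2 = rng.normal(0.0, sq, n_paths)   # additive noise (independent)
    x += -gamma_ * x * dt + sig_m * x * dW1 + sig_a * dW2

# Heavy tails: positive excess kurtosis signals the q-Gaussian stationary law
k = np.mean(x**4) / np.mean(x**2)**2 - 3.0
print(f"sample excess kurtosis = {k:.2f} (0 for a Gaussian)")
```

The tail weight is set by the ratio gamma/sig_m²: stronger multiplicative noise relative to the restoring drift fattens the tails, which is the feature exploited when evaluating default probabilities under a risk buffer.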
NASA Astrophysics Data System (ADS)
Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina
2017-09-01
A stochastic model is introduced to describe the growth of cancer affected by the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood function. The Euler-Maruyama numerical method is employed to solve the model. The efficiency of the stochastic model is measured by comparing the simulated results with the experimental data.
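The paper's model equations are not reproduced here; as a stand-in, the sketch below applies the Euler-Maruyama scheme named in the abstract to a stochastic logistic growth model with illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative stochastic logistic growth: dX = r*X*(1 - X/K) dt + sigma*X dW
r, K, sigma = 0.3, 100.0, 0.1
x0, T, dt = 5.0, 30.0, 0.01
n_steps = int(T / dt)

x = np.full(200, x0)                    # 200 sample paths
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), x.size)
    x += r * x * (1.0 - x / K) * dt + sigma * x * dW
    x = np.maximum(x, 0.0)              # guard against numerical negativity

print(f"mean X(T) = {x.mean():.2f}, std = {x.std():.2f} (deterministic K = {K})")
```

In a calibration setting like the paper's, the drift and diffusion parameters would be fitted to measured growth curves by maximizing the likelihood of the observed increments; the scheme above then generates the simulated trajectories to compare against the data.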
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
A stochastic visco-hyperelastic model of human placenta tissue for finite element crash simulations.
Hu, Jingwen; Klinich, Kathleen D; Miller, Carl S; Rupp, Jonathan D; Nazmi, Giseli; Pearlman, Mark D; Schneider, Lawrence W
2011-03-01
Placental abruption is the most common cause of fetal deaths in motor-vehicle crashes, but studies on the mechanical properties of human placenta are rare. This study presents a new method of developing a stochastic visco-hyperelastic material model of human placenta tissue using a combination of uniaxial tensile testing, specimen-specific finite element (FE) modeling, and stochastic optimization techniques. In our previous study, uniaxial tensile tests of 21 placenta specimens have been performed using a strain rate of 12/s. In this study, additional uniaxial tensile tests were performed using strain rates of 1/s and 0.1/s on 25 placenta specimens. Response corridors for the three loading rates were developed based on the normalized data achieved by test reconstructions of each specimen using specimen-specific FE models. Material parameters of a visco-hyperelastic model and their associated standard deviations were tuned to match both the means and standard deviations of all three response corridors using a stochastic optimization method. The results show a very good agreement between the tested and simulated response corridors, indicating that stochastic analysis can improve estimation of variability in material model parameters. The proposed method can be applied to develop stochastic material models of other biological soft tissues.
Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie
2018-04-01
A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jump, trend, period, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency studies. We then proposed a new concept of hydrological genes, originating from biological genes, to describe inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight function moments, probability weighted moments and L-moments. Meanwhile, the five components of a stochastic hydrological process, including jump, trend, periodic, dependence and pure random components, were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were considered together, and the inheritance, variability and evolution principles were fully described. Our study contributes to revealing the inheritance, variability and evolution principles in the probability distribution of hydrological elements.
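Of the moment families listed above, L-moments are the most compact to compute. A minimal sketch using the standard unbiased probability-weighted-moment estimators (the synthetic flow data are illustrative):

```python
import numpy as np

def sample_l_moments(data):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) /
                ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0                                  # location
    l2 = 2 * b1 - b0                         # scale
    l3 = 6 * b2 - 6 * b1 + b0                # ~ skewness
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0    # ~ kurtosis
    return l1, l2, l3, l4

rng = np.random.default_rng(10)
flows = rng.gumbel(loc=100.0, scale=20.0, size=5000)   # synthetic annual maxima
l1, l2, l3, l4 = sample_l_moments(flows)
print(f"l1={l1:.2f}  L-CV={l2/l1:.3f}  L-skew={l3/l2:.3f}  L-kurt={l4/l2:.3f}")
```

Because L-moments are linear in the ordered data, they are markedly less sensitive to outliers than conventional product moments, which is one reason they suit the frequency analysis of inconsistent hydrological series.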
NASA Astrophysics Data System (ADS)
de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-12-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cells populations that extends the remit of existing hybrid methods for reaction-diffusion systems. Such method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. Such approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we are neglecting noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic-reaction diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age-distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.
Stochastic Community Assembly: Does It Matter in Microbial Ecology?
Zhou, Jizhong; Ning, Daliang
2017-12-01
Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for such heavy debates are the difficulty in defining stochasticity and the diverse methods used for delineating stochasticity. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research. Copyright © 2017 American Society for Microbiology.
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
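A minimal sketch of the kind of time-domain stochastic jump process the abstract describes, built with the inverse transform method it names. The exponential waiting times and Weibull-distributed jump sizes are illustrative stand-ins for the lattice-model avalanche statistics, not the published algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    def fracture_jump_process(t_end, rate=200.0, k=2.0, scale=1.0):
        # Waiting times by inverse transform of an exponential CDF;
        # jump magnitudes by inverse transform of a Weibull CDF.
        t, times, jumps = 0.0, [], []
        while True:
            t += -np.log(rng.uniform()) / rate
            if t >= t_end:
                break
            times.append(t)
            jumps.append(scale * (-np.log(rng.uniform())) ** (1.0 / k))
        return np.array(times), np.array(jumps)

    times, jumps = fracture_jump_process(t_end=1.0)
    stress = np.cumsum(jumps)  # cumulative stress-drop signal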
Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.
Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C
2006-02-28
We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.
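The coarse-grained quantities named in this abstract can be estimated along the following lines; a toy double-well SDE stands in for the toggle-switch simulator, and the potential convention Phi = -integral of b/D is one common choice, so this is a sketch rather than the authors' procedure.

    import numpy as np

    rng = np.random.default_rng(2)

    def burst(x0, dt=1e-3, n_steps=10, n_reps=500):
        # Short burst of a stand-in stochastic simulator (double-well SDE).
        x = np.full(n_reps, x0, dtype=float)
        for _ in range(n_steps):
            x += (x - x**3) * dt + np.sqrt(2 * 0.5 * dt) * rng.standard_normal(n_reps)
        return x

    xs = np.linspace(-1.5, 1.5, 31)
    T = 1e-3 * 10
    b = np.empty_like(xs)
    D = np.empty_like(xs)
    for i, x0 in enumerate(xs):
        xf = burst(x0)
        b[i] = (xf - x0).mean() / T                          # effective drift
        D[i] = ((xf - x0 - b[i] * T) ** 2).mean() / (2 * T)  # effective diffusion
    # Effective free energy, up to an additive constant (trapezoid rule):
    Phi = -np.concatenate(([0.0], np.cumsum(0.5 * (b[1:] / D[1:] + b[:-1] / D[:-1]) * np.diff(xs))))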
Thomas, Philipp; Matuschek, Hannes; Grima, Ramon
2012-01-01
The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amounts of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen's system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system GiNaC with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. We showcase iNA's performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with circadian rhythms. The software iNA is freely available as executable binaries for Linux, MacOSX and Microsoft Windows, as well as the full source code under an open source license.
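The Linear Noise Approximation that iNA automates reduces, at a stable fixed point of the rate equations, to a Lyapunov equation for the covariance of the fluctuations. The sketch below shows that computation for a generic two-species network with hypothetical Jacobian and diffusion matrices; it is not iNA's code.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # LNA: J C + C J^T + D / Omega = 0 at a stable fixed point, with J the
    # Jacobian of the rate equations, D the diffusion matrix from the system
    # size expansion, and Omega the system volume.
    J = np.array([[-2.0, 0.5],
                  [1.0, -1.5]])   # hypothetical (stable) Jacobian
    D = np.array([[1.0, 0.2],
                  [0.2, 0.8]])    # hypothetical diffusion matrix
    Omega = 100.0
    C = solve_continuous_lyapunov(J, -D / Omega)  # solves J C + C J^T = -D/Omega
    print(np.sqrt(np.diag(C)))    # LNA standard deviations of the fluctuations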
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
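For orientation, the nonintrusive PCE step described above amounts to projecting the response onto orthogonal polynomials. In one standard-normal dimension with probabilists' Hermite polynomials it can be sketched as follows (a toy response function, not the VIPRE quantities of interest):

    import math
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    def pce_coefficients(f, order, n_quad=20):
        # c_n = E[f(xi) He_n(xi)] / E[He_n^2], with E[He_n^2] = n! for
        # probabilists' Hermite polynomials and xi ~ N(0, 1).
        x, w = hermegauss(n_quad)       # weight exp(-x^2/2); sum(w) = sqrt(2*pi)
        w = w / np.sqrt(2 * np.pi)      # normalize to the standard normal pdf
        fx = f(x)
        return np.array([np.sum(w * fx * hermeval(x, [0] * n + [1])) / math.factorial(n)
                         for n in range(order + 1)])

    # A smooth response gives rapidly decaying coefficients (fast convergence):
    c = pce_coefficients(lambda xi: np.exp(0.3 * xi), order=6)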
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
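The matrix-selection method used as the reference in this study can be sketched as follows; the two 3-stage projection matrices are hypothetical, not the study's observed matrices.

    import numpy as np

    rng = np.random.default_rng(3)

    def stochastic_growth_rate(matrices, n_steps=50_000):
        # Draw one whole observed matrix at random per time step; the
        # stochastic growth rate is the long-run average one-step growth.
        n = np.ones(matrices[0].shape[0])
        n /= n.sum()
        log_lambda = 0.0
        for _ in range(n_steps):
            A = matrices[rng.integers(len(matrices))]
            n = A @ n
            s = n.sum()
            log_lambda += np.log(s)
            n /= s  # renormalize to avoid overflow
        return np.exp(log_lambda / n_steps)

    A1 = np.array([[0.1, 0.0, 2.0], [0.3, 0.4, 0.0], [0.0, 0.5, 0.8]])
    A2 = np.array([[0.2, 0.0, 1.2], [0.2, 0.5, 0.0], [0.0, 0.4, 0.7]])
    lam_s = stochastic_growth_rate([A1, A2])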
Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture
NASA Astrophysics Data System (ADS)
Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong
The stochastic counterpart to the deterministic description of continuous fermentation with ordinary differential equations is investigated in the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion and Lipschitz coefficients, which is suitable for the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulation is carried out using the Euler-Maruyama method.
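A minimal sketch of the Euler-Maruyama scheme named in this abstract, for a generic SDE dX = f(X) dt + g(X) dW driven by a three-dimensional Brownian motion; the logistic drift and multiplicative noise are placeholders for the paper's fermentation kinetics.

    import numpy as np

    rng = np.random.default_rng(4)

    def euler_maruyama(f, g, x0, t_end, dt):
        n = int(t_end / dt)
        x = np.empty((n + 1, len(x0)))
        x[0] = x0
        for k in range(n):
            dw = rng.standard_normal(len(x0)) * np.sqrt(dt)  # Brownian increments
            x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dw
        return x

    f = lambda x: x * (1.0 - x)   # placeholder drift
    g = lambda x: 0.1 * x         # placeholder (diagonal) diffusion
    path = euler_maruyama(f, g, x0=np.array([0.1, 0.2, 0.3]), t_end=10.0, dt=1e-3)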
NASA Astrophysics Data System (ADS)
Hunziker, Jürg; Laloy, Eric; Linde, Niklas
2016-04-01
Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information, allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM(ZS) algorithm that is a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
Fokker-Planck Equations of Stochastic Acceleration: A Study of Numerical Methods
NASA Astrophysics Data System (ADS)
Park, Brian T.; Petrosian, Vahe
1996-03-01
Stochastic wave-particle acceleration may be responsible for producing suprathermal particles in many astrophysical situations. The process can be described as a diffusion process through the Fokker-Planck equation. If the acceleration region is homogeneous and the scattering mean free path is much smaller than both the energy change mean free path and the size of the acceleration region, then the Fokker-Planck equation reduces to a simple form involving only the time and energy variables. In an earlier paper (Park & Petrosian 1995, hereafter Paper I), we studied the analytic properties of the Fokker-Planck equation and found analytic solutions for some simple cases. In this paper, we study the numerical methods which must be used to solve more general forms of the equation. Two classes of numerical methods are finite difference methods and Monte Carlo simulations. We examine six finite difference methods, three fully implicit and three semi-implicit, and a stochastic simulation method which uses the exact correspondence between the Fokker-Planck equation and its stochastic differential equation. As discussed in Paper I, Fokker-Planck equations derived under the above approximations are singular, causing problems with boundary conditions and numerical overflow and underflow. We evaluate each method using three sample equations to test its stability, accuracy, efficiency, and robustness for both time-dependent and steady-state solutions. We conclude that the most robust finite difference method is the fully implicit Chang-Cooper method, with minor extensions to account for the escape and injection terms. Other methods suffer from stability and accuracy problems when dealing with some Fokker-Planck equations. The stochastic simulation method, although simple to implement, is susceptible to Poisson noise when insufficient test particles are used and is computationally very expensive compared to the finite difference method.
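For reference, the Chang-Cooper discretization recommended above is usually written as follows (generic notation from the numerical Fokker-Planck literature):

    \frac{\partial f}{\partial t}
      = \frac{\partial}{\partial x}\left[ B(x)\, f + C(x)\, \frac{\partial f}{\partial x} \right],
    \qquad
    F_{j+1/2} = B_{j+1/2}\left[ (1-\delta_j)\, f_{j+1} + \delta_j\, f_j \right]
                + C_{j+1/2}\, \frac{f_{j+1} - f_j}{\Delta x},

    \delta_j = \frac{1}{w_j} - \frac{1}{e^{w_j} - 1},
    \qquad
    w_j = \frac{\Delta x\, B_{j+1/2}}{C_{j+1/2}}.

The weights delta_j interpolate between centered and upwind differencing and reproduce the exact zero-flux steady state on the grid, which is what makes the scheme positivity-preserving and robust for the singular equations discussed above.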
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
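A minimal sketch of the first-order autoregressive output error description mentioned above: the Gaussian log-likelihood of the model-output residuals under an AR(1) process (the parameter names are generic; this is not the authors' full Bayesian framework).

    import numpy as np

    def ar1_loglik(residuals, phi, sigma):
        # r_t = y_obs(t) - y_model(t); errors follow e_t = phi*e_{t-1} + eta_t,
        # eta_t ~ N(0, sigma^2), with |phi| < 1 for stationarity.
        r = np.asarray(residuals, dtype=float)
        innov = r[1:] - phi * r[:-1]
        ll = -0.5 * innov.size * np.log(2 * np.pi * sigma**2) \
             - 0.5 * np.sum(innov**2) / sigma**2
        s0 = sigma**2 / (1 - phi**2)          # stationary variance of e_0
        ll += -0.5 * np.log(2 * np.pi * s0) - 0.5 * r[0]**2 / s0
        return ll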
STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.
Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K
2011-04-15
The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. Contact: klingbeil@maths.ox.ac.uk. Supplementary data are available at Bioinformatics online.
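The Gillespie direct method that STOCHSIMGPU parallelizes is, per trajectory, the following loop (a CPU sketch in Python for clarity; the toolbox itself is MATLAB/GPU code).

    import numpy as np

    rng = np.random.default_rng(5)

    def gillespie_direct(x0, stoich, propensities, t_end):
        # Exponential waiting time from the total propensity, then pick the
        # reaction with probability proportional to its propensity.
        t, x = 0.0, np.array(x0, dtype=int)
        times, states = [t], [x.copy()]
        while t < t_end:
            a = propensities(x)
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)
            j = rng.choice(len(a), p=a / a0)
            x += stoich[j]
            times.append(t)
            states.append(x.copy())
        return np.array(times), np.array(states)

    # Birth-death toy model: 0 -> X at rate 10, X -> 0 at rate 0.5*X.
    stoich = np.array([[1], [-1]])
    prop = lambda x: np.array([10.0, 0.5 * x[0]])
    t, xs = gillespie_direct([0], stoich, prop, t_end=20.0)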
Backward-stochastic-differential-equation approach to modeling of gene expression
NASA Astrophysics Data System (ADS)
Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F.; Aguiar, Paulo
2017-03-01
In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
NASA Astrophysics Data System (ADS)
Moon, Seulgi; Shelef, Eitan; Hilley, George E.
2015-05-01
In this study, we model postglacial surface processes and examine the evolution of the topography and denudation rates within the deglaciated Washington Cascades to understand the controls on and time scales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year time scale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of these model parameters, calculated using a Bayesian inversion scheme, show ranges comparable to previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of or longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.
Nonlinear Inference in Partially Observed Physical Systems and Deep Neural Networks
NASA Astrophysics Data System (ADS)
Rozdeba, Paul J.
The problem of model state and parameter estimation is a significant challenge in nonlinear systems. Due to practical considerations of experimental design, it is often the case that physical systems are partially observed, meaning that data is only available for a subset of the degrees of freedom required to fully model the observed system's behaviors and, ultimately, predict future observations. Estimation in this context is highly complicated by the presence of chaos, stochasticity, and measurement noise in dynamical systems. One of the aims of this dissertation is to analyze state and parameter estimation simultaneously as a regularized inverse problem, where the introduction of a model makes it possible to reverse the forward problem of partial, noisy observation, and as a statistical inference problem using data assimilation to transfer information from measurements to the model states and parameters. Ultimately these two formulations achieve the same goal. Similar aspects that appear in both are highlighted as a means for better understanding the structure of the nonlinear inference problem. An alternative approach to data assimilation that uses model reduction is then examined as a way to eliminate unresolved nonlinear gating variables from neuron models. In this formulation, only measured variables enter into the model, and the resulting errors are themselves modeled by nonlinear stochastic processes with memory. Finally, variational annealing, a data assimilation method previously applied to dynamical systems, is introduced as a potentially useful tool for understanding deep neural network training in machine learning by exploiting similarities between the two problems.
Stochastic simulations on a model of circadian rhythm generation.
Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin
2008-01-01
Biological phenomena are often modeled by differential equations, where states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that numbers of the molecules are large enough so that their changes can be regarded as continuous and they are described deterministically. However, for a system with small numbers of molecules, changes in their numbers are apparently discrete and molecular noises become significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression for circadian rhythm generation, which is known as a system involving small numbers of molecules. Thus it is appropriate for the system to be modeled by stochastic equations and analyzed by methodologies of stochastic simulations. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides a basis for our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and the stochastic differential equation method also by Gillespie, to the interlocked feedback model. To this end, we first reformulated the original differential equations back to elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods in order to compare them with the dynamics obtained from the original deterministic model and to characterize how the dynamics depend on the simulation methodology.
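Gillespie's stochastic differential equation method referred to above is the chemical Langevin equation; a minimal Euler-Maruyama sketch follows, applied to a toy birth-death reaction rather than the interlocked feedback model itself.

    import numpy as np

    rng = np.random.default_rng(6)

    def chemical_langevin(x0, stoich, propensities, t_end, dt):
        # dX = sum_j nu_j a_j(X) dt + sum_j nu_j sqrt(a_j(X)) dW_j
        n = int(t_end / dt)
        x = np.array(x0, dtype=float)
        out = np.empty((n + 1, len(x)))
        out[0] = x
        for k in range(n):
            a = np.maximum(propensities(x), 0.0)   # keep propensities nonnegative
            dw = rng.standard_normal(len(a)) * np.sqrt(dt)
            x = x + stoich.T @ (a * dt) + stoich.T @ (np.sqrt(a) * dw)
            out[k + 1] = x
        return out

    stoich = np.array([[1.0], [-1.0]])             # rows: reactions; cols: species
    prop = lambda x: np.array([50.0, 0.5 * x[0]])
    traj = chemical_langevin([100.0], stoich, prop, t_end=20.0, dt=1e-3)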
NASA Astrophysics Data System (ADS)
Kahraman, Gokalp
We examine the performance of optical communication systems using erbium-doped fiber amplifiers (OFAs) and avalanche photodiodes (APDs) including nonlinear and transient effects in the former and transient effects in the latter. Transient effects become important as these amplifiers are operated at very high data rates. Nonlinear effects are important for high gain amplifiers. In most studies of noise in these devices, the temporal and nonlinear effects have been ignored. We present a quantum theory of noise in OFAs including the saturation of the atomic population inversion and the pump depletion. We study the quantum-statistical properties of pulse amplification. The generating function of the output photon number distribution (PND) is determined as a function of time during the course of the pulse with an arbitrary input PND assumed. Under stationary conditions, we determine the Kolmogorov equation obeyed by the PND. The PND at the output is determined for arbitrary input distributions. The effect of the counting time and the filter bandwidth used by the detection circuit is determined. We determine the gain, the noise figure, and the sensitivity of receivers using OFAs as preamplifiers, including the effect of backward amplified spontaneous emission (ASE). Backward ASE degrades the noise figure and the sensitivity by depleting the population inversion at the input side of the fiber and thus increasing the noise during signal amplification. We show that the sensitivity improves with the bit rate at low rates but degrades at high rates. We provide a stochastic model that describes the time dynamics in a double-carrier multiplication (DCM) APD. A discrete stochastic model for the electron/hole motion and multiplication is defined on a spatio-temporal lattice and used to derive recursive equations for the mean, the variance, and the autocorrelation of the impulse response as functions of time. The power spectral density of the photocurrent produced in response to a Poisson-distributed stream of photons of uniform rate is evaluated. A method is also developed for solving the coupled transport equations that describe the electron and hole currents in a DCM-APD of arbitrary structure.
NASA Astrophysics Data System (ADS)
Kim, M. G.; Lin, J. C.; Huang, L.; Edwards, T. W.; Jones, J. P.; Polavarapu, S.; Nassar, R.
2012-12-01
Reducing uncertainties in the projections of atmospheric CO2 concentration levels relies on increasing our scientific understanding of the exchange processes between atmosphere and land at regional scales, which is highly dependent on climate, ecosystem processes, and anthropogenic disturbances. In order for researchers to reduce the uncertainties, a combined framework that mutually addresses these independent variables to account for each process is invaluable. In this research, an example of a top-down inversion modeling approach combined with stable isotope measurement data is presented. The potential for the proposed analysis framework is demonstrated using the Stochastic Time-Inverted Lagrangian Transport (STILT) model runs combined with high precision CO2 concentration data measured at a Canadian greenhouse gas monitoring site as well as multiple tracers: stable isotopes and combustion-related species. This framework yields a unique regional scale constraint that can be used to relate the measured changes of tracer concentrations to processes in their upwind source regions. The inversion approach both reproduces source areas in a spatially explicit way through sophisticated Lagrangian transport modeling and infers emission processes that leave imprints on atmospheric tracers. The understanding gained through the combined approach can also be used to verify reported emissions as part of regulatory regimes. The results indicate that changes in CO2 concentration are strongly influenced by regional sources, including significant fossil fuel emissions, and that the combined approach can be used to test reported emissions of the greenhouse gas from oil sands developments. Also, methods to further reduce uncertainties in the retrieved emissions by incorporating additional constraints including tracer-to-tracer correlations and satellite measurements are discussed briefly.
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
NASA Astrophysics Data System (ADS)
Hernandez, F.; Liang, X.
2017-12-01
Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimations involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements in the parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to address the challenge of achieving robust flood forecasting with high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests, streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method can reliably produce adequate forecasts and that it is able to outperform those resulting from assimilating the observations using a particle filter or an evolutionary 4D variational method alone. In addition, our method is shown to be efficient in tackling high-resolution applications with robust results.
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
Zhang, Huibin; Susanto, Teodorus T; Wan, Yue; Chen, Swaine L
2016-04-12
Type 1 pili (T1P) are major virulence factors for uropathogenic Escherichia coli (UPEC), which cause both acute and recurrent urinary tract infections. T1P expression therefore is of direct relevance for disease. T1P are phase variable (both piliated and nonpiliated bacteria exist in a clonal population) and are controlled by an invertible DNA switch (fimS), which contains the promoter for the fim operon encoding T1P. Inversion of fimS is stochastic but may be biased by environmental conditions and other signals that ultimately converge at fimS itself. Previous studies of fimS sequences important for T1P phase variation have focused on laboratory-adapted E. coli strains and have been limited in the number of mutations or by alteration of the fimS genomic context. We surmounted these limitations by using saturating genomic mutagenesis of fimS coupled with accurate sequencing to detect both mutations and phase status simultaneously. In addition to the sequences known to be important for biasing fimS inversion, our method also identifies a previously unknown pair of 5' UTR inverted repeats that act by altering the relative fimA levels to control phase variation. Thus we have uncovered an additional layer of T1P regulation potentially impacting virulence and the coordinate expression of multiple pilus systems.
Local Infrasound Variability Related to In Situ Atmospheric Observation
NASA Astrophysics Data System (ADS)
Kim, Keehoon; Rodgers, Arthur; Seastrand, Douglas
2018-04-01
Local infrasound is widely used to constrain source parameters of near-surface events (e.g., chemical explosions and volcanic eruptions). While atmospheric conditions are critical to infrasound propagation and source parameter inversion, local atmospheric variability is often ignored by assuming homogeneous atmospheres, and its impact on the source inversion uncertainty has never been accounted for due to the lack of quantitative understanding of infrasound variability. We investigate atmospheric impacts on local infrasound propagation by repeated explosion experiments with a dense acoustic network and in situ atmospheric measurement. We perform full 3-D waveform simulations with local atmospheric data and a numerical weather forecast model to quantify atmosphere-dependent infrasound variability and to address the advantages and restrictions of local weather data and numerical weather models for sound propagation simulation. Numerical simulations with stochastic atmosphere models also showed a nonnegligible influence of atmospheric heterogeneity on infrasound amplitude, suggesting an important role of local turbulence.
Current inversion in the Lévy ratchet.
Dybiec, Bartłomiej
2008-12-01
We study the motion of an overdamped test particle in a static periodic potential lacking spatial symmetry under the influence of periodically modulated alpha-stable (Lévy) type noise. Due to the nonthermal character of the driving noise, the particle exhibits a motion with a preferred direction. The additional periodic modulation of the noise asymmetry changes the behavior of the static "Lévy ratchet." For the fast rate of the noise asymmetry modulation, the Lévy ratchet behaves like the one driven by the symmetric alpha-stable noise. When the modulation period is larger, the nontrivial effects of the noise asymmetry on the behavior of the Lévy ratchet are visible. In particular, the current inversion is observed in the system at hand. The properties of the Lévy ratchet are studied by use of the robust measures of directionality, which are defined regardless of the type of the stochastic driving.
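The setup can be sketched as an overdamped Langevin equation with alpha-stable increments; the ratchet potential below and all parameter values are illustrative, and the periodic modulation of the asymmetry parameter beta is omitted for brevity.

    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(7)

    def levy_ratchet_velocity(alpha=1.5, beta=0.5, dt=1e-3, n_steps=50_000, scale=0.3):
        # Asymmetric periodic potential: V'(x) lacks inversion symmetry.
        dV = lambda x: np.cos(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)
        xi = levy_stable.rvs(alpha, beta, size=n_steps, random_state=rng)
        x = 0.0
        for k in range(n_steps):
            # Levy increments over dt scale as dt**(1/alpha):
            x += -dV(x) * dt + scale * dt ** (1.0 / alpha) * xi[k]
        return x / (n_steps * dt)   # mean velocity; its sign gives the current direction

    v = levy_ratchet_velocity()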
Lu, Huanhuan; Wang, Fuzhong; Zhang, Huichun
2016-04-01
Traditional speech detection methods regard the noise as a jamming signal to be filtered out, but under a strong noise background these methods lose part of the original speech signal while eliminating noise. Stochastic resonance can use noise energy to amplify a weak signal and suppress the noise. Based on stochastic resonance theory, a new method using adaptive stochastic resonance to extract weak speech signals is proposed. This method, combined with twice sampling, realizes the detection of weak speech signals in strong noise. The system parameters a, b are adjusted adaptively by evaluating the signal-to-noise ratio of the output signal, and the weak speech signal is then optimally detected. Experimental simulation showed that, under a strong noise background, the output signal-to-noise ratio increased from the initial value of -7 dB to about 0.86 dB, a signal-to-noise ratio gain of 7.86 dB. This method clearly raises the signal-to-noise ratio of the output speech signals, offering a new approach to detecting weak speech signals in strong noise environments.
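The core of such a detector is the overdamped bistable system dx/dt = a*x - b*x^3 + input(t); a minimal sketch follows, with illustrative parameter values (the adaptive scheme in the paper would tune a and b to maximize the output signal-to-noise ratio, and the twice-sampling step is omitted).

    import numpy as np

    def bistable_sr(signal, dt, a=1.0, b=1.0):
        # Euler integration of the bistable stochastic-resonance system.
        x = 0.0
        out = np.empty_like(signal)
        for k, s in enumerate(signal):
            x += (a * x - b * x**3 + s) * dt
            out[k] = x
        return out

    # A weak low-frequency tone buried in strong white noise:
    rng = np.random.default_rng(8)
    fs = 1000.0
    t = np.arange(0.0, 4.0, 1.0 / fs)
    noisy = 0.1 * np.sin(2 * np.pi * 2.0 * t) + rng.standard_normal(t.size)
    filtered = bistable_sr(noisy, dt=1.0 / fs)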
Dynamic electrical impedance imaging with the interacting multiple model scheme.
Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C
2005-04-01
In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm multiple models with different process noise covariance are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.
Suppression of phase mixing in drift-kinetic plasma turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, J. T., E-mail: joseph.parker@stfc.ac.uk; OCIAM, Mathematical Institute, University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG; Brasenose College, Radcliffe Square, Oxford OX1 4AJ
2016-07-15
Transfer of free energy from large to small velocity-space scales by phase mixing leads to Landau damping in a linear plasma. In a turbulent drift-kinetic plasma, this transfer is statistically nearly canceled by an inverse transfer from small to large velocity-space scales due to “anti-phase-mixing” modes excited by a stochastic form of plasma echo. Fluid moments (density, velocity, and temperature) are thus approximately energetically isolated from the higher moments of the distribution function, so phase mixing is ineffective as a dissipation mechanism when the plasma collisionality is small.
Spontaneous Polariton Currents in Periodic Lateral Chains.
Nalitov, A V; Liew, T C H; Kavokin, A V; Altshuler, B L; Rubo, Y G
2017-08-11
We predict spontaneous generation of superfluid polariton currents in planar microcavities with lateral periodic modulation of both the potential and decay rate. A spontaneous breaking of spatial inversion symmetry of a polariton condensate emerges at a critical pumping, and the current direction is stochastically chosen. We analyze the stability of the current with respect to the fluctuations of the condensate. A peculiar spatial current domain structure emerges, where the current direction is switched at the domain walls, and the characteristic domain size and lifetime scale with the pumping power.
A stochastic framework for spot-scanning particle therapy.
Robini, Marc; Zhu, Yuemin; Liu, Wanyu; Magnin, Isabelle
2016-08-01
In spot-scanning particle therapy, inverse treatment planning is usually limited to finding the optimal beam fluences given the beam trajectories and energies. We address the much more challenging problem of jointly optimizing the beam fluences, trajectories and energies. For this purpose, we design a simulated annealing algorithm with an exploration mechanism that balances the conflicting demands of a small mixing time at high temperatures and a reasonable acceptance rate at low temperatures. Numerical experiments substantiate the relevance of our approach and open new horizons to spot-scanning particle therapy.
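A generic simulated annealing loop of the kind the abstract describes; the temperature-dependent proposal mimics the exploration mechanism that balances mixing at high temperatures against acceptance at low ones, and the toy quadratic cost stands in for the actual treatment-plan objective.

    import numpy as np

    rng = np.random.default_rng(9)

    def simulated_annealing(cost, x0, propose, t0=1.0, t_min=1e-4, cooling=0.999):
        x, fx, T = x0, cost(x0), t0
        best, fbest = x, fx
        while T > t_min:
            y = propose(x, T)          # bolder moves at higher temperature
            fy = cost(y)
            if fy < fx or rng.uniform() < np.exp(-(fy - fx) / T):  # Metropolis rule
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            T *= cooling
        return best, fbest

    target = np.linspace(0.0, 1.0, 8)                 # toy target dose profile
    cost = lambda w: np.sum((w - target) ** 2)
    propose = lambda w, T: w + T * rng.standard_normal(w.size)
    w_opt, f_opt = simulated_annealing(cost, np.zeros(8), propose)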
Use of LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils
NASA Technical Reports Server (NTRS)
Eagleson, Peter S.; Jasinski, Michael F.
1988-01-01
This work focuses on the characterization of natural, spatially variable, semivegetated landscapes using a linear, stochastic, canopy-soil reflectance model. A first application of the model was the investigation of the effects of subpixel and regional variability of scenes on the shape and structure of red-infrared scattergrams. Additionally, the model was used to investigate the inverse problem, the estimation of subpixel vegetation cover, given only the scattergrams of simulated satellite scale multispectral scenes. The major aspects of that work, including recent field investigations, are summarized.
Activation rates for nonlinear stochastic flows driven by non-Gaussian noise
NASA Astrophysics Data System (ADS)
van den Broeck, C.; Hänggi, P.
1984-11-01
Activation rates are calculated for stochastic bistable flows driven by asymmetric dichotomic Markov noise (a two-state Markov process). This noise contains as limits both a particular type of non-Gaussian white shot noise and white Gaussian noise. Apart from investigating the role of colored noise on the escape rates, one can thus also study the influence of the non-Gaussian nature of the noise on these rates. The rate for white shot noise differs in leading order (Arrhenius factor) from the corresponding rate for white Gaussian noise of equal strength. In evaluating the rates we demonstrate the advantage of using transport theory over a mean first-passage time approach for cases with generally non-white and non-Gaussian noise sources. For white shot noise with exponentially distributed weights we succeed in evaluating the mean first-passage time of the corresponding integro-differential master-equation dynamics. The rate is shown to coincide in the weak noise limit with the inverse mean first-passage time.
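A direct-simulation counterpart of the rates computed in this paper: escape of a bistable flow driven by asymmetric dichotomic Markov (telegraph) noise, with the rate estimated as the inverse mean first-passage time. All parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(10)

    def escape_rate(A=0.8, B=1.2, ka=2.0, kb=3.0, dt=1e-3, n_traj=1000, t_max=200.0):
        # dX = (X - X^3) dt + xi(t) dt, with xi switching between +A (left at
        # rate ka) and -B (left at rate kb); escape from the left well at x = 0.
        fpts = []
        for _ in range(n_traj):
            x, xi, t = -1.0, A, 0.0
            while t < t_max and x < 0.0:
                if xi > 0.0 and rng.uniform() < ka * dt:
                    xi = -B
                elif xi < 0.0 and rng.uniform() < kb * dt:
                    xi = A
                x += (x - x**3 + xi) * dt
                t += dt
            fpts.append(t)
        return 1.0 / np.mean(fpts)   # inverse mean first-passage time

    rate = escape_rate()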
Huang, Tingwen; Li, Chuandong; Duan, Shukai; Starzyk, Janusz A
2012-06-01
This paper focuses on the hybrid effects of parameter uncertainty, stochastic perturbation, and impulses on the global stability of delayed neural networks. By using the Itô formula, a Lyapunov function, and the Halanay inequality, we establish several mean-square stability criteria from which we can estimate the feasible bounds of impulses, provided that parameter uncertainty and stochastic perturbations are well-constrained. Moreover, the present method can also be applied to general differential systems with stochastic perturbation and impulses.
Exponential stability of stochastic complex networks with multi-weights based on graph theory
NASA Astrophysics Data System (ADS)
Zhang, Chunmei; Chen, Tianrui
2018-04-01
In this paper, a novel approach to exponential stability of stochastic complex networks with multi-weights is investigated by means of the graph-theoretical method. New sufficient conditions are provided to ascertain the moment exponential stability and almost sure exponential stability of stochastic complex networks with multiple weights. It is noted that our stability results are closely related to the multi-weights and the intensity of stochastic disturbance. Numerical simulations are also presented to substantiate the theoretical results.
Gompertzian stochastic model with delay effect to cervical cancer growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah
2015-02-03
In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method for non-linear least squares. We apply the Milstein scheme for solving the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated results with the clinical data of cervical cancer growth. Low values of the mean-square error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.
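A plausible form of the model class and scheme named above is sketched below: a Gompertzian SDE with a delayed drift integrated by the Milstein scheme (for g(x) = sigma*x the Milstein correction is 0.5*sigma^2*x*(dW^2 - dt)). The parameter values are illustrative, not the fitted clinical values.

    import numpy as np

    rng = np.random.default_rng(11)

    def gompertz_milstein(a=0.3, b=0.2, sigma=0.05, tau=1.0, x0=1e-3, t_end=50.0, dt=0.01):
        n = int(t_end / dt)
        lag = int(tau / dt)
        x = np.empty(n + 1)
        x[0] = x0
        for k in range(n):
            x_lag = x[max(k - lag, 0)]      # constant history before t = 0
            dw = np.sqrt(dt) * rng.standard_normal()
            drift = x[k] * (a - b * np.log(x_lag))
            x[k + 1] = (x[k] + drift * dt + sigma * x[k] * dw
                        + 0.5 * sigma**2 * x[k] * (dw**2 - dt))  # Milstein term
        return x

    tumour_volume = gompertz_milstein()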
Study on individual stochastic model of GNSS observations for precise kinematic applications
NASA Astrophysics Data System (ADS)
Próchniewicz, Dominik; Szpunar, Ryszard
2015-04-01
The proper definition of the mathematical positioning model, which comprises functional and stochastic models, is a prerequisite for optimal estimation of the unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true with respect to precise kinematic applications, which are characterized by weakening model strength. In this case, an incorrect or simplified definition of the stochastic model can limit the performance of ambiguity resolution and the accuracy of position estimation. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of individual components of the variance-covariance matrix of the observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental test results indicate that utilizing an individual stochastic model of the observations, including elevation dependency and cross-correlation, instead of assuming that raw measurements are independent with the same variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complex calibration procedure for GNSS equipment.
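An individual stochastic model of the kind analyzed above can be assembled as a full variance-covariance matrix; the widely used elevation-dependent variance form sigma_i^2 = a^2 + b^2/sin^2(E_i) and a constant cross-correlation are shown below with illustrative values (the calibrated values would come from the short-baseline and signal-generator experiments described in the abstract).

    import numpy as np

    def obs_covariance(elevations_deg, a=0.003, b=0.003, rho=0.2):
        # Elevation-dependent variances plus a simple constant cross-correlation.
        e = np.radians(np.asarray(elevations_deg, dtype=float))
        sig = np.sqrt(a**2 + b**2 / np.sin(e) ** 2)   # sigma per satellite (m)
        C = rho * np.outer(sig, sig)
        np.fill_diagonal(C, sig**2)
        return C

    C = obs_covariance([15.0, 35.0, 60.0, 85.0])
    P = np.linalg.inv(C)   # weight matrix for least-squares or Kalman filtering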
Conservative Diffusions: a Constructive Approach to Nelson's Stochastic Mechanics.
NASA Astrophysics Data System (ADS)
Carlen, Eric Anders
In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. We emphasize that we are concerned here with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics--which are formally given by stochastic differential equations with extremely singular coefficients--really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable sample path behavior, and can we use information about sample paths to study the behavior of physical systems?" These are the questions we treat in this thesis. In Chapter I we review stochastic mechanics and diffusion theory, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. This chapter is largely expository; however, there are some novel features and proofs. In Chapter II we settle the first of the questions raised above. Using PDE methods, we construct the diffusions of stochastic mechanics. Our result is sufficiently general to be of independent mathematical interest. In Chapter III we treat potential scattering in stochastic mechanics and discuss direct probabilistic methods of studying quantum scattering problems. Our results provide a solid "Yes" in answer to the second question raised above.
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
Inverse problems in electromagnetics have a long history and have stimulated exciting research over many decades. New applications and solution methods are still emerging, providing a rich source of challenging topics for further investigation. The purpose of this special issue is to combine descriptions of several such developments that are expected to have the potential to fundamentally fuel new research, and to provide an overview of novel methods and applications for electromagnetic inverse problems. There have been several special sections published in Inverse Problems over the last decade addressing fully, or partly, electromagnetic inverse problems. Examples are:

Electromagnetic imaging and inversion of the Earth's subsurface (Guest Editors: D Lesselier and T Habashy), October 2000
Testing inversion algorithms against experimental data (Guest Editors: K Belkebir and M Saillard), December 2001
Electromagnetic and ultrasonic nondestructive evaluation (Guest Editors: D Lesselier and J Bowler), December 2002
Electromagnetic characterization of buried obstacles (Guest Editors: D Lesselier and W C Chew), December 2004
Testing inversion algorithms against experimental data: inhomogeneous targets (Guest Editors: K Belkebir and M Saillard), December 2005
Testing inversion algorithms against experimental data: 3D targets (Guest Editors: A Litman and L Crocco), February 2009

In a certain sense, the current issue can be understood as a continuation of this series of special sections on electromagnetic inverse problems. On the other hand, its focus is intended to be more general than previous ones. Instead of trying to cover a well-defined, somewhat specialized research topic as completely as possible, this issue aims to show the broad range of techniques and applications that are relevant to electromagnetic imaging nowadays, which may serve as a source of inspiration and encouragement for all those entering this active and rapidly developing research area. The construction of this special issue also differed somewhat from that of preceding ones. In addition to the invitations sent to specific research groups involved in electromagnetic inverse problems, the Guest Editors solicited recommendations, from a large number of experts, of potential authors who were thereupon encouraged to contribute. Moreover, an open call for contributions was published on the homepage of Inverse Problems in order to attract as wide a scope of contributions as possible. This special issue's attempt at generality might also define its limitations: by no means could this collection of papers be exhaustive or complete, and as Guest Editors we are well aware that many exciting topics and potential contributions will be missing. This, however, also determines its very special flavor: besides addressing electromagnetic inverse problems in a broad sense, there were only a few restrictions on the contributions considered for this section. One requirement was plausible evidence of either novelty or the emergent nature of the technique or application described, judged mainly by the referees, and in some cases by the Guest Editors. The technical quality of the contributions always remained a stringent condition of acceptance, with final adjudication (possibly questionable either way, not always positive) being made in most cases once a thorough revision process had been carried out.
Therefore, we hope that the final result presented here constitutes an interesting collection of novel ideas and applications, properly refereed and edited, which will find its own readership and which can stimulate significant new research in the topics represented. Overall, as Guest Editors, we feel quite fortunate to have obtained such a strong response to the call for this issue and to have a really wide-ranging collection of high-quality contributions which, indeed, can be read from the first to the last page with sustained enthusiasm. A large number of applications and techniques are represented, overall via 16 contributions by 45 authors in total. This shows, in our opinion, that electromagnetic imaging and inversion remain amongst the most challenging and active research areas in applied inverse problems today. Below, we give a brief overview of the contributions included in this issue, ordered alphabetically by the surname of the leading author.

1. The complexity of handling potential randomness of the source in an inverse scattering problem is not minor, and the literature is far from replete in this configuration. The contribution by G Bao, S N Chow, P Li and H Zhou, 'Numerical solution of an inverse medium scattering problem with a stochastic source', exemplifies how to hybridize the Wiener chaos expansion with a recursive linearization method in order to solve the stochastic problem as a set of decoupled deterministic ones.

2. In cases where the forward problem is expensive to evaluate, database methods may become a reliable method of choice, while enabling one to deliver more information on the inversion itself. The contribution by S Bilicz, M Lambert and Sz Gyimóthy, 'Kriging-based generation of optimal databases as forward and inverse surrogate models', describes such a technique, which uses kriging to construct an efficient database with the goal of achieving an equidistant distribution of points in the measurement space.

3. Anisotropy remains a considerable challenge in electromagnetic imaging. It is tackled in the contribution by F Cakoni, D Colton, P Monk and J Sun, 'The inverse electromagnetic scattering problem for anisotropic media', via the fact that transmission eigenvalues can be retrieved from a far-field scattering pattern, yielding, in particular, lower and upper bounds on the index of refraction of the unknown (dielectric anisotropic) scatterer.

4. So-called subspace optimization methods (SOM) have recently attracted a lot of interest in many fields. The contribution by X Chen, 'Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium', illustrates how to address a realistic situation in which the medium containing the unknown obstacles is not homogeneous, by blending a properly developed SOM with a finite-element computation of the required Green's functions.

5. H Egger, M Hanke, C Schneider, J Schöberl and S Zaglmayr, in their contribution 'Adjoint-based sampling methods for electromagnetic scattering', show how to efficiently develop sampling methods without explicit knowledge of the dyadic Green's function once an adjoint problem has been solved at much lower computational cost. This is demonstrated by examples in demanding propagative and diffusive situations.

6. Passive sensor arrays can be employed to image reflectors from ambient noise via proper migration of cross-correlation matrices into their embedding medium. This is investigated, and resolution, in particular, is considered in detail as a function of the characteristics of the sensor array and those of the noise, in the contribution by J Garnier and G Papanicolaou, 'Resolution analysis for imaging with noise'.

7. A direct reconstruction technique based on the conformal mapping theorem is proposed and investigated in depth in the contribution by H Haddar and R Kress, 'Conformal mapping and impedance tomography'. This paper expands on previous work, with inclusions in homogeneous media, convergence results, and numerical illustrations.

8. The contribution by T Hohage and S Langer, 'Acceleration techniques for regularized Newton methods applied to electromagnetic inverse medium scattering problems', focuses on a spectral preconditioner intended to accelerate regularized Newton methods as employed for the retrieval of a local inhomogeneity in a three-dimensional vector electromagnetic case, while also illustrating the implementation of a Lepskiĭ-type stopping rule outsmarting a traditional discrepancy principle.

9. Geophysical applications are a rich source of practically relevant inverse problems. The contribution by M Li, A Abubakar and T Habashy, 'Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion', deals with a model-based inversion technique for electromagnetic imaging which addresses novel challenges such as multi-physics inversion and the incorporation of prior knowledge, as in hydrocarbon recovery.

10. Non-stationary inverse problems, considered as a special class of Bayesian inverse problems, are framed via an orthogonal decomposition representation in the contribution by A Lipponen, A Seppänen and J P Kaipio, 'Reduced order estimation of nonstationary flows with electrical impedance tomography'. The goal is to simultaneously estimate, from electrical impedance tomography data, certain characteristics of the Navier-Stokes fluid flow model together with the time-varying concentration distribution.

11. Non-iterative imaging methods for thin, penetrable cracks, based on an asymptotic expansion of the scattering amplitude and analysis of the multi-static response matrix, are discussed in the contribution by W-K Park, 'On the imaging of thin dielectric inclusions buried within a half-space', completing, for a shallow burial case at multiple frequencies, the direct imaging of small obstacles (here, along their transverse dimension), with MUSIC and non-MUSIC type indicator functions being used for that purpose.

12. The contribution by R Potthast, 'A study on orthogonality sampling', envisages quick localization and shaping of obstacles from (portions of) far-field scattering patterns collected at one or more time-harmonic frequencies, via the simple calculation (and summation) of scalar products between those patterns and a test function. This is numerically exemplified for Neumann/Dirichlet boundary conditions and homogeneous/heterogeneous embedding media.

13. The contribution by J D Shea, P Kosmas, B D Van Veen and S C Hagness, 'Contrast-enhanced microwave imaging of breast tumors: a computational study using 3D realistic numerical phantoms', is aimed at microwave medical imaging, namely the early detection of breast cancer. The use of contrast-enhancing agents is discussed in detail and a number of reconstructions in the three-dimensional geometry of realistic numerical breast phantoms are presented.

14. The contribution by D A Subbarayappa and V Isakov, 'Increasing stability of the continuation for the Maxwell system', discusses enhanced log-type stability results for the continuation of solutions of the time-harmonic Maxwell system, adding a fresh chapter to the interesting story of the study of the Cauchy problem for PDEs.

15. In their contribution, 'Recent developments of a monotonicity imaging method for magnetic induction tomography in the small skin-depth regime', A Tamburrino, S Ventre and G Rubinacci extend the recently developed monotonicity method toward the application of magnetic induction tomography to the mapping of surface-breaking defects in a damaged metal component.

16. The contribution by F Viani, P Rocca, M Benedetti, G Oliveri and A Massa, 'Electromagnetic passive localization and tracking of moving targets in a WSN-infrastructured environment', contributes to what could still be seen as a niche problem, yet one that is both useful in terms of applications, e.g., security, and challenging in terms of methodologies and experiments, in particular in view of the complexity of the environments in which this endeavor is to take place and the variability of the wireless sensor networks employed.

To conclude, we would like to thank Kate Watt and Zoë Crossman, past and present Publishers of the journal, for their able and tireless work on what was definitely a long and exciting journey (sometimes a little discouraging when reports were not arriving, or authors were late, or Guest Editors overwhelmed) that started from a thorough discussion between Kate Watt and the Guest Editors at the 'Manchester workshop on electromagnetic inverse problems' held in mid-June 2009. We gratefully acknowledge that W W Symes gave us his full backing to carry out this special issue and that A K Louis completed it successfully. Last, but not least, the staff of Inverse Problems should be thanked, since they work together to make it a premier journal.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
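A toy forward simulation matching the structure of the first example, a stochastic one-compartment model with first-order input and nonlinear elimination, integrated by Euler-Maruyama; the Michaelis-Menten form and all parameters are assumptions, and the paper's actual estimation machinery (FOCE-I with an extended Kalman filter) is not reproduced here:

```python
import numpy as np

def simulate_pk_sde(dose=100.0, ka=1.0, V=10.0, vmax=5.0, km=2.0,
                    sigma=0.05, T=24.0, dt=0.01, seed=0):
    """Euler-Maruyama path of dC = (ka*dose*exp(-ka*t)/V
    - vmax*C/(km + C)) dt + sigma dW, an assumed one-compartment model
    whose diffusion term represents uncertainty in the dynamics."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.linspace(0.0, T, n + 1)
    c = np.zeros(n + 1)
    for i in range(n):
        drift = ka * dose * np.exp(-ka * t[i]) / V - vmax * c[i] / (km + c[i])
        c[i + 1] = max(c[i] + drift * dt
                       + sigma * np.sqrt(dt) * rng.normal(), 0.0)
    return t, c
```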
Markov stochasticity coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: iddo.eliazar@intel.com
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method for the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. The method is also tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
NASA Astrophysics Data System (ADS)
Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.
2008-07-01
We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
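For readers wanting to reproduce the kind of Monte Carlo check mentioned at the end of the abstract, here is a minimal simulation pricer for a vanilla call under a Heston-type stochastic volatility model; this is a generic full-truncation Euler scheme, not the authors' path-integral method, and all parameter values are placeholders:

```python
import numpy as np

def heston_call_mc(S0=100, K=100, T=1.0, r=0.02, v0=0.04, kappa=1.5,
                   theta=0.04, xi=0.5, rho=-0.7,
                   n_paths=100_000, n_steps=200, seed=0):
    """Full-truncation Euler Monte Carlo price of a European call under
    Heston dynamics; useful as an independent check of closed-form prices."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.normal(size=n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n_paths)
        vp = np.maximum(v, 0.0)                  # truncate negative variance
        S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
```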
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
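A forward-simulation sketch of a reflected SDE of the kind the paper studies: an Ornstein-Uhlenbeck movement model whose proposed steps are folded back across the barriers. The folding reflection and all parameters are illustrative; the paper's inference additionally augments the observed constrained path with a latent unconstrained one, which is not shown here:

```python
import numpy as np

def reflected_ou(x0=0.0, mu=0.0, beta=0.5, sigma=1.0, lo=-1.0, hi=1.0,
                 T=100.0, dt=0.01, seed=0):
    """Euler scheme for dX = beta*(mu - X) dt + sigma dW, reflected into
    [lo, hi] by folding each proposal back across the boundary."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        y = x[i] + beta * (mu - x[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
        while y < lo or y > hi:                  # reflect at the barriers
            y = 2 * lo - y if y < lo else 2 * hi - y
        x[i + 1] = y
    return x
```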
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact Fokker-Planck-Kolmogorov (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (the energy-based version) is generalized to the MDOF system case. Also, a new method for the determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method, in combination with the equivalent linearization technique, is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained using the new program and an existing in-house code are compared for two examples of beam-like structures.
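The equivalent (statistical) linearization idea generalized in the paper can be seen in its simplest single-degree-of-freedom form below: for a Duffing oscillator under white noise, the equivalent stiffness and the response variance are solved as a fixed point. This is the classical SDOF special case, not the paper's MDOF or energy-based formulation:

```python
import numpy as np

def duffing_equivalent_linearization(k=1.0, eps=0.2, c=0.1, S0=0.01,
                                     tol=1e-12, max_iter=1000):
    """Fixed-point iteration for x'' + c x' + k x + eps x^3 = w(t)
    (unit mass, two-sided white-noise spectral density S0): Gaussian
    closure gives k_eq = k + 3*eps*var with var = pi*S0/(c*k_eq)."""
    k_eq = k
    for _ in range(max_iter):
        var = np.pi * S0 / (c * k_eq)
        k_new = k + 3.0 * eps * var
        if abs(k_new - k_eq) < tol:
            break
        k_eq = k_new
    return k_eq, var
```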
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and uniform dispersion (deterministic) models have been used to compute flow injection analysis (FIA) responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to determine diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate, and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches.
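A sketch of the random-walk dispersion model: particles advect with the parabolic Poiseuille profile and take Gaussian diffusive steps, reflecting at the wall; the arrival-time histogram at the detector distance plays the role of the FIA response. The 1-D radial treatment and all defaults are simplifications, not the paper's implementation:

```python
import numpy as np

def random_walk_fia(D=5e-10, R=2.5e-4, u_mean=0.01, L=0.5, dt=1e-3,
                    n_part=2000, seed=0):
    """Random-walk dispersion in a cylindrical channel with Poiseuille
    profile u(r) = 2*u_mean*(1 - (r/R)^2); returns particle arrival times."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_part)
    r = R * np.sqrt(rng.uniform(size=n_part))    # uniform over cross-section
    t = np.zeros(n_part)
    alive = np.ones(n_part, dtype=bool)
    step = np.sqrt(2 * D * dt)
    while alive.any():
        n = alive.sum()
        x[alive] += 2*u_mean*(1 - (r[alive]/R)**2)*dt + rng.normal(0, step, n)
        r_new = np.abs(r[alive] + rng.normal(0, step, n))
        r[alive] = np.where(r_new > R, 2*R - r_new, r_new)  # reflect at wall
        t[alive] += dt
        alive &= x < L
    return t
```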
Newton's method for nonlinear stochastic wave equations driven by one-dimensional Brownian motion.
Leszczynski, Henryk; Wrzosek, Monika
2017-02-01
We consider nonlinear stochastic wave equations driven by one-dimensional white noise with respect to time. The existence of solutions is proved by means of Picard iterations. Next we apply Newton's method. Moreover, a second-order convergence in a probabilistic sense is demonstrated.
Stochastic Multi-Timescale Power System Operations With Variable Wind Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongyu; Krad, Ibrahim; Florita, Anthony
This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models while maintaining their computational tractability. Comparative case studies against two deterministic approaches, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts, are conducted in low and high wind penetration scenarios to highlight the advantages of the proposed methodology. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
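The progressive hedging update can be illustrated on a toy consensus problem; the sketch below is a generic PHA iteration on scalar quadratic scenario subproblems, not the paper's SCUC/SCED formulation (the scenario costs, penalty rho, and iteration count are all assumptions):

```python
import numpy as np

def progressive_hedging(costs, rho=1.0, n_iter=50):
    """Toy PHA: each scenario s wants a decision x near costs[s];
    nonanticipativity is enforced by penalizing deviation from the
    scenario average xbar and updating dual weights w."""
    c = np.asarray(costs, float)
    x = c.copy()                       # scenario-wise decisions
    w = np.zeros_like(c)               # dual weights
    xbar = x.mean()
    for _ in range(n_iter):
        # subproblem: min_x (x - c_s)^2 + w_s*x + rho/2*(x - xbar)^2
        x = (2 * c - w + rho * xbar) / (2 + rho)
        xbar = x.mean()
        w += rho * (x - xbar)
    return xbar                        # consensus (nonanticipative) decision
```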
DOE Office of Scientific and Technical Information (OSTI.GOV)
St James, S; Bloch, C; Saini, J
Purpose: Proton pencil beam scanning is used clinically across the United States. There are no current guidelines on tolerances for daily QA specific to pencil beam scanning, specifically related to individual spot properties (spot width). Using a stochastic method to determine tolerances has the potential to optimize tolerances on individual spots and decrease the number of false-positive failures in daily QA. Individual and global spot tolerances were evaluated. Methods: As part of daily QA for proton pencil beam scanning, a field of 16 spots (corresponding to 8 energies) is measured using an array of ion chambers (Matrixx, IBA). Each individual spot is fit to two Gaussian functions (x, y). The spot widths (σ) in x and y are recorded (32 parameters). Results from the daily QA were retrospectively analyzed for 100 days of data. The deviations of the spot widths were histogrammed and fit to a Gaussian function. The stochastic spot tolerance was taken to be the mean ± 3σ. Using these results, tolerances were developed and tested against known deviations in spot width. Results: The individual spot tolerances derived with the stochastic method decreased in 30/32 instances. Using the previous tolerances (±20% width), the daily QA would have detected 0/20 days of the deviation. Using a tolerance of any 6 spots failing the stochastic tolerance, 18/20 days of the deviation would have been detected. Conclusion: Using a stochastic method, we have been able to decrease daily tolerances for 30/32 of the measured spot widths. The stochastic tolerances can lead to detection of deviations that previously would have been picked up on monthly QA and missed by daily QA. This method could easily be extended to the evaluation of other QA parameters in proton spot scanning.
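The mean ± 3σ rule and the "6 spots" global criterion translate directly into code; the array shapes below are assumptions about how the daily records are stored:

```python
import numpy as np

def stochastic_tolerances(history):
    """history: (n_days, 32) array of daily spot-width deviations.
    Returns per-parameter (mean - 3*sigma, mean + 3*sigma) bands."""
    mu = history.mean(axis=0)
    sd = history.std(axis=0, ddof=1)
    return mu - 3 * sd, mu + 3 * sd

def day_flagged(today, lo, hi, n_fail=6):
    """Global rule from the abstract: flag the day when at least n_fail of
    the 32 spot widths fall outside their individual stochastic bands."""
    return np.sum((today < lo) | (today > hi)) >= n_fail
```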
Caranica, C; Al-Omari, A; Deng, Z; Griffith, J; Nilsen, R; Mao, L; Arnold, J; Schüttler, H-B
2018-01-01
A major challenge in systems biology is to infer the parameters of regulatory networks that operate in a noisy environment, such as in a single cell. In a stochastic regime it is hard to distinguish noise from the real signal and to infer the noise contribution to the dynamical behavior. When the genetic network displays oscillatory dynamics, it is even harder to infer the parameters that produce the oscillations. To address this issue we introduce a new estimation method built on a combination of stochastic simulations, mass action kinetics, and ensemble network simulations in which we match the average periodogram and phase of the model to those of the data. The method is relatively fast (compared to Metropolis-Hastings Monte Carlo methods), easy to parallelize, applicable to large oscillatory networks and large (~2000 cells) single cell expression data sets, and it quantifies the noise impact on the observed dynamics. Standard errors of estimated rate coefficients are typically two orders of magnitude smaller than the mean for single cell experiments with on the order of 1000 cells. We also provide a method to assess the goodness of fit of the stochastic network using the Hilbert phase of single cells. An analysis of phase departures from the null model with no communication between cells is consistent with a hypothesis of Stochastic Resonance describing single cell oscillators. Stochastic Resonance provides a physical mechanism whereby intracellular noise plays a positive role in establishing oscillatory behavior, but may require model parameters, such as rate coefficients, that differ substantially from those extracted at the macroscopic level from measurements on populations of millions of communicating, synchronized cells.
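As a sketch of the periodogram-matching idea, the snippet below computes the ensemble-averaged periodogram of single-cell traces; the model would then be fit by minimizing some distance between this statistic computed from data and from ensemble simulations (the distance and optimizer are left unspecified here):

```python
import numpy as np

def ensemble_periodogram(traces, dt):
    """Average periodogram over an ensemble of single-cell traces (rows),
    the summary statistic that model simulations are matched against."""
    x = traces - traces.mean(axis=1, keepdims=True)   # remove DC per cell
    P = np.abs(np.fft.rfft(x, axis=1))**2 / x.shape[1]
    f = np.fft.rfftfreq(x.shape[1], d=dt)
    return f, P.mean(axis=0)
```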
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D, and 40D (in random space) elliptic stochastic partial differential equations.
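A minimal illustration of the objects involved, assuming a 1-D probabilists' Hermite basis; the paper's multi-dimensional tensorized bases and MCMC-derived posterior model probabilities are beyond this sketch:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gpc_eval(coeffs, xi):
    """Evaluate a 1-D gPC expansion u(xi) = sum_k c_k He_k(xi) in the
    probabilists' Hermite basis at standard-normal inputs xi."""
    return hermeval(xi, coeffs)

def bma_predict(coeff_sets, post_probs, xi):
    """Bayesian model averaging: weight each candidate gPC model's
    prediction by its (assumed given) posterior model probability."""
    preds = np.array([gpc_eval(c, xi) for c in coeff_sets])
    return np.average(preds, axis=0, weights=post_probs)
```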
Stochastic volatility models and Kelvin waves
NASA Astrophysics Data System (ADS)
Lipton, Alex; Sepp, Artur
2008-08-01
We use stochastic volatility models to describe the evolution of an asset price, its instantaneous volatility, and its realized volatility. In particular, we concentrate on the Stein and Stein model (SSM) (1991) for the stochastic asset volatility and the Heston model (HM) (1993) for the stochastic asset variance. By construction, the volatility is not sign-definite in the SSM and is non-negative in the HM. It is well known that both models produce closed-form expressions for the prices of vanilla options via the Lewis-Lipton formula. However, the numerical pricing of exotic options by means of finite difference and Monte Carlo methods is much more complex for the HM than for the SSM. Until now, this complexity was considered to be an acceptable price to pay for ensuring that the asset volatility is non-negative. We argue that having negative stochastic volatility is a psychological rather than a financial or mathematical problem, and advocate using the SSM rather than the HM in most applications. We extend the SSM by adding volatility jumps and obtain a closed-form expression for the density of the asset price and its realized volatility. We also show that the current method of choice for solving pricing problems with stochastic volatility (via the affine ansatz for the Fourier-transformed density function) can be traced back to the Kelvin method designed in the 19th century for studying wave motion problems arising in fluid dynamics.
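To see why a sign-changing volatility is harmless, here is a minimal Euler simulation of Stein-Stein-type dynamics with an OU volatility process; uncorrelated Brownian drivers and all parameter values are assumptions for the sketch:

```python
import numpy as np

def stein_stein_paths(S0=100.0, v0=0.2, kappa=2.0, theta=0.2, xi=0.3,
                      r=0.02, T=1.0, dt=1e-3, n_paths=4, seed=0):
    """Euler scheme for dv = kappa*(theta - v) dt + xi dZ (OU volatility,
    may cross zero) and dS = S (r dt + v dW); only v^2 enters the variance,
    so negative v is unproblematic, as the paper argues."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        dZ = rng.normal(0.0, np.sqrt(dt), n_paths)
        S *= np.exp((r - 0.5 * v**2) * dt + v * dW)
        v += kappa * (theta - v) * dt + xi * dZ
    return S, v
```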
A comparison of LOD and UT1-UTC forecasts by different combined prediction techniques
NASA Astrophysics Data System (ADS)
Kosek, W.; Kalarus, M.; Johnson, T. J.; Wooden, W. H.; McCarthy, D. D.; Popiński, W.
Stochastic prediction techniques including autocovariance, autoregressive, autoregressive moving average, and neural network methods were applied to the UT1-UTC and Length of Day (LOD) International Earth Rotation and Reference Systems Service (IERS) EOPC04 time series to evaluate the capabilities of each method. All known effects such as leap seconds and solid Earth zonal tides were first removed from the observed values of UT1-UTC and LOD. Two combination procedures were applied to predict the resulting LODR time series: 1) the combination of least-squares (LS) extrapolation with a stochastic prediction method, and 2) the combination of discrete wavelet transform (DWT) filtering and a stochastic prediction method. The results of the combination of LS extrapolation with different stochastic prediction techniques were compared with the results of the UT1-UTC prediction method currently used by the IERS Rapid Service/Prediction Centre (RS/PC). It was found that the prediction accuracy depends on the starting prediction epochs and that, for the combined forecast methods, the mean prediction errors for 1 to about 70 days in the future are of the same order as those of the method used by the IERS RS/PC.
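A sketch of the LS-plus-stochastic combination: a least-squares fit of bias, trend, and annual/semi-annual harmonics is extrapolated ahead, and the residuals are forecast with an AR(p) model fitted by Yule-Walker. The harmonic periods, AR order, and overall simplicity are assumptions; the paper's implementations are more elaborate:

```python
import numpy as np

def ls_ar_forecast(x, n_ahead, p=10):
    """LS + AR combination sketch for an LOD-like daily series x."""
    t = np.arange(len(x), dtype=float)

    def design(tt):
        cols = [np.ones_like(tt), tt]
        for P in (365.25, 182.625):              # periods in days (assumed)
            cols += [np.sin(2*np.pi*tt/P), np.cos(2*np.pi*tt/P)]
        return np.vstack(cols).T

    c, *_ = np.linalg.lstsq(design(t), x, rcond=None)
    res = x - design(t) @ c
    r = np.correlate(res, res, 'full')[len(res)-1:] / len(res)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:p+1])           # Yule-Walker AR coefficients
    hist = list(res[-p:])
    for _ in range(n_ahead):                     # recursive AR prediction
        hist.append(float(np.dot(phi, hist[-1:-p-1:-1])))
    t_future = np.arange(len(x), len(x) + n_ahead, dtype=float)
    return design(t_future) @ c + np.array(hist[p:])
```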
Modeling the lake eutrophication stochastic ecosystem and the research of its stability.
Wang, Bo; Qi, Qianqian
2018-06-01
In reality, the lake system is disturbed by stochastic factors, both external and internal. By adding additive noise and multiplicative noise to the right-hand sides of the model equation, an additive stochastic model and a multiplicative stochastic model are established, respectively, in order to reduce model errors induced by the absence of some physical processes. For both kinds of stochastic ecosystem, the authors study the bifurcation characteristics with the FPK equation and the Lyapunov exponent method, based on the Stratonovich-Khasminskii stochastic averaging principle. Results show that, for the additive stochastic model, when the control parameter (i.e., the nutrient loading rate) falls into the interval [0.388644, 0.66003825], the ecosystem is bistable and the additive noise intensity cannot make the bifurcation point drift. In the region of bistability, the external stochastic disturbance, one of the main triggers of lake eutrophication, may make the ecosystem unstable and induce a transition. When the control parameter falls into the intervals (0, 0.388644) and (0.66003825, 1.0), there exists only one stable equilibrium state, and the additive noise intensity cannot change it. The multiplicative stochastic model exhibits more complex bifurcation behavior, and the ecosystem is altered by the multiplicative noise. The multiplicative noise also reduces the extent of the bistable region; ultimately, the bistable region vanishes for sufficiently large noise. Moreover, both the nutrient loading rate and the multiplicative noise can cause a regime shift of the ecosystem. For both kinds of stochastic ecosystem, the authors also discuss the evolution of the ecological variable in detail, using a four-stage Runge-Kutta method of strong order γ = 1.5. The numerical method is found to explain the regime-shift theory effectively and to agree with realistic analysis. These conclusions also confirm the two paths by which the system can move from one stable state to another proposed by Beisner et al. [3], which may help in understanding the occurrence mechanism of lake eutrophication from the viewpoint of stochastic modelling and mathematical analysis.
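A generic bistable system with additive noise reproduces the qualitative regime-shift behavior described above; the sketch uses Euler-Maruyama on dx = (x - x^3 + c) dt + sigma dW as a stand-in, not the paper's lake model or its order-1.5 Runge-Kutta solver:

```python
import numpy as np

def double_well_transitions(c=0.0, sigma=0.3, T=5000.0, dt=0.01, seed=0):
    """Simulate a generic bistable SDE and crudely count noise-induced
    regime shifts as sign changes of the state (dithering near zero
    inflates the count; this is only an illustration)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = -1.0                                   # start in the left well
    for i in range(1, n):
        x[i] = (x[i-1] + (x[i-1] - x[i-1]**3 + c) * dt
                + sigma * np.sqrt(dt) * rng.normal())
    crossings = int(np.sum(np.diff(np.sign(x)) != 0))
    return x, crossings
```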
Rackauckas, Christopher; Nie, Qing
2017-01-01
Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
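A much-simplified cousin of the rejection-sampling idea can be sketched in a few lines: when a step is rejected, the Brownian increment is split by a Brownian bridge and both halves are pushed onto a stack, so no information about the future path is lost. The error estimate here is step-doubling rather than the paper's embedded SRK pair, and everything below is an illustration, not the RSwM algorithm itself:

```python
import numpy as np

def adaptive_em(f, g, x0, T, h0=0.1, tol=1e-3, seed=0):
    """Toy adaptive Euler-Maruyama with path-preserving rejection for the
    scalar SDE dx = f(x) dt + g(x) dW."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    stack = []                                   # pending (dt, dW) increments
    while t < T - 1e-12:
        if not stack:
            h = min(h0, T - t)
            stack.append((h, rng.normal(0.0, np.sqrt(h))))
        dt, dW = stack.pop()
        # Brownian-bridge split of the increment at the midpoint
        dW1 = rng.normal(dW / 2.0, np.sqrt(dt) / 2.0)
        dW2 = dW - dW1
        full = x + f(x) * dt + g(x) * dW
        half = x + f(x) * dt / 2 + g(x) * dW1
        half = half + f(half) * dt / 2 + g(half) * dW2
        if abs(full - half) <= tol or dt < 1e-10:
            x, t = half, t + dt                  # accept the two half steps
        else:
            stack.append((dt / 2, dW2))          # retry at half the step,
            stack.append((dt / 2, dW1))          # reusing the same path
    return x

# e.g. adaptive_em(lambda x: -x, lambda x: 0.5, 1.0, 1.0)
```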
NASA Astrophysics Data System (ADS)
Bukoski, Alex; Steyn-Ross, D. A.; Pickett, Ashley F.; Steyn-Ross, Moira L.
2018-06-01
The dynamics of a stochastic type-I Hodgkin-Huxley-like point neuron model exposed to inhibitory synaptic noise are investigated as a function of distance from spiking threshold and the inhibitory influence of the general anesthetic agent propofol. The model is biologically motivated and includes the effects of intrinsic ion-channel noise via a stochastic differential equation description as well as inhibitory synaptic noise modeled as multiple Poisson-distributed impulse trains with saturating response functions. The effect of propofol on these synapses is incorporated through this drug's principal influence on fast inhibitory neurotransmission mediated by γ-aminobutyric acid (GABA) type-A receptors via reduction of the synaptic response decay rate. As the neuron model approaches spiking threshold from below, we track membrane voltage fluctuation statistics of numerically simulated stochastic trajectories. We find that for a given distance from spiking threshold, increasing the magnitude of anesthetic-induced inhibition is associated with augmented signatures of critical slowing: fluctuation amplitudes and correlation times grow as spectral power is increasingly focused at 0 Hz. Furthermore, as a function of distance from threshold, anesthesia significantly modifies the power-law exponents for variance and correlation time divergences observable in stochastic trajectories. Compared to the inverse square root power-law scaling of these quantities anticipated for the saddle-node bifurcation of type-I neurons in the absence of anesthesia, increasing anesthetic-induced inhibition results in an observable exponent <-0.5 for variance and >-0.5 for correlation time divergences. However, these behaviors eventually break down as distance from threshold goes to zero with both the variance and correlation time converging to common values independent of anesthesia. Compared to the case of no synaptic input, linearization of an approximating multivariate Ornstein-Uhlenbeck model reveals these effects to be the consequence of an additional slow eigenvalue associated with synaptic activity that competes with those of the underlying point neuron in a manner that depends on distance from spiking threshold.
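The critical-slowing signatures have a simple linearized picture: near threshold, fluctuations look like an Ornstein-Uhlenbeck process whose relaxation rate lam measures distance to threshold. The sketch below (parameters assumed, unrelated to the paper's neuron model) shows the divergence of variance and correlation time as lam -> 0:

```python
import numpy as np

def ou_fluctuations(lam, sigma=1.0, T=2000.0, dt=0.01, seed=0):
    """Simulate dx = -lam*x dt + sigma dW; stationary variance is
    sigma^2/(2*lam) and correlation time is 1/lam, both diverging as
    lam -> 0. Returns the sample variance and an estimated 1/e
    correlation time."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i-1] - lam * x[i-1] * dt + sigma * np.sqrt(dt) * rng.normal()
    xc = x - x.mean()
    ac = np.correlate(xc, xc, 'full')[n-1:] / (xc.var() * n)
    tau = dt * int(np.argmax(ac < np.exp(-1.0)))  # first 1/e crossing
    return x.var(), tau
```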
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of an optimal preview controller for a linear continuous-time stochastic control system over a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, an augmented error system containing the integrator vector, control input, reference signal, error vector, and system state is constructed. This transforms the tracking problem of optimal preview control for the linear stochastic system into an optimal output tracking problem for the augmented error system. Using dynamic programming from stochastic control theory, the optimal controller of the augmented error system with previewable reference signals is obtained; it is equivalent to the controller of the original system. Finally, numerical simulations show the effectiveness of the controller.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
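The final reliability step is compact enough to show directly; a minimal sketch of the FOSM estimate from the first two (conditional) moments of a performance margin g, with failure defined as g < 0:

```python
from math import sqrt
from statistics import NormalDist

def fosm_reliability(mu_g, var_g):
    """First Order Second Moment estimate: reliability index
    beta = mu/sigma, reliability R approximated by Phi(beta)."""
    beta = mu_g / sqrt(var_g)
    return NormalDist().cdf(beta)

# e.g. fosm_reliability(3.0, 1.0) ~ 0.9987. Markov's inequality yields a
# distribution-free (weaker) bound from the same moments, as the paper notes.
```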
Stochastic-field cavitation model
NASA Astrophysics Data System (ADS)
Dumond, J.; Magagnato, F.; Class, A.
2013-07-01
Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
A cavitation model based on Eulerian stochastic fields
NASA Astrophysics Data System (ADS)
Magagnato, F.; Dumond, J.
2013-12-01
Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
Methods of Stochastic Analysis of Complex Regimes in the 3D Hindmarsh-Rose Neuron Model
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Slepukhina, Evdokia
The problem of stochastic nonlinear analysis of neuronal activity is studied using the example of the Hindmarsh-Rose (HR) model. For the parametric region of tonic spiking oscillations, it is shown that random noise transforms the spiking dynamic regime into a bursting one. This stochastic phenomenon is specified by qualitative changes in the distributions of random trajectories and interspike intervals (ISIs). For a quantitative analysis of the noise-induced bursting, we suggest a constructive semi-analytical approach based on the stochastic sensitivity function (SSF) technique and the method of confidence domains, which allows us to describe geometrically the distribution of random states around the deterministic attractors. Using this approach, we develop a new algorithm for estimating the critical noise intensities corresponding to the qualitative changes in the stochastic dynamics. We show that the obtained estimates are in good agreement with the numerical results. An interplay between noise-induced bursting and transitions from order to chaos is discussed.
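The noise-induced transition can be reproduced with a direct simulation of the 3D HR equations under additive noise; the parameter values below are common textbook choices, not necessarily those used in the paper, and the SSF machinery itself is not sketched here:

```python
import numpy as np

def hindmarsh_rose(I=1.5, noise=0.05, r=0.002, s=4.0, x_rest=-1.6,
                   T=2000.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of the 3D Hindmarsh-Rose model with
    additive noise on the membrane variable x; with suitable I and noise,
    tonic spiking turns into bursting."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n); y = np.empty(n); z = np.empty(n)
    x[0], y[0], z[0] = -1.5, 0.0, 1.0
    for i in range(1, n):
        dx = y[i-1] + 3*x[i-1]**2 - x[i-1]**3 - z[i-1] + I
        dy = 1 - 5*x[i-1]**2 - y[i-1]
        dz = r * (s * (x[i-1] - x_rest) - z[i-1])
        x[i] = x[i-1] + dx*dt + noise*np.sqrt(dt)*rng.normal()
        y[i] = y[i-1] + dy*dt
        z[i] = z[i-1] + dz*dt
    return x
```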
Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi
2014-08-15
We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e}>3000 electrons.
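The core trick, replacing a sum over orbitals by an average over random vectors, is the same idea as Hutchinson-style stochastic trace estimation, sketched generically below; this is the underlying estimator, not the sGW implementation:

```python
import numpy as np

def stochastic_trace(apply_A, n, n_samples=200, seed=0):
    """Hutchinson estimator tr(A) ~ (1/m) sum_i chi_i^T A chi_i with
    random +-1 vectors chi; apply_A is a matrix-free operator, so the
    cost scales with matvecs rather than with the matrix dimension."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_samples):
        chi = rng.choice([-1.0, 1.0], size=n)    # stochastic "orbital"
        acc += chi @ apply_A(chi)
    return acc / n_samples
```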
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
NASA Astrophysics Data System (ADS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-04-01
We study the treatment of constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking the Ito calculus into account. We then obtain an improved Langevin equation and Fokker-Planck equation which naturally lead to the correct path-integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to the O(N) non-linear σ model, and it is shown that singular terms appearing in the improved Langevin equation cancel the δ^n(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation-covariant and vielbein-rotation-invariant formalism.
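For orientation, the unconstrained stochastic-quantization Langevin equation is easy to demonstrate numerically: evolving x in fictitious time with drift -S'(x) equilibrates to the path-integral weight exp(-S). The sketch below shows only this baseline; the paper's constrained, Ito-corrected version adds terms not reproduced here:

```python
import numpy as np

def langevin_sample(dS, x0=0.0, dt=1e-3, n_steps=200_000, seed=0):
    """Sample exp(-S(x)) via dx = -S'(x) dtau + sqrt(2) dW; dS is the
    derivative S'(x). Discard an initial burn-in before using samples."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        x[i] = x[i-1] - dS(x[i-1]) * dt + np.sqrt(2 * dt) * rng.normal()
    return x

# e.g. for S(x) = x^4/4: langevin_sample(lambda x: x**3)
```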
Stochastic modelling of intermittency.
Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram
2010-01-13
Recently, methods have been developed to model low-dimensional chaotic systems in terms of stochastic differential equations. We tested such methods in an electronic circuit experiment. We aimed to obtain reliable drift and diffusion coefficients even without a pronounced time-scale separation of the chaotic dynamics. By comparing the analytical solutions of the corresponding Fokker-Planck equation with experimental data, we show here that crisis-induced intermittency can be described in terms of a stochastic model which is dominated by state-space-dependent diffusion. Furthermore, we demonstrate and discuss some limits of these modelling approaches using numerical simulations. This enables us to state a criterion that can be used to decide whether a stochastic model will capture the essential features of a given time series.
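The drift and diffusion coefficients referred to above are typically estimated from data by conditional moments of increments (the Kramers-Moyal approach); a minimal binned estimator, with bin count and minimum occupancy as assumed tuning choices:

```python
import numpy as np

def drift_diffusion(x, dt, bins=50, min_count=10):
    """Estimate D1(x) = <dx|x>/dt and D2(x) = <dx^2|x>/(2 dt) from a
    uniformly sampled time series x; returns bin centers and estimates."""
    dx = np.diff(x)
    xc = x[:-1]
    edges = np.linspace(xc.min(), xc.max(), bins + 1)
    idx = np.clip(np.digitize(xc, edges) - 1, 0, bins - 1)
    D1 = np.full(bins, np.nan)
    D2 = np.full(bins, np.nan)
    for b in range(bins):
        m = idx == b
        if m.sum() >= min_count:
            D1[b] = dx[m].mean() / dt
            D2[b] = (dx[m] ** 2).mean() / (2 * dt)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, D1, D2
```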
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yongge; Xu, Wei, E-mail: weixu@nwpu.edu.cn; Yang, Guidong
Poisson white noise, as a typical non-Gaussian excitation, has attracted much attention recently. However, little work has been devoted to stochastic systems with fractional derivatives under Poisson white noise excitation. This paper investigates the stationary response of a class of quasi-linear systems with a fractional derivative excited by Poisson white noise. The equivalent stochastic system of the original stochastic system is obtained. Then, approximate stationary solutions are obtained with the help of the perturbation method. Finally, two typical examples are discussed in detail to demonstrate the effectiveness of the proposed method. The analysis also shows that the fractional order and the fractional coefficient significantly affect the responses of stochastic systems with fractional derivatives.
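Discrete realizations of Poisson white noise are usually generated as increments of a compound Poisson process; a minimal generator, with the Gaussian impulse-magnitude law as an assumption:

```python
import numpy as np

def poisson_white_noise_increments(lam=5.0, n_steps=10_000, dt=1e-3,
                                   mag_std=1.0, seed=0):
    """Per-step increments of a compound Poisson process: in each dt a
    Poisson(lam*dt) number of impulses arrives, each with an i.i.d.
    N(0, mag_std^2) magnitude (magnitude law assumed)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam * dt, size=n_steps)
    return np.array([rng.normal(0.0, mag_std, k).sum() for k in counts])
```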
On two mathematical problems of canonical quantization. IV
NASA Astrophysics Data System (ADS)
Kirillov, A. I.
1992-11-01
A method is presented for solving the problem of reconstructing a measure from its logarithmic derivative. The method completes the approach to solving stochastic differential equations via Dirichlet forms proposed by S. Albeverio and M. Röckner. As a result, one obtains the mathematical apparatus for stochastic quantization. The apparatus is applied to prove the existence of the Feynman-Kac measure of the sine-Gordon and λφ^{2n}/(1 + K²φ^{2n}) models. A synthesis of both mathematical problems of canonical quantization is obtained in the form of a second-order martingale problem for vacuum noise. It is shown that in stochastic mechanics the martingale problem is an analog of Newton's second law and enables one to find Nelson's stochastic trajectories without determining the wave functions.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
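For reference, a single realization of the SSA (Gillespie direct method) fits in a few lines; the GPU approach in the paper runs many such independent realizations in parallel. The interface below (stoichiometry matrix plus a propensity callback) is an assumed convention:

```python
import numpy as np

def gillespie(x0, stoich, rates, propensity, t_end, seed=0):
    """Direct-method SSA for one realization. stoich: (n_react, n_species)
    state-change matrix; propensity(x, rates) returns reaction propensities."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = np.asarray(propensity(x, rates), dtype=float)
        a0 = a.sum()
        if a0 <= 0.0:
            break                                # no reaction can fire
        t += rng.exponential(1.0 / a0)           # time to next reaction
        j = rng.choice(len(a), p=a / a0)         # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```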
Stochastic noise characteristics in matrix inversion tomosynthesis (MITS).
Godfrey, Devon J; McAdams, H P; Dobbins, James T III
2009-05-01
Matrix inversion tomosynthesis (MITS) uses known imaging geometry and linear systems theory to deterministically separate in-plane detail from residual tomographic blur in a set of conventional tomosynthesis ("shift-and-add") planes. A previous investigation explored the effect of scan angle (ANG), number of projections (N), and number of reconstructed planes (NP) on the MITS impulse response and modulation transfer function characteristics, and concluded that ANG = 20 degrees, N = 71, and NP = 69 is the optimal MITS imaging technique for chest imaging on our prototype tomosynthesis system. This article examines the effect of ANG, N, and NP on the MITS exposure-normalized noise power spectra (ENNPS) and seeks to confirm that the imaging parameters selected previously by an analysis of the MITS impulse response also yield reasonable stochastic properties in MITS reconstructed planes. ENNPS curves were generated for experimentally acquired mean-subtracted projection images, conventional tomosynthesis planes, and MITS planes with varying combinations of the parameters ANG, N, and NP. Image data were collected using a prototype tomosynthesis system, with 11.4 cm acrylic placed near the image receptor to produce lung-equivalent beam hardening and scattered radiation. Ten identically acquired tomosynthesis data sets (realizations) were collected for each selected technique and used to generate ensemble mean images that were subtracted from individual image realizations prior to noise power spectra (NPS) estimation. NPS curves were normalized to account for differences in entrance exposure (as measured with an ion chamber), yielding estimates of the ENNPS for each technique. Results suggest that mid- and high-frequency noise in MITS planes is fairly equivalent in magnitude to noise in conventional tomosynthesis planes, but low-frequency noise is amplified in the most anterior and posterior reconstruction planes. Selecting the largest available number of projections (N = 71) does not incur any appreciable additive electronic noise penalty compared to using fewer projections for roughly equivalent cumulative exposure. Stochastic noise is minimized by maximizing N and NP but increases with increasing ANG. The noise trend results for NP and ANG are contrary to what would be predicted by simply considering the MITS matrix conditioning and likely result from the interplay between noise correlation and the polarity of the MITS filters. From this study, the authors conclude that the previously determined optimal MITS imaging strategy based on impulse response considerations produces somewhat suboptimal stochastic noise characteristics, but is probably still the best technique for MITS imaging of the chest.
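The NPS estimation pipeline described (ensemble mean subtraction, squared FFT magnitudes, exposure normalization) can be sketched compactly; normalization conventions vary, and the form below, |FFT|^2 scaled by pixel area over pixel count and divided by the measured exposure, is one common choice rather than the authors' exact procedure (small-ensemble bias corrections are also ignored):

```python
import numpy as np

def ennps(realizations, pixel_pitch, exposure_mR):
    """Exposure-normalized noise power spectrum from identically acquired
    planes: subtract the ensemble mean, average squared 2-D FFT magnitudes,
    apply pixel-area normalization, divide by entrance exposure."""
    stack = np.asarray(realizations, dtype=float)   # (N, ny, nx)
    resid = stack - stack.mean(axis=0)              # mean-subtracted images
    nps = np.mean(np.abs(np.fft.fft2(resid, axes=(-2, -1)))**2, axis=0)
    ny, nx = resid.shape[-2:]
    nps *= pixel_pitch**2 / (nx * ny)
    return np.fft.fftshift(nps) / exposure_mR
```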
Downscaling Smooth Tomographic Models: Separating Intrinsic and Apparent Anisotropy
NASA Astrophysics Data System (ADS)
Bodin, Thomas; Capdeville, Yann; Romanowicz, Barbara
2016-04-01
In recent years, a number of tomographic models based on full waveform inversion have been published. Due to computational constraints, the fitted waveforms are low-pass filtered, which results in an inability to map features smaller than half the shortest wavelength. However, these tomographic images are not a simple spatial average of the true model, but rather an effective, apparent, or equivalent model that provides a similar 'long-wave' data fit. For example, it can be shown that a series of horizontal isotropic layers will be seen by a 'long wave' as a smooth anisotropic medium. In this way, the observed anisotropy in tomographic models is a combination of intrinsic anisotropy produced by lattice-preferred orientation (LPO) of minerals, and apparent anisotropy resulting from the inability to map discontinuities. Interpretation of observed anisotropy (e.g. in terms of mantle flow) therefore requires the separation of its intrinsic and apparent components. The "up-scaling" relations that link elastic properties of a rapidly varying medium to elastic properties of the effective medium as seen by long waves are strongly non-linear, and their inverse is highly non-unique. That is, a smooth homogenized effective model is equivalent to a large number of models with discontinuities. In the 1D case, Capdeville et al (GJI, 2013) recently showed that a tomographic model which results from the inversion of low-pass filtered waveforms is a homogenized model, i.e. the same as the model computed by upscaling the true model. Here we propose a stochastic method to sample the ensemble of layered models equivalent to a given tomographic profile. We use a transdimensional formulation where the number of layers is variable. Furthermore, each layer may be either isotropic (1 parameter) or intrinsically anisotropic (2 parameters). The parsimonious character of the Bayesian inversion gives preference to models with the fewest parameters (i.e. the least number of layers and the maximum number of isotropic layers). The non-uniqueness of the problem can be addressed by adding high-frequency data such as receiver functions, able to map first-order discontinuities. We show with synthetic tests that this method enables us to distinguish between intrinsic and apparent anisotropy in tomographic models, as layers with intrinsic anisotropy are only present when required by the data. A real data example is presented based on the latest global model produced at Berkeley.
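The claim that finely layered isotropic media appear anisotropic to long waves can be made concrete with Backus (1962) averaging, one standard form of the 'up-scaling' relations discussed here. The layer values below are illustrative; the sketch returns the five effective VTI stiffness combinations.

```python
import numpy as np

def backus_vti(lam, mu, h):
    """Effective VTI stiffnesses of a stack of isotropic layers (Backus, 1962)."""
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    w = np.asarray(h, float) / np.sum(h)
    avg = lambda q: np.sum(w * q)                # thickness-weighted average
    M = lam + 2.0 * mu
    C = 1.0 / avg(1.0 / M)
    F = C * avg(lam / M)
    A = avg(4.0 * mu * (lam + mu) / M) + C * avg(lam / M) ** 2
    L = 1.0 / avg(1.0 / mu)
    N = avg(mu)
    return A, C, F, L, N

# two alternating isotropic layers: the effective medium has A != C
A, C, F, L, N = backus_vti(lam=[20e9, 5e9], mu=[15e9, 3e9], h=[1.0, 1.0])
print(A / C)  # ratio > 1 indicates apparent (layering-induced) anisotropy
```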
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Li, X. M., E-mail: lixinmiaotju@163.com; Xu, J., E-mail: xujia-ld@163.com
A kind of magnetic shape memory alloy (MSMA) microgripper is proposed in this paper, and its nonlinear dynamic characteristics are studied when stochastic perturbation is considered. Nonlinear differential items are introduced to explain the hysteretic phenomena of MSMA, and the constitutive relationships among strain, stress, and magnetic field intensity are obtained by the partial least-squares regression method. The nonlinear dynamic model of an MSMA microgripper subjected to in-plane stochastic excitation is developed. The stationary probability density function of the system's response is obtained, the transition sets of the system are determined, and the conditions of stochastic bifurcation are obtained. The homoclinic and heteroclinic orbits of the system are given, and the boundary of the system's safe basin is obtained by the stochastic Melnikov integral method. The numerical and experimental results show that the system's motion depends on its parameters, and stochastic Hopf bifurcation appears in the variation of the parameters; the area of the safe basin decreases with the increase of the stochastic excitation, and the boundary of the safe basin becomes fractal. The results of this paper are helpful for the application of MSMA microgrippers in engineering fields.
NASA Astrophysics Data System (ADS)
El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.
2007-11-01
We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging time. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely the MotionPakII and the Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points, and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA, and that the sensors' stochastic model parameters are temperature dependent. Also, a Kaiser-window FIR low-pass filter is used to investigate the effect of the de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
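For reference, the (non-overlapping) Allan variance used above can be computed directly from a static sensor record; the sketch below assumes a regularly sampled rate signal and illustrative cluster sizes.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance for cluster size m."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)

fs = 100.0                                          # sample rate [Hz], assumed
y = np.random.default_rng(2).normal(size=200_000)  # static record, white noise
for m in (1, 10, 100, 1000):
    sigma = np.sqrt(allan_variance(y, m))
    print(f"tau = {m / fs:8.2f} s   sigma = {sigma:.4e}")  # ~ tau^-1/2 for white noise
```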
NASA Astrophysics Data System (ADS)
Phuong Tran, Anh; Dafflon, Baptiste; Hubbard, Susan S.
2017-09-01
Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface-subsurface hydrological-thermal processes and microbial decomposition of OC, both of which in turn are important for predicting carbon-climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological-thermal processes associated with annual freeze-thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets - including soil liquid water content, temperature and electrical resistivity tomography (ERT) data - to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface-subsurface hydrological dynamics from the bedrock to the top of the canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice-liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of the desired model parameters. For the hydrological-thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and the benefit of joint inversion on the estimation of OC and other parameters. We also quantify the propagation of uncertainty from the estimated parameters to the prediction of hydrological-thermal responses. We find that, compared to inversion of a single dataset (temperature, liquid water content or apparent resistivity), joint inversion of these datasets significantly reduces parameter uncertainty. We find that the joint inversion approach is able to estimate OC and sand content within the shallow active layer (top 0.3 m of soil) with high reliability. Due to the small variations of temperature and moisture within the shallow permafrost (here at about 0.6 m depth), the approach is unable to estimate OC with confidence. However, if the soil porosity is functionally related to the OC and mineral content, which is often observed in organic-rich Arctic soil, the uncertainty of the OC estimate at this depth decreases markedly. Our study documents the value of the new surface-subsurface, deterministic-stochastic inversion approach, as well as the benefit of including multiple types of data to estimate OC and associated hydrological-thermal dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, J.; Hoversten, G.M.
2011-09-15
Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann's equations and Archie's law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and the inaccuracy of the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is incorporated through lithotypes modeled as Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate the seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator or create a buffer.
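A minimal sketch of the processing chain the abstract describes: fit the counts with a spline, subtract the fit, standardize the residual, and map to uniform variates through the normal CDF (a probability-integral-transform counterpart of inverse transform sampling). The synthetic counts and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

rng = np.random.default_rng(3)
t = np.arange(1440.0)                        # one day of 1-minute neutron counts
counts = 6000 + 50 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 20, t.size)

trend = UnivariateSpline(t, counts, s=t.size * 400)(t)  # smooth spline fit
resid = counts - trend                                  # extracted stochastic component
z = (resid - resid.mean()) / resid.std()                # standard normal variate
u = norm.cdf(z)                                         # uniform(0, 1) variate
print(u.min(), u.max(), round(u.mean(), 3))
```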
The magnetic field at the core-mantle boundary
NASA Technical Reports Server (NTRS)
Bloxham, J.; Gubbins, D.
1985-01-01
Models of the geomagnetic field are, in general, produced from a least-squares fit of the coefficients in a truncated spherical harmonic expansion to the available data. Downward continuation of such models to the core-mantle boundary (CMB) is an unstable process: the results are found to be critically dependent on the choice of truncation level. Modern techniques allow this fundamental difficulty to be circumvented. The method of stochastic inversion is applied to modeling the geomagnetic field. Prior information is introduced by requiring the spectrum of spherical harmonic coefficients to fall off in a particular manner consistent with the Ohmic heating in the core having a finite lower bound. This results in models with finite errors in the radial field at the CMB. Curves of zero radial field can then be determined, and integrals of the radial field over patches on the CMB bounded by these null-flux curves calculated. Under the assumption of negligible magnetic diffusion in the core (the frozen-flux hypothesis), these integrals are time-invariant.
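Stochastic inversion in this sense is regularized least squares with a prior covariance on the model coefficients; a generic sketch on a synthetic linear problem follows, with a prior variance that falls off with harmonic degree standing in for the assumed spectral decay. The operator, decay rate and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_data, n_coef = 50, 20
G = rng.normal(size=(n_data, n_coef))          # synthetic linear forward operator
degrees = np.arange(1, n_coef + 1)
C_m = np.diag(degrees ** -3.0)                 # prior: coefficient power falls off with degree
C_e = 0.01 * np.eye(n_data)                    # data error covariance

m_true = rng.multivariate_normal(np.zeros(n_coef), C_m)
d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), C_e)

# stochastic-inverse (Bayesian least-squares) estimate and its posterior covariance
A = G.T @ np.linalg.inv(C_e) @ G + np.linalg.inv(C_m)
m_hat = np.linalg.solve(A, G.T @ np.linalg.inv(C_e) @ d)
post_cov = np.linalg.inv(A)                    # finite formal errors on the estimate
print(np.abs(m_hat - m_true).mean(), np.sqrt(np.diag(post_cov)).mean())
```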
Koracin, Darko; Vellore, Ramesh; Lowenthal, Douglas H; Watson, John G; Koracin, Julide; McCord, Travis; DuBois, David W; Chen, L W Antony; Kumar, Naresh; Knipping, Eladio M; Wheeler, Neil J M; Craig, Kenneth; Reid, Stephen
2011-06-01
The main objective of this study was to investigate the capabilities of the receptor-oriented inverse-mode Lagrangian Stochastic Particle Dispersion Model (LSPDM), with 12-km resolution Mesoscale Model 5 (MM5) wind field input, for the assessment of source identification from seven regions impacting two receptors located in the eastern United States. The LSPDM analysis was compared with a standard version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) single-particle backward-trajectory analysis using inputs from MM5 and the Eta Data Assimilation System (EDAS) with horizontal grid resolutions of 12 and 80 km, respectively. The analysis included four 7-day summertime events in 2002; residence times in the modeling domain were computed from the inverse LSPDM runs and HYSPLIT-simulated backward trajectories started from receptor-source heights of 100, 500, 1000, 1500, and 3000 m. Statistics were derived using normalized values of LSPDM- and HYSPLIT-predicted residence times versus Community Multiscale Air Quality model-predicted sulfate concentrations used as baseline information. Of the 40 cases considered, the LSPDM identified first- and second-ranked emission region influences in 37 cases, whereas HYSPLIT-MM5 (HYSPLIT-EDAS) identified the sources in 21 (16) cases. The LSPDM produced a higher overall correlation coefficient (0.89) compared with HYSPLIT (0.55-0.62). The improvement from using the LSPDM is also seen in the overall normalized root-mean-square error values: 0.17 for the LSPDM compared with 0.30-0.32 for HYSPLIT. The HYSPLIT backward trajectories generally tend to underestimate near-receptor sources because of a lack of stochastic dispersion of the backward trajectories, and to overestimate distant sources because of a lack of treatment of dispersion. Additionally, the HYSPLIT backward trajectories showed a lack of consistency in the results obtained from different single vertical levels for starting the backward trajectories. To alleviate problems due to the selection of a backward-trajectory starting level within a large, complex set of 3-dimensional winds, turbulence, and dispersion, results were averaged over all starting heights, which yielded uniform improvement against all individual cases.
Time Evolution of the Dynamical Variables of a Stochastic System.
ERIC Educational Resources Information Center
de la Pena, L.
1980-01-01
By using the method of moments, it is shown that several important and apparently unrelated theorems describing average properties of stochastic systems are in fact particular cases of a general law; this method is applied to generalize the virial theorem and the fluctuation-dissipation theorem to the time-dependent case. (Author/SK)
Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2016-11-01
Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme to multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge of atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive collocation methods such as Monte Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
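To illustrate what 'intrusive' means here, consider the scalar toy problem dx/dt = -k x with uncertain k = k0 + k1*xi, xi ~ N(0,1): substituting Hermite polynomial chaos expansions and Galerkin-projecting turns one random ODE into a coupled deterministic system for the PC modes. This is a toy sketch under those assumptions, not the multiphase solver described above.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

P = 6                                     # PC order (modes 0..P)
xi, w = hermegauss(30)                    # Gauss-HermiteE nodes and weights
w = w / np.sqrt(2 * np.pi)                # normalize so sums approximate E[.] for N(0,1)

def psi(n, x):
    """Orthonormal probabilists' Hermite polynomial of degree n."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c) / np.sqrt(factorial(n))

# triple products e[i, j, k] = E[psi_i psi_j psi_k], computed by quadrature
e = np.array([[[np.sum(w * psi(i, xi) * psi(j, xi) * psi(k, xi))
                for k in range(P + 1)] for j in range(P + 1)] for i in range(P + 1)])

k0, k1 = 1.0, 0.3                         # k(xi) = k0 + k1 * xi (illustrative)
K = k0 * np.eye(P + 1) + k1 * e[1]        # Galerkin matrix of E[k psi_j psi_i]
x = np.zeros(P + 1)
x[0] = 1.0                                # deterministic initial condition x(0) = 1

dt, t_end = 1e-3, 1.0
for _ in range(int(t_end / dt)):          # forward Euler on the coupled PC modes
    x -= dt * (K @ x)

print("mean:", x[0], "std:", np.sqrt(np.sum(x[1:] ** 2)))
```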
Calculating the Malliavin derivative of some stochastic mechanics problems
Hauseux, Paul; Hale, Jack S.
2017-01-01
The Malliavin calculus is an extension of the classical calculus of variations from deterministic functions to stochastic processes. In this paper we aim to show, in a practical and didactic way, how to calculate the Malliavin derivative - the derivative of the expectation of a quantity of interest of a model with respect to its underlying stochastic parameters - for four problems found in mechanics. The non-intrusive approach uses the Malliavin Weight Sampling (MWS) method in conjunction with a standard Monte Carlo method. The models are expressed as ODEs or PDEs and discretised using the finite difference or finite element methods. Specifically, we consider stochastic extensions of: a 1D Kelvin-Voigt viscoelastic model discretised with finite differences, a 1D linear elastic bar, a hyperelastic bar undergoing buckling, and incompressible Navier-Stokes flow around a cylinder, all discretised with finite elements. A further contribution of this paper is an extension of the MWS method to the more difficult case of non-Gaussian random variables and the calculation of second-order derivatives. We provide open-source code for the numerical examples in this paper. PMID:29261776
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first apply the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions are then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns, and that the algorithm is embarrassingly parallel. We formulate the problem by using the generalized Gaussian distribution. This enables us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology is tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
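A hedged sketch of the first stage, using the open-source `cma` package as a stand-in for the authors' CMAES implementation (an assumption): run the ask/tell loop on a misfit and keep every evaluated model below a threshold as a crude equivalence-domain ensemble. The toy diagonal misfit mimics an ill-conditioned problem.

```python
import numpy as np
import cma  # pip install cma; assumed stand-in for the authors' CMAES code

def misfit(m):
    """Toy ill-conditioned least-squares misfit standing in for the PDE solve."""
    G = np.diag([1.0, 0.1, 0.01])               # poorly constrained directions
    d_obs = np.array([1.0, 0.2, 0.05])
    return float(np.sum((G @ m - d_obs) ** 2))

es = cma.CMAEvolutionStrategy(3 * [0.0], 0.5, {'seed': 5, 'verbose': -9})
ensemble = []                                    # low-misfit (equivalent) models
while not es.stop():
    solutions = es.ask()                         # propose a population
    fitnesses = [misfit(x) for x in solutions]
    es.tell(solutions, fitnesses)                # adapt mean and covariance
    ensemble += [x for x, f in zip(solutions, fitnesses) if f < 1e-2]

ensemble = np.array(ensemble)
print(len(ensemble), ensemble.std(axis=0))       # spread maps the equivalence domain
```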
An inverse approach to perturb historical rainfall data for scenario-neutral climate impact studies
NASA Astrophysics Data System (ADS)
Guo, Danlu; Westra, Seth; Maier, Holger R.
2018-01-01
Scenario-neutral approaches are being used increasingly for climate impact assessments, as they allow water resource system performance to be evaluated independently of climate change projections. An important element of these approaches is the generation of perturbed series of hydrometeorological variables that form the inputs to hydrologic and water resource assessment models, with most scenario-neutral studies to date considering only shifts in the average and a limited number of other statistics of each climate variable. In this study, a stochastic generation approach is used to perturb not only the average of the relevant hydrometeorological variables, but also attributes such as intermittency and extremes. An optimization-based inverse approach is developed to obtain hydrometeorological time series with uniform coverage across the possible ranges of rainfall attributes (referred to as the 'exposure space'). The approach is demonstrated on a widely used rainfall generator, WGEN, for a case study at Adelaide, Australia, and is shown to be capable of producing evenly-distributed samples over the exposure space. The inverse approach expands the applicability of the scenario-neutral approach in evaluating a water resource system's sensitivity to a wider range of plausible climate change scenarios.
Debates - Stochastic subsurface hydrology from theory to practice: Introduction
NASA Astrophysics Data System (ADS)
Rajaram, Harihar
2016-12-01
This paper introduces the papers in the "Debates - Stochastic Subsurface Hydrology from Theory to Practice" series. Beginning in the 1970s, the field of stochastic subsurface hydrology has been an active field of research, with over 3500 journal publications, of which over 850 have appeared in Water Resources Research. We are fortunate to have insightful contributions from four groups of distinguished authors who discuss the reasons why the advanced research framework established in stochastic subsurface hydrology has not impacted the practice of groundwater flow and transport modeling and design significantly. There is reasonable consensus that a community effort aimed at developing "toolboxes" for applications of stochastic methods will make them more accessible and encourage practical applications.
Albert, Jaroslav
2016-01-01
Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology--the gene switch and the Griffith model of a genetic oscillator--and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them.
NASA Astrophysics Data System (ADS)
Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.
2017-11-01
This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
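The SDE in question is a Langevin-type diffusion whose invariant measure is the Bayesian posterior. The sketch below uses an explicit Euler step on a standard normal stand-in posterior (the paper itself uses the implicit Euler method); the step size and target are illustrative assumptions.

```python
import numpy as np

def log_post_grad(x):
    """Gradient of the log posterior; standard normal as a stand-in."""
    return -x

rng = np.random.default_rng(6)
h, n_steps = 0.05, 200_000
x, samples = 0.0, np.empty(n_steps)
for n in range(n_steps):
    # Euler step of dX = (1/2) grad log pi(X) dt + dW, ergodic for pi
    x += 0.5 * h * log_post_grad(x) + np.sqrt(h) * rng.normal()
    samples[n] = x

print(samples.mean(), samples.var())   # ~0 and ~1 up to discretization bias
```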
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach for quantifying uncertainty and characterizing multiscale phenomena in many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To handle this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least-square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least-square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least-square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To treat heterogeneity properties and multiscale features in the models effectively, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least-square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least-square HDMR can significantly reduce computational complexity.
Engen, Steinar; Lee, Aline Magdalena; Sæther, Bernt-Erik
2018-02-01
We analyze a spatial age-structured model with density regulation, age specific dispersal, stochasticity in vital rates and proportional harvesting. We include two age classes, juveniles and adults, where juveniles are subject to logistic density dependence. There are environmental stochastic effects with arbitrary spatial scales on all birth and death rates, and individuals of both age classes are subject to density independent dispersal with given rates and specified distributions of dispersal distances. We show how to simulate the joint density fields of the age classes and derive results for the spatial scales of all spatial autocovariance functions for densities. A general result is that the squared scale has an additive term equal to the squared scale of the environmental noise, corresponding to the Moran effect, as well as additive terms proportional to the dispersal rate and variance of dispersal distance for the age classes and approximately inversely proportional to the strength of density regulation. We show that the optimal harvesting strategy in the deterministic case is to harvest only juveniles when their relative value (e.g. financial) is large, and otherwise only adults. With increasing environmental stochasticity there is an interval of increasing length of values of juveniles relative to adults where both age classes should be harvested. Harvesting generally tends to increase all spatial scales of the autocovariances of densities. Copyright © 2017. Published by Elsevier Inc.
A framework to analyze the stochastic harmonics and resonance of wind energy grid interconnection
Cho, Youngho; Lee, Choongman; Hur, Kyeon; ...
2016-08-31
This study addresses a modeling and analysis methodology for investigating the stochastic harmonics and resonance concerns of wind power plants (WPPs). Wideband harmonics from modern wind turbines are observed to be stochastic, associated with real power production, and they may adversely interact with the grid impedance and cause unexpected harmonic resonance if not comprehensively addressed in the planning and commissioning of the WPPs. These issues should become more critical as wind penetration levels increase. We thus propose a planning study framework comprising the following functional steps: First, the best-fitted probability density functions (PDFs) of the harmonic components of interest in the frequency domain are determined. In operations planning, maximum likelihood estimations followed by a chi-square test are used once field measurements or manufacturers' data are available. Second, harmonic currents from the WPP are represented by randomly-generating harmonic components based on their PDFs (frequency spectrum) and then synthesized for time-domain simulations via inverse Fourier transform. Finally, we conduct a comprehensive assessment by including the impacts of feeder configurations, harmonic filters, and the variability of parameters. We demonstrate the efficacy of the proposed study approach for a 100-MW offshore WPP consisting of 20 units of 5-MW full-converter turbines, a realistic benchmark system adapted from a WPP under development in Korea, and discuss lessons learned through this research.
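The synthesis step of the framework (drawing harmonic magnitudes from fitted PDFs, assigning random phases, and recovering a time-domain current via inverse Fourier transform) can be sketched as follows. The lognormal magnitude model, the characteristic harmonic orders and the scaling are illustrative assumptions, not the fitted PDFs of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
f0, fs, n = 50.0, 6400.0, 1280           # fundamental, sample rate, samples (10 cycles)
spectrum = np.zeros(n // 2 + 1, dtype=complex)

fund = int(round(f0 * n / fs))           # frequency bin of the fundamental
spectrum[fund] = 1.0                     # 1 pu fundamental current
for order in (5, 7, 11, 13):             # characteristic converter harmonics (assumed)
    mag = rng.lognormal(mean=np.log(0.01), sigma=0.5)  # magnitude drawn from its PDF
    phase = rng.uniform(0.0, 2.0 * np.pi)              # random phase
    spectrum[order * fund] = mag * np.exp(1j * phase)

# synthesize the time-domain waveform for time-domain simulation
i_t = np.fft.irfft(spectrum, n) * n / 2
print(i_t[:5])
```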
A solvable model of Vlasov-kinetic plasma turbulence in Fourier-Hermite phase space
NASA Astrophysics Data System (ADS)
Adkins, T.; Schekochihin, A. A.
2018-02-01
A class of simple kinetic systems is considered, described by the one-dimensional Vlasov-Landau equation with Poisson or Boltzmann electrostatic response and an energy source. Assuming a stochastic electric field, a solvable model is constructed for the phase-space turbulence of the particle distribution. The model is a kinetic analogue of the Kraichnan-Batchelor model of chaotic advection. The solution of the model is found in Fourier-Hermite space and shows that the free-energy flux from low to high Hermite moments is suppressed, with phase mixing cancelled on average by anti-phase-mixing (stochastic plasma echo). This implies that Landau damping is an ineffective route to dissipation (i.e. to thermalisation of electric energy via velocity space). The full Fourier-Hermite spectrum is derived. Its asymptotics are m^(-3/2) at low wavenumbers k and high Hermite moments m, and m^(-1/2) k^(-2) at low Hermite moments and high wavenumbers. These conclusions hold at wavenumbers below a certain cutoff (an analogue of the Kolmogorov scale), which increases with the amplitude of the stochastic electric field and scales as the inverse square of the collision rate. The energy distribution and flows in phase space are a simple and, therefore, useful example of competition between phase mixing and nonlinear dynamics in kinetic turbulence, reminiscent of more realistic but more complicated multi-dimensional systems that have not so far been amenable to complete analytical solution.
Kundu, Siddhartha
2016-10-21
Chemotaxis integrates diverse intra- and inter-cellular molecular processes into a purposeful patho-physiological response, the operative rules of which remain speculative. Here, I surmise that superoxide anion induced directional motility in a responding cell results from a quasi pathway between the stimulus, the surrounding interstitium, and its biochemical repertoire. The epochal event in the mounting of an inflammatory response is the extravascular transmigration of a phagocyte-competent cell towards the site of injury, secondary to the development of a lamellipodium. This stochastic-to-Markovian process conversion is initiated by the cytosolic ROS of the damaged cell, but is maintained by the inverse association of a de novo generated pool of self-sustaining superoxide anions and sub-critical hydrogen peroxide levels. Whilst the exponential rise of O2(.-) is secondary to the focal accumulation of higher-order lipid raft-Rac1/2-actin oligomers, O2(.-) mediated inactivation and redistribution of ECSOD accounts for the minimal concentration of H2O2 that the phagocyte experiences. The net result of this reciprocal association between ROS/RNS members is the prolonged perturbation and remodeling of the cytoskeleton and plasma membrane, a prelude to chemotactic migration. The manuscript also describes the significance of stochastic modeling in the testing of plausible molecular hypotheses of observable phenomena in complex biological systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz
2015-01-01
Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5 Hz).
Multi-fidelity stochastic collocation method for computation of statistical moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends the multi-fidelity approximation method developed in earlier work. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiu, Dongbin
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Mo Zhou; Joseph Buongiorno
2011-01-01
Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...
NASA Astrophysics Data System (ADS)
Edenhofer, Peter; Ulamec, Stephan
2015-04-01
The paper is devoted to the results of doctoral research work at the University of Bochum as applied to the radar transmission experiment CONSERT of the ESA cometary mission Rosetta. This research aims at reaching the limits of optimum spatial (and temporal) resolution for radar remote sensing by incorporating covariance information concerning error-balanced control as well as the coherence of wave propagation effects through the random composite media involved (based on Joel Franklin's approach of extended stochastic inversion). As a consequence, the well-known inherent numerical instabilities of remote sensing are significantly reduced in a robust way by increasing the weight of the main diagonal elements of the resulting composite matrix to be inverted with respect to the off-diagonal elements, following synergy relations with the principle of the correlation receiver in wireless telecommunications. It is shown that the enhancement of resolution for remote sensing holds for integral and differential equation approaches of inversion alike. In addition, the paper presents a discussion of how efficient inversion of radar data is achieved by an overall optimization of inversion due to a novel neuro-genetic approach. This kind of approach is in synergy with the priority research program "Organic Computing" of DFG / German Research Organization. This Neuro-Genetic Optimization (NGO) turns out, firstly, to take into account more detailed physical information supporting further improved resolution, such as the process of accretion of the cometary nucleus, wave propagation effects from rough surfaces, ground clutter, nonlinear focusing, etc., and, secondly, to accelerate the computing process of inversion significantly, e.g., enabling online control of autonomous processes such as detection of unknown objects, navigation, etc. The paper describes in some detail how this neuro-genetic approach of optimization is incorporated into the procedure of data inversion by combining inverted artificial neural networks of adequately chosen topology and learning routines for short access times with the concept of genetic algorithms, enabling a multi-dimensional global optimum to be reached subject to a properly constructed and problem-oriented target function, ensemble selection rules, etc. Finally, the paper discusses how the power of realistic simulation of the structures of the interior of a cometary nucleus can be improved by applying Benoit Mandelbrot's concept of fractal structures. It is shown how the fractal volumetric modelling of the nucleus of a comet can be accomplished by finite 3D elements of flexibility (serving topography and morphology as well), such as elements of tetrahedron shape with specific scaling factors of self-similarity and a Maxwellian type of distribution function. By applying the widely accepted fBm concept of fractional Brownian motion, each of the corresponding Hurst exponents 0 (rough) < H < 1 (smooth) can in principle be derived for the multi-fractal depth (and terrain) profiles of the equivalent dielectric constant per tomographic angular orbital segment of intersection by transmissive radar ray paths with the nucleus of the comet. Cooperative efforts and work are in progress to achieve numerical results of depth profiles for the nucleus of comet 67P/Churyumov-Gerasimenko.
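The fBm modelling mentioned at the end can be sketched with 1D spectral synthesis: a profile with Hurst exponent H has a power spectrum proportional to f^-(2H+1), so the amplitude spectrum scales as f^-(H+1/2). The profile length and H values below are illustrative.

```python
import numpy as np

def fbm_profile(n, hurst, rng):
    """1D fractional-Brownian-motion-like profile via spectral synthesis."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-(hurst + 0.5))        # |F(f)| ~ f^-(H + 1/2)
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    z = np.fft.irfft(amp * np.exp(1j * phases), n)
    return (z - z.mean()) / z.std()                # normalized depth profile

rng = np.random.default_rng(8)
rough = fbm_profile(4096, hurst=0.2, rng=rng)      # rough profile (low H)
smooth = fbm_profile(4096, hurst=0.8, rng=rng)     # smooth profile (high H)
print(np.abs(np.diff(rough)).mean(), np.abs(np.diff(smooth)).mean())
```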
NASA Astrophysics Data System (ADS)
Zhu, Z. W.; Zhang, W. D.; Xu, J.
2014-03-01
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and the noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases as the noise intensifies, and the boundary of the safe basin becomes fractal; the system's reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful for the engineering applications of GMF.
NASA Astrophysics Data System (ADS)
Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan
2017-04-01
Because time-series stationarization has a key role in stochastic modeling results, three methods of eliminating the periodic effect on time-series stationarity are analyzed in this study: seasonal differencing, seasonal standardization and spectral analysis. First, six time series, including 4 streamflow series and 2 water temperature series, are stationarized. The stochastic term of these series is subsequently modeled with ARIMA. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature, although it is quite similar to seasonal differencing, while spectral analysis performed the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic-effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than the monthly streamflow. The criteria of the average stochastic term divided by the amplitude of the periodic term obtained for monthly streamflow and monthly water temperature were 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04, respectively.
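The two stationarization methods that perform best above are each a few lines on a monthly series; a generic sketch with a period of 12 and synthetic data follows (the seasonal signal and noise model are assumptions).

```python
import numpy as np

rng = np.random.default_rng(9)
months = np.arange(480)                            # 40 years of monthly flows (synthetic)
q = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.gamma(2.0, 5.0, months.size)

# seasonal differencing: subtract the value 12 months earlier
q_diff = q[12:] - q[:-12]

# seasonal standardization: remove each calendar month's mean and std
q_mat = q.reshape(-1, 12)
q_std = ((q_mat - q_mat.mean(axis=0)) / q_mat.std(axis=0)).ravel()

print(q_diff.std(), q_std.std())                   # periodic term removed in both
```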
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
Kinetic rates in gene expression are a key measurement of the stability of gene products and give important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed for evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcriptional factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density, based on the discrete nature of stochastic simulations. A genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
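A minimal sketch of simulated maximum likelihood for a one-gene birth-death model under a chemical-Langevin approximation: simulate many one-step trajectories per candidate rate, estimate the transitional density with a Gaussian kernel, and score the observed transitions. A grid search stands in for the genetic optimization algorithm; all rates and sizes are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(10)
dt, k_syn = 0.1, 10.0

def simulate_next(x0, k_deg, n_sim):
    """Chemical-Langevin step of dX = (k_syn - k_deg X) dt + sqrt(k_syn + k_deg X) dW."""
    drift = k_syn - k_deg * x0
    diff = np.sqrt(np.maximum(k_syn + k_deg * x0, 1e-9))
    return x0 + drift * dt + diff * np.sqrt(dt) * rng.normal(size=n_sim)

# synthetic single-cell observations generated with the "true" rate 0.5
obs = [20.0]
for _ in range(100):
    obs.append(simulate_next(obs[-1], 0.5, 1)[0])

def sim_loglik(k_deg):
    ll = 0.0
    for x0, x1 in zip(obs[:-1], obs[1:]):
        kde = gaussian_kde(simulate_next(x0, k_deg, 500))  # simulated transitional density
        ll += np.log(max(kde(x1)[0], 1e-300))
    return ll

grid = np.linspace(0.2, 0.9, 15)              # grid search stands in for the GA
print(grid[int(np.argmax([sim_loglik(k) for k in grid]))])
```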
Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.
2016-09-01
In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte Carlo and Latin hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of the timing and sequencing of events. These approaches are usually called Dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time-dependent data. In this context, "extracting information" means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
Zimmer, Christoph
2016-01-01
Background Computational modeling is a key technique for analyzing models in systems biology. There are well-established methods for the estimation of the kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well-established approaches for experimental design and even software tools. However, data from single-cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. Methods The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study, as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for application to realistic-size models. PMID:27583802
Study on Nonlinear Vibration Analysis of Gear System with Random Parameters
NASA Astrophysics Data System (ADS)
Tong, Cao; Liu, Xiaoyuan; Fan, Li
2018-03-01
In order to study the dynamic characteristics of a gear nonlinear vibration system and the influence of random parameters, firstly, a 3-DOF nonlinear stochastic vibration model of a gear system is established based on Newton's law, and the random response of the gear vibration is simulated by a stepwise integration method. Secondly, the influence of stochastic parameters such as meshing damping, tooth-side gap and excitation frequency on the dynamic response of the gear nonlinear system is analyzed by using stability analysis methods such as the bifurcation diagram and the Lyapunov exponent method. The analysis shows that the stochastic process cannot be neglected, as it can cause random bifurcation and chaos in the system response. This study will provide an important reference for vibration engineering designers.
Stochastic volatility of the futures prices of emission allowances: A Bayesian approach
NASA Astrophysics Data System (ADS)
Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin
2017-01-01
Understanding the stochastic nature of the spot volatility of emission allowances is crucial for risk management in emissions markets. In this study, by adopting a stochastic volatility model with or without jumps to represent the dynamics of European Union Allowances (EUA) futures prices, we estimate the daily volatilities and model parameters by using the Markov Chain Monte Carlo method for stochastic volatility (SV), stochastic volatility with return jumps (SVJ) and stochastic volatility with correlated jumps (SVCJ) models. Our empirical results reveal three important features of emissions markets. First, the data presented herein suggest that EUA futures prices exhibit significant stochastic volatility. Second, the leverage effect is noticeable regardless of whether or not jumps are included. Third, the inclusion of jumps has a significant impact on the estimation of the volatility dynamics. Finally, the market becomes very volatile and large jumps occur at the beginning of a new phase. These findings are important for policy makers and regulators.
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved for by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases at a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements, and the extendibility of the method, are also investigated. The results demonstrate that this method effectively corrects the model biases and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.