A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulating conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or for classes defined by K thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing SIS and SGS is presented. A first stage re-arranges the simulation path; a second stage simulates non-conflicting nodes in parallel. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
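To make the baseline concrete, the following is a toy 1-D sequential Gaussian simulation with simple kriging (an illustrative sketch only, not the GSLIB SGSIM code or the paper's parallel implementation; the exponential covariance and neighbourhood size are assumptions). It shows the path-following structure that the path-level parallelization re-arranges: each node on the path is kriged from previously simulated neighbours, so nodes whose neighbourhoods do not conflict can, in principle, be simulated concurrently.

```python
import numpy as np

def sgs_1d(n, rng, corr_len=5.0, neigh=8):
    """Toy sequential Gaussian simulation on a 1-D grid.

    Simple kriging with an exponential covariance; each node on a random
    path is conditioned on its nearest previously simulated neighbours.
    """
    cov = lambda h: np.exp(-np.abs(h) / corr_len)
    path = rng.permutation(n)            # random simulation path
    sim = np.full(n, np.nan)
    done = []
    for node in path:
        if done:
            # nearest previously simulated nodes form the kriging neighbourhood
            nbrs = sorted(done, key=lambda j: abs(j - node))[:neigh]
            C = cov(np.subtract.outer(nbrs, nbrs))   # neighbour-neighbour covariances
            c0 = cov(np.array(nbrs) - node)          # neighbour-target covariances
            w = np.linalg.solve(C, c0)               # simple-kriging weights
            mean = w @ sim[nbrs]
            var = max(1.0 - w @ c0, 0.0)
        else:
            mean, var = 0.0, 1.0                     # first node: prior N(0, 1)
        sim[node] = mean + np.sqrt(var) * rng.standard_normal()
        done.append(node)
    return sim
```

Two nodes on the path conflict when one lies inside the other's kriging neighbourhood; the parallel strategy described above groups non-conflicting nodes so their draws can proceed simultaneously while still reproducing the serial result.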
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of parallel versions of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, GSLIB was used as the source, but the entire code was extensively modified to accommodate the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of high-resolution simulations is often prohibitive for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and prior distributions facilitates the different Markov chain Monte Carlo (MCMC) moves. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and the expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and the BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
Multiuser signal detection using sequential decoding
NASA Astrophysics Data System (ADS)
Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.
1990-05-01
The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if that user's Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained with the much more complicated optimal receiver.
Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh
2009-01-01
This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of the NDVI images demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variography, kriging and sequential Gaussian simulation of remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes, including spatial variability and heterogeneity.
Accelerating Sequential Gaussian Simulation with a constant path
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus
2018-03-01
Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
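The constant-path idea can be sketched as follows (a toy 1-D analogue under assumed exponential covariance, not the authors' implementation): the kriging systems are solved once for a fixed path, and each additional realization only re-draws the random residuals, reusing the stored weights.

```python
import numpy as np

def constant_path_weights(n, corr_len=5.0, neigh=8, seed=0):
    """Precompute simple-kriging weights for one fixed random path (toy 1-D grid).

    With a constant path the neighbourhood configuration is identical in
    every realization, so the weights are computed once and reused.
    """
    rng = np.random.default_rng(seed)
    cov = lambda h: np.exp(-np.abs(h) / corr_len)
    path = rng.permutation(n)
    done, plan = [], []
    for node in path:
        nbrs = sorted(done, key=lambda j: abs(j - node))[:neigh]
        if nbrs:
            C = cov(np.subtract.outer(nbrs, nbrs))
            c0 = cov(np.array(nbrs) - node)
            w = np.linalg.solve(C, c0)       # solved once per node, ever
            var = max(1.0 - w @ c0, 0.0)
        else:
            w, var = np.array([]), 1.0
        plan.append((node, list(nbrs), w, var))
        done.append(node)
    return plan

def realize(plan, n, rng):
    """Draw one realization reusing the stored weights -- no kriging solves."""
    sim = np.empty(n)
    for node, nbrs, w, var in plan:
        mean = w @ sim[nbrs] if nbrs else 0.0
        sim[node] = mean + np.sqrt(var) * rng.standard_normal()
    return sim
```

Since the linear solves dominate the cost of SGS, amortizing them over many realizations is where the computational gain reported above comes from; the open question the paper addresses is how much inter-realization correlation this introduces.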
Spatial interpolation of forest conditions using co-conditional geostatistical simulation
H. Todd Mowrer
2000-01-01
In recent work the author used the geostatistical Monte Carlo technique of sequential Gaussian simulation (s.G.s.) to investigate uncertainty in a GIS analysis of potential old-growth forest areas. The current study compares this earlier technique to that of co-conditional simulation, wherein the spatial cross-correlations between variables are included. As in the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirley, C.; Pohlmann, K.; Andricevic, R.
1996-09-01
Geological and geophysical data are used with the sequential indicator simulation algorithm of Gomez-Hernandez and Srivastava to produce multiple, equiprobable, three-dimensional maps of informal hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site. The upper 50 percent of the Tertiary volcanic lithostratigraphic column comprises the study volume. Semivariograms are modeled from indicator-transformed geophysical tool signals. Each equiprobable study volume is subdivided into discrete classes using the ISIM3D implementation of the sequential indicator simulation algorithm. Hydraulic conductivity is assigned within each class using the sequential Gaussian simulation method of Deutsch and Journel. The resulting maps show the contiguity of high and low hydraulic conductivity regions.
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of the cardiac conduction velocity (CCV) estimation for the sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator, when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
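Under the planar-wavefront assumption, the activation time at electrode position x is AT ≈ t0 + s·x, where s is the slowness vector and the speed is 1/|s|. A minimal sketch of this fit is below; it is a plain least-squares stand-in, not the paper's estimator, which additionally handles known per-site noise variances and unknown inter-site synchronization times.

```python
import numpy as np

def planar_ccv(xy, at):
    """Fit AT ~ t0 + s . x by least squares; xy is (n, 2), at is (n,).

    Returns conduction speed 1/|s| and the unit propagation direction.
    For i.i.d. Gaussian AT errors this coincides with maximum likelihood.
    """
    A = np.column_stack([np.ones(len(at)), xy])   # design matrix [1, x, y]
    coef, *_ = np.linalg.lstsq(A, at, rcond=None)
    slowness = coef[1:]                           # gradient of AT = slowness vector
    speed = 1.0 / np.linalg.norm(slowness)
    return speed, slowness / np.linalg.norm(slowness)
```

With noisy ATs the same fit applies; weighting each residual by the inverse noise variance (as the known-variance model above suggests) turns it into the corresponding weighted maximum-likelihood estimate.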
High energy protons generation by two sequential laser pulses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaofeng; Shen, Baifei (bfshen@mail.shcnc.ac.cn); Zhang, Xiaomei (zhxm@siom.ac.cn)
2015-04-15
The sequential acceleration of protons by two laser pulses of relativistic intensity is proposed to produce high-energy protons. In the scheme, a relativistic super-Gaussian (SG) laser pulse followed by a Laguerre-Gaussian (LG) pulse irradiates a dense plasma attached to an underdense plasma. A proton beam is produced from the target and accelerated in the radiation-pressure regime by the short SG pulse, then trapped and re-accelerated in a special bubble driven by the LG pulse in the underdense plasma. The advantages of radiation-pressure acceleration and the LG transverse structure are combined to achieve effective trapping and acceleration of protons. In a two-dimensional particle-in-cell simulation, protons of 6.7 GeV are obtained from a 2 × 10^22 W/cm^2 SG laser pulse and an LG pulse of lower peak intensity.
specsim: A Fortran-77 program for conditional spectral simulation in 3D
NASA Astrophysics Data System (ADS)
Yao, Tingting
1998-12-01
A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
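The Fourier integral method underlying specsim generates a Gaussian field by giving white noise the target amplitude spectrum. A minimal unconditional 1-D analogue is sketched below (assumed exponential covariance on a periodic grid; the program itself works in 3D and adds iterative conditioning of the phases to local data).

```python
import numpy as np

def spectral_sim_1d(n, corr_len=10.0, rng=None):
    """Unconditional Gaussian simulation by the spectral (Fourier integral)
    method on a periodic 1-D grid: random phases, target amplitude spectrum."""
    rng = rng or np.random.default_rng()
    lags = np.minimum(np.arange(n), n - np.arange(n))   # circular lags
    cov = np.exp(-lags / corr_len)                      # exponential covariance
    spec = np.clip(np.fft.fft(cov).real, 0.0, None)     # power spectrum (guard tiny negatives)
    noise = np.fft.fft(rng.standard_normal(n))          # white noise -> random phases
    # shaping the noise by sqrt(spectrum) imposes the covariance
    return np.fft.ifft(np.sqrt(spec) * noise).real
```

Each FFT costs O(n log n), which is the method's advantage over the O(k^3) kriging solves per node in sequential Gaussian simulation; the trade-off, noted in the comparison above, is the extra work needed for conditioning.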
Gibbs sampling on large lattice with GMRF
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields with category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range and GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
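The coding-set idea can be illustrated with a first-order GMRF on a torus: under a chequerboard colouring, nodes of one colour have all their neighbours in the other colour, so an entire colour class is Gibbs-updated at once, with the neighbour sums computed as a convolution. This toy sketch (untruncated case; beta and lattice size are assumptions, with beta < 0.25 keeping the precision matrix diagonally dominant) is not the authors' code.

```python
import numpy as np

def gibbs_gmrf(shape=(32, 32), beta=0.24, n_iter=50, rng=None):
    """Chequerboard Gibbs sampler for a first-order GMRF on a torus.

    Precision Q = I - beta*A (A = adjacency), so the full conditional of
    each node is N(beta * sum of its 4 neighbours, 1); one colour class
    shares no neighbours and is updated simultaneously (a coding set).
    """
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(shape)
    ii, jj = np.indices(shape)
    for _ in range(n_iter):
        for colour in (0, 1):
            mask = (ii + jj) % 2 == colour
            nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                  np.roll(x, 1, 1) + np.roll(x, -1, 1))   # neighbour sums via shifts
            x[mask] = beta * nb[mask] + rng.standard_normal(shape)[mask]
    return x
```

For the truncated case discussed above, the same simultaneous proposal would be filtered through an acceptance/rejection step so each node stays in the interval dictated by its category.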
Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes.
Karacan, C Özgen; Olea, Ricardo A
2013-04-01
Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analysis (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of the forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to those predicted by pressure transient tests.
Gstat: a program for geostatistical modelling, prediction and simulation
NASA Astrophysics Data System (ADS)
Pebesma, Edzer J.; Wesseling, Cees G.
1998-01-01
Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as the optimization algorithm to reconstruct optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
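The overall structure (forward model, analytic gradient, regularized objective, SQP solver) can be sketched with a deliberately cheap surrogate. Here the TD-RTE is replaced by an assumed exponential-decay forward model, the adjoint gradient by its hand-derived analytic gradient, and the GGMRF prior by plain Tikhonov regularization; scipy's SLSQP plays the role of the SQP solver.

```python
import numpy as np
from scipy.optimize import minimize

def solve_inverse(y_obs, lam=1e-3):
    """Toy regularized inverse problem solved with an SQP-type method.

    Forward model f(p) = p0 * exp(-p1 * t) is a stand-in for the TD-RTE;
    the analytic jac stands in for the adjoint-equation gradient.
    """
    t = np.linspace(0.0, 1.0, y_obs.size)

    def forward(p):
        return p[0] * np.exp(-p[1] * t)

    def objective(p):
        r = forward(p) - y_obs
        return 0.5 * r @ r + 0.5 * lam * p @ p      # data misfit + regularization

    def gradient(p):
        r = forward(p) - y_obs
        e = np.exp(-p[1] * t)
        return np.array([r @ e, r @ (-p[0] * t * e)]) + lam * p

    res = minimize(objective, x0=np.array([0.5, 0.5]), jac=gradient,
                   method="SLSQP", bounds=[(0.0, 10.0), (0.0, 10.0)])
    return res.x
```

The bounds mirror the physical constraint that optical parameters are non-negative; in the actual reconstruction the gradient call would run the adjoint TD-RTE solve instead of the closed-form expression.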
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying
2016-05-01
Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and receives stochastic input. In this work, a novel sequential state estimation framework is developed based on a particle filtering (PF) scheme for state estimation of a general class of nonlinear dynamical systems with stochastic input. Performance of the developed framework is first validated by simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. Next, three years of operating data from an industrial gas turbine engine (GTE) are utilized to verify the effectiveness of the developed framework. A comprehensive thermodynamic model of the GTE is therefore developed to formulate the relation between the observable parameters and the dominant degradation symptoms of the turbine, namely, loss of isentropic efficiency and increase of mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.
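A bootstrap particle filter on the classic univariate non-stationary growth model (the scalar relative of the BNGM benchmark named above) shows the propagate-weight-resample loop at the core of such a framework. This is a generic textbook sketch, not the authors' framework; noise variances and particle count are assumptions.

```python
import numpy as np

def bootstrap_pf(ys, n_part=500, rng=None):
    """Bootstrap particle filter for the univariate non-stationary growth model:
    x_k = 0.5 x + 25 x/(1+x^2) + 8 cos(1.2 k) + w,  y_k = x_k^2/20 + v."""
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(n_part)
    est = []
    for k, y in enumerate(ys, start=1):
        # propagate each particle through the (nonlinear) transition model
        x = (0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(1.2 * k)
             + np.sqrt(10.0) * rng.standard_normal(n_part))
        # weight by the Gaussian likelihood of the observation (unit variance)
        logw = -0.5 * (y - x**2 / 20.0) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # multinomial resampling to avoid weight degeneracy
        x = x[rng.choice(n_part, n_part, p=w)]
        est.append(x.mean())          # posterior-mean state estimate
    return np.array(est)
```

The squared observation makes the posterior non-Gaussian (often bimodal), which is exactly the regime where a Kalman-type filter fails and the PF's sampled representation pays off.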
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations; therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA over the whole of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF with Systematic Resampling successfully decreases the model estimation error by 23%.
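The stochastic-EnKF analysis step at the heart of such schemes, including the observation perturbation that the deterministic variants above are designed to avoid, can be written compactly. This is a generic textbook sketch with a linear observation operator, not the W3RA/GRACE implementation.

```python
import numpy as np

def enkf_update(ens, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis step.

    ens: (n_state, n_members) forecast ensemble; H: (n_obs, n_state)
    observation operator; y_obs: (n_obs,); obs_var: scalar obs-error variance.
    Each member is updated with the ensemble Kalman gain and a perturbed copy
    of the observations (the perturbation keeps the analysis spread correct).
    """
    n_obs, m = H.shape[0], ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)        # state anomalies
    HA = H @ ens
    HAp = HA - HA.mean(axis=1, keepdims=True)        # obs-space anomalies
    P_xy = A @ HAp.T / (m - 1)                       # state-obs covariance
    P_yy = HAp @ HAp.T / (m - 1) + obs_var * np.eye(n_obs)
    K = P_xy @ np.linalg.inv(P_yy)                   # ensemble Kalman gain
    y_pert = y_obs[:, None] + np.sqrt(obs_var) * rng.standard_normal((n_obs, m))
    return ens + K @ (y_pert - HA)
```

A deterministic (square-root) filter replaces the perturbed-observation step with an exact transform of the anomalies, which is the distinction the SQRA/EnSRF comparison above is probing.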
Model for wind resource analysis and for wind farm planning
NASA Astrophysics Data System (ADS)
Rozsavolgyi, K.
2008-12-01
Due to ever-increasing anthropogenic environmental pollution and worldwide energy demand, research into and exploitation of environment-friendly renewable energy sources such as wind, solar, geothermal and biomass are becoming more and more important. During the last decade wind energy utilization has developed dynamically. Over just the past seven years, annual worldwide growth in installed wind capacity has been nearly 30%, and over 94,000 MW are currently installed worldwide. Besides important economic incentives, extensive and accurate scientific results are required to support regional planning of wind farms and to find appropriate sites for optimal exploitation of this renewable energy source. This research addresses the spatial allocation of possible wind energy usage for wind farms. To carry this out, a new model (CMPAM = Complex Multifactoral Polygenetic Adaptive Model) is being developed, which is basically a wind-climate-oriented system but also considers other kinds of factors. With this model, areas and terrains can be located where construction of large wind farms would be reasonable under the given conditions. The model consists of different sub-modules, such as the wind field modelling sub-module (CMPAM/W), which is the focus of this model development procedure. The wind field modelling core of CMPAM is mainly based on sGs (sequential Gaussian simulation), hence geostatistics, but atmospheric physics and GIS are used as well. For the application developed for the test area (Hungary), WAsP visualization results from 10 m height were used as input data. These data were geocorrected (GIS geometric correction) before being used for further calculations. Using optimized variography and sequential Gaussian simulation, results were produced for the test area at different heights and summarized for each height.
Furthermore, an exponential regression function describing the vertical wind profile was also established. The following altitudes were examined: 10 m, 30 m, 60 m, 80 m, 100 m, 120 m and 140 m. With the help of the complex analyses of CMPAM, in which more than mere wind-climatic and meteorological factors are considered, detailed results have been produced for 100 m height. Results at this altitude were analyzed and explained in more detail because it proved to be the lowest height that ensures adequate wind speed for larger wind farms in the test area. Keywords: wind site assessment, wind field modeling, complex modeling for planning of wind farms, sequential Gaussian simulation, GIS, wind profile
Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Hou, Zhangshuan; Meng, Da
2016-07-17
In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. They are applied to cross-correlated load time series as well as to their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
Statistical characteristics of the sequential detection of signals in correlated noise
NASA Astrophysics Data System (ADS)
Averochkin, V. A.; Baranov, P. E.
1985-10-01
A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of deterministic and Gaussian correlated signals against a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood-ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neyman-Pearson detection.
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurements (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly within a solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. Posterior Cramér-Rao bounds are also used for performance evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schäfer, Joachim; Karpov, Evgueni; Cerf, Nicolas J.
2014-12-04
We seek a realistic implementation of multimode Gaussian entangled states that can realize the optimal encoding for quantum bosonic Gaussian channels with memory. For a Gaussian channel with classical additive Markovian correlated noise and for a lossy channel with non-Markovian correlated noise, we demonstrate their usefulness using Gaussian matrix-product states (GMPS). These states can be generated sequentially and may, in principle, approximate well any Gaussian state. We show that we can achieve up to 99.9% of the classical Gaussian capacity with GMPS requiring squeezing parameters that are reachable with current technology. This may offer a way towards an experimental realization.
Karacan, C Özgen; Olea, Ricardo A
2018-03-01
Chemical properties of coal largely determine coal handling, processing, beneficiation methods, and design of coal-fired power plants. Furthermore, these properties impact coal strength, coal blending during mining, as well as coal's gas content, which is important for mining safety. In order for these processes and quantitative predictions to be successful, safer, and economically feasible, it is important to determine and map chemical properties of coals accurately in order to infer these properties prior to mining. Ultimate analysis quantifies principal chemical elements in coal. These elements are C, H, N, S, O, and, depending on the basis, ash, and/or moisture. The basis for the data is determined by the condition of the sample at the time of analysis, with an "as-received" basis being the closest to sampling conditions and thus to the in-situ conditions of the coal. The parts determined or calculated as the result of ultimate analyses are compositions, reported in weight percent, and pose the challenges of statistical analyses of compositional data. The treatment of parts using proper compositional methods may be even more important in mapping them, as most mapping methods carry uncertainty due to partial sampling as well. In this work, we map the ultimate analyses parts of the Springfield coal from an Indiana section of the Illinois basin, USA, using sequential Gaussian simulation of isometric log-ratio transformed compositions. We compare the results with those of direct simulations of compositional parts. We also compare the implications of these approaches in calculating other properties using correlations to identify the differences and consequences. Although the study here is for coal, the methods described in the paper are applicable to any situation involving compositional data and its mapping.
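The isometric log-ratio (ilr) transform used above maps a composition into unconstrained real coordinates where Gaussian simulation is legitimate, and back-transformed results automatically honor the constant-sum constraint. A minimal sketch for 3-part compositions, using one common choice of orthonormal basis (the basis and the test values are assumptions for illustration, not taken from the paper):

```python
import math

# Orthonormal ilr basis for 3-part compositions (one common choice).
BASIS = [
    (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0),
    (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6)),
]

def ilr(x):
    """Isometric log-ratio transform of a 3-part composition."""
    logs = [math.log(xi) for xi in x]
    return [sum(e[j] * logs[j] for j in range(3)) for e in BASIS]

def ilr_inverse(z):
    """Map ilr coordinates back to a composition summing to 1."""
    clr = [sum(z[i] * BASIS[i][j] for i in range(2)) for j in range(3)]
    ex = [math.exp(c) for c in clr]
    s = sum(ex)
    return [e / s for e in ex]
```

Simulated values produced in ilr space can thus be back-transformed node by node into valid weight-percent compositions.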
[Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].
Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong
2015-11-01
With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. A sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was then employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than the unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5%, and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm⁻², ranging from 0.00 to 67.35 t·hm⁻². This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
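For intuition, constrained linear spectral unmixing has a closed form in the two-endmember case: the fraction is a least-squares projection of the pixel onto the line between the endmember spectra, clipped to [0, 1]. A hedged sketch (the endmember spectra and pixel values below are made up for illustration):

```python
def unmix_two_endmembers(pixel, a, b):
    """Fully constrained linear unmixing for two endmembers:
    pixel ≈ f*a + (1-f)*b with f in [0, 1] (least squares, then clip)."""
    num = sum((p - bb) * (aa - bb) for p, aa, bb in zip(pixel, a, b))
    den = sum((aa - bb) ** 2 for aa, bb in zip(a, b))
    f = num / den
    return min(1.0, max(0.0, f))
```

With more endmembers the same idea becomes a constrained least-squares problem solved per pixel.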
Lewicki, Jennifer L.; Bergfeld, Deborah; Cardellini, Carlo; Chiodini, Giovanni; Granieri, Domenico; Varley, Nick; Werner, Cynthia A.
2005-01-01
We present a comparative study of soil CO2 flux (FCO2) measured by five groups (Groups 1–5) at the IAVCEI-CCVG Eighth Workshop on Volcanic Gases on Masaya volcano, Nicaragua. Groups 1–5 measured FCO2 using the accumulation chamber method at 5-m spacing within a 900 m2 grid during a morning (AM) period. These measurements were repeated by Groups 1–3 during an afternoon (PM) period. Measured FCO2 ranged from 218 to 14,719 g m−2 day−1. The variability of the five measurements made at each grid point ranged from ±5 to 167%. However, the arithmetic means of fluxes measured over the entire grid and associated total CO2 emission rate estimates varied between groups by only ±22%. All three groups that made PM measurements reported an 8–19% increase in total emissions over the AM results. Based on a comparison of measurements made during AM and PM times, we argue that this change is due in large part to natural temporal variability of gas flow, rather than to measurement error. In order to estimate the mean and associated CO2 emission rate of one data set and to map the spatial FCO2 distribution, we compared six geostatistical methods: arithmetic and minimum variance unbiased estimator means of uninterpolated data, and arithmetic means of data interpolated by the multiquadric radial basis function, ordinary kriging, multi-Gaussian kriging, and sequential Gaussian simulation methods. While the total CO2 emission rates estimated using the different techniques only varied by ±4.4%, the FCO2 maps showed important differences. We suggest that the sequential Gaussian simulation method yields the most realistic representation of the spatial distribution of FCO2, but a variety of geostatistical methods are appropriate to estimate the total CO2 emission rate from a study area, which is a primary goal in volcano monitoring research.
Compensating for estimation smoothing in kriging
Olea, R.A.; Pawlowsky, Vera
1996-01-01
Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator-compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft2 slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well - better than ordinary kriging, but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. © 1996 International Association for Mathematical Geology.
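The weights behind the kriging estimators compared above come from a small linear system per target location. A minimal 1D ordinary kriging sketch (the covariance model and site locations are assumptions for illustration; the paper's compensated variant is not reproduced here):

```python
def ordinary_kriging_weights(sites, target, cov):
    """Solve the ordinary kriging system for one target location.
    `cov(h)` is a covariance function of separation distance h."""
    n = len(sites)
    # Augmented system [C 1; 1' 0] [w; mu] = [c0; 1] (mu = Lagrange mult.).
    A = [[cov(abs(sites[i] - sites[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(abs(target - s)) for s in sites] + [1.0]
    # Gaussian elimination with partial pivoting.
    m = n + 1
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x[:n]  # drop the Lagrange multiplier
```

The unbiasedness constraint forces the weights to sum to one, and a target coinciding with a data site receives all the weight (exact interpolation).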
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric
2017-12-01
This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
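Conditional on a regime sequence, the Gaussian part of such Rao-Blackwellized smoothers reduces to a Kalman filter followed by a backward pass. A scalar Rauch-Tung-Striebel sketch of that building block (the model constants are illustrative assumptions, not the paper's commodity model):

```python
def kalman_rts_smooth(ys, a=1.0, q=0.1, h=1.0, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter + Rauch-Tung-Striebel backward smoothing
    for x_t = a*x_{t-1} + N(0,q), y_t = h*x_t + N(0,r)."""
    xf, pf, xp, pp = [], [], [], []
    x, p = x0, p0
    for y in ys:
        # Predict.
        x, p = a * x, a * a * p + q
        xp.append(x); pp.append(p)
        # Update.
        k = p * h / (h * h * p + r)
        x = x + k * (y - h * x)
        p = (1 - k * h) * p
        xf.append(x); pf.append(p)
    # Backward (RTS) pass over the smoothed means.
    xs = xf[:]
    for t in range(len(ys) - 2, -1, -1):
        g = pf[t] * a / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs
```

In the Rao-Blackwellized setting this recursion runs per particle, conditional on that particle's regime history.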
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). 
We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang
2016-01-01
It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
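The GP calibration model at the core of such a procedure can be sketched as plain GP regression with an RBF kernel; the kernel choice, hyperparameters, and data below are illustrative assumptions, not the sensor model from the paper.

```python
import math

def gp_predict(xs, ys, xq, length=1.0, sig2=1.0, nugget=1e-8):
    """Gaussian-process posterior mean (RBF kernel) at query points,
    via a plain Cholesky solve of K alpha = y."""
    n = len(xs)
    k = lambda a, b: sig2 * math.exp(-0.5 * ((a - b) / length) ** 2)
    K = [[k(xs[i], xs[j]) + (nugget if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    # Cholesky factorization K = L L^T.
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][m] * L[j][m] for m in range(j))
            L[i][j] = math.sqrt(K[i][i] - s) if i == j else (K[i][j] - s) / L[j][j]
    # Forward solve L z = y, then back solve L^T alpha = z.
    z = [0.0] * n
    for i in range(n):
        z[i] = (ys[i] - sum(L[i][m] * z[m] for m in range(i))) / L[i][i]
    alpha = [0.0] * n
    for i in range(n - 1, -1, -1):
        alpha[i] = (z[i] - sum(L[m][i] * alpha[m] for m in range(i + 1, n))) / L[i][i]
    return [sum(k(q, xs[j]) * alpha[j] for j in range(n)) for q in xq]
```

The same factorization also yields the posterior variance, which is what a batch-sequential design criterion would score when picking the next calibration conditions.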
Uiberacker, Christoph; Jakubetz, Werner
2004-06-22
Using 550 previously calculated vibrational energy levels and dipole moments we performed simulations of the HCN-->HNC isomerization dynamics induced by sub-one-cycle and few-cycle IR pulses, which we represent as Gaussian pulses with 0.25-2 optical cycles in the pulse width. Starting from vibrationally pre-excited states, isomerization probabilities of up to 50% are obtained for optimized pulses. With decreasing number of optical cycles a strong dependence on the carrier-envelope phase (CEP) emerges. Although the optimized pulse parameters change significantly with the number of optical cycles, the distortion by the Gaussian envelope produces nearly equal fields, with a positive lobe followed by a negative one. The positions and areas of the lobes are also almost unchanged, irrespective of the number of cycles in the half-width. Isomerization proceeds via a pump-dumplike mechanism induced by the sequential lobes. The first lobe prepares a wave packet incorporating many delocalized states above the barrier. It is the motion of this wave packet across the barrier, which determines the timing of the pump and dump lobes. The role of the pulse parameters, and in particular of the CEP, is to produce the correct lobe sequence, size and timing within a continuous pulse. (c) 2004 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Hristopulos, Dionissios
2015-04-01
Space-time geostatistical approaches can improve the reliability of dynamic groundwater level models in areas with limited spatial and temporal data. Space-time residual Kriging (STRK) is a reliable method for spatiotemporal interpolation that can incorporate auxiliary information. The method usually leads to an underestimation of the prediction uncertainty. The uncertainty of spatiotemporal models is usually estimated by determining the space-time Kriging variance or by means of cross validation analysis. For de-trended data the former is not usually applied when complex spatiotemporal trend functions are assigned. A Bayesian approach based on the bootstrap idea and sequential Gaussian simulation are employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. These stochastic modelling approaches produce multiple realizations, rank the prediction results on the basis of specified criteria and capture the range of the uncertainty. The correlation of the spatiotemporal residuals is modeled using a non-separable space-time variogram based on the Spartan covariance family (Hristopulos and Elogne 2007, Varouchakis and Hristopulos 2013). We apply these simulation methods to investigate the uncertainty of groundwater level variations. The available dataset consists of bi-annual (dry and wet hydrological period) groundwater level measurements in 15 monitoring locations for the time period 1981 to 2010. The space-time trend function is approximated using a physical law that governs the groundwater flow in the aquifer in the presence of pumping. The main objective of this research is to compare the performance of two simulation methods for prediction uncertainty estimation. In addition, we investigate the performance of the Spartan spatiotemporal covariance function for spatiotemporal geostatistical analysis. Hristopulos, D.T. and Elogne, S.N. 2007. 
Analytic properties and covariance functions for a new class of generalized Gibbs random fields. IEEE Transactions on Information Theory, 53:4667-4467. Varouchakis, E.A. and Hristopulos, D.T. 2013. Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables. Advances in Water Resources, 52:34-49. Research supported by the project SPARTA 1591: "Development of Space-Time Random Fields based on Local Interaction Models and Applications in the Processing of Spatiotemporal Datasets". "SPARTA" is implemented under the "ARISTEIA" Action of the operational programme Education and Lifelong Learning and is co-funded by the European Social Fund (ESF) and National Resources.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
Spatiotemporal stochastic models for earth science and engineering applications
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1998-12-01
Spatiotemporal processes occur in many areas of earth sciences and engineering. However, most of the available theoretical tools and techniques of space-time data processing have been designed to operate exclusively in time or in space, and the importance of spatiotemporal variability was not fully appreciated until recently. To address this problem, a systematic framework of spatiotemporal random field (S/TRF) models for geoscience/engineering applications is presented and developed in this thesis. Space-time continuity characterization is one of the most important aspects of S/TRF modelling: the space-time continuity is displayed with experimental spatiotemporal variograms, summarized in terms of space-time continuity hypotheses, and modelled using spatiotemporal variogram functions. Permissible spatiotemporal covariance/variogram models are addressed through permissibility criteria appropriate to spatiotemporal processes. The estimation of spatiotemporal processes is developed in terms of spatiotemporal kriging techniques. Particular emphasis is given to the singularity analysis of spatiotemporal kriging systems. The impacts of covariance functions, trend forms, and data configurations on the singularity of spatiotemporal kriging systems are discussed. In addition, the tensorial invariance of universal spatiotemporal kriging systems is investigated in terms of the space-time trend. The conditional simulation of spatiotemporal processes is developed through sequential group Gaussian simulation (SGGS) techniques, a series of sequential simulation algorithms associated with different group sizes. The simulation error is analyzed for different covariance models and simulation grids. A simulated annealing technique honoring experimental variograms is also proposed, providing a way of performing conditional simulation without the covariance model fitting that is a prerequisite for most simulation algorithms.
The proposed techniques were first applied for modelling of the pressure system in a carbonate reservoir, and then applied for modelling of springwater contents in the Dyle watershed. The results of these case studies as well as the theory suggest that these techniques are realistic and feasible.
Some sequential, distribution-free pattern classification procedures with applications
NASA Technical Reports Server (NTRS)
Poage, J. L.
1971-01-01
Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.
NASA Astrophysics Data System (ADS)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts.
This code is hosted online at https://github.com/EliseJ/astroABC.
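astroABC implements an ABC Sequential Monte Carlo sampler; the even simpler rejection variant below conveys the "likelihood-free" idea. The prior range, summary statistic, and tolerance are assumptions for a toy problem, and none of this reflects astroABC's actual API.

```python
import random
import statistics

def abc_rejection(data, n_draws=20000, tol=0.1, seed=7):
    """Likelihood-free (ABC rejection) inference for the mean of a
    unit-variance Gaussian: draw mu from a flat prior, simulate a
    dataset of the same size, and keep mu whenever the simulated
    sample mean lands within `tol` of the observed one."""
    rng = random.Random(seed)
    obs = statistics.fmean(data)
    kept = []
    for _ in range(n_draws):
        mu = rng.uniform(-5.0, 5.0)                       # flat prior
        sim = statistics.fmean([rng.gauss(mu, 1.0) for _ in range(len(data))])
        if abs(sim - obs) < tol:
            kept.append(mu)
    return kept
```

An SMC sampler improves on this by shrinking the tolerance over a sequence of weighted particle populations instead of rejecting from the prior at a fixed tolerance.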
Repeat-until-success cubic phase gate for universal continuous-variable quantum computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Kevin; Pooser, Raphael; Siopsis, George
2015-03-24
To achieve universal quantum computation using continuous variables, one needs to jump out of the set of Gaussian operations and have a non-Gaussian element, such as the cubic phase gate. However, such a gate is currently very difficult to implement in practice. Here we introduce an experimentally viable "repeat-until-success" approach to generating the cubic phase gate, which is achieved using sequential photon subtractions and Gaussian operations. Ultimately, we find that our scheme offers benefits in terms of the expected time until success, as well as the fact that we do not require any complex off-line resource state, although we do require a primitive quantum memory.
Absolute judgment for one- and two-dimensional stimuli embedded in Gaussian noise
NASA Technical Reports Server (NTRS)
Kvalseth, T. O.
1977-01-01
This study examines the effect on human performance of adding Gaussian noise or disturbance to the stimuli in absolute judgment tasks involving both one- and two-dimensional stimuli. For each selected stimulus value (both an X-value and a Y-value were generated in the two-dimensional case), 10 values (or 10 pairs of values in the two-dimensional case) were generated from a zero-mean Gaussian variate, added to the selected stimulus value and then served as the coordinate values for the 10 points that were displayed sequentially on a CRT. The results show that human performance, in terms of the information transmitted and rms error as functions of stimulus uncertainty, was significantly reduced as the noise variance increased.
An Equivalent Fracture Modeling Method
NASA Astrophysics Data System (ADS)
Li, Shaohua; Zhang, Shujuan; Yu, Gaoming; Xu, Aiyun
2017-12-01
A 3D fracture network model is built from discrete fracture surfaces, which are simulated based on fracture length, dip, aperture, height, and other attributes. The area of interest in the Wumishan Formation of the Renqiu buried-hill reservoir is about 57 square kilometers, and the thickness of the target strata is more than 2000 meters. Combined with the great fracture density, the fracture simulation and the upscaling of the discrete fracture network model of the Wumishan Formation are computationally intensive. To solve this problem, an equivalent fracture modeling method is proposed. First, taking the fracture interpretation data obtained from imaging logging and conventional logging as the basic data, a reservoir level model is established; then, under the constraint of the reservoir level model and with a fault distance analysis model as the second variable, a fracture density model is established by the Sequential Gaussian Simulation method. The width, height, and length of fractures are increased while their density is decreased, so as to preserve similar porosity and permeability after upscaling the discrete fracture network model. In this way, the fracture model of the whole area of interest can be built within an acceptable time.
Huang, Biao; Zhao, Yongcun
2014-01-01
Estimating standard-exceeding probabilities of toxic metals in soil is crucial for environmental evaluation. Because soil pH and land use types have strong effects on the bioavailability of trace metals in soil, they were taken into account by some environmental protection agencies in making composite soil environmental quality standards (SEQSs) that contain multiple metal thresholds under different pH and land use conditions. This study proposed a method for estimating the standard-exceeding probability map of soil cadmium using a composite SEQS. The spatial variability and uncertainty of soil pH and site-specific land use type were incorporated through simulated realizations by sequential Gaussian simulation. A case study was conducted using a sample data set from a 150 km2 area in Wuhan City and the composite SEQS for cadmium, recently set by the State Environmental Protection Administration of China. The method may be useful for evaluating the pollution risks of trace metals in soil with composite SEQSs. PMID:24672364
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Erlebacher, G.
1994-01-01
While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem - a periodic tridiagonal solver - are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamics simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian elimination type algorithm can be used in a parallel computation, and the more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data are moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy has the algorithm pass the data across processor boundaries in a chained manner, which usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm is shown to directly transform a sequential Gaussian elimination type algorithm into the parallel, chained, load-balanced algorithm.
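The single-RHS building block that both strategies distribute is the classic Gaussian-elimination (Thomas) tridiagonal solve: factor once, then reuse the factorization for every independent right-hand side. A sequential sketch (not the paper's parallel or periodic implementation):

```python
def solve_tridiagonal(a, b, c, rhs_list):
    """Thomas algorithm for a tridiagonal system (sub-diagonal a,
    diagonal b, super-diagonal c; a[0] and c[-1] are unused),
    applied to several independent right-hand sides."""
    n = len(b)
    # Forward elimination: factor once, reuse for every RHS.
    cp = [0.0] * n
    cp[0] = c[0] / b[0]
    denom = [b[0]]
    for i in range(1, n):
        d = b[i] - a[i] * cp[i - 1]
        denom.append(d)
        cp[i] = c[i] / d if i < n - 1 else 0.0
    solutions = []
    for rhs in rhs_list:
        dp = [0.0] * n
        dp[0] = rhs[0] / b[0]
        for i in range(1, n):
            dp[i] = (rhs[i] - a[i] * dp[i - 1]) / denom[i]
        # Back substitution.
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        solutions.append(x)
    return solutions
```

Because each RHS loop is independent given the shared factorization, this inner loop is exactly what can be farmed out across processors.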
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
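The Wald test itself is compact: accumulate the log-likelihood ratio sample by sample and compare it against thresholds set by the desired error rates. The sketch below uses the textbook case of two Gaussian mean hypotheses, not the collision-probability likelihood ratio derived in the paper.

```python
import math

def wald_sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT between H0: N(mu0, sigma^2) and H1: N(mu1, sigma^2).
    alpha/beta are the target false-alarm and miss probabilities.
    Returns 'H0', 'H1', or 'continue' after consuming the samples."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for x in samples:
        # Per-sample Gaussian log-likelihood ratio log f1(x)/f0(x).
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "continue"
```

The appeal for conjunction assessment is that the test can refuse to decide until the accumulated evidence clears a threshold with known error rates.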
Stochastic Modeling of CO2 Migrations and Chemical Reactions in Deep Saline Formations
NASA Astrophysics Data System (ADS)
Ni, C.; Lee, I.; Lin, C.
2013-12-01
Carbon capture and storage (CCS) has been recognized as a feasible technology that can significantly reduce anthropogenic CO2 emissions from large point sources. CO2 injection in geological formations is one of the options for permanently storing the captured CO2. Based on this concept, a large number of target formations have been identified and intensively investigated with different types of techniques, such as hydrogeophysical experiments or numerical simulations. Numerical simulations of CO2 migration in saline formations have recently gathered much attention because a number of models are available for this purpose and potential sites exist in many countries. The lower part of the Cholan Formation (CF) near the Changhua Coastal Industrial Park (CCIP) in west central Taiwan was identified as the largest potential site for CO2 sequestration. The top elevations of the CF in this area vary from 1300 to 1700 m below sea level. Laboratory experiments showed that the permeability of the CF is 10⁻¹⁴ to 10⁻¹² m². Over the years, offshore seismic surveys and limited onshore borehole logs have provided information for the simulation of CO2 migration in the CF, although the original investigations might not have focused on CO2 sequestration. In this study we modify the TOUGHREACT model to consider the small-scale heterogeneity in the target formation and the cap rock of the upper CF. A Monte Carlo Simulation (MCS) approach based on the TOUGHREACT model is employed to quantify the effect of small-scale heterogeneity on the CO2 migration and hydrochemical reactions in the CF. We assume that the small-scale variability of permeability in the CF can be described with a known Gaussian distribution. Therefore, a Gaussian-type random field generator such as Sequential Gaussian Simulation (SGSIM) in the Geostatistical Software Library (GSLIB) can be used to provide the random permeability realizations for the MCS.
A variety of statistical parameters, such as the variances and correlation lengths in a Gaussian covariance model, are varied in the MCS, and the uncertainty of the CO2 and other chemical concentrations is evaluated based on 144 random realizations. In this study a constant injection rate of 100 Mt/year of supercritical CO2 is applied at the bottom of the CF. The continuous injection time is 20 years and the uncertainty results are evaluated at 100 years. Compared with the case without small-scale variability, the simulation results show that the CO2 plume sizes in the horizontal direction increase from tens of meters to hundreds of meters when the variances of the small-scale variability are varied from 1.0 to 4.0. Changes in the correlation lengths (i.e., from 100 m and 200 m to 400 m) contribute little to the size increases of the CO2 plumes. Other uncertainties of chemical concentrations show behaviors similar to the CO2 plume patterns.
NASA Astrophysics Data System (ADS)
Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald
2017-12-01
An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Deterministic approaches and random field methods for modelling rock mass heterogeneity are known to be limited in simulating the spatial variation and spatial pattern of geomechanical properties. Although applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models of the spatial variability of rock mass geomechanical properties using a geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of the uncertainties in the spatial variability of rock mass properties in different areas of the pit.
NASA Astrophysics Data System (ADS)
Abdel-Fattah, Mohamed I.; Metwalli, Farouk I.; Mesilhi, El Sayed I.
2018-02-01
3D static reservoir modeling of the Bahariya reservoirs using seismic and well data can be a relevant part of an overall strategy for oilfield development in the South Umbarka area (Western Desert, Egypt). The seismic data are used to build the 3D grid, including fault sticks for fault modeling and horizon interpretations and surfaces for horizon modeling. The 3D grid is the digital representation of the structural geology of the Bahariya Formation. Once a reasonably accurate structural representation is obtained, the 3D grid is filled with facies and petrophysical properties for simulation, to gain a more precise understanding of reservoir property behavior. Sequential Indicator Simulation (SIS) and Sequential Gaussian Simulation (SGS) are the stochastic algorithms used in property modeling to spatially distribute discrete reservoir properties (facies) and continuous reservoir properties (shale volume, porosity, and water saturation), respectively, within the created 3D grid. The structural model of the Bahariya Formation exhibits the trapping mechanism, a fault-assisted anticlinal closure trending NW-SE. This major fault breaks the reservoirs into two major fault blocks (North Block and South Block). Petrophysical models classify the Lower Bahariya as a moderate to good reservoir, better than the Upper Bahariya in terms of facies, with good porosity and permeability, low water saturation, and moderate net-to-gross. The Original Oil In Place (OOIP) values of the modeled Bahariya reservoirs indicate hydrocarbon accumulation in economic quantities, considering the high structural dips in the central part of the South Umbarka area. The 3D static modeling technique has provided considerable insight into the future prediction of Bahariya reservoir performance and production behavior.
Gaussian vs non-Gaussian turbulence: impact on wind turbine loads
NASA Astrophysics Data System (ADS)
Berg, J.; Mann, J.; Natarajan, A.; Patton, E. G.
2014-12-01
In wind energy applications the turbulent velocity field of the Atmospheric Boundary Layer (ABL) is often characterised by Gaussian probability density functions. When estimating the dynamical loads on wind turbines this has been the rule more than anything else. From numerous studies in the laboratory, in Direct Numerical Simulations, and from in-situ measurements of the ABL we know, however, that turbulence is not purely Gaussian: the smallest and fastest scales often exhibit extreme behaviour characterised by strong non-Gaussian statistics. In this contribution we investigate whether these non-Gaussian effects are important when determining wind turbine loads, and hence of utmost importance to the design criteria and lifetime of a wind turbine. We devise a method based on Proper Orthogonal Decomposition in which non-Gaussian velocity fields generated by high-resolution pseudo-spectral Large-Eddy Simulation (LES) of the ABL are transformed so that they maintain exactly the same second-order statistics, including variations of the statistics with height, but are otherwise Gaussian. In that way we can investigate in isolation whether it is important for wind turbine loads to include non-Gaussian properties of atmospheric turbulence. As an illustration, the figure shows both a non-Gaussian velocity field (left) from our LES and its transformed Gaussian counterpart (right). Whereas the horizontal velocity components (top) look close to identical, the vertical components (bottom) do not: the non-Gaussian case is much more fluid-like (as in a sketch by Michelangelo). The question is then: does the wind turbine see this? Using the load simulation software HAWC2 with both the non-Gaussian and the newly constructed Gaussian fields, we show that the fatigue loads and most of the extreme loads are unaltered when using non-Gaussian velocity fields. The turbine thus acts like a low-pass filter which averages out the non-Gaussian behaviour on time scales close to and faster than the revolution time of the turbine. For a few of the extreme load estimations there is, on the other hand, a tendency for non-Gaussian effects to increase the overall dynamical load, which can therefore be of importance in wind energy load estimations.
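The core idea above, replacing a non-Gaussian ensemble with a Gaussian one that shares exactly the same second-order statistics, can be sketched without the POD machinery of the paper. Here a squared-Gaussian "velocity" ensemble is a toy stand-in for the LES fields; it is strongly skewed, while its Gaussian counterpart matches its sample covariance by construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble of non-Gaussian snapshots (n_samples x n_points):
# squared-Gaussian noise, shifted to zero mean, is strongly skewed.
n_samples, n_points = 5000, 32
g = rng.standard_normal((n_samples, n_points))
u = g**2 - 1.0                                    # zero-mean, non-Gaussian

# Gaussian counterpart with the same second-order statistics:
C = np.cov(u, rowvar=False)                       # sample covariance of u
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_points))
u_gauss = rng.standard_normal((n_samples, n_points)) @ L.T

def skewness(a):
    """Per-column sample skewness (zero for a Gaussian field)."""
    return np.mean(((a - a.mean(0)) / a.std(0)) ** 3, axis=0)
```

The transformed ensemble `u_gauss` reproduces the covariance of `u` (up to sampling error) but its skewness collapses to near zero, which is the property the load comparison relies on.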
Accounting for aquifer heterogeneity from geological data to management tools.
Blouin, Martin; Martel, Richard; Gloaguen, Erwan
2013-01-01
A nested workflow of multiple-point geostatistics (MPG) and sequential Gaussian simulation (SGS) was tested on a study area of 6 km² located about 20 km northwest of Quebec City, Canada. To assess its geological and hydrogeological parameter heterogeneity and to provide tools for evaluating uncertainties in aquifer management, direct and indirect field measurements are used as inputs to the geostatistical simulations to reproduce large- and small-scale heterogeneities. To do so, the lithological information is first associated with equivalent hydrogeological facies (hydrofacies) according to hydraulic properties measured at several wells. Then, heterogeneous hydrofacies (HF) realizations are generated with the MPG algorithm using a prior geological model as the training image (TI). The hydraulic conductivity (K) heterogeneity within each HF is finally modeled using the SGS algorithm. The different K models are integrated in a finite-element hydrogeological model to calculate multiple transport simulations. The different scenarios exhibit variations in mass transport path and dispersion associated with the large- and small-scale heterogeneity, respectively. Three-dimensional maps showing the probability of exceeding different thresholds are presented as examples of management tools. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Mayer, J. M.; Stead, D.
2017-04-01
With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherently heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach known as sequential Gaussian simulation. The method utilizes the variogram, which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared with geostatistical approaches, as they fail to account for the spatial dependencies inherent in geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared with the geostatistical results.
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function together with the estimated mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
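The simulation side of this analysis can be sketched as follows: generate a stationary Gaussian process with a chosen autocorrelation and count overshoots of a level. An AR(1) model is used here as one convenient choice of autocorrelation function (not necessarily the ones used in the report), and the levels are illustrative.

```python
import numpy as np

def simulate_ar1(n, phi, sigma, rng):
    """Stationary Gaussian AR(1) process: x[t] = phi*x[t-1] + e[t],
    with autocorrelation phi**|k| and marginal variance sigma**2."""
    e = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), n)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def count_overshoots(x, level):
    """Number of upcrossings of `level` (start of each excursion above it)."""
    above = x > level
    return int(np.sum(~above[:-1] & above[1:]))

rng = np.random.default_rng(42)
x = simulate_ar1(200_000, phi=0.9, sigma=1.0, rng=rng)
n1 = count_overshoots(x, 1.0)   # overshoots of one standard deviation
n2 = count_overshoots(x, 2.0)   # overshoots of two standard deviations
```

Tabulating such counts across levels and realizations gives the empirical frequency distribution of overshoots as a function of the process mean and variance.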
Poly-Gaussian model of randomly rough surface in rarefied gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksenova, Olga A.; Khalidov, Iskander A.
2014-12-09
Surface roughness is simulated by a model of a non-Gaussian random process. Our results for the scattering of rarefied gas atoms from a rough surface, using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.
Toward statistical modeling of saccadic eye-movement and visual saliency.
Sun, Xiaoshuai; Yao, Hongxun; Ji, Rongrong; Liu, Xian-Ming
2014-11-01
In this paper, we present a unified statistical framework for modeling both saccadic eye movements and visual saliency. By analyzing the statistical properties of human eye fixations on natural images, we found that human attention is sparsely distributed and usually deployed to locations with abundant structural information. These observations inspired us to model saccadic behavior and visual saliency based on super-Gaussian component (SGC) analysis. Our model sequentially obtains SGCs using projection pursuit and generates eye movements by selecting the location with maximum SGC response. Besides simulating human saccadic behavior, we also demonstrate the superior effectiveness and robustness of our approach over the state of the art by carrying out extensive experiments on synthetic patterns and human eye fixation benchmarks. Multiple key issues in saliency modeling research, such as individual differences and the effects of scale and blur, are explored in this paper. Based on extensive qualitative and quantitative experimental results, we show the promising potential of statistical approaches for human behavior research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zentner, I.; Ferré, G., E-mail: gregoire.ferre@ponts.org; Poirion, F.
2016-06-01
In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aimed at representing spatio-temporal stochastic fields. The proposed double expansion allows the model to be built even for large-size problems by separating the time, space and random parts of the field. A Gaussian kernel estimator is used to simulate the high-dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).
Cameron, Donnie; Bouhrara, Mustapha; Reiter, David A; Fishbein, Kenneth W; Choi, Seongjin; Bergeron, Christopher M; Ferrucci, Luigi; Spencer, Richard G
2017-07-01
This work characterizes the effect of lipid and noise signals on muscle diffusion parameter estimation in several conventional and non-Gaussian models, the ultimate objectives being to characterize popular fat suppression approaches for human muscle diffusion studies, to provide simulations to inform experimental work and to report normative non-Gaussian parameter values. The models investigated in this work were the Gaussian monoexponential and intravoxel incoherent motion (IVIM) models, and the non-Gaussian kurtosis and stretched exponential models. These were evaluated via simulations, and in vitro and in vivo experiments. Simulations were performed using literature input values, modeling fat contamination as an additive baseline to data, whereas phantom studies used a phantom containing aliphatic and olefinic fats and muscle-like gel. Human imaging was performed in the hamstring muscles of 10 volunteers. Diffusion-weighted imaging was applied with spectral attenuated inversion recovery (SPAIR), slice-select gradient reversal and water-specific excitation fat suppression, alone and in combination. Measurement bias (accuracy) and dispersion (precision) were evaluated, together with intra- and inter-scan repeatability. Simulations indicated that noise in magnitude images resulted in <6% bias in diffusion coefficients and non-Gaussian parameters (α, K), whereas baseline fitting minimized fat bias for all models, except IVIM. In vivo, popular SPAIR fat suppression proved inadequate for accurate parameter estimation, producing non-physiological parameter estimates without baseline fitting and large biases when it was used. Combining all three fat suppression techniques and fitting data with a baseline offset gave the best results of all the methods studied for both Gaussian diffusion and, overall, for non-Gaussian diffusion. 
It produced consistent parameter estimates for all models, except IVIM, and highlighted non-Gaussian behavior perpendicular to muscle fibers (α ~ 0.95, K ~ 3.1). These results show that effective fat suppression is crucial for accurate measurement of non-Gaussian diffusion parameters, and will be an essential component of quantitative studies of human muscle quality. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
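The stretched exponential model referred to above can be illustrated by fitting synthetic diffusion-weighted signals, S(b) = S0 * exp(-(b*DDC)**alpha). The b-value protocol, noise level, and "muscle-like" parameter values below are assumptions for this sketch, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, alpha):
    """Stretched-exponential diffusion model: S(b) = S0 * exp(-(b*DDC)**alpha)."""
    return s0 * np.exp(-(b * ddc) ** alpha)

# Synthetic acquisition: 17 b-values (s/mm^2), small Gaussian noise (illustrative).
b = np.linspace(0.0, 800.0, 17)
rng = np.random.default_rng(7)
true = dict(s0=1.0, ddc=1.5e-3, alpha=0.95)
signal = stretched_exp(b, **true) + rng.normal(0.0, 0.005, b.size)

# Bounded least-squares fit; alpha <= 1 by definition of the model.
popt, _ = curve_fit(stretched_exp, b, signal,
                    p0=[1.0, 1e-3, 0.9],
                    bounds=([0.5, 1e-4, 0.5], [1.5, 5e-3, 1.0]))
s0_fit, ddc_fit, alpha_fit = popt
```

With an additive baseline term for fat contamination (as the study advocates), the same fitting routine extends to the biased case; here only the clean-signal fit is shown.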
Propagation of elliptic-Gaussian beams in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Deng, Dongmei; Guo, Qi
2011-10-01
The propagation of elliptic-Gaussian beams in strongly nonlocal nonlinear media is studied. The elliptic-Gaussian beams and elliptic-Gaussian vortex beams are obtained analytically and numerically. The patterns of the elegant Ince-Gaussian and the generalized Ince-Gaussian beams vary periodically when the input power is equal to the critical power. The stability is verified by perturbing the initial beam with noise. By simulating the propagation of elliptic-Gaussian beams in liquid crystal, we find that when the mode order is not too large, quasi-elliptic-Gaussian soliton states exist.
Huh, Joonsuk; Yung, Man-Hong
2017-08-07
Molecular vibronic spectroscopy, where the transitions involve non-trivial bosonic correlations due to the Duschinsky rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to the various versions of Boson Sampling that share similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure which illustrates the relationship among the various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.
Vilseck, Jonah Z.; Kostal, Jakub; Tirado-Rives, Julian; Jorgensen, William L.
2015-01-01
Hybrid quantum mechanics and molecular mechanics (QM/MM) computer simulations have become an indispensable tool for studying chemical and biological phenomena for systems too large to treat with quantum mechanics alone. For several decades, semi-empirical QM methods have been used in QM/MM simulations. However, with increased computational resources, the introduction of ab initio and density functional methods into on-the-fly QM/MM simulations is being increasingly preferred. This adaptation can be accomplished with a program interface that tethers independent QM and MM software packages. This report introduces such an interface for the BOSS and Gaussian programs, featuring modification of BOSS to request QM energies and partial atomic charges from Gaussian. A customizable C-shell linker script facilitates the inter-program communication. The BOSS–Gaussian interface also provides convenient access to Charge Model 5 (CM5) partial atomic charges for multiple purposes including QM/MM studies of reactions. In this report, the BOSS–Gaussian interface is applied to a nitroaldol (Henry) reaction and two methyl transfer reactions in aqueous solution. Improved agreement with experiment is found by determining free-energy surfaces with MP2/CM5 QM/MM simulations than previously reported investigations employing semiempirical methods. PMID:26311531
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified ASD is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established, based on the cumulative distribution function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform into a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD; the method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
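The two-step procedure described above can be sketched compactly: synthesize a Gaussian time history with a target spectral shape (random phases on prescribed spectral amplitudes), then apply the monotonic CDF-to-CDF mapping to reach the desired marginal distribution. The band-limited flat ASD and the Laplace (heavy-tailed) target pdf below are illustrative choices, not those of the report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, fs = 2**14, 1024.0

# Step 1: Gaussian time history with a specified (band-limited flat) ASD,
# synthesized by assigning random phases to the target spectral amplitudes.
freqs = np.fft.rfftfreq(n, 1.0 / fs)
amp = np.where((freqs > 20.0) & (freqs < 200.0), 1.0, 0.0)   # target shape
spectrum = amp * np.exp(1j * rng.uniform(0.0, 2 * np.pi, freqs.size))
x = np.fft.irfft(spectrum, n)
x /= x.std()                          # normalize to unit variance

# Step 2: monotonic CDF mapping onto a heavier-tailed target distribution.
u = stats.norm.cdf(x)                                # Gaussian CDF of x
y = stats.laplace.ppf(u, scale=1.0 / np.sqrt(2.0))   # unit-variance Laplace
```

Because the mapping is monotonic, zero-crossings and peak locations of `x` are preserved in `y`, so the ASD is only mildly distorted while the marginal pdf becomes non-Gaussian (the Laplace target raises the kurtosis from about 3 toward 6).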
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, E.; Madigan, M.
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the likelihood is intractable or unknown. The ABC method is called "likelihood free" as it avoids explicit evaluation of the likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind, astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimation using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel; output and restart files backed up every iteration; user defined metric and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
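The "likelihood-free" idea behind ABC can be illustrated with a bare-bones rejection sampler; this is not astroABC's API (astroABC implements the far more efficient SMC variant with adaptive tolerances), and the simulator, prior, and summary-statistic distance below are toy assumptions.

```python
import numpy as np

def simulator(theta, rng, n=100):
    """Forward model: data drawn as N(theta, 1); stands in for any
    black-box simulation that may include systematics."""
    return rng.normal(theta, 1.0, n)

def abc_rejection(observed, prior_draw, distance, eps, n_accept, rng):
    """Basic rejection ABC: keep prior draws whose simulated data land
    within tolerance eps of the observed data, without ever evaluating
    a likelihood."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        if distance(simulator(theta, rng), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(3)
observed = simulator(2.0, rng)                       # "data", true theta = 2
post = abc_rejection(
    observed,
    prior_draw=lambda r: r.uniform(-5.0, 5.0),       # flat prior on theta
    distance=lambda a, b: abs(a.mean() - b.mean()),  # summary-statistic distance
    eps=0.2, n_accept=200, rng=rng)
```

An SMC sampler like astroABC's replaces this single fixed tolerance with a decreasing sequence of tolerance levels, reweighting and perturbing the accepted particles at each stage.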
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kong, Bo; Fox, Rodney O.; Feng, Heng
An Euler–Euler anisotropic Gaussian approach (EE-AG) for simulating gas–particle flows, in which particle velocities are assumed to follow a multivariate anisotropic Gaussian distribution, is used to perform mesoscale simulations of homogeneous cluster-induced turbulence (CIT). A three-dimensional Gauss–Hermite quadrature formulation is used to calculate the kinetic flux for 10 velocity moments in a finite-volume framework. The particle-phase volume-fraction and momentum equations are coupled with the Eulerian solver for the gas phase. This approach is implemented in an open-source CFD package, OpenFOAM, and detailed simulation results are compared with previous Euler–Lagrange simulations in a domain size study of CIT. These results demonstrate that the proposed EE-AG methodology is able to produce results comparable to EL simulations, and this moment-based methodology can be used to perform accurate mesoscale simulations of dilute gas–particle flows.
Simulations of Gaussian electron guns for RHIC electron lens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pikin, A.
Simulations of two versions of the electron gun for the RHIC electron lens are presented. The electron guns have to generate an electron beam with a Gaussian radial profile of electron beam density. To achieve the Gaussian electron emission profile on the cathode, we used a combination of gun electrodes and shaping of the cathode surface. The dependence of electron gun performance parameters on the geometry of the electrodes and the margins for electrode positioning are presented.
NASA Astrophysics Data System (ADS)
Gholizadeh Doonechaly, N.; Rahman, S. S.
2012-05-01
Simulation of naturally fractured reservoirs offers significant challenges due to the lack of a methodology that can fully utilize field data. To date, several methods have been proposed to characterize naturally fractured reservoirs. Among them is the unfolding/folding method, which offers some degree of accuracy in estimating the probability of the existence of fractures in a reservoir. There are also statistical approaches which integrate all levels of field data to simulate the fracture network. This approach, however, depends on the availability of data sources, such as seismic attributes, core descriptions, and well logs, which are often difficult to obtain field-wide. In this study a hybrid tectono-stochastic simulation is proposed to characterize a naturally fractured reservoir. A finite element based model is used to simulate the tectonic event of folding and unfolding of a geological structure. A nested neuro-stochastic technique is used to develop the inter-relationship between the data, and at the same time it utilizes the sequential Gaussian approach to analyze field data along with fracture probability data. This approach can overcome the commonly experienced discontinuity of data in both horizontal and vertical directions. The hybrid technique is used to generate a discrete fracture network of a specific Australian gas reservoir, Palm Valley in the Northern Territory. The results of this study are of significant benefit in accurately describing fluid flow simulation and well placement for maximal hydrocarbon recovery.
Zhou, Nan; Wang, Jian
2018-05-23
Bessel-Gaussian beams have the distinct properties of suppressed diffraction divergence and self-reconstruction. In this paper, we propose and simulate a metasurface-assisted orbital angular momentum (OAM) carrying Bessel-Gaussian laser. The laser can be regarded as a Fabry-Perot cavity formed by one partially transparent output plane mirror and the other metasurface-based reflector mirror. The gain medium of Nd:YVO4 enables lasing at 1064 nm with an 808 nm laser serving as the pump. The sub-wavelength structure of the metasurface facilitates flexible spatial light manipulation. The compact metasurface-based reflector provides the combined phase functions of an axicon and a spherical mirror. By appropriately selecting the size of the output mirror and inserting a mode-selection element in the laser cavity, different orders of OAM-carrying Bessel-Gaussian lasing modes are achievable. The lasing Bessel-Gaussian₀, Bessel-Gaussian₀₁⁺, Bessel-Gaussian₀₂⁺ and Bessel-Gaussian₀₃⁺ modes have high fidelities of ~0.889, ~0.889, ~0.881 and ~0.879, respectively. The metasurface fabrication tolerance and the dependence of threshold power and output lasing power on the length of the gain medium, the beam radius of the pump and the transmittance of the output mirror are also discussed. The obtained results show successful implementation of a metasurface-assisted OAM-carrying Bessel-Gaussian laser with favorable performance. Such a laser may find wide use in OAM-enabled communication and non-communication applications.
PHYSICS OF NON-GAUSSIAN FIELDS AND THE COSMOLOGICAL GENUS STATISTIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, J. Berian, E-mail: berian@berkeley.edu
2012-05-20
We report a technique to calculate the impact of distinct physical processes inducing non-Gaussianity on the cosmological density field. A natural decomposition of the cosmic genus statistic into an orthogonal polynomial sequence allows complete expression of the scale-dependent evolution of the topology of large-scale structure, in which effects including galaxy bias, nonlinear gravitational evolution, and primordial non-Gaussianity may be delineated. The relationship of this decomposition to previous methods for analyzing the genus statistic is briefly considered and the following applications are made: (1) the expression of certain systematics affecting topological measurements, (2) the quantification of broad deformations from Gaussianity that appear in the genus statistic as measured in the Horizon Run simulation, and (3) the study of the evolution of the genus curve for simulations with primordial non-Gaussianity. These advances improve the treatment of flux-limited galaxy catalogs for use with this measurement and further the use of the genus statistic as a tool for exploring non-Gaussianity.
Sequential and parallel image restoration: neural network implementations.
Figueiredo, M T; Leitao, J N
1994-01-01
Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
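The shared MAP/regularization objective the abstract refers to can be written, for circular blur H, as minimizing ||H x - y||^2 + lam*||x||^2. The sketch below minimizes it by plain gradient descent in 1D; it is not the Hopfield-network implementation of the paper, and the blur kernel, regularization weight, and step size are illustrative assumptions.

```python
import numpy as np

def restore(y, kernel, lam=1e-3, lr=0.4, iters=500):
    """Minimize ||H x - y||^2 + lam * ||x||^2, where H is circular
    convolution with `kernel`, by gradient descent. This is the convex
    MAP / regularization objective for linear blur plus Gaussian noise."""
    Hf = np.fft.fft(kernel, y.size)         # operator H in the Fourier domain
    Yf = np.fft.fft(y)
    x = np.zeros_like(y)
    for _ in range(iters):
        Rf = Hf * np.fft.fft(x) - Yf        # residual H x - y (Fourier side)
        grad = 2.0 * np.real(np.fft.ifft(np.conj(Hf) * Rf)) + 2.0 * lam * x
        x -= lr * grad
    return x

# Degrade a box signal with a mild low-pass blur, then restore it.
x_true = np.zeros(64)
x_true[24:40] = 1.0
kernel = np.array([0.25, 0.5, 0.25])        # illustrative blur kernel
Hf = np.fft.fft(kernel, x_true.size)
y = np.real(np.fft.ifft(Hf * np.fft.fft(x_true)))
x_hat = restore(y, kernel)
```

The paper's contribution is to realize such convergent iterative minimizations on modified Hopfield networks with sequential or parallel updates, rather than the direct Fourier-domain iteration used in this toy version.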
Response of a tethered aerostat to simulated turbulence
NASA Astrophysics Data System (ADS)
Stanney, Keith A.; Rahn, Christopher D.
2006-09-01
Aerostats are lighter-than-air vehicles tethered to the ground by a cable and used for broadcasting, communications, surveillance, and drug interdiction. The dynamic response of tethered aerostats subject to extreme atmospheric turbulence often dictates survivability. This paper develops a theoretical model that predicts the planar response of a tethered aerostat subject to atmospheric turbulence and simulates the response to 1000 simulated hurricane-scale turbulent time histories. The aerostat dynamic model assumes the aerostat hull to be a rigid body with non-linear fluid loading, instantaneous weathervaning for planar response, and a continuous tether. Galerkin's method discretizes the coupled aerostat and tether partial differential equations to produce a non-linear initial value problem that is integrated numerically given initial conditions and wind inputs. The proper orthogonal decomposition theorem generates, based on Hurricane Georges wind data, turbulent time histories that possess the sequential behavior of actual turbulence, are spectrally accurate, and have non-Gaussian density functions. The generated turbulent time histories are simulated to predict the aerostat response to severe turbulence. The resulting probability distributions for the aerostat position, pitch angle, and confluence point tension predict the aerostat behavior in high gust environments. The dynamic results can be up to twice as large as those of a static analysis, indicating the importance of dynamics in aerostat modeling. The results uncover a worst case wind input consisting of a two-pulse vertical gust.
Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh
2011-06-01
This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
Dynamical Casimir Effect for Gaussian Boson Sampling.
Peropadre, Borja; Huh, Joonsuk; Sabín, Carlos
2018-02-28
We show that the Dynamical Casimir Effect (DCE), realized on two multimode coplanar waveguide resonators, implements a Gaussian boson sampler (GBS). The appropriate choice of the mirror acceleration that couples both resonators translates into the desired initial Gaussian state and many-boson interference in a boson sampling network. In particular, we show that the proposed quantum simulator naturally performs a classically hard task, known as scattershot boson sampling. Our result unveils an unprecedented computational power of DCE, and paves the way for using DCE as a resource for quantum simulation.
Statistical description of turbulent transport for flux driven toroidal plasmas
NASA Astrophysics Data System (ADS)
Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.
2017-06-01
A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.
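A simplified version of this workflow can be sketched in a few lines: fit an autoregressive trend to the time series, then inspect the residual PDF tails. This is an illustrative stand-in for the paper's full ARIMA treatment; the AR(1) coefficient and the heavy-tailed innovation distribution are assumed, not taken from the gyrokinetic data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
phi = 0.7                                  # assumed AR(1) coefficient
eps = rng.standard_t(df=5, size=n)         # heavy-tailed innovations (intermittency stand-in)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Fit the AR(1) trend by least squares, then examine the residual distribution.
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi_hat * x[:-1]

def kurtosis(z):
    z = z - z.mean()
    return np.mean(z**4) / np.mean(z**2) ** 2

print(abs(phi_hat - phi) < 0.1)   # trend coefficient recovered
print(kurtosis(resid) > 4.0)      # residual tails are non-Gaussian (Gaussian -> 3)
```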
Mercedes Berterretche; Andrew T. Hudak; Warren B. Cohen; Thomas K. Maiersperger; Stith T. Gower; Jennifer Dungan
2005-01-01
This study compared aspatial and spatial methods of using remote sensing and field data to predict maximum growing season leaf area index (LAI) maps in a boreal forest in Manitoba, Canada. The methods tested were orthogonal regression analysis (reduced major axis, RMA) and two geostatistical techniques: kriging with an external drift (KED) and sequential Gaussian...
Observation Uncertainty in Gaussian Sensor Networks
2006-01-23
Messaoudi, Noureddine; Bekka, Raïs El'hadi; Ravier, Philippe; Harba, Rachid
2017-02-01
The purpose of this paper was to evaluate the effects of the longitudinal single differential (LSD), the longitudinal double differential (LDD) and the normal double differential (NDD) spatial filters, the electrode shape, and the inter-electrode distance (IED) on the non-Gaussianity and non-linearity levels of simulated surface EMG (sEMG) signals when the maximum voluntary contraction (MVC) varied from 10% to 100% in steps of 10%. The effects of recruitment range thresholds (RR), the firing rate (FR) strategy and the peak firing rate (PFR) of motor units were also considered. A cylindrical multilayer model of the volume conductor and a model of motor unit (MU) recruitment and firing rate were used to simulate sEMG signals in a pool of 120 MUs for 5 s. First, the stationarity of the sEMG signals was tested by the runs, the reverse arrangements (RA) and the modified reverse arrangements (MRA) tests. Then the non-Gaussianity was characterised with bicoherence and kurtosis, and the non-linearity level was evaluated with a linearity test. The kurtosis analysis showed that the sEMG signals detected by the LSD filter were the most Gaussian and those detected by the NDD filter were the least Gaussian. In addition, the sEMG signals detected by the LSD filter were the most linear. For a given filter, the sEMG signals detected by using rectangular electrodes were more Gaussian and more linear than those detected with circular electrodes. Moreover, the sEMG signals were less non-Gaussian and more linear with the reverse onion-skin firing rate strategy than with the onion-skin strategy. The levels of sEMG signal Gaussianity and linearity increased with the increase of the IED, RR and PFR. Copyright © 2016 Elsevier Ltd. All rights reserved.
Martin, Daniel R; Matyushov, Dmitry V
2012-08-30
We show that electrostatic fluctuations of the protein-water interface are globally non-Gaussian. The electrostatic component of the optical transition energy (energy gap) in a hydrated green fluorescent protein is studied here by classical molecular dynamics simulations. The distribution of the energy gap displays a high excess in the breadth of electrostatic fluctuations over the prediction of the Gaussian statistics. The energy gap dynamics include a nanosecond component. When simulations are repeated with frozen protein motions, the statistics shifts to the expectations of linear response and the slow dynamics disappear. We therefore suggest that both the non-Gaussian statistics and the nanosecond dynamics originate largely from global, low-frequency motions of the protein coupled to the interfacial water. The non-Gaussian statistics can be experimentally verified from the temperature dependence of the first two spectral moments measured at constant-volume conditions. Simulations at different temperatures are consistent with other indicators of the non-Gaussian statistics. In particular, the high-temperature part of the energy gap variance (second spectral moment) scales linearly with temperature and extrapolates to zero at a temperature characteristic of the protein glass transition. This result, violating the classical limit of the fluctuation-dissipation theorem, leads to a non-Boltzmann statistics of the energy gap and corresponding non-Arrhenius kinetics of radiationless electronic transitions, empirically described by the Vogel-Fulcher-Tammann law.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
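A minimal 1D sketch of the two-stage transform described above is shown below; the low-pass spectral filter and the exponential target distribution are illustrative choices, and the 2D case in the paper proceeds analogously.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
n = 4096
white = rng.standard_normal(n)

# Stage 1: colour the white Gaussian noise to an assumed low-pass PSD.
f = np.fft.rfftfreq(n, d=1.0)
H = 1.0 / np.sqrt(1.0 + (f / 0.05) ** 2)           # illustrative spectral filter
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored = (colored - colored.mean()) / colored.std()

# Stage 2: map the Gaussian marginal to the target (here exponential)
# via Gaussian CDF -> uniform -> inverse target CDF.
u = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in colored])
u = np.clip(u, 1e-12, 1.0 - 1e-12)
target = -np.log(1.0 - u)                          # exponential marginal, mean 1

print(target.min() >= 0.0)                         # correct support for the target law
```

The monotone marginal transform preserves the rank correlation imposed in stage 1, which is why the colored spectrum survives the change of amplitude distribution.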
NASA Astrophysics Data System (ADS)
Sandoz, J.-P.; Steenaart, W.
1984-12-01
The nonuniform sampling digital phase-locked loop (DPLL) with sequential loop filter, in which the correction sizes are controlled by the accumulated differences of two additional phase comparators, is graphically analyzed. In the absence of noise and frequency drift, the analysis gives some physical insight into the acquisition and tracking behavior. Taking noise into account, a mathematical model is derived and a random walk technique is applied to evaluate the rms phase error and the mean acquisition time. Experimental results confirm the appropriate simplifying hypotheses used in the numerical analysis. Two related performance measures defined in terms of the rms phase error and the acquisition time for a given SNR are used. These measures provide a common basis for comparing different digital loops and, to a limited extent, also with a first-order linear loop. Finally, the behavior of a modified DPLL under frequency deviation in the presence of Gaussian noise is tested experimentally and by computer simulation.
NASA Astrophysics Data System (ADS)
Chen, Qingfa; Zhao, Fuyu; Chen, Qinglin; Wang, Yuding; Zhong, Yu; Niu, Wenjing
2017-12-01
A study on the flow characteristics of ore and the factors that influence these characteristics is important for mastering ore flow laws. An orthogonal ore-drawing numerical model was established and the flow characteristics were explored. A weight matrix was obtained and the effect of the factors was determined. It was found that (1) the entire isolation-layer interface presents a Gaussian curve morphology and marked particles in each layer show a funnel morphology; (2) the drawing amount, Q, and the isolation layer half-width, W, are correlated positively with the fall depth, H, of the isolation layer; (3) in decreasing order of influence, the factors that affect the characteristics are the particle friction coefficient, the interface friction coefficient, the isolation layer thickness, and the particle radius; and (4) the optimal combination is an isolation layer thickness of 0.005 m, an interface friction coefficient of 0.8, a particle friction coefficient of 0.2, and a particle radius of 0.007 m.
Multiple ionization of neon by soft x-rays at ultrahigh intensity
NASA Astrophysics Data System (ADS)
Guichard, R.; Richter, M.; Rost, J.-M.; Saalmann, U.; Sorokin, A. A.; Tiedtke, K.
2013-08-01
At the free-electron laser FLASH, multiple ionization of neon atoms was quantitatively investigated at photon energies of 93.0 and 90.5 eV. For ion charge states up to 6+, we compare the respective absolute photoionization yields with results from a minimal model and an elaborate description including standard sequential and direct photoionization channels. Both approaches are based on rate equations and take into account a Gaussian spatial intensity distribution of the laser beam. From the comparison we conclude that photoionization up to a charge of 5+ can be described by the minimal model which we interpret as sequential photoionization assisted by electron shake-up processes. For higher charges, the experimental ionization yields systematically exceed the elaborate rate-based prediction.
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment, E(R^m), is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E(R^m) for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
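As a hedged illustration of the quantities involved, the sketch below simulates a narrow-band Gaussian process with an AR(2) model and extracts its extrema, from which rainflow-style range moments would be computed. The AR coefficients and moment order are illustrative, not from the thesis, and this is the full-process route that the thesis's direct-extrema synthesis is designed to shortcut.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
# AR(2) with complex poles -> unimodal, narrow-band PSD (illustrative parameters)
a1, a2 = 1.8, -0.9
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + eps[t]

# Extract the extrema (alternating peaks and troughs) via sign changes of the slope.
d = np.diff(x)
idx = np.where(d[:-1] * d[1:] < 0)[0] + 1
extrema = x[idx]

# Successive extrema define the stress ranges entering the moment E(R^m).
ranges = np.abs(np.diff(extrema))
m = 3
E_Rm = np.mean(ranges ** m)
print(len(extrema) > 100, E_Rm > 0.0)
```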
Stochastic uncertainty analysis for unconfined flow systems
Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming
2006-01-01
A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (ln K_S) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln K_S. Next, head h is decomposed as a perturbation expansion series Σ h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln K_S. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on h^(m)_{i1,i2,...,im}. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort than the traditional Monte Carlo simulation technique.
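On a discrete grid, the Karhunen-Loeve step amounts to an eigendecomposition of the covariance matrix. A minimal 1D sketch follows; the exponential covariance, variance, correlation length, and truncation order are illustrative, not the paper's aquifer statistics.

```python
import numpy as np

n = 50
xg = np.linspace(0.0, 1.0, n)
sigma2, corr_len = 1.0, 0.3                      # illustrative ln K statistics
C = sigma2 * np.exp(-np.abs(xg[:, None] - xg[None, :]) / corr_len)

# KL decomposition: C = V diag(lam) V^T, with eigenvalues sorted descending.
lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]

# A realization of the random field from k standard Gaussian variables xi:
rng = np.random.default_rng(5)
k = 10                                           # truncation order
xi = rng.standard_normal(k)
field = V[:, :k] @ (np.sqrt(lam[:k]) * xi)

print(lam.min() > -1e-10)                        # covariance is positive semidefinite
print(lam[:k].sum() / lam.sum() > 0.8)           # leading modes capture most variance
```

The rapid eigenvalue decay is what makes truncated KL expansions so much cheaper than Monte Carlo over the full field.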
A method for selective excitation of Ince-Gaussian modes in an end-pumped solid-state laser
NASA Astrophysics Data System (ADS)
Lei, J.; Hu, A.; Wang, Y.; Chen, P.
2014-12-01
A method for selective excitation of Ince-Gaussian modes is presented. The method is based on the spatial distributions of Ince-Gaussian modes as well as the transverse mode selection theory. Significant diffraction loss is introduced in a resonator by using opaque lines at zero-intensity positions, and this loss allows a specific mode to be excited; we call this method "loss control." We study the method by means of numerical simulation of a half-symmetric laser resonator. The simulated field is represented by the angular spectrum of plane waves representation, and its changes are calculated by the two-dimensional fast Fourier transform algorithm when it passes through the optical elements and propagates back and forth in the resonator. The output lasing modes of our method have an overlap of over 90% with the target Ince-Gaussian modes. The method will be beneficial to the further study of properties and potential applications of Ince-Gaussian modes.
Lin, Chuan-Kai; Wang, Sheng-De
2004-11-01
A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals the expansions of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing H∞ control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.
Aberration analysis and calculation in system of Gaussian beam illuminates lenslet array
NASA Astrophysics Data System (ADS)
Zhao, Zhu; Hui, Mei; Zhou, Ping; Su, Tianquan; Feng, Yun; Zhao, Yuejin
2014-09-01
Low-order aberration was found when a focused Gaussian beam was imaged at the Kodak KAI-16000 image detector, which is integrated with a lenslet array. The effect of the focused Gaussian beam and a numerical calculation of the aberration are presented in this paper. First, we set up a model of the optical imaging system based on a previous experiment: a focused Gaussian beam passed through a pinhole and was received by the Kodak KAI-16000 image detector, whose lenslet-array microlenses were exactly focused on the sensor surface. Then, we illustrated the characteristics of the focused Gaussian beam and the effect on the aberration of the relative position between the waist of the Gaussian beam and the front spherical surface of the microlenses. Finally, we analyzed the main component of the low-order aberration and calculated the spherical aberration caused by the lenslet array according to the results of the above two steps. The numerical simulation showed good agreement with the experimental result. Our results prove that spherical aberration was the main component, making up about 93.44% of the 48 nm error demonstrated in the previous experiment. The spherical aberration is inversely proportional to the divergence distance between the microlens and the waist, and directly proportional to the Gaussian beam waist radius.
NASA Astrophysics Data System (ADS)
Jang, Cheng-Shin
2016-04-01
The Jiaosi Hot Spring Region is located in northeastern Taiwan and is rich in geothermal springs. The geothermal development of the Jiaosi Hot Spring Region dates back to the 18th century and currently, the spring water is processed for various uses, including irrigation, aquaculture, swimming, bathing, foot spas, and recreational tourism. Because of the proximity of the Jiaosi Hot Spring Region to the metropolitan area of Taipei City, the hot spring resources in this region attract millions of tourists annually. Recently, the Taiwan government is paying more attention to surveying the spring water temperatures in the Jiaosi Hot Spring Region because of the severe spring water overexploitation, causing a significant decline in spring water temperatures. Furthermore, the temperature of spring water is a reliable indicator for exploring the occurrence and evolution of springs and strongly affects hydrochemical reactions, components, and magnitudes. The multipurpose uses of spring water can be dictated by the temperature of the water. Therefore, accurately estimating the temperature distribution of the spring water is critical in the Jiaosi Hot Spring Region to facilitate the sustainable development and management of the multipurpose uses of the hot spring resources. To evaluate the suitability of spring water for these various uses, this study spatially characterized the spring water temperatures of the Jiaosi Hot Spring Region by using ordinary kriging (OK), sequential Gaussian simulation (SGS), and geographical information system (GIS). First, variogram analyses were used to determine the spatial variability of spring water temperatures. Next, OK and SGS were adopted to model the spatial distributions and uncertainty of the spring water temperatures. Finally, the land use (i.e., agriculture, dwelling, public land, and recreation) was determined and combined with the estimated distributions of the spring water temperatures using GIS. 
A suitable development strategy for the multipurpose uses of spring water is proposed according to the integration of the land use and spring water temperatures. The study results indicate that OK, SGS, and GIS are capable of characterizing spring water temperatures and the suitability of multipurpose uses of spring water. SGS realizations are more robust than OK estimates for characterizing spring water temperatures. Furthermore, current land use is almost ideal in the Jiaosi Hot Spring Region according to the estimated spatial pattern of spring water temperatures.
Keywords: Hot spring; Temperature; Land use; Ordinary kriging; Sequential Gaussian simulation; Geographical information system
Single-frequency Ince-Gaussian mode operations of laser-diode-pumped microchip solid-state lasers.
Ohtomo, Takayuki; Kamikariya, Koji; Otsuka, Kenju; Chu, Shu-Chun
2007-08-20
Various single-frequency Ince-Gaussian mode oscillations have been achieved in laser-diode-pumped microchip solid-state lasers, including LiNdP4O12 (LNP) and Nd:GdVO4, by adjusting the azimuthal symmetry of the short laser resonator. Ince-Gaussian modes formed by astigmatic pumping have been reproduced by numerical simulation.
NASA Astrophysics Data System (ADS)
Zelisko, Matthew; Ahmadpoor, Fatemeh; Gao, Huajian; Sharma, Pradeep
2017-08-01
The dominant deformation behavior of two-dimensional materials (bending) is primarily governed by just two parameters: bending rigidity and the Gaussian modulus. These properties also set the energy scale for various important physical and biological processes such as pore formation, cell fission and generally, any event accompanied by a topological change. Unlike the bending rigidity, the Gaussian modulus is, however, notoriously difficult to evaluate via either experiments or atomistic simulations. In this Letter, recognizing that the Gaussian modulus and edge tension play a nontrivial role in the fluctuations of a 2D material edge, we derive closed-form expressions for edge fluctuations. Combined with atomistic simulations, we use the developed approach to extract the Gaussian modulus and edge tension at finite temperatures for both graphene and various types of lipid bilayers. Our results possibly provide the first reliable estimate of this elusive property at finite temperatures and appear to suggest that earlier estimates must be revised. In particular, we show that, if previously estimated properties are employed, the graphene-free edge will exhibit unstable behavior at room temperature. Remarkably, in the case of graphene, we show that the Gaussian modulus and edge tension even change sign at finite temperatures.
NASA Astrophysics Data System (ADS)
Jia, W.; Pan, F.; McPherson, B. J. O. L.
2015-12-01
Due to the presence of multiple phases in a given system, CO2 sequestration with enhanced oil recovery (CO2-EOR) includes complex multiphase flow processes compared to CO2 sequestration in deep saline aquifers (no hydrocarbons). Two of the most important factors are three-phase relative permeability and hysteresis effects, both of which are difficult to measure and are usually represented by numerical interpolation models. The purposes of this study included quantification of impacts of different three-phase relative permeability models and hysteresis models on CO2 sequestration simulation results, and associated quantitative estimation of uncertainty. Four three-phase relative permeability models and three hysteresis models were applied to a model of an active CO2-EOR site, the SACROC unit located in western Texas. To eliminate possible bias of deterministic parameters on the evaluation, a sequential Gaussian simulation technique was utilized to generate 50 realizations to describe heterogeneity of porosity and permeability, initially obtained from well logs and seismic survey data. Simulation results of forecasted pressure distributions and CO2 storage suggest that (1) the choice of three-phase relative permeability model and hysteresis model have noticeable impacts on CO2 sequestration simulation results; (2) influences of both factors are observed in all 50 realizations; and (3) the specific choice of hysteresis model appears to be somewhat more important relative to the choice of three-phase relative permeability model in terms of model uncertainty.
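Sequential Gaussian simulation, used here to generate the 50 porosity/permeability realizations, can itself be sketched for an unconditional 1D case: visit nodes along a random path, krige from the already-simulated nodes, and draw from the resulting conditional Gaussian. The exponential covariance and its range are illustrative; production codes (and the parallelization work above) restrict the kriging to a limited neighbourhood rather than using all prior nodes as done here.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
xg = np.linspace(0.0, 1.0, n)
cov = lambda h: np.exp(-np.abs(h) / 0.2)       # unit-sill exponential covariance (assumed)

path = rng.permutation(n)                      # random simulation path
z = np.full(n, np.nan)
for step, i in enumerate(path):
    done = path[:step]                         # already-simulated nodes
    if step == 0:
        mean, var = 0.0, 1.0                   # unconditional prior for the first node
    else:
        K = cov(xg[done][:, None] - xg[done][None, :]) + 1e-8 * np.eye(step)
        k = cov(xg[done] - xg[i])
        w = np.linalg.solve(K, k)              # simple kriging weights
        mean = w @ z[done]
        var = 1.0 - w @ k                      # simple kriging variance
    z[i] = mean + np.sqrt(max(var, 0.0)) * rng.standard_normal()

print(np.isfinite(z).all())                    # a complete Gaussian realization
```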
Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter
2016-12-26
A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. The method is further applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models applied within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with a 1% improvement compared to simplex and SA and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case, such that further simulations are not required.
Short-term prediction of chaotic time series by using RBF network with regression weights.
Rojas, I; Gonzalez, J; Cañas, A; Diaz, A F; Rojas, F J; Rodriguez, M
2000-10-01
We propose a framework for constructing and training a radial basis function (RBF) neural network. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters sigma are introduced, which eliminates the symmetry restriction and provides the neurons in the hidden layer with greater flexibility with respect to function approximation. We propose a modified PG-BF (pseudo-Gaussian basis function) network in which regression weights are used to replace the constant weights in the output layer. For this purpose, a sequential learning algorithm is presented to adapt the structure of the network, in which it is possible to create a new hidden unit and also to detect and remove inactive units. A salient feature of the network is that the overall output is calculated as the weighted average of the outputs associated with each receptive field. The superior performance of the proposed PG-BF system over the standard RBF is illustrated using the problem of short-term prediction of chaotic time series.
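A minimal sketch of the two ingredients named above, an asymmetric pseudo-Gaussian basis (separate left/right widths) and regression weights combined by a normalized weighted-average output, is given below. The target function, widths, and network size are illustrative, and the sequential unit-growing/pruning algorithm is omitted; a single least-squares solve stands in for training.

```python
import numpy as np

def pg(x, c, s_left, s_right):
    """Pseudo-Gaussian: different widths on each side of the centre c."""
    s = np.where(x < c, s_left, s_right)
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)                                    # illustrative target signal

centers = np.linspace(0.0, 2.0 * np.pi, 8)
Phi = np.stack([pg(x, c, 0.6, 0.9) for c in centers], axis=1)
Phi = Phi / Phi.sum(axis=1, keepdims=True)       # weighted-average (normalized) output

# Regression weights: each unit contributes a_i + b_i * x instead of a constant.
A = np.hstack([Phi, Phi * x[:, None]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - y) ** 2))     # small for a smooth target
```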
A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum.
Liu, Pan; Deng, Xiaoyan; Tang, Xin; Shen, Shijian
2017-05-01
This paper presents a wavelet-based Gaussian method (WGM) for the peak intensity estimation of energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of the Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, whose location must be known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters based on the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested via simulated and measured spectra from an energy X-ray spectrometer, and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity of EDXRF free from the background information, and also effectively distinguish overlapping peaks in the EDXRF spectrum.
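The closed-form link between wavelet coefficients at the peak and the Gaussian parameters can be illustrated numerically. With an unnormalized Mexican hat ψ(t) = (1 − t²)e^(−t²/2) at scale s, the coefficient at the centre of a Gaussian peak A·e^(−x²/(2σ²)) works out to W(s) = √(2π)·A·σ·s³/(s² + σ²)^(3/2), so two scales suffice to solve for σ and then A. Note this is a sketch of the idea under an assumed wavelet normalization, not the paper's published formulas.

```python
import numpy as np

A_true, sigma_true = 5.0, 2.0
dx = 0.01
x = np.arange(-30.0, 30.0, dx)
peak = A_true * np.exp(-x**2 / (2.0 * sigma_true**2))

def W(s):
    """Wavelet coefficient at the peak centre (unnormalized Mexican hat, scale s)."""
    psi = (1.0 - (x / s) ** 2) * np.exp(-x**2 / (2.0 * s**2))
    return np.sum(psi * peak) * dx

s1, s2 = 1.0, 3.0
R = W(s1) / W(s2)
k = R ** (2.0 / 3.0) * s2**2 / s1**2
sigma2 = (s2**2 - k * s1**2) / (k - 1.0)          # solve the two-scale ratio for sigma^2
sigma_est = np.sqrt(sigma2)
A_est = W(s1) * (s1**2 + sigma2) ** 1.5 / (np.sqrt(2.0 * np.pi) * sigma_est * s1**3)
print(round(sigma_est, 2), round(A_est, 2))       # recovers sigma ≈ 2.0 and A ≈ 5.0
```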
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
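The grid-search half of the hybrid strategy can be sketched without the Gaussian-process component: fit the sinusoidal basis by least squares at each trial period and keep the best fit. The sampling pattern, noise level, and period grid below are illustrative, not taken from the Mira survey.

```python
import numpy as np

rng = np.random.default_rng(8)
P_true = 3.7
t = np.sort(rng.uniform(0.0, 100.0, 60))        # sparse, irregular sampling times
y = np.sin(2.0 * np.pi * t / P_true) + 0.1 * rng.standard_normal(60)

def rss(P):
    """Residual sum of squares of the best-fit sinusoid with trial period P."""
    X = np.column_stack([np.ones_like(t),
                         np.sin(2.0 * np.pi * t / P),
                         np.cos(2.0 * np.pi * t / P)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

grid = np.arange(2.0, 6.0, 0.001)               # dense period grid
P_hat = grid[np.argmin([rss(P) for P in grid])]
print(abs(P_hat - P_true) < 0.05)               # true period recovered from the grid
```

In the paper the per-period fit is a full GP marginal likelihood rather than this plain least-squares residual, but the multimodality being tamed by the dense grid is the same.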
Foam morphology, frustration and topological defects in a Negatively curved Hele-Shaw geometry
NASA Astrophysics Data System (ADS)
Mughal, Adil; Schroeder-Turk, Gerd; Evans, Myfanwy
2014-03-01
We present preliminary simulations of foams and single bubbles confined in a narrow gap between parallel surfaces. Unlike previous work, in which the bounding surfaces are flat (the so-called Hele-Shaw geometry), we consider surfaces with non-vanishing Gaussian curvature. We demonstrate that the curvature of the bounding surfaces induces a geometric frustration in the preferred order of the foam. This frustration can be relieved by the introduction of topological defects (disclinations, dislocations and complex scar arrangements). We give a detailed analysis of these defects for foams confined in curved Hele-Shaw cells and compare our results with exotic honeycombs built by bees on surfaces of varying Gaussian curvature. Our simulations, while encompassing surfaces of constant Gaussian curvature (such as the sphere and the cylinder), focus on surfaces with negative Gaussian curvature, in particular triply periodic minimal surfaces (such as the Schwarz P-surface and Schoen's Gyroid surface). We use the results from a sphere-packing algorithm to generate a Voronoi partition that forms the basis of a Surface Evolver simulation, which yields a realistic foam morphology.
Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework
NASA Astrophysics Data System (ADS)
Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.
2018-01-01
Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how can spatiotemporal characteristics of multisite daily precipitation be simulated from probabilistic regression models? Recent publications point out the complexity of multisite properties of daily precipitation and highlight the need for a flexible non-Gaussian tool. This work proposes a reasonable compromise between simplicity and flexibility that avoids model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model which merges a vector generalized linear model (VGLM, as a probabilistic regression tool) and the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. Gaussian copula).
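The core of a nonparametric bootstrap for multisite series is that each resampled draw keeps the full cross-site vector of one day, so spatial dependence is inherited from the data rather than modelled parametrically. A minimal sketch on synthetic data (the synthetic field, sizes and thresholds are illustration choices, and the paper's VGLM regression component is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multisite daily precipitation: 1000 days x 5 sites with
# correlated occurrence and skewed amounts (illustrative only).
n_days, n_sites = 1000, 5
cov = 0.6 * np.ones((n_sites, n_sites)) + 0.4 * np.eye(n_sites)
latent = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_days)
precip = np.where(latent > 0.3, np.exp(latent), 0.0)   # wet/dry + amounts

def bootstrap_days(series, n_boot, rng):
    """Resample whole days (rows) with replacement: each draw keeps the
    full cross-site vector, preserving that day's spatial pattern."""
    idx = rng.integers(0, series.shape[0], size=n_boot)
    return series[idx]

sim = bootstrap_days(precip, 5000, rng)

# The inter-site correlation of the simulation tracks the observed one.
obs_corr = np.corrcoef(precip[:, 0], precip[:, 1])[0, 1]
sim_corr = np.corrcoef(sim[:, 0], sim[:, 1])[0, 1]
```

In the paper's framework, the bootstrap is driven by the VGLM-predicted conditional distributions rather than by plain row resampling as here.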
Persistent homology and non-Gaussianity
NASA Astrophysics Data System (ADS)
Cole, Alex; Shiu, Gary
2018-03-01
In this paper, we introduce the topological persistence diagram as a statistic for Cosmic Microwave Background (CMB) temperature anisotropy maps. A central concept in 'Topological Data Analysis' (TDA), the idea of persistence is to represent a data set by a family of topological spaces. One then examines how long topological features 'persist' as the family of spaces is traversed. We compute persistence diagrams for simulated CMB temperature anisotropy maps featuring various levels of primordial non-Gaussianity of local type. Postponing the analysis of observational effects, we show that persistence diagrams are more sensitive to local non-Gaussianity than previous topological statistics including the genus and Betti number curves, and can constrain ΔfNL^loc = 35.8 at the 68% confidence level on the simulation set, compared to ΔfNL^loc = 60.6 for the Betti number curves. Given the resolution of our simulations, we expect applying persistence diagrams to observational data will give constraints competitive with those of the Minkowski Functionals. This is the first in a series of papers where we plan to apply TDA to different shapes of non-Gaussianity in the CMB and Large Scale Structure.
NASA Astrophysics Data System (ADS)
Li, S.; Zhang, Y.; Zhang, X.; Du, C.
2009-12-01
The Moxa Arch Anticline is a regional-scale northwest-trending uplift in western Wyoming where geological storage of acid gases (CO2, CH4, N2, H2S, He) from ExxonMobil's Shute Creek Gas Plant is under consideration. The Nugget Sandstone, a deep saline aquifer at depths exceeding 17,170 ft, is a candidate formation for acid gas storage. As part of a larger goal of determining site suitability, this study builds three-dimensional local- to regional-scale geological and fluid flow models for the Nugget Sandstone, its caprock (Twin Creek Limestone), and an underlying aquifer (Ankareh Sandstone), together the ``Nugget Suite''. For an area of 3000 square miles, geological and engineering data were assembled, screened for accuracy, and digitized, covering an average formation thickness of ~1700 feet. The data include 900 public-domain well logs (SP, Gamma Ray, Neutron Porosity, Density, Sonic, shallow and deep Resistivity, Lithology, Deviated well logs), 784 feet of core measurements (porosity and permeability), 4 regional geological cross sections, and 3 isopach maps. Data were interpreted and correlated for geological formations and facies, the latter categorized using both Neural Network and Gaussian Hierarchical Clustering algorithms. Well log porosities were calibrated with core measurements, and permeabilities were estimated using formation-specific porosity-permeability transforms. Using conditional geostatistical simulations (first indicator simulation of facies, then sequential Gaussian simulation of facies-specific porosity), data were integrated at the regional scale to create a geological model from which a local-scale simulation model surrounding the Shute Creek injection site was extracted.
Based on this model, full compositional multiphase flow simulations were conducted to explore (1) an appropriate grid resolution for accurate acid gas predictions (pressure, saturation, and mass balance), and (2) the sensitivity of model predictions to key geological and engineering variables. Results suggest that (1) a horizontal and vertical resolution of 1/75 and 1/5~1/2 of the porosity correlation length, respectively, is needed to accurately capture the flow physics and mass balance, and (2) the most sensitive variables with first-order impact on model predictions (i.e., regional storage, local displacement efficiency) are boundary condition, vertical permeability, relative permeability hysteresis, and injection rate. However, all else being equal, formation brine salinity has the most important effect on the concentrations of all dissolved components. Future work will define and simulate reactions of acid gases with formation brines and rocks, which are currently under laboratory investigation.
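The sequential Gaussian simulation step used for porosity in this workflow can be sketched in one dimension. This is a minimal unconditional version under assumed choices (exponential covariance, zero mean, all previously simulated nodes as the kriging neighbourhood); production codes such as GSLIB's SGSIM condition on data and use a limited search neighbourhood:

```python
import numpy as np

rng = np.random.default_rng(1)

def cov(h, sill=1.0, a=10.0):
    """Exponential covariance model (an assumption for this sketch)."""
    return sill * np.exp(-np.abs(h) / a)

def sgs_1d(x, rng):
    """Unconditional sequential Gaussian simulation on a 1-D grid: visit the
    nodes along a random path, simple-krige each node from the already
    simulated ones, then draw from the resulting conditional Gaussian."""
    n = len(x)
    z = np.full(n, np.nan)
    path = rng.permutation(n)
    for k, i in enumerate(path):
        done = path[:k]                      # previously simulated nodes
        if k == 0:
            mean, var = 0.0, cov(0.0)
        else:
            C = cov(x[done][:, None] - x[done][None, :])
            c = cov(x[done] - x[i])
            w = np.linalg.solve(C, c)        # simple kriging weights
            mean = w @ z[done]
            var = cov(0.0) - w @ c
        z[i] = mean + np.sqrt(max(var, 0.0)) * rng.standard_normal()
    return z

x = np.arange(50, dtype=float)
z = sgs_1d(x, rng)
```

For facies-specific porosity as in the study, one such simulation would be run per facies, in normal-score space, and back-transformed afterwards.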
Suppressing correlations in massively parallel simulations of lattice models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2017-11-01
For lattice Monte Carlo simulations, parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2+1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlations in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.
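The simplest member of the domain-decomposition family discussed here is the two-sublattice (checkerboard) scheme: sites of one colour have no nearest neighbours of the same colour, so all of them can be updated concurrently without conflicts. A vectorized sketch on an Ising-type lattice (an illustration of the decomposition idea only; the paper's octahedron model and GPU schemes are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

L = 16
spins = rng.choice([-1, 1], size=(L, L))
beta = 0.4

ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
parity = (ii + jj) % 2                      # checkerboard colouring

def half_sweep(spins, colour, rng):
    """Metropolis update of one sublattice; all its sites are independent,
    so this whole step could run in parallel (here it is just vectorized)."""
    nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
          + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
    dE = 2.0 * spins * nb                   # energy cost of flipping each site
    accept = rng.random(spins.shape) < np.exp(-beta * np.clip(dE, 0, None))
    flip = (parity == colour) & accept      # touch one colour only
    return np.where(flip, -spins, spins)

for _ in range(20):                         # one full sweep = two half-sweeps
    spins = half_sweep(spins, 0, rng)
    spins = half_sweep(spins, 1, rng)
```

The correlations the paper suppresses arise because such fixed decompositions revisit sites in a deterministic order; their schemes randomize the decomposition to restore correlation-free site selection.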
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Data assimilation techniques have been widely applied to improve the predictability of hydrologic modeling. Among them, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
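The SIR (sequential importance resampling) building block underlying these schemes can be sketched on a toy nonlinear scalar model. This is state estimation only, with an invented model and noise levels; the paper's storage function model, parameter updating and kernel smoothing are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy nonlinear state-space model (illustration only).
T, N = 50, 500
def step(x, noise):
    return 0.9 * x + np.sin(x) + 0.3 * noise

x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = step(x_true[t-1], rng.standard_normal())
y = x_true + 0.5 * rng.standard_normal(T)      # noisy observations

# SIR particle filter: predict, weight by the likelihood, resample.
particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = step(particles, rng.standard_normal(N))
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2   # Gaussian obs. likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles                          # posterior mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]

rmse = np.sqrt(np.mean((est - x_true) ** 2))
```

ASIR adds a look-ahead auxiliary weighting before the prediction step, and a dual scheme would augment each particle with parameter values jittered by a kernel.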
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allison, M.L.
1996-05-13
The objective of this project is to increase oil production and reserves in the Uinta Basin by demonstrating improved completion techniques. Low productivity of Uinta Basin wells is caused by gross production intervals of several thousand feet that contain perforated thief zones, water-bearing zones, and unperforated oil-bearing intervals. Geologic and engineering characterization and computer simulation of the Green River and Wasatch Formations in the Bluebell field will determine reservoir heterogeneities related to fractures and depositional trends. This will be followed by completion techniques based on the reservoir characterization. Transfer of the project results will be an ongoing component of the project. Data (net pay thickness, porosity, and water saturation) from more than 100 individual beds in the lower Green River and Wasatch Formations were used to generate geostatistical realizations (numerical representations) of the reservoir properties. The data set was derived from the Michelle Ute and Malnar Pike demonstration wells and 22 other wells in a 20-square-mile (52 km²) area. Beds were studied independently of each other. Principles of sequential Gaussian simulation were used to generate geostatistical realizations of the beds.
Hajati, Omid; Zarrabi, Khalil; Karimi, Reza; Hajati, Azadeh
2012-01-01
There is still controversy over the differences in the patency rates of the sequential and individual coronary artery bypass grafting (CABG) techniques. The purpose of this paper was to non-invasively evaluate hemodynamic parameters using complete 3D computational fluid dynamics (CFD) simulations of the sequential and the individual methods based on the patient-specific data extracted from computed tomography (CT) angiography. For CFD analysis, the geometric model of coronary arteries was reconstructed using an ECG-gated 64-detector row CT. Modeling the sequential and individual bypass grafting, this study simulates the flow from the aorta to the occluded posterior descending artery (PDA) and the posterior left ventricle (PLV) vessel with six coronary branches based on the physiologically measured inlet flow as the boundary condition. The maximum calculated wall shear stress (WSS) in the sequential and the individual models were estimated to be 35.1 N/m² and 36.5 N/m², respectively. Compared to the individual bypass method, the sequential graft has shown a higher velocity at the proximal segment and lower spatial wall shear stress gradient (SWSSG) due to the flow splitting caused by the side-to-side anastomosis. Simulated results combined with its surgical benefits including the requirement of shorter vein length and fewer anastomoses advocate the sequential method as a more favorable CABG method.
Gaussian mass optimization for kernel PCA parameters
NASA Astrophysics Data System (ADS)
Liu, Yong; Wang, Zulin
2011-10-01
This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome the current brute-force parameter optimization methods in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between samples, the most commonly used kernel parameter, does not capture many features of the target; this motivates the Gaussian mass. The Gaussian mass defined in this paper is invariant under rotation and translation and is capable of depicting edge, topology and shape information. Simulation results show that Gaussian mass provides a promising heuristic boost for kernel methods. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are also proposed at the end of the paper.
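The kernel method being tuned here is kernel PCA with a Gaussian (RBF) kernel, whose width is the parameter in question. A plain NumPy sketch of RBF kernel PCA (the paper's Gaussian-mass heuristic is not reproduced; sigma is simply fixed by hand, and the two-cluster data are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two well-separated 2-D clusters (illustrative data).
X = np.vstack([rng.normal(0, 0.1, (30, 2)),
               rng.normal(0, 0.1, (30, 2)) + [3, 0]])

def kernel_pca(X, sigma, n_components):
    """RBF kernel PCA: build the kernel, double-centre it, eigendecompose,
    and return the projected coordinates scaled by sqrt(eigenvalue)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

Z = kernel_pca(X, sigma=1.0, n_components=2)
```

A parameter heuristic such as the paper's would replace the hand-picked `sigma=1.0` with a data-derived value.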
Simulation of time series by distorted Gaussian processes
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1977-01-01
A distorted stationary Gaussian process can be used to provide computer-generated imitations of experimental time series. A method of analyzing a source time series and synthesizing an imitation is shown, and an example using X-band radiometer data is given.
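The distorted-Gaussian idea can be sketched as a stationary Gaussian AR(1) process passed through a static (memoryless) nonlinearity, which imitates a non-Gaussian marginal while the correlation structure is carried by the underlying Gaussian process. The AR(1) form and the exponential distortion are illustration choices, not the paper's analysis method:

```python
import numpy as np

rng = np.random.default_rng(5)

# Unit-variance stationary Gaussian AR(1) process.
n, phi = 20000, 0.8
g = np.zeros(n)
eps = rng.standard_normal(n) * np.sqrt(1 - phi ** 2)
for t in range(1, n):
    g[t] = phi * g[t-1] + eps[t]

# Memoryless distortion: exp() gives a positively skewed (lognormal) marginal.
distorted = np.exp(0.5 * g)

skew = np.mean((distorted - distorted.mean()) ** 3) / distorted.std() ** 3
```

In practice the distortion would be fitted so that the output marginal matches the empirical distribution of the source series (e.g. via a quantile transform).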
NASA Astrophysics Data System (ADS)
Lorentzen, Rolf J.; Stordal, Andreas S.; Hewitt, Neal
2017-05-01
Flowrate allocation in production wells is a complicated task, especially for multiphase flow combined with several reservoir zones and/or branches. The result depends heavily on the available production data and their accuracy. In the application shown here, downhole pressure and temperature data are available, in addition to the total flowrates at the wellhead. The developed methodology inverts these observations to the fluid flowrates (oil, water and gas) that enter two production branches in a real full-scale producer. A major challenge is accurate estimation of flowrates during rapid variations in the well, e.g. due to choke adjustments. The Auxiliary Sequential Importance Resampling (ASIR) filter was developed to handle such challenges by introducing an auxiliary step in which the particle weights are recomputed (a second weighting step) based on how well the particles reproduce the observations. However, the ASIR filter suffers from large computational time as the number of unknown parameters increases. The Gaussian Mixture (GM) filter combines a linear update with the particle filter's ability to capture non-Gaussian behavior. This makes it possible to achieve good performance with fewer model evaluations. In this work we present a new filter which combines the ASIR filter and the Gaussian Mixture filter (denoted ASGM), and demonstrate improved estimation (compared to the ASIR and GM filters) in cases with rapid parameter variations, while maintaining reasonable computational cost.
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
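The dense-grid search over period can be sketched with a linear least-squares sinusoid fit at each trial period, a stripped-down stand-in for the paper's hybrid scheme (the Gaussian process term, priors and quasi-Newton step are omitted, and the light curve is synthetic):

```python
import numpy as np

rng = np.random.default_rng(6)

# Sparse, irregularly sampled sinusoidal "light curve" (synthetic).
true_period = 310.0
t = np.sort(rng.uniform(0, 3000, 60))
y = 1.2 * np.sin(2 * np.pi * t / true_period + 0.4) + 0.1 * rng.standard_normal(60)

def fit_rss(t, y, period):
    """Residual sum of squares of the best-fit mean + sinusoid at this period;
    amplitude and phase enter linearly via sin/cos columns."""
    A = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2)

periods = np.linspace(100, 1000, 4000)          # dense grid
best = periods[np.argmin([fit_rss(t, y, p) for p in periods])]
```

Because the residual surface is highly multimodal in period, only such a grid scan (rather than local optimization alone) reliably locates the global minimum, which is the paper's motivation for the hybrid approach.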
Miao, Yinglong; Feher, Victoria A; McCammon, J Andrew
2015-08-11
A Gaussian accelerated molecular dynamics (GaMD) approach for simultaneous enhanced sampling and free energy calculation of biomolecules is presented. By constructing a boost potential that follows Gaussian distribution, accurate reweighting of the GaMD simulations is achieved using cumulant expansion to the second order. Here, GaMD is demonstrated on three biomolecular model systems: alanine dipeptide, chignolin folding, and ligand binding to the T4-lysozyme. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of these biomolecules. Furthermore, the free energy profiles obtained from reweighting of the GaMD simulations allow us to identify distinct low-energy states of the biomolecules and characterize the protein-folding and ligand-binding pathways quantitatively.
Gaussian Accelerated Molecular Dynamics: Unconstrained Enhanced Sampling and Free Energy Calculation
2016-01-01
A Gaussian accelerated molecular dynamics (GaMD) approach for simultaneous enhanced sampling and free energy calculation of biomolecules is presented. By constructing a boost potential that follows Gaussian distribution, accurate reweighting of the GaMD simulations is achieved using cumulant expansion to the second order. Here, GaMD is demonstrated on three biomolecular model systems: alanine dipeptide, chignolin folding, and ligand binding to the T4-lysozyme. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of these biomolecules. Furthermore, the free energy profiles obtained from reweighting of the GaMD simulations allow us to identify distinct low-energy states of the biomolecules and characterize the protein-folding and ligand-binding pathways quantitatively. PMID:26300708
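The two GaMD ingredients named in the abstract, a harmonic boost potential and second-order cumulant reweighting, can be sketched numerically. Here the potential-energy sample, the force constant k, the bound E and the temperature are all illustration choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# GaMD-style harmonic boost: dV = 0.5*k*(E - V)^2 for V < E, zero above.
V = rng.normal(-100.0, 3.0, size=100000)   # synthetic energies (kcal/mol)
E = V.max()                                # boost upper bound
k = 0.005                                  # boost force constant (assumed)
dV = np.where(V < E, 0.5 * k * (E - V) ** 2, 0.0)

# Reweighting factor ln<exp(b*dV)> via the exact sample average versus the
# second-order cumulant expansion b*<dV> + 0.5*b^2*Var(dV).
b = 1.0 / 0.6                              # ~1/kT in (kcal/mol)^-1 near 300 K
exact = np.log(np.mean(np.exp(b * dV)))
cumulant2 = b * dV.mean() + 0.5 * b ** 2 * dV.var()
```

Because the boost is constructed so that dV is nearly Gaussian-distributed, the second-order cumulant expansion tracks the exact exponential average closely, which is the basis of the "accurate reweighting" claim.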
NASA Technical Reports Server (NTRS)
Reeves, P. M.; Campbell, G. S.; Ganzer, V. M.; Joppa, R. G.
1974-01-01
A method is described for generating time histories which model the frequency content and certain non-Gaussian probability characteristics of atmospheric turbulence, including the large gusts and patchy nature of turbulence. Methods for generating the time histories by either analog or digital computation are described. A STOL airplane was programmed into a 6-degree-of-freedom flight simulator, and turbulence time histories from several atmospheric turbulence models were introduced. The pilots' reactions are described.
NASA Astrophysics Data System (ADS)
Zheng, Guo; Wang, Jue; Wang, Lin; Zhou, Muchun; Chen, Yanru; Song, Minmin
2018-03-01
The scintillation index of pseudo-Bessel-Gaussian Schell-mode (PBGSM) beams propagating through atmospheric turbulence is analyzed with the help of wave-optics simulation, owing to the analytic difficulties. It is found that in the strong fluctuation regime, PBGSM beams are more resistant to the turbulence with appropriate parameters β and δ; the opposite holds in the weak fluctuation regime. Our simulation results indicate that PBGSM beams may be applied to free-space optical (FSO) communication systems only when the turbulence is strong or the propagation distance is long.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
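The simulation side of such a study can be sketched as a Monte Carlo estimate of the probability that a stationary Gaussian process stays below a crossing level over a window. The AR(1) (exponential autocorrelation) model, level and window length are illustration choices:

```python
import numpy as np

rng = np.random.default_rng(8)

phi, level, m, n_rep = 0.9, 1.5, 50, 2000

def ar1_path(n, phi, rng):
    """Unit-variance stationary Gaussian AR(1) realization."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    innov = rng.standard_normal(n) * np.sqrt(1 - phi ** 2)
    for t in range(1, n):
        x[t] = phi * x[t-1] + innov[t]
    return x

# Fraction of realizations whose maximum stays below the crossing level.
stay_below = np.mean([ar1_path(m, phi, rng).max() < level
                      for _ in range(n_rep)])
```

By Slepian's inequality, positive correlation makes this stay-below probability larger than the independent-sample value Φ(level)^m, which is the kind of autocorrelation dependence the abstract describes.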
Pulse shaping system research of CdZnTe radiation detector for high energy x-ray diagnostic
NASA Astrophysics Data System (ADS)
Li, Miao; Zhao, Mingkun; Ding, Keyu; Zhou, Shousen; Zhou, Benjie
2018-02-01
As one of the typical wide band-gap semiconductor materials, CdZnTe has high detection efficiency and excellent energy resolution for hard X-rays and gamma rays. The generated signal of the CdZnTe detector needs to be transformed into a pseudo-Gaussian pulse with a small pulse width to remove noise and improve the energy resolution in the downstream nuclear spectrometry data acquisition system. In this paper, a multi-stage pseudo-Gaussian shaping filter is investigated based on nuclear electronics principles. Optimized circuit parameters were obtained from an analysis of the characteristics of the pseudo-Gaussian shaping filter in our simulations. Based on the simulation results, the falling time of the output pulse decreased, and a faster response time can be obtained, with decreasing shaping time τs-k. The undershoot was also removed when the ratio of the input resistors was set to 1:2.5. Moreover, a two-stage Sallen-Key Gaussian shaping filter was designed and fabricated using a low-noise voltage-feedback operational amplifier (LMH6628). A detection experiment platform was built using the precise pulse generator CAKE831 to imitate the radiation pulse, as an equivalent signal of the semiconductor CdZnTe detector. Experimental results show that the output pulse of the two-stage pseudo-Gaussian shaping filter has a minimum pulse width (FWHM) of 200 ns, and the output pulse of each stage was consistent with the simulation results. Based on this performance, the multi-stage pseudo-Gaussian shaping filter can reduce the event loss caused by pile-up in the CdZnTe semiconductor detector and improve the energy resolution effectively.
NASA Technical Reports Server (NTRS)
Cheng, Anning; Xu, Kuan-Man
2006-01-01
The abilities of cloud-resolving models (CRMs) with the double-Gaussian based and the single-Gaussian based third-order closures (TOCs) to simulate the shallow cumuli and their transition to deep convective clouds are compared in this study. The single-Gaussian based TOC is fully prognostic (FP), while the double-Gaussian based TOC is partially prognostic (PP). The latter only predicts three important third-order moments while the former predicts all the third-order moments. A shallow cumulus case is simulated by single-column versions of the FP and PP TOC models. The PP TOC improves the simulation of shallow cumulus greatly over the FP TOC by producing more realistic cloud structures. Large differences between the FP and PP TOC simulations appear in the cloud layer of the second- and third-order moments, which are related mainly to the underestimate of the cloud height in the FP TOC simulation. Sensitivity experiments and analysis of probability density functions (PDFs) used in the TOCs show that both the turbulence-scale condensation and higher-order moments are important to realistic simulations of the boundary-layer shallow cumuli. A shallow to deep convective cloud transition case is also simulated by the 2-D versions of the FP and PP TOC models. Both CRMs can capture the transition from the shallow cumuli to deep convective clouds. The PP simulations produce more and deeper shallow cumuli than the FP simulations, but the FP simulations produce larger and wider convective clouds than the PP simulations. The temporal evolutions of cloud and precipitation are closely related to the turbulent transport, the cold pool and the cloud-scale circulation. The large amount of turbulent mixing associated with the shallow cumuli slows down the increase of the convective available potential energy and inhibits the early transition to deep convective clouds in the PP simulation.
When the deep convective clouds fully develop and the precipitation is produced, the cold pools produced by the evaporation of the precipitation are not favorable to the formation of shallow cumuli.
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from Gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
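The optimal procedure referenced here, the sequential probability ratio test applied to Poisson spike counts, can be sketched directly: accumulate the log-likelihood ratio of two candidate firing rates until it leaves the interval (log B, log A). The rates, bin structure and error targets below are illustrative, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(9)

r0, r1 = 5.0, 8.0                           # candidate rates (counts per bin)
logA, logB = np.log(19.0), np.log(1.0 / 19.0)   # ~5% Wald error targets

def sprt(counts, r0, r1, logA, logB):
    """Wald SPRT on a stream of Poisson counts: stop at the first boundary
    crossing and report the decision and the number of bins used."""
    llr = 0.0
    for n, c in enumerate(counts, start=1):
        llr += c * np.log(r1 / r0) - (r1 - r0)   # Poisson log-LR per bin
        if llr >= logA:
            return "H1", n
        if llr <= logB:
            return "H0", n
    return "undecided", len(counts)

# Generate spike counts under H1 and check how often the test decides H1.
decisions = [sprt(rng.poisson(r1, 200), r0, r1, logA, logB)[0]
             for _ in range(300)]
accuracy = np.mean([d == "H1" for d in decisions])
```

The article's point is that this count-based log-likelihood ratio, unlike the equal-variance Gaussian one, is what a Poisson spike representation makes natural for neural circuits to accumulate.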
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting the existing estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the compound probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1995-01-01
When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.
NASA Astrophysics Data System (ADS)
Nie, Yongming; Li, Xiujian; Qi, Junli; Ma, Haotong; Liao, Jiali; Yang, Jiankun; Hu, Wenhua
2012-03-01
Based on a refractive beam shaping system, the transformation of a quasi-Gaussian beam into a dark hollow Gaussian beam by a phase-only liquid crystal spatial light modulator (LC-SLM) is proposed. According to energy conservation and the constant optical path principle, the phase distribution of the aspheric lens and the phase-only LC-SLM can modulate the wave-front properly to generate the hollow beam. The numerical simulation results indicate that the dark hollow intensity distribution of the output shaped beam can be maintained well over a certain propagation distance, during which the dark region will not shrink, whereas that of an ideal hollow Gaussian beam will. With a carefully designed phase modulation profile loaded into the LC-SLM, the experimental results indicate that the dark hollow intensity distribution of the output shaped beam is maintained well even at distances beyond 550 mm from the LC-SLM, in agreement with the numerical simulation results.
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
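The Euler-Poincaré characteristic used to compare the projected catalogs can be computed for a pixelized excursion set as V - E + F over the cell complex of "on" pixels. A NumPy sketch, checked against simple shapes and applied to a white-noise stand-in for a projected density map (the smoothing and cell-count normalization of the actual analysis are omitted):

```python
import numpy as np

def euler_characteristic(mask):
    """Euler characteristic V - E + F of the union of closed unit squares
    given by the True pixels of a 2-D boolean mask."""
    m = np.pad(np.asarray(mask, dtype=bool), 1)
    F = int(m.sum())                                  # pixel faces
    Eh = int((m[1:, :] | m[:-1, :]).sum())            # horizontal unit edges
    Ev = int((m[:, 1:] | m[:, :-1]).sum())            # vertical unit edges
    V = int((m[1:, 1:] | m[1:, :-1]                   # lattice vertices touched
             | m[:-1, 1:] | m[:-1, :-1]).sum())
    return V - (Eh + Ev) + F

# Sanity shapes: one square (chi=1), a hollow ring (chi=0), two squares (chi=2).
single = np.array([[1]])
ring = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
pair = np.array([[1, 0, 1]])

# Excursion set of a random field above a threshold.
rng = np.random.default_rng(10)
field = rng.standard_normal((64, 64))
chi = euler_characteristic(field > 1.0)
```

Scanning the threshold and recording chi at each level produces the genus-type curve whose shape ("meatball" versus "sponge" versus "swiss-cheese") distinguishes the Gaussian and non-Gaussian models in the paper.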
Kang; Ih; Kim; Kim
2000-03-01
In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approach often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy to predict the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall in a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. Simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. Comparisons between measurement and prediction show good agreement, validating the proposed Gaussian function approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruban, V. P., E-mail: ruban@itp.ac.ru
2015-05-15
The nonlinear dynamics of an obliquely oriented wave packet on a sea surface is analyzed analytically and numerically for various initial parameters of the packet in relation to the problem of the so-called rogue waves. Within the Gaussian variational ansatz applied to the corresponding (1+2)-dimensional hyperbolic nonlinear Schrödinger equation (NLSE), a simplified Lagrangian system of differential equations is derived that describes the evolution of the coefficients of the real and imaginary quadratic forms appearing in the Gaussian. This model provides a semi-quantitative description of the process of nonlinear spatiotemporal focusing, which is one of the most probable mechanisms of rogue wave formation in random wave fields. The system of equations is integrated in quadratures, which allows one to better understand the qualitative differences between linear and nonlinear focusing regimes of a wave packet. Predictions of the Gaussian model are compared with the results of direct numerical simulation of fully nonlinear long-crested waves.
Chialvo, Ariel A.; Vlcek, Lukas
2014-11-01
We present a detailed derivation of the complete set of expressions required for the implementation of an Ewald summation approach to handle the long-range electrostatic interactions of polar and ionic model systems involving Gaussian charges and induced dipole moments, with a particular application to the isobaric-isothermal molecular dynamics simulation of our Gaussian Charge Polarizable (GCP) water model and its extension to aqueous electrolyte solutions. The set comprises the individual components of the potential energy, electrostatic potential, electrostatic field and gradient, the electrostatic force and the corresponding virial. Moreover, we show how the derived expressions converge to known point-based electrostatic counterparts when the parameters defining the Gaussian charge and induced-dipole distributions are extrapolated to their limiting point values. Finally, we illustrate the Ewald implementation against the current reaction field approach by isothermal-isobaric molecular dynamics of ambient GCP water, for which we compare the outcomes of the thermodynamic, microstructural, and polarization behavior.
Mapping soil textural fractions across a large watershed in north-east Florida.
Lamsal, S; Mishra, U
2010-08-01
Assessment of regional-scale soil spatial variation and mapping of its distribution are constrained by sparse data collected through field surveys that are labor intensive and cost prohibitive. We explored geostatistical (ordinary kriging-OK), regression (Regression Tree-RT), and hybrid methods (RT plus residual Sequential Gaussian Simulation-SGS) to map soil textural fractions across the Santa Fe River Watershed (3585 km(2)) in north-east Florida. Soil samples collected from four depths (L1: 0-30 cm, L2: 30-60 cm, L3: 60-120 cm, and L4: 120-180 cm) at 141 locations were analyzed for soil textural fractions (sand, silt and clay contents), and combined with textural data (15 profiles) assembled under the Florida Soil Characterization program. Textural fractions in L1 and L2 were autocorrelated, and spatially mapped across the watershed. OK performance was poor, which may be attributed to the sparse sampling. RT model structure varied among textural fractions, and the variation explained by the model ranged from 25% for L1 silt to 61% for L2 clay content. Regression residuals were simulated using SGS, and the average of the simulated residuals was used to approximate the regression residual distribution map, which was added to the regression trend maps. Independent validation of the prediction maps showed that regression models performed slightly better than OK, and regression combined with the average of the simulated regression residuals improved predictions beyond the regression model. Sand content >90% in both 0-30 and 30-60 cm covered 80.6% of the watershed area. Copyright 2010 Elsevier Ltd. All rights reserved.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared using simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
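As an illustration of the two main estimators compared in this study, the sketch below computes a parametric (Gaussian, mean ± 1.96 SD) and a non-parametric (empirical percentile) 95% reference interval. This is a minimal sketch only; the robust estimator and the exact percentile conventions of the paper are not reproduced.

```python
import numpy as np

def reference_interval_parametric(values):
    """Parametric 95% reference interval assuming Gaussian data:
    mean +/- 1.96 sample standard deviations."""
    v = np.asarray(values, float)
    m, s = v.mean(), v.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def reference_interval_nonparametric(values):
    """Non-parametric 95% reference interval: the empirical 2.5th and
    97.5th percentiles."""
    v = np.asarray(values, float)
    return np.percentile(v, 2.5), np.percentile(v, 97.5)
```

For Gaussian data the two estimates agree closely, consistent with the parametric approach performing best in the Gaussian scenarios described above.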
NASA Astrophysics Data System (ADS)
Huang, Lida; Chen, Tao; Wang, Yan; Yuan, Hongyong
2015-12-01
Gatherings of large human crowds often result in crowd disasters such as the Love Parade Disaster in Duisburg, Germany on July 24, 2010. To avoid these tragedies, video surveillance and early warning are becoming more and more significant. In this paper, the velocity entropy is first defined as the criterion for congestion detection, representing the motion magnitude distribution and the motion direction distribution simultaneously. The detection method is then verified on simulation data based on AnyLogic software. To test the generalization performance of this method, video recordings of a real-world case, the Love Parade disaster, are also used in the experiments. The velocity histograms of the foreground objects in the videos are extracted by the Gaussian Mixture Model (GMM) and optical flow computation. With a sequential change-point detection algorithm, the velocity entropy can be applied to detect congestion at the Love Parade festival. It turned out that, without recognizing and tracking individual pedestrians, our method can detect abnormal crowd behaviors in real time.
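The velocity-entropy idea can be sketched as the Shannon entropy of a joint histogram over velocity magnitude and direction. The bin counts, range limits, and units below are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def velocity_entropy(vx, vy, v_max, mag_bins=8, dir_bins=8):
    """Shannon entropy (bits) of the joint histogram of velocity
    magnitude and direction; magnitudes above v_max are ignored."""
    hist, _, _ = np.histogram2d(
        np.hypot(vx, vy), np.arctan2(vy, vx),
        bins=[mag_bins, dir_bins],
        range=[[0.0, v_max], [-np.pi, np.pi]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

Coherent, unidirectional motion concentrates the histogram in a few bins and gives low entropy; disordered motion spreads the counts out and gives high entropy, which is the signature used for congestion detection.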
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta
2009-07-01
Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
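For orientation, the analysis (update) step that the localized, GMM-clustered schemes above build on can be sketched as the classic stochastic (perturbed-observation) ensemble Kalman filter. This generic textbook update is given for illustration only and omits the paper's localization, clustering, block-updating, and dimension-reduction machinery.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.
    X: (n_state, n_ens) prior ensemble; y: (n_obs,) observation vector;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) noise cov."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # observed anomalies
    P_yy = HA @ HA.T / (n_ens - 1) + R           # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                # state-obs cross-covariance
    K = P_xy @ np.linalg.inv(P_yy)               # Kalman gain
    # perturb the observation once per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - HX)
```

The update pulls the ensemble mean toward the observation and shrinks the spread; the Gaussian assumption this step embodies is precisely what the GMM clustering in the paper is designed to relax.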
A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.
Zhao, Lei; Mi, Dong; Sun, Yeqing
2017-05-07
The multitarget version of the traditional target theory based on the Poisson distribution is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that the usual ionizing radiation damage is the result of two sequential stochastic processes, the probability distribution of the damage number per cell should follow a compound Poisson distribution, such as Neyman's distribution of type A (N. A.). Considering that the Gaussian distribution can be regarded as an approximation of the N. A. distribution in the high-flux case, a multitarget model based on the Gaussian distribution is proposed to describe the cell inactivation effects of low linear energy transfer (LET) radiation at high dose rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the linear-quadratic (LQ) model in describing the biological effects of low-LET radiation at high dose rate, and the parameter ratio in the present model can be used as an alternative indicator to reflect the radiation damage and radiosensitivity of the cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
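For context, the two classical dose-survival models against which the proposed Gaussian-based theory is benchmarked have simple closed forms. The sketch below implements only those textbook models; the paper's Gaussian-based variant is not reproduced here.

```python
import math

def survival_multitarget(D, n, D0):
    """Classic multitarget survival fraction S = 1 - (1 - exp(-D/D0))^n,
    derived from Poisson statistics of hits on n identical targets."""
    return 1.0 - (1.0 - math.exp(-D / D0)) ** n

def survival_lq(D, alpha, beta):
    """Linear-quadratic survival fraction S = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * D + beta * D * D))
```

The multitarget curve has a shoulder at low dose (zero initial slope for n > 1), one of its known shortcomings relative to the LQ model mentioned above.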
Mapping iron oxides and the color of Australian soil using visible-near-infrared reflectance spectra
NASA Astrophysics Data System (ADS)
Viscarra Rossel, R. A.; Bui, E. N.; de Caritat, P.; McKenzie, N. J.
2010-12-01
Iron (Fe) oxide mineralogy in most Australian soils is poorly characterized, even though Fe oxides play an important role in soil function. Fe oxides reflect the conditions of pH, redox potential, moisture, and temperature in the soil environment. The strong pigmenting effect of Fe oxides gives most soils their color, which is largely a reflection of the soil's Fe mineralogy. Visible-near-infrared (vis-NIR) spectroscopy can be used to identify and measure the abundance of certain Fe oxides in soil, and the visible range can be used to derive tristimuli soil color information. The aims of this paper are (1) to measure the abundance of hematite and goethite in Australian soils from their vis-NIR spectra, (2) to compare these results to measurements of soil color, and (3) to describe the spatial variability of hematite, goethite, and soil color and map their distribution across Australia. We measured the spectra of 4606 surface soil samples from across Australia using a vis-NIR spectrometer with a wavelength range of 350-2500 nm. We determined the Fe oxide abundance for each sample using the diagnostic absorption features of hematite (near 880 nm) and goethite (near 920 nm) and derived a normalized iron oxide difference index (NIODI) to better discriminate between them. The NIODI was generalized across Australia with its spatial uncertainty using sequential indicator simulation, which resulted in a map of the probability of the occurrence of hematite and goethite. We also derived soil RGB color from the spectra and mapped its distribution and uncertainty across the country using sequential Gaussian simulations. The simulated RGB color values were made into a composite true color image and were also converted to Munsell hue, value, and chroma. These color maps were compared to the map of the NIODI, and both were used to interpret our results. 
The work presented here was validated by randomly splitting the data into training and test data sets, as well as by comparing our results to existing studies on the distribution of Fe oxides in Australian soils.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olea, Ricardo A., E-mail: olea@usgs.gov; Cook, Troy A.; Coleman, James L.
2010-12-15
The Greater Natural Buttes tight natural gas field is an unconventional (continuous) accumulation in the Uinta Basin, Utah, that began production in the early 1950s from the Upper Cretaceous Mesaverde Group. Three years later, production was extended to the Eocene Wasatch Formation. With the exclusion of 1100 non-productive ('dry') wells, we estimate that the final recovery from the 2500 producing wells existing in 2007 will be about 1.7 trillion standard cubic feet (TSCF) (48.2 billion cubic meters (BCM)). The use of estimated ultimate recovery (EUR) per well is common in assessments of unconventional resources, and it is one of the main sources of information to forecast undiscovered resources. Each calculated recovery value has an associated drainage area that generally varies from well to well and that can be mathematically subdivided into elemental subareas of constant size and shape called cells. Recovery per 5-acre cells at Greater Natural Buttes shows spatial correlation; hence, statistical approaches that ignore this correlation when inferring EUR values for untested cells do not take full advantage of all the information contained in the data. More critically, resulting models do not match the style of spatial EUR fluctuations observed in nature. This study takes a new approach by applying spatial statistics to model geographical variation of cell EUR taking into account spatial correlation and the influence of fractures. We applied sequential indicator simulation to model non-productive cells, while spatial mapping of cell EUR was obtained by applying sequential Gaussian simulation to provide multiple versions of reality (realizations) having equal chances of being the correct model.
For each realization, summation of EUR in cells not drained by the existing wells allowed preparation of a stochastic prediction of undiscovered resources, which range between 2.6 and 3.4 TSCF (73.6 and 96.3 BCM) with a mean of 2.9 TSCF (82.1 BCM) for Greater Natural Buttes. A second approach illustrates the application of multiple-point simulation to assess a hypothetical frontier area for which there is no production information but which is regarded as being similar to Greater Natural Buttes.
Ersoy, Adem; Yunsel, Tayfun Yusuf; Atici, Umit
2008-02-01
Contamination of soil with heavy metals such as lead and zinc from abandoned mine workings has occurred on a global scale. Exposure to these elements may harm human health and the environment. In this study, a total of 269 soil samples were collected at 1, 5, and 10 m regular grid intervals over a 100 x 100 m area of Carsington Pasture in the UK. A cell declustering technique was applied because the data set was not statistically representative. Directional experimental semivariograms of the elements for the transformed data showed that both geometric and zonal anisotropy exist in the data. The most evident spatial dependence structures of the directional experimental semivariograms of Pb and Zn were obtained, characterized by spherical and exponential models. This study reports the spatial distribution and uncertainty of Pb and Zn concentrations in soil at the study site using a probabilistic approach. The approach was based on geostatistical sequential Gaussian simulation (SGS), which yields a series of conditional images characterized by equally probable spatial distributions of the heavy-element concentrations across the area. Postprocessing of many simulations allowed the mapping of contaminated and uncontaminated areas, and provided a model for the uncertainty in the spatial distribution of element concentrations. Maps of the simulated Pb and Zn concentrations revealed the extent and severity of contamination. SGS was validated by statistics, histogram and variogram reproduction, and simulation errors. The resulting maps may be used in remediation studies and may help decision-makers and others involved with abandoned heavy-metal mining sites worldwide.
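The SGS engine referenced throughout these abstracts can be illustrated with a deliberately small 1-D sketch. It assumes already-standard-normal data (no normal-score transform), a zero-mean simple-kriging system, an exponential covariance, and a global neighbourhood, unlike production codes such as GSLIB's SGSIM.

```python
import numpy as np

def sgs_1d(grid_x, cond_x, cond_z, vrange=10.0, sill=1.0, seed=0):
    """Minimal 1-D sequential Gaussian simulation: visit grid nodes along
    a random path, compute a simple-kriging mean and variance at each node
    from the conditioning data plus all previously simulated nodes, draw
    from that conditional Gaussian, and add the draw to the conditioning
    set."""
    rng = np.random.default_rng(seed)

    def cov(h):
        return sill * np.exp(-3.0 * np.abs(h) / vrange)   # exponential model

    known_x = list(cond_x)
    known_z = list(cond_z)
    out = np.empty(len(grid_x))
    for i in rng.permutation(len(grid_x)):       # random simulation path
        kx = np.array(known_x)
        kz = np.array(known_z)
        C = cov(kx[:, None] - kx[None, :]) + 1e-8 * np.eye(len(kx))
        c = cov(kx - grid_x[i])
        w = np.linalg.solve(C, c)                # simple-kriging weights
        mean = float(w @ kz)
        var = max(sill - float(w @ c), 1e-12)    # simple-kriging variance
        out[i] = mean + np.sqrt(var) * rng.standard_normal()
        known_x.append(grid_x[i])                # condition later nodes on it
        known_z.append(out[i])
    return out
```

Because each simulated node joins the conditioning set, every realization honours the data; rerunning with different seeds yields the equally probable realizations from which the uncertainty maps above are built.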
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
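The common-origin mechanism above, a random parametrisation of the stochastic force, can be illustrated in its simplest superstatistical form: draw a diffusivity per trajectory, then a Gaussian displacement conditional on it. The exponential mixing density below is an illustrative assumption, not the paper's memory-kernel parametrisation.

```python
import numpy as np

def superstatistical_displacements(n, dt=1.0, mean_diffusivity=1.0, seed=0):
    """Each trajectory gets its own diffusivity D drawn from an exponential
    distribution; conditional on D the displacement over time dt is Gaussian
    with variance 2*D*dt. The resulting mixture is non-Gaussian."""
    rng = np.random.default_rng(seed)
    D = rng.exponential(mean_diffusivity, size=n)
    return rng.normal(0.0, np.sqrt(2.0 * D * dt))
```

An exponential mixture of Gaussian variances is exactly a Laplace distribution, so the excess kurtosis of the displacements is about 3 rather than the Gaussian value 0, a minimal instance of "Gaussian conditionally, non-Gaussian marginally".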
Generation of singular optical beams from fundamental Gaussian beam using Sagnac interferometer
NASA Astrophysics Data System (ADS)
Naik, Dinesh N.; Viswanathan, Nirmal K.
2016-09-01
We propose a simple free-space optics recipe for the controlled generation of optical vortex beams with a vortex dipole or a single charge vortex, using an inherently stable Sagnac interferometer. We investigate the role played by the amplitude and phase differences in generating higher-order Gaussian beams from the fundamental Gaussian mode. Our simulation results reveal how important the control of both the amplitude and the phase difference between superposing beams is to achieving optical vortex beams. The creation of a vortex dipole from null interference is unveiled through the introduction of a lateral shear and a radial phase difference between two out-of-phase Gaussian beams. A stable and high quality optical vortex beam, equivalent to the first-order Laguerre-Gaussian beam, is synthesized by coupling lateral shear with linear phase difference, introduced orthogonal to the shear between two out-of-phase Gaussian beams.
Anomalous and non-Gaussian diffusion in Hertzian spheres
NASA Astrophysics Data System (ADS)
Ouyang, Wenze; Sun, Bin; Sun, Zhiwei; Xu, Shenghua
2018-09-01
By means of molecular dynamics simulations, we study the non-Gaussian diffusion in the fluid of Hertzian spheres. The time-dependent non-Gaussian parameter, as an indicator of the dynamic heterogeneity, increases with increasing temperature. When the temperature is high enough, the dynamic heterogeneity becomes very significant, and it seems counterintuitive that the maximum of the non-Gaussian parameter and the position of its peak decrease monotonically with increasing density. By fitting the curves of the self-intermediate scattering function, we find that the characteristic relaxation time τα is surprisingly not coupled with the time τmax at which the non-Gaussian parameter reaches a maximum. The intriguing features of non-Gaussian diffusion at high enough temperatures can be associated with the weakly correlated mean-field behavior of Hertzian spheres. In particular, the time τmax is nearly inversely proportional to the density at extremely high temperatures.
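The non-Gaussian parameter used above as a heterogeneity indicator is conventionally defined in 3-D as alpha_2 = 3<Δr⁴>/(5<Δr²>²) - 1. The sketch below computes it from displacement vectors; the paper's exact convention may differ.

```python
import numpy as np

def non_gaussian_parameter_3d(disp):
    """alpha_2 = 3<r^4> / (5 <r^2>^2) - 1 for 3-D displacement vectors;
    zero for Gaussian displacements, positive for heterogeneous dynamics."""
    r2 = np.sum(np.asarray(disp, float) ** 2, axis=1)
    return float(3.0 * np.mean(r2 ** 2) / (5.0 * np.mean(r2) ** 2) - 1.0)
```

A mixture of fast and slow populations, the classic signature of dynamic heterogeneity, drives this quantity well above zero.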
NASA Astrophysics Data System (ADS)
Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing
2018-05-01
We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. It enables Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified by experimental data in a replay-mode simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, A.; Borland, M.
Both intra-beam scattering (IBS) and the Touschek effect become prominent for multi-bend-achromat- (MBA-) based ultra-low-emittance storage rings. To mitigate the transverse emittance degradation and obtain a reasonably long beam lifetime, a higher harmonic rf cavity (HHC) is often proposed to lengthen the bunch. The use of such a cavity results in a non-Gaussian longitudinal distribution. However, common methods for computing IBS and Touschek scattering assume Gaussian distributions. Modifications have been made to several simulation codes that are part of the elegant [1] toolkit to allow these computations for arbitrary longitudinal distributions. After describing these modifications, we review the results of detailed simulations for the proposed hybrid seven-bend-achromat (H7BA) upgrade lattice [2] for the Advanced Photon Source.
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
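The kernel density estimation step at the heart of the method can be illustrated in one dimension. The paper uses multivariate kernel density estimators within a Bayesian beamforming formulation; the sketch below, with Silverman's rule-of-thumb bandwidth as an assumed default, shows only the density-estimation ingredient.

```python
import numpy as np

def gaussian_kde_1d(samples, x, bandwidth=None):
    """One-dimensional Gaussian kernel density estimate evaluated at the
    points x; Silverman's rule of thumb sets the bandwidth if none is
    given."""
    s = np.asarray(samples, float)
    x = np.asarray(x, float)
    n = s.size
    if bandwidth is None:
        bandwidth = 1.06 * s.std(ddof=1) * n ** (-1.0 / 5.0)
    u = (x[:, None] - s[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2.0 * np.pi))
```

Unlike a second-order (covariance-only) description, the estimated density retains the full non-Gaussian shape of the data, which is the property the source-localisation method exploits.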
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibniz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known among physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute the thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that the algorithm is very promising.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2017-02-01
This report considers the development of a unified Monte Carlo (MC)-based computational model for simulating the propagation of Laguerre-Gaussian (LG) beams in turbid tissue-like scattering media. With the primary goal of proving the concept of using complex light for tissue diagnosis, we explore the propagation of LG beams in comparison with Gaussian beams for both linear and circular polarization. MC simulations of radially and azimuthally polarized LG beams in turbid media have been performed; classic phenomena such as preservation of the orbital angular momentum, optical memory, and helicity flip are observed, and a detailed comparison is presented and discussed.
Nicolas, F; Coëtmellec, S; Brunel, M; Allano, D; Lebrun, D; Janssen, A J E M
2005-11-01
The authors have studied the diffraction pattern produced by a particle field illuminated by an elliptic and astigmatic Gaussian beam. They demonstrate that the bidimensional fractional Fourier transformation is a mathematically suitable tool to analyse the diffraction pattern generated not only by a collimated plane wave [J. Opt. Soc. Am A 19, 1537 (2002)], but also by an elliptic and astigmatic Gaussian beam when two different fractional orders are considered. Simulations and experimental results are presented.
Elegant Ince-Gaussian breathers in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi
2012-06-01
A novel class of optical breathers, called elegant Ince-Gaussian breathers, is presented in this paper. They are exact analytical solutions of Snyder and Mitchell's model in an elliptic coordinate system, and their transverse structures are described by Ince polynomials with complex arguments and a Gaussian function. We provide convincing evidence for the correctness of the solutions and the existence of the breathers by comparing the analytical solutions with numerical simulation of the nonlocal nonlinear Schrödinger equation.
NASA Astrophysics Data System (ADS)
Kumari, Vandana; Kumar, Ayush; Saxena, Manoj; Gupta, Mridula
2018-01-01
The sub-threshold model formulation of the Gaussian Doped Double Gate JunctionLess (GD-DG-JL) FET, including the source/drain depletion length, is reported in the present work under the assumption that the ungated regions are fully depleted. To provide deeper insight into the device performance, the impact of the Gaussian straggle, channel length, oxide and channel thickness, and high-k gate dielectric has been studied using extensive TCAD device simulation.
Orthogonal Gaussian process models
Plumlee, Matthew; Joseph, V. Roshan
2017-01-01
Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.
Sequential Computerized Mastery Tests--Three Simulation Studies
ERIC Educational Resources Information Center
Wiberg, Marie
2006-01-01
A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
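The 3-parameter logistic model used to generate examinee responses in such simulations has a standard closed form. The version below uses the logistic metric without the 1.7 scaling constant, which is an assumption about convention.

```python
import math

def p_3pl(theta, a, b, c):
    """3-parameter logistic IRT model: probability of a correct response
    for ability theta, with discrimination a, difficulty b, and guessing
    parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is exactly halfway between the guessing floor c and 1, and the curve approaches c and 1 in the low- and high-ability limits.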
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has been popular in the Earth Sciences for more than a decade, because these methods can generate random fields that reproduce the highly complex spatial features of a conceptual model, the training image, whereas classical geostatistical techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting each of its nodes in a random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the simulation grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) yields a lower-resolution image; iterating this process builds a pyramid of images depicting fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest resolution level, and subsequently each level, up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures at every scale of the training image and thus generates more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited to MPS simulation techniques.
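The multi-resolution decomposition described above can be sketched as a Gaussian pyramid built by repeated convolution. The kernel size and the choice to keep all levels at full resolution (no subsampling) are simplifications for illustration.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian convolution with reflective padding."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    # horizontal pass, then vertical pass
    tmp = sum(k[j] * pad[:, j:j + w] for j in range(2 * radius + 1))
    return sum(k[j] * tmp[j:j + h, :] for j in range(2 * radius + 1))

def pyramid(img, levels=3, sigma=1.0):
    """Multi-resolution set: each level is a further-smoothed copy of the
    previous one, so fine details fade level by level."""
    out = [np.asarray(img, float)]
    for _ in range(levels - 1):
        out.append(gaussian_blur(out[-1], sigma=sigma))
    return out
```

Each smoothing step suppresses small-scale variability while preserving the large-scale structure, which is exactly the separation of scales the coarse-to-fine direct-sampling scheme relies on.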
Managing numerical errors in random sequential adsorption
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Nowak, Aleksandra
2016-09-01
The aim of this study is to examine the influence of a finite surface size and a finite simulation time on the packing fraction estimated using random sequential adsorption simulations. Of particular interest is providing hints on simulation setup to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous and flat surfaces of different sizes.
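A minimal sketch of the random sequential adsorption process under study, for disks on a periodic square surface (box size, disk radius, and attempt count below are illustrative choices):

```python
import numpy as np

def rsa_disks(box=15.0, radius=1.0, attempts=6000, seed=0):
    """Random sequential adsorption of equal hard disks on a periodic
    box x box surface: propose uniformly random centres, accept a proposal
    only if it overlaps no previously accepted disk, and never move or
    remove accepted disks."""
    rng = np.random.default_rng(seed)
    centres = np.empty((0, 2))
    for _ in range(attempts):
        p = rng.uniform(0.0, box, size=2)
        d = np.abs(centres - p)
        d = np.minimum(d, box - d)              # periodic minimum image
        if np.all((d ** 2).sum(axis=1) >= (2.0 * radius) ** 2):
            centres = np.vstack([centres, p])
    packing_fraction = len(centres) * np.pi * radius ** 2 / box ** 2
    return centres, packing_fraction
```

Because late-stage insertions are rarely accepted, the estimated packing fraction approaches the disk jamming limit (about 0.547) only slowly, which is precisely the finite-time effect the study quantifies.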
Tracer diffusion in a sea of polymers with binding zones: mobile vs. frozen traps.
Samanta, Nairhita; Chakrabarti, Rajarshi
2016-10-19
We use molecular dynamics simulations to investigate the tracer diffusion in a sea of polymers with specific binding zones for the tracer. These binding zones act as traps. Our simulations show that the tracer can undergo normal yet non-Gaussian diffusion under certain circumstances, e.g., when the polymers with traps are frozen in space and the volume fraction and the binding strength of the traps are moderate. In this case, as the tracer moves, it experiences a heterogeneous environment and exhibits confined continuous time random walk (CTRW) like motion resulting in a non-Gaussian behavior. Also the long time dynamics becomes subdiffusive as the number or the binding strength of the traps increases. However, if the polymers are mobile then the tracer dynamics is Gaussian but could be normal or subdiffusive depending on the number and the binding strength of the traps. In addition, with increasing binding strength and number of polymer traps, the probability of the tracer being trapped increases. On the other hand, removing the binding zones does not result in trapping, even at comparatively high crowding. Our simulations also show that the trapping probability increases with the increasing size of the tracer and for a bigger tracer with the frozen polymer background the dynamics is only weakly non-Gaussian but highly subdiffusive. Our observations are in the same spirit as found in many recent experiments on tracer diffusion in polymeric materials and question the validity of using Gaussian theory to describe diffusion in a crowded environment in general.
Novel theory for propagation of tilted Gaussian beam through aligned optical system
NASA Astrophysics Data System (ADS)
Xia, Lei; Gao, Yunguo; Han, Xudong
2017-03-01
A novel theory for tilted beam propagation is established in this paper. By setting the propagation direction of the tilted beam as the new optical axis, we establish a virtual optical system that is aligned with the new optical axis. Within the first order approximation of the tilt and off-axis, the propagation of the tilted beam is studied in the virtual system instead of the actual system. To achieve more accurate optical field distributions of tilted Gaussian beams, a complete diffraction integral for a misaligned optical system is derived by using the matrix theory with angular momentums. The theory demonstrates that a tilted TEM00 Gaussian beam passing through an aligned optical element transforms into a decentered Gaussian beam along the propagation direction. The deviations between the peak intensity axis of the decentered Gaussian beam and the new optical axis have linear relationships with the misalignments in the virtual system. ZEMAX simulation of a tilted beam through a thick lens exposed to air shows that the errors between the simulation results and theoretical calculations of the position deviations are less than 2‰ when the misalignments εx, εy, εx', εy' are in the range of [-0.5, 0.5] mm and [-0.5, 0.5]°.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy
1993-01-01
Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1, 2, ..., j new observations. Individual-value probability products ('likelihoods') are then calculated, which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
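The comparison at the heart of the procedure can be illustrated for the simplest case, a possible change in a Gaussian mean with known variance. The sketch below is a minimal reduction, not the paper's full algebra; the function names and the fixed unit variance are assumptions for illustration:

```python
import math

def gaussian_loglik(xs, mu, sigma):
    """Log-likelihood of observations under N(mu, sigma^2)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in xs)

def no_change_probability(reference, new):
    """Illustrative 'no change' statistic: likelihood of the augmented
    set under the mean fitted to the reference set alone, relative to
    the likelihood under its own best-fitting mean. The ratio is <= 1
    and drops progressively as the new data drift away from the
    reference, which is the signal to restart the procedure."""
    aug = reference + new
    mu_ref = sum(reference) / len(reference)
    mu_aug = sum(aug) / len(aug)
    sigma = 1.0  # fixed for simplicity; the paper also treats variances and trends
    lr = gaussian_loglik(aug, mu_ref, sigma) - gaussian_loglik(aug, mu_aug, sigma)
    return math.exp(lr)
```

With new observations drawn from the reference distribution the statistic stays near 1; after a mean shift it decays towards 0.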
Gaussian theory for spatially distributed self-propelled particles
NASA Astrophysics Data System (ADS)
Seyed-Allaei, Hamid; Schimansky-Geier, Lutz; Ejtehadi, Mohammad Reza
2016-12-01
Obtaining a reduced description with particle and momentum flux densities from the microscopic equations of motion of the particles requires approximations. The usual method, which we refer to as the truncation method, is to set to zero all Fourier modes of the orientation distribution above a given order. Here we propose another method to derive continuum equations for interacting self-propelled particles. The derivation is based on a Gaussian approximation (GA) of the distribution of the direction of particles. First, by means of simulation of the microscopic model, we justify that the distribution of individual directions fits well to a wrapped Gaussian distribution. Second, we numerically integrate the continuum equations derived in the GA in order to compare with results of simulations. We obtain that the global polarization in the GA exhibits a hysteresis in dependence on the noise intensity. It shows qualitatively the same behavior as we find in particle simulations. Moreover, both global polarizations agree perfectly for low noise intensities. The spatiotemporal structures of the GA are also in agreement with simulations. We conclude that the GA shows qualitative agreement for a wide range of noise intensities. In particular, for low noise intensities the agreement with simulations is better than that of other approximations, making the GA an acceptable candidate for describing spatially distributed self-propelled particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lange, R.; Dickerson, M.A.; Peterson, K.R.
Two numerical models for the calculation of air concentration and ground deposition of airborne effluent releases are compared. The Particle-in-Cell (PIC) model and the Straight-Line Airflow Gaussian model were used for the simulation. Two sites were selected for comparison: the Hudson River Valley, New York, and the area around the Savannah River Plant, South Carolina. Input for the models was synthesized from meteorological data gathered in previous studies by various investigators. It was found that the PIC model more closely simulated the three-dimensional effects of the meteorology and topography. Overall, the Gaussian model calculated higher concentrations under stable conditions, with better agreement between the two methods during neutral to unstable conditions. In addition, because of its consideration of exposure from the returning plume after flow reversal, the PIC model calculated air concentrations over larger areas than did the Gaussian model.
Dynamical heterogeneities of cold 2D Yukawa liquids
NASA Astrophysics Data System (ADS)
Wang, Kang; Huang, Dong; Feng, Yan
2018-06-01
Dynamical heterogeneities of 2D liquid dusty plasmas at different temperatures are investigated systematically using Langevin dynamical simulations. From the simulated trajectories, various heterogeneity measures have been calculated, such as the distance matrix, the averaged squared displacement, the non-Gaussian parameter, and the four-point susceptibility. It is found that, for 2D Yukawa liquids, both spatial and temporal heterogeneities in dynamics are more severe at a lower temperature near the melting point. For various temperatures, the calculated non-Gaussian parameter of 2D Yukawa liquids contains two peaks at different times, indicating the most heterogeneous dynamics, which are attributed to the transition of different motions and the α relaxation time, respectively. In the diffusive motion, the most heterogeneous dynamics for a colder Yukawa liquid happen more slowly, as indicated by both the non-Gaussian parameter and the four-point susceptibility.
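The non-Gaussian parameter reported above is a standard trajectory diagnostic. A minimal sketch of its computation from displacement data (the function name and array layout are assumptions; for 2D Yukawa liquids one would set d = 2):

```python
import numpy as np

def non_gaussian_parameter(displacements, d=2):
    """alpha_2(t) = d <dr^4> / ((d + 2) <dr^2>^2) - 1 in d dimensions.
    `displacements` is an (N, d) array of particle displacements at lag t.
    The parameter vanishes for Gaussian displacements and grows positive
    when the dynamics are heterogeneous."""
    r2 = np.sum(displacements**2, axis=1)   # squared displacement per particle
    return d * np.mean(r2**2) / ((d + 2) * np.mean(r2)**2) - 1.0
```

Evaluating this over a range of lag times t yields the two-peaked curves discussed in the abstract; the four-point susceptibility is obtained analogously from fluctuations of an overlap function.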
NASA Astrophysics Data System (ADS)
Nishimichi, Takahiro; Taruya, Atsushi; Koyama, Kazuya; Sabiu, Cristiano
2010-07-01
We study the halo bispectrum from non-Gaussian initial conditions. Based on a set of large N-body simulations starting from initial density fields with local-type non-Gaussianity, we find that the halo bispectrum exhibits a strong dependence on the shape and scale of Fourier-space triangles near squeezed configurations at large scales. The amplitude of the halo bispectrum roughly scales as fNL². The resultant scaling on the triangular shape is consistent with that predicted by Jeong & Komatsu based on perturbation theory. We systematically investigate this dependence with varying redshifts and halo mass thresholds. It is shown that the fNL dependence of the halo bispectrum is stronger for more massive haloes at higher redshifts. This feature can be a useful discriminator of inflation scenarios in future deep and wide galaxy redshift surveys.
Gaussian Accelerated Molecular Dynamics: Theory, Implementation, and Applications
Miao, Yinglong; McCammon, J. Andrew
2018-01-01
A novel Gaussian Accelerated Molecular Dynamics (GaMD) method has been developed for simultaneous unconstrained enhanced sampling and free energy calculation of biomolecules. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of the biomolecules. Furthermore, by constructing a boost potential that follows a Gaussian distribution, accurate reweighting of GaMD simulations is achieved via cumulant expansion to the second order. The free energy profiles obtained from GaMD simulations allow us to identify distinct low energy states of the biomolecules and characterize biomolecular structural dynamics quantitatively. In this chapter, we present the theory of GaMD, its implementation in the widely used molecular dynamics software packages (AMBER and NAMD), and applications to the alanine dipeptide biomolecular model system, protein folding, biomolecular large-scale conformational transitions and biomolecular recognition. PMID:29720925
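The two ingredients named above, the harmonic boost potential and the second-order cumulant reweighting, can be sketched as follows (an illustrative reduction; the function names are assumptions, and production GaMD implementations in AMBER and NAMD also adapt the threshold energy E and force constant k on the fly):

```python
import numpy as np

def gamd_boost(V, E, k):
    """GaMD harmonic boost: dV = 0.5 * k * (E - V)^2 when V < E, else 0.
    Added to the potential to flatten the energy landscape."""
    V = np.asarray(V, dtype=float)
    return np.where(V < E, 0.5 * k * (E - V)**2, 0.0)

def cumulant2_reweight_factor(dV, beta):
    """Second-order cumulant approximation of <exp(beta * dV)>, used to
    recover unbiased free energies from the boosted ensemble. Accurate
    because the boost dV follows a near-Gaussian distribution."""
    c1 = beta * np.mean(dV)
    c2 = 0.5 * beta**2 * np.var(dV)
    return np.exp(c1 + c2)
```

When dV is exactly Gaussian the two-term cumulant expansion is exact, which is the rationale for constructing the boost to follow a Gaussian distribution.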
Topology of microwave background fluctuations - Theory
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III; Park, Changbom; Bies, William E.; Bennett, David P.; Juszkiewicz, Roman
1990-01-01
Topological measures are used to characterize the microwave background temperature fluctuations produced by 'standard' scenarios (Gaussian) and by cosmic strings (non-Gaussian). Three topological quantities (total area of the excursion regions, total length, and total curvature, or genus, of the isotemperature contours) are studied for simulated Gaussian microwave background anisotropy maps and then compared with those of the non-Gaussian anisotropy pattern produced by cosmic strings. In general, the temperature gradient field shows the non-Gaussian behavior of the string map more distinctively than the temperature field for all topology measures. The total contour length and the genus are found to be more sensitive to the existence of a stringy pattern than the usual temperature histogram. Situations in which instrumental noise is superposed on the map are considered to find the critical signal-to-noise ratio at which strings can be detected.
NASA Astrophysics Data System (ADS)
Mu, Hongqian; Wang, Muguang; Tang, Yu; Zhang, Jing; Jian, Shuisheng
2018-03-01
A novel scheme for the generation of an FCC-compliant UWB pulse is proposed based on a modified Gaussian quadruplet and incoherent wavelength-to-time conversion. The modified Gaussian quadruplet is synthesized as a linear sum of a broad Gaussian pulse and two narrow Gaussian pulses with the same pulse width and amplitude peak. Within a specific parameter range, an FCC-compliant UWB pulse with spectral power efficiency higher than 39.9% can be achieved. In order to realize the designed waveform, a UWB generator based on spectral shaping and incoherent wavelength-to-time mapping is proposed. The spectral shaper is composed of a Gaussian filter and a programmable filter. Single-mode fiber functions as both the dispersion device and the transmission medium. Balanced photodetection is employed to combine linearly the broad Gaussian pulse and the two narrow Gaussian pulses, and at the same time to suppress the pulse pedestals that give rise to low-frequency components. The proposed UWB generator can be reconfigured for a UWB doublet by operating the programmable filter as a single-band Gaussian filter. The feasibility of the proposed UWB generator is demonstrated experimentally. Measured UWB pulses match well with simulation results. An FCC-compliant quadruplet with a 10-dB bandwidth of 6.88 GHz, a fractional bandwidth of 106.8% and a power efficiency of 51% is achieved.
On the Response of a Nonlinear Structure to High Kurtosis Non-Gaussian Random Loadings
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam; Turner, Travis L.
2011-01-01
This paper is a follow-on to recent work by the authors in which the response and high-cycle fatigue of a nonlinear structure subject to non-Gaussian loadings was found to vary markedly depending on the nature of the loading. There it was found that a non-Gaussian loading having a steady rate of short-duration, high-excursion peaks produced essentially the same response as would have been incurred by a Gaussian loading. In contrast, a non-Gaussian loading having the same kurtosis, but with bursts of high-excursion peaks was found to elicit a much greater response. This work is meant to answer the question of when consideration of a loading probability distribution other than Gaussian is important. The approach entailed nonlinear numerical simulation of a beam structure under Gaussian and non-Gaussian random excitations. Whether the structure responded in a Gaussian or non-Gaussian manner was determined by adherence to, or violations of, the Central Limit Theorem. Over a practical range of damping, it was found that the linear response to a non-Gaussian loading was Gaussian when the period of the system impulse response is much greater than the rate of peaks in the loading. Lower damping reduced the kurtosis, but only when the linear response was non-Gaussian. In the nonlinear regime, the response was found to be non-Gaussian for all loadings. The effect of a spring-hardening type of nonlinearity was found to limit extreme values and thereby lower the kurtosis relative to the linear response regime. In this case, lower damping gave rise to greater nonlinearity, resulting in lower kurtosis than a higher level of damping.
Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set
Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.
1996-01-01
This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, they yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides a strong motivation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods.
If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
Permutation entropy of fractional Brownian motion and fractional Gaussian noise
NASA Astrophysics Data System (ADS)
Zunino, L.; Pérez, D. G.; Martín, M. T.; Garavaglia, M.; Plastino, A.; Rosso, O. A.
2008-06-01
We have worked out theoretical curves for the permutation entropy of the fractional Brownian motion and fractional Gaussian noise by using the Bandt and Shiha [C. Bandt, F. Shiha, J. Time Ser. Anal. 28 (2007) 646] theoretical predictions for their corresponding relative frequencies. Comparisons with numerical simulations show an excellent agreement. Furthermore, the entropy-gap in the transition between these processes, observed previously via numerical results, has been here theoretically validated. Also, we have analyzed the behaviour of the permutation entropy of the fractional Gaussian noise for different time delays.
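For reference, the Bandt-Pompe permutation entropy underlying these curves can be computed directly from a time series. A minimal sketch (the function name is an assumption; normalization by log D! is the usual convention):

```python
import math
from collections import Counter

def permutation_entropy(series, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy in [0, 1].
    Counts ordinal patterns of embedding dimension `order` at lag `delay`
    and takes the Shannon entropy of their relative frequencies."""
    n = len(series) - (order - 1) * delay
    patterns = Counter(
        # argsort of each window gives its ordinal pattern
        tuple(sorted(range(order), key=lambda k: series[i + k * delay]))
        for i in range(n)
    )
    probs = [c / n for c in patterns.values()]
    H = -sum(p * math.log(p) for p in probs)
    return H / math.log(math.factorial(order))  # normalize by log(order!)
```

A monotone series yields 0 and white noise approaches 1; fractional Gaussian noise falls in between, with the dependence on the time delay analyzed in the paper.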
Plechawska, Małgorzata; Polańska, Joanna
2009-01-01
This article presents a method for the processing of mass spectrometry data. Mass spectra are modelled with Gaussian mixture models: every peak of the spectrum is represented by a single Gaussian, whose parameters describe the location, height and width of the corresponding peak. The authors' own implementation of the expectation-maximisation (EM) algorithm was used to perform all calculations. Errors were estimated with a virtual mass spectrometer, a tool originally designed to generate sets of spectra with defined parameters.
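The EM fit described above can be sketched for samples drawn along the m/z axis (a minimal, generic sketch; the quantile initialization and function name are assumptions, and the paper's version fits peak location, height and width per Gaussian):

```python
import math

def em_gmm_1d(xs, k, iters=100):
    """Plain EM for a one-dimensional Gaussian mixture model.
    Returns (weights, means, sigmas); each fitted component plays the
    role of one spectral peak (location = mean, height via weight,
    width = sigma)."""
    xs_sorted = sorted(xs)
    n = len(xs)
    mus = [xs_sorted[int((j + 0.5) * n / k)] for j in range(k)]  # quantile init
    sigmas = [1.0] * k
    ws = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in xs:
            dens = [w * math.exp(-((x - m) ** 2) / (2 * s * s))
                    / (s * math.sqrt(2 * math.pi))
                    for w, m, s in zip(ws, mus, sigmas)]
            tot = sum(dens) or 1e-300
            resp.append([d / tot for d in dens])
        # M-step: re-estimate weight, mean and width of every component
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-300
            ws[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-6)
    return ws, mus, sigmas
```

On data with well-separated peaks the fitted means recover the peak locations; overlapping peaks are apportioned by the responsibilities.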
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a time constraint. Several simulation methods are currently available: logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
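For context, the likelihood weighting baseline clamps evidence nodes and weights each sample by the likelihood of the evidence given its sampled parents. A minimal sketch on a hypothetical two-node network (the network, probabilities and function name are all illustrative assumptions):

```python
import random

def likelihood_weighting(n_samples, seed=0):
    """Likelihood weighting on a toy network Cloudy -> Rain.
    Evidence: Rain = True. Estimates P(Cloudy | Rain = True).
    Non-evidence nodes are sampled from their priors; evidence nodes
    are clamped and contribute a weight P(evidence | sampled parents)."""
    rng = random.Random(seed)
    p_cloudy = 0.5
    p_rain = {True: 0.8, False: 0.2}    # P(Rain = True | Cloudy)
    num = den = 0.0
    for _ in range(n_samples):
        cloudy = rng.random() < p_cloudy    # sample the non-evidence node
        w = p_rain[cloudy]                  # weight by evidence likelihood
        num += w * cloudy
        den += w
    return num / den   # analytic answer for this network is 0.8
```

LGIS improves on this by replacing the prior sampling distribution for continuous variables with an adaptively learned linear-Gaussian importance function.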
Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.
Li, Haoxiang; Hua, Gang
2018-04-01
Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Faces in the Wild (LFW) dataset, the YouTube video face database, and the CMU Multi-PIE dataset.
Chimera states in Gaussian coupled map lattices
NASA Astrophysics Data System (ADS)
Li, Xiao-Wen; Bi, Ran; Sun, Yue-Xiang; Zhang, Shuo; Song, Qian-Qian
2018-04-01
We study chimera states in one-dimensional and two-dimensional Gaussian coupled map lattices through simulations and experiments. Similar to the case of global coupling oscillators, individual lattices can be regarded as being controlled by a common mean field. A space-dependent order parameter is derived from a self-consistency condition in order to represent the collective state.
Gas kinematics in FIRE simulated galaxies compared to spatially unresolved H I observations
NASA Astrophysics Data System (ADS)
El-Badry, Kareem; Bradford, Jeremy; Quataert, Eliot; Geha, Marla; Boylan-Kolchin, Michael; Weisz, Daniel R.; Wetzel, Andrew; Hopkins, Philip F.; Chan, T. K.; Fitts, Alex; Kereš, Dušan; Faucher-Giguère, Claude-André
2018-06-01
The shape of a galaxy's spatially unresolved, globally integrated 21-cm emission line depends on its internal gas kinematics: galaxies with rotationally supported gas discs produce double-horned profiles with steep wings, while galaxies with dispersion-supported gas produce Gaussian-like profiles with sloped wings. Using mock observations of simulated galaxies from the FIRE project, we show that one can therefore constrain a galaxy's gas kinematics from its unresolved 21-cm line profile. In particular, we find that the kurtosis of the 21-cm line increases with decreasing V/σ and that this trend is robust across a wide range of masses, signal-to-noise ratios, and inclinations. We then quantify the shapes of 21-cm line profiles from a morphologically unbiased sample of ~2000 low-redshift, H I-detected galaxies with Mstar = 10^7-10^11 M⊙ and compare to the simulated galaxies. At Mstar ≳ 10^10 M⊙, both the observed and simulated galaxies produce double-horned profiles with low kurtosis and steep wings, consistent with rotationally supported discs. Both the observed and simulated line profiles become more Gaussian-like (higher kurtosis and less-steep wings) at lower masses, indicating increased dispersion support. However, the simulated galaxies transition from rotational to dispersion support more strongly: at Mstar = 10^8-10^10 M⊙, most of the simulations produce more Gaussian-like profiles than typical observed galaxies of similar mass, indicating that gas in the low-mass simulated galaxies is, on average, overly dispersion supported. Most of the lower-mass simulated galaxies also have somewhat lower gas fractions than the median of the observed population. The simulations nevertheless reproduce the observed line-width baryonic Tully-Fisher relation, which is insensitive to rotational versus dispersion support.
Cluster mass inference via random field theory.
Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D
2009-01-01
Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
NASA Astrophysics Data System (ADS)
Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang
2018-03-01
Simulation of randomly rough bioparticle surfaces is crucial to better understand and control interface behaviors and membrane fouling. A survey of the literature indicated a lack of an effective method for simulating randomly rough bioparticle surfaces. In this study, a new method which combines a Gaussian distribution, the Fourier transform, the spectrum method and coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography of foulant bioparticles was critically affected by the correlation length (l) and the root mean square roughness (σ). The new method proposed in this study shows notable superiority over conventional methods for the simulation of randomly rough foulant bioparticles. The ease, facility and fitness of the new method point towards potential applications in interface behavior and membrane fouling research.
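A common way to combine a Gaussian height distribution, the Fourier transform and a spectrum method, in the spirit of (though not necessarily identical to) the paper's procedure, is to filter white noise in the frequency domain. The function name, the Gaussian correlation model and the final rms rescaling below are assumptions:

```python
import numpy as np

def gaussian_rough_surface(n, dx, sigma, corr_len, seed=0):
    """Generate an n x n randomly rough surface with Gaussian height
    statistics, rms roughness `sigma` and correlation length `corr_len`,
    by spectral filtering of white noise (FFT method)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))          # Gaussian white noise
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # For a Gaussian correlation C(r) = sigma^2 exp(-r^2 / l^2) the PSD is
    # proportional to exp(-k^2 l^2 / 4), so the amplitude filter (sqrt of
    # the PSD) is exp(-k^2 l^2 / 8).
    filt = np.exp(-(kx**2 + ky**2) * corr_len**2 / 8.0)
    h = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    h *= sigma / h.std()                         # rescale to requested rms
    return h
```

A linear filter applied to Gaussian noise preserves Gaussian height statistics, so only sigma and corr_len need to be matched to the measured surface.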
A spatial approach to environmental risk assessment of PAH contamination.
Bengtsson, Göran; Törneman, Niklas
2009-01-01
The extent of remediation of contaminated industrial sites depends on spatial heterogeneity of contaminant concentration and spatially explicit risk characterization. We used sequential Gaussian simulation (SGS) and indicator kriging (IK) to describe the spatial distribution of polycyclic aromatic hydrocarbons (PAHs), pH, electric conductivity, particle aggregate distribution, water holding capacity, and total organic carbon, and quantitative relations among them, in a creosote polluted soil in southern Sweden. The geostatistical analyses were combined with risk analyses, in which the total toxic equivalent concentration of the PAH mixture was calculated from the soil concentrations of individual PAHs and compared with ecotoxicological effect concentrations and regulatory threshold values in block sizes of 1.8 x 1.8 m. Most PAHs were spatially autocorrelated and appeared in several hot spots. The risk calculated by SGS was more confined to specific hot spot areas than the risk calculated by IK, and 40-50% of the site had PAH concentrations exceeding the threshold values with a probability of 80% and higher. The toxic equivalent concentration of the PAH mixture was dependent on the spatial distribution of organic carbon, showing the importance of assessing risk by a combination of measurements of PAH and organic carbon concentrations. Essentially, the same risk distribution pattern was maintained when Monte Carlo simulations were used for implementation of risk in larger (5 x 5 m), economically more feasible remediation blocks, but a smaller area became of great concern for remediation when the simulations included PAH partitioning to two separate sources, creosote and natural, of organic matter, rather than one general.
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
NON-GAUSSIANITIES IN THE LOCAL CURVATURE OF THE FIVE-YEAR WMAP DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudjord, Oeystein; Groeneboom, Nicolaas E.; Hansen, Frode K.
Using the five-year WMAP data, we re-investigate claims of non-Gaussianities and asymmetries detected in local curvature statistics of the one-year WMAP data. In Hansen et al., it was found that the northern ecliptic hemisphere was non-Gaussian at the ~1% level, testing the densities of hill, lake, and saddle points based on the second derivatives of the cosmic microwave background temperature map. The five-year WMAP data have a much lower noise level and better control of systematics. Using these, we find that the anomalies are still present at a consistent level. Also the direction of maximum non-Gaussianity remains. Due to limited availability of computer resources, Hansen et al. were unable to calculate the full covariance matrix for the χ²-test used. Here, we apply the full covariance matrix instead of the diagonal approximation and find that the non-Gaussianities disappear and there is no preferred non-Gaussian direction. We compare with simulations of weak lensing to see if this may cause the observed non-Gaussianity when using a diagonal covariance matrix. We conclude that weak lensing does not produce non-Gaussianity in the local curvature statistics at the scales investigated in this paper. The cause of the non-Gaussian detection in the case of a diagonal matrix remains unclear.
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Simulation studies show that the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
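For readers unfamiliar with the criterion, AIC trades off fit against parameter count as AIC = 2k - 2 ln L. A generic illustrative sketch, unrelated to the mixed ICA/PCA failure mode reported above, in which AIC correctly prefers a Laplace fit over a Gaussian fit for heavy-tailed data:

```python
import math, random

def aic(loglik, k):
    """Akaike Information Criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def gaussian_fit_loglik(data):
    """Maximized Gaussian log-likelihood; 2 fitted parameters (mean, var)."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1), 2

def laplace_fit_loglik(data):
    """Maximized Laplace log-likelihood; 2 fitted parameters (location, scale)."""
    n = len(data)
    loc = sorted(data)[n // 2]  # sample median is the ML location
    b = sum(abs(x - loc) for x in data) / n
    return -n * (math.log(2 * b) + 1), 2

random.seed(1)
# Heavy-tailed (Laplace-distributed) samples: AIC should prefer Laplace
data = [random.choice([-1, 1]) * random.expovariate(1.0) for _ in range(2000)]
aic_g = aic(*gaussian_fit_loglik(data))
aic_l = aic(*laplace_fit_loglik(data))
```

The abstract's point is precisely that this textbook behaviour cannot be relied on in the mixed ICA/PCA setting.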
Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.
Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie
2018-06-12
Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a genetic algorithm optimization method in the optimization of the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
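The Gaussian mutation operator can be grafted onto a textbook PSO in a few lines. A sketch on the 2-D Rastrigin benchmark; note this uses the standard per-dimension PSO update rather than the paper's rotation-invariant formulation, and all parameter values (inertia, mutation rate, mutation width) are illustrative:

```python
import math, random

def rastrigin(x):
    """Multimodal benchmark; global minimum 0 at the origin."""
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def pso_gaussian(f, dim=2, n=30, iters=300, w=0.7, c1=1.5, c2=1.5,
                 sigma=0.5, pm=0.1, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # isotropic Gaussian mutation: occasionally kick a particle
            if rng.random() < pm:
                pos[i] = [x + rng.gauss(0, sigma) for x in pos[i]]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso_gaussian(rastrigin)
```

The mutation injects diversity that helps the swarm escape local basins on multimodal landscapes, which is the mechanism the abstract credits for the improvement.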
On uncertainty quantification in hydrogeology and hydrogeophysics
NASA Astrophysics Data System (ADS)
Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud
2017-12-01
Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
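The entropy criterion itself is compact: with a Gaussian Parzen kernel, the Renyi quadratic error entropy has the closed form H2 = -log[(1/N^2) sum_ij G_{sigma*sqrt(2)}(e_i - e_j)]. A sketch of that static estimator (the kernel width sigma is an arbitrary choice here, and this is not the paper's recursive controller):

```python
import math, random

def parzen_quadratic_entropy(errors, sigma=0.5):
    """Renyi quadratic entropy of the error distribution, estimated by
    Parzen windowing with a Gaussian kernel:
    H2 = -log( (1/N^2) * sum_ij G_{sigma*sqrt(2)}(e_i - e_j) )."""
    n = len(errors)
    s2 = 2 * sigma * sigma  # variance of the pairwise kernel
    norm = 1.0 / math.sqrt(2 * math.pi * s2)
    info_potential = sum(norm * math.exp(-(ei - ej) ** 2 / (2 * s2))
                         for ei in errors for ej in errors) / n ** 2
    return -math.log(info_potential)

random.seed(0)
tight = [random.gauss(0, 0.1) for _ in range(200)]
loose = [random.gauss(0, 1.0) for _ in range(200)]
# Concentrated errors have lower entropy -- what the controller minimizes
```

Minimizing this quantity drives the error density toward a sharp peak, which is why it generalizes mean-square-error minimization to non-Gaussian errors.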
Neural pulse frequency modulation of an exponentially correlated Gaussian process
NASA Technical Reports Server (NTRS)
Hutchinson, C. E.; Chon, Y.-T.
1976-01-01
The effect of NPFM (Neural Pulse Frequency Modulation) on a stationary Gaussian input, namely an exponentially correlated Gaussian input, is investigated with special emphasis on the determination of the average number of pulses in unit time, known also as the average frequency of pulse occurrence. For some classes of stationary input processes where the formulation of the appropriate multidimensional Markov diffusion model of the input-plus-NPFM system is possible, the average impulse frequency may be obtained by a generalization of the approach adopted. The results are approximate and numerical, but are in close agreement with Monte Carlo computer simulation results.
Non-Gaussian Methods for Causal Structure Learning.
Shimizu, Shohei
2018-05-22
Causal structure learning is one of the most exciting new topics in the fields of machine learning and statistics. In many empirical sciences including prevention science, the causal mechanisms underlying various phenomena need to be studied. Nevertheless, in many cases, classical methods for causal structure learning are not capable of estimating the causal structure of variables. This is because they explicitly or implicitly assume Gaussianity of the data and typically utilize only the covariance structure. In many applications, however, non-Gaussian data are often obtained, which means that the data distribution may contain more information than the covariance matrix can capture. Thus, many new methods have recently been proposed for using the non-Gaussian structure of data to infer the causal structure of variables. This paper introduces prevention scientists to such causal structure learning methods, particularly those based on the linear, non-Gaussian, acyclic model known as LiNGAM. These non-Gaussian data analysis tools can fully estimate the underlying causal structures of variables under certain assumptions, even in the presence of unobserved common causes. This feature is in contrast to other approaches. A simulated example is also provided.
Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo
2018-02-01
The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of the lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously and bilaterally at the start of AVS (t-15) and 15 min later (t0) with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with the sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization index R ⇒ L (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of the lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15, and between the simultaneous t0 and sequential techniques, was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.
NASA Astrophysics Data System (ADS)
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely measured polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
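State augmentation is the core device here: unknown parameters become extra states and a single filter estimates both. A much-simplified, Gaussian-only sketch for a scalar system with one unknown constant drift parameter (the paper's combined Gaussian-plus-Poisson setting is not reproduced; all noise levels are illustrative):

```python
import random

def kalman_augmented(ys, q=0.01, r=0.25):
    """Kalman filter on the augmented state [x, theta] for the linear model
    x_{k+1} = x_k + theta + w_k,  y_k = x_k + v_k,
    carrying the unknown constant parameter theta as an extra state."""
    x, t = 0.0, 0.0                   # estimates of x and theta
    P = [[10.0, 0.0], [0.0, 10.0]]    # covariance of [x, theta]
    for y in ys:
        # predict through F = [[1, 1], [0, 1]] (noise only on x)
        x = x + t
        p00 = P[0][0] + P[0][1] + P[1][0] + P[1][1] + q
        p01 = P[0][1] + P[1][1]
        p10 = P[1][0] + P[1][1]
        p11 = P[1][1]
        # update with the scalar observation y = x + v
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innov = y - x
        x, t = x + k0 * innov, t + k1 * innov
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return t

random.seed(2)
theta, xk, ys = 0.5, 0.0, []
for _ in range(200):
    xk = xk + theta + random.gauss(0, 0.1)
    ys.append(xk + random.gauss(0, 0.5))
theta_hat = kalman_augmented(ys)  # converges toward the true theta = 0.5
```

The filtered second component is exactly the "identifier" role the abstract describes: the parameter estimate is a by-product of filtering the extended state.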
Saltzman, Erica J; Schweizer, Kenneth S
2006-12-01
Brownian trajectory simulation methods are employed to fully establish the non-Gaussian fluctuation effects predicted by our nonlinear Langevin equation theory of single particle activated dynamics in glassy hard-sphere fluids. The consequences of stochastic mobility fluctuations associated with the space-time complexities of the transient localization and barrier hopping processes have been determined. The incoherent dynamic structure factor was computed for a range of wave vectors and becomes of an increasingly non-Gaussian form for volume fractions beyond the (naive) ideal mode coupling theory (MCT) transition. The non-Gaussian parameter (NGP) amplitude increases markedly with volume fraction and is well described by a power law in the maximum restoring force of the nonequilibrium free energy profile. The time scale associated with the NGP peak becomes much smaller than the alpha relaxation time for systems characterized by significant entropic barriers. An alternate non-Gaussian parameter that probes the long time alpha relaxation process displays a different shape, peak intensity, and time scale of its maximum. However, a strong correspondence between the classic and alternate NGP amplitudes is predicted which suggests a deep connection between the early and final stages of cage escape. Strong space-time decoupling emerges at high volume fractions as indicated by a nondiffusive wave vector dependence of the relaxation time and growth of the translation-relaxation decoupling parameter. Displacement distributions exhibit non-Gaussian behavior at intermediate times, evolving into a strongly bimodal form with slow and fast subpopulations at high volume fractions. Qualitative and semiquantitative comparisons of the theoretical results with colloid experiments, ideal MCT, and multiple simulation studies are presented.
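The non-Gaussian parameter referred to here is, in three dimensions, alpha_2 = 3<r^4>/(5<r^2>^2) - 1, which vanishes for Gaussian displacement statistics. A quick check of that Gaussian baseline (sample size and unit variance are arbitrary choices):

```python
import random

def ngp_3d(displacements):
    """Non-Gaussian parameter alpha_2 = 3<r^4> / (5 <r^2>^2) - 1 in 3D;
    it is zero for a Gaussian displacement distribution."""
    r2 = [dx * dx + dy * dy + dz * dz for dx, dy, dz in displacements]
    m2 = sum(r2) / len(r2)
    m4 = sum(v * v for v in r2) / len(r2)
    return 3 * m4 / (5 * m2 * m2) - 1

random.seed(0)
gauss_disp = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
              for _ in range(100000)]
alpha2 = ngp_3d(gauss_disp)  # should be close to 0
```

In the glassy regime discussed above, the same statistic computed from simulated trajectories rises well above zero, signalling the bimodal slow/fast displacement distributions.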
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
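The sensitivity/specificity experiment is easy to reproduce in outline. A sketch using a Jarque-Bera-style skewness/kurtosis statistic as a stand-in for the tests studied (Shapiro-Wilk and D'Agostino-Pearson are not implemented here), with the asymptotic 5% chi-square(2) critical value; trial counts and distributions are illustrative:

```python
import math, random

def jb_reject(sample, crit=5.99):
    """Skewness/kurtosis normality test (Jarque-Bera form); reject
    Gaussianity when JB exceeds the chi-square(2) 5% critical value."""
    n = len(sample)
    m = sum(sample) / n
    c = [x - m for x in sample]
    m2 = sum(v * v for v in c) / n
    m3 = sum(v ** 3 for v in c) / n
    m4 = sum(v ** 4 for v in c) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3
    jb = n / 6 * (skew ** 2 + kurt ** 2 / 4)
    return jb > crit

random.seed(3)
trials, n = 1000, 30
false_pos = sum(jb_reject([random.gauss(0, 1) for _ in range(n)])
                for _ in range(trials)) / trials
true_pos = sum(jb_reject([math.exp(random.gauss(0, 1)) for _ in range(n)])
               for _ in range(trials)) / trials
# 1 - false_pos approximates specificity; true_pos approximates
# sensitivity against a lognormal parent population
```

Running variants of this loop over sample sizes and significance levels is essentially the study's design.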
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. 
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
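A PARMA model with pure periodic AR(1) structure can be simulated and fitted season by season. A sketch with an assumed period of 4 and illustrative coefficients (the paper's Fourier-expansion and harmonic-selection machinery is not shown):

```python
import random

def simulate_par1(phi, n_cycles, sigma=1.0, seed=0):
    """Simulate a periodic AR(1): z_t = phi[m] * z_{t-1} + e_t,
    where m = t mod len(phi) indexes the season (e.g. the month)."""
    rng = random.Random(seed)
    p = len(phi)
    z, series = 0.0, []
    for t in range(n_cycles * p):
        z = phi[t % p] * z + rng.gauss(0, sigma)
        series.append(z)
    return series

def estimate_par1(series, period):
    """Per-season least-squares estimate of each periodic AR coefficient."""
    est = []
    for m in range(period):
        num = den = 0.0
        for t in range(1, len(series)):
            if t % period == m:
                num += series[t] * series[t - 1]
                den += series[t - 1] ** 2
        est.append(num / den)
    return est

phi_true = [0.8, 0.2, 0.5, -0.3]
series = simulate_par1(phi_true, n_cycles=5000)
phi_hat = estimate_par1(series, period=4)  # recovers phi_true approximately
```

Full PARMA estimation adds periodic moving-average terms and the likelihood approximation described in the abstract; the per-season regression above is only the simplest special case.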
Response of MDOF strongly nonlinear systems to fractional Gaussian noises.
Deng, Mao-Lin; Zhu, Wei-Qiu
2016-08-01
In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.
Limits of quantitation - Yet another suggestion
NASA Astrophysics Data System (ADS)
Carlson, Jill; Wysoczanski, Artur; Voigtman, Edward
2014-06-01
The work presented herein suggests that the limit of quantitation concept may be rendered substantially less ambiguous and ultimately more useful as a figure of merit by basing it upon the significant figure and relative measurement error ideas due to Coleman, Auses and Gram, coupled with the correct instantiation of Currie's detection limit methodology. Simple theoretical results are presented for a linear, univariate chemical measurement system with homoscedastic Gaussian noise, and these are tested against both Monte Carlo computer simulations and laser-excited molecular fluorescence experimental results. Good agreement among experiment, theory and simulation is obtained and an easy extension to linearly heteroscedastic Gaussian noise is also outlined.
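Currie's scheme separates a decision level, a detection limit, and a quantitation limit, all proportional to the blank noise. A sketch in the concentration domain for a linear calibration with homoscedastic Gaussian noise; the factor k_Q = 10 is the conventional choice, and the paper's significant-figure/relative-measurement-error refinement is not reproduced:

```python
from statistics import NormalDist

def currie_limits(sigma_blank, slope, alpha=0.05, beta=0.05, k_q=10.0):
    """Currie-style limits for a linear calibration y = b0 + slope * x with
    homoscedastic Gaussian noise of std sigma_blank, expressed in the
    concentration domain (divide signal-domain limits by the slope)."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - beta)
    l_c = z_a * sigma_blank / slope           # decision level
    l_d = (z_a + z_b) * sigma_blank / slope   # detection limit (~3.29 sigma)
    l_q = k_q * sigma_blank / slope           # quantitation limit (10 sigma)
    return l_c, l_d, l_q

lc, ld, lq = currie_limits(sigma_blank=0.02, slope=1.5)
```

The ordering L_C < L_D < L_Q always holds for these conventional factors, which is what makes the quantitation limit a stricter figure of merit than the detection limit.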
Theoretical study of sum-frequency vibrational spectroscopy on limonene surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ren-Hui, E-mail: zrh@iccas.ac.cn; Liu, Hao; Jing, Yuan-Yuan
2014-03-14
By combining molecular dynamics (MD) simulation and quantum chemistry computation, we calculate the surface sum-frequency vibrational spectroscopy (SFVS) of R-limonene molecules at the gas-liquid interface for SSP, PPP, and SPS polarization combinations. The distributions of the Euler angles are obtained using MD simulation; the ψ-distribution is between isotropic and Gaussian. Instead of the MD distributions, different analytical distributions such as the δ-function, Gaussian, and isotropic distributions are applied to simulate surface SFVS. We find that different distributions significantly affect the absolute SFVS intensity and also influence the relative SFVS intensity, and that the δ-function distribution should be used with caution when the orientation distribution is broad. Furthermore, the reason that the SPS signal is weak in the reflected arrangement is discussed.
Nonlinear estimation theory applied to the interplanetary orbit determination problem.
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1972-01-01
Martingale theory and appropriate smoothing properties of Loeve (1953) have been used to develop a modified Gaussian second-order filter. The performance of the filter is evaluated through numerical simulation of a Jupiter flyby mission. The observations used in the simulation are on-board measurements of the angle between Jupiter and a fixed star taken at discrete time intervals. In the numerical study, the influence of each of the second-order terms is evaluated. Five filter algorithms are used in the simulations. Four of the filters are the modified Gaussian second-order filter and three approximations derived by neglecting one or more of the second-order terms in the equations. The fifth filter is the extended Kalman-Bucy filter which is obtained by neglecting all of the second-order terms.
Radiation detector spectrum simulator
Wolf, Michael A.; Crowell, John M.
1987-01-01
A small battery operated nuclear spectrum simulator having a noise source generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.
Radiation detector spectrum simulator
Wolf, M.A.; Crowell, J.M.
1985-04-09
A small battery operated nuclear spectrum simulator having a noise source generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.
ISIM3D: AN ANSI-C THREE-DIMENSIONAL MULTIPLE INDICATOR CONDITIONAL SIMULATION PROGRAM
The indicator conditional simulation technique provides stochastic simulations of a variable that (i) honor the initial data and (ii) can feature a richer family of spatial structures not limited by Gaussianity. The data are encoded into a series of indicators which then are used ...
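The indicator encoding mentioned above is a simple thresholding transform: each datum becomes a vector of binary indicators I(z <= t_k), one per threshold. A minimal sketch with illustrative values:

```python
def indicator_code(values, thresholds):
    """Encode continuous data into binary indicators I(z <= t_k), one per
    threshold, as used in multiple indicator conditional simulation."""
    return [[1 if v <= t else 0 for t in thresholds] for v in values]

data = [0.3, 1.2, 2.7, 0.9]
codes = indicator_code(data, thresholds=[0.5, 1.0, 2.0])
# 0.3 -> [1, 1, 1]; 1.2 -> [0, 0, 1]; 2.7 -> [0, 0, 0]; 0.9 -> [0, 1, 1]
```

Because the thresholds are cumulative, each code is monotone (once 1, always 1 across increasing thresholds), which is what lets separate indicator variograms describe different spatial structures per threshold class.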
Simultaneous sequential monitoring of efficacy and safety led to masking of effects.
van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg
2016-08-01
Usually, sequential designs for clinical trials are applied to the primary (i.e., efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. Implications of simultaneous monitoring on trial decision making are yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.
Candy, J V
2015-09-01
The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
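A bootstrap particle filter, one of the two processors mentioned, can be sketched compactly for a toy scalar model. The model, noise levels, and particle count below are illustrative, not those of the shallow-ocean application:

```python
import math, random

def bootstrap_particle_filter(ys, n_particles=500, q=0.5, r=1.0, seed=0):
    """Minimal bootstrap (sequential importance resampling) filter for the
    toy scalar model x_k = 0.9 x_{k-1} + w_k, y_k = x_k + v_k, with
    w ~ N(0, q) and v ~ N(0, r); returns the filtered mean at each step."""
    rng = random.Random(seed)
    parts = [rng.gauss(0, 1) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate particles through the state equation
        parts = [0.9 * x + rng.gauss(0, math.sqrt(q)) for x in parts]
        # weight by the Gaussian measurement likelihood
        w = [math.exp(-(y - x) ** 2 / (2 * r)) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # multinomial resampling back to equal weights
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means

random.seed(1)
truth, obs, xt = [], [], 0.0
for _ in range(200):
    xt = 0.9 * xt + random.gauss(0, math.sqrt(0.5))
    truth.append(xt)
    obs.append(xt + random.gauss(0, 1.0))
est = bootstrap_particle_filter(obs)
rmse_f = math.sqrt(sum((a - b) ** 2 for a, b in zip(est, truth)) / 200)
rmse_y = math.sqrt(sum((a - b) ** 2 for a, b in zip(obs, truth)) / 200)
```

The same propagate/weight/resample cycle carries over to the modal-function tracking problem, with the state equation replaced by the ocean acoustic model and the likelihood built from the hydrophone measurements.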
Non-Gaussian noise-weakened stability in a foraging colony system with time delay
NASA Astrophysics Data System (ADS)
Dong, Xiaohui; Zeng, Chunhua; Yang, Fengzao; Guan, Lin; Xie, Qingshuang; Duan, Weilong
2018-02-01
In this paper, the dynamical properties in a foraging colony system with time delay and non-Gaussian noise were investigated. Using delay Fokker-Planck approach, the stationary probability distribution (SPD), the associated relaxation time (ART) and normalization correlation function (NCF) are obtained, respectively. The results show that: (i) the time delay and non-Gaussian noise can induce transition from a single peak to double peaks in the SPD, i.e., a type of bistability occurring in a foraging colony system where time delay and non-Gaussian noise not only cause transitions between stable states, but also construct the states themselves. Numerical simulations are presented and are in good agreement with the approximate theoretical results; (ii) there exists a maximum in the ART as a function of the noise intensity, this maximum for ART is identified as the characteristic of the non-Gaussian noise-weakened stability of the foraging colonies in the steady state; (iii) the ART as a function of the noise correlation time exhibits a maximum and a minimum, where the minimum for ART is identified as the signature of the non-Gaussian noise-enhanced stability of the foraging colonies; and (iv) the time delay can enhance the stability of the foraging colonies in the steady state, while the departure from Gaussian noise can weaken it, namely, the time delay and departure from Gaussian noise play opposite roles in ART or NCF.
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2018-01-01
We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for σ8 is shifted without broadening, but we find no significant reduction in the tension with σ8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible and we recommend that these diagnostic tests be run as part of future analyses.
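The destroy-then-measure idea can be mimicked in one dimension: strip the linear (Gaussian) correlation between two variables, then test whether their squares remain correlated. A toy sketch of that diagnosis, not the authors' actual test statistic:

```python
import random

def residual_square_corr(xs, ys):
    """Standardize two samples, remove their linear (Gaussian) correlation
    by a Gram-Schmidt step, then return the correlation of the squares --
    a simple probe for dependence that survives decorrelation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    xs = [x - mx for x in xs]
    ys = [y - my for y in ys]
    sx = (sum(x * x for x in xs) / n) ** 0.5
    sy = (sum(y * y for y in ys) / n) ** 0.5
    xs = [x / sx for x in xs]
    ys = [y / sy for y in ys]
    rho = sum(x * y for x, y in zip(xs, ys)) / n
    resid = [(y - rho * x) / (1 - rho * rho) ** 0.5 for x, y in zip(xs, ys)]
    a = [x * x - 1 for x in xs]
    b = [r * r - 1 for r in resid]
    va = (sum(v * v for v in a) / n) ** 0.5
    vb = (sum(v * v for v in b) / n) ** 0.5
    return sum(u * v for u, v in zip(a, b)) / n / (va * vb)

random.seed(4)
n = 20000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + 0.5 * random.gauss(0, 1) for zi in z]   # correlated Gaussian pair
y = [zi + 0.5 * random.gauss(0, 1) for zi in z]
g_stat = residual_square_corr(x, y)               # ~ 0: nothing survives

s = [random.choice([0.5, 2.0]) for _ in range(n)] # shared random variance
u = [si * random.gauss(0, 1) for si in s]
v = [si * random.gauss(0, 1) for si in s]
ng_stat = residual_square_corr(u, v)              # clearly nonzero
```

For a truly Gaussian pair, decorrelation implies independence, so nothing survives; the variance-coupled pair is uncorrelated yet dependent, which is exactly the kind of structure the paper's diagnostic is built to flag.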
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, M. L.; Liu, B.; Hu, R. H.
In the case of a thin plasma slab accelerated by the radiation pressure of an ultra-intense laser pulse, the development of Rayleigh-Taylor instability (RTI) will destroy the acceleration structure and terminate the acceleration process much sooner than the theoretical limit. In this paper, a new scheme using multiple Gaussian pulses for ion acceleration in a radiation pressure acceleration regime is investigated with particle-in-cell simulation. We found that with multiple Gaussian pulses, the instability could be efficiently suppressed and the divergence of the ion bunch is greatly reduced, resulting in a longer acceleration time and a much more collimated ion bunch with higher energy than using a single Gaussian pulse. An analytical model is developed to describe the suppression of RTI at the laser-plasma interface. The model shows that the suppression of RTI is due to the introduction of the long wavelength mode RTI by the multiple Gaussian pulses.
Ince Gaussian beams in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Deng, Dongmei; Guo, Qi
2008-07-01
Based on the Snyder-Mitchell model that describes beam propagation in strongly nonlocal nonlinear media, the closed forms of Ince-Gaussian (IG) beams have been found. The transverse structures of the IG beams are described by the product of the Ince polynomials and the Gaussian function. Depending on the input power of the beams, the IG beams can be either a soliton state or a breather state. The IG beams constitute the exact and continuous transition modes between Hermite-Gaussian beams and Laguerre-Gaussian beams. The IG vortex beams can be constructed by a linear combination of the even and odd IG beams. The transverse intensity pattern of IG vortex beams consists of elliptic rings, whose number and ellipticity can be controlled, and a phase displaying a number of in-line vortices, each with a unitary topological charge. The analytical solutions of the IG beams are confirmed by numerical simulations of the nonlocal nonlinear Schrödinger equation.
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
Leading non-Gaussian corrections for diffusion orientation distribution function.
Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali
2014-02-01
An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. Copyright © 2013 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Matteo A. C.; Paris, Matteo G. A.
2016-01-14
We address the interaction of single- and two-qubit systems with an external transverse fluctuating field and analyze in detail the dynamical decoherence induced by Gaussian noise and random telegraph noise (RTN). Upon exploiting the exact RTN solution of the time-dependent von Neumann equation, we analyze in detail the behavior of quantum correlations and prove the non-Markovianity of the dynamical map in the full parameter range, i.e., for either fast or slow noise. The dynamics induced by Gaussian noise is studied numerically and compared to the RTN solution, showing the existence of (state-dependent) regions of the parameter space where the two noises lead to very similar dynamics. We show that the effects of RTN and of Gaussian noise are different, i.e., the spectrum alone is not enough to summarize the noise effects, but the dynamics under the effect of one kind of noise may be simulated with high fidelity by the other one.
AUTONOMOUS GAUSSIAN DECOMPOSITION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.
2015-04-15
We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and for their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H I line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometre Array and pathfinder telescopes.
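The core fitting step that follows AGD's automated guesses, a least-squares fit of a sum of Gaussian components to a spectrum, can be sketched as follows. This is a minimal illustration, not the AGD code itself: the synthetic spectrum and initial guesses are invented for the example, whereas in AGD the guesses come from derivative spectroscopy and machine learning.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(x, *params):
    """Sum of Gaussians; params = [amp1, cen1, wid1, amp2, cen2, wid2, ...]."""
    y = np.zeros_like(x)
    for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return y

# Synthetic two-component absorption spectrum with noise
rng = np.random.default_rng(0)
x = np.linspace(-50, 50, 500)
truth = [1.0, -10.0, 4.0, 0.6, 15.0, 8.0]
y = gaussian_sum(x, *truth) + rng.normal(0.0, 0.02, x.size)

# Initial guesses (the quantities AGD would supply automatically)
p0 = [0.8, -8.0, 5.0, 0.5, 12.0, 6.0]
popt, _ = curve_fit(gaussian_sum, x, y, p0=p0)
print(np.round(popt, 1))
```

With reasonable starting guesses the fit recovers the component centres and widths; the hard part that AGD automates is choosing the number of components and the guesses themselves.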
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design that uses alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Among the alpha spending functions compared, the O'Brien-Fleming function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stopping-for-futility step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
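The alpha spending functions compared above can be evaluated directly. The sketch below, a hedged illustration rather than the authors' algorithm, implements the Lan-DeMets O'Brien-Fleming-type and Pocock-type spending functions to show how conservatively the O'Brien-Fleming function spends type I error at early looks.

```python
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type alpha spending function."""
    t = np.asarray(t, dtype=float)
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

def pocock_spending(t, alpha=0.05):
    """Pocock-type spending function: spends the error much earlier."""
    return alpha * np.log(1.0 + (np.e - 1.0) * np.asarray(t, dtype=float))

fractions = np.array([0.25, 0.5, 0.75, 1.0])  # information fractions at the looks
print(np.round(obf_spending(fractions), 5))   # tiny early spend, most at the end
print(np.round(pocock_spending(fractions), 5))
```

Both functions spend the full alpha at information fraction t = 1; the O'Brien-Fleming curve spends almost nothing at t = 0.25, which is why early rejection is so unlikely under it.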
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale *sequential* data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data, and demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K
2016-08-01
Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We have used our sEMG model of the Tibialis Anterior to interpret these age-related changes and compared the simulations with experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum and the Gaussianity and linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of 40% loss of motor units with the number of fast fibres halved correlated best with the age-related change observed in the experimental sEMG higher-order statistical features. The simulated ageing condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
NASA Astrophysics Data System (ADS)
Nishimura, Tomoaki
2016-03-01
A computer simulation program for ion scattering and its graphical user interface (MEISwin) has been developed. Using this program, researchers have analyzed medium-energy ion scattering and Rutherford backscattering spectrometry at Ritsumeikan University since 1998, and at Rutgers University since 2007. The main features of the program are as follows: (1) stopping power can be chosen from five datasets spanning several decades (from 1977 to 2011), (2) straggling can be chosen from two datasets, (3) spectral shape can be selected as Gaussian or exponentially modified Gaussian, (4) scattering cross sections can be selected as Coulomb or screened, (5) simulations adopt the resonant elastic scattering cross section of 16O(4He, 4He)16O, (6) pileup simulation for RBS spectra is supported, (7) natural and specific isotope abundances are supported, and (8) the charge fraction can be chosen from three patterns (fixed, energy-dependent, and ion fraction with charge-exchange parameters for medium-energy ion scattering). This study demonstrates and discusses the simulations and their results.
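Feature (3), the choice between a Gaussian and an exponentially modified Gaussian (EMG) spectral shape, can be illustrated with SciPy's exponnorm distribution. This is a generic sketch, not MEISwin code; the parameter values are arbitrary, and in a real backscattering spectrum the exponential tail would sit on the low-energy side of the peak.

```python
import numpy as np
from scipy.stats import norm, exponnorm

# Compare a pure Gaussian peak with an exponentially modified Gaussian,
# which adds an asymmetric tail of the kind seen in MEIS/RBS line shapes.
x = np.linspace(-5.0, 10.0, 1000)
gauss = norm.pdf(x, loc=0.0, scale=1.0)
emg = exponnorm.pdf(x, 2.0, loc=0.0, scale=1.0)  # shape K = tail scale / sigma

# The EMG peak is skewed, with its mode shifted toward the tail side
print(x[np.argmax(gauss)], x[np.argmax(emg)])
```

The single shape parameter K controls how far the line shape departs from a symmetric Gaussian; K -> 0 recovers the Gaussian limit.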
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises
Jin, Qiyu; Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic called ROADGI, which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called the Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noise, as well as for impulse noise alone and for Gaussian noise alone. PMID:28692667
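The ROAD statistic that ROADGI builds on can be sketched in a few lines: for each pixel, take the absolute differences to its eight neighbours and sum the m smallest (m = 4 is the usual choice), so that impulse-corrupted pixels stand out with large values. This is a hedged illustration of the published ROAD definition, not the authors' ROADGI or OWMF code.

```python
import numpy as np

def road(img, m=4):
    """Rank-Ordered Absolute Differences: for each pixel, the sum of the m
    smallest absolute differences to its 8 neighbours (large => likely impulse)."""
    f = img.astype(float)
    pad = np.pad(f, 1, mode='reflect')
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh = pad[1 + dy:1 + dy + f.shape[0], 1 + dx:1 + dx + f.shape[1]]
            diffs.append(np.abs(f - neigh))
    return np.sort(np.stack(diffs), axis=0)[:m].sum(axis=0)

# A flat patch with a single salt impulse: ROAD is large only at the impulse,
# and stays zero even for the pixels adjacent to it.
patch = np.full((5, 5), 10.0)
patch[2, 2] = 255.0
scores = road(patch)
print(scores[2, 2], scores[1, 1])
```

Taking only the smallest differences is what makes the statistic robust: a pixel next to an impulse sees one large difference, which is discarded by the rank ordering.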
Non-Gaussian behavior in jamming / unjamming transition in dense granular materials
NASA Astrophysics Data System (ADS)
Atman, A. P. F.; Kolb, E.; Combe, G.; Paiva, H. A.; Martins, G. H. B.
2013-06-01
Experiments on the penetration of a cylindrical intruder into a two-dimensional dense and disordered granular medium were reported recently, showing the jamming/unjamming transition. In the present work, we perform molecular dynamics simulations with the same geometry in order to assess both kinematic and static features of the jamming/unjamming transition. We study the statistics of the particle velocities in the neighborhood of the intruder and show that both experiments and simulations present the same qualitative behavior. We observe that the probability density functions (PDFs) of velocities deviate from Gaussian depending on the packing fraction of the granular assembly. In order to quantify these deviations we fit the PDFs with a q-Gaussian (Tsallis) function. The q-value can be an indication of the presence of long-range correlations in the system. We compare the fitted PDFs with those obtained using the stretched exponential, and sketch some conclusions concerning the nature of the correlations in a confined granular flow.
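A q-Gaussian of the kind used for such PDF fits can be written down directly. The sketch below, with arbitrary parameter values, shows the heavier-than-Gaussian tails that appear for q > 1; the function reduces to a Gaussian as q approaches 1.

```python
import numpy as np

def q_gaussian(v, q, beta):
    """Unnormalised Tsallis q-Gaussian; reduces to exp(-beta v^2) as q -> 1.
    Written for the range 1 <= q < 3 used in heavy-tail fits."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-beta * v ** 2)
    base = 1.0 - (1.0 - q) * beta * v ** 2
    return np.where(base > 0.0, base, 0.0) ** (1.0 / (1.0 - q))

v = np.linspace(-5.0, 5.0, 11)
# q > 1 gives heavier tails than the Gaussian at the same beta
print(q_gaussian(v, 1.5, 1.0)[0] > q_gaussian(v, 1.0, 1.0)[0])
```

In a fit, q and beta would be free parameters; a fitted q significantly above 1 is the signature of the non-Gaussian velocity statistics described in the abstract.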
Current-induced instability of domain walls in cylindrical nanowires
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Zhang, Zhaoyang; Pepper, Ryan A.; Mu, Congpu; Zhou, Yan; Fangohr, Hans
2018-01-01
We study current-driven domain wall (DW) motion in cylindrical nanowires using micromagnetic simulations, implementing the Landau-Lifshitz-Gilbert equation with nonlocal spin-transfer torque in a finite-difference micromagnetic package. We find that in the presence of a DW, Gaussian wave packets (spin waves) are generated when the charge current is suddenly applied to the system. This effect is absent when using the local spin-transfer torque. The existence of spin wave emission indicates that transverse domain walls cannot move arbitrarily fast in cylindrical nanowires, although they are free from the Walker limit. We establish an upper velocity limit for DW motion by analyzing the stability of Gaussian wave packets using the local spin-transfer torque. Micromagnetic simulations show that the stable region obtained using nonlocal spin-transfer torque is smaller than that obtained with its local counterpart. This limitation is essential for multiple DWs, since the instability of Gaussian wave packets will break the structure of multiple DWs.
Searching for efficient Markov chain Monte Carlo proposal kernels
Yang, Ziheng; Rodríguez, Carlos E.
2013-01-01
Markov chain Monte Carlo (MCMC), or the Metropolis–Hastings algorithm, is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a new class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals have nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
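A Bactrian proposal can be sketched as a symmetric two-component Gaussian mixture whose humps sit at plus and minus m scale units from the current value, so that near-zero moves are rarely proposed. The sketch below, with an arbitrary standard-normal target and hand-picked s and m, is a minimal illustration of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def bactrian_step(x, s=1.0, m=0.95):
    """Symmetric 'Bactrian' proposal: a two-humped Gaussian mixture that
    avoids proposing values very close to the current one."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return x + s * (sign * m + np.sqrt(1.0 - m * m) * rng.standard_normal())

def log_target(x):
    """Standard normal target, chosen for the example."""
    return -0.5 * x * x

x, chain = 0.0, []
for _ in range(20000):
    prop = bactrian_step(x)
    # Proposal is symmetric, so the Metropolis ratio uses the target only
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop
    chain.append(x)

chain = np.asarray(chain)
print(round(chain.mean(), 2), round(chain.var(), 2))
```

For this target the chain mean and variance should settle near 0 and 1; the efficiency gain over a plain Gaussian proposal shows up as lower autocorrelation, which this toy does not measure.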
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
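A running diffusion coefficient of the kind computed by such test-particle codes can be illustrated with plain unbiased random walkers, for which d_xx(t) = <(Δx)^2>/(2t) plateaus at the input diffusion coefficient. This is a toy sketch with invented parameters, not the turbulence test-particle code itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unbiased Gaussian random walkers as a stand-in for test particles
n_particles, n_steps, dt, D = 5000, 400, 0.01, 1.0
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_particles, n_steps))
x = np.cumsum(steps, axis=1)                  # particle trajectories
t = dt * np.arange(1, n_steps + 1)
d_run = (x ** 2).mean(axis=0) / (2.0 * t)     # running diffusion coefficient
print(round(float(d_run[-1]), 2))
```

In the turbulence problem the interesting physics is precisely the departure from this picture: anomalous transport shows up as a d_run(t) that does not plateau, and non-Gaussian distributions as particle histograms that deviate from the normal form assumed here.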
Hermite-Gaussian beams with self-forming spiral phase distribution
NASA Astrophysics Data System (ADS)
Zinchik, Alexander A.; Muzychenko, Yana B.
2014-05-01
Spiral laser beams are a family of laser beams that preserve their structural stability up to scaling and rotation during propagation. The properties of spiral beams are of practical interest for laser technology, medicine and biotechnology; researchers use spiral beams for the movement and manipulation of microparticles. Spiral beams have a complicated phase distribution in cross section. This paper describes the results of analytical and computer simulation of Hermite-Gaussian beams with a self-forming spiral phase distribution. The simulation used a laser beam consisting of the sum of two Hermite-Gaussian modes, TEMnm and TEMn1m1, with the coefficients n, m, n1, m1 varied. An additional phase depending on the coefficients n, m, n1, m1 was imposed on the resulting beam. As a result, a Hermite-Gaussian beam was formed whose phase distribution takes the form of a spiral as the beam propagates. VirtualLab 5.0 (LightTrans GmbH) was used for the modeling.
NASA Astrophysics Data System (ADS)
Wolfsteiner, Peter; Breuer, Werner
2013-10-01
The assessment of fatigue load under random vibrations is usually based on load spectra. Typically they are computed with counting methods (e.g. rainflow) applied to a time domain signal. Alternatively, methods are available (e.g. Dirlik) that estimate load spectra directly from power spectral densities (PSDs) of the corresponding time signals; knowledge of the time signal is then not necessary. These PSD based methods have the enormous advantage that, if for example the signal to assess results from a finite element based vibration analysis, the simulation of PSDs in the frequency domain outmatches by far the simulation of time signals in the time domain in terms of computation time. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of the PSD based simulation of vibrations, and also of the PSD based load spectra estimation, is their limitation to Gaussian distributed time signals. Deviations from this Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases usually only computation-time-intensive time domain calculations produce accurate results. This paper presents a method for dealing with non-Gaussian signals with real statistical properties that is still able to use the efficient PSD approach with its computation time advantages. Essentially it is based on a decomposition of the non-Gaussian signal into Gaussian distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without using inaccurate standard procedures.
Furthermore, the basic intention is to design a general and integrated method that is not just able to analyse a certain single load case for a small time interval, but to generate representative PSD and CPSD spectra replacing extensive measured loads in the time domain without losing the necessary accuracy for the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The assumption of linearity is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications. The impact of nonlinearities is usually covered by separate nonlinear models and is only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian excitations respond with vibrations that also have a Gaussian distribution. A non-Gaussian distribution in the excitation signal produces a non-Gaussian response with statistical properties different from those of the excitation. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. the Ito calculus [6], which is usually not part of commercial codes). There are a couple of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still the subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broad-banded loads.
It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering that the excitations on railway vehicles caused by the wheel-rail contact are highly non-Gaussian, the fast PSD analysis in the frequency domain cannot be directly combined with load spectra prediction methods for PSDs.
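The central observation, that a signal which is Gaussian piece-by-piece can be strongly non-Gaussian as a whole, is easy to demonstrate numerically. The sketch below, with invented variances standing in for smooth and rough track sections, shows that an equal-weight mixture of two Gaussian parts is leptokurtic even though each part passes a Gaussianity check.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)

# A non-Gaussian load built from Gaussian parts with different variances,
# loosely analogous to smooth vs. rough track sections
quiet = rng.normal(0.0, 1.0, 50000)
rough = rng.normal(0.0, 3.0, 50000)
mixed = np.concatenate([quiet, rough])

print(round(float(kurtosis(quiet, fisher=False)), 1))  # ~3: Gaussian part
print(round(float(kurtosis(mixed, fisher=False)), 1))  # > 3: leptokurtic mixture
```

This is the effect the decomposition exploits in reverse: splitting the measured non-Gaussian load into approximately Gaussian parts makes each part admissible for the PSD based load spectra methods.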
Stochastic inflation lattice simulations - Ultra-large scale structure of the universe
NASA Technical Reports Server (NTRS)
Salopek, D. S.
1991-01-01
Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, a^(-1)∇, small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
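The claim that a shared column covariance with a few dominant directions yields an approximately low-rank matrix can be checked with a short simulation. The sketch below illustrates that mechanism only (the dimensions and eigenvalues are invented); it is not the authors' variational GAMP algorithm, and it fixes the covariance rather than sampling it from the Wishart hyperprior.

```python
import numpy as np

rng = np.random.default_rng(4)

# Columns share a covariance with r dominant directions, so the sampled
# matrix is approximately rank r -- the effect the hierarchical prior encourages.
m, n, r = 30, 40, 3
U = np.linalg.qr(rng.standard_normal((m, m)))[0]          # random orthonormal basis
eigs = np.concatenate([np.full(r, 10.0), np.full(m - r, 1e-3)])
cov = (U * eigs) @ U.T                                    # shared column covariance
X = rng.multivariate_normal(np.zeros(m), cov, size=n).T   # m x n matrix

s = np.linalg.svd(X, compute_uv=False)
print(int((s > 1.0).sum()))                               # count of dominant singular values
```

With only r large eigenvalues in the column covariance, all but r singular values of X are tiny, which is exactly the low-rank bias the prior is designed to induce.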
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
NASA Technical Reports Server (NTRS)
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
2012-05-30
Dafflon, B.; Barrash, W.
Received 13 May 2011; revised 12 March 2012; accepted 17 April 2012. Annealing-based and Bayesian sequential simulation approaches are compared. Data corresponding to the withheld porosity log are also withheld for the estimation process; for both cases this is done for two wells having locally variable stratigraphy. For comparison with stratigraphy at the BHRS, contacts between Units 1 to 4 are marked at the bottom of each log comparison panel.
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose of this study was to present the results of beam modeling and to show how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of the calculations is important for treatment planning software (TPS) because the energy, spot position, and absolute dose have to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving the in-air fluence with the dose kernel. The dose kernel is the in-water 3D dose distribution of an infinitesimal pencil beam and consists of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region is important for the spot scanning technique because the dose distribution is formed by accumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were prepared for comparison. The parameters of the lateral kernel model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored in a separate look-up table, from which the parameters for each energy and depth in water were acquired when incorporated into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature; these were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations.
The authors investigated the difference between the double and triple Gaussian kernel models. The difference between the two models appeared at mid-depths, where the predictive accuracy of the double Gaussian model deteriorated at the low-dose bump. When the authors employed the double Gaussian kernel model, the accuracy of the calculated absolute dose at the center of the SOBP varied with the irradiation conditions, with a maximum difference of 3.4%. In contrast, the results obtained with the triple Gaussian kernel model agreed with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the two kernel models was distinct in the high-energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because its prediction was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions; further accumulation of measured data would be needed to quantitatively assess the influence of the double and triple Gaussian kernel models on the accuracy of dose calculations.
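The qualitative point, that a double Gaussian lateral kernel cannot reproduce a low-dose halo that a triple Gaussian captures, can be reproduced with a toy fit. The synthetic profile, component weights and widths below are invented for illustration and do not represent measured beam data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_mix(x, *p):
    """Sum of centred Gaussians: p = [w1, s1, w2, s2, ...]."""
    y = np.zeros_like(x)
    for w, s in zip(p[0::2], p[1::2]):
        y += w * np.exp(-0.5 * (x / s) ** 2)
    return y

# Synthetic lateral dose profile: narrow core, mid halo, broad low-dose halo
x = np.linspace(-60.0, 60.0, 601)
profile = gauss_mix(x, 1.0, 3.0, 0.05, 10.0, 0.01, 30.0)

p2, _ = curve_fit(gauss_mix, x, profile, p0=[1.0, 3.0, 0.05, 15.0])
p3, _ = curve_fit(gauss_mix, x, profile, p0=[1.0, 3.0, 0.05, 10.0, 0.01, 25.0])

tail = np.abs(x) > 20.0
err2 = np.abs(gauss_mix(x, *p2) - profile)[tail].max()
err3 = np.abs(gauss_mix(x, *p3) - profile)[tail].max()
print(err2 > err3)   # the double Gaussian misses the low-dose tail
```

The residual of the two-component fit concentrates in the tail region, which is where the abstract reports the double Gaussian model losing accuracy once many spots are accumulated.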
The effect of optically active turbulence on Gaussian laser beams in the ocean
NASA Astrophysics Data System (ADS)
Nootz, G.; Matt, S.; Jarosz, E.; Hou, W.
2016-02-01
Motivated by their high resolution and data transfer potential, optical imaging and communication methods are being intensely investigated for marine applications. The majority of research focuses on overcoming the strong scattering of light by particles present in the ocean. However, when operating in very clear water, the limiting factor for such applications can be the strongly forward-biased scattering from optically active turbulent layers. For this presentation, the effect of optically active turbulence on focused Gaussian beams has been studied in the field, in a controlled laboratory test tank, and by numerical simulations. For the field experiments, a telescoping rigid underwater sensor structure (TRUSS) was deployed in the Bahamas, equipped with a diffractive optics element projecting a matrix of beams towards a fast beam profiler. Image processing techniques are used to extract the beam wander and beam breathing. The results are compared to theoretical values for the optical turbulence strength derived from the measured temperature microstructure at the test site. Laboratory and simulated experiments are carried out in physical and numerical Rayleigh-Benard convection turbulence tanks of the same geometry. A focused Gaussian laser beam is propagated through the test tank and recorded with a camera from the back side of a diffuser. Similarly, a focused Gaussian beam is propagated numerically by means of the split-step Fourier method through the simulated turbulence environment. Results will be presented for weak to moderate turbulence, as is most typical for oceanic conditions. Conclusions about the effect on optical imaging and communication applications will be discussed.
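A minimal angular-spectrum propagation of a Gaussian beam, the building block of split-step methods, can be sketched as follows; here the medium is uniform so the result can be checked against the analytic beam expansion, whereas in a turbulence simulation a random phase screen would be applied between propagation steps. Grid and beam parameters are arbitrary.

```python
import numpy as np

# Angular-spectrum propagation of a Gaussian beam through a uniform medium
n, width = 512, 20e-3          # grid points, physical window (m)
wl, w0 = 532e-9, 1e-3          # wavelength, beam waist (m)
x = np.linspace(-width / 2, width / 2, n)
X, Y = np.meshgrid(x, x)
field = np.exp(-(X ** 2 + Y ** 2) / w0 ** 2)

k = 2 * np.pi / wl
fx = np.fft.fftfreq(n, d=width / n)
FX, FY = np.meshgrid(fx, fx)
kz = np.sqrt((k ** 2 - (2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)).astype(complex))

z = 5.0                        # propagation distance (m)
field_z = np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Check against w(z) = w0 * sqrt(1 + (z/zR)^2), zR = pi w0^2 / wl
zR = np.pi * w0 ** 2 / wl
w_theory = w0 * np.sqrt(1 + (z / zR) ** 2)
I = np.abs(field_z) ** 2
w_num = 2 * np.sqrt((I * X ** 2).sum() / I.sum())   # 1/e^2 radius from 2nd moment
print(round(w_num / w_theory, 2))
```

A full split-step turbulence run alternates this vacuum step with multiplication by random phase screens whose statistics encode the refractive index fluctuations.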
NASA Astrophysics Data System (ADS)
Fang, Y.; Hou, J.; Engel, D.; Lin, G.; Yin, J.; Han, B.; Fang, Z.; Fountoulakis, V.
2011-12-01
In this study, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, with a focus on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian simulation method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas/approaches: 1) we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the required forward calculations while exploring the parameter space and quantifying the input uncertainty; 2) we use eSTOMP as the forward modeling simulator. eSTOMP is implemented using the Global Arrays toolkit (GA), which is based on one-sided inter-processor communication and supports a shared-memory programming style on distributed-memory platforms, providing highly scalable performance. It uses a data model to partition most of the large-scale data structures into a relatively small number of distinct classes. The lower-level simulator infrastructure (e.g., meshing support, associated data structures, and data mapping to processors) is separated from the higher-level physics and chemistry algorithmic routines using a grid component interface; and 3) in addition to the faster model and more efficient algorithms that speed up the forward calculation, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to optimally allocate system resources to improve performance, and to integrate software packages and data for composing carbon sequestration simulation, computation, analysis, estimation and visualization. We will demonstrate the framework with a given CO2 injection scenario in a heterogeneous sandstone reservoir.
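As a rough illustration of the sequential Gaussian simulation idea invoked here (simple kriging of each node from previously simulated neighbours, then a draw from the conditional normal), a minimal unconditional 1-D sketch follows. The exponential covariance, range, and neighbourhood size are made-up illustrations, not SGSIM's actual implementation.

```python
import numpy as np

def sgs_1d(n=200, a=5.0, max_neigh=10, seed=1):
    """Unconditional sequential Gaussian simulation on a regular 1-D grid."""
    rng = np.random.default_rng(seed)
    cov = lambda h: np.exp(-np.abs(h) / a)   # illustrative exponential covariance, unit sill
    z = np.full(n, np.nan)
    path = rng.permutation(n)                # random simulation path
    for idx in path:
        done = np.flatnonzero(~np.isnan(z))  # previously simulated nodes
        if done.size == 0:
            z[idx] = rng.standard_normal()   # first node: unconditional draw
            continue
        # nearest previously simulated nodes form the kriging neighbourhood
        near = done[np.argsort(np.abs(done - idx))[:max_neigh]]
        C = cov(near[:, None] - near[None, :])   # data-to-data covariances
        c = cov(near - idx)                      # data-to-node covariances
        w = np.linalg.solve(C, c)                # simple-kriging weights
        mean = w @ z[near]
        var = max(1.0 - w @ c, 1e-12)            # simple-kriging variance
        z[idx] = mean + np.sqrt(var) * rng.standard_normal()
    return z

z = sgs_1d()
```

Each realization is a standard-normal field whose neighbouring values are correlated through the chosen covariance; permeability realizations are obtained in practice by back-transforming such fields.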
NASA Astrophysics Data System (ADS)
Jia, Wei; McPherson, Brian; Pan, Feng; Dai, Zhenxue; Moodie, Nathan; Xiao, Ting
2018-02-01
Geological CO2 sequestration in conjunction with enhanced oil recovery (CO2-EOR) involves more complex multiphase flow processes than CO2 storage in deep saline aquifers. Two of the most important factors affecting multiphase flow in CO2-EOR are three-phase relative permeability and associated hysteresis, both of which are difficult to measure and are usually represented by numerical interpolation models. The purpose of this study is to improve understanding of (1) the relative impacts of different three-phase relative permeability models and hysteresis models on CO2 trapping mechanisms, and (2) the uncertainty associated with these two factors. Four different three-phase relative permeability models and three hysteresis models were applied to simulations of an active CO2-EOR site, the SACROC unit located in western Texas. To eliminate possible bias of deterministic parameters, we utilized a sequential Gaussian simulation technique to generate 50 realizations describing the heterogeneity of porosity and permeability, based on data obtained from well logs and seismic surveys. Simulation results for forecasted CO2 storage suggested that (1) the choice of three-phase relative permeability model and hysteresis model had noticeable impacts on forecasted CO2 sequestration capacity; (2) the impacts of three-phase relative permeability models and hysteresis models on CO2 trapping are small during the CO2-EOR injection period and increase during the post-EOR CO2 injection period; (3) the specific choice of hysteresis model is more important than the choice of three-phase relative permeability model; and (4) using the recommended three-phase WAG (Water-Alternating-Gas) hysteresis model may increase the impact of three-phase relative permeability models and the uncertainty due to heterogeneity.
Analysis of non-Gaussian laser mode guidance and evolution in leaky plasma channels
NASA Astrophysics Data System (ADS)
Djordjevic, Blagoje; Benedetti, Carlo; Schroeder, Carl; Esarey, Eric; Leemans, Wim
2016-10-01
The evolution and propagation of a non-Gaussian laser pulse under varying circumstances, including a typical matched parabolic channel as well as leaky channels, are investigated. It has previously been shown that matched guiding of a Gaussian pulse can be achieved using parabolic plasma channels. In the low-power regime, it can be shown directly that multi-mode pulses exhibit significant transverse beating, and this interaction between modes can have an adverse effect on the laser pulse as it propagates through the primary channel. Given this adverse behavior of non-Gaussian pulses in traditional guiding designs, we examine the use of leaky channels to filter out higher-order modes as a means of optimizing laser conditions. Realistic plasma channel profiles are considered. Higher-order mode content is lost through the leaky channel, while the fundamental mode remains well guided. This is demonstrated using both numerical simulations and a source-dependent Laguerre-Gaussian modal expansion. In conclusion, an idealized plasma lens based on leaky channels is found to filter out the higher-order modes and leave a near-Gaussian profile before the pulse enters the primary channel.
The impact of non-Gaussianity upon cosmological forecasts
NASA Astrophysics Data System (ADS)
Repp, A.; Szapudi, I.; Carron, J.; Wolk, M.
2015-12-01
The primary science driver for 3D galaxy surveys is their potential to constrain cosmological parameters. Forecasts of these surveys' effectiveness typically assume Gaussian statistics for the underlying matter density, despite the fact that the actual distribution is decidedly non-Gaussian. To quantify the effect of this assumption, we employ an analytic expression for the power spectrum covariance matrix to calculate the Fisher information for Baryon Acoustic Oscillation (BAO)-type model surveys. We find that for typical number densities, at kmax = 0.5 h Mpc-1, Gaussian assumptions significantly overestimate the information on all parameters considered, in some cases by up to an order of magnitude. However, after marginalizing over a six-parameter set, the form of the covariance matrix (dictated by N-body simulations) causes the majority of the effect to shift to the `amplitude-like' parameters, leaving the others virtually unaffected. We find that Gaussian assumptions at such wavenumbers can underestimate the dark energy parameter errors by well over 50 per cent, producing dark energy figures of merit almost three times too large. Thus, for 3D galaxy surveys probing the non-linear regime, proper consideration of non-Gaussian effects is essential.
An algorithm for separation of mixed sparse and Gaussian sources
Akkalkotkar, Ameya
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated-estimation technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as a mixture of unknown composition. PMID:28414814
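MIPReSt itself ranks sources by reproducibility across subsamplings; as a much simpler, hypothetical illustration of why the Gaussian and nongaussian subspaces can be told apart at all, excess kurtosis cleanly separates a sparse (Laplace) source from a Gaussian one. This is not the paper's algorithm, only a sketch of the underlying statistical distinction.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for Gaussian data, ~3 for Laplace data."""
    d = x - x.mean()
    return (d**4).mean() / (d**2).mean()**2 - 3.0

rng = np.random.default_rng(0)
n = 100_000
sparse = rng.laplace(size=n)      # sparse (supergaussian) source
gauss = rng.standard_normal(n)    # Gaussian source, unrecoverable by ICA
```

A hybrid ICA/PCA scheme can use such a nongaussianity measure (or, as in MIPReSt, reproducibility under subsampling) to decide which components belong to the Gaussian subspace and should be handled by PCA instead.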
Towards information-optimal simulation of partial differential equations.
Leike, Reimar H; Enßlin, Torsten A
2018-03-01
Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach: the discretized field is interpreted as data providing information about a real physical field that is unknown, and the scheme seeks to conserve this information as the field evolves in time. Such an information-theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the Gaussian integrals that appear. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed about the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes such as spectral Fourier-Galerkin methods. We discuss the implications of the approximations made.
From plane waves to local Gaussians for the simulation of correlated periodic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, George H., E-mail: george.booth@kcl.ac.uk; Tsatsoulis, Theodoros; Grüneis, Andreas, E-mail: a.grueneis@fkf.mpg.de
2016-08-28
We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projector augmented-wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus and smoothing the functions so that they are faithfully represented by a plane wave basis with only a moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis, and the Gaussian basis is used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to that of their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or cannot be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, the neon solid, and water adsorption on a LiH surface, at the level of second-order Møller–Plesset perturbation theory.
Dynamics of dark hollow Gaussian laser pulses in relativistic plasma.
Sharma, A; Misra, S; Mishra, S K; Kourakis, I
2013-06-01
Optical beams with null central intensity have potential applications in the field of atom optics. The spatial and temporal evolution of a central-shadow dark hollow Gaussian (DHG) relativistic laser pulse propagating in a plasma is studied in this article from first principles. A nonlinear Schrödinger-type equation is obtained for the beam spot profile and then solved numerically to investigate the pulse propagation characteristics. A series of numerical simulations is employed to trace the profile of the focused and compressed DHG laser pulse as it propagates through the plasma. The theoretical and simulation results predict that higher-order DHG pulses show smaller divergence as they propagate and, thus, lead to enhanced energy transport.
Capacity Maximizing Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
Recovering dark-matter clustering from galaxies with Gaussianization
NASA Astrophysics Data System (ADS)
McCullagh, Nuala; Neyrinck, Mark; Norberg, Peder; Cole, Shaun
2016-04-01
The Gaussianization transform has been proposed as a method to remove the issues of scale-dependent galaxy bias and non-linearity from galaxy clustering statistics, but these benefits have yet to be thoroughly tested for realistic galaxy samples. In this paper, we test the effectiveness of the Gaussianization transform for different galaxy types by applying it to realistic simulated blue and red galaxy samples. We show that in real space, the shapes of the Gaussianized power spectra of both red and blue galaxies agree with that of the underlying dark matter, with the initial power spectrum, and with each other to smaller scales than do the statistics of the usual (untransformed) density field. However, we find that the agreement in the Gaussianized statistics breaks down in redshift space. We attribute this to the fact that red and blue galaxies exhibit very different fingers of god in redshift space. After applying a finger-of-god compression, the agreement on small scales between the Gaussianized power spectra is restored. We also compare the Gaussianization transform to the clipped galaxy density field and find that while both methods are effective in real space, they have more complicated behaviour in redshift space. Overall, we find that Gaussianization can be useful in recovering the shape of the underlying dark-matter power spectrum to k ˜ 0.5 h Mpc-1 and of the initial power spectrum to k ˜ 0.4 h Mpc-1 in certain cases at z = 0.
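The Gaussianization transform discussed above is, in its simplest rank-order form, a monotone mapping of field values onto Gaussian quantiles. A minimal sketch follows, using only NumPy and the standard library; the skewed "density field" here is a made-up lognormal stand-in, not survey data.

```python
import numpy as np
from statistics import NormalDist

def gaussianize(field):
    """Rank-order Gaussianization: map values monotonically to N(0,1) quantiles."""
    flat = field.ravel()
    ranks = flat.argsort().argsort()          # rank of each value, 0..n-1
    p = (ranks + 0.5) / flat.size             # mid-quantiles in (0, 1)
    nd = NormalDist()
    out = np.array([nd.inv_cdf(q) for q in p])
    return out.reshape(field.shape)

rng = np.random.default_rng(0)
delta = rng.lognormal(0.0, 1.0, 10_000) - 1.0   # skewed, density-contrast-like values
g = gaussianize(delta)
```

The transform preserves the ordering of the field values (and hence the topology of over- and under-dense regions) while forcing the one-point distribution to be exactly Gaussian.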
Denoising of polychromatic CT images based on their own noise properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ji Hye; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr
Purpose: Because of its high diagnostic accuracy and fast scan time, computed tomography (CT) has been widely used in various clinical applications. Since CT scans expose patients to radiation, however, dose reduction has recently been recognized as an important issue in CT imaging. Low-dose CT, however, increases noise in the image and thereby deteriorates the accuracy of diagnosis. In this paper, the authors develop an efficient denoising algorithm for low-dose CT images obtained using a polychromatic x-ray source. The algorithm is based on two steps: (i) estimation of space-variant noise statistics, which are uniquely determined according to the system geometry and scanned object, and (ii) subsequent novel conversion of the estimated noise to Gaussian noise so that an existing high-performance Gaussian noise filtering algorithm can be directly applied to CT images with non-Gaussian noise. Methods: For efficient polychromatic CT image denoising, the authors first reconstruct an image with the iterative maximum-likelihood polychromatic algorithm for CT to alleviate the beam-hardening problem. They then estimate the space-variant noise variance distribution on the image domain. Since many high-performance denoising algorithms are available for Gaussian noise, image denoising can become much more efficient if they can be used. Hence, the authors propose a novel conversion scheme to transform the estimated space-variant noise to near-Gaussian noise. In the suggested scheme, the authors first convert the image so that its mean and variance have a linear relationship, and then produce a Gaussian image via a variance-stabilizing transform. They then apply a block-matching 4D algorithm that is optimized for noise reduction of the Gaussian image, and reconvert the result to obtain the final denoised image.
To examine the performance of the proposed method, an XCAT phantom simulation and a physical phantom experiment were conducted. Results: Both simulation and experimental results show that, unlike the existing denoising algorithms, the proposed algorithm can effectively reduce the noise over the whole region of CT images while preventing degradation of image resolution. Conclusions: To effectively denoise polychromatic low-dose CT images, a novel denoising algorithm is proposed. Because this algorithm is based on the noise statistics of a reconstructed polychromatic CT image, the spatially varying noise on the image is effectively reduced, so that the denoised image has homogeneous quality over the image domain. Through a simulation and a real experiment, it is verified that the proposed algorithm delivers considerably better performance than the existing denoising algorithms.
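The variance-stabilizing step can be illustrated with the classical Anscombe transform, which maps Poisson counts to approximately unit-variance Gaussian data. The paper's scheme first linearizes the image's mean-variance relationship before stabilizing, so this is only the textbook building block, with an illustrative count rate.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson(lam) counts -> approx. N(2*sqrt(lam + 3/8), 1)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=200_000)   # Poisson-noisy measurements
stabilized = anscombe(counts)                  # variance is now ~1, independent of lam
```

After stabilization, any off-the-shelf Gaussian denoiser (such as the block-matching filter used in the paper) can be applied, and the result is mapped back with an inverse transform.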
NASA Astrophysics Data System (ADS)
Yu, Haitao; Sun, Hui; Shen, Jianqi; Tropea, Cameron
2018-03-01
The primary rainbow observed when light is scattered by a spherical drop has been exploited in the past to measure drop size and relative refractive index. However, if higher spatial resolution is required in denser drop ensembles/sprays, and to avoid multiple drops simultaneously appearing in the measurement volume, a highly focused beam is desirable, inevitably with a Gaussian intensity profile. The present study examines the primary rainbow pattern resulting when a Gaussian beam is scattered by a spherical drop and estimates the attainable accuracy when extracting size and refractive index. The scattering is computed using generalized Lorenz-Mie theory (GLMT) and a Debye series decomposition of the Gaussian beam scattering. The results of these simulations show that the measurement accuracy depends on both the beam waist radius and the position of the drop in the beam waist.
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed that effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, i.e., changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and of consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
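For embedding dimension 1 and an i.i.d. N(0,1) sequence, the recurrence rate has a simple closed form: since x_i - x_j ~ N(0, 2), REC = P(|x_i - x_j| < ε) = erf(ε/2). A quick numerical check (series length and threshold are illustrative; the paper's processes are correlated, so this is the simplest special case):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
n, eps = 2000, 0.5                       # series length and recurrence threshold
x = rng.standard_normal(n)               # i.i.d. Gaussian "process"

# Recurrence matrix for embedding dimension 1
R = np.abs(x[:, None] - x[None, :]) < eps
rec = (R.sum() - n) / (n * n - n)        # recurrence rate, main diagonal excluded

# Closed form for i.i.d. N(0,1): x_i - x_j ~ N(0, 2)
rec_theory = erf(eps / 2.0)
```

For autocorrelated processes such as AR(1) or fractional Gaussian noise, the pairwise differences are no longer identically distributed across lags, which is exactly what the paper's analytical derivations account for.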
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed; hence, it is widely acknowledged that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, the transformation with parameter λ = -1 leads to the best results.
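The favoured transformation (Box-Cox with λ = -1, i.e. y = 1 - 1/x) can be sketched on simulated Ex-Gaussian reaction times. The distribution parameters below are made up for illustration, and sample skewness is used as a crude stand-in for a full normality test.

```python
import numpy as np

def skewness(x):
    """Sample skewness: 0 for symmetric data, positive for a right tail."""
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean()**1.5

rng = np.random.default_rng(0)
n = 50_000
# Ex-Gaussian RTs (ms): normal(mu, sigma) component plus exponential(tau) tail
rt = rng.normal(300.0, 30.0, n) + rng.exponential(200.0, n)

transformed = 1.0 - 1.0 / rt   # Box-Cox transformation with lambda = -1
```

The raw RTs are strongly right-skewed (theoretical skewness here is about 1.9); the λ = -1 transform compresses the long right tail and brings the skewness much closer to zero.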
Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.
García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G
2017-08-01
The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view, so the inference takes place directly in this space. To cope with the computational complexity that the use of Gaussian processes entails, a sparse Gaussian process approximation is provided. This approximation permits efficient computation of predictions for the drift and diffusion terms using a distribution over a small subset of pseudo-samples. The proposed method has been validated using both simulated data and real data from economics and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
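A minimal sketch of the underlying idea, GP regression conditioned on a small subset of pseudo-points, in plain NumPy. This is generic GP regression on a toy series, not the paper's drift/diffusion estimator; the kernel, noise level, and number of pseudo-points are all assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)   # densely observed noisy series

# "Sparse" flavour: condition only on m pseudo-points (here, a subset of the data)
m = 30
idx = np.linspace(0, x.size - 1, m).astype(int)
xm, ym = x[idx], y[idx]

K = rbf(xm, xm) + 0.1**2 * np.eye(m)   # kernel matrix plus noise variance
alpha = np.linalg.solve(K, ym)
post_mean = rbf(x, xm) @ alpha         # posterior mean evaluated on the full grid
```

The cost of the solve is O(m^3) rather than O(n^3), which is the point of sparse approximations; proper inducing-point methods (e.g. FITC or variational schemes) optimize the pseudo-point locations instead of subsampling.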
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Behnke, Marlana N.; Przekop, Adam
2010-01-01
High-cycle fatigue of an elastic-plastic beam structure under the combined action of thermal and high-intensity non-Gaussian acoustic loadings is considered. Such loadings can be highly damaging when snap-through motion occurs between thermally post-buckled equilibria. The simulated non-Gaussian loadings investigated have a range of skewness and kurtosis typical of turbulent boundary layer pressure fluctuations in the vicinity of forward facing steps. Further, the duration and steadiness of high excursion peaks are comparable to those found in such turbulent boundary layer data. Response and fatigue life estimates are found to be insensitive to the loading distribution, with the minor exception of cases involving plastic deformation. In contrast, the fatigue life estimate was found to be highly affected by a different type of non-Gaussian loading having bursts of high excursion peaks.
NASA Astrophysics Data System (ADS)
Mitchell, Noah; Koning, Vinzenz; Vitelli, Vincenzo; Irvine, William T. M.
2014-03-01
When an elastic film conforms to a surface with Gaussian curvature, stresses arise in the film. As a result, cracks--typically studied in flat materials--interact with curvature when propagating through the system. Using silicone elastomer sheets that conform to the surface of a Gaussian bump, we find experimental evidence for the deflection of a crack propagating through the material. We interpret our experiments with reference to analytical modeling and simulations of a simplified model system.
Autonomous detection of crowd anomalies in multiple-camera surveillance feeds
NASA Astrophysics Data System (ADS)
Nordlöf, Jonas; Andersson, Maria
2016-10-01
A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as a feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
NASA Astrophysics Data System (ADS)
Rytchkov, D. S.
2017-11-01
The paper presents the results of a study of the dependence of the backscattering enhancement (BSE) factor on optical turbulence intensity for vortex Laguerre-Gaussian beams propagating along monostatic location paths in the atmosphere. A numerical split-step simulation of laser beam propagation was used to obtain BSE factor values for a laser beam propagating along a monostatic location path in the turbulent atmosphere and reflected from a diffuse target. It is shown that the BSE factor of the averaged intensity of a backscattered vortex laser beam of any topological charge is less than that of a backscattered Gaussian beam in arbitrary turbulent conditions.
Simulation and analysis of scalable non-Gaussian statistically anisotropic random functions
NASA Astrophysics Data System (ADS)
Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.
2015-12-01
Many earth and environmental (as well as other) variables, Y, and their spatial or temporal increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture some key aspects of such scaling by treating Y or ΔY as standard sub-Gaussian random functions. We were however unable to reconcile two seemingly contradictory observations, namely that whereas sample frequency distributions of Y (or its logarithm) exhibit relatively mild non-Gaussian peaks and tails, those of ΔY display peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we overcame this difficulty by developing a new generalized sub-Gaussian model which captures both behaviors in a unified and consistent manner, exploring it on synthetically generated random functions in one dimension (Riva et al., 2015). Here we extend our generalized sub-Gaussian model to multiple dimensions, present an algorithm to generate corresponding random realizations of statistically isotropic or anisotropic sub-Gaussian functions and illustrate it in two dimensions. We demonstrate the accuracy of our algorithm by comparing ensemble statistics of Y and ΔY (such as, mean, variance, variogram and probability density function) with those of Monte Carlo generated realizations. We end by exploring the feasibility of estimating all relevant parameters of our model by analyzing jointly spatial moments of Y and ΔY obtained from a single realization of Y.
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illumination environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, existing literature deals only with a simplified channel model that considers the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude-modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
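The GAT step mentioned above can be sketched as a variance-stabilization check: after the transform, Poisson-Gaussian data have approximately unit variance, i.e. the channel looks like AWGN. Unit gain, zero pedestal, and the noise parameters below are assumed for illustration and are not taken from the paper:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's multiplication method; adequate for moderate lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def gat(x, sigma):
    """Generalized Anscombe transform (unit gain, zero pedestal):
    maps Poisson + Gaussian data to approximately unit variance."""
    v = x + 3.0 / 8.0 + sigma * sigma
    return 2.0 * math.sqrt(v) if v > 0 else 0.0

rng = random.Random(0)
sigma = 1.0   # thermal (Gaussian) noise std, assumed value
lam = 30.0    # mean photon count, assumed value
raw = [poisson(rng, lam) + rng.gauss(0.0, sigma) for _ in range(100000)]
stabilized = [gat(x, sigma) for x in raw]
mean = sum(stabilized) / len(stabilized)
var = sum((x - mean) ** 2 for x in stabilized) / len(stabilized)
# raw variance is lam + sigma^2 (~31 here); stabilized variance is ~1,
# so a standard AWGN hard-decision detector becomes applicable
```

This is only the transform stage; the paper's quasi-ML and hard-decision detectors sit on top of it.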
Probing Primordial Non-Gaussianity with Weak-lensing Minkowski Functionals
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Yoshida, Naoki; Hamana, Takashi; Nishimichi, Takahiro
2012-11-01
We study the cosmological information contained in the Minkowski functionals (MFs) of weak gravitational lensing convergence maps. We show that the MFs provide strong constraints on the local-type primordial non-Gaussianity parameter f_NL. We run a set of cosmological N-body simulations and perform ray-tracing simulations of weak lensing to generate 100 independent convergence maps of a 25 deg² field of view for f_NL = -100, 0 and 100. We perform a Fisher analysis to study the degeneracy among other cosmological parameters such as the dark energy equation of state parameter w and the fluctuation amplitude σ8. We use fully nonlinear covariance matrices evaluated from 1000 ray-tracing simulations. For upcoming wide-field observations such as those from the Subaru Hyper Suprime-Cam survey with a proposed survey area of 1500 deg², the primordial non-Gaussianity can be constrained with a level of f_NL ~ 80 and w ~ 0.036 by weak-lensing MFs. If simply scaled by the effective survey area, a 20,000 deg² lensing survey using the Large Synoptic Survey Telescope will yield constraints of f_NL ~ 25 and w ~ 0.013. We show that these constraints can be further improved by a tomographic method using source galaxies in multiple redshift bins.
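The simplest Minkowski functional, V0 (the area fraction of the map above a threshold), can be illustrated on a mock one-point Gaussian "map"; a real analysis would use 2D convergence maps and also the boundary-length (V1) and Euler-characteristic (V2) functionals. All values below are assumptions for illustration:

```python
import random

def v0_area_fraction(field, nu):
    """First Minkowski functional V0: fraction of the map lying above
    a threshold set nu standard deviations over the mean."""
    m = sum(field) / len(field)
    sd = (sum((x - m) ** 2 for x in field) / len(field)) ** 0.5
    t = m + nu * sd
    return sum(1 for x in field if x > t) / len(field)

rng = random.Random(2)
kappa = [rng.gauss(0.0, 1.0) for _ in range(100000)]  # mock Gaussian map
fracs = [v0_area_fraction(kappa, nu) for nu in (-1.0, 0.0, 1.0)]
# for a Gaussian field V0(nu) follows the Gaussian tail probability;
# primordial non-Gaussianity perturbs this shape, which is what the
# Fisher analysis above exploits
```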
On the Lulejian-I Combat Model
1976-08-01
possible initial massing of the attacking side’s resources, the model tries to represent in a game-theoretic context the adversary nature of the...sequential game, as outlined in [A]. In principle, it is necessary to run the combat simulation once for each possible set of sequentially chosen...sequential game, in which the evaluative portion of the model (i.e., the combat assessment) serves to compute intermediate and terminal payoffs for the
Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes
2016-06-01
making (SDM) tasks in dynamic environments with simulated and physical robots. 15. SUBJECT TERMS Sequential decision making, lifelong learning, transfer...sequential decision-making (SDM) tasks in dynamic environments with both simple benchmark tasks and more complex aerial and ground robot tasks. Our work...and ground robots in the presence of disturbances: We applied our methods to the problem of learning controllers for robots with novel disturbances in
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to resource savings of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain. PMID:28282371
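The kind of saving described above can be sketched with a toy two-look design. The Pocock-type boundary (z ≈ 2.178 for two looks at two-sided α = 0.05), unit-variance normal outcomes, and a single interim look at half the sample are simplifying assumptions; the designs simulated in the paper may differ:

```python
import math
import random

def two_look_trial(rng, delta, n_per_group, z_bound=2.178):
    """One simulated trial with an interim look at half the sample
    (Pocock-type boundary for 2 looks, two-sided alpha = 0.05).
    Outcomes are N(delta, 1) vs N(0, 1); returns units used per group."""
    a = [rng.gauss(delta, 1.0) for _ in range(n_per_group)]
    b = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
    for n in (n_per_group // 2, n_per_group):
        diff = sum(a[:n]) / n - sum(b[:n]) / n
        z = diff / math.sqrt(2.0 / n)
        if abs(z) >= z_bound:
            return n  # stop early for efficacy
    return n_per_group

rng = random.Random(7)
consumed = [two_look_trial(rng, 1.0, 18) for _ in range(2000)]
avg_fraction = sum(consumed) / len(consumed) / 18
# with d = 1 and n = 18 per group, a sizable fraction of trials stops
# at the interim look, so well under 100% of units are consumed on
# average, in line with the ~80% figure quoted above
```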
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
PET image reconstruction using multi-parametric anato-functional priors
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Belzunce, Martin A.; Niccolini, Flavia; Politis, Marios; Prieto, Claudia; Turkheimer, Federico; Hammers, Alexander; Reader, Andrew J.
2017-08-01
In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data in order to address the limitations of the conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for reconstruction of low-count PET data is also introduced and demonstrated, compared with standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors, including the Kaipio prior and the non-local Tikhonov prior with Bowsher and Gaussian similarity kernels, are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and furthermore, for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET unique lesions, while still reconstructing functional boundaries with corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edge and PET unique features and also an improved bias-variance performance.
In agreement with the simulation results, the clinical results also showed that the Gaussian prior with voxel-based feature vectors, the Bowsher and the joint Burg entropy priors were the best performing priors. However, for the FDG dataset with simulated tumours, the TV and proposed priors were capable of preserving the PET-unique tumours. Finally, an important outcome was the demonstration that the MAP reconstruction of a low-count FDG PET dataset using the proposed joint entropy prior can lead to comparable image quality to a conventional ML reconstruction with up to 5 times more counts. In conclusion, multi-parametric anato-functional priors provide a solution to address the pitfalls of the conventional priors and are therefore likely to increase the diagnostic confidence in MR-guided PET image reconstructions.
Cosmological information in Gaussianized weak lensing signals
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.; Kiessling, A.
2011-11-01
Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. 
We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non-linear regime and resort to an exploration of parameter space via simulations.
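The one-point Box-Cox Gaussianization discussed above can be sketched on a toy lognormal field: scan a grid of λ values and keep the transform that makes the sample distribution least skewed. The lognormal stand-in for the (shifted) convergence and the λ grid are illustrative assumptions:

```python
import math
import random

def box_cox(x, lam):
    """Box-Cox transform of positive x (lam = 0 reduces to log)."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def skewness(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 3 for x in xs) / len(xs) / v ** 1.5

rng = random.Random(3)
# toy positively skewed field standing in for the shifted convergence
field = [math.exp(rng.gauss(0.0, 0.6)) for _ in range(50000)]

# pick the lambda giving the most Gaussian (least skewed) transform
grid = [k / 10.0 for k in range(-10, 11)]
best = min(grid, key=lambda l: abs(skewness([box_cox(x, l) for x in field])))
orig_skew = abs(skewness(field))
best_skew = abs(skewness([box_cox(x, best) for x in field]))
# for a lognormal field the optimum sits near lambda = 0 (the log map),
# mirroring the offset logarithmic transformation discussed above
```

As the abstract notes, Gaussianizing the one-point distribution does not decorrelate the power spectrum across angular frequencies; that limitation is not visible in this one-point sketch.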
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
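The Gaussian-optics envelope that replaces the geometric point focus can be sketched as below: photon launch positions in a Monte Carlo run can be distributed across the hyperbolic radius w(z) instead of being aimed at a single point. The waist and wavelength values are assumptions for illustration, not parameters from the paper:

```python
import math

w0 = 2.0e-6    # beam waist, 2 um (assumed)
lam = 800e-9   # wavelength, 800 nm (assumed)
zr = math.pi * w0 * w0 / lam  # Rayleigh range

def beam_radius(z, w0=w0, zr=zr):
    """Gaussian-optics 1/e^2 beam radius about the focal plane z = 0;
    launching photons across this hyperbolic envelope avoids the
    unphysical point focus of purely geometric ray tracing."""
    return w0 * math.sqrt(1.0 + (z / zr) ** 2)
```

At z = 0 the radius is the diffraction-limited waist w0, and it grows by a factor of √2 one Rayleigh range away, which is the behavior a corrected Monte Carlo launch should reproduce.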
Nonlinear estimation theory applied to orbit determination
NASA Technical Reports Server (NTRS)
Choe, C. Y.
1972-01-01
The development of an approximate nonlinear filter using the Martingale theory and appropriate smoothing properties is considered. Both the first order and the second order moments were estimated. The filter developed can be classified as a modified Gaussian second order filter. Its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission. In addition to the modified Gaussian second order filter, the modified truncated second order filter was also evaluated in the simulated study. Results obtained with each of these filters were compared with numerical results obtained with the extended Kalman filter and the performance of each filter is determined by comparison with the actual estimation errors. The simulations were designed to determine the effects of the second order terms in the dynamic state relations, the observation state relations, and the Kalman gain compensation term. It is shown that the Kalman gain-compensated filter which includes only the Kalman gain compensation term is superior to all of the other filters.
Modeling methods for merging computational and experimental aerodynamic pressure data
NASA Astrophysics Data System (ADS)
Haderlie, Jacob C.
This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential, a.k.a. online, Gaussian processes; batch Gaussian processes; and multi-fidelity additive corrector) on the merits of accuracy and computational cost.
The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT data could serve as a "merging" because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model basis function concept seems to place more "weight" on the Cp values from the wind tunnel (WT) because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), product of experts, and generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable, and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction. On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single expert approach. 
The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) in the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process of Cp data allows a more "hands-off" approach to aircraft design and analysis, (i.e., not as many engineers needed to debate the Cp distribution shape) and generates Cp predictions at any location on the wing. However, the cost with these benefits are engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing / First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.).
NASA Astrophysics Data System (ADS)
Shukri, Seyfan Kelil
2017-01-01
We have performed kinetic Monte Carlo (KMC) simulations to investigate the effect of charge carrier density on the electrical conductivity and carrier mobility in disordered organic semiconductors using a lattice model. The density of states (DOS) of the system is taken to be either Gaussian or exponential. Our simulations reveal that the mobility of the charge carriers increases with charge carrier density for both DOSs. In contrast, the mobility of charge carriers decreases as the disorder increases. In addition, the shape of the DOS has a clearly visible effect on the charge-transport properties as a function of density. On the other hand, for the same distribution width and at low carrier density, the change in conductivity and mobility for a Gaussian DOS is more pronounced than that for the exponential DOS.
NASA Astrophysics Data System (ADS)
Fallah-Shorshani, Masoud; Shekarrizfard, Maryam; Hatzopoulou, Marianne
2017-03-01
The development and use of dispersion models that simulate traffic-related air pollution in urban areas has risen significantly in support of air pollution exposure research. In order to accurately estimate population exposure, it is important to generate concentration surfaces that take into account near-road concentrations as well as the transport of pollutants throughout an urban region. In this paper, an integrated modelling chain was developed to simulate ambient nitrogen dioxide (NO2) in a dense urban neighbourhood while taking into account traffic emissions, the regional background, and the transport of pollutants within the urban canopy. For this purpose, we developed a hybrid configuration including 1) a street canyon model, which simulates pollutant transfer along streets and intersections, taking into account the geometry of buildings and other obstacles, and 2) a Gaussian puff model, which resolves the transport of contaminants at the top of the urban canopy and accounts for regional meteorology. Each dispersion model was validated against measured concentrations and compared against the hybrid configuration. Our results demonstrate that the hybrid approach significantly improves the output of each model on its own. Both the Gaussian model and the street-canyon model clearly underestimate the observed data, because the Gaussian model ignores building effects and the canyon model neglects the contribution of other roads. The hybrid approach reduced the RMSE (of observed vs. predicted concentrations) by 16%-25% compared to each model on its own, and increased FAC2 (fraction of predictions within a factor of two of the observations) by 10%-34%.
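The two validation metrics quoted above, RMSE and FAC2, can be computed as follows; the observation and prediction values are hypothetical, purely to exercise the functions:

```python
import math

def rmse(obs, pred):
    """Root-mean-square error of predictions against observations."""
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of observations."""
    return sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0) / len(obs)

obs = [20.0, 35.0, 50.0, 15.0]    # hypothetical NO2 observations
pred = [18.0, 60.0, 55.0, 40.0]   # hypothetical model predictions
print(round(fac2(obs, pred), 2))  # → 0.75 (3 of 4 within a factor of two)
```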
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
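Transforming computer-generated uniform random numbers to a prescribed non-Gaussian distribution is typically done by inverse-CDF sampling, which can be sketched as below for two of the distributions named above (exponential and Weibull); the parameter values are assumptions for illustration:

```python
import math
import random

def exponential_load(rng, scale):
    """Inverse-CDF draw: F^-1(u) = -scale * ln(1 - u)."""
    return -scale * math.log(1.0 - rng.random())

def weibull_load(rng, scale, shape):
    """Inverse-CDF draw for a Weibull(scale, shape) load peak."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

rng = random.Random(11)
history = [weibull_load(rng, 1.0, 2.0) for _ in range(100000)]
mean = sum(history) / len(history)
# Weibull(scale=1, shape=2) has mean Gamma(1.5) = sqrt(pi)/2 ~ 0.886,
# a quick sanity check on the transformed load history
```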
Perturbative Gaussianizing transforms for cosmological fields
NASA Astrophysics Data System (ADS)
Hall, Alex; Mead, Alexander
2018-01-01
Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.
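A toy analogue of such a Gaussianizing transform at leading (quadratic) order can be sketched by subtracting a local quadratic correction whose coefficient is tuned to null the sample skewness. The construction of the weakly non-Gaussian field and the coefficient grid below are illustrative assumptions, not the authors' polyspectra-based recipe:

```python
import random

def skew(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 3 for x in xs) / len(xs) / v ** 1.5

rng = random.Random(5)
# weakly non-Gaussian field: Gaussian plus a small quadratic piece
g = [rng.gauss(0.0, 1.0) for _ in range(50000)]
x = [gi + 0.1 * (gi * gi - 1.0) for gi in g]

var = sum(xi * xi for xi in x) / len(x)
# leading-order Gaussianization: subtract a quadratic correction whose
# coefficient is chosen to null the sample skewness
grid = [k / 200.0 for k in range(0, 41)]
a = min(grid, key=lambda c: abs(skew([xi - c * (xi * xi - var) for xi in x])))
y = [xi - a * (xi * xi - var) for xi in x]
# the transformed field y has far smaller skewness than x
```

The paper's construction works on the full multivariate distribution using measured polyspectra; this sketch only shows the one-point, leading-order idea.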
Yang, Hao; Cheng, Jian; Chen, Mingjun; Wang, Jian; Liu, Zhichao; An, Chenhui; Zheng, Yi; Hu, Kehui; Liu, Qi
2017-07-24
In high power laser systems, precision micro-machining is an effective method to mitigate laser-induced surface damage growth on potassium dihydrogen phosphate (KDP) crystal. Repaired surfaces with smooth spherical and Gaussian contours can alleviate the light-field modulation caused by a damage site. To obtain the optimal repair structure parameters, finite element method (FEM) models for simulating the light intensification caused by mitigation pits on the rear KDP surface were established. The light intensity modulation of these repair profiles was compared by varying the structure parameters. The results indicate the modulation is mainly caused by mutual interference between the reflected and incident light on the rear surface. Owing to total reflection, the light intensity enhancement factors (LIEFs) of the spherical and Gaussian mitigation pits sharply increase when the width-depth ratios are near 5.28 and 3.88, respectively. To achieve the optimal mitigation effect, width-depth ratios greater than 5.3 and 4.3 should be used for the spherical and Gaussian repaired contours, respectively. In particular, for width-depth ratios greater than 5.3, the spherical repaired contour is preferred to achieve lower light intensification. The laser damage test shows that when the width-depth ratios are larger than 5.3, the spherical repaired contour presents higher laser damage resistance than the Gaussian repaired contour, which agrees well with the simulation results.
Faheem, Muhammad; Heyden, Andreas
2014-08-12
We report the development of a quantum mechanics/molecular mechanics free energy perturbation (QM/MM-FEP) method for modeling chemical reactions at metal-water interfaces. This novel solvation scheme combines plane-wave density functional theory (DFT), periodic electrostatic embedded cluster method (PEECM) calculations using Gaussian-type orbitals, and classical molecular dynamics (MD) simulations to obtain a free energy description of a complex metal-water system. We derive a potential of mean force (PMF) of the reaction system within the QM/MM framework. A fixed-size, finite ensemble of MM conformations is used to permit precise evaluation of the PMF of QM coordinates and its gradient defined within this ensemble. Local conformations of adsorbed reaction moieties are optimized using sequential MD-sampling and QM-optimization steps. An approximate reaction coordinate is constructed using a number of interpolated states and the free energy difference between adjacent states is calculated using the QM/MM-FEP method. By avoiding on-the-fly QM calculations and by circumventing the challenges associated with statistical averaging during MD sampling, a computational speedup of multiple orders of magnitude is realized. The method is systematically validated against the results of ab initio QM calculations and demonstrated for C-C cleavage in double-dehydrogenated ethylene glycol on a Pt(111) model surface.
Guagliardi, Ilaria; Cicchella, Domenico; De Rosa, Rosanna; Buttafuoco, Gabriele
2015-07-01
Exposure to lead (Pb) may adversely affect human health. Mapping soil Pb contents is essential to obtain a quantitative estimate of the potential risk of Pb contamination. The main aim of this paper was to determine the soil Pb concentrations in the urban and peri-urban area of Cosenza-Rende, to map their spatial distribution, and to assess the probability that soil Pb concentration exceeds a critical threshold that might cause concern for human health. Samples were collected at 149 locations from residual and non-residual topsoil in gardens, parks, flower-beds, and agricultural fields. The fine earth fraction of the soil samples was analyzed by X-ray fluorescence spectrometry. Stochastic images generated by sequential Gaussian simulation were jointly combined to calculate the probability of exceeding the critical threshold, which could be used to delineate the potentially risky areas. Results showed areas in which Pb concentration values were higher than the Italian regulatory limits. These polluted areas were quite large and could likely create a significant health risk for human beings and vegetation in the near future. The results demonstrated that the proposed approach can be used to study soil contamination, produce geochemical maps, and identify hot-spot areas of soil Pb concentration. Copyright © 2015. Published by Elsevier B.V.
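The probability-of-exceedance step described above can be sketched as below, with a mock ensemble standing in for sequential Gaussian simulation output; the lognormal concentrations and the threshold value are assumptions for illustration, not the study's data:

```python
import math
import random

def exceedance_probability(realizations, threshold):
    """Per-location probability that the simulated concentration
    exceeds the threshold, estimated by counting over an ensemble
    of equally probable realizations."""
    n_real = len(realizations)
    return [sum(1 for r in realizations if r[j] > threshold) / n_real
            for j in range(len(realizations[0]))]

rng = random.Random(9)
# mock ensemble standing in for sequential Gaussian simulation output:
# 200 realizations of lognormal "Pb concentrations" at 50 locations
ens = [[100.0 * math.exp(rng.gauss(0.0, 0.5)) for _ in range(50)]
       for _ in range(200)]
probs = exceedance_probability(ens, 100.0)  # threshold value assumed
# locations with high probs would be flagged as potentially risky areas
```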
Rubber elasticity for percolation network consisting of Gaussian chains.
Nishi, Kengo; Noguchi, Hiroshi; Sakai, Takamasa; Shibayama, Mitsuhiro
2015-11-14
A theory describing the elastic modulus of percolation networks of Gaussian chains on general lattices, such as square and cubic lattices, is proposed and its validity is examined with simulations and mechanical experiments on well-defined polymer networks. The theory was developed by generalizing the effective medium approximation (EMA) for Hookean spring networks to Gaussian chain networks. From EMA theory, we found that the ratio of the elastic modulus at degree of cross-linking reaction p, G, to that at p = 1, G0, must satisfy G/G0 = (p - 2/f)/(1 - 2/f) if the positions of sites can be determined so as to meet the force balance, where f is the lattice functionality. However, the EMA prediction is not applicable near the percolation threshold because EMA is a mean-field theory. Thus, we combine real-space renormalization and EMA and propose a theory called real-space renormalized EMA (REMA). The elastic modulus predicted by REMA is in excellent agreement with the results of simulations and experiments on near-ideal diamond-lattice gels.
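The EMA prediction quoted above is a one-liner to check numerically; f = 4 (a diamond lattice, as in the gels studied) is used here for illustration:

```python
def ema_modulus_ratio(p, f):
    """EMA prediction G/G0 = (p - 2/f)/(1 - 2/f) for a Gaussian-chain
    network on a lattice of functionality f at reaction degree p;
    it vanishes at the EMA threshold p = 2/f."""
    return (p - 2.0 / f) / (1.0 - 2.0 / f)

# diamond lattice (f = 4): EMA threshold at p = 0.5
print(ema_modulus_ratio(1.0, 4))   # → 1.0 (fully reacted network)
print(ema_modulus_ratio(0.5, 4))   # → 0.0 (EMA threshold)
```

The linear vanishing at p = 2/f is exactly the mean-field behavior that, as the abstract notes, fails near the true percolation threshold and motivates the REMA correction.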
Electric Field Fluctuations in Water
NASA Astrophysics Data System (ADS)
Thorpe, Dayton; Limmer, David; Chandler, David
2013-03-01
Charge transfer in solution, such as autoionization and ion pair dissociation in water, is governed by rare electric field fluctuations of the solvent. Knowing the statistics of such fluctuations can help explain the dynamics of these rare events. Trajectories short enough to be tractable by computer simulation are virtually certain not to sample the large fluctuations that promote rare events. Here, we employ importance sampling techniques with classical molecular dynamics simulations of liquid water to study statistics of electric field fluctuations far from their means. We find that the distributions of electric fields located on individual water molecules are not in general Gaussian. Near the mean, this non-Gaussianity is due to the internal charge distribution of the water molecule. Further from the mean, however, there is a previously unreported Bjerrum-like defect that stabilizes certain large fluctuations out of equilibrium. As expected, differences in electric fields acting between molecules are Gaussian to a remarkable degree. By studying these differences, though, we are able to determine what configurations result not only in large electric fields, but also in electric fields with long spatial correlations that may be needed to promote charge separation.
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
Spatiotemporal Airy Ince-Gaussian wave packets in strongly nonlocal nonlinear media.
Peng, Xi; Zhuang, Jingli; Peng, Yulian; Li, DongDong; Zhang, Liping; Chen, Xingyu; Zhao, Fang; Deng, Dongmei
2018-03-08
The self-accelerating Airy Ince-Gaussian (AiIG) and Airy helical Ince-Gaussian (AihIG) wave packets in strongly nonlocal nonlinear media (SNNM) are obtained by solving the strongly nonlocal nonlinear Schrödinger equation. For the first time, the propagation properties of three-dimensional localized AiIG and AihIG breathers and solitons in the SNNM are demonstrated; these spatiotemporal wave packets maintain self-accelerating and approximately dispersion-free propagation in the temporal dimension while oscillating periodically (breather state) or remaining steady (soliton state) in the spatial dimension. In particular, numerical experiments of the spatial intensity distribution, numerical simulations of the spatiotemporal distribution, and the transverse energy flow and angular momentum in the SNNM are presented. Typical examples of the obtained solutions are given based on the ratio between the input power and the critical power, the ellipticity, and the strong nonlocality parameter. Comparisons of the analytical solutions with numerical simulations and numerical experiments of the AiIG and AihIG optical solitons show that the numerical results agree well with the analytical solutions in the case of strong nonlocality.
Effect of asymmetric concentration profile on thermal conductivity in Ge/SiGe superlattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Konstanze R., E-mail: konstanze.hahn@dsf.unica.it; Cecchi, Stefano; Colombo, Luciano
2016-05-16
The effect of the chemical composition of Si/Ge-based superlattices on their thermal conductivity has been investigated using molecular dynamics simulations. Simulation cells of Ge/SiGe superlattices were generated with different concentration profiles such that the Si concentration follows a step-like, a saw-tooth, a Gaussian, or a gamma-type function in the direction of the heat flux. The step-like and saw-tooth profiles mimic ideally sharp interfaces, whereas the Gaussian and gamma-type profiles are smooth functions imitating atomic diffusion at the interface as obtained experimentally. Symmetry effects were investigated by comparing the symmetric profiles of the step-like and Gaussian functions to the asymmetric profiles of the saw-tooth and gamma-type functions. At longer sample lengths and a similar degree of interdiffusion, the thermal conductivity is found to be lower for asymmetric profiles. Furthermore, it is found that with smooth concentration profiles, where atomic diffusion at the interface takes place, the thermal conductivity is higher compared to systems with atomically sharp concentration profiles.
NASA Astrophysics Data System (ADS)
Yeung, Chuck
2018-06-01
The assumption that the local order parameter is related to an underlying spatially smooth auxiliary field, u(r⃗,t), is a common feature in theoretical approaches to non-conserved order parameter phase separation dynamics. In particular, the ansatz that u(r⃗,t) is a Gaussian random field leads to predictions for the decay of the autocorrelation function which are consistent with observations, but distinct from predictions using alternative theoretical approaches. In this paper, the auxiliary field is obtained directly from simulations of the time-dependent Ginzburg-Landau equation in two and three dimensions. The results show that u(r⃗,t) is equivalent to the distance to the nearest interface. In two dimensions, the probability distribution, P(u), is well approximated as Gaussian except for small values of u/L(t), where L(t) is the characteristic length-scale of the patterns. The behavior of P(u) in three dimensions is more complicated; the non-Gaussian region for small u/L(t) is much larger than that in two dimensions, but the tails of P(u) begin to approach a Gaussian form at intermediate times. However, at later times, the tails of the probability distribution appear to decay faster than a Gaussian distribution.
An Urban Diffusion Simulation Model for Carbon Monoxide
ERIC Educational Resources Information Center
Johnson, W. B.; And Others
1973-01-01
A relatively simple Gaussian-type diffusion simulation model for calculating urban carbon monoxide (CO) concentrations as a function of local meteorology and the distribution of traffic is described. The model can be used in two ways: in the synoptic mode and in the climatological mode. (Author/BL)
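As an illustration of the Gaussian-type diffusion modeling referred to above (not the paper's actual model), a standard Gaussian plume with ground reflection can be sketched as follows; all parameter values are assumptions:

```python
import numpy as np

# Standard Gaussian plume: Q emission rate (g/s), u wind speed (m/s),
# sigma_y / sigma_z lateral and vertical dispersion parameters (m),
# H effective source height (m). Coordinates: y crosswind, z vertical.
def gaussian_plume(y, z, Q, u, sigma_y, sigma_z, H):
    norm = Q / (2.0 * np.pi * u * sigma_y * sigma_z)
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    # Ground reflection is modeled by an image source at height -H.
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return norm * lateral * vertical

# Ground-level centerline concentration for a near-ground source.
c = gaussian_plume(y=0.0, z=0.0, Q=10.0, u=3.0, sigma_y=50.0, sigma_z=20.0, H=2.0)
```

An urban model of this type sums such contributions over many traffic line sources, with the dispersion parameters taken from the local stability class.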
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of the limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated in a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench incorporates these statistical compact models, and the simulated distributions of SRAM writability performance closely match measurements. Our proposed statistical compact model parameter extraction methodology also has the potential of predicting non-Gaussian behavior in statistical circuit performance through mixtures of Gaussian distributions.
NASA Astrophysics Data System (ADS)
Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.
2015-05-01
We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, although small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model will become inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility, especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^{0.64} = 0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless, our models provide only moderately good fits, as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics, of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.
Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance
NASA Astrophysics Data System (ADS)
Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman
2016-02-01
The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that goes into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc^-1, and can be even ~200 times larger at k ~ 5 Mpc^-1. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc^-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
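The ensemble estimate of the error-covariance matrix described above can be sketched as follows (the ensemble here is synthetic and the bin values are illustrative assumptions, not the simulated 21-cm data):

```python
import numpy as np

# Given many realizations of the binned power spectrum estimate P(k),
# the error covariance is C_ij = <dP_i dP_j> over the ensemble. For a
# Gaussian random field the off-diagonal terms vanish and
# C_ii ~ P_i^2 / N_i, with N_i the number of modes in bin i; the
# trispectrum adds to both the diagonal and off-diagonal terms.
def error_covariance(pk_ensemble):
    """pk_ensemble: array of shape (n_realizations, n_kbins)."""
    dP = pk_ensemble - pk_ensemble.mean(axis=0)
    return dP.T @ dP / (pk_ensemble.shape[0] - 1)

rng = np.random.default_rng(1)
# Independent Gaussian bins, so C should be near-diagonal here.
ens = rng.normal(loc=[10.0, 5.0, 2.0], scale=0.5, size=(500, 3))
C = error_covariance(ens)
```

With correlated (trispectrum-dominated) errors, statistically significant off-diagonal entries appear, which is exactly the diagnostic the abstract describes.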
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Ziemer, R. E.; Fashano, M. J.
1975-01-01
This paper reviews the SYSTID technique for performance evaluation of communication systems using time-domain computer simulation. An example program illustrates the language. The inclusion of both Gaussian and impulse noise models makes accurate simulation possible in a wide variety of environments. A very flexible postprocessor makes accurate and efficient performance evaluation possible.
Focusing of concentric piecewise vector Bessel-Gaussian beam
NASA Astrophysics Data System (ADS)
Li, Jinsong; Fang, Ying; Zhou, Shenghua; Ye, Youxiang
2010-12-01
The focusing properties of a concentric piecewise vector Bessel-Gaussian beam are investigated in this paper. The beam consists of three portions: the center circular portion and the outer annular portion are radially polarized, while the inner annular portion is generalized-polarized with a tunable polarization angle. Numerical simulations show that the evolution of the focal pattern is altered considerably with different Bessel parameters in the Bessel term of the vector Bessel-Gaussian beam. The polarization angle also affects the focal pattern remarkably. Some interesting focal patterns may appear, such as a two-peak dark hollow focus, a ring focus, a spherical shell focus, a cylindrical shell focus, and a multi-ring-peak focus, and a transverse focal switch occurs with increasing polarization angle of the inner annular portion, which may be used in optical manipulation.
NASA Astrophysics Data System (ADS)
Xu, Lu; Yu, Lianghong; Liang, Xiaoyan
2016-04-01
We present for the first time a scheme to amplify a Laguerre-Gaussian vortex beam based on non-collinear optical parametric chirped pulse amplification (OPCPA). In addition, a three-dimensional numerical model of non-collinear optical parametric amplification was deduced in the frequency domain, in which the effects of non-collinear configuration, temporal and spatial walk-off, group-velocity dispersion and diffraction were also taken into account, to trace the dynamics of the Laguerre-Gaussian vortex beam and investigate its critical parameters in the non-collinear OPCPA process. Based on the numerical simulation results, the scheme shows promise for implementation in a relativistic twisted laser pulse system, which will diversify the light-matter interaction field.
NASA Astrophysics Data System (ADS)
Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.
2016-12-01
Disposal of high-level radioactive waste in a deep geological repository in crystalline host rock is one of the potential options for long-term isolation. Characterization of the natural barrier system is an important component of this disposal option. In this study we present numerical modeling of flow and transport in fractured crystalline rock using an updated fracture continuum model (FCM). The FCM is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The original method by McKenna and Reeves (2005) has been updated to provide capabilities that enhance the representation of fractured rock. As reported in Hadgu et al. (2015), the method was first modified to include fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation. More recently the FCM has been extended to include three different methods. (1) The Sequential Gaussian Simulation (SGSIM) method uses spatial correlation to generate fractures and define their properties for the FCM. (2) The ELLIPSIM method randomly generates a specified number of ellipses with properties defined by probability distributions; each ellipse represents a single fracture. (3) Direct conversion of discrete fracture network (DFN) output. Test simulations were conducted to simulate flow and transport using ELLIPSIM and direct conversion of DFN output. The simulations used a 1 km x 1 km x 1 km model domain and a structured grid with blocks of size 10 m x 10 m x 10 m, resulting in a total of 10^6 grid blocks. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the different methods were applied to generate representative permeability fields. The PFLOTRAN (Hammond et al., 2014) code was used to simulate flow and transport in the domain. Simulation results and analysis are presented.
The results indicate that the FCM approach is a viable method for modeling fractured crystalline rocks. The FCM is a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains. SAND2016-7509 A
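A hypothetical 2-D sketch in the spirit of the ELLIPSIM method described above (the actual implementation is 3-D and maps fracture properties more carefully; every parameter range here is an assumption):

```python
import numpy as np

# Randomly generated ellipses, each representing a single fracture, are
# rasterized onto a regular grid: cells intersected by a fracture receive
# a high "fracture" permeability, the rest the low matrix permeability.
def ellipsim_2d(n_frac, nx, ny, k_matrix=1e-18, k_frac=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:ny, 0:nx]
    perm = np.full((ny, nx), k_matrix)
    for _ in range(n_frac):
        cx, cy = rng.uniform(0, nx), rng.uniform(0, ny)
        a, b = rng.uniform(2.0, nx / 4), rng.uniform(0.5, 2.0)  # semi-axes
        theta = rng.uniform(0, np.pi)                           # orientation
        dx, dy = xx - cx, yy - cy
        # Rotate into the ellipse frame and test the ellipse equation.
        u = dx * np.cos(theta) + dy * np.sin(theta)
        v = -dx * np.sin(theta) + dy * np.cos(theta)
        perm[(u / a) ** 2 + (v / b) ** 2 <= 1.0] = k_frac
    return perm

perm = ellipsim_2d(n_frac=30, nx=100, ny=100)
```

The resulting gridded permeability field can then be handed to a continuum flow simulator, which is the point of the fracture continuum approach.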
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, John; Jacobson, Noah Tobias; Baczewski, Andrew
EMTpY is an implementation of effective mass theory in Python. It is designed to simulate semiconductor qubits within a non-perturbative, multi-valley effective mass theory framework using robust Gaussian basis sets.
Gordon, Jeremy W; Milshteyn, Eugene; Marco-Rius, Irene; Ohliger, Michael; Vigneron, Daniel B; Larson, Peder E Z
2017-09-01
The purpose of this work was to explore the impact of slice profile effects on apparent diffusion coefficient (ADC) mapping of hyperpolarized (HP) substrates. Slice profile effects were simulated using a Gaussian radiofrequency (RF) pulse with a variety of flip angle schedules and b-value ordering schemes. A long-T1 water phantom was used to validate the simulation results, and ADC mapping of HP [13C,15N2]urea was performed on the murine liver to assess these effects in vivo. Slice profile effects result in excess signal after repeated RF pulses, causing bias in HP measurements. The largest error occurs for metabolites with small ADCs, resulting in up to 10-fold overestimation for metabolites that are in more-restricted environments. A mixed b-value scheme substantially reduces this bias, whereas scaling the slice-select gradient can mitigate it completely. In vivo, the liver ADC of hyperpolarized [13C,15N2]urea is nearly 70% lower (0.99 ± 0.22 vs 1.69 ± 0.21 × 10^-3 mm^2/s) when slice-select gradient scaling is used. Slice profile effects can lead to bias in HP ADC measurements. A mixed b-value ordering scheme can reduce this bias compared to sequential b-value ordering. Slice-select gradient scaling can also correct for this deviation, minimizing bias and providing more-precise ADC measurements of HP substrates. Magn Reson Med 78:1087-1092, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes; the proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
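One ingredient of such a design can be illustrated: for a 2-arm comparison of means with a fixed total sample size, the allocation fraction that minimizes the variance of the difference-in-means statistic is the Neyman allocation. This is only the static optimum under assumed arm standard deviations s1 and s2, not the paper's sequential algorithm, which updates the rate from posterior estimates as patients accrue:

```python
# Var(x1_bar - x2_bar) = s1^2/(r*n) + s2^2/((1-r)*n) is minimized over the
# arm-1 allocation fraction r at r = s1 / (s1 + s2) (Neyman allocation).
def optimal_allocation(s1, s2):
    return s1 / (s1 + s2)

# Equal variances give balanced 1:1 randomization; a noisier arm is
# allocated a larger share of the patients.
print(optimal_allocation(1.0, 1.0))  # 0.5
```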
Online Bayesian Learning with Natural Sequential Prior Distribution Used for Wind Speed Prediction
NASA Astrophysics Data System (ADS)
Cheggaga, Nawal
2017-11-01
Predicting wind speed is one of the most important and critical tasks in a wind farm. All approaches that directly describe the stochastic dynamics of the meteorological data face problems related to the nature of its non-Gaussian statistics and the presence of seasonal effects. In this paper, online Bayesian learning has been successfully applied to online learning for three-layer perceptrons used for wind speed prediction. First, a conventional transition model based on the squared norm of the difference between the current parameter vector and the previous parameter vector was used. We noticed that this transition model does not adequately consider the difference between the current and the previous wind speed measurement. To adequately consider this difference, we use a natural sequential prior. The proposed transition model uses a Fisher information matrix to consider the difference between the observation models more naturally. The obtained results showed good agreement between the measured and predicted series. The mean relative error over the whole data set does not exceed 5%.
Li, Xiaowei; Mei, Qingqing; Dai, Xiaohu; Ding, Guoji
2017-03-01
Thermogravimetric analysis, a Gaussian-fit-peak model (GFPM), and a distributed activation energy model (DAEM) were first used to explore the effect of anaerobic digestion on the sequential pyrolysis kinetics of four organic solid wastes (OSW). Results showed that the OSW weight loss mainly occurred in the second pyrolysis stage, which relates to organic matter decomposition. Compared with the raw substrate, the weight loss of the corresponding digestate was lower in the range of 180-550°C but higher in 550-900°C. GFPM analysis revealed that organic components volatilizing at peak temperatures of 188-263, 373-401, and 420-462°C degraded faster during anaerobic digestion than those at 274-327°C. DAEM analysis showed that anaerobic digestion had discrepant effects on the activation energy of pyrolysis for the four OSW, possibly because of their different organic compositions. Further investigation of the specific organic matter, i.e., protein-like and carbohydrate-like groups, is required to confirm this assumption. Copyright © 2016 Elsevier Ltd. All rights reserved.
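The GFPM idea, decomposing the derivative weight-loss (DTG) curve into a sum of Gaussian peaks each attributed to one organic component, can be sketched as follows (the peak count, temperatures, and noise level are illustrative assumptions, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-peak Gaussian model of a DTG curve; real GFPM fits use as many
# peaks as there are resolvable volatilization events.
def two_gaussians(T, a1, mu1, s1, a2, mu2, s2):
    g = lambda a, mu, s: a * np.exp(-(T - mu) ** 2 / (2 * s ** 2))
    return g(a1, mu1, s1) + g(a2, mu2, s2)

T = np.linspace(150.0, 550.0, 400)          # temperature axis, deg C
true = two_gaussians(T, 1.0, 300.0, 30.0, 0.6, 430.0, 25.0)
rng = np.random.default_rng(2)
dtg = true + rng.normal(0.0, 0.01, T.size)  # synthetic noisy DTG signal
# Nonlinear least squares; p0 are rough initial guesses for each peak.
popt, _ = curve_fit(two_gaussians, T, dtg,
                    p0=[1.0, 290.0, 40.0, 0.5, 440.0, 30.0])
```

The fitted peak temperatures and areas are what allow the per-component degradation rates quoted in the abstract to be compared between raw substrate and digestate.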
NASA Astrophysics Data System (ADS)
Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.
2014-02-01
Battery state monitoring is one of the key techniques in battery management systems, e.g., in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery's remaining useful life. The main challenges for state estimation in LiFePO4 batteries are the flat open-circuit-voltage characteristic over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, the results show the benefits of the proposed method against estimation with an Extended Kalman filter.
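A minimal Sequential Monte Carlo (particle filter) sketch of the SOC-estimation idea; the battery model here is deliberately simple and hypothetical (a linear OCV curve, known noise levels, no hysteresis or impedance states), not the paper's dual estimator:

```python
import numpy as np

def particle_filter_soc(currents, voltages, dt, capacity, ocv,
                        n_particles=500, q=1e-3, r=5e-3, seed=0):
    """currents in A (discharge positive), voltages in V, dt in s,
    capacity in A*s; ocv maps SOC -> open-circuit voltage."""
    rng = np.random.default_rng(seed)
    soc = rng.uniform(0.0, 1.0, n_particles)   # initial particle cloud
    estimates = []
    for i_k, v_k in zip(currents, voltages):
        # Predict: coulomb counting plus process noise q.
        soc = np.clip(soc - i_k * dt / capacity
                      + rng.normal(0.0, q, n_particles), 0.0, 1.0)
        # Update: Gaussian likelihood of the voltage measurement (noise r).
        w = np.exp(-0.5 * ((v_k - ocv(soc)) / r) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * soc))
        # Multinomial resampling to avoid weight degeneracy.
        soc = soc[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)

ocv = lambda s: 3.2 + 0.1 * s  # flat, LiFePO4-like OCV curve (assumed)
true_soc = 0.8
est = particle_filter_soc(currents=[0.0] * 50,
                          voltages=[ocv(true_soc)] * 50,
                          dt=1.0, capacity=3600.0, ocv=ocv)
```

Because the particles represent an arbitrary (possibly non-Gaussian, hysteretic) posterior, this is the mechanism by which SMC sidesteps the Gaussian assumptions of Kalman-type filters.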
Contributions to the simulation of turbulence
NASA Technical Reports Server (NTRS)
Dutton, J. A.; Kerman, B. R.; Petersen, E. L.
1976-01-01
The simulation modeling of turbulence in the boundary layer is consolidated in terms of boundary-layer similarity principles and empirical results. The modeling is extended to some aspects of the nonlinear and non-Gaussian structure of the turbulence. Properties of the discrete gust form structure of the modeled turbulence are identified.
Spatial Distribution of the Threshold Beam Spots of Laser Weapons Simulators
1993-09-08
This paper is based on the transmission theory of elliptical Gaussian beam fluxes, deriving transmission equations for the threshold beam spots of laser weapon simulators in order to revise and expand the expressions for the threshold beam spots, their maximum range, and the extinction
Gaussian Filtering with Tapered Oil-Filled Photonic Bandgap Fibers
NASA Astrophysics Data System (ADS)
Brunetti, A. C.; Scolari, L.; Weirich, J.; Eskildsen, L.; Bellanca, G.; Bassi, P.; Bjarklev, A.
2008-10-01
A tunable Gaussian filter based on a tapered oil-filled photonic crystal fiber is demonstrated. The filter is centered at λ = 1364 nm with a bandwidth (FWHM) of 237 nm. Tunability is achieved by changing the temperature of the filter. A 210 nm shift of the central wavelength has been observed on increasing the temperature from 25 °C to 100 °C. The measurements are compared to a simulated spectrum obtained by means of a vectorial Beam Propagation Method model.
Linear Scaling Density Functional Calculations with Gaussian Orbitals
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.
1999-01-01
Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.
On estimating the phase of a periodic waveform in additive Gaussian noise, part 3
NASA Technical Reports Server (NTRS)
Rauch, L. L.
1991-01-01
Motivated by advances in signal processing technology that support more complex algorithms, researchers have taken a new look at the problem of estimating the phase and other parameters of a nearly periodic waveform in additive Gaussian noise, based on observation during a given time interval. Parts 1 and 2 are very briefly reviewed. In part 3, the actual performances of some of the highly nonlinear estimation algorithms of parts 1 and 2 are evaluated by numerical simulation using Monte Carlo techniques.
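The kind of Monte Carlo evaluation described in part 3 can be sketched for the simplest case: maximum-likelihood phase estimation of a known-frequency, unit-amplitude sinusoid in additive white Gaussian noise (all parameter values here are illustrative assumptions):

```python
import numpy as np

# ML phase estimate for x[t] = sin(omega*t + phi) + noise with known omega:
# phi_hat = atan2(sum(x*cos(omega*t)), sum(x*sin(omega*t))).
rng = np.random.default_rng(3)
n, cycles, phi_true, sigma = 1000, 10, 0.7, 0.5
t = np.arange(n)
omega = 2.0 * np.pi * cycles / n     # integer cycles in the window
errors = []
for _ in range(2000):                # Monte Carlo noise realizations
    x = np.sin(omega * t + phi_true) + rng.normal(0.0, sigma, n)
    phi_hat = np.arctan2(np.sum(x * np.cos(omega * t)),
                         np.sum(x * np.sin(omega * t)))
    errors.append(phi_hat - phi_true)
rmse = np.sqrt(np.mean(np.square(errors)))
# Cramer-Rao bound on the phase std for unit amplitude: sigma*sqrt(2/n).
crb = sigma * np.sqrt(2.0 / n)
```

At this signal-to-noise ratio the estimator is essentially efficient, so the measured RMS error sits close to the Cramer-Rao bound; the simulations in the paper probe the highly nonlinear regime where that is no longer true.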
Strong and uniform convergence in the teleportation simulation of bosonic Gaussian channels
NASA Astrophysics Data System (ADS)
Wilde, Mark M.
2018-06-01
In the literature on the continuous-variable bosonic teleportation protocol due to Braunstein and Kimble [Phys. Rev. Lett. 80, 869 (1998), 10.1103/PhysRevLett.80.869], it is often loosely stated that this protocol converges to a perfect teleportation of an input state in the limit of ideal squeezing and ideal detection, but the exact form of this convergence is typically not clarified. In this paper, I explicitly clarify that the convergence is in the strong sense, and not the uniform sense, and furthermore that the convergence occurs for any input state to the protocol, including the infinite-energy Basel states defined and discussed here. I also prove, in contrast to the above result, that the teleportation simulations of pure-loss, thermal, pure-amplifier, amplifier, and additive-noise channels converge both strongly and uniformly to the original channels, in the limit of ideal squeezing and detection for the simulations. For these channels, I give explicit uniform bounds on the accuracy of their teleportation simulations. I then extend these uniform convergence results to particular multimode bosonic Gaussian channels. These convergence statements have important implications for mathematical proofs that make use of the teleportation simulation of bosonic Gaussian channels, some of which have to do with bounding their nonasymptotic secret-key-agreement capacities. As a by-product of the discussion given here, I confirm the correctness of the proof of such bounds from my joint work with Berta and Tomamichel from [Wilde, Tomamichel, and Berta, IEEE Trans. Inf. Theory 63, 1792 (2017), 10.1109/TIT.2017.2648825]. Furthermore, I show that it is not necessary to invoke the energy-constrained diamond distance in order to confirm the correctness of this proof.
A Gaussian model for simulated geomagnetic field reversals
NASA Astrophysics Data System (ADS)
Wicht, Johannes; Meduri, Domenico G.
2016-10-01
Field reversals are the most spectacular events in geomagnetic history but remain little understood. Here we explore the dipole behaviour in particularly long numerical dynamo simulations to reveal statistically significant conditions required for reversals and excursions to happen. We find that changes in the axial dipole moment behaviour are crucial, while the equatorial dipole moment plays a negligible role. For small Rayleigh numbers, the axial dipole always remains strong and stable and obeys a clearly Gaussian probability distribution. Only when the Rayleigh number is increased sufficiently can the axial dipole reverse, and its distribution becomes decisively non-Gaussian. Increased likelihoods around zero indicate a pronounced lingering in a new low dipole moment state. Reversals and excursions can only happen when axial dipole fluctuations are large enough to drive the system from the high dipole moment state assumed during stable polarity epochs into the low dipole moment state. Since it is just a matter of chance which polarity is amplified during dipole recovery, reversals and grand excursions, i.e. excursions during which the dipole assumes reverse polarity, are equally likely. While the overall reversal behaviour seems Earth-like, a closer comparison to palaeomagnetic findings suggests that the simulated events last too long and that grand excursions are too rare. For a particularly large Ekman number we find a second, but less Earth-like, type of reversal in which the total field decays and recovers after a certain time.
Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.
Minin, Serge; Kamalabadi, Farzad
2009-12-20
We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to Fisher information matrix) of the least-squares error, chi(2), in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramer-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
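The curvature-matrix inversion described here is straightforward to reproduce numerically. The sketch below is illustrative only, not the authors' code (the function name and parameter values are hypothetical): it builds the Jacobian of a Gaussian line with amplitude `amp`, centre `x0`, width `sigma` and flat background `bg`, then inverts the Fisher matrix J^T J / σ_n² to obtain 1σ Cramér-Rao lower bounds.

```python
import numpy as np

def gaussian_crlb(x, amp, x0, sigma, bg, noise_sigma):
    """Cramer-Rao lower bounds on (amp, x0, sigma, bg) for a Gaussian line
    on a flat background, sampled at points x, with additive white Gaussian
    noise of standard deviation noise_sigma."""
    g = np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2))
    # Jacobian of the model m(x) = amp * g(x) + bg w.r.t. each parameter.
    J = np.column_stack([
        g,                                     # d m / d amp
        amp * g * (x - x0) / sigma ** 2,       # d m / d x0
        amp * g * (x - x0) ** 2 / sigma ** 3,  # d m / d sigma
        np.ones_like(x),                       # d m / d bg
    ])
    fisher = J.T @ J / noise_sigma ** 2        # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(fisher)))  # 1-sigma bound per parameter

x = np.linspace(-10.0, 10.0, 201)
bounds = gaussian_crlb(x, amp=5.0, x0=0.0, sigma=1.5, bg=1.0, noise_sigma=0.5)
bounds_half_noise = gaussian_crlb(x, 5.0, 0.0, 1.5, 1.0, 0.25)
```

Since the Fisher matrix scales as 1/noise_sigma², the bounds scale linearly with the noise level, which is one way to read off minimum achievable errors for a given signal-to-noise ratio.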
Propagation of a Pearcey-Gaussian-vortex beam in free space and Kerr media
NASA Astrophysics Data System (ADS)
Peng, Yulian; Chen, Chidao; Chen, Bo; Peng, Xi; Zhou, Meiling; Zhang, Liping; Li, Dongdong; Deng, Dongmei
2016-12-01
The propagation of a Pearcey-Gaussian-vortex beam (PGVB) has been investigated numerically in free space and Kerr media. In addition, we have performed a numerical experiment for the beam in free space. A PGVB maintains the characteristics of auto-focusing, self-healing and form-invariance which are possessed by a Pearcey beam and a Pearcey-Gaussian beam. Due to the influence of the optical vortex, a bright speck occurs in front of the main lobe. Compared with a Pearcey beam and a Pearcey-Gaussian beam, a PGVB has the most remarkable intensity and phase singularities. It is worth noting that the vortex at the coordinate origin means that a PGVB carries no angular momentum or transverse energy flow in its vicinity. We have investigated and numerically simulated the transverse intensity of a PGVB in Kerr media. We find that the auto-focusing of a PGVB in a Kerr medium becomes stronger with increasing power.
On the efficacy of procedures to normalize Ex-Gaussian distributions
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2015-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed; hence, it is widely acknowledged that the normality assumption is not met for RT data. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter λ = -1 leads to the best results. PMID:25709588
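The λ = -1 recommendation can be checked with a quick simulation, assuming scipy's Box-Cox implementation and illustrative RT-like parameters in milliseconds (these values are not from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Ex-Gaussian sample: convolution of a normal (mu, sigma) component and an
# exponential (tau) component, a common model for reaction times.
mu, sigma, tau, n = 400.0, 40.0, 200.0, 5000
rt = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

# Box-Cox transform with lambda = -1, i.e. y = 1 - 1/x (requires x > 0).
rt_t = stats.boxcox(rt, lmbda=-1)

skew_raw = stats.skew(rt)   # strongly positive for Ex-Gaussian data
skew_t = stats.skew(rt_t)   # much closer to zero after the transform
```

For these parameters the theoretical Ex-Gaussian skewness 2τ³/(σ² + τ²)^(3/2) is about 1.9, and the λ = -1 transform reduces it markedly.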
Charged particle dynamics in the presence of non-Gaussian Lévy electrostatic fluctuations
NASA Astrophysics Data System (ADS)
Moradi, Sara; del-Castillo-Negrete, Diego; Anderson, Johan
2016-09-01
Full orbit dynamics of charged particles in a 3-dimensional helical magnetic field in the presence of α-stable Lévy electrostatic fluctuations and linear friction modeling collisional Coulomb drag is studied via Monte Carlo numerical simulations. The Lévy fluctuations are introduced to model the effect of non-local transport due to fractional diffusion in velocity space resulting from intermittent electrostatic turbulence. The probability distribution functions of energy, particle displacements, and Larmor radii are computed and showed to exhibit a transition from exponential decay, in the case of Gaussian fluctuations, to power law decay in the case of Lévy fluctuations. The absolute value of the power law decay exponents is linearly proportional to the Lévy index α. The observed anomalous non-Gaussian statistics of the particles' Larmor radii (resulting from outlier transport events) indicate that, when electrostatic turbulent fluctuations exhibit non-Gaussian Lévy statistics, gyro-averaging and guiding centre approximations might face limitations and full particle orbit effects should be taken into account.
A neural-network based estimator to search for primordial non-Gaussianity in Planck CMB maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, C.P.; Bernui, A.; Ferreira, I.S.
2015-09-01
We present an upgraded combined estimator, based on Minkowski Functionals and Neural Networks, with excellent performance in detecting primordial non-Gaussianity in simulated maps that also contain a weighted mixture of Galactic contaminations, besides realistic pixel noise from Planck cosmic microwave background radiation data. We rigorously test the efficiency of our estimator considering several plausible scenarios for residual non-Gaussianities in the foreground-cleaned Planck maps, with the aim of optimizing the training procedure of the Neural Network to discriminate between contaminations with primordial and secondary non-Gaussian signatures. We look for constraints of primordial local non-Gaussianity at large angular scales in the foreground-cleaned Planck maps. For the SMICA map we found f_NL = 33 ± 23, at 1σ confidence level, in excellent agreement with the WMAP-9yr and Planck results. In addition, for the other three Planck maps we obtain similar constraints with values in the interval f_NL ∈ [33, 41], concomitant with the fact that these maps manifest distinct features in reported analyses, such as having different pixel-noise intensities.
Sequential Dependencies in Driving
ERIC Educational Resources Information Center
Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.
2012-01-01
The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…
J-adaptive estimation with estimated noise statistics
NASA Technical Reports Server (NTRS)
Jazwinski, A. H.; Hipkins, C.
1973-01-01
The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.
Parallelization and automatic data distribution for nuclear reactor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, L.M.
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high-performance workstations. Even the fastest sequential machine cannot run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed-of-light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel, with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
2013-09-01
Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.
NASA Technical Reports Server (NTRS)
Kihm, Frederic; Rizzi, Stephen A.; Ferguson, Neil S.; Halfpenny, Andrew
2013-01-01
High cycle fatigue of metals typically occurs through long term exposure to time varying loads which, although modest in amplitude, give rise to microscopic cracks that can ultimately propagate to failure. The fatigue life of a component is primarily dependent on the stress amplitude response at critical failure locations. For most vibration tests, it is common to assume a Gaussian distribution of both the input acceleration and stress response. In real life, however, it is common to experience non-Gaussian acceleration input, and this can cause the response to be non-Gaussian. Examples of non-Gaussian loads include road irregularities such as potholes in the automotive world or turbulent boundary layer pressure fluctuations for the aerospace sector or more generally wind, wave or high amplitude acoustic loads. The paper first reviews some of the methods used to generate non-Gaussian excitation signals with a given power spectral density and kurtosis. The kurtosis of the response is examined once the signal is passed through a linear time invariant system. Finally an algorithm is presented that determines the output kurtosis based upon the input kurtosis, the input power spectral density and the frequency response function of the system. The algorithm is validated using numerical simulations. Direct applications of these results include improved fatigue life estimations and a method to accelerate shaker tests by generating high kurtosis, non-Gaussian drive signals.
Wang, Minghao; Yuan, Xiuhua; Ma, Donglin
2017-04-01
Nonuniformly correlated partially coherent beams (PCBs) have extraordinary propagation properties, making it possible to further improve the performance of free-space optical communications. In this paper, a series of PCBs with varying degrees of coherence in the radial direction, academically called radial partially coherent beams (RPCBs), are considered. RPCBs with arbitrary coherence distributions can be created by adjusting the amplitude profile of a spatial modulation function imposed on a uniformly correlated phase screen. Since RPCBs cannot be well characterized by the coherence length, a modulation depth factor is introduced as an indicator of the overall distribution of coherence. By wave optics simulation, free-space and atmospheric propagation properties of RPCBs with (inverse) Gaussian and super-Gaussian coherence distributions are examined in comparison with conventional Gaussian Schell-model beams. Furthermore, the impacts of varying central coherent areas are studied. Simulation results reveal that under comparable overall coherence, beams with a highly coherent core and a less coherent margin exhibit a smaller beam spread and greater on-axis intensity, which is mainly due to the self-focusing phenomenon right after the beam exits the transmitter. Particularly, those RPCBs with super-Gaussian coherence distributions will repeatedly focus during propagation, resulting in even greater intensities. Additionally, RPCBs also have a considerable ability to reduce scintillation. And it is demonstrated that those properties have made RPCBs very effective in improving the mean signal-to-noise ratio of small optical receivers, especially in relatively short, weakly fluctuating links.
Kumar, Anil; Adhikary, Amitava; Shamoun, Lance; Sevilla, Michael D
2016-03-10
The solvated electron (e(aq)⁻) is a primary intermediate after an ionization event that produces reductive DNA damage. Accurate standard redox potentials (E(o)) of nucleobases and of e(aq)⁻ determine the extent of reaction of e(aq)⁻ with nucleobases. In this work, E(o) values of e(aq)⁻ and of nucleobases have been calculated employing the accurate ab initio Gaussian 4 theory including the polarizable continuum model (PCM). The Gaussian 4-calculated E(o) of e(aq)⁻ (-2.86 V) is in excellent agreement with the experimental one (-2.87 V). The Gaussian 4-calculated E(o) of nucleobases in dimethylformamide (DMF) lie in the range (-2.36 V to -2.86 V); they are in reasonable agreement with the experimental E(o) in DMF and have a mean unsigned error (MUE) = 0.22 V. However, inclusion of specific water molecules reduces this error significantly (MUE = 0.07). With the use of a model of e(aq)⁻ nucleobase complex with six water molecules, the reaction of e(aq)⁻ with the adjacent nucleobase is investigated using approximate ab initio molecular dynamics (MD) simulations including PCM. Our MD simulations show that e(aq)⁻ transfers to uracil, thymine, cytosine, and adenine, within 10 to 120 fs and e(aq)⁻ reacts with guanine only when a water molecule forms a hydrogen bond to O6 of guanine which stabilizes the anion radical.
Raman-Scattering Line Profiles of the Symbiotic Star AG Peg
NASA Astrophysics Data System (ADS)
Lee, Seong-Jae; Hyung, Siek
2017-06-01
The high dispersion Hα and Hβ line profiles of the symbiotic star AG Peg consist of top double-Gaussian and bottom components. We investigated the formation of the broad wings with a Raman scattering mechanism. Adopting the same physical parameters from the photo-ionization study of Kim and Hyung (2008) for the white dwarf and the ionized gas shell, Monte Carlo simulations were carried out for a rotating accretion disk geometry of non-symmetrical latitude angles from -7° < θ < +7° to -16° < θ < +16°. The smaller latitude angle of the disk corresponds to the approaching side of the disk, responsible for the weak blue Gaussian profile, while the wider latitude angle corresponds to the other side of the disk, responsible for the strong red Gaussian profile. We confirmed that the shell has the high gas density ~10^9.85 cm^-3 in the ionized zone of AG Peg derived in the previous photo-ionization model study. The simulation with various H I shell column densities (characterized by a thickness ΔD × gas number density n_H) shows that the H I gas shell with a column density N_HI ≈ 3-5 × 10^19 cm^-2 fits the observed line profiles well. The estimated rotation speed of the accretion disk shell is in the range of 44-55 km s^-1. We conclude that the kinematically incoherent structure involving the outflowing gas from the giant star caused an asymmetry of the disk and the double Gaussian profiles found in AG Peg.
EVOLUTION OF THE MAGNETIC FIELD LINE DIFFUSION COEFFICIENT AND NON-GAUSSIAN STATISTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snodin, A. P.; Ruffolo, D.; Matthaeus, W. H.
The magnetic field line random walk (FLRW) plays an important role in the transport of energy and particles in turbulent plasmas. For magnetic fluctuations that are transverse or almost transverse to a large-scale mean magnetic field, theories describing the FLRW usually predict asymptotic diffusion of magnetic field lines perpendicular to the mean field. Such theories often depend on the assumption that one can relate the Lagrangian and Eulerian statistics of the magnetic field via Corrsin’s hypothesis, and additionally take the distribution of magnetic field line displacements to be Gaussian. Here we take an ordinary differential equation (ODE) model with these underlying assumptions and test how well it describes the evolution of the magnetic field line diffusion coefficient in 2D+slab magnetic turbulence, by comparisons to computer simulations that do not involve such assumptions. In addition, we directly test the accuracy of the Corrsin approximation to the Lagrangian correlation. Over much of the studied parameter space we find that the ODE model is in fairly good agreement with computer simulations, in terms of both the evolution and asymptotic values of the diffusion coefficient. When there is poor agreement, we show that this can be largely attributed to the failure of Corrsin’s hypothesis rather than the assumption of Gaussian statistics of field line displacements. The degree of non-Gaussianity, which we measure in terms of the kurtosis, appears to be an indicator of how well Corrsin’s approximation works.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
High data rate coding for the space station telemetry links.
NASA Technical Reports Server (NTRS)
Lumb, D. R.; Viterbi, A. J.
1971-01-01
Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.
Concurrent processing simulation of the space station
NASA Technical Reports Server (NTRS)
Gluck, R.; Hale, A. L.; Sunkel, John W.
1989-01-01
The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, which required significant advancements of the state of the art to accomplish. These were: (1) the development of an explicit mathematical model via symbol manipulation of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.
Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series
NASA Astrophysics Data System (ADS)
Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.
2009-04-01
This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, to make the changepoint detection methods applicable to non-Gaussian data series, such as daily precipitation amounts. The detection power aspects of transformed methods in a common trend two-phase regression setting are assessed by Monte Carlo simulations for data of a log-normal or Gamma distribution. The results show that the transformed methods have increased the power of detection, in comparison with the corresponding original (untransformed) methods. The transformed data much better approximate to a Gaussian distribution. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.
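The combination of a Box-Cox transformation with a changepoint test can be sketched as follows, assuming scipy and a simple two-sample t scan as a stand-in for the paper's common-trend two-phase regression tests (all parameters are illustrative):

```python
import numpy as np
from scipy import stats

def detect_shift(series):
    """Return the index k maximizing the two-sample t statistic between
    series[:k] and series[k:] (a simple mean-shift changepoint scan)."""
    best_k, best_t = None, 0.0
    for k in range(10, len(series) - 10):   # keep both segments non-trivial
        t, _ = stats.ttest_ind(series[:k], series[k:], equal_var=True)
        if abs(t) > best_t:
            best_k, best_t = k, abs(t)
    return best_k, best_t

rng = np.random.default_rng(0)
# Log-normal, precipitation-like series with a shift in the log-mean at index 150.
raw = np.concatenate([rng.lognormal(0.0, 0.6, 150), rng.lognormal(0.5, 0.6, 150)])
transformed, lam = stats.boxcox(raw)        # MLE estimate of the Box-Cox lambda

k_raw, t_raw = detect_shift(raw)
k_tr, t_tr = detect_shift(transformed)      # scan on the near-Gaussian series
```

The transformed series is much closer to Gaussian than the raw one, so the Gaussian-based test statistic is better calibrated on it, which is the point of the paper's procedure.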
Bayesian spatial transformation models with applications in neuroimaging data
Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.
2013-01-01
The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143
A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations
Qin, Fangjun; Jiang, Sai; Zha, Feng
2018-01-01
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
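The contrast drawn here between batch and sequential measurement processing can be illustrated in the linear-Gaussian case (a sketch, not the proposed SMEKF): with a diagonal measurement covariance R, processing the observation vector one scalar component at a time reproduces the batch Kalman update exactly.

```python
import numpy as np

def batch_update(x, P, H, R, z):
    """Standard batch Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

def sequential_update(x, P, H, R, z):
    """Process each scalar measurement in turn; exact when R is diagonal."""
    for i in range(len(z)):
        h = H[i:i + 1]                        # 1 x n row of the measurement matrix
        s = float(h @ P @ h.T) + R[i, i]      # scalar innovation variance
        K = (P @ h.T) / s                     # n x 1 gain
        x = x + (K * (z[i] - float(h @ x))).ravel()
        P = P - K @ h @ P
    return x, P

rng = np.random.default_rng(3)
n, m = 4, 3
A = rng.normal(size=(n, n))
P0 = A @ A.T + np.eye(n)                      # symmetric positive-definite prior
H = rng.normal(size=(m, n))
R = np.diag(rng.uniform(0.5, 2.0, m))         # diagonal measurement covariance
x0, z = rng.normal(size=n), rng.normal(size=m)

xb, Pb = batch_update(x0.copy(), P0.copy(), H, R, z)
xs, Ps = sequential_update(x0.copy(), P0.copy(), H, R, z)
```

In the multiplicative EKF setting the sequential pass additionally re-linearizes the measurement model after each component, which is where the proposed SMEKF departs from this exact linear equivalence.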
Generic evolution of mixing in heterogeneous media
NASA Astrophysics Data System (ADS)
De Dreuzy, J.; Carrera, J.; Dentz, M.; Le Borgne, T.
2011-12-01
Mixing in heterogeneous media results from the competition between flow fluctuations and local-scale diffusion. Flow fluctuations quickly create concentration contrasts and thus heterogeneity of the concentration field, which is slowly homogenized by local-scale diffusion. Mixing first deviates from Gaussian mixing, which represents the potential mixing induced by spreading, before approaching it. This deviation fundamentally expresses the evolution of the interaction between spreading and local-scale diffusion. We characterize it by the ratio γ of the non-Gaussian to the Gaussian mixing states. We define the Gaussian mixing state as the integrated squared concentration of the Gaussian plume that has the same longitudinal dispersion as the real plume. The non-Gaussian mixing state is the difference between the overall mixing state, defined as the integrated squared concentration, and the Gaussian mixing state. The main advantage of this definition is that it uses the full knowledge previously acquired on dispersion to characterize mixing even when the solute concentration field is highly non-Gaussian. Using high-precision numerical simulations, we show that γ quickly increases, peaks, and slowly decreases. γ can be derived from two scales characterizing spreading and local mixing, at least for large flux-weighted solute injections into classically log-normal Gaussian correlated permeability fields. The spreading scale is directly related to the longitudinal dispersion. The local mixing scale is the largest scale over which solute concentrations can be considered locally uniform. More generally, beyond the characteristics of its maximum, γ turns out to have a highly generic scaling form. Its fast increase and slow decrease depend neither on the heterogeneity level, nor on the ratio of diffusion to advection, nor on the injection conditions. They may not even depend on the particularities of the flow field, as the same generic features also prevail for Taylor dispersion. This generic characterization of mixing can offer new ways to set up transport equations that honor not only advection and spreading (dispersion), but also mixing.
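The γ diagnostic defined in this abstract is simple to evaluate for model plumes. A sketch under stated assumptions (unit-mass 1D plumes on a uniform grid; function and variable names are hypothetical): a Gaussian plume gives γ ≈ 0, while a poorly mixed bimodal plume gives γ > 0.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]

def gamma_mixing(c):
    """Ratio of the non-Gaussian to the Gaussian mixing state for a 1D plume."""
    c = c / (c.sum() * dx)                        # normalize to unit mass
    mean = (x * c).sum() * dx
    var = ((x - mean) ** 2 * c).sum() * dx        # longitudinal dispersion
    M = (c ** 2).sum() * dx                       # overall mixing state, int c^2 dx
    M_gauss = 1.0 / (2.0 * np.sqrt(np.pi * var))  # int c^2 dx of the unit-mass
                                                  # Gaussian with the same variance
    return (M - M_gauss) / M_gauss

c_gaussian = np.exp(-x ** 2 / 8.0)                              # well-mixed plume
c_bimodal = np.exp(-(x - 5) ** 2 / 2) + np.exp(-(x + 5) ** 2 / 2)  # poorly mixed
```

The bimodal plume has a large dispersion but strong internal concentration contrasts, so its integrated squared concentration exceeds that of the equivalent Gaussian, and γ comes out well above zero.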
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
Separating Gravitational Wave Signals from Instrument Artifacts
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Cornish, Neil J.
2010-01-01
Central to the gravitational wave detection problem is the challenge of separating features in the data produced by astrophysical sources from features produced by the detector. Matched filtering provides an optimal solution for Gaussian noise, but in practice, transient noise excursions or "glitches" complicate the analysis. Detector diagnostics and coincidence tests can be used to veto many glitches which may otherwise be misinterpreted as gravitational wave signals. The glitches that remain can lead to long tails in the matched filter search statistics and drive up the detection threshold. Here we describe a Bayesian approach that incorporates a more realistic model for the instrument noise allowing for fluctuating noise levels that vary independently across frequency bands, and deterministic "glitch fitting" using wavelets as "glitch templates", the number of which is determined by a trans-dimensional Markov chain Monte Carlo algorithm. We demonstrate the method's effectiveness on simulated data containing low amplitude gravitational wave signals from inspiraling binary black hole systems, and simulated non-stationary and non-Gaussian noise comprised of a Gaussian component with the standard LIGO/Virgo spectrum, and injected glitches of various amplitude, prevalence, and variety. Glitch fitting allows us to detect significantly weaker signals than standard techniques.
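Matched filtering's optimality for Gaussian noise can be illustrated in a few lines (a toy sketch with illustrative parameters, not a LIGO/Virgo pipeline): correlating the data against a unit-norm template yields an SNR time series that peaks at the injected signal. A non-Gaussian glitch would produce a similar high peak, which is exactly why glitches drive up the detection threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 4096, 1.0

# Unit-norm chirp-like template (64 samples, rising frequency).
t = np.arange(64.0)
template = np.sin(2 * np.pi * (0.02 + 0.0015 * t) * t)
template /= np.linalg.norm(template)

# White Gaussian noise with a signal of matched-filter SNR ~ 8 injected.
data = rng.normal(0.0, sigma, n)
inj = 1500
data[inj:inj + 64] += 8.0 * template

# Matched-filter SNR time series: sliding correlation with the template.
snr = np.correlate(data, template, mode="valid") / sigma
peak = int(np.argmax(snr))
```

The expected maximum of the noise-only SNR series over ~4000 samples is about 4, so an SNR-8 injection stands out clearly; heavier-than-Gaussian tails would raise that noise-only maximum and hence the threshold.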
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
Quantifying the non-Gaussianity in the EoR 21-cm signal through bispectrum
NASA Astrophysics Data System (ADS)
Majumdar, Suman; Pritchard, Jonathan R.; Mondal, Rajesh; Watkinson, Catherine A.; Bharadwaj, Somnath; Mellema, Garrelt
2018-05-01
The epoch of reionization (EoR) 21-cm signal is expected to be highly non-Gaussian in nature, and this non-Gaussianity is also expected to evolve with the progressing state of reionization. Therefore the signal will be correlated between different Fourier modes (k). The power spectrum will not be able to capture this correlation in the signal. We use a higher order estimator - the bispectrum - to quantify this evolving non-Gaussianity. We study the bispectrum using an ensemble of simulated 21-cm signals and a large variety of k triangles. We observe two competing sources driving the non-Gaussianity in the signal: fluctuations in the neutral fraction (x_{H I}) field and fluctuations in the matter density field. We find that the non-Gaussian contribution from these two sources varies, depending on the stage of reionization and on which k modes are being studied. We show that the sign of the bispectrum works as a unique marker to identify which of these two components is driving the non-Gaussianity. We propose that the sign change in the bispectrum, when plotted as a function of triangle configuration cos θ at a certain stage of the EoR, can be used as a confirmative test for the detection of the 21-cm signal. We also propose a new consolidated way to visualize the signal evolution (with evolving mean neutral fraction x̄_{H I} or redshift), through the trajectories of the signal in power spectrum-equilateral bispectrum, i.e. P(k)-B(k, k, k), space.
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.
2018-03-01
Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Sequential use of simulation and optimization in analysis and planning
Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones
2000-01-01
Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...
NASA Astrophysics Data System (ADS)
Salehin, Z.; Woobaidullah, A. S. M.; Snigdha, S. S.
2015-12-01
The Bengal Basin, with its prolific gas-rich province, provides needed energy to Bangladesh, and the present energy situation demands more hydrocarbon exploration. Only 'Semutang' has been discovered among the high-amplitude structures, while the rest lie in the gentle to moderate structures of the western part of the Chittagong-Tripura Fold Belt. The field, however, has some major thrust faults which have strongly breached the reservoir zone. The major objectives of this research are the interpretation of gas horizons and faults, followed by velocity modelling and structural and property modelling to obtain reservoir properties; the faults and reservoir heterogeneities need to be properly identified. 3D modeling is widely used to reveal the subsurface structure in faulted zones, where planning and development drilling is a major challenge. Thirteen 2D seismic lines and six well logs have been used to identify six gas-bearing horizons and a network of faults, and to map the structure at reservoir level. Variance attributes were used to identify faults. A velocity model was built for domain conversion. Synthetics were prepared from the two wells where sonic and density logs are available. The well-to-seismic tie at the reservoir zone shows a good match with the Direct Hydrocarbon Indicator on the seismic section. Vsh, porosity, water saturation and permeability have been calculated, and various cross-plots among the porosity logs are shown. Structural modeling was used to define zones and layering in accordance with the minimum sand thickness. The fault model shows the possible fault network responsible for several dry wells. Facies models were constrained with the Sequential Indicator Simulation method to show the facies distribution along the depth surfaces. Petrophysical models were prepared with Sequential Gaussian Simulation to estimate petrophysical parameters away from the existing wells in other parts of the field and to observe heterogeneities in the reservoir. Average porosity maps for each gas zone were constructed.
The outcomes of the research are an improved subsurface image from the seismic data (the model), a porosity prediction for the reservoir, a reservoir quality map and a fault map. The result is a complex geologic model which may contribute to the economic potential of the field. For better understanding, a 3D seismic survey and uncertainty and attribute analyses are necessary.
Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W.; Tremlett, Helen
2017-01-01
In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models (MSCMs) are frequently used to deal with such confounding. To avoid some of the problems of fitting MSCM, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as MSCM in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995 – 2008). PMID:27659168
Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen
2018-06-01
In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox models, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as the marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).
Gaudrain, Etienne; Carlyon, Robert P
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.
Gaudrain, Etienne; Carlyon, Robert P.
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish target and masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed. PMID:23297922
An LES study of vertical-axis wind turbine wakes aerodynamics
NASA Astrophysics Data System (ADS)
Abkar, Mahdi; Dabiri, John O.
2016-11-01
In this study, large-eddy simulation (LES) combined with a turbine model is used to investigate the structure of the wake behind a vertical-axis wind turbine (VAWT). In the simulations, a recently developed minimum dissipation model is used to parameterize the subgrid-scale stress tensor, while the turbine-induced forces are modeled with an actuator-line technique. The LES framework is first tested in the simulation of the wake behind a model straight-bladed VAWT placed in the water channel, and then used to study the wake structure downwind of a full-scale VAWT sited in the atmospheric boundary layer. In particular, the self-similarity of the wake is examined, and it is found that the wake velocity deficit is well characterized by a two-dimensional elliptical Gaussian distribution. By assuming a self-similar Gaussian distribution of the velocity deficit, and applying mass and momentum conservation, an analytical model is developed and tested to predict the maximum velocity deficit downwind of the turbine.
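The momentum-conservation closure described here can be sketched as follows: assume the deficit is a self-similar elliptical Gaussian, insert it into the streamwise momentum-deficit integral, and solve for the centreline amplitude. This is our own Bastankhah & Porté-Agel-style sketch adapted to a rotor of assumed width d and height h, not the authors' exact model:

```python
import numpy as np

def max_deficit(ct, d, h, sigma_y, sigma_z):
    """Maximum normalized velocity deficit C from momentum conservation:
    integrating C*exp(...)*(1 - C*exp(...)) over the wake cross-section
    and equating the result to 0.5*Ct*(d*h) gives a quadratic in C."""
    return 1.0 - np.sqrt(1.0 - ct * d * h / (2.0 * np.pi * sigma_y * sigma_z))

def deficit(y, z, c, sigma_y, sigma_z):
    """Self-similar elliptical-Gaussian deficit profile."""
    return c * np.exp(-y**2 / (2 * sigma_y**2) - z**2 / (2 * sigma_z**2))

c = max_deficit(ct=0.8, d=1.0, h=1.0, sigma_y=1.0, sigma_z=1.0)
```

The quadratic comes from ∫∫ ΔU(U∞ − ΔU) dy dz = ½ CT U∞² d h; in practice the widths σy(x) and σz(x) grow downstream and would be fitted from the LES wake data.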
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
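One way to realize the output-dimension reduction described here is to project the high-dimensional output fields onto a few principal components and fit a Gaussian process to each retained coefficient. A NumPy-only sketch with fixed kernel hyperparameters (our illustration; the paper's actual reduction method may differ):

```python
import numpy as np

def rbf(X1, X2, ls, var):
    """Squared-exponential kernel between two sets of input points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

class ReducedGPEmulator:
    """PCA-reduce high-dimensional output fields, then fit one GP per
    retained principal component (hyperparameters fixed, not optimized,
    for brevity)."""
    def __init__(self, n_components=2, ls=1.0, var=1.0, noise=1e-6):
        self.q, self.ls, self.var, self.noise = n_components, ls, var, noise
    def fit(self, X, Y):
        self.X = X
        self.mean_ = Y.mean(0)
        _, _, Vt = np.linalg.svd(Y - self.mean_, full_matrices=False)
        self.basis = Vt[: self.q]                 # principal output directions
        Z = (Y - self.mean_) @ self.basis.T       # latent coordinates per run
        K = rbf(X, X, self.ls, self.var) + self.noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, Z)        # one solve, q latent targets
        return self
    def predict(self, Xs):
        Ks = rbf(Xs, self.X, self.ls, self.var)
        return self.mean_ + (Ks @ self.alpha) @ self.basis

# Toy "simulator": each input x produces a 200-point spatial field.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(40, 1))
grid = np.linspace(0, 1, 200)
Y = np.sin(X) * grid[None, :] + np.cos(X) * grid[None, :] ** 2
em = ReducedGPEmulator(n_components=2, ls=0.7).fit(X, Y)
pred = em.predict(np.array([[0.5]]))[0]
truth = np.sin(0.5) * grid + np.cos(0.5) * grid**2
```

Because the toy fields are exactly rank-2, two components capture them; real simulator outputs would need the number of components chosen from the singular-value spectrum.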
Quantum optical emulation of molecular vibronic spectroscopy using a trapped-ion device.
Shen, Yangchao; Lu, Yao; Zhang, Kuan; Zhang, Junhua; Zhang, Shuaining; Huh, Joonsuk; Kim, Kihwan
2018-01-28
Molecules are one of the most demanding quantum systems to be simulated by quantum computers due to their complexity and the emergent role of their quantum nature. The recent theoretical proposal of Huh et al. (Nature Photon., 9, 615 (2015)) showed that a multi-photon network with a Gaussian input state can simulate a molecular spectroscopic process. Here, we present the first quantum device that generates a molecular spectroscopic signal with the phonons in a trapped ion system, using SO2 as an example. In order to perform reliable Gaussian sampling, we develop the essential experimental technology with phonons, which includes the phase-coherent manipulation of displacement, squeezing, and rotation operations with multiple modes in a single realization. The required quantum optical operations are implemented through Raman laser beams. The molecular spectroscopic signal is reconstructed from the collective projection measurements for the two phonon modes. Our experimental demonstration will pave the way to large-scale molecular quantum simulations, which are classically intractable, but would be easily verifiable by real molecular spectroscopy.
A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Zhang, Dongxiao; Lin, Guang
A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm shows a great improvement in computational efficiency compared with previously studied approaches for the same problem.
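The core of such a sampler, drawing from a Gaussian-mixture proposal and weighting by the posterior, can be sketched in one dimension. This is a toy version of the idea only; the paper couples it with a GP surrogate and adaptive refinement:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmm_importance_sample(log_post, weights, mus, vars_, n, rng):
    """Draw from a Gaussian-mixture proposal and return normalized
    importance weights -- a GMM proposal can cover several posterior
    modes where a single Gaussian would miss all but one."""
    comp = rng.choice(len(weights), size=n, p=weights)
    x = rng.normal(np.take(mus, comp), np.sqrt(np.take(vars_, comp)))
    q = sum(w * gaussian_pdf(x, m, v) for w, m, v in zip(weights, mus, vars_))
    lw = log_post(x) - np.log(q)
    w = np.exp(lw - lw.max())          # stabilized importance weights
    return x, w / w.sum()

# Bimodal toy "posterior": equal mixture of N(-3, 0.5) and N(3, 0.5).
post = lambda x: np.log(0.5 * gaussian_pdf(x, -3, 0.5) + 0.5 * gaussian_pdf(x, 3, 0.5))
rng = np.random.default_rng(5)
x, w = gmm_importance_sample(post, [0.5, 0.5], [-2.5, 2.5], [1.0, 1.0], 20_000, rng)
post_mean = np.sum(w * x)
post_var = np.sum(w * x**2) - post_mean**2
```

Because the proposal places mass near both modes, the importance weights stay well-behaved and the bimodal moments (mean 0, variance 9.5 here) are recovered.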
Digital simulation of hybrid loop operation in RFI backgrounds.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.
1972-01-01
A digital computer model for Monte Carlo simulation of an imperfect second-order hybrid phase-locked loop (PLL) operating in radio-frequency interference (RFI) and Gaussian noise backgrounds has been developed. Characterization of hybrid loop performance in terms of cycle slipping statistics and phase error variance, through computer simulation, indicates that the hybrid loop has performance advantages in RFI backgrounds over the conventional PLL or the Costas loop.
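The kind of Monte Carlo characterization described, phase-error variance and cycle-slip counts versus noise level, can be sketched for an ideal second-order loop. This is our toy Euler-Maruyama model, without the RFI terms or loop imperfections of the paper, and the gains are assumed values:

```python
import numpy as np

def simulate_pll(noise_sigma, n_steps=100_000, dt=1e-4,
                 wn=2 * np.pi * 50, zeta=0.707, seed=0):
    """Integrate the phase-error dynamics of an ideal second-order PLL
    driven by additive white Gaussian noise. A cycle slip is counted
    whenever the error leaves (-pi, pi] and is re-wrapped."""
    rng = np.random.default_rng(seed)
    k1, k2 = 2 * zeta * wn, wn**2                  # loop-filter gains
    dw = np.sqrt(dt) * rng.standard_normal(n_steps)
    phi, integ, slips = 0.0, 0.0, 0
    errs = np.empty(n_steps)
    for i in range(n_steps):
        phi += -(k1 * np.sin(phi) + integ) * dt + noise_sigma * dw[i]
        integ += k2 * np.sin(phi) * dt             # integral branch of the filter
        if phi > np.pi or phi <= -np.pi:
            slips += 1
            phi = (phi + np.pi) % (2 * np.pi) - np.pi
        errs[i] = phi
    return errs.var(), slips

var_lo, slips_lo = simulate_pll(noise_sigma=3.0)   # high loop SNR: no slips
var_hi, slips_hi = simulate_pll(noise_sigma=30.0)  # loop SNR near 0 dB: frequent slips
```

For the linearized loop the stationary phase variance is σ²/(2 k1), so the two noise levels above sit near 0.01 rad² and 1 rad² respectively, which is why only the second produces cycle slips.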
Statistical Properties of Line Centroid Velocity Increments in the rho Ophiuchi Cloud
NASA Technical Reports Server (NTRS)
Lis, D. C.; Keene, Jocelyn; Li, Y.; Phillips, T. G.; Pety, J.
1998-01-01
We present a comparison of histograms of CO (2-1) line centroid velocity increments in the rho Ophiuchi molecular cloud with those computed for spectra synthesized from a three-dimensional, compressible, but non-starforming and non-gravitating hydrodynamic simulation. Histograms of centroid velocity increments in the rho Ophiuchi cloud show clearly non-Gaussian wings, similar to those found in histograms of velocity increments and derivatives in experimental studies of laboratory and atmospheric flows, as well as numerical simulations of turbulence. The magnitude of these wings increases monotonically with decreasing separation, down to the angular resolution of the data. This behavior is consistent with that found in the phase of the simulation which has most of the properties of incompressible turbulence. The time evolution of the magnitude of the non-Gaussian wings in the histograms of centroid velocity increments in the simulation is consistent with the evolution of the vorticity in the flow. However, we cannot exclude the possibility that the wings are associated with the shock interaction regions. Moreover, in an active starforming region like the rho Ophiuchi cloud, the effects of shocks may be more important than in the simulation. However, being able to identify shock interaction regions in the interstellar medium is also important, since numerical simulations show that vorticity is generated in shock interactions.
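The basic diagnostic used here, histograms of centroid-velocity increments and their non-Gaussian wings, is straightforward to compute. A toy sketch (our own synthetic fields; a heavy-tailed map stands in for intermittent turbulence):

```python
import numpy as np

def centroid_map(cube, v_axis):
    """Intensity-weighted line-centroid velocity along the spectral axis
    of a position-position-velocity cube."""
    return (cube * v_axis[:, None, None]).sum(axis=0) / cube.sum(axis=0)

def increment_kurtosis(cmap, lag):
    """Kurtosis of centroid-velocity increments at one separation:
    3 for a Gaussian histogram, larger when non-Gaussian wings appear."""
    dv = cmap[:, lag:] - cmap[:, :-lag]
    dv = (dv - dv.mean()) / dv.std()
    return np.mean(dv**4)

rng = np.random.default_rng(2)
vmap = rng.standard_normal((256, 256))             # toy Gaussian velocity field
v_axis = np.linspace(-8, 8, 65)
cube = np.exp(-0.5 * (v_axis[:, None, None] - vmap[None, :, :]) ** 2)
recovered = centroid_map(cube, v_axis)             # recovers vmap from the cube

k_gauss = increment_kurtosis(vmap, lag=4)
k_wingy = increment_kurtosis(rng.laplace(size=(256, 256)), lag=4)
```

In real data one would repeat the kurtosis (or full histogram) computation over a range of lags to see the wings grow as the separation decreases, as the abstract describes.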
Simulations of 6-DOF Motion with a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2003-01-01
Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or to the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
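The sudden-death scheme is easy to reproduce in a few lines: each group of bearings runs until its first failure, and the population L10 is recovered from the distribution of those group minima (minima of m Weibull lives are Weibull with the same slope and scale η·m^(-1/β)). A minimal sketch with assumed Weibull parameters, not the paper's 30 960-bearing study:

```python
import numpy as np

def sudden_death_mc(rng, n_groups, group_size, eta=100.0, beta=1.5):
    """Monte Carlo sudden-death test: each group runs until its first
    failure and every survivor in that group is then suspended.
    Returns the first-failure lives and the accumulated test time."""
    lives = eta * rng.weibull(beta, size=(n_groups, group_size))
    firsts = lives.min(axis=1)
    test_time = group_size * firsts.sum()   # all group members run to the first failure
    return firsts, test_time

def l10_from_sudden_death(firsts, group_size, beta=1.5):
    """Recover the population L10 from the sample median of the group
    minima, using the Weibull-minimum scale relation."""
    eta_min = np.median(firsts) / np.log(2.0) ** (1.0 / beta)
    eta = eta_min * group_size ** (1.0 / beta)
    return eta * (-np.log(0.9)) ** (1.0 / beta)

rng = np.random.default_rng(4)
firsts, t_sd = sudden_death_mc(rng, n_groups=2000, group_size=4)
l10_est = l10_from_sudden_death(firsts, group_size=4)
l10_true = 100.0 * (-np.log(0.9)) ** (1.0 / 1.5)
```

Comparing `t_sd` against the total time of running every bearing to failure is what quantifies the test-time savings the abstract discusses.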
Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase
Lu, Kelin; Zhou, Rui
2016-01-01
A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications. PMID:27537883
Tan, Sisi; Wu, Zhao; Lei, Lei; Hu, Shoujin; Dong, Jianji; Zhang, Xinliang
2013-03-25
We propose and experimentally demonstrate an all-optical differentiator-based computation system used for solving constant-coefficient first-order linear ordinary differential equations. It consists of an all-optical intensity differentiator and a wavelength converter, both based on a semiconductor optical amplifier (SOA) and an optical filter (OF). The equation is solved for various values of the constant coefficient and for two input waveforms, namely, super-Gaussian and Gaussian signals. An excellent agreement between the numerical simulation and the experimental results is obtained.
Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase.
Lu, Kelin; Zhou, Rui
2016-08-15
A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications.
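The abstract's removal of duplicated information between local tracks can be illustrated with covariance intersection, a standard fusion rule for estimates whose cross-correlation is unknown. This is our own illustration of the general idea, not the authors' first-order-redundancy method:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_omega=51):
    """Fuse two track estimates with unknown cross-correlation: take a
    convex combination of the information matrices and pick the mixing
    weight that minimizes the trace of the fused covariance."""
    best = None
    for w in np.linspace(0.0, 1.0, n_omega):
        info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.solve(P1, x1) + (1 - w) * np.linalg.solve(P2, x2))
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Two tracks that are each confident about a different coordinate.
x1, P1 = np.zeros(2), np.diag([1.0, 4.0])
x2, P2 = np.array([2.0, 2.0]), np.diag([4.0, 1.0])
xf, Pf = covariance_intersection(x1, P1, x2, P2)
```

Unlike a naive information-filter fusion, covariance intersection never claims more confidence than is justified when the tracks may share common process or measurement information.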
Dry minor mergers and size evolution of high-z compact massive early-type galaxies
NASA Astrophysics Data System (ADS)
Oogi, Taira; Habe, Asao
2012-09-01
Recent observations show evidence that high-z (z ~ 2 - 3) early-type galaxies (ETGs) are much more compact than those with comparable mass at z ~ 0. The dry merger scenario is one of the most probable explanations for such size evolution. However, previous studies based on this scenario have not succeeded in explaining the properties of both high-z compact massive ETGs and local ETGs consistently. We investigate the effects of sequential, multiple dry minor (stellar mass ratio M2/M1 < 1/4) mergers on the size evolution of compact massive ETGs. We perform N-body simulations of the sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. We show that sequential minor mergers of compact satellite galaxies are the most efficient at growing the size and decreasing the velocity dispersion of compact massive ETGs. The change in stellar size and density of the merger remnant is consistent with recent observations. Furthermore, we construct the merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Database, and estimate the size growth of the galaxies by dry minor mergers. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained for the sequential minor mergers in our simulations.
Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation
NASA Astrophysics Data System (ADS)
Bueno, Diana R.; Montano, L.
2017-04-01
Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization which is time-consuming and unfeasible for rehabilitation therapy. Non-self-calibrating algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments), and also its full self-calibration (subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model as well as its calibration problem have obliged us to adopt the sum of Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.
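At the heart of the unscented Kalman filtering used here is the unscented transform: propagate 2n+1 sigma points through the nonlinearity and recombine them into a mean and covariance. A minimal sketch with the standard scaled sigma-point weights (our illustration, not the authors' sum-of-Gaussians filter itself):

```python
import numpy as np

def unscented_transform(f, mu, P, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mu, P) through a nonlinear map f using the
    standard 2n+1 scaled sigma points, returning the transformed mean
    and covariance."""
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # columns are sigma offsets
    sigma = np.vstack([mu, mu + S.T, mu - S.T])    # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)          # extra weight on covariance
    Y = np.array([f(s) for s in sigma])
    mu_y = wm @ Y
    d = Y - mu_y
    P_y = (wc[:, None] * d).T @ d
    return mu_y, P_y

# For a linear map the transform is exact: mean A@mu, covariance A@P@A.T.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
mu = np.array([1.0, 2.0])
P = np.array([[1.0, 0.2], [0.2, 0.5]])
mu_y, P_y = unscented_transform(lambda x: A @ x, mu, P)
```

In a full filter this transform replaces the Jacobian linearization of the extended Kalman filter at both the prediction and measurement-update steps.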
ERIC Educational Resources Information Center
Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.
2011-01-01
We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that a better performance was obtained with the STSIS method.
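The sequential indicator idea that STSIS generalizes can be sketched in one spatial dimension: visit the nodes on a random path and draw each category from a local CDF built from already-simulated neighbours. This is a pedagogical stand-in only: exponential-decay weights replace full indicator kriging, and there is no time axis:

```python
import numpy as np

def sequential_indicator_sim(n, n_categories, global_p, rng, n_neigh=4, rho=3.0):
    """Minimal 1-D sequential indicator simulation sketch: at each node on
    a random path, blend the global category proportions with a weighted
    vote of already-simulated neighbours, then draw from the local CDF."""
    values = np.full(n, -1)
    for node in rng.permutation(n):
        done = np.flatnonzero(values >= 0)
        p = np.array(global_p, dtype=float)
        if done.size:
            dist = np.abs(done - node)
            order = np.argsort(dist)[:n_neigh]         # nearest simulated nodes
            w = np.exp(-dist[order] / rho)             # stand-in for kriging weights
            local = np.zeros(n_categories)
            for idx, wi in zip(done[order], w):
                local[values[idx]] += wi
            lam = w.sum() / (w.sum() + 1.0)            # trust in the neighbours
            p = (1 - lam) * p + lam * local / w.sum()
        values[node] = rng.choice(n_categories, p=p / p.sum())
    return values

rng = np.random.default_rng(7)
sim = sequential_indicator_sim(400, 2, [0.3, 0.7], rng)
```

The spatiotemporal extension in the paper replaces the neighbour weighting with a non-separable space-time semivariogram, so that observations from other days also inform each node.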
Manual choice reaction times in the rate-domain
Harris, Christopher M.; Waddington, Jonathan; Biscione, Valerio; Manzi, Sean
2014-01-01
Over the last 150 years, human manual reaction times (RTs) have been recorded countless times. Yet, our understanding of them remains remarkably poor. RTs are highly variable with positively skewed frequency distributions, often modeled as an inverse Gaussian distribution reflecting a stochastic rise to threshold (diffusion process). However, latency distributions of saccades are very close to the reciprocal Normal, suggesting that "rate" (reciprocal RT) may be the more fundamental variable. We explored whether this phenomenon extends to choice manual RTs. We recorded two-alternative choice RTs from 24 subjects, each with 4 blocks of 200 trials with two task difficulties (easy vs. difficult discrimination) and two instruction sets (urgent vs. accurate). We found that rate distributions were, indeed, very close to Normal, shifting to lower rates with increasing difficulty and accuracy, and for some blocks they appeared to become left-truncated, but still close to Normal. Using autoregressive techniques, we found temporal sequential dependencies for lags of at least 3. We identified a transient and steady-state component in each block. Because rates were Normal, we were able to estimate autoregressive weights using the Box-Jenkins technique, and convert to a moving average model using z-transforms to show explicit dependence on stimulus input. We also found a spatial sequential dependence for the previous 3 lags depending on whether the laterality of previous trials was repeated or alternated. This was partially dissociated from temporal dependency as it only occurred in the easy tasks. We conclude that two-alternative choice manual RT distributions are close to reciprocal Normal and not the inverse Gaussian. This is not consistent with stochastic rise to threshold models, and we propose a simple optimality model in which reward is maximized to yield an optimal rate, and hence an optimal time to respond. We discuss how it might be implemented. PMID:24959134
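The rate-domain claim can be checked with a toy simulation: draw a Normal "promptness" (reciprocal RT) and observe that the implied RT distribution is positively skewed while the rate distribution stays symmetric. The parameter values below are illustrative, not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def skewness(x):
    """Sample skewness (third standardized central moment)."""
    d = x - x.mean()
    return np.mean(d ** 3) / np.std(x) ** 3

# Hypothetical promptness (rate = 1/RT) drawn from a Normal distribution
mu_rate, sd_rate = 5.0, 0.8          # ~5 Hz mean rate -> typical RT ~200 ms
rates = rng.normal(mu_rate, sd_rate, 100_000)
rates = rates[rates > 0]             # physical rates are positive
rts = 1.0 / rates                    # reaction times in seconds

rt_skew = skewness(rts)              # positively skewed, like empirical RTs
rate_skew = skewness(1.0 / rts)      # back in the rate domain: near-symmetric
```

This is exactly the reciprocal-Normal ("recinormal") shape the authors contrast with the inverse Gaussian of diffusion models.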
NASA Astrophysics Data System (ADS)
Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo
2015-03-01
This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that both bias error and measurement uncertainty are minimized through Gaussian pre-filtering. This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
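The Gaussian pre-filtering studied above can be sketched with a separable convolution. This is a generic blur applied to a synthetic binary speckle image (pattern density, image size, and sigma are illustrative), not the authors' DIC pipeline; it only demonstrates that blurring attenuates the sharp particle edges that drive interpolation bias.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel on [-radius, radius]."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(image, sigma):
    """Separable Gaussian pre-filter (simple zero padding at the edges)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma) + 1)
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, image)
    out = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)
    return out

rng = np.random.default_rng(1)
# Synthetic binary speckle pattern: hard particle edges, strong high frequencies
pattern = (rng.random((128, 128)) < 0.3).astype(float)
smoothed = blur(pattern, sigma=1.5)

# Blurring attenuates high-frequency content, lowering gradient energy
g0 = np.abs(np.diff(pattern, axis=0)).mean()
g1 = np.abs(np.diff(smoothed, axis=0)).mean()
```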
Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry
Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.
2014-01-01
Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However, in shales, substantial hydrogen content is associated with solid as well as fluid components, and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, it can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
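The idea of fitting Gaussian and exponential decay components jointly can be sketched with a two-term model and nonlinear least squares. The parameterization below is illustrative, not the authors' SGE inversion (which recovers full relaxation-time distributions rather than a single pair of components).

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-component decay: a Gaussian term for solid-like signal plus an
# exponential term for fluid-like signal. Amplitudes and times are made up.
def sge(t, a_g, t2_g, a_e, t2_e):
    return a_g * np.exp(-((t / t2_g) ** 2)) + a_e * np.exp(-t / t2_e)

t = np.linspace(0.0, 10.0, 400)
true_params = (2.0, 0.5, 1.0, 3.0)
rng = np.random.default_rng(2)
data = sge(t, *true_params) + rng.normal(0.0, 0.01, t.size)

# Joint fit of both components; a purely exponential basis (as in the
# inverse Laplace transform) cannot represent the Gaussian term cleanly.
popt, _ = curve_fit(sge, t, data, p0=(1.5, 1.0, 1.0, 2.0))
rmse = np.sqrt(np.mean((sge(t, *popt) - data) ** 2))
```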
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. Particularly, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from the high computational complexity of O(n^3) + O(n^2 m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields, continuous occupancy maps, using marching cubes. By doing that, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
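The computational argument in the abstract is easy to quantify: exact GP training scales cubically in the number of points, so splitting n points into k local clusters cuts the dominant cost by roughly a factor of k². A back-of-the-envelope sketch (assuming an even split and ignoring clustering overhead):

```python
# One global GP costs ~O(n^3); k local GPs of n/k points each cost
# ~k * (n/k)^3 = n^3 / k^2, i.e. a k^2 speedup on the training term.
n, k = 10_000, 100
global_cost = n ** 3
local_cost = k * (n // k) ** 3
speedup = global_cost / local_cost   # = k**2 for an even split
```

The price of this approximation is that correlations between points in different clusters are discarded, which is why the paper compares accuracy against the exact method.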
Reversible wavefront shaping between Gaussian and Airy beams by mimicking gravitational field
NASA Astrophysics Data System (ADS)
Wang, Xiangyang; Liu, Hui; Sheng, Chong; Zhu, Shining
2018-02-01
In this paper, we experimentally demonstrate reversible wavefront shaping by mimicking a gravitational field. A gradient-index micro-structured optical waveguide with a special refractive index profile was constructed, whose effective index satisfies a gravitational field profile. Inside the waveguide, an incident broad Gaussian beam is first transformed into an accelerating beam, and the generated accelerating beam is then gradually changed back into a Gaussian beam. To validate our experiment, we performed full-wave continuum simulations that agree with the experimental results. Furthermore, a theoretical model was established to describe the evolution of the laser beam based on Landau's method, showing that the accelerating beam behaves like the Airy beam in the small range in which the linear potential approaches zero. To our knowledge, such a reversible wavefront shaping technique has not been reported before.
Bayesian spatial transformation models with applications in neuroimaging data.
Miranda, Michelle F; Zhu, Hongtu; Ibrahim, Joseph G
2013-12-01
The aim of this article is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. The proposed STM include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov random field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperform the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. © 2013, The International Biometric Society.
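For reference, the Box-Cox family used in the STM maps positive data through a power transform that reduces to the logarithm at λ = 0; a minimal one-parameter sketch (the STM lets λ vary spatially, which is not modeled here):

```python
import numpy as np

# Box-Cox power transform used to Gaussianize skewed positive data.
def box_cox(y, lam):
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)              # limiting case as lambda -> 0
    return (y ** lam - 1.0) / lam

y = np.array([1.0, 2.0, 4.0])
identity_like = box_cox(y, 1.0)       # lambda = 1: simply y - 1
log_like = box_cox(y, 0.0)            # lambda = 0: natural log
```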
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
Chialvo, Ariel A.; Moucka, Filip; Vlcek, Lukas; ...
2015-03-24
Here we implemented the Gaussian charge-on-spring (GCOS) version of the original self-consistent field implementation of the Gaussian Charge Polarizable (GCP) water model and tested its accuracy in representing the polarization behavior of the original model involving smeared charges and induced dipole moments. Moreover, for that purpose we adapted the recently developed multiple-particle-move (MPM) Monte Carlo methods within the Gibbs and isochoric-isothermal ensembles for the efficient simulation of polarizable fluids. We also assessed the accuracy of the GCOS representation by a direct comparison of the resulting vapor-liquid phase envelope, microstructure, and relevant microscopic descriptors of water polarization along the orthobaric curve against the corresponding quantities from the actual GCP water model.
Das, Anuradha; Das, Suman; Biswas, Ranjit
2015-01-21
Temperature dependent relaxation dynamics, particle motion characteristics, and heterogeneity aspects of deep eutectic solvents (DESs) made of acetamide (CH3CONH2) and urea (NH2CONH2) have been investigated by employing time-resolved fluorescence measurements and all-atom molecular dynamics simulations. Three different compositions (f) for the mixture [fCH3CONH2 + (1 - f)NH2CONH2] have been studied in a temperature range of 328-353 K which is ∼120-145 K above the measured glass transition temperatures (∼207 K) of these DESs but much lower than the individual melting temperature of either of the constituents. Steady state fluorescence emission measurements using probe solutes with sharply different lifetimes do not indicate any dependence on excitation wavelength in these metastable molten systems. Time-resolved fluorescence anisotropy measurements reveal near-hydrodynamic coupling between medium viscosity and rotation of a dissolved dipolar solute. Stokes shift dynamics have been found to be too fast to be detected by the time-resolution (∼70 ps) employed, suggesting extremely rapid medium polarization relaxation. All-atom simulations reveal Gaussian distribution for particle displacements and van Hove correlations, and significant overlap between non-Gaussian (α2) and new non-Gaussian (γ) heterogeneity parameters. In addition, no stretched exponential relaxations have been detected in the simulated wavenumber dependent acetamide dynamic structure factors. All these results are in sharp contrast to earlier observations for ionic deep eutectics with acetamide [Guchhait et al., J. Chem. Phys. 140, 104514 (2014)] and suggest a fundamental difference in interaction and dynamics between ionic and non-ionic deep eutectic solvent systems.
Gaussian Accelerated Molecular Dynamics in NAMD
2016-01-01
Gaussian accelerated molecular dynamics (GaMD) is a recently developed enhanced sampling technique that provides efficient free energy calculations of biomolecules. Like the previous accelerated molecular dynamics (aMD), GaMD allows for “unconstrained” enhanced sampling without the need to set predefined collective variables and so is useful for studying complex biomolecular conformational changes such as protein folding and ligand binding. Furthermore, because the boost potential is constructed using a harmonic function that follows Gaussian distribution in GaMD, cumulant expansion to the second order can be applied to recover the original free energy profiles of proteins and other large biomolecules, which solves a long-standing energetic reweighting problem of the previous aMD method. Taken together, GaMD offers major advantages for both unconstrained enhanced sampling and free energy calculations of large biomolecules. Here, we have implemented GaMD in the NAMD package on top of the existing aMD feature and validated it on three model systems: alanine dipeptide, the chignolin fast-folding protein, and the M3 muscarinic G protein-coupled receptor (GPCR). For alanine dipeptide, while conventional molecular dynamics (cMD) simulations performed for 30 ns are poorly converged, GaMD simulations of the same length yield free energy profiles that agree quantitatively with those of 1000 ns cMD simulation. Further GaMD simulations have captured folding of the chignolin and binding of the acetylcholine (ACh) endogenous agonist to the M3 muscarinic receptor. The reweighted free energy profiles are used to characterize the protein folding and ligand binding pathways quantitatively. GaMD implemented in the scalable NAMD is widely applicable to enhanced sampling and free energy calculations of large biomolecules. PMID:28034310
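The harmonic boost potential at the heart of GaMD has a simple closed form, ΔV = ½k(E − V)² when V < E and 0 otherwise (following Miao et al.); it is this quadratic dependence on a near-Gaussian V that justifies the second-order cumulant expansion for reweighting mentioned above. A minimal sketch with illustrative numbers:

```python
import numpy as np

def gamd_boost(v, e, k):
    """GaMD-style boost potential: a harmonic push applied whenever the
    potential energy V lies below the threshold E; zero otherwise."""
    v = np.asarray(v, dtype=float)
    return np.where(v < e, 0.5 * k * (e - v) ** 2, 0.0)

# Illustrative potential-energy samples (arbitrary units), threshold E = 0,
# force constant k = 0.1; deeper minima receive a larger boost.
potentials = np.array([-10.0, -5.0, -1.0, 2.0])
boost = gamd_boost(potentials, e=0.0, k=0.1)
```

Reweighting then multiplies each frame by exp(βΔV); because ΔV follows a near-Gaussian distribution, its mean and variance suffice to recover the unbiased free energy profile.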
Optimum Laser Beam Characteristics for Achieving Smoother Ablations in Laser Vision Correction.
Verma, Shwetabh; Hesser, Juergen; Arba-Mosquera, Samuel
2017-04-01
Controversial opinions exist regarding the optimum laser beam characteristics for achieving smoother ablations in laser-based vision correction. The purpose of the study was to outline a rigorous simulation model for the shot-by-shot ablation process. The impact of laser beam characteristics such as super-Gaussian order, truncation radius, spot geometry, spot overlap, and lattice geometry was tested on ablation smoothness. Given the super-Gaussian order, the theoretical beam profile was determined following the Lambert-Beer model. The intensity beam profile originating from an excimer laser was measured with a beam profiler camera. For both the measured and theoretical beam profiles, two spot geometries (round and square spots) were considered, and two types of lattices (reticular and triangular) were simulated with varying spot overlaps and ablated material (cornea or polymethylmethacrylate [PMMA]). The roughness in ablation was quantified by the root-mean-square per square root of layer depth. Truncating the beam profile increases the roughness in ablation; Gaussian profiles theoretically result in smoother ablations; round spot geometries produce lower roughness than square geometries; triangular lattices theoretically produce lower roughness than reticular lattices; theoretically modeled beam profiles show lower roughness than the measured beam profile; and the simulated roughness on PMMA tends to be lower than on human cornea. For the given input parameters, optimum settings minimizing the roughness were found. Theoretically, the proposed model can be used for achieving smoothness with laser systems used for ablation processes at relatively low cost. This model may improve the quality of results and could be directly applied to improving postoperative surface quality.
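A super-Gaussian radial profile and the effect of truncation can be sketched directly: order 1 gives the pure Gaussian, large orders approach a flat-top, and truncation introduces a hard edge in the deposited energy. The normalization, beam waist, and truncation radius below are illustrative, not the study's values.

```python
import numpy as np

def super_gaussian(r, w, order):
    """Radial super-Gaussian intensity profile I(r) = exp(-2 (r/w)^(2N));
    order N = 1 is a pure Gaussian, large N approaches a flat-top."""
    return np.exp(-2.0 * (r / w) ** (2 * order))

r = np.linspace(0.0, 2.0, 500)
g = super_gaussian(r, w=1.0, order=1)      # Gaussian profile
flat = super_gaussian(r, w=1.0, order=8)   # near flat-top profile

# Truncation at r_t creates a discontinuous edge in the beam,
# which is what roughens the resulting ablation surface.
r_t = 0.8
truncated = np.where(r <= r_t, g, 0.0)
```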
Non-Gaussian bias: insights from discrete density peaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch
2013-09-01
Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
Statistics and topology of the COBE differential microwave radiometer first-year sky maps
NASA Technical Reports Server (NTRS)
Smoot, G. F.; Tenorio, L.; Banday, A. J.; Kogut, A.; Wright, E. L.; Hinshaw, G.; Bennett, C. L.
1994-01-01
We use statistical and topological quantities to test the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) first-year sky maps against the hypothesis that the observed temperature fluctuations reflect Gaussian initial density perturbations with random phases. Recent papers discuss specific quantities as discriminators between Gaussian and non-Gaussian behavior, but the treatment of instrumental noise on the data is largely ignored. The presence of noise in the data biases many statistical quantities in a manner dependent on both the noise properties and the unknown cosmic microwave background temperature field. Appropriate weighting schemes can minimize this effect, but it cannot be completely eliminated. Analytic expressions are presented for these biases, and Monte Carlo simulations are used to assess the best strategy for determining cosmologically interesting information from noisy data. The genus is a robust discriminator that can be used to estimate the power-law quadrupole-normalized amplitude, Q_rms-PS, independently of the two-point correlation function. The genus of the DMR data is consistent with Gaussian initial fluctuations with Q_rms-PS = (15.7 ± 2.2) - (6.6 ± 0.3)(n - 1) μK, where n is the power-law index. Fitting the rms temperature variations at various smoothing angles gives Q_rms-PS = 13.2 ± 2.5 μK and n = 1.7 (+0.3, -0.6). While consistent with Gaussian fluctuations, the first-year data are only sufficient to rule out strongly non-Gaussian distributions of fluctuations.
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly applied Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.
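The threshold (liability) idea behind the binary and ordinal models is simply a dichotomized latent Gaussian: an unobserved Normal trait is observed only as case/control above a cutoff. A minimal sketch checking that the observed prevalence matches the Gaussian tail above the threshold (all values illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Latent Gaussian liability, observed only as a binary phenotype
n = 200_000
liability = rng.normal(0.0, 1.0, n)
threshold = 1.0
case = liability > threshold

prevalence = case.mean()
# Tail probability P(Z > t) for a standard normal
expected = 0.5 * (1.0 - math.erf(threshold / math.sqrt(2.0)))
```

In the paper's models the liability is regressed on markers, so the threshold link lets the continuous-trait machinery handle binary, ordinal, and censored observations.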
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
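The fixed-margin approach mentioned above reduces, in its simplest fixed-sample two-arm form, to a shifted z-test on the treatment difference. A toy sketch with made-up response rates and margin; the paper's three-arm structure and group-sequential stopping boundaries are not modeled here.

```python
import math

# Fixed-margin noninferiority z-test (two proportions, margin delta).
# H0: the test arm is worse than the active control by more than delta;
# noninferiority is declared if z exceeds the one-sided critical value.
def ni_z(p_test, p_active, n_test, n_active, delta):
    diff = p_test - p_active
    se = math.sqrt(p_test * (1 - p_test) / n_test
                   + p_active * (1 - p_active) / n_active)
    return (diff + delta) / se

# Illustrative numbers: 70% vs 72% response, 300 per arm, margin 10%
z = ni_z(0.70, 0.72, 300, 300, delta=0.10)
declared_noninferior = z > 1.96   # one-sided 2.5% level
```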
Improved coverage of cDNA-AFLP by sequential digestion of immobilized cDNA.
Weiberg, Arne; Pöhler, Dirk; Morgenstern, Burkhard; Karlovsky, Petr
2008-10-13
cDNA-AFLP is a transcriptomics technique which does not require prior sequence information and can therefore be used as a gene discovery tool. The method is based on selective amplification of cDNA fragments generated by restriction endonucleases, electrophoretic separation of the products and comparison of the band patterns between treated samples and controls. Unequal distribution of the restriction sites used to generate cDNA fragments negatively affects the performance of cDNA-AFLP: some transcripts are represented by more than one fragment while others escape detection, causing redundancy and reducing the coverage of the analysis, respectively. With the goal of improving the coverage of cDNA-AFLP without increasing its redundancy, we designed a modified cDNA-AFLP protocol. Immobilized cDNA is sequentially digested with several restriction endonucleases and the released DNA fragments are collected in mutually exclusive pools. To investigate the performance of the protocol, the software tool MECS (Multiple Enzyme cDNA-AFLP Simulation) was written in Perl. cDNA-AFLP protocols described in the literature and the new sequential digestion protocol were simulated on sets of cDNA sequences from mouse, human and Arabidopsis thaliana. The redundancy and coverage, the total number of PCR reactions, and the average fragment length were calculated for each protocol and cDNA set. Simulation revealed that sequential digestion of immobilized cDNA followed by partitioning of the released fragments into mutually exclusive pools outperformed other cDNA-AFLP protocols in terms of coverage, redundancy, fragment length, and the total number of PCRs. Primers generating 30 to 70 amplicons per PCR provided the highest fraction of electrophoretically distinguishable fragments suitable for normalization. For the A. thaliana, human and mouse transcriptomes, the use of two marking enzymes and three sequentially applied releasing enzymes for each of the marking enzymes is recommended.
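The sequential digestion idea can be sketched in a few lines: each releasing enzyme cuts the still-immobilized cDNA, the released distal fragment joins that enzyme's pool, and the attached remainder passes to the next enzyme. A toy model (not the MECS Perl tool; the single-anchor geometry and cutting exactly at the site start are simplifying assumptions):

```python
def sequential_digest(seq, enzymes):
    """Simulate sequential digestion of a cDNA immobilized at its 5' end.

    For each (name, recognition_site) in order: cut at the first site,
    release the distal part into that enzyme's pool, and keep the
    anchored prefix for the next enzyme. Each released fragment thus
    lands in exactly one pool (the pools are mutually exclusive).
    Returns {enzyme_name: released_fragment_or_None}."""
    pools = {}
    attached = seq
    for name, site in enzymes:
        cut = attached.find(site)
        if cut == -1:
            pools[name] = None           # no site left: nothing released
            continue
        pools[name] = attached[cut:]     # released fragment, pooled
        attached = attached[:cut]        # remainder stays on the support
    return pools
```

With a sequence containing both a BamHI (GGATCC) and an EcoRI (GAATTC) site, applying BamHI first releases the distal fragment, and EcoRI then releases a second, non-overlapping fragment from the remainder.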
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
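The warm-up estimation of the clumping parameter k mentioned above is commonly done by the method of moments, which makes its fragility easy to see: k is a ratio whose denominator is the (often small and noisy) difference between the sample variance and the mean. A sketch of that textbook estimator (not the authors' software):

```python
import statistics

def estimate_k(counts):
    """Method-of-moments estimate of the negative binomial clumping
    parameter k from per-tree pest counts.
    For NB, var = mean + mean**2 / k, so k = mean**2 / (var - mean).
    When the sample variance does not exceed the mean there is no
    evidence of clumping and k is undefined; None is returned."""
    m = statistics.mean(counts)
    v = statistics.variance(counts)  # sample (n-1) variance
    if v <= m:
        return None
    return m * m / (v - m)
```

Because small perturbations of the variance swing the denominator, k estimates remain unreliable even at substantial sample sizes, consistent with the n~100 instability reported above.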
NASA Astrophysics Data System (ADS)
Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng
2017-08-01
Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-11-23
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the spooky effect where there exists arbitrary PHD mass shifting in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.
NASA Astrophysics Data System (ADS)
Zhou, Weijun; Hong, Xueren; Xie, Baisong; Yang, Yang; Wang, Li; Tian, Jianmin; Tang, Rongan; Duan, Wenshan
2018-02-01
In order to generate high-quality ion beams through a relatively uniform radiation pressure acceleration (RPA) of a common flat foil, a new scheme is proposed to overcome the curving of the target while it is irradiated by a single transversely Gaussian laser. In this scheme, two matched counterpropagating transversely Gaussian laser pulses, a main pulse and an auxiliary pulse, impinge on the foil target at the same time. Two-dimensional (2D) particle-in-cell (PIC) simulations show that, under the restraint of the auxiliary laser, the curving of the foil can be effectively suppressed. As a result, a high-quality monoenergetic ion beam is generated through an efficient RPA of the foil target. For example, when two counterpropagating transversely circularly polarized Gaussian lasers with normalized amplitudes a1=120 and a2=30, respectively, impinge on the foil target at the same time, a highly collimated 1.3 GeV monoenergetic proton beam is obtained. Furthermore, the effects of different auxiliary-laser parameters on ion acceleration are also investigated.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters come from the statistical information of the best individuals via a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains the convergence performance. The performance of the algorithm is examined on several benchmarks: a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, where it performs better than several other algorithms and EDAs. Finally, FEGEDA is applied to PID controller optimization of a PMSM and compared with classical PID and GA. PMID:24892059
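The core loop of a Gaussian EDA with elitism is short: sample a population from a per-dimension Gaussian model, select the best individuals, refit the model to them, and carry the single best solution over unchanged. A minimal sketch of that idea (not the paper's FEGEDA implementation; population size, elite fraction, and the sigma floor are arbitrary choices):

```python
import random
import statistics

def gaussian_eda(f, dim, pop=60, elite_frac=0.3, iters=80, seed=1):
    """Minimize f over R^dim with a Gaussian estimation-of-distribution
    algorithm plus elitism. Each generation: sample from the current
    per-dimension Gaussian model, reinject the best-so-far solution,
    select elites, and refit the model's mean/std to the elites."""
    rng = random.Random(seed)
    mu, sigma = [0.0] * dim, [5.0] * dim
    best = None
    for _ in range(iters):
        popn = [[rng.gauss(mu[d], sigma[d]) for d in range(dim)]
                for _ in range(pop)]
        if best is not None:
            popn[0] = best                        # elitism: keep best-so-far
        popn.sort(key=f)
        best = popn[0]
        elites = popn[:max(2, int(elite_frac * pop))]
        for d in range(dim):                      # refit the Gaussian model
            col = [x[d] for x in elites]
            mu[d] = statistics.mean(col)
            sigma[d] = max(statistics.stdev(col), 1e-9)
    return best
```

On a shifted sphere function, f(x) = sum((x_i - 3)^2), the model mean migrates toward (3, 3, 3) while the model variance contracts.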
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate-dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the nonlinear mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse of dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
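The gradient-informed sampling at the heart of LMCMC can be illustrated in one dimension with a Metropolis-adjusted Langevin (MALA) step, where the log-posterior gradient drifts proposals toward high-probability regions. A generic sketch (not the paper's KPCA-coupled algorithm; the step size and chain length are arbitrary):

```python
import math
import random

def mala(logp, grad_logp, x0=0.0, eps=0.1, n=20000, seed=0):
    """Metropolis-adjusted Langevin sampler: proposals follow the
    discretized Langevin dynamics x + (eps/2) * grad(logp) plus Gaussian
    noise, and a Metropolis correction keeps the chain exact."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        m_fwd = x + 0.5 * eps * grad_logp(x)
        y = rng.gauss(m_fwd, math.sqrt(eps))
        m_bwd = y + 0.5 * eps * grad_logp(y)
        # log acceptance ratio: target ratio times proposal-density ratio
        log_a = (logp(y) - logp(x)
                 - (x - m_bwd) ** 2 / (2 * eps)
                 + (y - m_fwd) ** 2 / (2 * eps))
        if math.log(rng.random()) < log_a:
            x = y
        chain.append(x)
    return chain
```

Run against a standard normal target (logp(x) = -x^2/2, gradient -x), the chain's sample mean and variance approach 0 and 1.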
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
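Inverse-variance weighted least squares for a straight-line fit, the scheme shown above to be equivalent to the MLE under independent Gaussian noise, is compact to state. A generic sketch (in the slope method, s would be the log range-corrected lidar signal, the intercept encodes the zero-range constant, and the extinction coefficient is -slope/2):

```python
def weighted_line_fit(r, s, var):
    """Weighted least-squares fit of s = a + b*r with weights 1/var.
    Equivalent to the maximum likelihood estimator when the noise on
    each s[i] is independent Gaussian with variance var[i].
    Returns (a, b) = (intercept, slope)."""
    w = [1.0 / v for v in var]
    W = sum(w)
    rbar = sum(wi * ri for wi, ri in zip(w, r)) / W
    sbar = sum(wi * si for wi, si in zip(w, s)) / W
    num = sum(wi * (ri - rbar) * (si - sbar) for wi, ri, si in zip(w, r, s))
    den = sum(wi * (ri - rbar) ** 2 for wi, ri in zip(w, r))
    b = num / den
    a = sbar - b * rbar
    return a, b
```

On noiseless data the fit recovers the true line exactly, regardless of the weights; the weights matter only for how noise is averaged.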
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
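The predictor-corrector structure of the SIR particle filter mentioned above is easy to sketch for a scalar state: propagate particles through the state model (predict), weight them by the observation likelihood (correct), then resample. A minimal sketch for a Gaussian random walk observed in Gaussian noise (illustrative only, not the stand-yield model of the paper):

```python
import math
import random

def sir_filter(obs, n=500, q=1.0, r=1.0, seed=0):
    """Sampling importance resampling (SIR) particle filter for
      x_t = x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    Returns the filtered posterior mean at each time step."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in obs:
        parts = [p + rng.gauss(0.0, math.sqrt(q)) for p in parts]  # predict
        w = [math.exp(-(y - p) ** 2 / (2 * r)) for p in parts]     # correct
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * p for wi, p in zip(w, parts)))
        parts = rng.choices(parts, weights=w, k=n)                 # resample
    return means
```

Fed a constant observation sequence, the filtered mean converges to the observed level, with the resampling step preventing weight degeneracy.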
Posterior error probability in the Mu-2 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.
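The figure of merit is at heart a Bayes-rule computation: combine the prior probability of an acquisition error with the likelihoods of the observed statistic under the error and no-error hypotheses. A generic illustration with hypothetical numbers (not the Mu-2 flight algorithm, which bounds this quantity in machine software):

```python
def posterior_error_prob(prior_error, like_error, like_ok):
    """Posterior probability of an acquisition error via Bayes' rule:
      P(err | data) = P(data | err) P(err)
                      / (P(data | err) P(err) + P(data | ok) P(ok))."""
    num = like_error * prior_error
    return num / (num + like_ok * (1.0 - prior_error))
```

An observation that is far more likely under the error hypothesis can turn a 1% prior error probability into a posterior above 80%, which is why the posterior gives a much better level of confidence than the prior alone.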
Extracting foreground-obscured μ-distortion anisotropies to constrain primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Remazeilles, M.; Chluba, J.
2018-07-01
Correlations between cosmic microwave background (CMB) temperature, polarization, and spectral distortion anisotropies can be used as a probe of primordial non-Gaussianity. Here, we perform a reconstruction of μ-distortion anisotropies in the presence of Galactic and extragalactic foregrounds, applying the so-called Constrained ILC component separation method to simulations of proposed CMB space missions (PIXIE, LiteBIRD, CORE, and PICO). Our sky simulations include Galactic dust, Galactic synchrotron, Galactic free-free, the thermal Sunyaev-Zeldovich effect, as well as primary CMB temperature and μ-distortion anisotropies, the latter being added as a correlated field. The Constrained ILC method allows us to null the CMB temperature anisotropies in the reconstructed μ-map (and vice versa), in addition to mitigating the contamination from astrophysical foregrounds and instrumental noise. We compute the cross-power spectrum between the reconstructed (CMB-free) μ-distortion map and the (μ-free) CMB temperature map, after foreground removal and component separation. Since the cross-power spectrum is proportional to the primordial non-Gaussianity parameter, fNL, on scales k ≃ 740 Mpc^{-1}, this allows us to derive fNL-detection limits for the aforementioned future CMB experiments. Our analysis shows that foregrounds degrade the theoretical detection limits (based mostly on instrumental noise) by more than one order of magnitude, with PICO standing the best chance at placing upper limits on scale-dependent non-Gaussianity. We also discuss the dependence of the constraints on the channel sensitivities and chosen bands. As for B-mode polarization measurements, extended coverage at frequencies ν ≲ 40 GHz and ν ≳ 400 GHz provides more leverage than increased channel sensitivity.
Wu, Wei Mo; Wang, Jia Qiang; Cao, Qi; Wu, Jia Ping
2017-02-01
Accurate prediction of soil organic carbon (SOC) distribution is crucial for soil resource utilization and conservation, climate change adaptation, and ecosystem health. In this study, we selected a 1300 m×1700 m solonchak sampling area in the northern Tarim Basin, Xinjiang, China, and collected a total of 144 soil samples (5-10 cm). The objectives of this study were to build a Bayesian geostatistical model to predict SOC content, and to assess its performance by comparison with three other geostatistical approaches [ordinary kriging (OK), sequential Gaussian simulation (SGS), and inverse distance weighting (IDW)]. In the study area, soil organic carbon contents ranged from 1.59 to 9.30 g·kg⁻¹ with a mean of 4.36 g·kg⁻¹ and a standard deviation of 1.62 g·kg⁻¹. The sample semivariogram was best fitted by an exponential model with a nugget-to-sill ratio of 0.57. Using the Bayesian geostatistical approach, we generated the SOC content map and obtained the prediction variance and the upper and lower 95% bounds of SOC content, which were then used to evaluate the prediction uncertainty. The Bayesian geostatistical approach performed better than OK, SGS and IDW, demonstrating its advantages for SOC prediction.
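Of the baselines compared above, inverse distance weighting is the simplest to state: each prediction is a weighted average of the samples, with weights decaying as a power of distance. A minimal 2-D sketch (not the study's implementation; the power of 2 is the conventional default):

```python
def idw(sample_xy, sample_z, x, y, power=2.0):
    """Inverse distance weighting: predict z at (x, y) as the
    distance-weighted mean of the samples, weight = 1 / distance**power.
    The interpolator is exact at sample locations."""
    num = den = 0.0
    for (sx, sy), z in zip(sample_xy, sample_z):
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return z                      # exact at a sample point
        w = d2 ** (-power / 2.0)
        num += w * z
        den += w
    return num / den
```

Unlike kriging or the Bayesian model, IDW yields no prediction variance, which is one reason the comparison above favors the geostatistical approaches for uncertainty assessment.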
NASA Astrophysics Data System (ADS)
Lam, D. T.; Kerrou, J.; Benabderrahmane, H.; Perrochet, P.
2017-12-01
The calibration of groundwater flow models in transient state can be motivated by the expected improved characterization of the aquifer hydraulic properties, especially when supported by a rich transient dataset. In the prospect of setting up a calibration strategy for a variably-saturated transient groundwater flow model of the area around ANDRA's Bure Underground Research Laboratory, we wish to take advantage of the long hydraulic head and flowrate time series collected near and at the access shafts in order to help inform the model hydraulic parameters. A promising inverse approach for such a high-dimensional nonlinear model, whose applicability has been illustrated more extensively in other scientific fields, is an iterative ensemble smoother algorithm initially developed for a reservoir engineering problem. Furthermore, the ensemble-based stochastic framework allows us to address to some extent the uncertainty of the calibration for a subsequent analysis of a flow-process-dependent prediction. By assimilating the available data in one single step, this method iteratively updates each member of an initial ensemble of stochastic realizations of parameters until the minimization of an objective function. However, as is well known for ensemble-based Kalman methods, this correction computed from approximations of covariance matrices is most efficient when the ensemble realizations are multi-Gaussian. As shown by the comparison of the updated ensemble means obtained for our simplified synthetic model of 2D vertical flow using either multi-Gaussian or multipoint simulations of parameters, the ensemble smoother fails to preserve the initial connectivity of the facies and the bimodal parameter distribution.
Given the geological structures depicted by the multi-layered geological model built for the real case, our goal is to find how to still best leverage the performance of the ensemble smoother while using an initial ensemble of conditional multi-Gaussian simulations or multipoint simulations as conceptually consistent as possible. Performance of the algorithm including additional steps to help mitigate the effects of non-Gaussian patterns, such as Gaussian anamorphosis, or resampling of facies from the training image using updated local probability constraints will be assessed.
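The ensemble smoother correction itself is a Kalman-type update built from ensemble covariances; for one scalar parameter and one scalar datum it reduces to a few lines. A sketch of that single-step generic update (not the iterative algorithm used in the study), which also makes the multi-Gaussian assumption visible: only means and covariances of the ensemble enter the correction:

```python
def es_update(ens_m, ens_d, d_obs, obs_var):
    """One ensemble-smoother update for scalar parameter m and scalar
    simulated datum d: shift each realization by the Kalman-like gain
    cov(m, d) / (var(d) + obs_var) times its data mismatch."""
    n = len(ens_m)
    mbar = sum(ens_m) / n
    dbar = sum(ens_d) / n
    cov_md = sum((m - mbar) * (d - dbar)
                 for m, d in zip(ens_m, ens_d)) / (n - 1)
    var_d = sum((d - dbar) ** 2 for d in ens_d) / (n - 1)
    gain = cov_md / (var_d + obs_var)
    return [m + gain * (d_obs - d) for m, d in zip(ens_m, ens_d)]
```

In the linear-Gaussian case d = 2m with a near-exact observation of the truth m = 3, the gain is 0.5 and every realization collapses onto the truth; with bimodal or channelized ensembles this linear shift is exactly what destroys connectivity patterns.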
Soil Moisture Monitoring using Surface Electrical Resistivity measurements
NASA Astrophysics Data System (ADS)
Calamita, Giuseppe; Perrone, Angela; Brocca, Luca; Straface, Salvatore
2017-04-01
The relevant role played by soil moisture (SM) in global and local natural processes results in an explicit interest in its spatial and temporal estimation in the vadose zone from different scientific areas, i.e. eco-hydrology, hydrogeology, atmospheric research, soil and plant sciences, etc. A deeper understanding of natural processes requires the collection of data at a higher number of points and at increasingly higher spatial scales in order to validate hydrological numerical simulations. In order to take best advantage of electrical resistivity (ER) data, with their non-invasive and cost-effective properties, sequential Gaussian geostatistical simulations (sGs) can be applied to monitor the SM distribution in the soil by means of a few SM measurements and a dense regular ER monitoring grid. With this aim, co-located SM measurements using mobile TDR probes (MiniTrase) and ER measurements, obtained using a four-electrode device coupled with a geo-resistivimeter (Syscal Junior), were collected during two surveys carried out on a 200 × 60 m² area. In each survey, data were collected at a depth of around 20 cm at more than 800 points, adopting a regular grid sampling scheme with a step (5 m) varying according to logistic and soil-compaction constraints. The results of this study are robust due to the high number of measurements available for both variables, which strengthens the confidence in the estimated covariance function. Moreover, the findings obtained using sGs show that it is possible to estimate soil moisture variations in the pedological zone by means of time-lapse electrical resistivity and a few SM measurements.
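The sGs machinery invoked above (and in the sequential simulation codes discussed at the top of this section) follows one recipe: visit the unknown nodes along a random path, krige a conditional mean and variance at each node from already-informed nodes, and draw the node value from that conditional Gaussian. A deliberately tiny 1-D sketch using only the single nearest informed node as the kriging neighbourhood (an assumption made for brevity; production codes solve a full simple-kriging system over many neighbours):

```python
import math
import random

def sgs_1d(n, corr_len, cond=None, seed=0):
    """Sequential Gaussian simulation of a standard-normal field on an
    n-node 1-D grid with unit-sill exponential covariance exp(-|h|/L).
    cond maps node index -> conditioning value (honored exactly)."""
    rng = random.Random(seed)
    cov = lambda h: math.exp(-abs(h) / corr_len)
    known = dict(cond or {})                 # node index -> simulated value
    path = [i for i in range(n) if i not in known]
    rng.shuffle(path)                        # random visiting order
    for i in path:
        if known:
            j = min(known, key=lambda k: abs(k - i))   # nearest informed node
            lam = cov(i - j)                 # simple-kriging weight, 1 datum
            mean, var = lam * known[j], 1.0 - lam * lam
        else:
            mean, var = 0.0, 1.0             # unconditional first node
        known[i] = rng.gauss(mean, math.sqrt(var))
    return [known[i] for i in range(n)]
```

Conditioning data (here, the sparse SM measurements after normal-score transform) are reproduced exactly, while the simulated values fill in spatial variability consistent with the covariance model.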
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precision and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results were compared between the Gaussian sequential stochastic simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and the simulated probability density curve based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, whereas more informative estimations are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum, and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information-assimilation method for other geotechnical parameters.
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While much effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models.
By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Neuman, S. P.
2016-12-01
Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y, with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensively by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their lead statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) lead-order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.
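The mixture Y = U G above produces the sharp-peaked, heavy-tailed behavior directly: multiplying a Gaussian by an independent random scale inflates the fourth moment relative to the second. A toy sketch with a lognormal subordinator (an arbitrary choice for U made here for illustration; it is not tied to the paper's specific subordinator):

```python
import random

def gsg_samples(n, sigma_u=0.75, seed=0):
    """Draw from a toy generalized sub-Gaussian model Y = U * G, with
    G standard Gaussian and U an independent non-negative subordinator
    (here lognormal). The random variance of U fattens the tails of Y
    relative to a pure Gaussian."""
    rng = random.Random(seed)
    return [rng.lognormvariate(0.0, sigma_u) * rng.gauss(0.0, 1.0)
            for _ in range(n)]

def excess_kurtosis(xs):
    """Sample excess kurtosis: 0 for Gaussian data, positive for
    heavy-tailed data."""
    m = sum(xs) / len(xs)
    m2 = sum((x - m) ** 2 for x in xs) / len(xs)
    m4 = sum((x - m) ** 4 for x in xs) / len(xs)
    return m4 / (m2 * m2) - 3.0
```

For this choice of U the theoretical kurtosis of Y is 3 E[U^4]/E[U^2]^2, well above the Gaussian value of 3, so the sampled excess kurtosis is strongly positive while a plain Gaussian sample of the same size stays near zero.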
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: a scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, requiring only arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm's (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control in a simulation study, relative to fixed-sample, sequential Bonferroni, and other recently proposed sequential procedures. PMID:25092948
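The procedure above is named after Holm's (1979) fixed-sample step-down test. A minimal sketch of that classical building block (the fixed-sample version, not the sequential procedure proposed in the paper) is:

```python
def holm_rejections(p_values, alpha=0.05):
    """Holm (1979) step-down procedure: controls the familywise type I
    error rate at level alpha under arbitrary dependence between tests.
    Returns the set of indices of rejected hypotheses."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = set()
    for k, i in enumerate(order):
        # compare the k-th smallest p-value against alpha / (m - k)
        if p_values[i] <= alpha / (m - k):
            rejected.add(i)
        else:
            break  # step-down: stop at the first non-rejection
    return rejected
```

At α = 0.05 on p-values (0.001, 0.04, 0.03, 0.2) the procedure rejects only the first hypothesis: 0.001 ≤ 0.05/4 passes, but 0.03 > 0.05/3 stops the walk.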
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazalova-Carter, Magdalena; Liu, Michael; Palma, Bianey
2015-04-15
Purpose: To measure radiation dose in a water-equivalent medium from very high-energy electron (VHEE) beams and make comparisons to Monte Carlo (MC) simulation results. Methods: Dose in a polystyrene phantom delivered by an experimental VHEE beam line was measured with Gafchromic films for three 50 MeV and two 70 MeV Gaussian beams of 4.0–6.9 mm FWHM and compared to corresponding MC-simulated dose distributions. MC dose in the polystyrene phantom was calculated with the EGSnrc/BEAMnrc and DOSXYZnrc codes based on the experimental setup. Additionally, the effect of a 2% beam energy measurement uncertainty and a possible non-zero beam angular spread on MC dose distributions was evaluated. Results: MC-simulated percentage depth dose (PDD) curves agreed with measurements within 4% for all beam sizes at both 50 and 70 MeV. The central-axis PDD at 8 cm depth ranged from 14% to 19% for the 5.4–6.9 mm 50 MeV beams and from 14% to 18% for the 4.0–4.5 mm 70 MeV beams. MC-simulated relative beam profiles of regularly shaped Gaussian beams evaluated at depths of 0.64 to 7.46 cm agreed with measurements to within 5%. A 2% beam energy uncertainty and a 0.286° beam angular spread corresponded to maximum differences of 3.0% and 3.8% in the depth dose curves of the 50 and 70 MeV electron beams, respectively. Absolute dose differences between MC simulations and film measurements of regularly shaped Gaussian beams were between 10% and 42%. Conclusions: The authors demonstrate that relative dose distributions for VHEE beams of 50–70 MeV can be measured with Gafchromic films and modeled with Monte Carlo simulations to an accuracy of 5%. The reported absolute dose differences, likely caused by imperfect beam steering and subsequent charge loss, reveal the importance of accurate VHEE beam control and diagnostics.
Koopmeiners, Joseph S.; Feng, Ziding
2015-01-01
Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value (NPV) curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known, in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small-sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation, and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until the desired sampling space is reached. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoel, D.D.
1984-01-01
Two computer codes have been developed for operational use in performing real-time evaluations of atmospheric releases from the Savannah River Plant (SRP) in South Carolina. These codes, based on mathematical models, are part of the SRP WIND (Weather Information and Display) automated emergency response system. The accuracy of ground-level concentrations from a Gaussian puff-plume model and a two-dimensional sequential puff model is being evaluated with data from a series of short-range diffusion experiments using sulfur hexafluoride as a tracer. The models use meteorological data collected from 7 towers on SRP and at the 300 m WJBF-TV tower about 15 km northwest of SRP. The winds and the stability, which is based on turbulence measurements, are measured at the 60 m stack heights. These results are compared to downwind concentrations computed using only standard meteorological data, i.e., adjusted 10 m winds and stability determined by the Pasquill-Turner stability classification method. Scattergrams and simple statistics were used for model evaluations. Results indicate predictions within accepted limits for the puff-plume code and a bias in the sequential puff model predictions using the meteorologist-adjusted nonstandard data. 5 references, 4 figures, 2 tables.
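For context, the steady-state Gaussian plume concentration with ground reflection, the textbook form underlying puff-plume codes like the one evaluated above, can be sketched as follows. Variable names are illustrative; the SRP codes add puff handling and site meteorology on top of this basic formula.

```python
import numpy as np

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Ground-reflected Gaussian plume concentration at crosswind
    distance y and height z, for source strength q, wind speed u and
    effective release height h. sigma_y, sigma_z are the dispersion
    parameters at the downwind distance of interest."""
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    # direct term plus image source reflected in the ground plane
    vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + np.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# centreline ground-level concentration for an elevated 60 m release
c0 = gaussian_plume(q=1.0, u=5.0, y=0.0, z=0.0,
                    sigma_y=50.0, sigma_z=20.0, h=60.0)
```

The concentration is symmetric in the crosswind coordinate and falls off away from the plume centreline, which is what the scattergram comparisons against tracer data probe.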
Melo, Adma Nadja Ferreira de; Souza, Geany Targino de; Schaffner, Donald; Oliveira, Tereza C Moreira de; Maciel, Janeeyre Ferreira; Souza, Evandro Leite de; Magnani, Marciane
2017-06-19
This study assessed changes in the thermo-tolerance and the capability to survive simulated gastrointestinal conditions of Salmonella Enteritidis PT4 and Salmonella Typhimurium PT4 inoculated in chicken breast meat following exposure to stresses (cold, acid and osmotic) commonly imposed during food processing. The effects of the stress imposed by exposure to oregano (Origanum vulgare L.) essential oil (OVEO) on thermo-tolerance were also assessed. After exposure to cold stress (5 °C for 5 h) in chicken breast meat, the test strains were sequentially exposed to the different stress agents (lactic acid, NaCl or OVEO) at sub-lethal amounts, defined from previously determined minimum inhibitory concentrations, and finally to a thermal treatment (55 °C for 30 min). Resistant cells from the distinct sequential treatments were exposed to simulated gastrointestinal conditions. The exposure to cold stress did not result in increased tolerance to acid stress (lactic acid: 5 and 2.5 μL/g) for either strain. Cells of S. Typhimurium PT4 and S. Enteritidis PT4 previously exposed to acid stress showed higher (p < 0.05) tolerance to osmotic stress (NaCl: 75 or 37.5 mg/g) compared to non-acid-exposed cells. Exposure to osmotic stress without previous exposure to acid stress caused a salt-concentration-dependent decrease in counts for both strains. Exposure to OVEO (1.25 and 0.62 μL/g) decreased the acid and osmotic tolerance of both S. Enteritidis PT4 and S. Typhimurium PT4. Sequential exposure to acid and osmotic stress conditions after cold exposure increased (p < 0.05) the thermo-tolerance of both strains. The cells that survived the sequential stress exposure (resistant) showed higher tolerance (p < 0.05) to acidic conditions during continuous exposure (182 min) to simulated gastrointestinal conditions. Resistant cells of S. Enteritidis PT4 and S. Typhimurium PT4 showed higher survival rates (p < 0.05) than control cells at the end of the in vitro digestion. These results show that sequential exposure to multiple sub-lethal stresses may increase the thermo-tolerance and enhance the survival under gastrointestinal conditions of S. Enteritidis PT4 and S. Typhimurium PT4.
Preserving the Boltzmann ensemble in replica-exchange molecular dynamics.
Cooke, Ben; Schmidler, Scott C
2008-10-28
We consider the convergence behavior of replica-exchange molecular dynamics (REMD) [Sugita and Okamoto, Chem. Phys. Lett. 314, 141 (1999)] based on properties of the numerical integrators in the underlying isothermal molecular dynamics (MD) simulations. We show that a variety of deterministic algorithms favored by molecular dynamics practitioners for constant-temperature simulation of biomolecules fail either to be measure invariant or irreducible, and are therefore not ergodic. We then show that REMD using these algorithms also fails to be ergodic. As a result, the entire configuration space may not be explored even in an infinitely long simulation, and the simulation may not converge to the desired equilibrium Boltzmann ensemble. Moreover, our analysis shows that for initial configurations with unfavorable energy, it may be impossible for the system to reach a region surrounding the minimum energy configuration. We demonstrate these failures of REMD algorithms for three small systems: a Gaussian distribution (simple harmonic oscillator dynamics), a bimodal mixture of Gaussians distribution, and the alanine dipeptide. Examination of the resulting phase plots and equilibrium configuration densities indicates significant errors in the ensemble generated by REMD simulation. We describe a simple modification to address these failures based on a stochastic hybrid Monte Carlo correction, and prove that this is ergodic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Takayanagi, T; Fujii, Y
2014-06-15
Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot-scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the agreement between measurements and calculation results in absolute dose for the two types of beam kernel. Methods: A dose kernel is one of the important input data required by the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We adopted double and triple Gaussian models for the lateral distribution in order to account for the large-angle scattering due to nuclear reactions, by fitting simulated in-water lateral dose profiles of a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table for each beam energy. The beam modeling process is based on the method of MDACC [X.R. Zhu 2013]. Results: Comparing the absolute doses calculated by the double Gaussian model with those measured at the center of the SOBP, the difference increases up to 3.5% in the high-energy region because the large-angle scattering due to nuclear reactions is not sufficiently accounted for at intermediate depths in the double Gaussian model. When the triple Gaussian dose kernel is employed, the measured absolute dose at the center of the SOBP agrees with the calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated beam modeling results for dose distributions employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot-scanning technique at the Nagoya Proton Therapy Center.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment, their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
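The Poisson resampling idea, drawing each reduced-count pixel by randomly keeping a fraction of its recorded events, can be sketched as binomial thinning: if a pixel count is Poisson(λ), keeping each event with probability f yields exactly Poisson(fλ). This is a sketch of the principle, not the authors' code.

```python
import numpy as np

def thin_counts(image, fraction=0.5, seed=0):
    """Simulate a reduced-count image by binomial thinning of each
    pixel's integer count; preserves Poisson statistics exactly."""
    rng = np.random.default_rng(seed)
    return rng.binomial(image.astype(np.int64), fraction)

full = np.random.default_rng(1).poisson(100.0, size=(64, 64))
half = thin_counts(full)
```

Unlike naive scaling by 0.5 followed by rounding (the failure mode flagged above for counts below 100), the thinned image keeps integer counts with the correct mean and variance.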
Jones, Kevin C; Seghal, Chandra M; Avery, Stephen
2016-03-21
The unique dose deposition of proton beams generates a distinctive thermoacoustic (protoacoustic) signal, which can be used to calculate the proton range. To identify the expected protoacoustic amplitude, frequency, and arrival time for different proton pulse characteristics encountered at hospital-based proton sources, the protoacoustic pressure emissions generated by 150 MeV, pencil-beam proton pulses were simulated in a homogeneous water medium. Proton pulses with Gaussian widths ranging up to 200 μs were considered. The protoacoustic amplitude, frequency, and time-of-flight (TOF) range accuracy were assessed. For TOF calculations, the acoustic pulse arrival time was determined based on multiple features of the wave. Based on the simulations, Gaussian proton pulses can be categorized as Dirac-delta-function-like (FWHM < 4 μs) or longer. For the δ-function-like irradiation, the protoacoustic spectrum peaks at 44.5 kHz and the systematic error in determining the Bragg peak range is <2.6 mm. For longer proton pulses, the spectrum shifts to lower frequencies, and the range calculation systematic error increases (⩽23 mm for an FWHM of 56 μs). By mapping the protoacoustic peak arrival time to range with simulations, the residual error can be reduced. Using a proton pulse with FWHM = 2 μs results in a maximum signal-to-noise ratio per total dose. Simulations predict that a 300 nA, 150 MeV, FWHM = 4 μs Gaussian proton pulse (8.0 × 10^6 protons, 3.1 cGy dose at the Bragg peak) will generate a 146 mPa pressure wave at 5 cm beyond the Bragg peak. There is an angle-dependent systematic error in the protoacoustic TOF range calculations. Placing detectors along the proton beam axis and beyond the Bragg peak minimizes this error. For clinical proton beams, protoacoustic detectors should be sensitive to <400 kHz (for -20 dB). Hospital-based synchrocyclotrons and cyclotrons are promising sources of proton pulses for generating clinically measurable protoacoustic emissions.
Photon-phonon-photon transfer in optomechanics
Rakhubovsky, Andrey A.; Filip, Radim
2017-01-01
We consider transfer of a highly nonclassical quantum state through an optomechanical system. That is, we investigate a protocol consisting of sequential upload, storage and read-out of the quantum state from a mechanical mode of an optomechanical system. We show that, provided the input state is a test-bed single-photon Fock state, the Wigner function of the recovered state can have negative values at the origin, which is a manifestation of the nonclassicality of the quantum state of the macroscopic mechanical mode and of the overall transfer protocol itself. Moreover, we prove that the recovered state is quantum non-Gaussian for a wide range of setup parameters. We verify that current electromechanical and optomechanical experiments can test this complete transfer of a single photon. PMID:28436461
Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.
Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter
2015-08-24
We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity in the calculation of the fields. A novel technique for occlusion culling with little additional computation cost is also introduced. Additionally, the method attaches a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.
Li, Guoqi; Deng, Lei; Wang, Dong; Wang, Wei; Zeng, Fei; Zhang, Ziyang; Li, Huanglong; Song, Sen; Pei, Jing; Shi, Luping
2016-01-01
Chunking refers to a phenomenon whereby individuals group items together when performing a memory task to improve the performance of sequential memory. In this work, we build a bio-plausible hierarchical chunking of sequential memory (HCSM) model to explain why such improvement happens. We address this issue by linking hierarchical chunking with synaptic plasticity and neuromorphic engineering. We find that a chunking mechanism reduces the requirements on synaptic plasticity, since it allows synapses with a narrow dynamic range and low precision to perform a memory task. We validate a hardware version of the model through simulation, based on measured memristor behavior with narrow dynamic range in neuromorphic circuits, which reveals how chunking works and what role it plays in encoding sequential memory. Our work deepens the understanding of sequential memory and enables incorporating it into the investigation of brain-inspired computing on neuromorphic architectures. PMID:28066223
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation of a Gaussian spot is often conducted to obtain the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. To improve the accuracy, thresholding before centroid computation is unavoidable, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG, which uses m% of the maximum spot intensity as the threshold, and TkCoG, which uses μn + κσn, where μn and σn are the mean value and standard deviation of the background noise. First, their impact on the detection error under various SNR conditions is simulated to determine how the value of κ or m should be chosen; then a comparison between the two methods is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold and also yields a lower detection error.
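The two thresholding rules can be sketched as below. The border-based estimate of the background-noise mean and deviation in the TkCoG branch is an illustrative assumption; the paper's simulation setup is not reproduced here.

```python
import numpy as np

def thresholded_cog(img, m_percent=None, k=None):
    """Centre of gravity of a spot after thresholding.
    TmCoG: threshold = m% of the peak intensity.
    TkCoG: threshold = mu_n + k * sigma_n, with the background-noise
    statistics estimated here from the image border (assumption)."""
    img = np.asarray(img, dtype=float)
    if m_percent is not None:
        thr = img.max() * m_percent / 100.0
    else:
        border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
        thr = border.mean() + k * border.std()
    w = np.clip(img - thr, 0.0, None)   # zero out sub-threshold pixels
    ys, xs = np.indices(img.shape)
    return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()
```

On a noiseless Gaussian spot, either rule recovers the true centre to sub-pixel accuracy; the paper's comparison concerns how the two threshold choices behave once detector and photon noise are added.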
Investigations into phase effects from diffracted Gaussian beams for high-precision interferometry
NASA Astrophysics Data System (ADS)
Lodhia, Deepali
Gravitational wave detectors are a new class of observatories aiming to detect gravitational waves from cosmic sources. All-reflective interferometer configurations have been proposed for future detectors, replacing transmissive optics with diffractive elements, thereby reducing thermal issues associated with power absorption. However, diffraction gratings introduce additional phase noise, creating more stringent conditions for alignment stability, and further investigations are required into all-reflective interferometers. A suitable mathematical framework using Gaussian modes is required for analysing the alignment stability using diffraction gratings. Such a framework was created, whereby small beam displacements are modelled using a modal technique. It was confirmed that the original modal-based model does not contain the phase changes associated with grating displacements. Experimental tests verified that the phase of a diffracted Gaussian beam is independent of the beam shape. Phase effects were further examined using a rigorous time-domain simulation tool. These findings show that the perceived phase difference is based on an intrinsic change of coordinate system within the modal-based model, and that the extra phase can be added manually to the modal expansion. This thesis provides a well-tested and detailed mathematical framework that can be used to develop simulation codes to model more complex layouts of all-reflective interferometers.
NASA Astrophysics Data System (ADS)
Semchishen, Vladimir A.; Mrochen, Michael; Seminogov, Vladimir N.; Panchenko, Vladislav Y.; Seiler, Theo
1998-04-01
Purpose: The increasing interest in a homogeneous Gaussian light beam profile for applications in ophthalmology, e.g. photorefractive keratectomy (PRK), calls for simple optical systems with low energy losses. Therefore, we developed the Light Shaping Beam Homogenizer (LSBH), working from the UV up to the mid-IR. Method: The irregular microlens structure on a quartz surface was fabricated using photolithography, chemical etching and chemical polishing processes. This created a three-dimensional structure on the quartz substrate characterized, in the case of a Gaussian beam, by a random distribution of the tilts of the individual irregularities. The LSBH was realized for the 193 nm and 2.94 micrometer wavelengths. Simulation results obtained by 3-D analysis for an arbitrary incident light beam were compared to experimental results. Results: The correlation to a numerical Gaussian fit is better than 94%, with high uniformity for an incident beam with an intensity modulation of nearly 100%. In the far field the cross section of the beam always shows rotational symmetry. The transmittance and damage threshold of the LSBH depend only on the substrate characteristics. Conclusions: Considering our experimental and simulation results, it is possible to control the angular distribution of the beam intensity after the LSBH with higher efficiency than with diffractive or holographic optical elements.
A Gaussian random field model for similarity-based smoothing in Bayesian disease mapping.
Baptista, Helena; Mendes, Jorge M; MacNab, Ying C; Xavier, Miguel; Caldas-de-Almeida, José
2016-08-01
Conditionally specified Gaussian Markov random field (GMRF) models with an adjacency-based neighbourhood weight matrix, commonly known as neighbourhood-based GMRF models, have been the mainstream approach to spatial smoothing in Bayesian disease mapping. In the present paper, we propose a conditionally specified Gaussian random field (GRF) model with a similarity-based non-spatial weight matrix to facilitate non-spatial smoothing in Bayesian disease mapping. The model, named the similarity-based GRF, is motivated by modelling disease mapping data in situations where the underlying small-area relative risks and the associated determinant factors do not vary systematically in space, and where similarity is defined with respect to the associated disease determinant factors. The neighbourhood-based GMRF and the similarity-based GRF are compared and assessed via a simulation study and two case studies, using new data on alcohol abuse in Portugal collected by the World Mental Health Survey Initiative and the well-known lip cancer data in Scotland. In the presence of disease data with no evidence of positive spatial correlation, the simulation study showed a consistent gain in efficiency from the similarity-based GRF compared with the adjacency-based GMRF with the determinant risk factors as covariates. This new approach broadens the scope of the existing conditional autocorrelation models.
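A similarity-based (non-spatial) weight matrix of the kind described, with neighbours defined in covariate space rather than on the map, might be sketched as follows. The Gaussian kernel on covariate distance is an illustrative assumption, not the paper's exact specification.

```python
import numpy as np

def similarity_weights(covariates, scale=1.0):
    """Symmetric weight matrix where area i and j are 'neighbours' in
    covariate space: weight decays with the distance between their
    determinant-factor vectors (Gaussian kernel, an assumption)."""
    X = np.asarray(covariates, dtype=float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * scale ** 2))
    np.fill_diagonal(W, 0.0)  # no self-weight, as in adjacency matrices
    return W

W = similarity_weights([[0.0], [0.1], [5.0]])
```

Areas with similar determinant-factor vectors receive large weights regardless of where they sit on the map, which is what lets the smoothing be non-spatial.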
Cost effectiveness of the stream-gaging program in North Dakota
Ryan, Gerald L.
1989-01-01
This report documents results of a cost-effectiveness study of the stream-gaging program In North Dakota. It is part of a nationwide evaluation of the stream-gaging program of the U.S. Geological Survey.One phase of evaluating cost effectiveness is to identify less costly alternative methods of simulating streamflow records. Statistical or hydro logic flow-routing methods were used as alternative methods to simulate streamflow records for 21 combinations of gaging stations from the 94-gaging-station network. Accuracy of the alternative methods was sufficient to consider discontinuing only one gaging station.Operation of the gaging-station network was evaluated by using associated uncertainty in streamflow records. The evaluation was limited to the nonwinter operation of 29 gaging stations in eastern North Dakota. The current (1987) travel routes and measurement frequencies require a budget of about $248/000 and result in an average equivalent Gaussian spread in streamflow records of 16.5 percent. Changes in routes and measurement frequencies optimally could reduce the average equivalent Gaussian spread to 14.7 percent.Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget would increase the optimal average equivalent Gaussian spread from 14.7 to 20.4 percent, and a $400,000 budget could decrease it to 5.8 percent.
Emergence of Multiscaling in a Random-Force Stirred Fluid
NASA Astrophysics Data System (ADS)
Yakhot, Victor; Donzis, Diego
2017-07-01
We consider the transition to strong turbulence in an infinite fluid stirred by a Gaussian random force. The transition is defined as the first appearance of anomalous scaling of normalized moments of velocity derivatives (dissipation rates) emerging from the low-Reynolds-number Gaussian background. It is shown that, due to multiscaling, strongly intermittent rare events can be quantitatively described in terms of an infinite number of different "Reynolds numbers" reflecting a multitude of anomalous scaling exponents. The theoretically predicted transition disappears at Rλ ≤ 3. The developed theory is in quantitative agreement with the outcome of large-scale numerical simulations.
Constructing petal modes from the coherent superposition of Laguerre-Gaussian modes
NASA Astrophysics Data System (ADS)
Naidoo, Darryl; Forbes, Andrew; Ait-Ameur, Kamel; Brunel, Marc
2011-03-01
An experimental approach to generating petal-like transverse modes, similar to those seen in Porro-prism resonators, has been successfully demonstrated. We hypothesize that the petal-like structures are generated from a coherent superposition of Laguerre-Gaussian modes of zero radial order and opposite azimuthal order. To verify this hypothesis, visually based comparisons, such as the petal peak-to-peak diameter and the angle between adjacent petals, are drawn between experimental and simulated data. The beam quality factor of the petal-like transverse modes and an inner-product interaction are also compared experimentally to numerical results.
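The hypothesized superposition can be sketched numerically: adding LG modes with azimuthal orders +l and -l (zero radial order) makes the exp(±ilφ) terms interfere into a cos(lφ) modulation, i.e. 2l bright petals. Normalization constants are dropped for simplicity.

```python
import numpy as np

def petal_intensity(l=3, n=256, w0=1.0):
    """Intensity of LG(0,+l) + LG(0,-l) on an n x n grid spanning
    [-2, 2] in units of the waist w0 (unnormalized amplitudes)."""
    y, x = np.mgrid[-2:2:n * 1j, -2:2:n * 1j]
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    amp = (np.sqrt(2) * r / w0) ** l * np.exp(-(r / w0) ** 2)
    field = amp * np.cos(l * phi)   # exp(+il*phi) + exp(-il*phi)
    return np.abs(field) ** 2

intensity = petal_intensity(l=3)
```

For l = 3 the ring of maximum intensity carries six lobes with adjacent petals separated by 60°, the kind of geometric signature (petal diameter, inter-petal angle) compared against experiment above.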
Numeric Solutions of Dirac-Gursey Spinor Field Equation Under External Gaussian White Noise
NASA Astrophysics Data System (ADS)
Aydogmus, Fatma
2016-06-01
In this paper, we consider the Dirac-Gursey spinor field equation, which has particle-like solutions derived from classical field equations, the so-called instantons, formed by using the Heisenberg ansatz, under the effect of an additional Gaussian white noise term. Our purpose is to understand how the behavior of spinor-type excited instantons in four dimensions can be affected by noise. Thus, we simulate the phase portraits and Poincaré sections of the obtained system numerically, both with and without noise. Recurrence plots are also given for more detailed information regarding the system.
Gaussian Process Regression Model in Spatial Logistic Regression
NASA Astrophysics Data System (ADS)
Sofro, A.; Oktaviarina, A.
2018-01-01
Spatial analysis has developed very quickly in the last decade. One of the favored approaches is based on the neighbourhood of the region. Unfortunately, it has some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to address this issue. In this paper, we focus on spatial modelling with GPR for binomial data with a logit link function. The performance of the model is investigated. We discuss inference, namely how to estimate the parameters and hyper-parameters, and how to predict. Furthermore, simulation studies are explained in the last section.
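The regression backbone can be illustrated with a minimal GP posterior mean under a squared-exponential kernel. This sketch omits the logit link and binomial likelihood that the paper places on top of the GP prior (those require approximate inference), and the hyper-parameters are fixed rather than estimated.

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, noise=1e-2):
    """Posterior mean of GP regression with a squared-exponential
    kernel, for 1-D inputs (illustrative fixed hyper-parameters)."""
    def kern(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = kern(X, X) + noise * np.eye(len(X))   # noisy training covariance
    return kern(Xs, X) @ np.linalg.solve(K, y)

X_tr = np.array([0.0, 1.0, 2.0])
y_tr = np.array([0.0, 1.0, 0.0])
mean = gpr_predict(X_tr, y_tr, np.array([0.5, 1.0, 1.5]))
```

In the spatial-logistic setting described above, this posterior mean would sit inside the logit link, and the length scale and noise would be treated as hyper-parameters to estimate.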
NASA Astrophysics Data System (ADS)
Dai, Zhiping; Ling, Xiaohui; Tang, Shiqing
2018-06-01
In this paper, the propagation properties of hollow Gaussian beams (HGBs) are discussed in detail for off-waist incidence in strongly nonlocal media. A set of mathematical expressions is given to describe the evolution of the beam intensity, the beam width, and the real beam radius. Numerical simulations are carried out to illustrate these propagation properties, which depend on the off-waist incidence. It is found that an HGB always periodically transforms its transverse pattern during propagation. Accordingly, the beam width and the real beam radius also vary periodically during propagation.
Recent advances in lossless coding techniques
NASA Astrophysics Data System (ADS)
Yovanof, Gregory S.
Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmetic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
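Of the statistical techniques named, Huffman coding is the simplest to sketch: build a binary tree by repeatedly merging the two least-frequent subtrees, then read codewords off the root-to-leaf paths.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Map each symbol in `text` to a prefix-free binary codeword,
    with more frequent symbols receiving shorter codewords."""
    freq = Counter(text)
    if len(freq) == 1:                      # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    # heap entries: (weight, tiebreak, tree); a tree is a symbol or a pair
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)     # two lightest subtrees
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
```

On "abracadabra" the most frequent symbol 'a' receives a one-bit codeword and the whole string encodes in 23 bits, versus 88 bits at 8 bits per character.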
A stochastic-geometric model of soil variation in Pleistocene patterned ground
NASA Astrophysics Data System (ADS)
Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc
2013-04-01
In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model.
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.
Analyzing multicomponent receptive fields from neural responses to natural stimuli
Rowekamp, Ryan; Sharpee, Tatyana O
2011-01-01
The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916
Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M
2012-01-01
In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
Isolated drops from capillary jets by means of Gaussian wave packets
NASA Astrophysics Data System (ADS)
Garcia, Francisco Javier; Gonzalez, Heliodoro; Castrejon-Pita, Alfonso Arturo; Castrejon-Pita, Jose Rafael; Gomez-Aguilar, Francisco Jose
2017-11-01
The possibility of obtaining isolated drops from a continuous liquid jet through localized velocity perturbations is explored analytically, numerically and experimentally. We show that Gaussian wave packets are appropriate for this goal. A temporal linear analysis predicts the early evolution of these wave packets and provides an estimate of the breakup length of the jet. Non-linear numerical simulations allow us both to corroborate these results and to obtain the shape of the surface of the jet prior to breakup. Finally, we show experimental evidence that stimulating with a Gaussian wave packet can lead to the formation of an isolated drop without disturbing the rest of the jet. The authors acknowledge support from the Spanish Government under Contract No. FIS2014-25161, the Junta de Andalucia under Contract No. P11-FQM-7919, the EPSRC-UK via the Grant EP/P024173/1, and the Royal Society.
Dynamical transition for a particle in a squared Gaussian potential
NASA Astrophysics Data System (ADS)
Touya, C.; Dean, D. S.
2007-02-01
We study the problem of a Brownian particle diffusing in finite dimensions in a potential given by ψ = phi2/2 where phi is Gaussian random field. Exact results for the diffusion constant in the high temperature phase are given in one and two dimensions and it is shown to vanish in a power-law fashion at the dynamical transition temperature. Our results are confronted with numerical simulations where the Gaussian field is constructed, in a standard way, as a sum over random Fourier modes. We show that when the number of Fourier modes is finite the low temperature diffusion constant becomes non-zero and has an Arrhenius form. Thus we have a simple model with a fully understood finite size scaling theory for the dynamical transition. In addition we analyse the nature of the anomalous diffusion in the low temperature regime and show that the anomalous exponent agrees with that predicted by a trap model.
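The finite-mode construction discussed above can be made concrete with a minimal sketch of the standard random-wave representation of a Gaussian field (illustrative parameters, not those of the paper): wavenumbers drawn from a Gaussian spectral density, uniform random phases, and a sqrt(2/N) normalization so the field has unit variance as the number of modes grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_field(x, n_modes=1000, corr_len=1.0):
    """Approximate a zero-mean, unit-variance Gaussian random field phi(x)
    as a finite sum over random Fourier modes.  By the central limit
    theorem the sum is Gaussian in the limit n_modes -> infinity; for
    finite n_modes the field deviates, which is the effect studied above."""
    k = rng.normal(0.0, 1.0 / corr_len, size=n_modes)      # random wavenumbers
    theta = rng.uniform(0.0, 2 * np.pi, size=n_modes)      # random phases
    return np.sqrt(2.0 / n_modes) * np.cos(np.outer(x, k) + theta).sum(axis=1)

x = np.linspace(0.0, 100.0, 4000)
phi = gaussian_field(x)
```

Squaring and halving such a field then gives a realization of the potential ψ = φ²/2 considered in the paper.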
Purity of Gaussian states: Measurement schemes and time evolution in noisy channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paris, Matteo G.A.; Illuminati, Fabrizio; Serafini, Alessio
2003-07-01
We present a systematic study of the purity for Gaussian states of single-mode continuous variable systems. We prove the connection of purity to observable quantities for these states, and show that the joint measurement of two conjugate quadratures is necessary and sufficient to determine the purity at any time. The statistical reliability and the range of applicability of the proposed measurement scheme are tested by means of Monte Carlo simulated experiments. We then consider the dynamics of purity in noisy channels. We derive an evolution equation for the purity of general Gaussian states both in thermal and in squeezed thermal baths. We show that purity is maximized at any given time for an initial coherent state evolving in a thermal bath, or for an initial squeezed state evolving in a squeezed thermal bath whose asymptotic squeezing is orthogonal to that of the input state.
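The connection to observable quantities can be sketched numerically: for a single-mode Gaussian state with 2×2 quadrature covariance matrix σ (vacuum convention σ = I/2), the purity is μ = 1/(2√det σ), so measuring the two conjugate quadrature variances (hence det σ) determines it. A minimal sketch:

```python
import numpy as np

def purity(sigma):
    """Purity of a single-mode Gaussian state from its quadrature
    covariance matrix, mu = 1 / (2 * sqrt(det sigma)), with the
    convention that the vacuum has sigma = I/2."""
    return 1.0 / (2.0 * np.sqrt(np.linalg.det(np.asarray(sigma, dtype=float))))

vacuum = 0.5 * np.eye(2)                     # pure: mu = 1

def thermal(nbar):                           # thermal state, mean photon number nbar
    return (nbar + 0.5) * np.eye(2)          # mixed: mu = 1 / (2*nbar + 1)

def squeezed(r):                             # squeezed vacuum: still pure, mu = 1
    return 0.5 * np.diag([np.exp(-2 * r), np.exp(2 * r)])
```

A thermal state with n̄ = 1, for instance, has μ = 1/3, while any amount of pure squeezing leaves μ = 1.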
Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods
Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.
2017-01-01
The log-Gaussian Cox process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
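The doubly stochastic structure described above can be sketched in a few lines: sample a log-intensity surface from a Gaussian process, exponentiate it, and draw Poisson counts given that intensity (a 1-D grid approximation with a squared-exponential kernel and illustrative parameters, not any of the fitting methods compared in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_lgcp_1d(n=200, length=10.0, mu=1.0, sigma2=0.5, ell=1.0):
    """Simulate a log-Gaussian Cox process on a 1-D grid.
    Level 2: the log-intensity is a Gaussian process draw;
    level 1: cell counts are Poisson given that intensity."""
    x = np.linspace(0.0, length, n)
    dx = length / n
    # GP covariance; a small jitter keeps the Cholesky factor well defined.
    K = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
    K += 1e-6 * np.eye(n)
    log_lam = mu + np.linalg.cholesky(K) @ rng.standard_normal(n)
    lam = np.exp(log_lam)                    # intensity per unit length
    counts = rng.poisson(lam * dx)           # Poisson counts per grid cell
    return x, lam, counts

x, lam, counts = simulate_lgcp_1d()
```

Inference methods such as HMC or INLA then target the latent log-intensity given only the counts.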
Modeling of dispersion near roadways based on the vehicle-induced turbulence concept
NASA Astrophysics Data System (ADS)
Sahlodin, Ali M.; Sotudeh-Gharebagh, Rahmat; Zhu, Yifang
A mathematical model is developed for dispersion near roadways by incorporating vehicle-induced turbulence (VIT) into Gaussian dispersion modeling using computational fluid dynamics (CFD). The model is based on the Gaussian plume equation, in which the roadway is regarded as a series of point sources. The Gaussian dispersion parameters are modified by simulating the roadway with CFD in order to evaluate turbulent kinetic energy (TKE) as a measure of VIT. The model was evaluated against experimental carbon monoxide concentrations downwind of two major freeways reported in the literature. Good agreement was achieved between model results and the literature data. A significant difference was observed between the model results with and without VIT, and the difference is largest for data very close to the freeways. This model, after evaluation with additional data, may be used as a framework for predicting dispersion and deposition from any roadway for different traffic (vehicle type and speed) conditions.
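The underlying construction, a roadway treated as a superposition of Gaussian plume point sources with ground reflection, can be sketched as follows. The dispersion coefficients, source strengths, and geometry are illustrative assumptions; the paper's CFD/TKE modification of the dispersion parameters is not reproduced here.

```python
import numpy as np

def plume_point(q, u, x, y, z, h, a=0.22, b=0.20):
    """Ground-reflected Gaussian plume from one point source of strength q
    at height h, with wind speed u along +x.  Dispersion widths grow as
    simple power laws of downwind distance (illustrative coefficients,
    not a calibrated stability class)."""
    x = np.maximum(x, 1.0)                       # avoid the singularity at x = 0
    sy, sz = a * x**0.9, b * x**0.85
    lateral = np.exp(-0.5 * (y / sy)**2)
    vertical = (np.exp(-0.5 * ((z - h) / sz)**2) +
                np.exp(-0.5 * ((z + h) / sz)**2))   # image source = ground reflection
    return q / (2.0 * np.pi * u * sy * sz) * lateral * vertical

def roadway_concentration(x_rec, y_rec, line_q=0.01, u=3.0, n_src=201, half=500.0):
    """Treat the roadway (along y) as n_src point sources and superpose plumes."""
    ys = np.linspace(-half, half, n_src)
    q = line_q * (2.0 * half / n_src)            # emission allotted to each point source
    return sum(plume_point(q, u, x_rec, y_rec - y0, 1.5, 0.5) for y0 in ys)

near = roadway_concentration(20.0, 0.0)          # receptor 20 m downwind
far = roadway_concentration(200.0, 0.0)          # receptor 200 m downwind
```

As expected for a line-like source, the modeled concentration decays with downwind distance from the roadway.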
Non-Gaussian information from weak lensing data via deep learning
NASA Astrophysics Data System (ADS)
Gupta, Arushi; Matilla, José Manuel Zorrilla; Hsu, Daniel; Haiman, Zoltán
2018-05-01
Weak lensing maps contain information beyond two-point statistics on small scales. Much recent work has tried to extract this information through a range of different observables or via nonlinear transformations of the lensing field. Here we train and apply a two-dimensional convolutional neural network to simulated noiseless lensing maps covering 96 different cosmological models over a range of {Ωm,σ8} . Using the area of the confidence contour in the {Ωm,σ8} plane as a figure of merit, derived from simulated convergence maps smoothed on a scale of 1.0 arcmin, we show that the neural network yields ≈5 × tighter constraints than the power spectrum, and ≈4 × tighter than the lensing peaks. Such gains illustrate the extent to which weak lensing data encode cosmological information not accessible to the power spectrum or even other, non-Gaussian statistics such as lensing peaks.
Modelling the excitation field of an optical resonator
NASA Astrophysics Data System (ADS)
Romanini, Daniele
2014-06-01
Assuming the paraxial approximation, we derive efficient recursive expressions for the projection coefficients of a Gaussian beam over the Gauss-Hermite transverse electro-magnetic (TEM) modes of an optical cavity. While previous studies considered cavities with cylindrical symmetry, our derivation accounts for "simple" astigmatism and ellipticity, which allows us to deal with more realistic optical systems. The resulting expansion of the Gaussian beam over the cavity TEM modes provides an accurate simulation of the excitation field distribution inside the cavity, in transmission, and in reflection. In particular, this requires including counter-propagating TEM modes, usually neglected in textbooks. As an illustrative application to a complex case, we simulate reentrant cavity configurations where Herriott spots are obtained at the cavity output. We show that the case of an astigmatic cavity is also easily modelled. To our knowledge, such relevant applications are usually treated under the simplified geometrical optics approximation, or using heavier numerical methods.
Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles
2016-10-03
When representing the blade aerodynamics with rotating actuator lines, the computed forces have to be projected back to the CFD flow field as a volumetric body force. That has been done in the past with a geometrically simple uniform three-dimensional Gaussian at each point along the blade. Here, we argue that the body force can be shaped in a way that better predicts the blade local flow field, the blade load distribution, and the formation of the tip/root vortices. In previous work, we have determined the optimal scales of circular and elliptical Gaussian kernels that best reproduce the local flow field in two dimensions. Lastly, in this work we extend the analysis and applications by considering the full three-dimensional blade to test our hypothesis in a highly resolved Large Eddy Simulation.
Molteni, Matteo; Weigel, Udo M; Remiro, Francisco; Durduran, Turgut; Ferri, Fabio
2014-11-17
We present a new hardware simulator (HS) for characterization, testing and benchmarking of digital correlators used in various optical correlation spectroscopy experiments where the photon statistics is Gaussian and the corresponding time correlation function can have any arbitrary shape. Starting from the HS developed in [Rev. Sci. Instrum. 74, 4273 (2003)], and using the same I/O board (PCI-6534 National Instrument) mounted on a modern PC (Intel Core i7-CPU, 3.07GHz, 12GB RAM), we have realized an instrument capable of delivering continuous streams of TTL pulses over two channels, with a time resolution of Δt = 50ns, up to a maximum count rate of 〈I〉 ∼ 5MHz. Pulse streams, typically detected in dynamic light scattering and diffuse correlation spectroscopy experiments were generated and measured with a commercial hardware correlator obtaining measured correlation functions that match accurately the expected ones.
THE DISTRIBUTION OF COOK’S D STATISTIC
Muller, Keith E.; Mok, Mario Chen
2013-01-01
Cook (1977) proposed a diagnostic to quantify the impact of deleting an observation on the estimated regression coefficients of a General Linear Univariate Model (GLUM). Simulations of models with Gaussian response and predictors demonstrate that his suggestion of comparing the diagnostic to the median of the F for overall regression captures an erratically varying proportion of the values. We describe the exact distribution of Cook’s statistic for a GLUM with Gaussian predictors and response. We also present computational forms, simple approximations, and asymptotic results. A simulation supports the accuracy of the results. The methods allow accurate evaluation of a single value or the maximum value from a regression analysis. The approximations work well for a single value, but less well for the maximum. In contrast, the cut-point suggested by Cook provides widely varying tail probabilities. As with all diagnostics, the data analyst must use scientific judgment in deciding how to treat highlighted observations. PMID:24363487
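A sketch of the diagnostic itself may help (the standard textbook formula, not the authors' exact-distribution results): for an OLS fit with p parameters, D_i = r_i² h_ii / (p s² (1 − h_ii)²), where h_ii is the leverage, r_i the residual, and s² the residual variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def cooks_distance(X, y):
    """Cook's D_i for each observation of an OLS fit y ~ X
    (X includes the intercept column)."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
    h = np.diag(H)                            # leverages h_ii
    r = y - H @ y                             # residuals
    s2 = r @ r / (n - p)                      # residual variance
    return r**2 * h / (p * (1 - h)**2 * s2)

# Gaussian predictors and response, as in the simulations above,
# plus one grossly contaminated response value.
X = np.column_stack([np.ones(50), rng.standard_normal((50, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + 0.3 * rng.standard_normal(50)
y[10] += 10.0                                 # inject one outlier
D = cooks_distance(X, y)
```

The contaminated observation dominates the diagnostic, which is the behavior the cut-point rules discussed above try to calibrate.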
Large-scale structure non-Gaussianities with modal methods
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel
2016-10-01
Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
The Time Warp Operating System (TWOS) computer program is a special-purpose operating system designed to support parallel discrete-event simulation; the user does not have to add any special logic to aid in synchronization. It is a complete implementation of the Time Warp mechanism, and supports only simulations and other computations designed for virtual time. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface-compatible with TWOS. TWOS and TWSIM are written in, and support simulations in, the C programming language.
Dry minor mergers and size evolution of high-z compact massive early-type galaxies
NASA Astrophysics Data System (ADS)
Oogi, Taira; Habe, Asao
2013-01-01
Recent observations show evidence that high-z (z ˜ 2-3) early-type galaxies (ETGs) are more compact than those with comparable mass at z ˜ 0. Such size evolution is most likely explained by the `dry merger scenario'. However, previous studies based on this scenario cannot consistently explain the properties of both high-z compact massive ETGs and local ETGs. We investigate the effect of multiple sequential dry minor mergers on the size evolution of compact massive ETGs. From an analysis of the Millennium Simulation Data Base, we show that such minor (stellar mass ratio M2/M1 < 1/4) mergers are extremely common during hierarchical structure formation. We perform N-body simulations of sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. Typical mass ratios of these minor mergers are 1/20 < M2/M1 ≤ 1/10. We show that sequential minor mergers of compact satellite galaxies are the most efficient at promoting size growth and decreasing the velocity dispersion of compact massive ETGs in our simulations. The change of stellar size and density of the merger remnants is consistent with recent observations. Furthermore, we construct the merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Data Base and estimate the size growth of the galaxies through the dry minor merger scenario. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained during sequential minor mergers in our simulations. However, we note that our numerical result is only valid for merger histories with typical mass ratios between 1/20 and 1/10 with parabolic and head-on orbits, and that our most efficient size-growth efficiency is likely an upper limit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Movahed, M. Sadegh; Khosravi, Shahram
2011-03-01
In this paper we study the footprint of cosmic strings, as topological defects in the very early universe, on the cosmic microwave background radiation. We develop the method of level crossing analysis in the context of the well-known Kaiser-Stebbins phenomenon for exploring the signature of cosmic strings. We simulate a Gaussian map by using the best-fit parameters given by WMAP-7 and then superimpose cosmic string effects on it as incoherent and active fluctuations. In order to investigate the capability of our method to detect cosmic strings for various values of the tension, Gμ, a simulated pure Gaussian map is compared with one including cosmic strings. Based on the level crossing analysis, superimposed cosmic strings with Gμ ≳ 4 × 10⁻⁹ could be detected in the simulated map without instrumental noise at resolution R = 1'. In the presence of anticipated instrumental noise the lower bound increases to Gμ ≳ 5.8 × 10⁻⁹.
Unfolding of Proteins: Thermal and Mechanical Unfolding
NASA Technical Reports Server (NTRS)
Hur, Joe S.; Darve, Eric
2004-01-01
We have employed a Hamiltonian model based on a self-consistent Gaussian approximation to examine the unfolding process of proteins in external force fields, both mechanical and thermal. The motivation was to investigate the unfolding pathways of proteins by including only the essence of the important interactions of the native-state topology. Furthermore, if such a model can indeed correctly predict the physics of protein unfolding, it can complement more computationally expensive simulations and theoretical work. The self-consistent Gaussian approximation by Micheletti et al. has been incorporated in our model to make it mathematically tractable by significantly reducing the computational cost. All thermodynamic properties and pair contact probabilities are calculated by simply evaluating a series of Incomplete Gamma functions in an iterative manner. We have compared our results to previous molecular dynamics simulation and experimental data for the mechanical unfolding of the giant muscle protein Titin (1TIT). Our model, especially in light of its simplicity and excellent agreement with experiment and simulation, demonstrates the basic physical elements necessary to capture the mechanism of protein unfolding in an external force field.
Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.
Dinpajooh, Mohammadhasan; Matyushov, Dmitry V
2014-07-17
Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we are asking here the question of whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force field water.
Near grazing scattering from non-Gaussian ocean surfaces
NASA Technical Reports Server (NTRS)
Kim, Yunjin; Rodriguez, Ernesto
1993-01-01
We investigate the behavior of the scattered electromagnetic waves from non-Gaussian ocean surfaces at near grazing incidence. Even though the scattering mechanisms at moderate incidence angles are relatively well understood, the same is not true for near grazing rough surface scattering. However, from the experimental ocean scattering data, it has been observed that the backscattering cross section of a horizontally polarized wave can be as large as the vertical counterpart at near grazing incidence. In addition, these returns are highly intermittent in time. There have been some suggestions that these unexpected effects may come from shadowing or feature scattering. Using numerical scattering simulations, it can be shown that the horizontal backscattering cannot be larger than the vertical one for Gaussian surfaces. The main objective of this study is to gain a clear understanding of the scattering mechanisms underlying near grazing ocean scattering. In order to evaluate the backscattering cross section from ocean surfaces at near grazing incidence, both hydrodynamic modeling of ocean surfaces and an accurate near grazing scattering theory are required. For the surface modeling, we generate Gaussian surfaces from the ocean surface power spectrum, which is derived using several experimental data sets. Then, weakly nonlinear large-scale ocean surfaces are generated following Longuet-Higgins. In addition, the modulation of small waves by large waves is included using the conservation of wave action. For surface scattering, we use the MOM (Method of Moments) to calculate the backscattering from scattering patches with the two-scale shadowing approximation. The differences between Gaussian and non-Gaussian surface scattering at near grazing incidence are presented.
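The Gaussian-surface generation step can be sketched by spectral filtering of white noise. This is a 1-D illustration with a toy power-law spectrum, not the experimentally derived ocean spectrum, and it omits the nonlinear Longuet-Higgins correction and wave-action modulation described above:

```python
import numpy as np

rng = np.random.default_rng(7)

def default_spectrum(k):
    """Illustrative k^-4 power-law spectrum with a low-wavenumber cutoff."""
    s = np.zeros_like(k)
    mask = k > 0.1
    s[mask] = k[mask]**-4.0
    return s

def gaussian_surface_1d(n=4096, dx=0.1, spectrum=default_spectrum):
    """Synthesize a 1-D Gaussian random surface with a prescribed power
    spectrum: shape complex white noise by sqrt(S(k)) in the Fourier
    domain and transform back to real space."""
    k = np.fft.rfftfreq(n, d=dx) * 2.0 * np.pi
    amp = np.sqrt(spectrum(k))
    noise = rng.standard_normal(len(k)) + 1j * rng.standard_normal(len(k))
    h = np.fft.irfft(amp * noise, n=n)
    return h - h.mean()                       # zero-mean surface elevation

h = gaussian_surface_1d()
```

Any fitted ocean spectrum can be substituted for `default_spectrum` without changing the synthesis step.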
Vehicle speed detection based on gaussian mixture model using sequential of images
NASA Astrophysics Data System (ADS)
Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang
2017-09-01
Intelligent Transportation Systems are one of the important components in the development of smart cities, and detection of vehicle speed on the highway supports traffic engineering and management. The purpose of this study is to detect the speed of moving vehicles using digital image processing. Our approach is as follows. The inputs are a sequence of frames, the frame rate (fps), and a region of interest (ROI). First, we separate foreground and background in each frame using a Gaussian Mixture Model (GMM). Then, in each frame, we locate each object and compute its centroid. Next, we determine the speed from the movement of the centroid across the sequence of frames, considering only frames in which the centroid lies inside the predefined ROI. Finally, we convert the pixel displacement per unit time into a speed in km/h. The system is validated by comparing the speed calculated manually with that obtained by the system. In software tests, the system detects vehicle speed with accuracies ranging from 77.41% to 97.52%; detection results on real road video footage, compared with the true vehicle speeds, are also reported.
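The final centroid-to-speed conversion in the pipeline above can be sketched as follows. Reducing the camera calibration to a single metres-per-pixel factor is an illustrative simplification, not the paper's calibration procedure:

```python
def pixel_speed_kmh(centroids, fps, meters_per_pixel):
    """Estimate speed from a track of centroid positions (x, y in pixels)
    observed in consecutive frames: sum the pixel displacements, convert
    to metres, divide by elapsed time, and scale m/s -> km/h."""
    dist_px = 0.0
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        dist_px += ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5
    seconds = (len(centroids) - 1) / fps
    return dist_px * meters_per_pixel / seconds * 3.6

# A synthetic track: 10 px/frame at 25 fps with 0.05 m/px
# is 12.5 m/s, i.e. 45 km/h.
track = [(i * 10.0, 100.0) for i in range(26)]
speed = pixel_speed_kmh(track, fps=25.0, meters_per_pixel=0.05)
```

In the full system the track would contain only the frames whose centroid falls inside the ROI.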
Effects of simulated turbulence on aircraft handling qualities
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Joshi, D. S.
1977-01-01
The influence of simulated turbulence on aircraft handling qualities is presented. Pilot opinions of the handling qualities of a light general aviation aircraft were evaluated in a motion-base simulator using a simulated turbulence environment. A realistic representation of turbulence disturbances is described in terms of rms intensity and scale length and their random variations with time. The time histories generated by the proposed turbulence models showed characteristics more similar to real turbulence than those of the frequently used Gaussian turbulence model. The proposed turbulence models flexibly accommodate changes in atmospheric conditions and are easily implemented in flight simulator studies.
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
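The expectation-maximization idea behind such robust calibration can be illustrated on a scalar toy problem: EM for a location parameter under Student's-t noise, where the latent precision weights automatically downweight outliers. The real calibration solves for complex antenna gains with Levenberg-Marquardt in the maximization step; this sketch replaces that with closed-form weighted means.

```python
import numpy as np

rng = np.random.default_rng(3)

def t_location(y, nu=2.0, iters=50):
    """EM estimate of a location parameter under Student's-t noise.
    E-step: weights w_i = (nu + 1) / (nu + r_i^2 / s2) shrink the
    influence of large residuals; M-step: weighted mean and scale."""
    mu, s2 = np.median(y), np.var(y)
    for _ in range(iters):
        r2 = (y - mu)**2
        w = (nu + 1.0) / (nu + r2 / s2)          # E-step: latent precisions
        mu = np.sum(w * y) / np.sum(w)           # M-step: location
        s2 = np.sum(w * (y - mu)**2) / len(y)    # M-step: scale
    return mu

# 100 clean measurements around 5.0 plus two gross outliers
# (e.g. interference, or sources missing from the sky model).
y = np.concatenate([rng.normal(5.0, 0.1, 100), np.array([50.0, 60.0])])
robust = t_location(y)
naive = y.mean()
```

The Gaussian (least-squares) estimate is dragged toward the outliers, while the t-based estimate stays near the true value, mirroring the flux-preservation result reported above.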
Robust Gaussian Graphical Modeling via l1 Penalization
Sun, Hokeun; Li, Hongzhe
2012-01-01
Gaussian graphical models have been widely used as an effective method for studying the conditional independence structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1-penalized estimation procedure for sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how much each observation deviates, where the deviation of an observation is measured based on its own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified likelihood, where nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical lasso for Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical lasso. PMID:23020775
ERIC Educational Resources Information Center
Wang, Hung-Yuan; Duh, Henry Been-Lirn; Li, Nai; Lin, Tzung-Jin; Tsai, Chin-Chung
2014-01-01
The purpose of this study is to investigate and compare students' collaborative inquiry learning behaviors and their behavior patterns in an augmented reality (AR) simulation system and a traditional 2D simulation system. Their inquiry and discussion processes were analyzed by content analysis and lag sequential analysis (LSA). Forty…
Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.
Bouhrara, Mustapha; Spencer, Richard G
2018-06-01
The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
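The paper's contribution is the noncentral-χ case; purely as a baseline illustration of the CRLB mechanics it builds on, the sketch below computes the bound for the amplitude of a hypothetical mono-exponential decay under i.i.d. Gaussian noise and checks it against Monte Carlo trials. The model, sampling times, and noise level are all made-up assumptions.

```python
import math, random

# CRLB for the amplitude A of a mono-exponential decay s(t) = A*exp(-t/T)
# sampled with i.i.d. Gaussian noise of standard deviation sigma:
# Fisher information I(A) = sum_i (ds/dA)^2 / sigma^2, and CRLB = 1/I(A).
def crlb_amplitude(ts, T, sigma):
    info = sum(math.exp(-t / T) ** 2 for t in ts) / sigma ** 2
    return 1.0 / info

A_true, T, sigma = 1.0, 1.0, 0.1
ts = [0.1 * i for i in range(20)]
bound = crlb_amplitude(ts, T, sigma)

# Monte Carlo check: the least-squares amplitude estimator is efficient for
# this linear-in-A Gaussian model, so its variance should match the CRLB.
rng = random.Random(42)
den = sum(math.exp(-t / T) ** 2 for t in ts)
est = []
for _ in range(2000):
    y = [A_true * math.exp(-t / T) + rng.gauss(0.0, sigma) for t in ts]
    est.append(sum(yi * math.exp(-t / T) for yi, t in zip(y, ts)) / den)
mean_est = sum(est) / len(est)
var_est = sum((e - mean_est) ** 2 for e in est) / (len(est) - 1)
```

For noncentral-χ noise the likelihood, and hence the Fisher matrix elements, take a different form, which is exactly the extension the paper derives.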
Impact of spurious shear on cosmological parameter estimates from weak lensing observables
Petri, Andrea; May, Morgan; Haiman, Zoltán; ...
2014-12-30
Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ωm, w, σ8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ²sys ≈ 10⁻⁷, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈100 deg², non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ωm, w, σ8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.
ERIC Educational Resources Information Center
Ikeda, Kenji; Ueno, Taiji; Ito, Yuichi; Kitagami, Shinji; Kawaguchi, Jun
2017-01-01
Humans can pronounce a nonword (e.g., rint). Some researchers have interpreted this behavior as requiring a sequential mechanism by which a grapheme-phoneme correspondence rule is applied to each grapheme in turn. However, several parallel-distributed processing (PDP) models in English have simulated human nonword reading accuracy without a…
Role of excited state solvent fluctuations on time-dependent fluorescence Stokes shift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tanping, E-mail: tanping@lsu.edu; Kumar, Revati, E-mail: revatik@lsu.edu
2015-11-07
We explore the connection between the solvation dynamics of a chromophore upon photon excitation and equilibrium fluctuations of the solvent. Using molecular dynamics simulations, fluorescence Stokes shift for the tryptophan in Staphylococcus nuclease was examined using both nonequilibrium calculations and linear response theory. When the perturbed and unperturbed surfaces exhibit different solvent equilibrium fluctuations, the linear response approach on the former surface shows agreement with the nonequilibrium process. This agreement is excellent when the perturbed surface exhibits Gaussian statistics and qualitative in the case of an isomerization induced non-Gaussian statistics. However, the linear response theory on the unperturbed surface breaks down even in the presence of Gaussian fluctuations. Experiments also provide evidence of the connection between the excited state solvent fluctuations and the total fluorescence shift. These observations indicate that the equilibrium statistics on the excited state surface characterize the relaxation dynamics of the fluorescence Stokes shift. Our studies specifically analyze the Gaussian fluctuations of the solvent in the complex protein environment and further confirm the role of solvent fluctuations on the excited state surface. The results are consistent with previous investigations, found in the literature, of solutes dissolved in liquids.
Yang, Sejung; Lee, Byung-Uk
2015-01-01
In certain image acquisition processes, as in fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model, and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies that are properties of the transform coefficients of natural images. Here, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models, and noise estimation in the transform domain. We supplement the algorithm with cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images that demonstrate the improved performance of the proposed approach. PMID:26352138
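The paper's contourlet/HMM pipeline is beyond a short sketch, but the mixed Poisson-Gaussian noise model it targets is easy to demonstrate. The snippet below simulates that noise and applies the generalized Anscombe transform, a standard (not the paper's) trick that maps Poisson + Gaussian noise to approximately unit-variance Gaussian noise; the rates and σ are illustrative.

```python
import math, random

rng = random.Random(7)

def poisson(lam):
    # Knuth's multiplication method; adequate for the moderate rates used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def gat(x, sigma):
    # generalized Anscombe transform: output noise is approximately
    # Gaussian with unit variance for Poisson + additive Gaussian input
    return 2.0 * math.sqrt(max(x + 3.0 / 8.0 + sigma ** 2, 0.0))

sigma = 1.0
stds = []
for lam in (10.0, 30.0):
    z = [gat(poisson(lam) + rng.gauss(0.0, sigma), sigma) for _ in range(5000)]
    m = sum(z) / len(z)
    stds.append(math.sqrt(sum((v - m) ** 2 for v in z) / (len(z) - 1)))
```

After such a transform, denoisers designed for signal-independent Gaussian noise become applicable; the paper instead models the mixed noise directly in the contourlet domain.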
Normal and tumoral melanocytes exhibit q-Gaussian random search patterns.
da Silva, Priscila C A; Rosembach, Tiago V; Santos, Anésia A; Rocha, Márcio S; Martins, Marcelo L
2014-01-01
In multicellular organisms, cell motility is central to all morphogenetic processes, tissue maintenance, wound healing, and immune surveillance. Hence, failures in its regulation potentiate numerous diseases. Here, cell migration assays on plastic 2D surfaces were performed using normal (Melan A) and tumoral (B16F10) murine melanocytes in random motility conditions. The trajectories of the centroids of the cell perimeters were tracked through time-lapse microscopy. The statistics of these trajectories were analyzed by building velocity and turn angle distributions, as well as velocity autocorrelations and the scaling of mean-squared displacements. We find that these cells exhibit a crossover from normal to super-diffusive motion without angular persistence at long time scales. Moreover, these melanocytes move with non-Gaussian velocity distributions. This major finding indicates that amongst those animal cells supposedly migrating through Lévy walks, some can instead perform q-Gaussian walks. Furthermore, our results reveal that B16F10 cells infected by mycoplasmas exhibit essentially the same diffusivity as their healthy counterparts. Finally, a q-Gaussian random walk model was proposed to account for these melanocytic migratory traits. Simulations based on this model correctly describe the crossover to super-diffusivity in the cell migration tracks.
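A q-Gaussian with q > 1 coincides (up to rescaling) with a Student-t distribution with ν = (3 − q)/(q − 1) degrees of freedom, i.e. q = (ν + 3)/(ν + 1). The sketch below, which is not the authors' model, uses that equivalence to draw q-Gaussian step lengths and contrasts their heavy tails with Gaussian steps; ν = 5 (q = 4/3) is an arbitrary illustrative choice.

```python
import math, random

rng = random.Random(11)

def student_t(nu):
    # z / sqrt(chi2_nu / nu) is Student-t distributed; for q > 1 this is a
    # rescaled q-Gaussian with q = (nu + 3)/(nu + 1)
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))
    return z / math.sqrt(chi2 / nu)

nu = 5                       # corresponds to q = (5 + 3)/(5 + 1) = 4/3
n = 20000
t_steps = [student_t(nu) for _ in range(n)]
g_steps = [rng.gauss(0.0, 1.0) for _ in range(n)]

t_tail = sum(abs(s) > 3.0 for s in t_steps) / n   # heavy tails: frequent long steps
g_tail = sum(abs(s) > 3.0 for s in g_steps) / n   # Gaussian: rare beyond 3 sigma
```

The excess of long steps in the t/q-Gaussian walk is what produces the super-diffusive scaling of mean-squared displacement reported for the melanocyte tracks.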
NASA Astrophysics Data System (ADS)
Aghandeh, Hadi; Sedigh Ziabari, Seyed Ali
2017-11-01
This study investigates a junctionless tunnel field-effect transistor with a dual material gate and a heterostructure channel/source interface (DMG-H-JLTFET). We find that the heterostructure interface improves device behavior by reducing the tunneling barrier width at the channel/source interface. Simultaneously, the dual material gate structure decreases ambipolar current by increasing the tunneling barrier width at the drain/channel interface. The performance of the device is analyzed based on the energy band diagram in the on, off, and ambipolar states. Numerical simulations demonstrate improvements in ION, IOFF, ION/IOFF, subthreshold slope (SS), transconductance, and cut-off frequency, along with suppressed ambipolar behavior. Next, the work-function optimization of the dual material gate is studied. It is found that if appropriate work functions are selected for the tunnel and auxiliary gates, the JLTFET exhibits considerably improved performance. We then study the influence of a Gaussian doping distribution in the drain and the channel on the ambipolar performance of the device, and find that a Gaussian doping profile combined with a dual material gate structure remarkably reduces ambipolar current. The Gaussian-doped DMG-H-JLTFET also exhibits enhanced IOFF, ION/IOFF, and SS and a low threshold voltage without degrading ION.
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from being used to validate final designs to driving mainstream product development. However, there are still niche application areas, such as oiling simulations, where traditional CFD simulation times are prohibitive for use in product development, forcing reliance on expensive experimental methods. In this paper a unique example of a sprocket-chain simulation is presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in application areas with complex moving geometries, which pose severe challenges to classical finite-volume CFD methods due to moving meshes and high resolution requirements that lead to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX is presented. This abstract replaces DFD16-2016-000045.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Time scale of random sequential adsorption.
Erban, Radek; Chapman, S Jonathan
2007-04-01
A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
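The RSA ingredient of the model above is simple enough to show directly. The sketch below, a minimal illustration rather than the authors' code, runs continuum RSA of unit-length rods on a line ("car parking"), with one deposition attempt per RSA time step; the domain length, attempt count, and seed are arbitrary choices.

```python
import random

def rsa_1d(length=100.0, attempts=100000, seed=1):
    # Random sequential adsorption of unit-length rods on a line:
    # one deposition attempt per RSA time step; an attempt succeeds
    # only if the new rod overlaps no previously adsorbed rod.
    rng = random.Random(seed)
    placed = []                          # left edges of adsorbed rods
    for _ in range(attempts):
        x = rng.uniform(0.0, length - 1.0)
        if all(abs(x - y) >= 1.0 for y in placed):
            placed.append(x)
    return len(placed) / length          # fraction of the line covered

coverage = rsa_1d()
# at long times 1-D continuum RSA approaches Renyi's jamming limit, ~0.7476
```

Relating each such attempt to a physical time step, via the diffusive flux of molecules onto the surface, is precisely the multiscale coupling the paper constructs.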
Understanding and simulating the material behavior during multi-particle irradiations
Mir, Anamul H.; Toulemonde, M.; Jegou, C.; Miro, S.; Serruys, Y.; Bouffard, S.; Peuget, S.
2016-01-01
A number of studies have suggested that the irradiation behavior and damage processes occurring during sequential and simultaneous particle irradiations can differ significantly. Currently, there is no definite answer as to why and when such differences are seen. Additionally, conventional multi-particle irradiation facilities cannot correctly reproduce the complex irradiation scenarios experienced in a number of environments like space and nuclear reactors. Therefore, a better understanding of multi-particle irradiation problems and possible alternatives are needed. This study shows ionization-induced thermal spikes and defect recovery during sequential and simultaneous ion irradiation of amorphous silica. The simultaneous irradiation scenario is shown to be equivalent to multiple small sequential irradiation scenarios containing latent damage formation and recovery mechanisms. The results highlight the absence of any new damage mechanism and of time-space correlation between various damage events during simultaneous irradiation of amorphous silica. This offers a new and convenient way to simulate and understand complex multi-particle irradiation problems. PMID:27466040
Analysis of Digital Communication Signals and Extraction of Parameters.
1994-12-01
Fast Fourier Transform (FFT). The correlation methods utilize modified time-frequency distributions, one of which is based on the Wigner-Ville Distribution (WVD). Gaussian white noise is added to the signal to simulate various signal-to-noise ratios (SNRs).
A continuous mixing model for pdf simulations and its applications to combusting shear flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Chen, J.-Y.
1991-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to that of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
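The jump-free mixing idea can be illustrated with a modified Curl coalescence/dispersion model in which each event mixes a random particle pair only partway, by a random extent, rather than setting both to their common mean. This is a common textbook variant sketched here for illustration, not necessarily the authors' exact continuous formulation; the particle count, event count, and bimodal initial field are arbitrary.

```python
import random

def modified_curl(phi, events, rng):
    # modified Curl C/D: each mixing event picks a random particle pair and
    # moves both values partway toward their common mean by a random extent
    # a in [0, 1], softening the all-or-nothing jump of the classic model
    phi = phi[:]
    n = len(phi)
    for _ in range(events):
        i, j = rng.randrange(n), rng.randrange(n)
        a = rng.random()
        m = 0.5 * (phi[i] + phi[j])
        phi[i] += a * (m - phi[i])
        phi[j] += a * (m - phi[j])
    return phi

rng = random.Random(2)
start = [0.0] * 100 + [1.0] * 100          # initially unmixed scalar field
end = modified_curl(start, 2000, rng)

mean0 = sum(start) / len(start)
mean1 = sum(end) / len(end)
var0 = sum((v - mean0) ** 2 for v in start) / len(start)
var1 = sum((v - mean1) ** 2 for v in end) / len(end)
```

Each event conserves the pair sum exactly, so the scalar mean is preserved while the variance decays, and the resulting pdf relaxes toward a bell shape, consistent with the near-Gaussian limit reported for homogeneous turbulence decay.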
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) is measured in low-carbon steels and the relationship between carbon content and a parameter extracted from the MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a better linear relationship with the carbon content of the samples in the experiment. The result has been validated with a Monte Carlo simulation. To ensure the sensitivity of the measurement, the advanced multi-objective optimization algorithm non-dominated sorting genetic algorithm III (NSGA-III) has been used to optimize the magnetic core of the sensor.
Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.
Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing
2013-11-01
The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize the switching between two kinds of logic gates under optimal moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, thus providing additional information processing capacity, we obtain in a two-dimensional toggle switch model two complementary logic gates and realize the transformation between two logic gates via the methods of changing different parameters. These simulated results contribute to improve the computational power and functionality of the networks.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
[Spectral properties of light migration in apple fruit tissue].
Sun, Teng-Fei; Zhang, Teng-Teng; Zheng, Tian-Tian; Cao, Zeng-Hui; Zhang, Jun
2013-11-01
The present paper simulates the migration of 632 and 750 nm Gaussian laser beams in apple fruit tissue using the Monte Carlo method, and studies the spectral properties of absorption and scattering. The results show that the particular energy distribution of a Gaussian beam influences the diffusion of the laser in the tissue; the reflection, absorption, and transmittance of 750 nm light by the tissue are lower, more photons interact with the tissue interior, and these photons can more clearly carry information from within the tissue. Thus, the transmission characteristics of infrared light are relatively strong in biological tissue, which is convenient for studying biological tissue.
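The core of such a Monte Carlo photon-migration model can be sketched in a few lines. Below is a deliberately simplified 1-D slab version, not the paper's full 3-D Gaussian-beam code: free path lengths are exponential in the total attenuation μt, photons are absorbed with probability μa/μt at each interaction, and scattering redraws the direction cosine. All coefficient values are illustrative assumptions, chosen only so that the "longer wavelength" has lower absorption.

```python
import math, random

def transmit_fraction(mu_a, mu_s, thickness, n_photons, seed=5):
    # 1-D Monte Carlo photon migration through a slab (arbitrary units):
    # exponential free paths with mu_t = mu_a + mu_s; absorption with
    # probability mu_a/mu_t per interaction; isotropic re-direction otherwise.
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    transmitted = 0
    for _ in range(n_photons):
        z, u = 0.0, 1.0                              # depth, direction cosine
        while True:
            z += u * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0:
                break                                # escaped back out the surface
            if z > thickness:
                transmitted += 1
                break
            if rng.random() < mu_a / mu_t:
                break                                # photon absorbed
            u = rng.uniform(-1.0, 1.0)               # isotropic scattering (1-D)
    return transmitted / n_photons

# illustrative coefficients: the longer wavelength gets lower absorption,
# mimicking the stronger penetration reported for 750 nm light
t_750 = transmit_fraction(mu_a=0.3, mu_s=10.0, thickness=1.0, n_photons=5000)
t_632 = transmit_fraction(mu_a=1.0, mu_s=10.0, thickness=1.0, n_photons=5000)
```

Even this toy model reproduces the qualitative conclusion: lowering the absorption coefficient raises the transmitted fraction through the slab.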
Graphical Models for Ordinal Data
Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji
2014-01-01
A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
Solar wind interaction with Venus and Mars in a parallel hybrid code
NASA Astrophysics Data System (ADS)
Jarvinen, Riku; Sandroos, Arto
2013-04-01
We discuss the development and applications of a new parallel hybrid simulation, where ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform also developed at the FMI. The FMI's sequential hybrid model has been used for studies of plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. Especially, the model has been used to interpret in situ particle and magnetic field observations from plasma environments of Mars, Venus and Titan. Further, Corsair is an open source MPI (Message Passing Interface) particle and mesh simulation platform, mainly aimed for simulations of diffusive shock acceleration in solar corona and interplanetary space, but which is now also being extended for global planetary hybrid simulations. In this presentation we discuss challenges and strategies of parallelizing a legacy simulation code as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.
NASA Astrophysics Data System (ADS)
Maleki, Mohammad; Emery, Xavier
2017-12-01
In mineral resources evaluation, the joint simulation of a quantitative variable, such as a metal grade, and a categorical variable, such as a rock type, is challenging when one wants to reproduce spatial trends of the rock type domains, a feature that makes a stationarity assumption questionable. To address this problem, this work presents methodological and practical proposals for jointly simulating a grade and a rock type, when the former is represented by the transform of a stationary Gaussian random field and the latter is obtained by truncating an intrinsic random field of order k with Gaussian generalized increments. The proposals concern both the inference of the model parameters and the construction of realizations conditioned to existing data. The main difficulty is the identification of the spatial correlation structure, for which a semi-automated algorithm is designed, based on a least squares fitting of the data-to-data indicator covariances and grade-indicator cross-covariances. The proposed models and algorithms are applied to jointly simulate the copper grade and the rock type in a Chilean porphyry copper deposit. The results show their ability to reproduce the gradual transitions of the grade when crossing a rock type boundary, as well as the spatial zonation of the rock type.
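The truncation idea at the heart of this model can be shown in a stationary 1-D toy, which is far simpler than the authors' intrinsic random field of order k: a moving average of white noise gives a spatially correlated standard-normal field, and thresholding it produces contiguous "rock type" domains whose proportions follow the normal CDF of the threshold. Window size, threshold, and seed below are arbitrary.

```python
import math, random

def simulate_rock_types(n=20000, window=10, threshold=0.5, seed=3):
    # 1-D stationary sketch of truncated-Gaussian simulation: a moving
    # average of white noise (rescaled by sqrt(window) to keep unit
    # variance) is a correlated standard-normal field; truncating it at
    # a threshold yields two rock-type domains.
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n + window)]
    field = [sum(white[i:i + window]) / math.sqrt(window) for i in range(n)]
    return ['A' if z < threshold else 'B' for z in field]

types = simulate_rock_types()
prop_a = types.count('A') / len(types)
# expected proportion of type A is Phi(threshold), the standard normal CDF
expected = 0.5 * (1.0 + math.erf(0.5 / math.sqrt(2.0)))
```

Replacing the stationary field with an intrinsic random field of order k is what lets the paper reproduce spatial trends in the rock-type domains instead of statistically homogeneous ones.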
1998-06-01
…By 2010, we should be able to change how we conduct the most intense joint operations. Instead of relying on massed forces and sequential … not independent, sequential steps. Data probes to support the analysis phase were required to complete the logical models. This generated a need … Networks) Identify Granularity (System Level) - Establish Physical Bounds or Limits to Systems • Determine System Test Configuration and Lineup
Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick; Wendt, Fabian; Musial, Walter
The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.
Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing
2013-03-01
Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were presented based on the combination of Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries, with the increasing interval space between two adjacent exposure points. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries would be produced, and the peak temperature values would decrease significantly with the increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.
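The thermal half of the simulation chain above rests on the bioheat equation; the sketch below solves a 1-D Pennes bioheat equation with explicit finite differences and an assumed Gaussian focal heat source. It omits the KZK acoustics entirely, and every constant (conductivity, heat capacity, perfusion coefficient, source strength, focal width) is an illustrative stand-in, not a value from the paper.

```python
import math

def bioheat_1d(steps=2000, n=101, dz=1e-3, dt=0.05,
               k=0.5, rho_c=3.6e6, w_cb=2000.0, q0=5e6, t_a=37.0):
    # explicit finite differences for the 1-D Pennes bioheat equation:
    #   rho_c dT/dt = k d2T/dz2 - w_cb (T - T_a) + q(z)
    # with a Gaussian heat source centered on the focus (node n//2);
    # dt satisfies the explicit stability limit dt <= dz^2 rho_c / (2 k).
    focus = n // 2
    q = [q0 * math.exp(-((i - focus) * dz / 2e-3) ** 2) for i in range(n)]
    T = [t_a] * n                         # start at body temperature
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):         # boundaries held at t_a
            lap = (T[i - 1] - 2.0 * T[i] + T[i + 1]) / dz ** 2
            Tn[i] = T[i] + dt * (k * lap - w_cb * (T[i] - t_a) + q[i]) / rho_c
        T = Tn
    return T

T = bioheat_1d()
peak = max(T)
focus = len(T) // 2
```

A scanning-mode comparison like the paper's would make the source center `focus` a function of time, moving it in discrete jumps or continuously, and record the resulting peak-temperature profile.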
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Detection methods for non-Gaussian gravitational wave stochastic backgrounds
NASA Astrophysics Data System (ADS)
Drasco, Steve; Flanagan, Éanna É.
2003-04-01
A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. 
Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.
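The toy setting described above (short bursts occurring with some duty cycle, observed by two collocated, aligned detectors with white Gaussian noise) can be sketched numerically. The model below is an illustrative stand-in, not the paper's maximum likelihood statistic; it only shows the standard cross-correlation statistic picking up a common burst signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, duty_cycle, alpha, sigma=1.0):
    """Two collocated detectors sharing a common burst signal plus
    independent Gaussian noise. Each sample carries a burst with
    probability `duty_cycle`; burst amplitudes are N(0, alpha^2).
    All parameter values are illustrative."""
    burst = rng.random(n) < duty_cycle
    s = np.where(burst, rng.normal(0.0, alpha, n), 0.0)
    h1 = s + rng.normal(0.0, sigma, n)
    h2 = s + rng.normal(0.0, sigma, n)
    return h1, h2

def cross_correlation_stat(h1, h2):
    """Standard cross-correlation detection statistic (mean of h1*h2);
    its expectation is the common-signal power, ~duty_cycle * alpha^2."""
    return float(np.mean(h1 * h2))

stat_signal = cross_correlation_stat(*simulate(200_000, duty_cycle=0.01, alpha=3.0))
stat_noise = cross_correlation_stat(*simulate(200_000, duty_cycle=0.0, alpha=3.0))
```

In this regime the statistic's expectation is the duty cycle times the burst variance, which is the quantity the paper's maximum likelihood statistic improves upon for strongly non-Gaussian (small duty cycle) backgrounds.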
NASA Astrophysics Data System (ADS)
Kim, Ji Hye; Ahn, Il Jun; Nam, Woo Hyun; Ra, Jong Beom
2015-02-01
Positron emission tomography (PET) images usually suffer from a noticeable amount of statistical noise. In order to reduce this noise, a post-filtering process is usually adopted. However, the performance of this approach is limited because the denoising process is mostly performed on the basis of the Gaussian random noise. It has been reported that in a PET image reconstructed by the expectation-maximization (EM), the noise variance of each voxel depends on its mean value, unlike in the case of Gaussian noise. In addition, we observe that the variance also varies with the spatial sensitivity distribution in a PET system, which reflects both the solid angle determined by a given scanner geometry and the attenuation information of a scanned object. Thus, if a post-filtering process based on the Gaussian random noise is applied to PET images without consideration of the noise characteristics along with the spatial sensitivity distribution, the spatially variant non-Gaussian noise cannot be reduced effectively. In the proposed framework, to effectively reduce the noise in PET images reconstructed by the 3-D ordinary Poisson ordered subset EM (3-D OP-OSEM), we first denormalize an image according to the sensitivity of each voxel so that the voxel mean value can represent its statistical properties reliably. Based on our observation that each noisy denormalized voxel has a linear relationship between the mean and variance, we try to convert this non-Gaussian noise image to a Gaussian noise image. We then apply a block matching 4-D algorithm that is optimized for noise reduction of the Gaussian noise image, and reconvert and renormalize the result to obtain a final denoised image. Using simulated phantom data and clinical patient data, we demonstrate that the proposed framework can effectively suppress the noise over the whole region of a PET image while minimizing degradation of the image resolution.
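The key observation above, that the voxel variance grows linearly with the mean (as for Poisson counts), is what makes a Gaussianizing conversion possible. A minimal sketch of that step, using a plain square-root (Anscombe-like) transform on synthetic Poisson voxels rather than the paper's specific conversion:

```python
import numpy as np

rng = np.random.default_rng(1)

def stabilise(x):
    """Square-root (Anscombe-like) transform: when the noise variance is
    linear in the mean, 2*sqrt(x) has approximately unit variance."""
    return 2.0 * np.sqrt(x)

# Toy stand-in for denormalised PET voxels at different activity levels.
variances = {}
for mu in (10.0, 50.0, 200.0):
    counts = rng.poisson(mu, size=200_000)
    variances[mu] = float(np.var(stabilise(counts)))
# The transformed variance is ~1 at every mean level, i.e. the noise is now
# near-Gaussian and mean-independent, so a Gaussian denoiser (such as the
# block-matching filter mentioned above) can be applied, then inverted.
```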
Daneshzand, Mohammad; Faezipour, Miad; Barkana, Buket D.
2017-01-01
Deep brain stimulation (DBS) has shown compelling results in desynchronizing basal ganglia neuronal activity and is thus used in treating the motor symptoms of Parkinson's disease (PD). Accurate definition of DBS waveform parameters could avert tissue or electrode damage, increase neuronal activity, and reduce energy cost, which prolongs battery life and avoids device replacement surgeries. This study considers the use of a charge-balanced Gaussian waveform pattern as a method to disrupt the firing patterns of neuronal cell activity. A computational model was created to simulate ganglia cells and their interactions with thalamic neurons. From the model, we investigated the effects of modified DBS pulse shapes and proposed a delay period between the cathodic and anodic parts of the charge-balanced Gaussian waveform to desynchronize the firing patterns of the GPe and GPi cells. The results of the proposed Gaussian waveform with delay outperformed those of rectangular DBS waveforms used in in-vivo experiments. The Gaussian Delay Gaussian (GDG) waveforms achieved a lower number of misses in eliciting action potentials while having a lower amplitude and a shorter delay compared to numerous different pulse shapes. The amount of energy consumed in the basal ganglia network due to GDG waveforms dropped by 22% in comparison with charge-balanced Gaussian waveforms without any delay between the cathodic and anodic parts, and was also 60% lower than that of a rectangular charge-balanced pulse with a delay between the cathodic and anodic parts of the waveform. Furthermore, by defining a Synchronization Level metric, we observed that the GDG waveform was able to reduce the synchronization of GPi neurons more effectively than any other waveform.
The promising results of GDG waveforms in terms of eliciting action potential, desynchronization of the basal ganglia neurons and reduction of energy consumption can potentially enhance the performance of DBS devices. PMID:28848417
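A charge-balanced Gaussian pulse pair with an interphase delay, the basic shape of the GDG waveform described above, can be sketched directly; the parameter values below are illustrative, not those used in the study:

```python
import numpy as np

def gdg_waveform(t, amp=1.0, width=0.1, delay=0.3, center=1.0):
    """Gaussian Delay Gaussian (GDG) pulse sketch: a cathodic Gaussian,
    an interphase delay, then an equal-and-opposite anodic Gaussian.
    Amplitude, width and delay here are illustrative placeholders."""
    cathodic = -amp * np.exp(-0.5 * ((t - center) / width) ** 2)
    anodic_center = center + 6 * width + delay   # delay after cathodic tail
    anodic = amp * np.exp(-0.5 * ((t - anodic_center) / width) ** 2)
    return cathodic + anodic

t = np.linspace(0.0, 4.0, 40_001)
w = gdg_waveform(t)
dt = t[1] - t[0]
net_charge = float(w.sum() * dt)   # ~0: the two phases inject equal charge
```

Charge balance (zero net injected charge) is what protects tissue and electrodes; the delay between the two phases is the extra degree of freedom the study tunes.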
NASA Astrophysics Data System (ADS)
Libera, Arianna; de Barros, Felipe P. J.; Riva, Monica; Guadagnini, Alberto
2017-10-01
Our study is keyed to the analysis of the interplay between engineering factors (i.e., transient pumping rates versus less realistic but commonly analyzed uniform extraction rates) and the heterogeneous structure of the aquifer (as expressed by the probability distribution characterizing transmissivity) in contaminant transport. We explore the joint influence of diverse (a) groundwater pumping schedules (constant and variable in time) and (b) representations of the stochastic heterogeneous transmissivity (T) field on temporal histories of solute concentrations observed at an extraction well. The stochastic nature of T is rendered by modeling its natural logarithm, Y = ln T, through a typical Gaussian representation and the recently introduced Generalized sub-Gaussian (GSG) model. The latter has the unique property of embedding scale-dependent non-Gaussian features of the main statistics of Y and its (spatial) increments, which have been documented in a variety of studies. We rely on numerical Monte Carlo simulations and compute the temporal evolution at the well of low-order moments of the solute concentration (C), as well as statistics of the peak concentration (Cp), identified as the environmental performance metric of interest in this study. We show that the pumping schedule strongly affects the pattern of the temporal evolution of the first two statistical moments of C, regardless of the nature (Gaussian or non-Gaussian) of the underlying Y field, whereas the latter quantitatively influences their magnitude. Our results show that uncertainty associated with C and Cp estimates is larger when operating under a transient extraction scheme than under the action of a uniform withdrawal schedule. The probability density function (PDF) of Cp displays a long positive tail in the presence of a time-varying pumping schedule. All these aspects are magnified in the presence of non-Gaussian Y fields. 
Additionally, the PDF of Cp displays a bimodal shape for all types of pumping schemes analyzed, independent of the type of heterogeneity considered.
Transport of cosmic-ray protons in intermittent heliospheric turbulence: Model and simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alouani-Bibi, Fathallah; Le Roux, Jakobus A., E-mail: fb0006@uah.edu
The transport of charged energetic particles in the presence of strong intermittent heliospheric turbulence is computationally analyzed based on known properties of the interplanetary magnetic field and solar wind plasma at 1 astronomical unit. The turbulence is assumed to be static, composite, and quasi-three-dimensional with a varying energy distribution between a one-dimensional Alfvénic (slab) and a structured two-dimensional component. The spatial fluctuations of the turbulent magnetic field are modeled either as homogeneous with a Gaussian probability distribution function (PDF), or as intermittent on large and small scales with a q-Gaussian PDF. Simulations showed that energetic particle diffusion coefficients both parallel and perpendicular to the background magnetic field are significantly affected by intermittency in the turbulence. This effect is especially strong for parallel transport, where for large-scale intermittency results show an extended phase of subdiffusive parallel transport during which cross-field transport diffusion dominates. The effects of intermittency are found to depend on particle rigidity and the fraction of slab energy in the turbulence, yielding a perpendicular to parallel mean free path ratio close to 1 for large-scale intermittency. Investigation of higher order transport moments (kurtosis) indicates that non-Gaussian statistical properties of the intermittent turbulent magnetic field are present in the parallel transport, especially for low rigidity particles at all times.
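The contrast between homogeneous (Gaussian) and intermittent (q-Gaussian) fluctuation statistics can be illustrated with a small sampling sketch. It relies on the standard Student-t representation of the q-Gaussian for 1 < q < 3 and is not the paper's turbulence model:

```python
import numpy as np

rng = np.random.default_rng(2)

def q_gaussian(q, size):
    """Sample a q-Gaussian (1 < q < 3) via its Student-t representation,
    nu = (3 - q)/(q - 1); a toy stand-in for intermittent fluctuations."""
    nu = (3.0 - q) / (q - 1.0)
    return rng.standard_t(nu, size)

def excess_kurtosis(x):
    """Fourth-moment diagnostic of non-Gaussianity (0 for a Gaussian)."""
    x = x - x.mean()
    return float(np.mean(x**4) / np.mean(x**2) ** 2 - 3.0)

gaussian_k = excess_kurtosis(rng.normal(size=500_000))
intermittent_k = excess_kurtosis(q_gaussian(q=1.2, size=500_000))
```

The positive excess kurtosis of the q-Gaussian samples is the same heavy-tail signature the paper tracks in the higher-order transport moments.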
A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales
NASA Astrophysics Data System (ADS)
Elliott, Frank W.; Majda, Andrew J.
1995-03-01
A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
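The core construction above, approximating an isotropic Gaussian random field by random 1D plane waves along a fixed finite set of directions, can be sketched as follows. The spectrum and amplitudes here are illustrative and the wavelet step is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def plane_wave_field(points, n_dirs=16, modes_per_dir=32):
    """Toy version of the plane-wave construction: a 2D Gaussian-like
    random field built as a superposition of random 1D plane waves along
    a fixed finite set of directions. Amplitudes and the wavenumber range
    are illustrative, not the paper's power-law spectrum."""
    u = np.zeros(len(points))
    angles = np.pi * np.arange(n_dirs) / n_dirs
    for th in angles:
        d = np.array([np.cos(th), np.sin(th)])
        s = points @ d                      # 1D coordinate along direction d
        for _ in range(modes_per_dir):
            k = rng.uniform(1.0, 10.0)
            amp = rng.normal(0.0, 1.0) / np.sqrt(n_dirs * modes_per_dir)
            phase = rng.uniform(0.0, 2 * np.pi)
            u += amp * np.cos(k * s + phase)
    return u

pts = rng.uniform(0, 10, size=(5000, 2))
field = plane_wave_field(pts)   # near-Gaussian marginals by superposition
```

Because each point is a sum of many independent mode contributions, the marginals are close to Gaussian; the paper's contribution is doing this with controlled variance over many decades of scale via the 1D wavelet Monte Carlo method.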
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
A Long-Lived Oscillatory Space-Time Correlation Function of Two Dimensional Colloids
NASA Astrophysics Data System (ADS)
Kim, Jeongmin; Sung, Bong June
2014-03-01
Diffusion of a colloid in solution has drawn significant attention for a century. A well-known behavior of the colloid is Brownian motion: the particle displacement probability distribution (PDPD) is Gaussian and the mean-square displacement (MSD) is linear in time. However, recent simulation and experimental studies have revealed heterogeneous dynamics of colloids near glass transitions or in complex environments such as entangled actin networks: the PDPD exhibits an exponential tail at large lengths instead of being Gaussian at all length scales. More interestingly, the PDPD remains exponential even while the MSD is still linear in time. This calls for a fresh look at colloidal diffusion in complex environments. In this work, we study heterogeneous dynamics of two dimensional (2D) colloids using molecular dynamics simulations. Unlike in three dimensions, 2D solids do not follow the Lindemann melting criterion. The Kosterlitz-Thouless-Halperin-Nelson-Young theory predicts two-step phase transitions with an intermediate phase, the hexatic phase, between isotropic liquids and solids. Near the solid-hexatic transition, the PDPD shows interesting oscillatory behavior between a central Gaussian part and an exponential tail. The oscillatory behavior persists up to 12 times the translational relaxation time, even after the system enters the Fickian regime. We also show that multi-layered kinetic clusters account for the heterogeneous dynamics of 2D colloids with the long-lived anomalous oscillatory PDPD.
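The decoupling of a linear MSD from a non-Gaussian PDPD can be reproduced with a toy model of heterogeneous diffusion, each particle carrying its own diffusivity as a crude stand-in for the kinetic clusters discussed above: the MSD stays linear while the displacement distribution develops heavy, non-Gaussian tails:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy heterogeneous dynamics: per-particle diffusivities drawn from an
# exponential distribution (an illustrative choice, not the paper's model).
n, steps = 20_000, 200
D = rng.exponential(1.0, size=n)
dx = rng.normal(0.0, 1.0, size=(n, steps)) * np.sqrt(2.0 * D)[:, None]
x = np.cumsum(dx, axis=1)

msd = np.mean(x**2, axis=0)                  # still linear in time
linearity = float(msd[-1] / msd[steps // 2 - 1])   # ~2 for linear MSD

disp = x[:, -1]
# 1D non-Gaussian parameter: 0 for a Gaussian PDPD, positive for heavy tails.
alpha2 = float(np.mean(disp**4) / (3.0 * np.mean(disp**2) ** 2) - 1.0)
```

The MSD ratio stays near 2 (Fickian behavior) while the non-Gaussian parameter is clearly positive, the same qualitative signature the 2D colloid simulations exhibit.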
NASA Astrophysics Data System (ADS)
Broughton, Rachel; Gomez, Michael; Zolfaghari, Ali; Morris, Lewis
2016-10-01
A self-aligning Gaussian telescope has been designed to compensate for the effect of movement in the ITER vacuum vessel on the transmission line. The purpose of the setup is to couple microwaves into and out of the vessel across the vacuum windows while allowing for both slow movements of the vessel, due to thermal growth, and rapid movements, due to vibrations and disruptions. Additionally, a test stand has been designed specifically to hold this telescope in order to imitate these movements. Consequently, this will allow for the assessment of the efficacy in applying the self-aligning Gaussian telescope approach. The motions of the test stand, as well as the stress on the telescope mechanism, have been virtually simulated using ANSYS workbench. A prototype of this test stand and self-aligning telescope will be built using a combination of custom machined parts and ordered parts. The completed mechanism will be tested at the lab in four different ways: slow single- and multi-direction movements, rapid multi-direction movement, functional laser alignment and self-aligning tests, and natural frequency tests. Once the prototype successfully passes all requirements, it will be tested with microwaves in the LFSR transmission line test stand at General Atomics. This work is supported by US DOE Contract No. DE-AC02-09CH11466.
Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform
NASA Astrophysics Data System (ADS)
Pando, Jesus
1997-10-01
The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis, and to measure deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Ly-α) forests, and on observational data of the Ly-α forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data are determined, and it is shown that clusters exist on scales as large as at least 20 h-1 Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Ly-α forests in both simulated and real samples are calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules of how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities in determining the merging history. 
We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, C_{j,j+1}. Scale-scale correlations on two samples of QSO Ly-α forest absorption spectra are computed. Lastly, higher order statistics are developed to detect deviations from Gaussian behavior. These higher order statistics are necessary to fully characterize the Ly-α forests because the usual 2nd order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined, and this spectrum reveals not only the magnitude but also the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra while two independent observational data sets have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
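The scale-scale correlation idea, correlating wavelet coefficients on adjacent scales at co-located positions, can be sketched with a plain Haar transform. The hierarchical test signal below is illustrative, not a Ly-α model:

```python
import numpy as np

rng = np.random.default_rng(5)

def haar_details(signal, levels):
    """Plain Haar DWT: return the detail (wavelet) coefficients per scale."""
    details, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)
        details.append(d)
        approx = a
    return details

def scale_scale_corr(signal, j):
    """Toy analogue of the statistic: correlation between squared wavelet
    coefficients on scale j and the co-located parent on scale j+1."""
    d = haar_details(signal, j + 2)
    fine = (d[j] ** 2).reshape(-1, 2).mean(axis=1)   # pair children with parent
    coarse = d[j + 1] ** 2
    return float(np.corrcoef(fine, coarse)[0, 1])

white = rng.normal(size=2**14)                        # no hierarchy: corr ~ 0
envelope = np.repeat(rng.lognormal(0.0, 0.7, size=2**8), 2**6)
hier = rng.normal(size=2**14) * envelope              # shared local variance
c_white = scale_scale_corr(white, 1)
c_hier = scale_scale_corr(hier, 1)
```

For Gaussian white noise the coefficients on different scales are uncorrelated, whereas a multiplicative (hierarchically modulated) signal shows a clearly positive scale-scale correlation, which is the sensitivity to merging history claimed above.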
Free energy calculations, enhanced by a Gaussian ansatz, for the "chemical work" distribution.
Boulougouris, Georgios C
2014-05-15
The evaluation of the free energy is essential in molecular simulation because it is intimately related to the existence of multiphase equilibrium. Recently, it was demonstrated that it is possible to evaluate the Helmholtz free energy using a single statistical ensemble along an entire isotherm by accounting for the "chemical work" of transforming each molecule from an interacting one to an ideal gas. In this work, we show that it is possible to perform such a free energy perturbation over a liquid-vapor phase transition. Furthermore, we investigate the link between a general free energy perturbation scheme and the novel nonequilibrium theories of Crooks and Jarzynski. We find that for finite systems away from the thermodynamic limit the second law of thermodynamics will always be an inequality for isothermal free energy perturbations, always resulting in a dissipated work that may tend to zero only in the thermodynamic limit. The work, the heat, and the entropy produced during a thermodynamic free energy perturbation can be viewed in the context of the Crooks and Jarzynski formalism, revealing that for a given value of the ensemble average of the "irreversible" work, the minimum entropy production corresponds to a Gaussian distribution for the histogram of the work. We propose the evaluation of the free energy difference in any free energy perturbation based scheme from the average irreversible "chemical work" minus the dissipated work, which can be calculated from the variance of the distribution of the logarithm of the work histogram, within the Gaussian approximation. As a consequence, using the Gaussian ansatz for the distribution of the "chemical work," accurate estimates for the chemical potential and the free energy of the system can be obtained using much shorter simulations, avoiding the necessity of sampling the computationally costly tails of the "chemical work." 
For a more general free energy perturbation scheme, for which the Gaussian ansatz may not be valid, the free energy calculation can be expressed in terms of the moment generating function of the "chemical work" distribution. Copyright © 2014 Wiley Periodicals, Inc.
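Under a Gaussian ansatz for the work distribution, the free energy difference follows from the first two moments, ΔF = ⟨W⟩ − βVar(W)/2, with the variance term playing the role of the dissipated work. The sketch below compares that estimator against the direct Jarzynski exponential average on synthetic Gaussian work samples (units and parameters are illustrative, not the paper's system):

```python
import numpy as np

rng = np.random.default_rng(6)

beta = 1.0                                   # 1/kT in reduced units
w = rng.normal(2.0, 1.5, size=1_000_000)     # synthetic Gaussian work samples

# Jarzynski estimator: exp(-beta*dF) = <exp(-beta*W)>
dF_jarzynski = float(-np.log(np.mean(np.exp(-beta * w))) / beta)

# Gaussian-ansatz estimator: dF = <W> - beta*Var(W)/2
dF_gaussian = float(w.mean() - beta * w.var() / 2.0)

# The dissipated work <W> - dF is non-negative (second law as inequality).
dissipated = float(w.mean() - dF_gaussian)
```

For a genuinely Gaussian work distribution the two estimators agree, but the cumulant form avoids sampling the exponentially weighted (costly) left tail, which is the practical advantage argued for above.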
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy with which cosmological parameters can be measured by a given experiment and to optimize the design of experiments. However, the standard approach usually assumes that both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. 
The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
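The effect described here, off-diagonal (mode-coupled) power covariance degrading Fisher forecasts relative to the diagonal assumption, can be illustrated with a two-parameter toy model. The model derivatives and covariance matrices below are invented for illustration only:

```python
import numpy as np

# Fisher forecast for a Gaussian data vector with parameter-dependent mean:
# F_ab = (dmu/dp_a)^T C^{-1} (dmu/dp_b). Marginal error on p_a is
# sqrt((F^{-1})_aa). Two toy parameters with smooth derivative vectors:
x = np.linspace(0.1, 1.0, 50)
dmu = np.stack([np.sqrt(x), np.log(x)])            # shape (2, 50)

idx = np.arange(50)
C_diag = 0.01 * np.eye(50)                         # uncorrelated band powers
C_coupled = 0.01 * 0.5 ** np.abs(np.subtract.outer(idx, idx))  # mode coupling

def marginal_errors(dmu, C):
    F = dmu @ np.linalg.solve(C, dmu.T)            # 2x2 Fisher matrix
    return np.sqrt(np.diag(np.linalg.inv(F)))

err_diag = marginal_errors(dmu, C_diag)
err_coupled = marginal_errors(dmu, C_coupled)      # inflated by correlations
```

Correlating the band powers reduces the effective number of independent modes, shrinking the Fisher information and inflating the marginal errors, which is the qualitative degradation the mock-catalogue analysis quantifies.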
An energy function for dynamics simulations of polypeptides in torsion angle space
NASA Astrophysics Data System (ADS)
Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.
1998-05-01
Conventional simulation techniques to model the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used, whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single window MC move generates only local conformational changes, but the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential. It is derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggeration of hydrogen bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. Also, the same temperature dependence of the stability and build-up of α helices of 18-alanine as found in MD simulations is observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that dynamical aspects of MD trajectories are also faithfully reproduced. 
Finally, it is demonstrated that even for high temperature unfolded polypeptides the MC simulation is more efficient by a factor of 10 than conventional MD simulations.
A new way of setting the phases for cosmological multiscale Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Jenkins, Adrian
2013-09-01
We describe how to define an extremely large discrete realization of a Gaussian white noise field that has a hierarchical structure and the property that the value of any part of the field can be computed quickly. Tiny subregions of such a field can be used to set the phase information for Gaussian initial conditions for individual cosmological simulations of structure formation. This approach has several attractive features: (i) the hierarchical structure based on an octree is particularly well suited for generating follow-up resimulation or zoom initial conditions; (ii) the phases are defined for all relevant physical scales in advance so that resimulation initial conditions are, by construction, consistent both with their parent simulation and with each other; (iii) the field can easily be made public by releasing a code to compute it - once public, phase information can be shared or published by specifying a spatial location within the realization. In this paper, we describe the principles behind creating such realizations. We define an example called Panphasia and in a companion paper by Jenkins and Booth (2013) make public a code to compute it. With 50 octree levels Panphasia spans a factor of more than 10^15 in linear scale - a range that significantly exceeds the ratio of the current Hubble radius to the putative cold dark matter free-streaming scale. We show how to modify a code used for making cosmological and resimulation initial conditions so that it can take the phase information from Panphasia and, using this code, we demonstrate that it is possible to make good quality resimulation initial conditions. We define a convention for publishing phase information from Panphasia and publish the initial phases for several of the Virgo Consortium's most recent cosmological simulations including the 303 billion particle MXXL simulation. 
Finally, for reference, we give the locations and properties of several dark matter haloes that can be resimulated within these volumes.
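The central trick, a huge white noise field whose value anywhere can be computed on demand from a cell's octree address, can be caricatured with a hash-seeded generator. This is a toy analogue of the idea, not the actual Panphasia algorithm:

```python
import numpy as np

def cell_noise(level, i, j, k, seed=12345):
    """Reproducible Gaussian variate for an octree cell addressed by
    (level, i, j, k): hash the address into a generator seed, then draw.
    The mixing constant is an arbitrary LCG multiplier; the whole scheme
    is illustrative, not Panphasia's construction."""
    key = seed
    for v in (level, i, j, k):
        key = (key * 6364136223846793005 + v) % 2**64
    return float(np.random.default_rng(key).standard_normal())

# The same cell always yields the same variate, whoever computes it,
# so phases can be shared simply by publishing a cell address:
a = cell_noise(3, 5, 2, 7)
b = cell_noise(3, 5, 2, 7)
c = cell_noise(3, 5, 2, 8)   # a different cell gives an independent draw
```

Because each cell's value is a pure function of its address, any subregion can be regenerated independently and stays consistent with its parent volume, which is the property resimulation initial conditions need.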
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and the surrounding entities. However, none of the existing building damage simulation systems achieves the level of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity and runtime-efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also takes account of rubble pile formation and applies a generic and scalable multi-component based object representation to describe scene entities, and a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.
MaMiCo: Software design for parallel molecular-continuum flow simulations
NASA Astrophysics Data System (ADS)
Neumann, Philipp; Flohr, Hanno; Arora, Rahul; Jarmatz, Piet; Tchipev, Nikola; Bungartz, Hans-Joachim
2016-03-01
The macro-micro-coupling tool (MaMiCo) was developed to ease the development of and modularize molecular-continuum simulations, retaining sequential and parallel performance. We demonstrate the functionality and performance of MaMiCo by coupling the spatially adaptive Lattice Boltzmann framework waLBerla with four molecular dynamics (MD) codes: the light-weight Lennard-Jones-based implementation SimpleMD, the node-level optimized software ls1 mardyn, and the community codes ESPResSo and LAMMPS. We detail interface implementations to connect each solver with MaMiCo. The coupling for each waLBerla-MD setup is validated in three-dimensional channel flow simulations which are solved by means of a state-based coupling method. We provide sequential and strong scaling measurements for the four molecular-continuum simulations. The overhead of MaMiCo is found to come at 10%-20% of the total (MD) runtime. The measurements further show that scalability of the hybrid simulations is reached on up to 500 Intel SandyBridge cores and more than 1000 AMD Bulldozer compute cores.
Karacan, C.O.; Olea, R.A.; Goodman, G.
2012-01-01
Determination of the size of the gas emission zone, the locations of gas sources within, and especially the amount of gas retained in those zones is one of the most important steps for designing a successful methane control strategy and an efficient ventilation system in longwall coal mining. The formation of the gas emission zone and the potential amount of gas-in-place (GIP) that might be available for migration into a mine are factors of local geology and rock properties that usually show spatial variability in continuity and may also show geometric anisotropy. Geostatistical methods are used here for modeling and prediction of gas amounts and for assessing their associated uncertainty in gas emission zones of longwall mines for methane control. This study used core data obtained from 276 vertical exploration boreholes drilled from the surface to the bottom of the Pittsburgh coal seam in a mining district in the Northern Appalachian basin. After identifying important coal and non-coal layers for the gas emission zone, univariate statistical and semivariogram analyses were conducted for data from different formations to define the distribution and continuity of various attributes. Sequential simulations performed stochastic assessment of these attributes, such as gas content, strata thickness, and strata displacement. These analyses were followed by calculations of gas-in-place and their uncertainties in the Pittsburgh seam caved zone and fractured zone of longwall mines in this mining district. Grid blanking was used to isolate the volume over the actual panels from the entire modeled district and to calculate gas amounts that were directly related to the emissions in longwall mines. Results indicated that gas-in-place in the Pittsburgh seam, in the caved zone and in the fractured zone, as well as displacements in major rock units, showed spatial correlations that could be modeled and estimated using geostatistical methods.
This study showed that GIP volumes may change up to 3 MMscf per acre and, in a multi-panel district, may total 9 Bcf of methane within the gas emission zone. Therefore, ventilation and gas capture systems should be designed accordingly. In addition, rock displacements within the gas emission zone are spatially distributed. From an engineering and practical point of view, spatial distributions of GIP and distributions of rock displacements should be correlated with in-mine emissions and gob gas venthole productions.
Karacan, C. Özgen; Olea, Ricardo A.; Goodman, Gerrit
2015-01-01
Determination of the size of the gas emission zone, the locations of gas sources within, and especially the amount of gas retained in those zones is one of the most important steps for designing a successful methane control strategy and an efficient ventilation system in longwall coal mining. The formation of the gas emission zone and the potential amount of gas-in-place (GIP) that might be available for migration into a mine are factors of local geology and rock properties that usually show spatial variability in continuity and may also show geometric anisotropy. Geostatistical methods are used here for modeling and prediction of gas amounts and for assessing their associated uncertainty in gas emission zones of longwall mines for methane control. This study used core data obtained from 276 vertical exploration boreholes drilled from the surface to the bottom of the Pittsburgh coal seam in a mining district in the Northern Appalachian basin. After identifying important coal and non-coal layers for the gas emission zone, univariate statistical and semivariogram analyses were conducted for data from different formations to define the distribution and continuity of various attributes. Sequential simulations performed stochastic assessment of these attributes, such as gas content, strata thickness, and strata displacement. These analyses were followed by calculations of gas-in-place and their uncertainties in the Pittsburgh seam caved zone and fractured zone of longwall mines in this mining district. Grid blanking was used to isolate the volume over the actual panels from the entire modeled district and to calculate gas amounts that were directly related to the emissions in longwall mines. Results indicated that gas-in-place in the Pittsburgh seam, in the caved zone and in the fractured zone, as well as displacements in major rock units, showed spatial correlations that could be modeled and estimated using geostatistical methods. 
This study showed that GIP volumes may change up to 3 MMscf per acre and, in a multi-panel district, may total 9 Bcf of methane within the gas emission zone. Therefore, ventilation and gas capture systems should be designed accordingly. In addition, rock displacements within the gas emission zone are spatially distributed. From an engineering and practical point of view, spatial distributions of GIP and distributions of rock displacements should be correlated with in-mine emissions and gob gas venthole productions. PMID:26435558
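The univariate and semivariogram analyses described above rest on the classical (Matheron) estimator of the semivariogram. A minimal sketch of that estimator follows; it is an illustrative reconstruction, not the authors' code, and the function name and binning scheme are assumptions:

```python
import numpy as np

def experimental_semivariogram(coords, values, lag_centers, tol):
    """Classical (Matheron) semivariogram estimator on scattered data:
    gamma(h) = half the mean squared difference over pairs at lag ~h."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    # all pairwise separation distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # each pair counted once
    d, sq = d[iu], sq[iu]
    gamma = np.full(len(lag_centers), np.nan)
    for j, h in enumerate(lag_centers):
        in_bin = np.abs(d - h) <= tol           # pairs falling in this lag bin
        if in_bin.any():
            gamma[j] = 0.5 * sq[in_bin].mean()
    return gamma
```

A model variogram (e.g. spherical or exponential) would then be fitted to these experimental values before sequential simulation.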
Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.
Beauducel, André
2018-02-01
The misallocation of treatment-variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment effects. In order to close this gap, two new methods for component rotation are proposed. After Varimax-prerotation, the first method identifies very small slopes of successive loadings. The corresponding loadings are set to zero in a target matrix for event-related orthogonal partial Procrustes (EPP) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax loadings and performs orthogonal Procrustes rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes (GEP) rotation and of EPP rotation are based on Promax rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment-variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax rotation reduced the variance misallocation slightly more than EPP Promax rotation. Comparison with existing methods: Varimax and conventional Promax rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap. A substantially reduced misallocation of variance occurred with the EPP, EPP Promax, GEP, and GEP Promax rotations. Misallocation of variance can be minimized by means of the new rotation methods. Making use of information on the temporal order of the loadings may allow for improvements of the rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.
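The orthogonal Procrustes rotation at the core of both new methods has a closed-form solution via the singular value decomposition. A minimal sketch (the function name is an assumption, and the construction of the target matrix from Varimax loadings is omitted):

```python
import numpy as np

def orthogonal_procrustes(loadings, target):
    """Orthogonal matrix T minimizing ||loadings @ T - target||_F,
    obtained from the SVD of the cross-product with the target."""
    u, _, vt = np.linalg.svd(loadings.T @ target)
    T = u @ vt
    return loadings @ T, T
```

When the target is an exact orthogonal rotation of the loadings, the rotation is recovered exactly, which is the sense in which such rotations can "perfectly recover" a prototype structure.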
NASA Astrophysics Data System (ADS)
Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro
2014-05-01
This research is focused on the study of nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over an infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh=1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh<1.36. In this regard, the aim of this research is to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed solving the Euler equation of motion with the Higher Order Spectral Method (HOSM) and compared with data of short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e. the density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data provides a direct comparison between the numerical results and real observations in field conditions.
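The statistical moments compared above, skewness and kurtosis of the surface elevation, are simple to estimate from a record. A minimal sketch (names are assumptions); a linear, Gaussian sea state serves as the reference case, giving skewness near 0 and kurtosis near 3:

```python
import numpy as np

def elevation_moments(eta):
    """Skewness and kurtosis of a surface-elevation record; departures
    from (0, 3) signal non-Gaussian (nonlinear) wave statistics."""
    a = np.asarray(eta, float)
    a = a - a.mean()
    s = a.std()
    return (a ** 3).mean() / s ** 3, (a ** 4).mean() / s ** 4

# Gaussian reference record (toy stand-in for a linear sea state)
rng = np.random.default_rng(0)
skew, kurt = elevation_moments(rng.standard_normal(200_000))
```

Elevated kurtosis in a measured or simulated record, relative to this reference, indicates a higher probability of extreme waves.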
NASA Astrophysics Data System (ADS)
Jones, Andrew P.; Crain, Jason; Sokhan, Vlad P.; Whitfield, Troy W.; Martyna, Glenn J.
2013-04-01
Treating both many-body polarization and dispersion interactions is now recognized as a key element in achieving the level of atomistic modeling required to reveal novel physics in complex systems. The quantum Drude oscillator (QDO), a Gaussian-based, coarse grained electronic structure model, captures both many-body polarization and dispersion and has linear scale computational complexity with system size, hence it is a leading candidate next-generation simulation method. Here, we investigate the extent to which the QDO treatment reproduces the desired long-range atomic and molecular properties. We present closed form expressions for leading order polarizabilities and dispersion coefficients and derive invariant (parameter-free) scaling relationships among multipole polarizability and many-body dispersion coefficients that arise due to the Gaussian nature of the model. We show that these “combining rules” hold to within a few percent for noble gas atoms, alkali metals, and simple (first-row hydride) molecules such as water; this is consistent with the surprising success that models with underlying Gaussian statistics often exhibit in physics. We present a diagrammatic Jastrow-type perturbation theory tailored to the QDO model that serves to illustrate the rich types of responses that the QDO approach engenders. QDO models for neon, argon, krypton, and xenon, designed to reproduce gas phase properties, are constructed and their condensed phase properties explored via linear scale diffusion Monte Carlo (DMC) and path integral molecular dynamics (PIMD) simulations. Good agreement with experimental data for structure, cohesive energy, and bulk modulus is found, demonstrating a degree of transferability that cannot be achieved using current empirical models or fully ab initio descriptions.
NASA Astrophysics Data System (ADS)
Gyasi-Agyei, Yeboah
2018-01-01
This paper establishes a link between the spatial structure of radar rainfall, which describes the spatial structure more robustly than gauge data, and gauge rainfall, for improved daily rainfield simulation conditioned on the limited gauged data for regions with or without radar records. A two-dimensional anisotropic exponential function that has parameters of major and minor axes lengths, and direction, is used to describe the correlogram (spatial structure) of daily rainfall in the Gaussian domain. The link is a copula-based joint distribution of the radar-derived correlogram parameters that uses the gauge-derived correlogram parameters and maximum daily temperature as covariates of the Box-Cox power exponential margins and Gumbel copula. While the gauge-derived, radar-derived and the copula-derived correlogram parameters reproduced the mean estimates similarly using leave-one-out cross-validation of ordinary kriging, the gauge-derived parameters yielded a higher standard deviation (SD) of the Gaussian quantile, which reflects uncertainty, in over 90% of cases. However, the distributions of the SD generated by the radar-derived and the copula-derived parameters could not be distinguished. For the validation case, the percentage of cases of higher SD by the gauge-derived parameter sets decreased to 81.2% and 86.6% for the non-calibration and the calibration periods, respectively. It has been observed that a 1% reduction in the Gaussian quantile SD can cause over a 39% reduction in the SD of the median rainfall estimate, the actual reduction being dependent on the distribution of rainfall of the day. Hence the main advantage of using the most correct radar correlogram parameters is to reduce the uncertainty associated with conditional simulations that rely on SD through kriging.
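The two-dimensional anisotropic exponential correlogram described above can be sketched as follows. The exact parameterization (rotation convention, normalization) is an assumption on our part, since conventions vary:

```python
import numpy as np

def aniso_exp_correlogram(hx, hy, a_major, a_minor, theta):
    """2-D anisotropic exponential correlogram with major/minor axis
    lengths and major-axis direction theta (radians)."""
    u =  np.cos(theta) * hx + np.sin(theta) * hy   # lag along major axis
    v = -np.sin(theta) * hx + np.cos(theta) * hy   # lag along minor axis
    return np.exp(-np.sqrt((u / a_major) ** 2 + (v / a_minor) ** 2))
```

Correlation decays to exp(-1) at one axis length along each principal direction, so a longer major axis means slower decay (stronger spatial continuity) in that direction.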
An optimal control approach to the design of moving flight simulators
NASA Technical Reports Server (NTRS)
Sivan, R.; Ish-Shalom, J.; Huang, J.-K.
1982-01-01
An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.
Hybrid modeling of spatial continuity for application to numerical inverse problems
Friedel, Michael J.; Iwashita, Fabio
2013-01-01
A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
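The sequential Gaussian simulation step used above to evaluate spatial uncertainty follows a simple recipe: visit grid nodes along a random path, krige a local mean and variance from the data plus previously simulated nodes, and draw from the resulting Gaussian. A deliberately minimal 1-D sketch (simple kriging with zero mean and all informed nodes as neighbours; names and scope are assumptions, not the authors' implementation):

```python
import numpy as np

def sgs_1d(x_grid, x_data, z_data, cov, rng):
    """Toy 1-D sequential Gaussian simulation with simple kriging."""
    x_grid = np.asarray(x_grid, float)
    xs = [float(x) for x in x_data]            # informed locations
    zs = [float(z) for z in z_data]            # informed values
    out = np.empty(len(x_grid))
    for i in rng.permutation(len(x_grid)):     # random simulation path
        x0 = x_grid[i]
        X, Z = np.array(xs), np.array(zs)
        C = cov(np.abs(X[:, None] - X[None, :]))   # data-to-data covariance
        c0 = cov(np.abs(X - x0))                   # data-to-node covariance
        w = np.linalg.solve(C, c0)                 # simple-kriging weights
        mean = w @ Z
        var = max(cov(0.0) - w @ c0, 0.0)          # kriging variance
        out[i] = mean + np.sqrt(var) * rng.standard_normal()
        xs.append(float(x0))                       # condition later nodes
        zs.append(float(out[i]))
    return out
```

A node coinciding with a datum is reproduced exactly (zero kriging variance), and each realization honours the imposed covariance model, which is what makes the method suitable for evaluating spatial uncertainty.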
Jang, Cheng-Shin; Huang, Han-Chen
2017-07-01
The Jiaosi Hot Spring Region is one of the most famous tourism destinations in Taiwan. The spring water is processed for various uses, including irrigation, aquaculture, swimming, bathing, foot spas, and recreational tourism. Moreover, the multipurpose uses of spring water can be dictated by the temperature of the water. To evaluate the suitability of spring water for these various uses, this study spatially characterized the spring water temperatures of the Jiaosi Hot Spring Region by integrating ordinary kriging (OK), sequential Gaussian simulation (SGS), and Geographic information system (GIS). First, variogram analyses were used to determine the spatial variability of spring water temperatures. Next, OK and SGS were adopted to model the spatial uncertainty and distributions of the spring water temperatures. Finally, the land use (i.e., agriculture, dwelling, public land, and recreation) was determined using GIS and combined with the estimated distributions of the spring water temperatures. A suitable development strategy for the multipurpose uses of spring water is proposed according to the integration of the land use and spring water temperatures. The study results indicate that the integration of OK, SGS, and GIS is capable of characterizing spring water temperatures and the suitability of multipurpose uses of spring water. SGS realizations are more robust than OK estimates for characterizing spring water temperatures compared to observed data. Furthermore, current land use is almost ideal in the Jiaosi Hot Spring Region according to the estimated spatial pattern of spring water temperatures.
Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model.
Tang, Jiechen; Zhou, Chao; Yuan, Xinyu; Sriboonchitta, Songsak
2015-01-01
This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, a distribution from extreme value theory (EVT) is fitted to the tails of the residuals to model the marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels.
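The final simulation step, drawing dependent returns and reading off VaR and CVaR, can be sketched for the special case of a Gaussian copula with Gaussian marginals (the paper's ARMA-GARCH marginals and EVT tails are omitted; all numbers are toy values, not TTF data):

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR of a sample of portfolio losses."""
    losses = np.sort(np.asarray(losses, float))
    k = int(np.ceil(alpha * len(losses))) - 1
    return losses[k], losses[k:].mean()

# Dependent returns via a Gaussian copula with Gaussian marginals
rng = np.random.default_rng(1)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
z = rng.multivariate_normal(np.zeros(2), corr, size=100_000)
returns = 0.01 * z                 # toy marginal scale: ~1% daily moves
losses = -returns.mean(axis=1)     # equally weighted two-asset portfolio
var95, cvar95 = var_cvar(losses)   # CVaR is always >= VaR
```

Replacing the Gaussian copula with a Student t-copula fattens the joint tails, which is why the paper finds larger VaR and CVaR under the t-copula.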
NASA Astrophysics Data System (ADS)
Rychlik, Igor; Mao, Wengang
2018-02-01
The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.
Raudsepp, Allan; Williams, Martin A. K.; Hall, Simon B.
2016-07-01
Measurements of the electrostatic force with separation between a fixed and an optically trapped colloidal particle are examined with experiment, simulation and analytical calculation. Non-Gaussian Brownian motion is observed in the position of the optically trapped particle when particles are close and traps weak. As a consequence of this motion, a simple least squares parameterization of direct force measurements, in which force is inferred from the displacement of an optically trapped particle as separation is gradually decreased, contains forces generated by the rectification of thermal fluctuations in addition to those originating directly from the electrostatic interaction between the particles. Thus, when particles are close and traps weak, simply fitting the measured direct force measurement to DLVO theory extracts parameters with modified meanings when compared to the original formulation. In such cases, however, physically meaningful DLVO parameters can be recovered by comparing the measured non-Gaussian statistics to those predicted by solutions to Smoluchowski's equation for diffusion in a potential.
Langevin dynamics for ramified structures
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Horsthemke, Werner; Campos, Daniel
2017-06-01
We propose a generalized Langevin formalism to describe transport in combs and similar ramified structures. Our approach consists of a Langevin equation without drift for the motion along the backbone. The motion along the secondary branches may be described either by a Langevin equation or by other types of random processes. The mean square displacement (MSD) along the backbone characterizes the transport through the ramified structure. We derive a general analytical expression for this observable in terms of the probability distribution function of the motion along the secondary branches. We apply our result to various types of motion along the secondary branches of finite or infinite length, such as subdiffusion, superdiffusion, and Langevin dynamics with colored Gaussian noise and with non-Gaussian white noise. Monte Carlo simulations show excellent agreement with the analytical results. The MSD for the case of Gaussian noise is shown to be independent of the noise color. We conclude by generalizing our analytical expression for the MSD to the case where each secondary branch is n dimensional.
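The Monte Carlo check of the MSD can be reproduced in miniature for the simplest case, drift-free overdamped motion with Gaussian white noise, where theory gives MSD(t) = 2Dt. This is a sketch of the validation idea only, not the comb geometry of the paper:

```python
import numpy as np

def msd_brownian(n_paths, n_steps, dt, D, rng):
    """Monte Carlo mean square displacement of drift-free overdamped
    1-D Langevin motion with Gaussian white noise."""
    steps = np.sqrt(2.0 * D * dt) * rng.standard_normal((n_paths, n_steps))
    x = np.cumsum(steps, axis=1)      # ensemble of trajectories
    return (x ** 2).mean(axis=0)      # ensemble-averaged MSD

rng = np.random.default_rng(2)
dt, D = 0.1, 1.0
msd = msd_brownian(20_000, 50, dt, D, rng)
t = dt * np.arange(1, 51)             # theory predicts msd ~= 2 * D * t
```

In the comb setting, trapping in the secondary branches modifies this linear law; comparing the simulated MSD against the analytical prediction is the same check, with a different reference curve.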
The Gaussian streaming model and convolution Lagrangian effective field theory
Vlah, Zvonimir; Castorina, Emanuele; White, Martin
2016-12-05
We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.
Nonlinear scalar forcing based on a reaction analogy
NASA Astrophysics Data System (ADS)
Daniel, Don; Livescu, Daniel
2017-11-01
We present a novel reaction analogy (RA) based forcing method for generating stationary passive scalar fields in incompressible turbulence. The new method can produce more general scalar PDFs (e.g. double-delta) than current methods, while ensuring that scalar fields remain bounded, unlike existing forcing methodologies that can potentially violate naturally existing bounds. Such features are useful for generating initial fields in non-premixed combustion or for studying non-Gaussian scalar turbulence. The RA method mathematically models hypothetical chemical reactions that convert reactants in a mixed state back into their pure unmixed components. Various types of chemical reactions are formulated and the corresponding mathematical expressions derived. For large values of the scalar dissipation rate, the method produces statistically steady double-delta scalar PDFs. Gaussian scalar statistics are recovered for small values of the scalar dissipation rate. In contrast, classical forcing methods consistently produce unimodal Gaussian scalar fields. The ability of the new method to produce fully developed scalar fields is discussed using 256^3, 512^3, and 1024^3 periodic box simulations.
Improved key-rate bounds for practical decoy-state quantum-key-distribution systems
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng
2017-01-01
The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
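The gap between the Gaussian approximation and a Chernoff-type bound is easy to quantify for a single binomial estimate: for failure probability eps, Hoeffding's inequality (a Chernoff-type bound) allows a deviation sqrt(ln(1/eps)/(2n)), while the Gaussian approximation allows z(1-eps)*sqrt(p(1-p)/n). The sketch below illustrates this generic fluctuation gap, not the paper's improved bound:

```python
from math import log, sqrt
from statistics import NormalDist

def deviation_bounds(n, p, eps):
    """One-sided deviation delta such that P(X/n - p >= delta) <= eps
    for X ~ Binomial(n, p): rigorous Hoeffding bound vs. the heuristic
    Gaussian approximation."""
    delta_hoeffding = sqrt(log(1.0 / eps) / (2.0 * n))
    delta_gauss = NormalDist().inv_cdf(1.0 - eps) * sqrt(p * (1.0 - p) / n)
    return delta_hoeffding, delta_gauss
```

For n = 10^6 trials, p = 0.5 and eps = 1e-10, the rigorous bound is noticeably wider than the Gaussian one; since wider parameter intervals translate into a lower secret key rate, tightening this gap is exactly what improves key rate and reachable distance.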
Multi-fidelity Gaussian process regression for prediction of random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck-Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
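The single-fidelity building block of such recursive schemes, zero-mean GP regression with a squared-exponential kernel, can be sketched in a few lines. The recursive co-kriging layering and the vector-valued extension are omitted, hyperparameters are fixed rather than learned, and all names are assumptions:

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, sigma_n=1e-8):
    """Zero-mean GP regression with a unit-variance squared-exponential
    kernel: posterior mean and variance at 1-D test points Xs."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + sigma_n * np.eye(len(X))   # jittered training covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mean = Ks.T @ alpha                      # posterior mean
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v * v, axis=0)        # posterior variance
    return mean, var
```

In a recursive multi-fidelity setting, a GP of this kind is first fitted to the low-fidelity data, and the next level then models the discrepancy between the previous posterior and the higher-fidelity observations.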
NASA Astrophysics Data System (ADS)
Gaztanaga, Enrique; Fosalba, Pablo
1998-12-01
In Paper I of this series, we introduced the spherical collapse (SC) approximation in Lagrangian space as a way of estimating the cumulants xi_J of density fluctuations in cosmological perturbation theory (PT). Within this approximation, the dynamics is decoupled from the statistics of the initial conditions, so we are able to present here the cumulants for generic non-Gaussian initial conditions, which can be estimated to arbitrary order including the smoothing effects. The SC model turns out to recover the exact leading-order non-linear contributions up to terms involving non-local integrals of the J-point functions. We argue that for the hierarchical ratios S_J, these non-local terms are subdominant and tend to compensate each other. The resulting predictions show a non-trivial time evolution that can be used to discriminate between models of structure formation. We compare these analytic results with non-Gaussian N-body simulations, which turn out to be in very good agreement up to scales where sigma<~1.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations become large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
The fast algorithm of spark in compressive sensing
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the condition for signal reconstruction is an important theoretical problem, and the spark is a good index for studying it; however, computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, such as Gaussian random matrices and 0-1 random matrices, we obtain several conclusions. In particular, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute the spark: a direct search and a dual-tree search. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we compared the computation times of the two methods. Numerical results showed that the dual-tree search was more efficient than the direct search, especially for matrices with nearly as many rows as columns.
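The spark of a matrix is the smallest number of linearly dependent columns, so the direct-search method the abstract mentions amounts to testing column subsets of increasing size for rank deficiency. A minimal sketch (exponential in the number of columns, so only viable for small matrices; the dual-tree search in the paper prunes this enumeration):

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A, by direct search."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # a set of k columns is linearly dependent iff its rank is below k
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1  # all columns independent (only possible when n <= m)
```

On a Gaussian random matrix with fewer rows m than columns, any m columns are independent with probability 1 while any m + 1 are dependent, which is the "rows plus one" result the abstract states.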
The Gaussian streaming model and convolution Lagrangian effective field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlah, Zvonimir; Castorina, Emanuele; White, Martin, E-mail: zvlah@stanford.edu, E-mail: ecastorina@berkeley.edu, E-mail: mwhite@berkeley.edu
We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.
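For context, the GSM maps the three real-space ingredients named in the abstract, the correlation function ξ(r), the mean pairwise velocity v₁₂(r), and the pairwise velocity dispersion σ₁₂²(r, μ), onto the redshift-space correlation function. Up to notational conventions it is commonly written as:

```latex
1 + \xi_s(s_\perp, s_\parallel) =
\int \frac{dy}{\sqrt{2\pi\,\sigma_{12}^2(r,\mu)}}\,
\bigl[1 + \xi(r)\bigr]\,
\exp\!\left\{ -\frac{\bigl[s_\parallel - y - \mu\, v_{12}(r)\bigr]^2}
                    {2\,\sigma_{12}^2(r,\mu)} \right\},
\qquad r^2 = s_\perp^2 + y^2,\quad \mu = y/r .
```

The perturbative techniques in the paper supply these ingredients; the Gaussian kernel encodes the assumption that the line-of-sight pairwise velocity distribution is Gaussian at fixed separation.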
Spacecraft Data Simulator for the test of level zero processing systems
NASA Technical Reports Server (NTRS)
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data Systems (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently to Positron Emission Tomography (PET) experiments, because it requires centralized coincidence processing and incurs large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing while maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model, in terms of numbers of tallies, is statistically equivalent (but not identical) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
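The design pattern the abstract describes, factory classes that let the same executable run sequentially or in parallel while a central coordinator gathers results, can be sketched abstractly. This is a generic illustration with hypothetical class names, not GATE's actual C++ factory classes:

```python
from abc import ABC, abstractmethod
from concurrent.futures import ThreadPoolExecutor

class EventRunner(ABC):
    """Common interface so the same driver code works in either mode."""
    @abstractmethod
    def run(self, generate, n_events):
        ...

class SequentialRunner(EventRunner):
    def run(self, generate, n_events):
        return [generate(i) for i in range(n_events)]

class ParallelRunner(EventRunner):
    def __init__(self, workers=4):
        self.workers = workers

    def run(self, generate, n_events):
        # decentralised event generation; this call site acts as the
        # central coordinator that gathers results in event order
        with ThreadPoolExecutor(max_workers=self.workers) as pool:
            return list(pool.map(generate, range(n_events)))

def make_runner(mode, **kw):
    """Factory: select the execution mode from a runtime flag."""
    return {"sequential": SequentialRunner, "parallel": ParallelRunner}[mode](**kw)
```

Because both runners satisfy the same interface and the coordinator preserves event ordering, the parallel mode can reproduce results that are statistically equivalent to the sequential run, which is what the Mann-Whitney comparison in the paper checks.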
A sequential coalescent algorithm for chromosomal inversions
Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M
2013-01-01
Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894